July 30, 2025

Design System Deviation is a Signal

In a recent episode of The Question, Adrianne Daley and I invited the design system community to dig into one of those quiet-but-critical topics: tracking design system deviation.

We asked design system practitioners three questions:

  1. Are you tracking deviations from the design system in any way?
  2. If so, how do you do this? If not, what’s holding you back?
  3. Tell us something you’ve learned by gathering (or not gathering) this kind of metric.

When asked, “Are you tracking deviations from your design system in any way?” well over half of participants responded “No.”

Check out the raw data responses for yourself.

We sent these questions to 913 design system nerds and received 57 responses. After reviewing the data, we saw a story forming…one of awkward spreadsheets, elegant code instrumentation, a bit of organizational fear, and, ultimately, some pretty cool opportunities.

Let’s define “deviation”

First, a little housekeeping: when we asked about “deviation,” we didn’t define it very clearly. I accept the blame on this, and I’ll try to be better in the future when asking about words that can have so many connotations.

Luckily, the community jumped in.

Adrianne started with their view, explaining that deviations are overrides of style, behavior, or structure. Sometimes teams use entirely different components, opting out of the system altogether. Others mentioned token overrides, detached Figma instances, or teams avoiding system components due to complexity, deadlines, or a lack of understanding or practical skill.

How and why to track deviations

Adrianne has a unique perspective on this problem because they work at Honeycomb, an observability platform. Using some smart react-scanner scripts, a custom processor, and a clear naming convention (the UNSAFE_className prop), they have a fairly robust approach to capturing deviations from the core design system.
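To make the idea concrete, here's a minimal sketch of what counting an escape-hatch prop might look like. The real setup described above uses react-scanner with a custom processor; this simplified version just pattern-matches component source for a prop like UNSAFE_className and tallies hits per file. The function name and file paths are hypothetical.

```javascript
// Minimal sketch: count uses of an escape-hatch prop (here, UNSAFE_className)
// in component source, grouped by file. A production setup would use
// react-scanner with a custom processor instead of a regex.
function countDeviations(files, propName = "UNSAFE_className") {
  const pattern = new RegExp(`\\b${propName}\\s*=`, "g");
  const report = {};
  for (const [path, source] of Object.entries(files)) {
    const hits = source.match(pattern);
    if (hits) report[path] = hits.length; // only files that deviate
  }
  return report;
}

// Example: two files, one containing an override
const files = {
  "teams/billing/Invoice.jsx":
    '<Button UNSAFE_className="invoice-cta" onClick={submit} />',
  "teams/search/Results.jsx": "<Button onClick={search} />",
};
console.log(countDeviations(files));
// → { 'teams/billing/Invoice.jsx': 1 }
```

A report like this, generated on a schedule, is enough to start the quarterly analysis the paragraph above describes.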

Each quarter, they analyze this data to identify which props are overridden, which teams are doing it, and what patterns are emerging. Their goal is to guide design system product decisions. The action they take from this learning could land anywhere on a spectrum from improving documentation or offering trainings to evolving component APIs or deprecating components altogether.

The real power of this approach is its intent—this is deviation tracking not as surveillance, but as signal.

Deviation police

The idea of tracking deviations makes people nervous. Several folks in the conversation shared stories of teams hiding their deviations, or quietly rebuilding components from scratch to create the illusion of adoption.

As Grant put it:

“If you do anything that creates more work for me, my path is to stay quiet.”

This is where posture matters. If your design system team acts like the UI police, you’ll get silence. But if you treat deviations as the start of a conversation, as evidence of unmet needs, you’ll begin to build trust with your consumers.

Guy expressed this succinctly in the deep dive conversation:

“The dream is that when someone needs to deviate, they come back to the system team and say, ‘Here’s why. Want to fix it together?’”

A spectrum of deviation tracking

You know I love a good spectrum diagram :) and the further into the raw data and discussion we went, the more one emerged. Here’s my rough sketch explaining what I heard in the conversation.

We can view deviation tracking on a spectrum from “passive tracking” to “human connection.” The strongest programs use a smart combination of these techniques.

On the far left we have a very passive approach to tracking design system deviations, on the far right, one that is more active and centered on human connection.

Static logging isn’t simple to accomplish, but with tools like those Adrianne shared or Zeplin’s Omlet, you can begin to capture specific deviations or overrides from the core coded components. For design, Figma gives you some visibility into detached instances, but there’s still a lot of work to do to capture metrics here.

Moving right, there are two more active ways to engage. One is to segment the insights you get. This can be done by grouping deviations and categorizing them. Grouping offers a way to view deviations or overrides by the team making them or by the asset itself. Categorization can look very different for each org, but one option would be “necessary deviation,” “optional deviation,” and “unnecessary deviation.” Now you can prioritize where to focus.
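As a quick illustration of that segmentation step, here's a sketch that tallies a list of deviation records by team and by category. The record shape, team names, and category labels are hypothetical; the point is that even a simple grouping turns raw deviation logs into something you can prioritize against.

```javascript
// Hypothetical deviation records, as captured by whatever logging you have.
const deviations = [
  { team: "billing", component: "Button", category: "necessary" },
  { team: "billing", component: "Modal", category: "unnecessary" },
  { team: "search", component: "Button", category: "unnecessary" },
];

// Tally records by any key (team, component, category, ...).
function tally(records, key) {
  return records.reduce((acc, record) => {
    acc[record[key]] = (acc[record[key]] || 0) + 1;
    return acc;
  }, {});
}

console.log(tally(deviations, "team"));     // → { billing: 2, search: 1 }
console.log(tally(deviations, "category")); // → { necessary: 1, unnecessary: 2 }
```

A cluster of "unnecessary" deviations on one component, or a single team generating most of the overrides, each suggests a very different follow-up conversation.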

Encouraging engagement involves trying to identify the moments when someone makes a decision to deviate. Inserting an opportunity to file a bug, request a feature, or reach out to the DS team to see if there’s a better way to solve the problem you’re facing can yield great results because the timing makes the action relevant.

Finally, the most active approach is to catch deviations before they happen by being more present in the processes of your consuming teams—maybe a design critique or a code review. This requires time, of course, a resource many design system practitioners are short on. But it does build empathy and trust.

The most effective teams find ways to layer these approaches, casting a wide net to catch overrides and deviations.

Right or wrong

Many of our respondents and deep dive participants described the tension of deviation being framed as failure when sometimes, deviation is exactly what the system needs.

Sarah stated:

“If everyone is deviating the same way, we need to adapt to the needs of our users.”

Francesco shared how their team built custom linters and plugins to track when designers step outside the system—but also to offer guidance and migration paths when they do. This aligns strongly with my biggest takeaway from the episode—deviation is often a sign that something in the system needs care.

The cost of ignoring deviation

We closed the deep dive with a simple question from Adrianne:

“What is the cost of not tracking deviation?”

Rebecca shared that, in her experience, it’s operational:

“We’ve assumed people were doing things right. But when we made global updates, things blew up.”

For others, it’s strategic:

“We can’t improve the system if we don’t know where it’s breaking down.”

On the whole, we found agreement that ignoring deviation can be expensive from a creative or technical debt perspective. But it can also erode trust with leadership—specifically if deviation keeps the design system from living up to its promises.

A few solid approaches

From the stories you all shared, here are a few patterns worth borrowing:

Track the “why,” not just the “what.” Use deviations as a prompt for conversation. Categorize them as “justified” or “unnecessary” based on actual team interviews.

Offer visible contribution paths. Let people file bugs and feature requests directly from your docs. Even show them what you’re already aware of and what’s in progress.

Summarize regularly. Create quarterly reports on deviation patterns, including quotes, usage stats, and areas of concern. Making this a regular part of the conversation, and demonstrating that you see deviation not as a crime but as a signal that attention is needed, frees consuming teams to meet their deadlines and encourages them to see you as a partner.

Participate early. Show up when you can, be involved in the process of adoption, and try not to wait until something breaks to begin digging in.

Deviation isn’t always the problem

Most design system teams I’ve coached tend to spend a lot of time trying to enforce consistency. I’m walking away from this conversation with a renewed conviction that deviation usually isn’t the problem; it’s evidence of one.

Rather than assuming it’s a failure of governance, consider that it may be a signal of unmet needs, of time pressure, of system gaps, or of a need for training. It’s our job, as system stewards, to listen.

Find your community

Curious how others are doing this? Come hang with us in Redwoods. It’s a space for people who want to support each other on the journey of building better design system programs.

Learning Mode

I am continually inspired by the people who participate week after week to dive into the answers we gather. Each one of you shows up in learning mode. Because of that, we all walk away with broadened perspectives and an appreciation for the experiences we each bring to these conversations.

To those of you who attended, thanks for joining with such a gracious posture.

Thank you

Many thanks to all who participated.

If you missed out this week, sign up for The Question and be ready to answer next time.





Explore more of my writing and speaking.

Or, inquire about design system coaching.


© Copyright Ben Callahan All Rights Reserved