Winton Was First. Ford Was Faster.

Here’s something I learned recently about my own family: my great-great-grandfather Thomas Henderson and his brother-in-law Alexander Winton co-founded the Winton Motor Carriage Company. They were one of the first to commercially sell automobiles in the United States. By 1913 it was a serious operation. Skilled people, proud craft, excellent cars built by hand. Then Ford’s assembly line arrived and the ground shifted. Winton was first. Ford was faster. (My relatives passed on hiring Ford in 1898. We don’t talk about that.)

Are we watching the same story play out today? Being first doesn’t put you in the history books. Adapting your patterns of production does.


The Pattern

The good news is you don’t need to start from scratch. The engineering processes that work for AI-generated code aren’t new. They’re the same principles good teams already use: code review, testing, clear standards for when something needs a second set of eyes. You just have to extend them. And think about it the way you’d think about building a human team: what does your combined team of humans and agents need?

The mistake most organizations make is treating AI like a single new hire who needs one supervisor. It’s not. It’s more like adding a large team, one that can try many different things and bring in many different skill sets, and that needs clear direction and well-formed processes around it. That means thinking about specialization (which agents do what) and building review systems that scale. Human oversight still matters. It just needs to be applied at the right moments, for the right types of change, not as a bottleneck on everything.
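If it helps to picture “the right moments, for the right types of change,” here is a minimal sketch in Python of what routing changes to the right level of review might look like. Everything in it is a hypothetical illustration I’m inventing for this post: the sensitive path prefixes, the 200-line threshold, and the tier names are assumptions, not a description of any real tool or policy.

# A sketch of the idea above: route changes to the right level of review
# instead of bottlenecking everything on a human. All names and thresholds
# here are hypothetical illustrations, not any real tool or policy.

from dataclasses import dataclass

@dataclass
class Change:
    paths: list[str]          # files touched by the change
    lines_changed: int        # size of the diff
    authored_by_agent: bool   # True if an AI agent wrote it

# Assumed high-risk areas that always get expert eyes.
SENSITIVE_PREFIXES = ("auth/", "billing/", "migrations/")

def review_tier(change: Change) -> str:
    """Pick a review tier: automated checks, one human, or a senior human."""
    if any(p.startswith(SENSITIVE_PREFIXES) for p in change.paths):
        return "senior-human-review"    # high-risk change, regardless of author
    if change.authored_by_agent and change.lines_changed > 200:
        return "human-review"           # large agent diffs get a second set of eyes
    return "automated-checks-only"      # tests, linters, and scanners carry the rest

A real version would live in your CI pipeline, and the thresholds would come from your own team’s risk tolerance. The point is that the routing gets designed once, by a leader, instead of renegotiated on every pull request.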

“Given enough eyeballs, all bugs are shallow,” Eric S. Raymond wrote in The Cathedral and the Bazaar, naming it Linus’s Law in honor of Linus Torvalds. That’s still true. Some of those eyes are now automated. The job of a leader is to design the system, not review every line.


The Skeptics

I watched OpenStreetMap go from a niche project that most people in tech had never heard of to something you could mention in almost any technical circle and get a nod of recognition. It didn’t happen overnight. There were moments along the way. Indonesia adopting OpenStreetMap nationally for disaster planning operations. The State of the Map US conference hosted at the United Nations headquarters. Eventually sitting across from teams at Apple, Amazon, and Facebook as they started participating. Each community had its own crossover point. And along the way, every type of skeptic showed up.

The Process and Capacity Skeptics

In the early days of OpenStreetMap, GIS analysts were skeptical of crowdsourced data for legitimate reasons. Authoritative data sources had real quality advantages. The data from government surveys and professional cartographers was more reliable, more consistent, more defensible. The crowdsourced stuff was messy.

But the answer wasn’t to abandon the approach. It was to build better systems around it. Academic research helped. Dr. Muki Haklay’s work showed that OSM data quality in well-mapped areas was comparable to authoritative sources. Many eyes made the maps more accurate over time, not less.

The same is true for AI-generated code. Process skeptics are often your most willing people. They’re not opposed to AI. They’re worried about quality, about not having enough time to learn, about not understanding the technology well enough to use it responsibly. They want support and they’re not getting it.

For capacity skeptics the challenge is similar but more immediate. They don’t know where to start and they’re afraid of making mistakes. They need time and permission to learn. If your organization is mandating AI adoption without giving people actual space to experiment and fail safely, you’re not solving a technology problem. You’re creating a people problem.

Fix the process. Give people time. These skeptics will come around.

The Values Skeptics

The environmental costs of AI are real. The water usage, the energy consumption, the questions about human displacement. These deserve genuine acknowledgment, not a talking point about productivity gains.

But before you respond to a values skeptic, understand them. Are these concerns deeply held across the board, or have they decided to focus specifically on AI? That distinction matters. Sometimes values concerns are genuine and run deep. Sometimes they’re a more socially acceptable way of expressing something else. Fear, loss of control, uncertainty about the future. You need to know which one you’re dealing with before you can respond well.

And there are concrete things you can do. Using the smallest model that gets the job done is a start. Being thoughtful about when AI is actually the right tool matters too. You don’t have to resolve every ethical question about AI to make responsible choices inside your own organization.

The Identity Skeptics

This is the hardest group. And the trickiest part is that they often don’t show up as identity skeptics. They show up as process skeptics. They have legitimate technical concerns about quality and workflow. Those concerns are real. But underneath them is something harder to fix.

For some engineers, writing code is not just a job. It’s how they think, how they create, who they are. They code on weekends. Not because they have to. Because they love it. AI doesn’t just threaten their workflow. It threatens their identity.

We saw this in OpenStreetMap too. Some of the most passionate resistance came from professional cartographers and GIS specialists who had spent careers developing expertise that suddenly felt less valuable. Some of them never came around. But some did. And what shifted them wasn’t being convinced by an argument. It was finding community. They found people inside OSM, in academic circles, at conferences, who took their expertise seriously and showed them how it fit into the new way of doing things. Dr. Haklay’s research helped legitimize the field. Being taken seriously as professionals mattered to people whose identity was at stake.

The same applies inside your organization. You don’t have to build the whole community yourself. But you can help people find where they belong in the broader conversation. An internal community of practice, an external one, a conference, a paper that legitimizes what they’re feeling. The right community depends on the person.

The Mixed Skeptics

Here’s the thing. Most people aren’t one type. They’re a combination of all of them, in different proportions, on different days. Someone might have genuine values concerns and be protecting their identity and be worried about process quality all at once. The categories above are diagnostic tools, not boxes to sort people into.

And that’s exactly why none of this works from a distance. You can’t read someone’s resistance from an org chart. You have to talk to them. Which is where we’re going next.


The Enthusiasts

Here’s something leaders get wrong almost every time. They find their most excited AI adopters, point to them, and say “follow these people.” It feels like the right move. Let the energy lead.

But enthusiasts have a problem. They move fast and they make mistakes. Not because they’re careless. Because they’re running on excitement and the frontier is genuinely messy. And the skeptics are watching every stumble.

I know this because I’ve been the enthusiast.

A colleague and I once did an escape room with our team. The two of us ran ahead, figured out the puzzles, cracked the codes. And then entered the combination wrong on the lock. Twice. Our other colleague was quietly moving behind us, cleaning up our mistakes, making sure the things we figured out actually worked. That was also exactly how we worked together every day.

The enthusiast finds the thing. The enthusiast is often right about the thing. But the enthusiast is not always the right person to land it.

Early in my tech career I found major security holes in the platform I worked on and started fixing them myself. My boss at the time didn’t ignore me or shut me down. He said, “You’ve done enough. Now we need to fix this properly.” That one sentence channeled everything. It acknowledged what I found, validated the instinct, and redirected the energy into something the whole organization could actually use.

That’s what good enthusiast management looks like. It’s not “slow down.” It’s “you’re right, and here’s how we do this together.”

If you just unleash your enthusiasts without that guidance, a few things happen. They build things the rest of the team can’t maintain. They make the skeptics feel steamrolled. And they create the impression that AI adoption means chaos and cutting corners. Which is exactly the story your skeptics were already telling themselves.

Your enthusiasts are not your messengers to the skeptics. They’re a different kind of resource entirely. They’re your scouts. They find what’s possible. Your job as a leader is to take what they find and make it usable for everyone else.

And that requires understanding both groups well enough to hold them in balance. Which means you have to talk to both of them. Really talk to them.


The Tools Are Already There

In technical organizations we like to believe we operate on logic. Someone shows us a better way to do something, we evaluate it on the merits, we adopt it if it makes sense. Clean, rational, efficient.

That’s not how humans work.

Just training people on new technology isn’t enough. You have to get down to the emotional reasons, the deeply human reasons, that adopting it is hard for them. Without working with the humans themselves, it will never work. That’s the core insight of Kegan and Lahey’s immunity to change framework. People aren’t resisting because they’re irrational. They’re resisting because they have hidden competing commitments: things they’re protecting that run directly counter to the change you’re asking for, and that they may not even be fully aware of. One foot on the gas, one foot on the brake.

The framework gives you a practical diagnostic called the immunity map. It walks through four columns: the stated commitment to change, the behaviors working against it, the hidden competing commitment underneath those behaviors, and finally the big assumptions. These are the deep beliefs a person holds as absolute truth that make the competing commitment feel non-negotiable. The engineer who says “I’m worried about code quality” might be hiding a competing commitment to not looking incompetent. The big assumption underneath might be “if my judgment is replaced by a machine, I will lose the respect of my team.” You can’t get to that assumption by pushing harder. You have to create the conditions where it’s safe enough to surface.

Nonviolent Communication

That’s where nonviolent communication comes in. NVC is the conversational tool that makes the immunity map actually work.

NVC is not about being nice. It’s about being precise. It gives you a structure for observing without judgment, understanding the need underneath the stated position, and making requests that don’t trigger defensiveness. In practice, it’s how you help someone move from “I’m worried about code quality” through to the competing commitment underneath, and eventually to the big assumption driving it all.

And once you’ve surfaced the assumption, NVC helps you find the next step. Kegan and Lahey call these “safe-to-try experiments.” Small, low-risk actions that let someone test whether their big assumption is actually true. NVC gives you the language to propose those experiments without making the person feel cornered or judged. “Would you be willing to try X for two weeks and see what you notice?” lands very differently than “you should just try it.”

It works in both directions. With your skeptics, it helps you surface what they’re actually protecting and find a first step they can genuinely commit to. With your enthusiasts, it helps you channel their energy without making them feel shut down.

If these frameworks are new to you, don’t let that stop you. You may already have something similar in your pocket. Any practice that helps you listen deeply and understand what people actually need will get you started. A little reading goes a long way, and going more deeply into both will serve you well beyond this moment in your career. But don’t wait until you’ve read everything. Start with a few conversations first. The conversations will show you exactly which tools you need.


Start Here

You have the map. You have the language. What you need now is the courage to use them.

Start with one skeptic. Sit down with them. Don’t try to convince them of anything. Ask what they’re actually worried about losing. Then ask the same of your most enthusiastic AI adopter. You’ll find they need completely different things from you. And that you need both of them.

-Kate
