Untangling Systems: Nearness is changing, and we didn’t notice


Tobler’s Law in Latent Space

There’s an idea in geography called Tobler’s First Law of Geography: “everything is related to everything else, but near things are more related than distant things.” It sounds almost obvious when you first hear it. Of course nearby things are similar. But recently I’ve been wondering whether AI is quietly breaking this intuition, or revealing that “near” was always more complicated than we thought.


Deeper into Tobler

Tobler was not the first to notice this pattern, but he was the one who popularized it. Earlier work by R. A. Fisher in 1935 described similar ideas in the context of crops, where nearby plants tended to be more alike than those farther apart. Tobler first introduced his formulation publicly in 1969, and it was published in 1970 in “A Computer Movie Simulating Urban Growth in the Detroit Region.”

That paper focused on modeling population growth in Detroit, simulating how cities expand over time based on a handful of variables. There is some evidence that Tobler himself was not entirely serious about the phrasing when he introduced it. He likely did not expect it to be quoted over and over again as a “law.”

And yet, it stuck.

The idea comes up constantly in geography because it works. Things that are near each other are often more similar than things that are far apart. You can see it in forests, in neighborhoods, in groups of people. It’s one of those statements that feels immediately obvious, almost tautological, and that obviousness is part of its power.


What are vector embeddings?

What got me questioning Tobler’s Law is the rise of vector embeddings, a technique that has become central to how AI systems understand the world. These embeddings are used in large language models as well as in earth observation foundation models. To see why this matters, I need to explain what embeddings actually are.

At a high level, embeddings are a way of turning things into points in a space, where distance represents similarity. Instead of organizing information only by physical location, embeddings organize it by learned relationships. In that sense, they are better thought of as spaces rather than networks, even though we often visualize them as clusters or constellations.

This idea first became popular through language models. Words, sentences, or entire documents can be represented as points in an embedding space, where things with similar meanings end up close together. “King” and “queen” are near each other. So are “Paris” and “France.” “Near” no longer means physically adjacent; it means semantically similar.
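To make “near means similar” concrete, here is a minimal sketch of how embedding distance is typically measured, using cosine similarity. The three-dimensional vectors below are invented purely for illustration; real embeddings have hundreds or thousands of dimensions and come from a trained model.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- values invented for illustration only.
king  = [0.9, 0.8, 0.1]
queen = [0.9, 0.7, 0.2]
apple = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))  # high: semantically "near"
print(cosine_similarity(king, apple))  # low: semantically "far"
```

In this geometry, “king” and “queen” are near each other regardless of where those words appear on any map or page.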

The same approach can be applied to images and geography. In remote sensing, satellite imagery is often divided into small square tiles, and each tile can be transformed into a point in an embedding space. These embeddings are based not just on what a tile looks like to us, but on infrared and other bands humans cannot see. When you do this, patterns emerge that are not constrained by physical distance.

Tiles containing runways cluster together. Cornfields cluster with other cornfields. Chicken farms group with chicken farms. These tiles may be scattered across the globe, but in embedding space, they are near each other because they are similar.
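A nearest-neighbor lookup makes this clustering tangible. The sketch below uses invented tile names, coordinates, and short embedding vectors; a real system would query millions of model-generated embeddings, but the logic is the same: similarity is measured in embedding space, and geography never enters the calculation.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Hypothetical tiles: (name, rough (lat, lon), embedding vector).
# All values are invented for illustration.
tiles = [
    ("runway_in_texas",   (32.9, -97.0), [0.9, 0.1, 0.1]),
    ("runway_in_japan",   (35.5, 139.8), [0.8, 0.2, 0.1]),
    ("cornfield_in_iowa", (41.9, -93.6), [0.1, 0.9, 0.2]),
]

query_name, _, query_vec = tiles[0]

# Nearest neighbor by embedding distance, ignoring geography entirely.
others = [t for t in tiles if t[0] != query_name]
nearest = min(others, key=lambda t: dist(query_vec, t[2]))
print(nearest[0])  # the runway half a world away, not the closer cornfield
```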


What does this all mean?

This brings us back to Tobler. What does Tobler’s First Law mean in this virtual or latent space? If airports across continents are “near,” what kind of nearness are we talking about? And as we are able to sense more and more information about the world, does physical distance remain the dominant way we should understand relatedness?

One way to think about this is that Tobler’s Law still holds, but “near” now has many dimensions. Geographic distance is one of them. Embedding space is another. Time can be another still, since things close together in time are often more similar than things far apart in time.
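The point that “near” now has many dimensions can be shown with a toy comparison of two distance measures over the same places. Everything here is invented for illustration: map positions in kilometers and two-dimensional embedding vectors stand in for real coordinates and real model output.

```python
from math import dist

# Invented data: name -> ((x, y) map position in km, embedding vector).
places = {
    "airport_a":      ((0.0,   0.0),   [0.9, 0.1]),
    "farm_next_door": ((2.0,   1.0),   [0.1, 0.9]),
    "airport_b":      ((800.0, 300.0), [0.8, 0.2]),
}

pos_a, emb_a = places["airport_a"]
for name in ("farm_next_door", "airport_b"):
    pos, emb = places[name]
    print(name,
          "geographic:", round(dist(pos_a, pos), 1),
          "embedding:",  round(dist(emb_a, emb), 2))
# The farm is near geographically but far in embedding space;
# the distant airport is the reverse.
```

Neither measure is wrong; they simply answer different questions about relatedness.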

In that sense, Tobler’s Law may be better understood not as a strict rule, but as a heuristic: a useful way of reasoning about similarity within a given space. Embeddings don’t break it. They extend it into new dimensions.

Or put differently, every model creates its own notion of distance. As George Box famously said, “All models are wrong, but some are useful.” Tobler’s model was useful for the space we could see. Embeddings invite us to ask what other spaces might matter.

-Kate

Untangling Systems

I believe in the power of open collaboration to create digital commons. My promise to you: I explore the leverage points that create change in complex systems, keeping the humans in those systems at the forefront, with empathy and humor.
