Untangling Systems

People Before Data, Unless It's To Feed The AI?

When employee behavior becomes training data for AI, leaders need to ask what they are spending in trust and culture to get it.

The phone rang, and I picked it up.

“We’ve been watching you,” said an ominous voice.

This was week two at Target in Northern Virginia, around 2001. I was in college. A friend had lost his job after the first dot-com crash, and the whole point of getting hired there was that we were going to work together. Then Target lost his drug test, and he had to start the process over.

So I was there alone.

I had worked at Pizza Hut for a few years before that. I had been promoted to shift manager. I was punctual. I was generally a pretty good part-time employee. At least, that was the story I had about myself.

Then I got put on a register without much training.

I remember thinking I was doing okay. It was hard to find the barcodes sometimes, but generally I was doing okay. I was checking people out, keeping the line moving, getting through this shift.

Then someone bought Rubbermaid containers.

I did not check inside them. It didn’t even occur to me; I had a lot going on with the register, the bagging, everything.

In hindsight, I probably should have known to. If you are running loss prevention at a big store, containers are exactly the kind of thing someone might use to hide merchandise. The policy itself was not outrageous.

The framing was the problem.

They did not say, “Hey, quick training note. When someone buys containers, check inside them.”

They said, “We’ve been watching you.”

I walked out of that Target and never went back. No call, no show, until they fired me. I did not even pick up my paycheck.

Could I have handled it more professionally? Sure. It was also a part-time college job that I only took to support a friend who was not even working there. But the thing that stayed with me was not the Rubbermaid containers.

It was the feeling of being watched and how it was framed.

People Before Data

I’ve used the phrase “people before data” for years.

When people ask about it, I usually say, “You’ll know when you are NOT doing it.” At least I hope they will.

It is also one of those simple mantras that is easy to nod along with until the data becomes valuable. Then suddenly the sentence starts getting footnotes.

People before data.

Unless the data could improve productivity.

Unless the data could train the model.

Unless the data could help us understand how work actually happens.

Unless it is to feed the AI.

Unless we can use it to make billions of dollars.

That is the part I keep thinking about with Meta’s reported Model Capability Initiative. According to reporting from Reuters, Meta is capturing mouse movements, clicks, keystrokes, and occasional screenshots from U.S. employees’ work devices to train AI agents on how people use computers. Meta has said the data is for AI training, not performance review, and that there are safeguards for sensitive content.

That distinction matters. I am not saying this is the same as a manager secretly watching someone and deciding whether they are a good employee.

But I also do not think the distinction makes the human reaction go away.

If your keystrokes, clicks, mouse movements, and screenshots are being collected so an AI system can learn how people do work, you are still being observed. Even if the stated purpose is training. Even if the company says it is not about performance. Even if there are controls around sensitive content.

The soul does not live by a policy document.

The Data Is Not Floating In The Air

Data sounds clean. Neutral. Almost weightless.

But workplace data is not floating in the air waiting to be collected. It comes from people doing work inside relationships. With managers. With teammates. With organizations they may or may not trust.

That is the part that gets flattened when data collection is framed as a purely technical question.

Can we collect it?

Can we anonymize it?

Can we use it to train better systems?

Those are real questions. But they are not the only questions.

You also have to ask what the collection does to the person being collected from.

Do they feel trusted?

Do they feel like a participant in the system, or raw material for it?

Do they believe the stated purpose?

Do they believe the purpose will stay the purpose?

Do they still work the same way when they know the system is watching?

That last one is not just emotional. It is practical.

Research on digital surveillance and managerial clarity found that surveillance did not significantly improve performance on average, while poorly explained surveillance reduced worker output. That tracks with my Target story in a very human way. Watching me did not make me a better cashier. It made me leave.

And I was replaceable. Very replaceable. A college student working a part-time retail job is not exactly a scarce resource.

But in knowledge work, the relationship is more complicated. You do not just want hands on keyboards. You want judgment. Creativity. Institutional knowledge. The willingness to notice weird things and say something. The little acts of care that make systems work better than the process document says they should.

Those are hard to measure.

They are also easy to damage.

Trust Is Part Of The System

Certain types of leaders, especially in tech orgs, often treat trust like a soft cultural layer that sits on top of the real system.

The real system is the tooling. The org chart. The policy. The data pipeline. The model.

Trust is something HR writes about.

I do not think that is right.

Trust is infrastructure. It is part of whether the system works.

Related: I wrote about this recently in “There’s No I in Team Unless It’s a Solo AI Team”.

That piece uses Patrick Lencioni’s The Five Dysfunctions of a Team. In that model, absence of trust is the base of the whole pyramid. You cannot build healthy conflict, commitment, accountability, or results without it.

That is true for teams. It is also true for data systems.

I also heard Eric Ries talk recently on Lenny’s Podcast about a practice he calls the culture bank. The idea is that trustworthiness is an asset. Some actions make deposits. Others make withdrawals. His rule was simple: only make deposits, never make withdrawals, because you will make withdrawals by accident when you make mistakes. You do not need to make them on purpose.

That is what workplace surveillance for AI training risks becoming. Not just data collection. An intentional withdrawal from the culture bank.

If people do not trust how data about their work will be used, they will change how they work. Maybe they will perform for the system. Maybe they will avoid certain kinds of work. Maybe they will stop experimenting in places where experiments can be captured out of context. Maybe they will become more compliant and less honest.

Maybe they will leave.

And yes, sometimes organizations genuinely need the data. I am not pretending every data collection effort is sinister. If you are building tools that are supposed to help people work, you need to understand how people work. If you are training AI agents to use software, real usage data may be far more valuable than made-up workflows.

But “the data would be useful” is not the end of the conversation.

It is the beginning of the tradeoff.

How badly do you need the data?

What are you willing to spend in trust and culture to get it?

And have you asked the people whose work is becoming the training material what would make that exchange feel legitimate?

The Framing Is The System

What bothered me at Target was not that I missed a step. I did miss a step.

What bothered me was the framing.

“We’ve been watching you” is not a coaching sentence. It is not a training sentence. It is a power sentence.

It told me where I stood in the system.

Not as someone new who needed guidance.

As someone under observation.

That is what leaders need to understand before they roll out workplace data collection for AI. The policy can say one thing. The framing can say another.

“We are collecting your activity to train AI agents” may be technically accurate.

But employees may hear something else.

We are watching you.

We are turning your work into training data.

We are using your behavior to build systems that may eventually do more of what you do.

Maybe that is not the intent. Intent matters, but it does not do all the work. If the system is designed around extraction, people will feel extracted from.

People before data is easy when the data is not valuable.

The test is what you do when it is.

If you are a leader making these decisions, I would start there. Not with whether the data can improve the model. With whether collecting it changes the relationship your people have with the work, with the organization, and with you.

Because sometimes the system does not lose people when it makes a bad decision.

Sometimes it loses them when it says the quiet part out loud.

-Kate


Let's untangle together.
