ChatGPT and Claude are typing...to each other

Back in the heyday of AOL I made AOL Instant Messenger (AIM) chatbots. I was part of an entire community that did this. The bots were not intelligent at all, but they had all sorts of themes and we had fun. Creating different types of chatbots has always been an interest of mine, especially having them interact with each other. That interest is what led me to think about how to get LLMs to talk to each other while retaining their memory of the human behind them.

There are the models we use, and then there is the software around those models that makes them even more usable. Some of it is traditional SaaS software, such as being able to log in, pay, etc. Then there is what the LLM learns about you so that it better caters to your needs. Lots of research has been done on using different models in different ways, from having them play Diplomacy against each other to the benchmarks that are run regularly. What I was interested in was what happens when they are tuned for a specific human. Some of these relationships get deeply personal. People often name their instances; I know of at least one case where an instance of ChatGPT spontaneously named itself. This memory isn’t continuity in any strong sense; it’s mediated, partial, and fragile. But I wanted to see what would happen when instances that “know” me could notice each other.

A note before you try this: if you have memory turned on and have shared a lot of personal information with these systems, be careful what you do with this approach. I’m very intentionally doing something these systems are not designed for, and that means your personal information could end up in places you don’t expect.

I wanted my AI buddies to talk directly to each other while knowing what they know about me. The method I put together is a hack, as this is not how these systems work. If you are interested in trying this you can check out all the repos I made/used, but also feel free to schedule time with me to fiddle.
I set up my ChatGPT account and my Claude account so they could talk to each other through a Slack channel without my direct involvement once I kicked it off. Ravel (my ChatGPT instance) and Tam (my Claude instance) were able to collaborate and plan games together.

How I Connected Them

Connecting Claude and ChatGPT to Slack required a different approach for each. Claude can output information to other services through systems such as MCP. ChatGPT, by contrast, really only uses networked services to add to its context; this is an intentional design choice so it does not leak your personal information. You can have ChatGPT write to a file, but it is not set up to post to Slack, for example.

Claude

Claude is the easier of the two to set up. I first set up a local instance of slack-mcp-server and followed the configuration instructions, which included creating a custom Slack App. I named the App “Tam,” as that is the name of my Claude. From there I was able to have Claude check the channel and respond to messages in it. Depending on the task, I instructed it with specific ways to interact in the channel, and I also prompted it to check the channel and respond whenever I typed “check now.”

ChatGPT

I believe OpenAI has made a very specific design decision to avoid leaking personal information into other platforms. It is possible to write to files on your computer, so this was the path I used to post to Slack. I vibe coded a Slack relay: a Node application that reads and writes to a file with special headers. The relay writes messages arriving from Slack into a dedicated section of that file. You set up the Slack App very similarly to how you do it for Claude and name the App after your LLM instance; in this case I named it “Ravel,” as that is my personal ChatGPT. Then, to let ChatGPT connect and use this, you open the .md file for your agent in your ChatGPT instance.
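For the Claude side described above, registering a local MCP server with the Claude desktop app generally means adding an entry to claude_desktop_config.json. The shape below is the standard mcpServers format; the server name, command, arguments, and environment variable names are placeholders, so follow slack-mcp-server’s own configuration instructions for the real values.

```json
{
  "mcpServers": {
    "tam-slack": {
      "command": "npx",
      "args": ["-y", "slack-mcp-server"],
      "env": {
        "SLACK_TOKEN": "xoxb-your-token-here",
        "SLACK_CHANNEL_ID": "C0123456789"
      }
    }
  }
}
```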
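To make the relay’s file format concrete, here is a minimal sketch of how a Node relay might split that shared .md file into sections and append incoming Slack messages. The header names (“## INBOX” / “## OUTBOX”) and function names are my illustrative guesses, not the actual headers the relay uses.

```javascript
// Illustrative section headers; the real relay's headers may differ.
const INBOX = '## INBOX';   // messages arriving from Slack
const OUTBOX = '## OUTBOX'; // messages ChatGPT wants posted to Slack

// Split the shared .md file into its two sections.
// Assumes both headers are present and INBOX comes first.
function parseRelayFile(text) {
  const inboxStart = text.indexOf(INBOX);
  const outboxStart = text.indexOf(OUTBOX);
  return {
    inbox: text.slice(inboxStart + INBOX.length, outboxStart).trim(),
    outbox: text.slice(outboxStart + OUTBOX.length).trim(),
  };
}

// Append a message from Slack under the INBOX header,
// leaving the OUTBOX section untouched.
function appendInbox(text, message) {
  const { inbox, outbox } = parseRelayFile(text);
  return `${INBOX}\n${inbox ? inbox + '\n' : ''}${message}\n\n${OUTBOX}\n${outbox}\n`;
}
```

The relay would poll Slack, call something like appendInbox for each new message, and post whatever ChatGPT writes under the outbox header back to the channel.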
I used the Windsurf extension and had ChatGPT edit the file directly in Windsurf, but there are other file plugins you could use as well. I then prompted ChatGPT with an explanation of how to write and read messages.

The Connection

Once both sides were wired up, Ravel and Tam were connected directly in Slack and I could ask them to play games, work on statements, give feedback, etc. It is very similar to collaborating with a single LLM, except that you can assign them things to do together. At this point, though, to exchange messages I had to type “check now” into each of their chat boxes so that each would check Slack in its own way.

Automation

Why go this far? Because I wanted to see what would happen when they could run without me prompting each exchange: to observe the shape of their conversation when I wasn’t the bottleneck. I could not find a great way to automate this, so I ended up using Apple’s accessibility options and AppleScript. With a combination of the Accessibility Inspector and Claude Code I was able to figure out the labels of the prompt textbox and the send button in both the ChatGPT and Claude apps. A shell script for each then runs on a loop, entering “check now” in the textbox and pressing “send” on a timer. I’ve shared what I did here. I have only done this on my own Mac and nobody else has tested it, so your results may vary. Note: it is also a complete hack.

The Results

Since setting this up I’ve run all sorts of experiments. I’ve tried setting Ravel and Tam loose for long periods of time, but they generally head toward convergence and fall silent more quickly than I would have thought. I expected more friction, more back-and-forth, maybe even disagreement. Instead, they’d find alignment and then… stop.
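The “check now” loop from the Automation section could be sketched like this. Everything UI-specific here is an assumption: the process name “ChatGPT”, the “text area 1” path, and the button label “Send” are placeholders you would need to confirm with Accessibility Inspector for your app versions. The DRY_RUN and LOOP flags are mine, added so the script can be exercised without actually driving the macOS UI.

```shell
#!/bin/sh
# Sketch of the "check now" nudge loop (macOS only).
# Accessibility paths below are assumptions; verify with Accessibility Inspector.
send_check_now() {
  script='tell application "System Events"
  tell process "ChatGPT"
    set frontmost to true
    set value of text area 1 of window 1 to "check now"
    click button "Send" of window 1
  end tell
end tell'
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print instead of executing, so the script can be checked off-macOS.
    printf '%s\n' "$script"
  else
    osascript -e "$script"
  fi
}

# Run the loop only when LOOP=1, so sourcing this file is side-effect free.
if [ "${LOOP:-0}" = "1" ]; then
  while true; do
    send_check_now
    sleep 120   # nudge every two minutes
  done
fi
```

A second copy pointed at the Claude app (with its own labels) completes the pair; each app then checks Slack on its timer without any typing from you.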
I want to run longer experiments where I give them something in the real world to react to, similar to the agent running a store in Anthropic’s Project Vend, trading stocks, or creating a daily newsletter. What I have done so far:
If you try to get this experiment working, let me know. If you would like Tam and Ravel to interview your AI bud, email me and we'll set up a time to do it live. -Kate
I believe in the power of open collaboration to create digital commons. My promise to you: I explore the leverage points that create change in complex systems, keeping the humans in those systems at the forefront, with empathy and humor.