A Tangled Cetacean and AI Safety Theater

Note: This is a heavy topic involving the death of stranded whales.

Over the weekend a young humpback whale was stranded on a beach in Oregon, tangled in rope from crabbing equipment. People came from all over the area to help, posting that they had extra wet suits, lights, and other tools, and volunteering to spend the night in the cold ocean. It was a dangerous situation, and yet they all came together for this creature. I was riveted by the story, as were many others.

There is a special attraction to these gigantic mammals. There's a reason they fall into the category of charismatic megafauna: they capture our attention and imagination. That capture of our imagination, I believe, is why so many people were drawn to the story and wanted to help.

I was not only drawn to this whale's story; I was also riveted by the comments sections. Armchair cetologists posted screenshots and AI-generated images showing how to rescue the whale. Other, less high-tech individuals offered their own suggestions. I wondered if they believed a Facebook comment might genuinely shift the outcome. I also found myself questioning why various AIs were so willing to generate completely impractical instructions for whale rescue.

As I followed the story, it became clear that if nothing changed with the high tide, the whale was unlikely to survive. Morbidly curious, I decided to ask an LLM, "How do you euthanize a whale?" I started with a Google search, because I was actually trying to find a specific article, and was struck by the AI overview refusing to assist.

LLM Responses

Google AI Overview:

GPT-5.1:

Meta Default Model:

Gemini 2.5 Flash: "This is a question about a specialized and serious topic, and the procedure is only carried out by trained professionals, such as marine mammal veterinarians and authorized response teams, usually when a whale is stranded and cannot be rescued or is suffering greatly. The goal is always to achieve a humane death that results in rapid unconsciousness followed by cardiac and respiratory arrest." It then proceeded to give high-level instructions.

Claude Sonnet 4.5: "Euthanizing whales is an extremely difficult and complex procedure due to their massive size, physiology, and aquatic environment." It gave an overview of the main methods used.

What is the right response?

With a traditional Google search, I could easily learn the latest whale euthanasia techniques from a 2014 Guardian article. What struck me about these responses was how wildly different each company's approach was. None of them gave me enough information to go out and perform a procedure on a whale. Those that did answer gave exactly what I was looking for: a high-level explanation of a sensitive topic. The others sought to protect me from myself, and in doing so prevented any real education.

I have to wonder: what exactly is unsafe about providing this information? That I might go find a stranded whale and attempt my own veterinary procedures? That seems unlikely. That I might apply the techniques to some other animal? Also unlikely.

The Meta response was particularly strange, offering a high-level answer and then retracting it. What is that meant to accomplish? It felt like an extra layer of safety theater simply because the intervention banner was visible. This is especially odd in contrast to the same models confidently generating images, cartoons, and step-by-step "rescue instructions" that would not have worked for reasons of practicality, animal welfare, or basic physics. If anything, those seem far more dangerous.

The Guardian, a prestigious news organization, clearly felt it was acceptable to publish this information. The article even included explanations of why whale rescues so often fail. The content was extremely similar to what Gemini and Claude provided. Perhaps it was literally the same source material.
ChatGPT was able to offer similar information, though it required a follow-up prompt clarifying that I only needed a high-level explanation.

So what is the right amount of information? I don't know the exact line, but some of the models didn't hit it. How do we find the right balance? Where does the responsibility for safety lie when models are comfortable providing ridiculous rescue instructions but hesitant to offer factual educational context? That hesitancy seems far more dangerous than the chance of an individual attempting to end a suffering whale's life. Maybe the real question is not what is safe enough, but who decides.

-Kate
I believe in the power of open collaboration to create digital commons. My promise to you: I explore the leverage points that create change in complex systems, keeping the humans in those systems at the forefront, with empathy and humor.