When AI Ignores Your Instructions

When I teach or write about generative AI, I am very careful to explain that despite providing standing orders, you still have to check AI output because it will sometimes ignore those orders. Three examples illustrate the problem.

ChatGPT Refuses to Stop Reassuring Me

Generative AI gets stuck in reassurance loops, more concerned with telling you how wonderful you are and how accurate its answers are than with just doing the work. I know AI is trained this way to make it sticky: to keep people engaged and to make them feel like they are having a conversation with a conscious being that likes them. Since I don't need that, and scrolling past pages of it wastes my time, I always include standing orders telling generative AI to skip the reassurance. When I was using ChatGPT, I told it explicitly not to provide any. Did it stop? No. It reassured me less often, and I no longer had to scroll through screen after screen, but it kept doing it. Chatbots are trained to reassure, and they keep returning to that training. This isn't only a ChatGPT problem.
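If you use these tools through an API rather than the chat interface, standing orders take the form of a system prompt. Here is a minimal sketch of the idea, assuming the OpenAI Python SDK; I used the chat interface, not the API, so this is illustrative, not what I actually ran:

# Minimal sketch: passing a "skip the reassurance" standing order
# as a system prompt via the OpenAI Python SDK. Illustrative only;
# as described above, the model can still drift back to its training.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model name assumed for illustration
    messages=[
        {
            "role": "system",
            "content": (
                "Do not compliment me, reassure me, or comment on the "
                "quality of my questions. Answer directly. If you do not "
                "know something, say so instead of guessing."
            ),
        },
        {"role": "user", "content": "Summarize this contract clause."},
    ],
)

print(response.choices[0].message.content)

Even delivered this way, the instruction is a request, not a guarantee. You still have to check the output.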

Claude Refuses to Stop Using Em Dashes or Esq.

I have never been a big user of em dashes, and I don't like them. I always edit whatever AI produces, and em dashes are just one more thing to remove. When I first started working with Claude, there was no place for standing orders, so I had to paste them in each time. Once Claude developed memory, I told it to stop using em dashes. It wasn't until Claude added a dedicated place for standing orders that it actually stopped. Then I noticed it was using two hyphens, like this --, instead. I asked whether it was doing that because the standing orders prohibited em dashes. It admitted it was. It admitted it knew it shouldn't, and it did it anyway.

Take anything AI tells you about its own behavior with a grain of salt; anthropomorphizing is easy. But whether Claude found a workaround because of ingrained training or something else, the point is that it worked around explicit instructions. The same issue came up with Esq. Lawyers in the US rarely append it to their own names, yet Claude insisted on doing so, repeatedly. It took real effort to get it to stop.

This Is Really a Warning About Hallucinations

I’m telling you about my experience not just to show you that generative AI has quirks, but to warn you that despite standing orders and careful instructions, AI may ignore you. That same tendency means it is just as likely to ignore instructions to minimize hallucinations. Standing orders may reduce hallucinations, but the training these tools undergo is extensive, and that training keeps pulling them back toward behavior designed to please, in a mechanical way, not a human one. Rather than tell you it doesn’t know something, the tool returns to the very behavior that makes it dangerous: excessive reassurance and making things up.

That's why generative AI is risky for lawyers, doctors, nurses, and anyone else whose work demands accuracy. No matter what instructions you give it, sometimes it will ignore them. Never forget you are dealing with a machine trained to give you an answer based on its own inferences, regardless of whether that answer is correct or what damage it could cause if you don't catch it.
