I have a process for writing PowerPoint presentations. It works. I have used it for years.
I take prior PowerPoints, articles I have written, ethics opinions and whatever other materials are relevant. Then I work methodically through them. I build the deck. The entire process takes hours to a day, maybe two. Then I proofread it later. Done.
This one was different.
How I Usually Work
My normal approach is efficient. It is systematic. I know what I know. I assemble it. I move on.
For most presentations, this works fine. I am not reinventing the wheel. I am sharing expertise I already have, organized in a way that makes sense for a particular audience.
But when I started working on my ABA presentation, "Bias and AI: Ethical, Practical and Legal Realities for Lawyers," something felt off about doing it the usual way.
What Actually Happened
I started reading other people’s work. Not just to cite it. To think about it.
I began building an outline. Slowly. It was about halfway done when I invited another person, attorney William D. Goren, to join me as a co-speaker. Bill accepted. I sent him the PowerPoint. He added his thoughts.
Then I made a mistake.
I put the new PowerPoint and my original through ChatGPT and told it to deduplicate the two and merge them into one document. When I reviewed the results, I discovered that ChatGPT had completely erased all of Bill's work. Unfortunately, I had not followed my own advice: I failed to check the output closely before sharing it with my colleague. As we worked through the presentation, Bill asked me why I had removed his contributions.
Needless to say, this was upsetting. I still had his original PowerPoint. I explained what had happened. He accepted my explanation. I fixed it quickly and we moved on. Bill was incredibly gracious about it.
But it was a reminder. AI tools are useful. They are not foolproof. And they require supervision.
ChatGPT Erased My Colleague’s Voice Twice
This would not be the last time AI erased and altered Bill's voice while I worked with him. It happened again with an article he wrote for Law Practice Today. Fortunately, I had learned my lesson, and I caught it: instead of simply making edits for grammar and spelling, ChatGPT had completely rewritten the piece. This time, I took the article over to Claude, made it very clear that I wanted a tracked-changes document with minimal edits, and got what I wanted.
I want to say something about my colleague. Bill is not only a highly skilled and widely published writer. He also uses voice dictation technology to compensate for joint issues, and he is congenitally deaf but functions entirely in the hearing world with advanced hearing aids, lipreading, and Bluetooth technology. His unique voice is therefore very important to him. It is not surprising that he was unhappy when generative AI did away with that voice entirely. And here I was, repeatedly using AI tools that erased and altered it without my realizing it until after the fact.
This was not abstract bias. This was the exact problem we were preparing to teach about showing up in real time, in our own collaboration, through tools I thought I was using responsibly.
It was useful for the course we are teaching. But more importantly, it was humbling.
The Long Middle
Over the next week or two, I kept adding to the presentation. I was not satisfied.
I kept going back to it. I kept adjusting it.
I added intersectionality. That was not there at first. It needed to be.
I added privacy laws, then removed them. They would have been a distraction. The presentation is about bias, not privacy. Cutting them made it stronger.
I talked with my co-speaker about where he felt most comfortable presenting. We adjusted the structure to fit both of our strengths. He was always supportive of my changes.
It took quite a while.
Finally, it is done. And for some reason, I feel exceptionally proud of it.
Why This Feels Different
I have written a lot of presentations. I do not usually feel proud of them in this particular way. They are fine. They do the job. But this one is different.
I think it is because I actually wrestled with the ideas.
I did not just assemble existing material. I let the presentation evolve. I made space for collaboration in a real way. I kept refining it until it felt right, not just done.
My usual process prioritizes efficiency. This process prioritized depth.
Both have their place. But today I am realizing that the things I feel most proud of are the ones where I gave myself permission to take the longer route.
What Working on AI Bias Taught Me About Process
There is something ironic about the fact that I learned this while working on a presentation about AI bias.
AI tools promise efficiency. Speed. Consistency. And they deliver those things. But they can also make us impatient with slower, messier human processes.
Building this presentation the slow way reminded me that some work cannot be rushed. Some work requires iteration. Some work gets better precisely because you are willing to walk away, come back, and question what you built the first time.
The best work, at least the work I feel most proud of, the work that is most emotionally meaningful, does not come from optimization. It comes from wrestling with something until it feels true.
The Takeaway
I am not abandoning my usual process. It works for most things.
But I am paying more attention to the moments when something feels like it needs more. When the efficient path does not quite fit.
Those are the moments when the work might actually matter.
And those are the moments worth the extra time.