What Prompting Can and Cannot Do for Lawyers

Once you accept that prompting artificial intelligence is an exercise in professional judgment, the next question is straightforward: what is it actually useful for?

Like every other tool lawyers have adopted over time, AI is helpful in some situations and unreliable in others. Most lawyers understand that in theory. It helps to be clear about what prompting does well, what it does poorly, and where lawyers tend to get into trouble.

What Prompting Does Well

Prompting works best when the task is structural rather than substantive.

Used carefully, AI tools can assist with things lawyers already do every day. They can help organize information, generate outlines, summarize large volumes of material, draft preliminary language, or reframe content for a different audience.

In those situations, prompting can save time and reduce friction. The tool is functioning as an assistant that helps move work forward, not as a decision-maker.

This is not fundamentally different from asking a junior lawyer or a paralegal to prepare a first pass or organize research. The output is a starting point. It is not an answer.

What Prompting Does Poorly

Prompting performs poorly when the task requires verification, nuance, or legal judgment.

AI tools are not reliable at confirming factual accuracy, even for basic facts. They do not reliably identify missing information. They do not understand jurisdictional differences or recognize when an issue is unsettled. They will not flag risks unless the lawyer already knows enough to ask about them.

AI systems will often produce output that sounds confident even when it is incomplete or wrong. They will not tell you when they are guessing. They will not warn you that an answer is plausible but incorrect.

That is not a bug or an occasional lapse. It is a predictable consequence of how these tools operate.

Where Lawyers Get into Trouble

Problems tend to arise when lawyers treat prompting as a substitute for judgment rather than as a support for it.

Frequently, lawyers treat AI output as research instead of a draft. They accept assertions without verification. They assume context rather than supplying it. They expect the tool to catch issues they did not already identify.

In those situations, the issue is not the prompt itself. The issue is the expectation placed on the tool.

Prompting cannot fix an ill-defined task. It cannot compensate for a lack of subject-matter understanding. And it cannot take responsibility for the result.

Responsibility Has Not Shifted

Nothing about AI changes who is responsible for the work product.

Lawyers remain responsible for understanding what the tool is being asked to do, supervising the output, verifying accuracy, and deciding whether the result is usable.

Prompting does not shift responsibility to the tool any more than using a research database, document automation software, or a junior team member ever did.

The tool assists. The lawyer decides.

Using Prompting Competently

Competent use of prompting starts with realistic expectations.

Prompting works when lawyers know what question they are trying to answer, understand what kind of output would be appropriate, provide clear constraints and context, and critically evaluate what comes back.

It fails when the tool is treated as authoritative rather than assistive.

That distinction matters not just for efficiency but for professional responsibility.

A Tool, Not a Substitute for Judgment

Prompting is neither magic nor menace. It is another tool lawyers have to learn to use competently.

Just as lawyers adapted to online research, plain-language searching, and the open web, they will adapt to AI tools by doing what they have always done. They will apply judgment, supervise work, and take responsibility for the outcome.

Prompting can support that work. It cannot replace it.
