Lawyers: how much should you rely on AI to make first drafts?

Jack Shepherd
10 min read · Nov 1, 2023

Note — this article is not really about contracts. It is about the (in my view, rather limited) circumstances in which lawyers draft other types of documents and are faced with a blank page or an unclear starting point. For my thoughts on generative AI and contract drafting, please see this article here.

Every lawyer (to date) has been through a similar training process when it comes to writing legal content. You start in law school, where you are told to write in a particular way. You are told to write in short sentences, get to the point, structure things properly and include an executive summary.

Then you start working in a legal environment, where you learn how to actually do it. You’re told (as I was on my first day) that 99% accuracy isn’t good enough, and that things have to be 100% accurate or the firm gets sued. You hand in an initial draft of something, a partner tears it to pieces, and you do your best to make it better. Then the process repeats.

Over time, you somehow learn how to do things properly, and your work gets torn apart less and less. Sometimes, the more cynical side of me thinks that maybe the quality of your work doesn’t change, but a combination of more wrinkles on your face and increased presence in a room means it is less likely to get torn apart.

We now have technological capabilities that might change these kinds of processes at a fundamental level. In this article, I explore the different opportunities these capabilities might present, and the potential consequences of pursuing them.

The AI-created first draft

When people talk about “AI replacing lawyers”, AI creating documents from scratch is often what they have in mind. After a series of blunders in early 2023 (i.e. lawyers discovering that AI-generated briefs can fabricate cases), the conversation has shifted towards two things:

  • Proprietary data. It now seems clear that if we are expecting large language models to produce advanced legal analysis, they need to be supported by a suitable legal-specific dataset (e.g. caselaw, legislation, articles, internal knowhow etc), and not just based on the foundational non-legal-specific data underpinning the model
  • Human oversight. To mitigate the risk of fabrications and made-up facts creeping in, and to ensure the output meets overall objectives, humans need to rigorously review the output produced by a large language model

The pace of development in this area is fast-moving. There are, for example, technologies that plug into vast databases of caselaw and legislation to improve the quality and accuracy of output from a large language model.
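For the technically curious, many of these tools follow a pattern often called retrieval-augmented generation: retrieve the relevant authorities first, then instruct the model to ground its answer in them. Here is a rough sketch of the idea in Python. Everything in it is a hypothetical placeholder (there is no real search index or model API behind `search_caselaw` or `call_llm`); it shows the shape of the technique, not any particular product.

```python
# Retrieval-augmented generation (RAG) sketch: ground the model's answer
# in retrieved authorities rather than its training data alone.
# `search_caselaw` and `call_llm` are hypothetical stand-ins for a
# vendor's search index and language-model API.

def search_caselaw(query: str, top_k: int = 5) -> list[str]:
    """Return the top_k most relevant passages from a caselaw database."""
    raise NotImplementedError("placeholder for a real search index")

def call_llm(prompt: str) -> str:
    """Send a prompt to a large language model and return its reply."""
    raise NotImplementedError("placeholder for a real model API")

def grounded_answer(question: str) -> str:
    """Answer a legal question using only the retrieved sources."""
    passages = search_caselaw(question)
    sources = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources by number, and say so if they are insufficient.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The point of the pattern is that the model is asked to cite, and stay within, a curated set of sources. That is what reduces (though does not eliminate) fabricated authorities.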

Given this, many are now suggesting that instead of lawyers writing the first draft, AI should write the first draft — as long as it is then reviewed by a lawyer. The question I want to ask is: is this a good idea?

Types of documents

Straight off the bat, I think this is generally a bad idea for contracts and other forms of “templatable” work product. That’s because the output of a large language model is different on each occasion, even if you use the same prompt. Imagine every lawyer in a firm using a completely different starting point for the same type of contract. Not only would it be reckless from a risk and budgeting perspective; there would also be no opportunity for experiences to be fed back into a reusable asset that others can benefit from, such as a template.

The same can be said where there is any opportunity to work from something already done by somebody else. After all, it seems preferable to pick up from somebody else who has already done much of the legwork and obtained learnings along the way. There is an application for AI here in tailoring prior work product to meet your current circumstances, but this is different from AI drafting the entire thing from scratch.

Yet there are occasions in law where you have no starting point at all. These are situations where the facts are so specific, or the law is so niche, that you have to go off-road. You are faced with a blank page. Is this a situation more suitable for AI-generated first drafts?

The blank page problem

A blank page can throw people, because they don’t know where to start. It might be that fundamental objectives are unclear, or it might be that there are lots of things flying around your head without a structure. People tackle this in a few different ways:

  • Discussions. Lawyers engage in discussions with colleagues, and as ideas bounce off each other, priority issues quickly become apparent
  • Planning. Instead of just starting to type in a Word document, a lawyer might sketch some ideas out on a piece of paper. These might then be taken to a discussion with others
  • Inspiration. Even if it is necessary to start from a blank page, there is value in seeing similar (but different) work product others have done in the past, e.g. to get ideas on how to present something

These are all techniques used to mitigate the blank page problem. Can AI help us even further here?

Model #1: AI thinks, AI writes, human reviews

Some have suggested that the blank page problem can be solved through an AI-generated first draft: “I am acting for a tenant who gave a deposit that was not put into an approved tenancy deposit scheme…draft me an advice memo”. Quite rightly, most agree that this first draft needs to be reviewed by a human before it is taken forward. Edits can be made either directly in the document or through an adjustment to the prompt (e.g. “this is great, but make [x] point a bit clearer…”).

This makes me uncomfortable. I believe that the blank page problem forces the author of a document to properly engage with the issues. The very process of structuring ideas that exist in your head helps you establish relationships between one idea and another, and potential areas of conflict. If discussions with colleagues are also occurring, along with consideration of how this might not be so different from something somebody else has done before, important knowledge and learning activities are taking place.

In going through this process, a human forms the general direction of the work product. That starting direction strongly influences where the work product will end up. If we work from something produced by AI — especially where the prompt does not really give that much direction — my concern is that the author will not be able to tell whether the general direction “works”, because they have not properly engaged with the topic.

The risk here is not only that the work product is flawed, but also that it is hard for a human to respond to follow-up questions about it if they have not structured their own thoughts properly. The value of legal work product is often not in the words presented on a page, but in the thought process and knowledge accumulated while making it.

Indeed, I am not actually convinced that the blank page problem is that common for lawyers. Competent lawyers will already have planned and discussed what they want to write and established the objectives, so it is actually quite rare that they are presented with a completely blank page. Of course, this is a generalisation and there will be exceptions.

Model #2: humans think, AI writes, human reviews

The reason model #1 is flawed is that humans have all but delegated the “thinking” part to AI. This is a bad idea because AI does not (yet) “think” — instead, it predicts the next word in a sentence. The results of this capability are astonishing, but we should be under no illusion — AI is not thinking like a human. Humans act on principles and experiences. Each has its strengths and weaknesses, but the two are, in my view, fundamentally different and should be applied to different purposes.

So how about humans do the thinking and AI does the writing? Under this model, the prompt is likely to be filled out with some sort of structure, e.g. “[see question above]…here is the structure I have in my head with some bullets, please put this into the form of a legal advice memo”. This structure might have been influenced by all of the things I mention above that force humans to properly engage with the issues.
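To make the contrast with model #1 concrete, here is roughly what the two prompts look like side by side. This is only a sketch: `call_llm` stands in for whichever model API you use, and the facts and bullet points are invented for illustration.

```python
# Model #1 vs model #2, expressed as prompts. In model #1 the AI is asked
# to think AND write; in model #2 the human supplies the thinking as a
# structure and the AI merely pads it out into prose.
# `call_llm` is a hypothetical stand-in for a model API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model API")

FACTS = (
    "I am acting for a tenant who gave a deposit that was not put "
    "into an approved tenancy deposit scheme."
)

# Model #1: AI thinks, AI writes.
model_1_prompt = f"{FACTS}\nDraft me an advice memo."

# Model #2: human thinks, AI writes. The bullets are the human's own
# structure, produced by planning and discussion, not by the model.
human_structure = """\
1. The landlord's statutory duty to protect the deposit
2. Remedies available to the tenant for non-compliance
3. Practical next steps and timings
"""
model_2_prompt = (
    f"{FACTS}\nHere is the structure I have in my head, with bullets:\n"
    f"{human_structure}\n"
    "Please put this into the form of a legal advice memo."
)
```

The technology is identical in both cases; the difference is where the structure, and therefore the thinking, comes from.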

This situation makes me feel more comfortable, because it is the human sculpting the overall direction of the draft. The role of the AI is to “pad out” the structure with words. The human can then edit the draft itself and wordsmith to their heart’s content. There might be a few benefits to this approach:

  • Feedback loops. Lawyers probably spend too much time crafting the minutiae of their work product, only for somebody more senior to rip it apart and reword it. If a draft is going to be ripped apart by somebody anyway, why spend time on the wording in the first place?
  • Cost v. benefit. Some clients will want expensive lawyers poring over their documents multiple times. Others will not. For clients valuing speed over grammatical/stylistic excellence, letting AI do the first cut of the document based on a rigorous human thought-process might be beneficial

One issue I have with this model is that in all my experiments with generative AI writing work product, I have been quite disappointed with the end product. It is usually full of clichés and jargon, and the analysis is thin. Even when I tried using some of my existing writing to influence the output, the results were still underwhelming.

This is perhaps a result of the current state of the art. These technologies will probably improve. In the future, it might be easier to tune the style of your writing. Junior lawyers might, for example, be able to obtain work product written in the style of a specific partner. (It remains to be seen whether partners will nonetheless rip a document apart even if it is written in their own style.)

More fundamentally, though, I also believe that the circumstances in which the end product actually remains loyal to your opening structure are quite rare. For example, when writing this article, I began with a particular structure, but when I started to put flesh on the bones, I realised it didn’t really work. I also realised, as I was writing, that some of the points I was trying to make were quite weak. I restructured the article accordingly. I’m not convinced I would have done this had AI done the writing part first.

One other thought that has occurred to me is that if all the “thinking” takes place in the structuring and planning part (and not in the actual writing), why not just tidy up your structure a little and present a shorter document? We should try to avoid circumstances where AI is adding verbosity for the sake of form.

The reality, of course, is that there are no tight distinctions to be made here. There is no reason why you cannot “think” in relation to a draft already produced by AI. I suspect the adoption of an AI-first writing process depends on personal preferences. I am still haunted by my lawyer days somewhat, and am a bit of a control freak. I have tried quite a few times to put a rough structure into a generative AI application but have been so dissatisfied with the work product that I ripped it up and started writing myself. Others may have a different approach.

But even if you are like me, that doesn’t mean that generative AI is completely irrelevant to these workflows…

Model #3: humans think, human writes, human uses AI to review

Instead, my preferred approach is to write something first and then use generative AI to sweep up anything I wrote badly, or to prompt me to include points I might otherwise have forgotten. In this way, AI-generated content plays its role in the review process. You could do this either by using model #2 after you have written the first draft (to prompt you), or by feeding in your work product and asking for suggestions (to help you correct and improve it).
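In code terms, the difference from models #1 and #2 is simply that the draft goes into the prompt and the instruction is to critique rather than to write. A minimal sketch, with `call_llm` again a hypothetical stand-in for a model API:

```python
# Model #3 sketch: the human writes the draft; the AI only reviews it.
# `call_llm` is a hypothetical stand-in for a model API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model API")

def review_draft(draft: str) -> str:
    """Ask the model to critique a human-written draft, not rewrite it."""
    prompt = (
        "You are reviewing a lawyer's draft advice memo. Do not rewrite it. "
        "Instead, list: (1) passages that are unclear or badly worded, "
        "(2) points the author may have forgotten to cover, and "
        "(3) suggested improvements, each tied to a quoted passage.\n\n"
        f"Draft:\n{draft}"
    )
    return call_llm(prompt)
```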

I see this as the main way in which AI will get adopted in the short term. It is a good combination of human and machine. The human does the thinking, the AI spots patterns in the text you might want to correct and uses its “next word prediction” capability to come at things from a different angle. Sometimes, the result is similar; sometimes it is unhelpful; sometimes it causes you to rethink things a little.

It also requires little in the way of behaviour change. The process remains the same, but AI is providing another tool in the toolkit at the review stage. It means humans are still learning and thinking as they write things, and it mitigates other risks of putting AI in the driving seat (e.g. hallucinations).

There are also use cases for AI in the collaboration workflows that follow. For example, it is conceivable that an application powered by a large language model could have a first attempt at “merging” comments written by multiple people into a consolidated version. It could also be helpful where “partnerial” comments such as “expand on this”, “pad this out a little”, or even simply “?” need interpretation. And it could help spot grammatical and drafting improvements in a document. These use cases seem to me to be a great starting point for integrating AI into legal practice.
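As a sketch of the comment-merging idea (again with a hypothetical `call_llm`, and a prompt design of my own rather than any product's):

```python
# Sketch of comment consolidation: merge several reviewers' comments into
# one de-duplicated list of proposed edits, and interpret terse
# "partnerial" comments in context.
# `call_llm` is a hypothetical stand-in for a model API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model API")

def consolidate_comments(draft: str, comments: dict[str, list[str]]) -> str:
    """Merge every reviewer's comments into a single actionable list."""
    formatted = "\n".join(
        f"- {reviewer}: {comment}"
        for reviewer, reviewer_comments in comments.items()
        for comment in reviewer_comments
    )
    prompt = (
        "Merge the reviewers' comments below into a single de-duplicated "
        "list of proposed edits to the draft. Where a comment is terse "
        "(e.g. just '?'), infer from the surrounding draft what the "
        "reviewer probably means and flag it as an interpretation.\n\n"
        f"Draft:\n{draft}\n\nComments:\n{formatted}"
    )
    return call_llm(prompt)
```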

A lawyer using AI won’t necessarily replace you

I am a firm believer in the power of generative AI to help make my content better. I am personally not yet ready to put it in the driving seat of making me a first draft as a starting point. Instead, I use that kind of output after I have written a first draft. I am aware that others do not work in the same way as I do, and are more comfortable having AI create a first draft. It remains to be seen what the prevailing practice will be here, but I suspect humans might still be making the first draft of content for some time — with AI acting in a review, “sweep-up” or inspirational capacity.

What will be interesting to observe is whether there are quality trade-offs between these different models, and whether clients of lawyers are willing to pay a premium for the more human-intensive ones.

I have no answers here, but what I will say is that, as usual in this space, it depends on the circumstances. We should avoid making sweeping statements. This is why I dislike the expression “AI won’t replace you, but a lawyer using AI will”. Applying AI everywhere is not always the right answer. We need to be more nuanced. The reality is that the successful lawyers will be those who understand the pros and cons of technologies (whether AI or otherwise) and deploy them in the right place to deliver the most value.


Jack Shepherd

Ex biglaw insolvency lawyer and innovation professional. Now legal practice lead at iManage. Interested in the human side of legal tech and actually getting things used.