What are the specific use cases for generative AI in contract drafting?

Jack Shepherd
32 min read · Sep 23, 2023


I return again to a subject I touched on in this article back in February 2023 — contract drafting and the potential impact of ChatGPT. Since then, firms and legal teams have been busy experimenting with and understanding large language models.

Back at the start of the year, most of the conversation centred around the game-changing nature of AI and how it would transform the legal profession. I felt like there wasn’t quite enough attention paid to the importance of providing a good foundation of business and knowledge data for large language models. I also felt like there was more room for discussion on use cases, e.g. not just talking about “contract drafting” but talking about specific stages in that process and the tasks required to be completed.

It is great to see discussion progress towards the importance of providing a good foundation of business and knowledge data around generative AI. However, I think there is still a gap around process definition in the key use case areas. That’s why I have decided to write again about contract drafting and generative AI.

In this article, I’m going to break down what contract drafting involves, and where generative AI can (and cannot) support the process.

I’m not going to go into huge detail on what generative AI is, or how it works. Instead, I’ll assume you’ve read this article.

Step 1: Getting a starting point

Lawyers never start from a blank sheet of paper when drafting anything. There may be circumstances in which a lawyer has to draft a document that nobody in their organisation has drafted before, but these are rare. Often, there is a decent starting point that can be used as a basis for either the entire document or parts of it. Such starting points might include:

  • A template (aka “precedent” or “standard form”). Templates work well because, when used by many people, they help teams work consistently. This helps them track contractual obligations better, manage risk and develop a feedback loop, so that knowledge accumulated in the course of using the template can be incorporated back into it for the next person.
  • An example (aka “precedent”). But the problem with templates is that they take effort to build. Where no templates exist, lawyers will hunt for prior examples of a similar contract they can repurpose. Examples are good because they can be traced back to a specific source. This allows lawyers to ask further questions about the document. However, examples do not have the strengths possessed by templates around managing risk, consistency and feedback loops.
  • Lots of examples. This is where you really have no one example to go by, and have to form a “best of” contract based on lots of different examples, picking the clauses that are most appropriate for your current situation.
  • A form (aka “letterhead”). This is where you have no substantive content to go by. You start with a blank sheet of paper, but at least the formatting is in house style. I would expect at least the boilerplate provisions to be provided from elsewhere, which is why I maintain that contracts virtually never start from a blank sheet of paper.

The key pain points lawyers experience here are:

  • Finding things. At worst, an organisation has no central repository of good starting points, in which case lawyers are forced to build their own private banks or ask around their team each time. This is slow and does not allow lawyers to share experiences with others. At best, an organisation has a well-built repository of starting points, but lawyers might still complain about the amount of time it takes to find things.
  • Using the wrong thing. I remember vividly the time a partner criticised another lawyer for re-using an example contract I had drafted because it was “toxic”. The partner explained that the document had been drafted under extreme time pressure due to client demands, and that ideally more time would have been spent on it. The point is that lawyers trawling an organisation’s data must make the right enquiries about the content they use; otherwise they risk repeating others’ mistakes. This risk scares the living daylights out of many knowledge professionals.
  • Drawing blanks. Inability to find anything, so having to “free draft” clauses or even an entire document. Even when these are drafted, there is no mechanism to share them afterwards for others to benefit from.

Finding things

Generative AI can help lawyers find things better through a capability called “embeddings”. This allows for documents to be found based on semantic similarity to a search term, rather than the existence of a search term as a keyword in the contents of a document. This solves the pain point around “having to type the right thing into the search box”, but there is likely a risk that too much content is returned. Furthermore, sometimes keyword searching is preferable if a lawyer does actually know what they want to find.
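
To make this concrete, here is a minimal sketch of semantic ranking. The `embed` function is a toy stand-in (word counts over a tiny, invented vocabulary); real systems use learned vectors produced by a language model, but the ranking mechanics are the same:

```python
from math import sqrt

# Toy "embedding": map text to a vector of word counts over a small,
# hypothetical vocabulary. Real systems use learned embeddings from a
# language model; this stand-in only illustrates the mechanics.
VOCAB = ["indemnity", "liability", "termination", "notice", "payment"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, documents: list[str]) -> list[tuple[float, str]]:
    # Rank documents by semantic similarity to the query, best first.
    q = embed(query)
    scored = [(cosine(q, embed(d)), d) for d in documents]
    return sorted(scored, reverse=True)

docs = [
    "termination notice must be served by registered mail",
    "payment of the indemnity is capped at the liability limit",
]
results = search("how is liability under the indemnity limited", docs)
```

Note that the query shares no exact phrasing requirement with the winning document; it ranks highest because of overlapping concepts, which is the point of semantic search over keyword search.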

There’s something else that AI and generative AI can help with here: providing more context and classification to documents. For example, AI models can determine the relevant “type” of contract so that lawyers can narrow down search results better. AI can’t provide all the context (I discuss quality of the document below), but it can provide some, and some kinds of information are easier for AI to detect than others.

Large language models can also provide summaries of documents to help lawyers trawl through results better. They can also compare two documents on a semantic rather than purely textual basis, which helps lawyers pick and choose the right starting point.

My conclusion here is that this is not a case of one thing replacing another. Generative AI adds a new capability to find things based on semantic similarity and context/classification, which is very useful for more exploratory searches. However, this sits alongside other controls such as good categorisation of materials (“only show me things that relate to my own team or a specific client”) and browse workflows (“just show me all the approved precedents for [x] type of contract”).

Using the wrong thing

Which brings us onto the second pain point, around lawyers using things they shouldn’t be using. Practice differs here between the US and the rest of the world. Many legal teams in the US encourage lawyers to search in their transactional content such as a document management system for useful precedents. In other jurisdictions, this would strike fear into many people, as they worry about lawyers finding content that has not been approved.

From what I have seen, generative AI (or any type of AI) is not (yet) capable of telling you what is a safe example to use and what is not. Take my example above — was the fact that the document was drafted in a rush apparent from the contents of the document? We’ve got quite a long way to go here, although new abilities for knowledge teams to find potential examples and clean them up for others to use might go some way to help.

Drawing blanks

One of the most-talked-about use cases for generative AI is to draft a contract based on a prompt: “draft me a lock-up agreement from the perspective of a company entering into a scheme of arrangement, with a market standard early bird fee”. Or alternatively, to draft you a clause dealing with specific circumstances relevant to your situation. In both cases, the output is reviewed and checked by a qualified lawyer.

Let’s think through the consequences of doing this for a moment.

Large language models are designed to produce different output each time, even if you use the same prompt. The level of randomness can be adjusted through the “temperature” parameter, although if you turn it down too much, the output becomes robotic and unnatural. It is inherent in the design of these models that the output differs each time.
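
To illustrate what temperature actually does, here is a minimal sketch of how it reshapes a model's next-word probabilities. The scores are hypothetical; the scaling is the standard softmax-with-temperature calculation:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more varied output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate next words.
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
warm = softmax_with_temperature(logits, temperature=1.5)  # more random
```

At low temperature the top candidate takes nearly all the probability mass, so the model picks it almost every time; at higher temperature the alternatives get a real chance, which is where run-to-run variation comes from.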

So, if two lawyers draft the same contract and generate a starting point using AI, the output will be different for each. This will not only lead to stroppy clients (who get frustrated when their experience differs based on the lawyers acting for them), but also cause risk headaches and prevent lawyers from sharing experiences with each other.

Then, you’ve got the problem of “hallucinations”. Perhaps less of an issue in contract drafting than in a research memo, but undoubtedly still an issue.

Yes, lawyers need to thoroughly check the output of generative AI. But generative AI does not reason, and it does not act on principles — it is a “next word predictor” (and a very good one at that). Humans, on the other hand, are not good “next word predictors”, but produce output based on experiences and learned principles. Both have their strengths and weaknesses.

Furthermore, I find it hard to pick holes in and review work product where I cannot ask the person who made it any questions. It is easy to gloss over well-written, but flawed, contract drafting in AI-produced content. These issues are mitigated when humans write the content.

It is likely to be an unpopular view, but I believe that the circumstances in which lawyers should rely on generative AI to produce a starting point for a new contract should be, and in fact are, quite niche.

But are we in a place where, given the pressures on legal budgets, generative AI will produce a starting point that’s “good enough”, and where any additional work to e.g. create templates or find examples has diminishing returns? Perhaps, but I still struggle to see why, once you already have an example of a particular type of contract, it makes more sense to start afresh next time than to start from that example. In short, I think it is nearly always better to start from a template or an example somebody else made. We’ll continue this discussion when we talk about “retrieval-augmented generation” in the next section.

Step 2: Producing a first draft

A starting point for the document has been found, either through a template, example or a form. The next task is to tailor that starting point for the facts at hand and to produce a first draft. There are two primary methods for this:

  • Contract automation. Lawyers fill in a form with the characteristics of a transaction. The contract not only includes the right party names etc., but it also contains logic to include the right clause permutations based on the transaction. This is usually quick and easy for lawyers to do.
  • Manual amendment. If a template is not wired up to a contract automation tool, lawyers will have to replace square brackets in the document, and delete provisions based on footnotes. If an example is being used, the process is even harder because the lawyer has to review the entire contract and spot which parts need amendment.

Contract automation tends to involve very few pain points, although we discuss one in the next stage. The only other pain point is, perhaps, that it takes time and resource not only to create a template but to build the logic to automate it.

Manual amendment is the classic example of a process lawyers hate doing. It takes a lot of time and extensive use of CTRL+F functionality in Microsoft Word, either to find square brackets (in a template) or to find repeat occurrences of information that needs replacing (in an example). A further issue with examples is that executed documents might only be available in PDF. Anybody who has tried to convert a document from PDF to Word to work on it further will know the pain point I am talking about here.

Contract automation v. generative AI

But can we, for example, use generative AI to repurpose a template or example to meet our needs and remove the CTRL+Fing? Can we attach the document to a prompt that instructs a large language model to “repurpose the document for [client x], with the following changes”?

The answer is “yes”, but the real issue is how this compares to using a contract automation tool. The latter provides certainty and has human oversight from the start. The former risks producing an inconsistent output and you would have to carefully review it against the starting point you selected to make sure nothing too funky has happened.

It seems to me that this is really not a great use case for generative AI, except in an emergency. Contract automation templates have guardrails built into them by the people who created them. All you need to do is fill in a form. Is generative AI quicker than this? Does it align with the guardrails every time? That remains to be seen.

Can you have generative AI populate the form that a human can then review? Perhaps this might be more helpful than having generative AI running the entire automation process.

Manual amendment v. generative AI

We run into the same issue here, whereby although it is slow, manual amendment will result in a more faithful replication of the original starting point. Having said this, there is probably something you can do either by adjusting model parameters such as temperature or designing a prompt (“don’t change anything except…”) that mitigates the risk.

There is potential here for generative AI to help. But given the massive advantages of using templates outlined above, might it be helpful for a lawyer to think beyond the immediate task at hand, and spend a couple of extra hours converting the example into a template so that others can use it? (I am, of course, aware of the practicalities around asking lawyers to give up their time to do this.)

Indeed, generative AI can be helpful for doing the exercise of converting an example into a template. As soon as the template is made, generative AI will probably fall out of the picture for that template in the future — especially if it is wired up to a contract automation tool.

First drafts based on lots of examples

There is a technique I referred to above called “retrieval-augmented generation”. This is jargon, but basically refers to the capability of a large language model to carry out the following steps:

  • Analyse the semantics of a prompt. Based on the prompt entered, the model converts it into a numerical representation (an “embedding”) that captures the key concepts it contains.
  • Match the semantics to documents. Find documents in a defined document or knowledge base that match those key concepts. This is, basically, semantic search.
  • Generate some output based on the retrieved content. Use the capability of the large language model to generate an output, but ensure attention is paid to the source data from the documents that matched the prompt.
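
The three steps can be sketched end to end. Everything here is a toy stand-in: the similarity measure is simple word overlap rather than a learned embedding, the knowledge base is two invented entries, and the final call to a model is replaced by building the grounded prompt:

```python
# Minimal sketch of the retrieve-then-generate loop. embed/similarity are
# toy stand-ins (word overlap); a real system would use learned embeddings
# and then send the grounded prompt to a large language model.

def embed(text: str) -> set[str]:
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap: shared words / total distinct words.
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(prompt: str, knowledge_base: list[str], k: int = 1) -> list[str]:
    q = embed(prompt)
    ranked = sorted(knowledge_base,
                    key=lambda d: similarity(q, embed(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(prompt: str, knowledge_base: list[str]) -> str:
    sources = retrieve(prompt, knowledge_base)
    context = "\n".join(f"- {s}" for s in sources)
    # The model is instructed to answer from the retrieved sources only,
    # which is what keeps the output tied to trusted content.
    return f"Answer using only these sources:\n{context}\n\nQuestion: {prompt}"

kb = [
    "lock-up agreement template with early bird fee clause",
    "loan agreement template with financial covenants",
]
grounded = build_grounded_prompt("draft a lock-up agreement early bird fee", kb)
```

The quality point made below follows directly from this structure: the generation step can only be as trustworthy as whatever the retrieval step pulls out of the knowledge base.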

This kind of exercise is the equivalent of a lawyer going through hundreds of examples, and creating a “best of” document. It straddles the process points of “finding a starting point” and “producing a first draft”, and can produce an output that is closely tied to a series of (hopefully high quality and trusted) documents that are relevant.

The output does, of course, need to be reviewed. But it is far less likely to “go rogue” than if you were simply entering a prompt into something like ChatGPT. That is, provided the set of documents you are pointing the retrieval to has the right content in it. If the underlying documents are of unknown quality, the output will also be of unknown quality.

As per the above, I think the better use case here is for organisations to use this technique to produce reusable templates, rather than for one-off single use contracts for a particular scenario. Again, after the template is produced, the AI probably falls out of the picture as its role as a synthesiser of information is complete.

This kind of capability could save huge amounts of time and be extremely beneficial for organisations that lack a central knowledge function that currently performs this role in a manual way. You will likely get more out of a highly knowledgeable individual who has personally reviewed hundreds of example contracts, but there is a trade-off here around quality v. time.

Step 3: Reviewing the first draft

It is usually a junior member of a team who produces a first draft of a contract. It then might undergo a series of reviews by a number of people:

  • Senior lawyers. Mostly relevant for law firms, but a senior lawyer (e.g. a partner) will generally want to review a document before it goes out to a client. In particular, they will want to see a comparison of the first draft against its starting point so that they can see what has changed from a trusted source such as a template.
  • Specialist advisors. If the contract is particularly complicated, it might require specialist input from foreign lawyers or specialist lawyers (e.g. tax). They might be reviewing the entire document or only parts of it.
  • Clients. Once the contract has been internally reviewed, the client will take a look through it to see whether they are happy for it to be used as the proper negotiation first draft.

When we talk about reviewing the first draft, this is what we generally mean:

  • Proof-reading. Lawyers are sticklers for grammatical mistakes and typos. As a partner I used to work with put it, “typos are like a rotten apple in a barrel of good apples — the rest of the apples might be good, but the rotten one makes you distrust the rest” (actually, the phrase was a little more graphic than that). I would also include unused definitions, undefined terms and broken cross-references here. The honourable mention goes to “Error! Reference source not found.”, which often appears once you save a document (i.e. after you’ve proof-read it).
  • Drafting checks. These checks move beyond basic proof-reading and encompass whether the document is drafted clearly. For example, a good lawyer will seek to clarify language where possible and remove ambiguities. They might also look for conflicts between obligations, for example, if certain obligations only arise on a specific date, and the contract is set to terminate before that date.
  • Sense checks. Given one of the primary purposes of entering into a contract is to document the relationship between parties, it is important to make sure that the wording of the contract reflects what the parties intend to agree. This often involves cross-checking the contract against other documents, e.g. a term sheet. This is another area where it is easy to make mistakes, for example, if a clause on notices refers to the wrong address of a company.
  • Legal checks. This is the next level of complexity, where lawyers are looking at the legal ramifications of what has been written. For example, if a subsidiary guarantees the obligations of its parent, this might constitute unlawful financial assistance and the lawyer would have to make sure this arrangement is legally compliant.
  • Commercial checks. Lawyers will also be ensuring the document not only reflects the best legal position, but also that clients are put in the best commercial position. This requires knowledge of prevailing market practice as well as negotiation dynamics to date (e.g. what has been conceded). For example, in a loan agreement, a borrower of sound financial standing might benefit from a stronger position than a young startup with limited assets. Companies and legal teams often have “playbooks” that help them comply with agreed positions (more on this below).
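
The date-conflict check mentioned under drafting checks is a good example of something a machine can do deterministically. A minimal sketch, with hypothetical obligations and field names:

```python
from datetime import date

# Flag any obligation that only falls due on or after the contract's
# termination date: by then, the obligation can never bite. The
# obligations and dates below are invented for illustration.

def find_date_conflicts(obligations: list[tuple[str, date]],
                        termination: date) -> list[str]:
    return [name for name, due in obligations if due >= termination]

obligations = [
    ("Deliver audited accounts", date(2026, 3, 31)),
    ("Final earn-out payment", date(2027, 1, 15)),
]
conflicts = find_date_conflicts(obligations, termination=date(2026, 12, 31))
```

The hard part in practice is not the comparison but extracting the dates and obligations from prose in the first place, which is where language models may earn their keep.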

The key pain points lawyers experience here are:

  • Manual exercises. Some of the checks referenced above are manual. The only thing lawyers hate doing more than this is making errors while doing it, then getting called out for not spotting something obvious.
  • Getting things wrong. Proof-reading errors are embarrassing and they happen, but making errors on the substance of the document is less forgivable. For example, it is table stakes for clients that the contract lawyers draft actually reflects the terms agreed in a term sheet, and that the document does not conflict with itself or fail to comply with the law.
  • Lack of knowledge. Experienced lawyers will often have the intuition to spot risks in contracts and determine market practice. More junior ones won’t have this intuition, but are nonetheless expected to have some degree of knowledge around these things.

Manual exercises

There is a clear place for machines to assist humans in spotting things like typos, bad cross-references etc. in a document. This is a classic situation where short human attention spans are ill-suited to spotting these things in long documents.

Generative AI is well-placed to spot these kinds of things. However, proof-reading tools have existed for decades in the legal industry. It is at the moment unclear what generative AI could add on top of these tools.
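
To show how mechanical parts of this check are, here is a minimal deterministic sketch for two of the errors mentioned above: defined-but-unused terms and used-but-undefined terms. It assumes definitions follow the common “Term” means pattern; real proof-reading tools are far more sophisticated:

```python
import re

# Toy definitions check. Assumes the drafting convention that a defined
# term is introduced as: "Term" means ... (quoted, capitalised). Anything
# quoted and capitalised elsewhere is treated as a usage of a defined term.

def check_definitions(contract: str) -> dict[str, set[str]]:
    defined = set(re.findall(r'"([A-Z][A-Za-z ]+)" means', contract))
    quoted = set(re.findall(r'"([A-Z][A-Za-z ]+)"', contract))
    # A term appearing only once appears solely in its own definition.
    unused = {t for t in defined if contract.count(t) < 2}
    undefined = quoted - defined
    return {"unused": unused, "undefined": undefined}

contract = (
    '"Completion Date" means 1 January 2026. '
    '"Early Bird Fee" means the fee in Schedule 2. '
    'The Early Bird Fee is payable on signing. '
    'Notices must be served before the "Longstop Date".'
)
report = check_definitions(contract)
```

Checks like this need no AI at all, which is partly why proof-reading tools predate large language models by decades; the open question is what generative AI adds on top.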

However, one capability that might be relevant is the ability of generative AI to suggest areas of a contract that might be ambiguous or where drafting could otherwise be improved. You will likely see existing proof-reading tools add this capability, and hopefully this will improve clarity of contract drafting.

One final thing here. Lawyers tend to check documents meticulously, right up until the point they send them. It is common for a lawyer to prepare the draft, produce a comparison against another document (comparisons / redlines are discussed in detail below), attach it to an email, only to find that there is a glaring typo. They then have to go through the entire process of editing the document, reproducing the comparison etc. to correct the error. If we can proactively flag risks like this while lawyers are drafting contracts, these kinds of needless cycles could be eliminated.

Getting things wrong

There is definitely a way for large language models to take one or more pieces of data and spot areas of conflict between them. For example, GPT-4 successfully detected a conflict in a notice provision that provided time periods for notices to be served through registered mail while also having a prohibition on physical methods of service. I also tried a few tests around checking contracts against term sheets, and conflicts were successfully spotted here too.

Whether or not generative AI can flag more complicated legal risks with a document remains to be seen. In my brief tests preparing for this article, it was unable to spot clear financial assistance risks present in finance documents. It was also unable to detect provisions that undermined the entire objective of a contract (see, e.g. the reference to Lock-Up Agreements in my prior article on ChatGPT and contract drafting). I don’t doubt that these things might improve with legal domain specific data, especially well-curated knowledge data held by the big content houses such as Thomson Reuters and Lexis.

In relation to both these niche legal risks and the more mechanical aspects of getting things wrong in a contract, I personally don’t feel like I have enough information to say whether generative AI is going to be game-changing here. It certainly has promise, but we need to make sure we are running extensive tests and measuring the results over a representative dataset. Until I see that kind of research — and particularly the findings of AI in comparison to humans doing these tasks — we don’t have the data to say this is anything but promising.

Lack of knowledge

The key problem here is that lawyers hold a lot of knowledge in their heads, and due to either cultural issues or lack of processes/incentives in place, it is hard to get this knowledge out of people’s heads to be shared with others. The issue is exacerbated by knowledge that cannot easily be reduced into documentary form (e.g. “there’s something weird about their approach here, it seems unusual”).

There are undoubtedly ways in which both sharing and accessing knowledge can be improved by generative AI. I have heard some say that knowledge management is dead, and that all content can now be written by generative AI. There are risks in this kind of oversimplification, that I have summarised in a prior article.

I think the future is less around AI-written content, and more around (1) how we help people find the needle in the haystack that might help them, (2) the touchpoints for sharing knowledge or funnelling it naturally from workflows people are doing anyway, and (3) the interface through which people interact with knowledge.

In relation to (1) (“finding the needle in the haystack”), you have two levers to pull. First, helping you find the needle better. The techniques referenced above around adding context and summaries help here. Second, reducing the size of the haystack. By this, I mean focusing the enquiry into a smaller set of content that has been earmarked for reuse. I have often heard people think AI can perform this second strand, when in reality I think it is more helpful for the first. The second, right now, will rely on defining processes that help people share good quality content (as opposed to poor quality content) that might help people on things like drafting and market practice.

In relation to (2) (“touchpoints for sharing knowledge”), smart knowledge leaders often talk about lawyers working in tools where, as they work, their experience is automatically curated as knowledge for the next person to use. This would not only increase the quantity of knowledge that could be shared, but also the structure of it. This will help both levers referenced in (1). However, in my view, a precondition for this strategy is lawyers working in a structured way.

For example, if lawyers continue to manage transactions using Word checklists and store their documents inconsistently with limited structure around them, it seems impossible to compare one transaction with another and to skim off data in the same way. Many might argue that you can apply generative AI or AI as a sticking plaster here to structure unstructured information. With complex legal matters, that seems like an unrealistic goal. Instead, it might make more sense to have lawyers working in a more consistent way and reap the incidental benefits of this around risk management and budget predictability.

In relation to (3) (“interface”), the Q&A interface offered by large language models seems promising. It is promising because of the capabilities around synthesising and semantically searching long documents and getting to the point more quickly. Instead of having to spend precious minutes flicking through a long textbook, a lawyer could ask a question about it, and get a generated answer that references the appropriate part if they wished to read further. This is promising for many use cases, but bear in mind that it is likely not a silver bullet. Lawyers will want to interact with documents differently depending on their purpose. For example, a junior looking at how to do a CP process in a financing transaction probably needs a comprehensive guide, not a short Q&A with a chatbot.

Step 4: Negotiating beyond the first draft

The next step is in many ways an extension of the previous one. One party will send the first draft to the other party. The other party then reviews the draft, using similar checks to those outlined above. They provide “comments” (otherwise known as a “markup”, “redline” or “revised draft”) on the document to suggest changes to the version they received. The process then repeats. At the start, big picture things might get amended, but as things progress, the back-and-forth ideally becomes more focused.

There are a few additional things lawyers are doing when commenting on a revised draft somebody else has sent them (indeed, these might also be considerations for the first draft as well):

  • Information flow. A revised draft of a contract might get sent to numerous people. Not every one of those people has time to read the contract in detail. One of the first things the recipient needs to do is give a high level overview to everybody interested in hearing about the contract, so that everybody can align on a common approach.
  • Playbook (commercial) checks. Arguably, the party sending the first draft is at an advantage because their position is the opening gambit that frames the rest of the contractual negotiation. The contract is “on your paper”. If the contract is not “on your paper”, you will need to review the other side’s draft with reference to what you are willing to accept. Often, this is done through something called a playbook, which sets out how to respond to specific negotiation positions. A playbook might exist for a type of contract, and for a law firm, multiple playbooks might exist for specific clients to reflect their “house” positions.
  • Incorporating comments. This is the logistical exercise of revising the document and showing your changes to somebody. People do this when sending a document back to the other side, either by sending back a “track changes” document (a Word document), or a “document comparison” (often a PDF). They also do it when liaising with specialist advisors or between law firms and clients.

The key pain points here are:

  • Slow information flow. Somebody sends a revised contract to five different people. Each person wants to know “what has changed”. If it takes each of those people 30 minutes to review the entire document and find this out for themselves, that’s a lot of time wasted. Some people are simply too busy to read the contract.
  • Manual alignment to playbooks. If you are a lawyer doing the same kind of contracts over and over again, constantly aligning them to your own playbook can be tedious and low value. Cross-checking somebody else’s draft and interpreting each clause against your own playbook can sometimes result in things being missed.
  • Lack of knowledge/playbooks. Furthermore, not all companies or firms will have playbooks so any alignment with your own negotiation positions will have to come from personal intuition. This is the same pain point listed above, and I have already discussed the potential impact of generative AI above.
  • Collaboration, version control and redlines. Despite it forming such a large part of their jobs, lawyers have not nailed how to collaborate on documents. I wrote about this in detail back in September 2020, and not much has really changed since. Lawyers struggle with working out who changed what in a document, working out which is the latest version, digging around in email threads and time spent doing manual redlines (which I also wrote about here).

Slow information flow

This is probably not a pain point people think about that much, but it manifests itself in most situations where a law firm is working with a client. Generative AI can play a role here, because it is especially strong at summarising information. The risks of hallucinations are lower here, because input data is provided in the prompt.

You could see, for example, how the forthcoming capabilities in Microsoft Copilot might be able to scan each attachment and summarise it for you. This will be helpful to people.

But what will be more helpful in the context of contract drafting is to summarise a contract as against something else, to answer the question “what has changed”, without somebody having to read an entire document. To do this automatically would require knowledge of what the last version actually was, which in turn likely depends on how well the underlying document management system is used. I talk a little more about “semantic comparisons” below.
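
For contrast, here is what the traditional, purely textual redline amounts to, sketched with Python's standard difflib and two invented clause versions. A semantic comparison would instead report that the payment period moved from 30 to 45 days, rather than just which lines differ:

```python
import difflib

# Two versions of the same short contract extract (invented for
# illustration). unified_diff produces the classic textual redline.
old = ["The Buyer must pay within 30 days.",
       "Notices may be served by email."]
new = ["The Buyer must pay within 45 days.",
       "Notices may be served by email."]

# Keep only the removed/added lines, dropping the diff header lines.
redline = [line for line in difflib.unified_diff(old, new, lineterm="")
           if line.startswith(("-", "+"))
           and not line.startswith(("---", "+++"))]
```

The textual diff correctly isolates the changed sentence, but it has no notion of whether the change matters; telling a busy reviewer "the payment terms got 15 days longer" is the layer generative AI could add on top.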

Manual alignment to playbooks

There are some generative AI tools on the market that will review a contract for you and suggest amendments. Some of these can be supported by playbooks, whereas others operate simply on the basic data underpinning the relevant large language model.

The tools that are not supported by playbooks might provide some good recommendations around things like drafting clarity, as these are often (but not always) textual tasks. However, it is hard to see them add so much value that they help one organisation achieve more than another. This can, surely, only be the case if they operate with reference to high quality assets that incorporate an organisation’s competitive edge.

Furthermore, I worry with these kinds of tools that they “over-lawyer” documents. When I was in practice, I remember one law firm in particular that was notorious for spotting all of my misplaced semicolons and suggesting inconsequential drafting markups. I don’t think this kind of thing really offers much to either lawyers or their clients, and I worry that overuse of AI in marking up documents autonomously will exacerbate these trends.

However, I do think that providing an initial view, or indeed a preliminary markup, of a document based on carefully curated content could deliver significant impact for a lawyer. It could, possibly, cut an hour out of each turn of a document. Alternatively, it could give senior stakeholders a “quick view” of how significant a received markup might be. Wherever tools can help structure processes (in this case, compliance with a centralised playbook), consistency is better achieved and risk is better managed. If contract automation tools are currently blamed for adding limited value after the first draft (because they fall out of the picture), this could be a great opportunity for them to track the contract against a set of guardrails throughout its lifetime.

As per the above, we need to make sure we are running experiments here. How does the AI-based markup compare to the human-based markup? Is there a quality v. time trade-off? If so, is it suitable in some situations but not others? I don’t have these answers, but I do hope those carrying out these experiments are able to share their findings.
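To illustrate what playbook alignment can look like, here is a minimal sketch. The playbook entries, clause names and field structure are all hypothetical; real playbooks are far richer, and a deterministic first pass like this would typically sit alongside (not replace) an LLM review and, of course, a human one.

```python
# Hypothetical playbook: each entry names a clause, the preferred position,
# and an acceptable fallback (None means no fallback is acceptable).
PLAYBOOK = [
    {"clause": "Liability cap", "preferred": "12 months' fees", "fallback": "24 months' fees"},
    {"clause": "Governing law", "preferred": "England and Wales", "fallback": None},
]

def check_against_playbook(clauses: dict) -> list:
    """Flag clauses that match neither the preferred position nor the fallback.

    A cheap rules-based pass narrows down what a human or an LLM prompt
    needs to look at; it is not a substitute for legal review.
    """
    deviations = []
    for rule in PLAYBOOK:
        text = clauses.get(rule["clause"], "")
        if rule["preferred"] in text:
            continue  # on-playbook position, nothing to flag
        status = "fallback" if rule["fallback"] and rule["fallback"] in text else "off-playbook"
        deviations.append({"clause": rule["clause"], "status": status, "text": text})
    return deviations

received_draft = {
    "Liability cap": "Liability is capped at 24 months' fees.",
    "Governing law": "This agreement is governed by the laws of New York.",
}
flags = check_against_playbook(received_draft)
# One clause lands on the fallback position; the other is off-playbook entirely.
```

The interesting experiment is then whether an LLM, given the flagged clauses plus the playbook rationale, proposes markups comparable to a human's.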

Collaboration, version control and redlines

As these pain points are more mechanical, my mind naturally goes to fixing processes rather than applying AI as a sticking plaster (or “band-aid” for US readers) to a more fundamental problem of a broken process. For example, if we try to fix the classic problem of “where is the latest draft” with AI, I’m not sure we will get anywhere — this is a characteristic not apparent from the face of the document, but from contextual data held in people’s heads and in disparate systems. It’s best solved through defining a process and getting people to follow it.

Lawyers need to spend time thinking about the different types of collaboration exercises they do and whether they really work or not. Collaboration in law firms is an interesting topic, because its success relies on people working in a consistent way — yet so often, the way lawyers work (perhaps, especially senior ones) can be somewhat laissez-faire: left to their own whims and personal preferences. I won’t go into huge detail here, as I dive into this in the article referenced above.

I will, however, add one thing to that article. Sometimes, lawyers are not making “drafting” changes but putting their thoughts onto a document (“marginal notes”). For example, somebody might write, “this conflicts with clause [x]”, or “unpack this a little”, or — my favourite — a simple “?”. Somebody has to read these marginal notes and then make a decision on (1) what the comment means, and (2) what changes therefore need to be made to the document. I can see potential in large language models assisting lawyers in making these calls, e.g. by making suggestions on what the comment means and what drafting could implement it.

Interestingly, the justification of somebody leaving those kinds of marginal notes usually lies in (1) training and skilling of junior lawyers in learning good drafting, and (2) the lack of time for the senior reviewer to make the actual drafting changes. Can generative AI help senior lawyers move from marginal notes to actual changes in the document? If so, what is the knock-on impact of junior lawyers learning to do this for themselves? Is the senior person “in the weeds” of the document enough to really know the ramifications of making those changes?

As much of the mechanics around incorporating comments and showing changes depends on producing document comparisons/redlines, it is worth noting that generative AI might help us perform semantic comparisons of documents rather than textual comparisons. This is useful for giving people a high-level idea of what has changed, rather than making them interpret a textual comparison. However, I still think the textual comparison will be fundamental because ultimately amendments have to find their way back into the draft.

There are a number of tools on the market that are trying to tackle this problem. Some are standalone, some are embedded in contract lifecycle management products. Many of these leverage AI, but the main thing that needs to happen here, in my view, is to standardise and design a better process.

Step 5: Preparing for signing

When everyone is done with negotiations, it’s time to get the document signed. This involves the following:

  • Obtaining sign-offs. Getting final approvals on the latest draft of the document.
  • Making an execution version. This basically involves stamping the words “Execution Version” somewhere in the document’s contents or the title, and ensuring all signature blocks are in place properly.
  • Preparing signature packets. If you’re using e-signatures you don’t have to do this step. However, we are quite some way from e-signatures being the universal way to sign documents. If you are doing things the old way (“wet ink”), you have to (1) get a list of every signatory, (2) separate out from each contract the signature pages each signatory has to sign, (3) combine these together into a “signature packet” for each signatory, and (4) send them out to each signatory.

The key pain points here are:

  • Last minute changes. In the latter stages of a contract, signature pages and sign-offs are often withheld to produce leverage on final negotiation points. This can cause stress and delay, and disrupts version control.
  • Manual processes. I probably don’t need to flag that the production of execution versions and preparation of signature packets is notoriously manual. Both involve a high degree of wrangling with Word and PDF editors, which is low-value work for lawyers.

We are getting away from contract drafting here and more towards the processes around contract signing and execution. At some point, I will write an article on the ancillary processes lawyers do (such as signing and execution) and spot areas where generative AI could or could not assist.

Last minute changes

These kinds of pain points are often a product of personal or business dynamics between parties, rather than any form of process that can be improved by technology. All I will say is that there is, perhaps, scope for law firms to define processes that better capture “what happened” on a transaction so that these learnings can be shared with others. The idea is that over time, lawyers might be able to anticipate these kinds of things at an earlier stage.

All forms of technology, including AI and generative AI, could potentially be useful here. However, AI is not a silver bullet. You have to be specific about what the learning actually was, which is probably not apparent from the pure text of a document. You would have to make sure you are capturing the full context: e.g. who the counterparty was, what the deal was, what the specific circumstances were.

Capturing this kind of information is a well-known challenge in knowledge management. It’s hard, and people should be under no illusion that there is far more to this than throwing AI at the problem.

Manual processes

It’s been a theme of this article that contract drafting involves a number of manual processes. Some of these manual processes require a degree of sense (e.g. proof-reading), but others are purely mechanical. Producing execution copies and signature packets is largely mechanical. You do not really need much skill, knowledge or experience to go through 10 documents and extract into one file the signature pages that pertain to a particular party.

Solutions already exist to do these things. For example, transaction management solutions have long been able to scan documents for signature pages and match them up to the relevant party. Some of these use AI (e.g. when receiving signed copies and matching them up to a party), whereas others use simple rules-based algorithms (e.g. in determining whether a party is a signatory to a particular document).

Generative AI may be able to assist in creating signature pages. But to a large extent, technology already exists to solve these kinds of problems and automate the wrangling people so often end up doing themselves in PDF editors and Word. These tools just aren’t adopted as much as they should be.
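To show how mechanical this really is, here is a minimal sketch of the grouping logic behind signature packets, with hypothetical document names and page numbers. In practice the page data would come from scanning the PDFs, and assembling the actual packets would use a PDF library; the core logic needs no AI at all.

```python
from collections import defaultdict

# Each entry records which page of which document a signatory must sign.
# Hard-coded here for illustration; a real tool would extract this by
# scanning the execution versions.
SIGNATURE_PAGES = [
    {"document": "Facility Agreement", "page": 74, "signatory": "Borrower"},
    {"document": "Facility Agreement", "page": 75, "signatory": "Lender"},
    {"document": "Security Deed", "page": 31, "signatory": "Borrower"},
]

def build_signature_packets(pages: list) -> dict:
    """Group signature pages per signatory.

    This is the mechanical heart of packet preparation: collect, for each
    signatory, every (document, page) pair they need to sign.
    """
    packets = defaultdict(list)
    for entry in pages:
        packets[entry["signatory"]].append((entry["document"], entry["page"]))
    return dict(packets)

packets = build_signature_packets(SIGNATURE_PAGES)
# The Borrower's packet draws pages from both documents; the Lender's has one page.
```

A rules-based script like this is exactly the kind of "non-AI" baseline worth benchmarking any generative AI solution against.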

I think this raises an interesting question for the legal industry in the context of generative AI. There is doubtless a potential for generative AI to take manual processes away from lawyers. But just because that potential exists, will we get the adoption of tools we need for this to be realised? This remains to be seen, but it’s always worth bearing in mind that busy lawyers tend to be less interested in the “wow factor” of AI than those in leadership, technology or innovation roles are.

Step [x]: What happens next?

The actual “execution” of documents is a laborious process, especially if you are not using e-signatures. I’m going to leave this out of scope for this article, because it is not really a “drafting process”. I will also leave the management of contractual obligations post-execution out of scope.

What I do want to talk about is how the experience and knowledge — both apparent from the contract itself, the context around it and also held within people’s heads — is captured for the next time somebody does the same kind of thing. And in thinking about this subject, you come full circle with many of the issues I have already touched upon around e.g. finding a starting point, knowing how to negotiate a document etc.

You might think that generative AI will somehow be able to automatically detect this kind of thing as somebody drafts, and that it will be available the next time somebody needs it. Possibly, it is built into chat-like interfaces that power either retrieval-augmented generation or help answer questions people have in the future.

What I’ve learned in my short time in the legal technology world is that things are rarely as simple as this. Experiences and knowledge need to be translated into content, e.g. past examples, guides, templates etc, with quality checks and the right context added to it. It’s all well and good being able to draw upon 1,000 past examples of a given contract, but how do you know about the context of these? What are the key characteristics of each transaction that led to the contract being drafted in the way it was?

There will need to be, at least in the initial stages, a human steer on these things. Humans need to spend time thinking about the kinds of things they might want to know about a contract in the future, and you might also need humans to input this data if it is not apparent from the document itself. On a longer-term basis, technology might be able to piece together the contract along with the emails, Teams messages etc. that make up its context — but most firms are quite a long way from structuring their information to the extent required to make this happen.

We therefore find ourselves back in the classic problem legal teams and firms have been dealing with for decades: “how do we capture and share knowledge better”. As much as I would love it to be the case, I cannot see AI solving this issue by itself without some degree of process definition and human involvement. We need humans to identify the good quality assets, we need humans to tell us about the context, we need humans to tell us what was useful to know about the document and what is less useful.

The issue that all organisations encounter is the incentives (and, to a lesser extent, the time) required to make busy lawyers do this. I have worked with a number of organisations who give up on these things before they start. I have equally worked with others who have managed to shift the dial and cement a real culture of knowledge sharing. Everyone is different.

I am aware, of course, of the argument that “something is better than nothing”, and that it is preferable for people to find something to start from, no matter how poor in quality, than start from a blank sheet of paper. I have sympathy for those arguments, but the risk with any form of AI is that it exacerbates both the positives and the negatives in the underlying training set. To my mind, quality becomes a big issue here — although perhaps I am biased by my formative years at a UK-based law firm with a strong knowledge function.

Of course, the landscape is changing rapidly. Future developments that can enable knowledge sharing with only very minimal human intervention will be game changing. I do not see this yet with the current state of generative AI, so for now I see there being great value in law firms and legal teams structuring and capturing their knowledge and data better in order to achieve many of the outcomes listed in this article.

So where can generative AI help contract drafting?

In this article, I have tried to go through the jobs people do and the outcomes we want to achieve in the contract drafting space. I have then applied my own understanding of generative AI to how these jobs and outcomes might be better performed.

I wrote this article to try to demonstrate the value of approaching things in this way. My analysis is unlikely to be complete, and I look forward to hearing about things I have missed or where I have over or under-sold the value of generative AI. In summary, these are the areas I think are most relevant for generative AI in the contract drafting space:

  • Setting up templates. I still think a template (or a prior example) is nearly always a better starting point than asking a large language model to spin you up a first draft. However, I think generative AI can be used to get you up and running with a template quicker. After that point though, I think generative AI largely falls out of the picture for this particular process.
  • Negotiations. Rather than have humans cross-refer to playbooks or ask lots of questions while reviewing contracts, some tools are already automatically aligning a contract with pre-defined playbooks, using generative AI.
  • Collaboration. The ability to semantically compare two documents will help clients and lawyers get up to speed more quickly with e.g. what the other side has suggested in negotiations. Other internal collaboration workflows might also be improved by generative AI, e.g. the use of marginal notes in drafts. However, we should not forget the huge value you could deliver without AI in this space, e.g. by defining good collaboration practices. For example, you could develop an AI tool that translates a partner’s scribbles on a piece of paper into drafting amendments, but isn’t it easier to not have this problem in the first place?
  • Manual processes. I list a number of manual processes in this article that lawyers simply don’t do as well as a machine. Some of these are candidates for generative AI (e.g. checking whether a document makes sense); some are not (e.g. contract automation). Here, be very specific around the manual tasks and benchmark AI against “non-AI” solutions.
  • Knowledge management. I hope I have demonstrated that capturing knowledge is not as simple as just making one person’s work product visible to another. The context needs capturing, the quality needs vetting, the information needs sorting. Some of this can be done by a machine, some of it cannot.

Some might argue that the way I have approached this exercise means we don’t end up with anything “innovative”. Why are we looking at how people currently work? Do we risk being stuck in the past, and develop “faster horses” rather than truly new solutions?

Nonetheless, I believe this kind of understanding is core to making solutions that are not only relevant to people but deliver value and actually get adopted by people. I don’t personally think lawyers will adopt a new tool just because it contains AI. They will, however, adopt something that they can easily see makes their lives easier and works in the context of their businesses. You look at what people currently do to establish the incentives in doing things a better way. You look at what people are trying to achieve to help frame the value you are trying to deliver.

I remember the last “AI wave” in legal back in 2017/18, when I was still practising. I heard a lot of things similar to what I am hearing today around the potential for AI to disrupt the legal industry. I personally felt like the people saying these things did not have a good grasp of the reality of being a lawyer. The problem, I thought, was the mistaken assumption that because technologists were interested in AI, users would be too.

Whether I was right or not, let’s try to learn from last time and involve lawyers in the discussion. Let’s make sure our discussions on these subjects are rooted in the specific processes, tasks, realities and outcomes lawyers, legal teams and law firms want to achieve, rather than implementing AI for the sake of AI.



Jack Shepherd

Ex biglaw insolvency lawyer and innovation. Now legal practice lead at iManage. Interested in human side of legal tech and actually getting things used.