Chat GPT for contract drafting: AI v. templates

Jack Shepherd
7 min read · Feb 9, 2023

What does Chat GPT do?

Chat GPT landed back at the end of 2022 from a company called Open AI. If you have not yet had a play around with it, you can do so here. It’s a very impressive tool indeed. You can ask it a question and receive seemingly magic results. Ask it to write a song in the style of James Blunt, and it will do just that — in a worryingly convincing fashion.

In this article, I will explain my perspective on Chat GPT specifically around contract drafting. In short, I think we should be careful about getting too carried away with the superficial impressiveness of Chat GPT for this use case.

How does Chat GPT work?

If we want to really dig into how useful Chat GPT is going to be, we need to understand how it works. If we don’t understand that, we are in danger of thinking this is magic wizardry, with no idea where exactly we can use this promising technology.

Here’s my attempt at explaining it:

  • There are two types of algorithms. Rules-based algorithms produce outputs based on a series of specific rules taught to them by a human, e.g. “if time is before 12pm, display ‘Good Morning’”. Machine-learning algorithms produce outputs based (“trained”) on prior examples, e.g. “here are 1,000,000 pictures of a cat…okay, now tell me, based on what you know from those 1,000,000 pictures, is this next picture a cat or not?”.
  • Chat GPT is a “large language model”, which means it is based on a machine-learning algorithm that has had millions upon millions of sentences thrown at it as training data. In very basic terms, it has been trained specifically to construct sentences that make sense, by calculating the probability of one word following another word. Like a super-powered autocorrect on your phone.
  • Specifically with Chat GPT, humans have given examples of questions it might get asked, answers in response to those questions and ratings of potential answers to questions. This “supervised” part of Chat GPT is what gives it its magic powers to produce impressive responses to questions. At least, that’s what I think might have happened — Open AI is vague as to what the nature of human involvement actually is here.
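The distinction in the first bullet can be made concrete with a toy sketch. Below, the greeting function is a rules-based algorithm, and the bigram model is a (drastically simplified, purely illustrative) machine-learning flavour of the same idea large language models use: counting which word tends to follow which, then predicting the most probable next word. Real models are vastly more sophisticated, but the core intuition is the same. All names and the tiny "corpus" here are invented for illustration.

```python
from collections import Counter, defaultdict

# Rules-based: the output follows explicit rules written by a human.
def greeting(hour):
    return "Good Morning" if hour < 12 else "Good Afternoon"

# Machine-learning flavour: a toy bigram model "trained" on example
# sentences. It counts how often each word follows another, then
# predicts the most frequently observed next word.
def train_bigrams(sentences):
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    following = counts.get(word.lower())
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = [
    "the parties agree to the terms",
    "the parties agree to arbitrate",
    "the parties shall not transfer shares",
]
model = train_bigrams(corpus)
print(greeting(9))                     # Good Morning
print(predict_next(model, "parties"))  # "agree" (seen twice, vs "shall" once)
```

Note that the bigram model has no idea what a “party” or a “share” is; it only knows which words tended to follow which in its training data. That, in miniature, is the point the bullets above are making.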

A few things to note, that we will come back to:

  • Chat GPT is an expert in constructing sentences that make sense
  • The output of Chat GPT will depend on what you ask it — e.g. asking the same question in a slightly different way may produce a different output
  • It does not ask clarificatory questions to make sure it understands what kind of output you are looking for
  • It is not an expert in context. It works based upon probabilities of one word following another
  • Because the training data set is so large, you cannot “trace back” its sentences to a specific source

Drafting contracts

The use case

Chat GPT can draft contracts. A prompt such as, “draft me an employment contract” will result in a fairly convincing-looking employment contract being produced. This might save lawyers time if they are struggling to find a template or example for a contract they need to produce.

It’s worth pointing out that there are a vast number of potential use cases for Chat GPT, and contract drafting is just one of them. Other use cases include legal advice memos, drafting pleadings and court documents, and summarisation.

First drafts

I have thrown a number of examples of contract drafting tasks to Chat GPT, and the output always looks superficially impressive. However, because Chat GPT does not actually understand the legal consequences of its output (see above — it produces sentences, not legal advice), it is common for its output to be questionable. Also, it does not understand all the context, e.g. it never asks you what the governing law should be.

I have also seen a number of examples where the output of Chat GPT simply does not make sense. For example, I asked it to draft me a Lock-Up Agreement, an agreement used in a restructuring transaction that commits holders of shares or bonds to vote in favour of a restructuring proposal. It is key to such agreements that the securities holders cannot transfer their securities to anybody, unless the transferee also agrees to vote in favour of the proposal.

Chat GPT produced me a good-looking Lock-Up, but it allowed people to freely transfer shares if they were gifts or to transferees unaffiliated with the transferor — thereby subverting the whole purpose of the agreement. Again, this is because Chat GPT is designed to produce sentences that make sense, not legal advice that makes sense.

But maybe Chat GPT’s rider that its output “should not be used without consulting with an attorney” saves it. You would be a fool to rely on Chat GPT’s output without reviewing it and refining it. The general consensus is that, at least right now, the use case for Chat GPT is less drafting contracts and more producing first drafts of contracts. I find this more palatable, but I still have two issues with using Chat GPT this way.

Provenance and context

First, when I am reviewing a contract, I want to know why it has been drafted the way it is. Why is there a clause about gardening leave in an employment contract? Why has it been drafted in a particular way? Do we need it in our specific circumstances? Which clauses should we definitely not agree to negotiate? As a lawyer, it would make me uncomfortable to think that I cannot answer these kinds of questions if a client asked.

When I first used Chat GPT, I presumed there was a specific source for the output. Given the size of the dataset it is trained on, looking for one is of course a fool’s errand. When you think about a contract as documenting a relationship between two parties, it feels weird to think that people might be agreeing to things that one or both parties might not properly understand.

Review

Second, when I am reviewing a contract based on a template, I can see what has changed. This allows me to target my review in specific places. I can skip quickly over the boilerplate, and concentrate on the clauses that I know will require interrogation. I can also do this when working from an example document (as opposed to a template) — I can speak to the person who drafted it, and work out which bits were heavily negotiated and which were not.

With output based on a black box, I have no idea what to focus on. This would worry me, as I would have to apply 100% scrutiny to every word on the page. With long legal documents, this would undoubtedly have a negative effect on my review and I would worry I would miss things.

A last resort?

I can see why Chat GPT looks appealing to people for the contract drafting use case. But it is still a very long way from autonomously drafting contracts. Personally, I would also be uncomfortable using it to generate first drafts. Very few lawyers are drafting contracts that have never been drafted before, and I would prefer to work from a firm-approved template or, failing that, an example. In those cases, the document has at least been vetted by somebody, and you have information about its provenance and context. Without these things, I would struggle to feel like I am able to advise clients properly or achieve the best outcomes for them.

As an aside, Chat GPT reveals something interesting about cultural differences in law firms when it comes to knowledge management. As a general rule, US attorneys are more used to drafting from examples (aka precedents), as their firms do not often have the resources to produce firm-approved templates. European lawyers, however, have more of a knowledge management tradition and often have a whole host of home-grown templates (confusingly, also aka precedents).

Of the people I have spoken to, the US attorneys seem more comfortable using Chat GPT for contract drafting because they are more used to “fishing in the DMS” for prior examples of contracts. As a result, they see challenges to provenance and context in their source materials on a daily basis. This is less of a challenge for European lawyers, who are generally more familiar with working from templates. You can probably tell where my bias lies.

I asked Chat GPT to summarise this article. This is the output it gave:

The article is about Chat GPT, a tool that uses machine-learning algorithms to generate responses to questions. The author mentions that while the outputs of Chat GPT look impressive, they can often be conflicting, wrong, or vague, and that the tool does not understand the legal consequences of its outputs. The author suggests that Chat GPT’s current use case is for producing first drafts of contracts, but raises two concerns: the lack of provenance and context in its outputs, and the inability to trace back its sentences to a specific source. The author warns that Chat GPT’s output should not be used without consulting with an attorney.

I’d say this is a decent summary. I then asked Chat GPT whether the author was correct:

As an AI language model, I do not have personal opinions or biases. However, based on the article, it appears that the author has a nuanced view of Chat GPT’s ability to draft contracts, recognizing its potential for generating first drafts but also highlighting its limitations such as the lack of understanding of legal consequences, context and provenance.

And I might just end this article there. Maybe summarisation is a better use case for Chat GPT than contract drafting.

This article was originally published in the Solicitors Journal


Jack Shepherd

Ex-biglaw insolvency lawyer, now legal practice lead at iManage. Interested in the human side of legal tech and actually getting things used.