Legal Tech, AI, and Automating Our Brains: Striking the Right Balance

Jack Shepherd

Speed read

  • Legal tech companies often say things are manual and boring when they aren’t
  • AI should be used to either redesign a process or automate tasks within a process, not to just “do” the whole process itself
  • Whether something is manual and boring depends on its context
  • Some boring things might have intrinsic value, so be careful about automating them
  • Client expectations will influence things, but clients too need to be careful about what they lose in the pursuit of doing things more quickly

As a former lawyer, if there’s one thing that annoys me the most in legal tech marketing, it’s when vendors try to make out that a process is manual, cumbersome and annoying when actually it isn’t. Daniel Yim put it best in his post on LinkedIn today:

This might come as a shock, but plenty of lawyers actually enjoy drafting.

They don’t regard it as drudge work.

They don’t want AI to do it for them.

They like the challenge of crafting sensitive communications, they like how writing helps them think through the issues more clearly, they like the satisfaction of figuring out wording acceptable to all the parties.

The ‘high level strategic work’ that various tech will apparently free them up to do….in their mind, that’s drafting.

I would put things like legal research into the same category.

The narrative that seems to resonate the most with buyers of legal technologies is that AI won’t replace lawyers — instead, it will empower them and remove all the manual things they hate doing so that they can get on with higher value work.

The problem is that legal tech vendors want the bucket of “manual things lawyers hate doing” to be as large as possible. They inadvertently include things in that bucket that are not actually manual at all.

As it happens, automating higher value things like drafting and legal research is probably one of the hardest applications of AI, due to issues around provenance and accuracy. Given that everyone is still learning about AI, what makes more sense to me is to start with the truly manual things, nail those use cases and then work up to the clever stuff.

And when I say truly manual, I mean really boring. These are the things that make you audibly groan. They are the things you realise you still have to do just before you clock off from a hard day’s work. Things like time recording, organising your files, email filing, formatting Word documents, fixing cross-references and so on.

If we really want to unlock lawyers to do higher value things, we should focus on these things first and then see where we get to. Of course, you should allocate time to the clever stuff as well — but it seems to me that the balance is off right now. I get why — as humans, we are always attracted to the most interesting but not necessarily the most useful work. That’s why it’s worth consciously asking where your balance actually lies when you are thinking about the work you are doing on AI.

If you’ve been reading the above and thought something seems a little off, I agree with you. I’ve overlooked two things in what I’ve written so far.

Workflows v. tasks

First, I have conflated processes with tasks. Processes operate at a higher level of abstraction than tasks: one or more tasks fit into a higher-level process.

Contract drafting, for example, is a process made up of multiple tasks. If you don’t believe me, have a look at this mammoth article I wrote at the end of 2023, which Medium says takes 32 minutes to read. In that article, I break contract drafting down into 5+ steps, talking about the tasks involved at each step and where AI could support them.

The key is that technology supports tasks rather than processes. Technology helps shape the tasks, which in turn redesign the process. Contract drafting, for example, has developed over a number of years into a rather convoluted process — but people do the tasks within it for particular reasons. You have to understand those reasons and reinvent the things people do to deliver on them, rather than ignoring the substance of the process and just sticking AI all over it.

I said above that contract drafting and legal research are things that lawyers enjoy doing, and as processes, they tend to be high-value things that lawyers don’t want to automate.

But that’s not to say that contract drafting doesn’t involve a bunch of horrible manual tasks. For example, while a lawyer might enjoy wrangling over how to draft a termination clause that won’t trigger a bunch of cross-defaults in associated contracts, they are probably less happy to have to spend time running endless redlines against drafts they received from the other side, or replacing square brackets in an unautomated template.

There are manual tasks in high value processes such as contract drafting and legal research, but the processes as a whole are high value. My advice is to be specific about what you actually solve within that process. What I struggle with is AI companies who try to solve the interesting tasks rather than the boring ones. It makes me think, “why are people trying to automate my brain but not my keyboard clicks?”

The context matters

Second, I have overlooked the context in which these processes take place. While drafting a termination clause carefully to take account of other contracts is valuable work, drafting a third party rights provision is not. The same applies to legal research. When I was in practice, we templated advice on directors’ duties because it was always the same. It would have been boring and not worth anybody’s time to write this from scratch.

So, this is another bit of detail I’d like to see more of. If you agree with me that you are trying to assist tasks within, for example, the contract drafting process, it would help to be clearer about the kinds of situations you are talking about. Are you talking about boilerplate drafting, or highly bespoke drafting for high-value transactions? The context matters.

Do boring tasks still have intrinsic value?

I spent 9 months doing litigation work in my training contract, and one of the things I hated doing the most was court bundling.

For those fortunate enough never to have done this, you act as a glorified printing press: gathering key documents together in a specific order, producing an index of them, relabelling them and then putting them into a format others can consume (when I was practising, this was generally hard copy paper; it has now moved to PDF).

I hated doing this process. It caused me to stay really late all the time, especially when people asked me to change the order of documents (e.g. “can you reorder these so that #161 becomes #34” involved the manual renaming of nearly 130 documents). The cumbersome nature of the process affected not only culture, but also the quality of the output.
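Just to illustrate how mechanical that renumbering really is, here is a minimal sketch in Python, assuming a hypothetical folder of PDFs named “001 - Description.pdf” and a plain-text index; the function name, file naming convention and index format are my own illustration rather than any real bundling tool.

```python
from pathlib import Path
import re

# Hypothetical naming convention: "001 - Claim form.pdf", "002 - Defence.pdf", ...
NUMBERED = re.compile(r"^(\d+) - (.+)\.pdf$")

def reorder_bundle(folder: str, from_pos: int, to_pos: int) -> None:
    """Move the document at from_pos to to_pos and renumber everything else."""
    docs = []
    for path in sorted(Path(folder).glob("*.pdf")):
        match = NUMBERED.match(path.name)
        if match:
            docs.append((path, match.group(2)))  # (current file, description)

    # Reorder the list in memory (positions are 1-based, as in a bundle index).
    docs.insert(to_pos - 1, docs.pop(from_pos - 1))

    # Rename in two passes via temporary names, so a new number never clashes
    # with a document that has not been renamed yet.
    width = len(str(len(docs)))
    for i, (path, desc) in enumerate(docs, start=1):
        path.rename(path.with_name(f"tmp_{i:0{width}d} - {desc}.pdf"))
    for tmp in Path(folder).glob("tmp_*.pdf"):
        tmp.rename(tmp.with_name(tmp.name.removeprefix("tmp_")))

    # Regenerate the index so it always matches the renumbered documents.
    index = "\n".join(f"{i}. {desc}" for i, (_, desc) in enumerate(docs, start=1))
    (Path(folder) / "index.txt").write_text(index)

# e.g. reorder_bundle("bundle", from_pos=161, to_pos=34)
```

A script like this turns the “#161 becomes #34” request into seconds of work; the two-pass rename is only there so old and new numbers never collide mid-way through.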

To me, court bundling has always been the prime example of something that is not automated by law firms anywhere near enough. I think this is down to a combination of factors: it is not an interesting problem to solve, and discovering the process and the problems people experience doing it requires quite a deep understanding of what lawyers do and how they work.

But when the cases went to court, and people had an urgent question about which document(s) were relevant to a particular issue, guess who was the most familiar with the court bundle and knew exactly where the documents were? That’s right, me. I had spent so long making these court bundles that I knew the documents like the back of my hand. I knew who was mentioned in each one, because I had spent so long flicking through them. This in turn unlocked the door to higher value work, because I had acquired knowledge by doing a process I hated.

I still think court bundling, and a number of other processes, can and should be automated more. But in doing so, we do need to think about why we do things the way we do now. It might be that we do things manually, and these things have no intrinsic value, in which case we should eliminate those things. But it might be that there is some deliberate or accidental value in the way we do things now.

I’m not saying we should keep manual processes just because they have intrinsic value. What I’m saying is that we should be aware of the intrinsic value and find other ways to replace it. For example, by automating court bundling, you might lose somebody who knows the documents super well. How can you bring technology in to do all the things that person would have been good at? How do you enable the court bundler to move on to higher value work?

Legal research is probably the best example of this conundrum in play. Here’s a paragraph from a previous article I wrote that explains what I mean:

Let’s take a legal opinion in a banking transaction. The purpose of a legal opinion is to comfort a bank that the borrower in question is of sufficient legal standing and can enter into the transaction. Banks ask for it from a risk perspective, and its value largely lies in the words on the paper.

Contrast this with a complex legal advice memo delivered during a piece of litigation. It serves as a discussion piece for clients to clarify their understanding, “ask stupid questions,” etc. The words on paper carry value, but they also evidence the existence of a knowledgeable lawyer who can think outside the box and facilitate a helpful discussion.

Consider what would happen if both the legal opinion and the legal advice memo could be automated with complete accuracy. The legal opinion is likely okay, because its value lies in the risk it resolves. However, the point of an automated legal advice memo is questionable if a discussion cannot subsequently take place with a lawyer who knows the relevant facts and case law in depth.

Before assuming we need to involve AI in a specific workflow and that AI needs to be accurate, we should consider whether we are automating away a process that itself has intrinsic value.

The point is that we absolutely should be looking to automate as many seemingly manual things as we can. But in doing so, we must always ask ourselves whether we lose anything in the process. Often we don’t, but sometimes we do.

What about clients and those paying for legal advice?

The elephant in the room is that much of this is likely to be driven by the appetite of clients to pay for legal advice. Regardless of whether lawyers personally find contract drafting a fulfilling exercise, what if clients simply aren’t willing to pay them to indulge in it the way they currently want to?

The answer to this, I think, is probably apparent from what I have written above. If the legal service in question is straightforward and not novel, and automating the process does not affect its quality in any way, we should definitely go ahead and automate it. On the other hand, if the legal service is complex and novel, and we miss the point by automating vast swathes of it, it makes less sense to automate. The fact is that the former scenario can be done much more cheaply than the latter.

One thing law firms might think about is being more transparent about which exercises in their work are truly valuable. As a lawyer, I found it weird when clients pushed back on internal meetings but would happily pay for a court bundle to be produced. In plain English, they were unhappy for intelligent lawyers to spend time strategising about their case and making sure their deadlines were being met, but they were happy for poor old me to skip my dinner plans and act as a glorified printing press.

We will not get success from AI or any other technology if this conversation does not take place. As a society, we are probably now too focused on getting things done quickly rather than getting them done well.

We’ve already seen a bunch of instances of people using AI to automate a complex process without fully thinking through the implications of doing so. If you want something done more quickly, you have to think about the consequences for quality. Sometimes 70% quality might be enough, but sometimes it might not. The great thing about nascent and exciting technologies like AI is that it forces people to pick up this conversation again, and that is always a good thing.


Written by Jack Shepherd

Ex-biglaw insolvency lawyer and innovation professional. Now legal practice lead at iManage. Interested in the human side of legal tech and actually getting things used.
