The key steps to determine success or failure with the latest AI tools

Jack Shepherd
8 min read · Apr 3, 2023


If there was a slowdown in interest in AI towards the end of the 2010s, it has been reversed in the most dramatic fashion in the 2020s. 2021 saw the unveiling of DALL-E, a deep learning model developed by OpenAI to generate digital images from text prompts. Late 2022 saw the introduction of ChatGPT, which generates human-like responses to text prompts through a chatbot interface.

ChatGPT has captured the hearts and minds of many within the legal technology space, prompting widespread exploration of the potential use cases for this powerful technology in legal. To the extent any law firms had started to lose interest in AI, the work being done in the generative AI space will undoubtedly reverse that trend.

Keep focusing on the value, problems and use cases

As legal technologists, we get excited by these new possibilities and rush to explore them. However, we should not let our own personal interests interfere with the ultimate goal: delivering value to the business, clients, lawyers and others.

A law firm leader ignores emerging technologies at their peril. But in embracing them, they should not waver in their commitment to the strategies that deliver that value, such as discovering the key pain points and problems we need to solve.

It is curious that when an exciting new technology comes along — such as blockchain or generative AI — this analysis sometimes changes. Sometimes, we forget about analysing problems and throw tools out there to see where they land. Other times, the question “how can we deliver value” morphs into a different question, namely “how can we deliver value using this particular piece of technology”.

Both approaches potentially distract from the ultimate goal, which is not to deploy technology, but to deploy technology in order to achieve something. Of course, new technologies can unlock new possibilities in how to tackle previously unsolvable problems, or how to tackle already-solved problems in a better way. But we should always focus on what is most valuable to solve, not on things that are solvable merely because a new technology has come onto the scene.

The change brought by an AI solution

Anybody who has led the deployment of a solution in a legal team or law firm will know that getting lawyers and others to change is not easy. In particular, the nature of the change must be communicated clearly, along with the benefits it brings people. In law firms especially, there are still open questions about how tools that bring efficiencies interact with the traditional billable-hour business model.

In the early days of AI contract review tools for due diligence purposes, some organisations managed change by focusing on the underlying technology. They would tell lawyers about the advanced AI capabilities of the tool they were deploying, and how it would transform the due diligence process.

Over time, some of these organisations have changed their strategy. Many now talk about these tools as providing an easy way to capture and consolidate findings from reviewers in a due diligence process. They also talk about some of the findings being auto-populated, reducing the number of times you have to CTRL+F a phrase like “governing law” in a document. Note: there’s no mention of AI at all here.

This represents a shift towards articulating the specific use cases and value from a user’s perspective. The same must be done from a business perspective: how will spending money on a new tool bring value, or more money, back into the business? Law firms have long dealt with a tension between efficiency and revenue models based on billed time. For example, will cost reductions be passed on to clients? It is not as simple as saying that better technology will make these issues go away.

Tackling the right workflows

Conveniently, the quality of an AI solution and the adoption of it both rely on largely the same thing: understanding and articulating the value it brings to users. For example, it is common to hear statements (especially around AI tooling) that a given solution “will draft documents for you” or “help you review documents”. But these statements are too high-level, and not detailed enough to make a decent product or convince people to change how they work.

For example, drafting a document usually comprises a few different processes. Here are the steps commonly involved in producing a first draft:

  1. Finding a starting point such as a template or a prior example
  2. Amending the starting point at a basic level (often a simple exercise of replacing names and dates)
  3. Tailoring the starting point to take account of nuances in the current situation (e.g. to reflect bargaining position)
  4. Reviewing actual examples of other contracts to see whether any further useful nuances can be added
  5. Preparing a comparison against your starting point
  6. Collecting comments from others (e.g. a partner) and reflecting these in the document

To take this example further, the “finding a starting point” workflow has been the subject of a huge amount of recent discussion around generative AI tools such as ChatGPT. Indeed, many believe that generating a starting point for a contract is one of the key use cases of these tools in the legal industry.

But we should not just stop there. AI can generate legal contracts, but is this necessarily better than using a template? What are the things people are looking for from a starting point, and can AI provide these to people (e.g. context, provenance, explanations of drafting, etc)? Can AI be used to improve findability of content, rather than generating it? Or perhaps it can draft a contract, not to be used as a starting point, but to be used by the lawyer in step 4 to see if they have missed anything?
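To make the findability idea concrete, here is a minimal sketch of ranking existing templates against a new matter description using plain TF-IDF similarity (scikit-learn), with no generative AI involved. The template snippets and matter description are invented for illustration.

```python
# A minimal sketch: improving findability by ranking existing templates
# against a new matter description with TF-IDF similarity (scikit-learn).
# The template snippets and matter description are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

templates = {
    "Mutual NDA": "mutual non-disclosure agreement confidential information",
    "Share purchase agreement": "sale and purchase of shares warranties indemnities completion",
    "Loan agreement": "term loan facility interest repayment events of default",
}

matter = "advising on the sale and purchase of shares in a target company"

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform([*templates.values(), matter])

# Score each template against the matter description (the last row).
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
for name, score in sorted(zip(templates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {name}")
```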

The other possibility is that emerging AI technologies make you more willing to reconsider the process as a whole. For example, if the whole process is simply too long, can a generative AI model be fed a term sheet the parties have agreed in order to produce the “legal drafting”? The discussion should then be about whether this solution would work, given the need for enforceability and clarity in legal drafting. Or is the “legal drafting” needed at all? Will a term sheet alone work? What about standardising the document, removing most of steps 2–6 entirely? These latter solutions are not AI-powered, but they might solve the problem most effectively.
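If the generative route were explored, here is a minimal sketch of what feeding an agreed term sheet to a general-purpose model might look like. It assumes OpenAI’s Python client; the model name, prompt wording and term sheet are all invented for illustration, and any output would need lawyer review for enforceability and clarity.

```python
# A hedged sketch of producing "legal drafting" from an agreed term sheet.
# Assumes OpenAI's Python client (pip install openai) with an API key set in
# the environment; model name, prompt and term sheet are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

term_sheet = """\
Parties: Alpha Ltd (lender), Beta Ltd (borrower)
Amount: GBP 500,000 term loan
Interest: 6% per annum, payable quarterly
Term: 3 years, bullet repayment at maturity
Governing law: England and Wales
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whichever model you evaluate
    messages=[
        {
            "role": "system",
            "content": (
                "You are producing a first-cut loan agreement for lawyer "
                "review. Flag any terms the term sheet leaves open."
            ),
        },
        {"role": "user", "content": f"Draft a loan agreement from this term sheet:\n{term_sheet}"},
    ],
)

print(response.choices[0].message.content)
```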

It is always helpful to define success metrics for whether a particular AI solution is the best fit for a problem. For example, when considering such a solution you might form a hypothesis, e.g. “we believe that by using generative AI to produce contracts from term sheets, the contracting process will be sped up by 50%, allowing us to reduce write-offs on our matters by $10,000”. In other words, don’t skimp on evaluating both the problem and the solution in depth.
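A hypothesis framed this way can be tested directly once a pilot has run. A trivial sketch of the arithmetic, with invented pilot numbers:

```python
# A trivial sketch of testing the hypothesis above after a pilot.
# All numbers are invented for illustration.
baseline_days = 10.0          # average contracting cycle before the tool
pilot_days = 6.5              # average cycle observed during the pilot
baseline_writeoffs = 25_000   # average write-offs per matter before, in $
pilot_writeoffs = 16_000      # average write-offs per matter during pilot, in $

speedup = (baseline_days - pilot_days) / baseline_days
savings = baseline_writeoffs - pilot_writeoffs

print(f"Contracting cycle reduced by {speedup:.0%} (target: 50%)")
print(f"Write-offs reduced by ${savings:,} per matter (target: $10,000)")
verdict = speedup >= 0.5 and savings >= 10_000
print("Hypothesis supported" if verdict else "Hypothesis not supported")
```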

It is very interesting to follow what legal technology vendors are doing in the AI space. The most successful will focus on specific value propositions and build tools that support existing or optimised workflows. Ultimately, the goal is to fall in love with solving the problem, not with using the technology.

Lawyers may not all be visionaries, but they can help you be one

Sometimes, people just want to deploy a tool and see what happens, rather than speak to people who do a given workflow to understand what their issues and pain points are. Often, this is driven by a desire to not just “update” old and bad processes, but to throw away the rule book and do something completely new.

People who adopt this view often see themselves as being able to see a vision that lawyers (or other people affected by a workflow) themselves cannot see. They often say that delegating all responsibility for innovation to the lawyers themselves won’t get us anywhere — they will be clear about the problems they face, but they won’t think outside the box.

But you need these people to help you be a visionary. The reason you speak to lawyers and users is to understand the problem in sufficient detail. You cannot be a visionary without understanding the details of the world you are operating in. Lawyers will probably not articulate the problem perfectly; as a technologist, that is your job. Listen to what they are saying and piece together the facts to work out what the most serious problems are, and how they could potentially be solved.

Approaching problems differently?

It is easy to think that new technologies change the problems people have. But technology’s effect is on the solution, and perhaps on your own willingness to revisit problems you might previously have thought unsolvable.

Take the internet as an example: a game-changing development with far-reaching consequences that could not have been foreseen. The internet by itself did not change anything. It was the application of the internet to solve problems that led to positive (and negative) change. The discussion was not “what are the use cases for the internet” but rather “what are the most valuable problems we can solve, and is the internet the best way to solve them”.

By starting with the problem, you stay focused on the quickest way to solve a particular issue. For example, many firms have learned through deployments of AI tools that they did not actually need AI in the first place. For use cases such as “extract the date from these documents” and “tell me who the parties are in this agreement”, it is entirely possible that rules-based algorithms provide a cheaper, cleaner and quicker solution than more expensive machine learning options.
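To illustrate, here is a minimal sketch of rules-based extraction for those two use cases using regular expressions. The clause text and patterns are invented, and real documents would need broader patterns, but the approach stays cheap, fast and auditable.

```python
# A minimal sketch of rules-based extraction for narrow, predictable fields.
# The clause text and patterns are invented for illustration.
import re

text = (
    "THIS AGREEMENT is made on 14 March 2023 between Alpha Ltd "
    "and Beta Ltd. This Agreement is governed by the laws of England and Wales."
)

date = re.search(r"\b\d{1,2} (?:January|February|March|April|May|June|July|"
                 r"August|September|October|November|December) \d{4}\b", text)
parties = re.search(r"between (.+?) and (.+?)\.", text)
governing_law = re.search(r"governed by the laws of (.+?)\.", text)

print("Date:", date.group(0) if date else None)
print("Parties:", parties.groups() if parties else None)
print("Governing law:", governing_law.group(1) if governing_law else None)
```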

In a perfect world, AI projects would start without AI being mentioned at all: a clean progression from problem to solution, flowing naturally to AI if indeed that is the best solution.

In reality, this does not always happen. People often have a solution in mind before they start doing problem analysis. With emerging technologies, such as generative AI, this can be helpful, because it might lead you to consider a very wide range of problems — including those previously thought of as being impossible to solve. But at the same time, it can lead you to be biased towards a particular solution.

Always be mindful that humans tend to think about problems through the lens of a particular solution. Doing so might leave you fixing something that isn’t worth fixing, or fixing it in the wrong way.

Don’t change just to change

Most of what has been said above is applicable to all types of technology, in any industry. The interesting thing about AI tooling is that it often sits in a special category of emerging technology, where we tend to overestimate the short-term impact and underestimate the long-term impact. There is a risk that excitement around these kinds of technologies causes people to depart from the tried and tested process of putting problem before solution that has delivered success in the past.

The great thing about exciting new tech, regardless of whether you end up using it or not, is that it provides a catalyst for change and for people to reconsider things that might be gathering dust on the top shelf. This, combined with the proper analytical framework, will undoubtedly bring change that has been carefully thought through and designed for true business impact.

This article was originally published in the ILTA Spring Newsletter.

Written by Jack Shepherd

Ex-biglaw insolvency lawyer and innovation professional. Now legal practice lead at iManage. Interested in the human side of legal tech and actually getting things used.
