The importance of use cases for AI in legal and how to discover them

Jack Shepherd
12 min read · Jan 9, 2024


Comments such as “did you even speak to anyone about this?”, “this just doesn’t work”, or “it’s just another IT project” are commonplace in technology projects. They are especially common in the legal industry, where busy lawyers don’t even have time to grab lunch, let alone persevere with sub-par technology. Lawyers are notorious for trying things once, forming a negative impression, then never trying them again.

These problems often arise from a lack of rigour in understanding what people actually do, and in designing and deploying technology in a way that genuinely helps people achieve their objectives.

This might be something as simple as using the right language (“stop calling our clients customers!”), but more fundamentally it is about understanding (1) the processes people follow, (2) the tasks within those processes, (3) better ways of doing those processes or tasks, and (4) the individual and business benefits of people doing things in a better way. The term “use case” means different things to different people, but I use it to cover all four of these aspects.

When we say things like “lawyers need to embrace AI”, we need to make sure lawyers are embracing AI meaningfully, not just for the sake of it. In the age of generative AI, I believe that finding and understanding use cases is more important than ever.

The experimentation many firms have been undertaking sheds some light on the methods they use to uncover use cases.

In this article, I make two points. First, discovering use cases requires both serendipity and upfront design: it is not as simple as letting people play with tools and seeing what happens. Second, use cases have to be tied to a tangible business outcome: just “deploying AI” is not a good enough justification.

Discovering use cases

With that said, let’s talk about discovering use cases. Specifically, a claim I often hear:

“In deploying applications for AI, finding use cases isn’t important. To the extent they are, the best way to discover them is to let people play around with AI tools.”

Sometimes I also hear the notion that “use cases aren’t important”. I usually interpret this to mean that identifying use cases in advance of deploying a piece of technology is not necessary. I interpret it that way because a deployed piece of technology that has no use case has no use, and I can’t believe anybody advocates deploying useless technology.

Serendipitous use cases

The notion that we don’t have to know how a piece of technology will be used before we let people use it is an interesting one. It is supported by numerous examples of technology that was deployed for one purpose but found uses in completely unintended areas.

These “serendipitous” use cases arise not through the advance planning and thought of a product team, but through unforeseen interactions by users with a product. The product team then has a few choices:

  1. Pivot: completely change the direction of a product to serve one or more serendipitous use cases discovered by users (e.g. YouTube, which started as a dating site)
  2. Support: the overall direction of the product remains the same, but either new features are introduced to support serendipitous use cases or the product is marketed around an additional use case (e.g. Excel, which can also be used for project management)
  3. Ignore: the serendipitous use cases have no effect whatsoever on the development of the product or its marketing

There are some examples of inventions that started life without a clear use case in mind. For example, the lightweight and “just strong enough” adhesive developed by 3M resulted in the Post-It Note. Applications for it arose serendipitously, e.g. when somebody realised that using a Post-It Note might solve a problem they were having with bookmarks constantly falling out of hymn books.

Serendipity as the only strategy

It’s fair to say that so far, nobody has quite nailed the use cases for generative AI in the legal industry. Yet, I am sure there are some. Letting people play with AI tools allows use cases to arise through serendipity. These can be powerful use cases you might not have thought of before.

It is this line of thinking that leads people to conclude that we don’t have to worry too much about use cases, because they will arise when people start using tools. Yet relying solely on this technique has a few risks:

  • Pointless tasks. This is the risk that use cases emerge for things people shouldn’t be doing anyway (e.g. a task that could be eliminated by redesigning the process, without any need for AI at all)
  • Inside the box. The risk that use cases only emerge in the context of an existing process rather than a better process that can now be designed thanks to AI. Most lawyers are so in the weeds of their work that they do not have time to think about how to do things fundamentally differently
  • Limited time. People may simply not have time to play around with the tools, so no use cases emerge. Yes, AI is very exciting to those of us who work in technology. Are all lawyers quite as excited? I’m not so sure. Without a sufficient “hook”, you are relying on lawyers playing with technology out of intrigue, which is challenging with time-poor professionals
  • Laziness and lack of judgment. Humans sometimes fall into the trap of using technology to be lazy, focusing on producing the deliverable rather than asking whether the process of thinking it through was the important thing. For example, do you want a long research memo, or a trusted advisor? It depends on the circumstances, but it’s a question worth asking.
  • Bad fit. It is also possible that, without a sufficient control mechanism, use cases emerge that people think are great but don’t realise are fundamentally flawed. For example, somebody might think AI-produced memos look plausible, and might not know they have to check the citations.

Luckily, relying on serendipity is not the only way of discovering use cases for AI (or any technology or invention, for that matter). Whereas serendipity necessarily relies on open-ended experimentation (i.e. “let’s see what happens”), you can conduct more closed-ended experiments in specific areas where you think a given use case might exist (i.e. “can it help people do [x]?”).

Upfront use cases

I’ve seen a few attempts to predict use cases for AI in the legal industry. The problem is, you need a cross-disciplinary team (or a person with cross-disciplinary skills and knowledge) to come up with decent ones. Otherwise, you end up in one of these places:

  • Use case = process: sweeping statements that AI will solve an entire process (e.g. contract review or contract drafting), when in reality AI will support discrete tasks within these processes
  • Use case = task: a statement of a capability of generative AI (e.g. summarisation) without any context as to which tasks this capability maps to, and which processes those tasks fit within

Ending up in one of these places might cause you to either overvalue or undervalue the importance of AI:

  • Undervaluing: for example, many are now undervaluing the role AI might play in legal research. They assume that the use case for AI is to automate the entire process of legal research, and are scared off by the fact that generative AI can fabricate cases. If these people looked at AI as a tool that helps with discrete tasks within a process (e.g. finding a list of cases for you to review), they might be more optimistic.
  • Overvaluing: by the same token, others look at capabilities of AI and make (unreasonable) assumptions about their impact on lawyers. Summarisation is the most common example. Lawyers do spend time summarising things, but unless you understand why, where, how and how often people do this, you cannot map AI capabilities to the fundamental goals people are trying to achieve.

Perhaps more fundamentally, if a use case is not defined at the right level of abstraction, it is not actionable. Equating a use case with a process is not actionable because you still need to break the process down into tasks. Equating a use case with a task is not actionable because it does not capture the nuances of what people are actually trying to achieve.
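
One way to make this concrete is to treat the four aspects described earlier (process, task, better way, benefit) as required fields of any use case. The sketch below is a hypothetical illustration in Python, not an established framework; the example reuses the legal research scenario above.

```python
# A hypothetical sketch: a use case is only actionable once all four
# aspects are pinned down. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class UseCase:
    process: str     # the overall process, e.g. legal research
    task: str        # the discrete task within that process
    better_way: str  # how AI changes the way the task is done
    benefit: str     # the individual or business benefit

# "Use case = process" or "use case = task" would leave most of these
# fields empty, a sign that the use case is not yet actionable.
research = UseCase(
    process="legal research",
    task="compile a candidate list of relevant cases",
    better_way="an LLM proposes candidates; a lawyer verifies each one",
    benefit="less time searching, more time analysing",
)
```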

What you need is to work with a team that understands (1) how lawyers work, (2) how to rethink processes, and (3) the strengths and weaknesses of AI. If you have these skills within your organisation, you are not using them to their full potential if you rely solely on serendipity to discover use cases. These skills can be applied in five ways that avoid the pitfalls, identified above, of relying solely on users to come up with use cases:

  • Defining use cases upfront. You might not think of some of the use cases that arise through serendipity. But users might not think of some of the use cases you can think of. If you know the work people do, but are distant enough to think of a better way to do things, you can come up with your own use cases and work with users on these specific things
  • Redesigning a process. An existing process might involve, e.g., writing lots of words. You can definitely use AI to produce lots of words, but is there a way to cut to the chase and remove this step entirely? We should use AI to highlight the inefficiencies of the past, not to preserve them
  • Marketing and comms. To a busy lawyer, selling a tool that delivers an anticipated benefit is easier than selling a tool that (1) may or may not have a benefit, and (2) to the extent there is a benefit, it’s their job to work out what it is. Right now, AI hype will get some people interested. But how long will that last for, and does it apply to everyone?
  • Evaluation of purpose. Engaging with processes and tasks helps teams understand what the overarching purpose of a given process is. We can use this to point AI at the right parts, and take account of what we might lose (if anything) in doing so
  • Good fit. By concentrating on (1) the strengths and weaknesses of AI, and (2) the strengths and weaknesses of humans, we can find the best fit use cases. For example, which kinds of processes lend themselves best to a next-word-predictor (AI) rather than the application of principles and experiences (humans)?

Using both techniques

Discovering use cases for AI (or any technology, really) is not, in my experience, as easy as it sounds. When relying on serendipity, it takes a lot of time to ask the right questions and to understand whether what emerges is in fact a valuable use case. When defining use cases upfront, you have to spend time breaking down processes and then evaluating whether a given use case is sufficiently valuable to proceed.

One thing that has changed since the start of 2023 is how easy it is to deploy AI. A few years ago, using AI in an application often involved choosing the right algorithm and building extensive datasets. AI capabilities such as OCR and image recognition were available more easily through an API. But with the likes of GPT, this ease has been extended to a vast number of potential uses.
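
To illustrate how low the barrier has become, here is a minimal sketch of wiring a summarisation capability into an application through an LLM API. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative choices, and a real deployment would still need evaluation, guardrails and citation checking.

```python
# A minimal sketch of "adding AI" to an application via an LLM API.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. Model and prompt are
# illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

def summarise(document_text: str) -> str:
    """Return a short plain-English summary of a document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarise the following document in three "
                        "bullet points of plain English."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```

That is the whole integration. The hard part is no longer the technology; it is knowing whether anyone actually needs the summary.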

Among the millions of potential applications of generative AI, only a fraction will be valuable. Because it is easier than ever to include AI in technology, the risk of producing something AI-powered, albeit useless, has vastly increased. Defining use cases is therefore more important than ever.

Defining a use case is not something you do once. You start with something rough, and then you evolve it through a combination of open-ended (serendipity) and closed-ended (upfront) experiments.

Business outcomes

Misalignment of business outcomes

Often, I get the feeling that some teams would rather not have to come up with use cases at all. For these teams, the use case is an annoyance: something to get over the line so they can do what they really want to do, which is deploy AI.

This might be because the individuals involved are technically-minded, and are more interested in the inner workings of underlying technologies than how they are used. It might be because the team in question has not been through a project that failed because they did not understand their users well enough (see, e.g. “another project from IT…did they even talk to any lawyers!?”). Or it might be because the team in question wants to make an impact for marketing purposes, and they know that deploying basically anything that includes AI right now will get you a headline somewhere.

All of these are symptoms of the same problem: a misalignment of business outcomes with technology.

What is a business outcome?

When I talk about business outcomes, here’s what I mean. A firm might spend money (sometimes, a lot of money) on technology. What benefit does that money convert into, and how can you measure that benefit? What would happen if the money was not spent at all?

The legal industry has struggled with this question for years, and it is not specific to AI. The reliance on time-based billing contributes to these difficulties, but we should be slow to blame it for everything. Even with time-based billing, technology can be deployed in specific areas, e.g. to decrease the amount of time spent on tasks clients are not willing to pay for, which translates into the business outcome of improved profitability.

Although this subject deserves its own article (or, probably, a book), I see firms fall into a few common traps when talking about business outcomes:

  • Efficiency is not always an outcome. For many firms, efficiency is meaningless unless it translates into a higher-level business outcome such as profitability. This, in turn, often requires efficiency in specific areas rather than across the board for its own sake
  • Not digging deep enough. I always ask about business outcomes when I start a project. Some firms are great at this, and are able to spend hours talking me through their client strategies in specific sectors and how technology will aid growth and expansion. Others don’t dig deep enough. When I ask “what are you trying to achieve?”, the response is “deploy [x] product”. That is a technical outcome, not a business outcome

Business outcomes of AI

When I start the conversation about business outcomes, I sometimes get a lukewarm reception. This may be because, as a consultant, I am seen as stepping outside my realm of responsibility (“just deploy the tech!”), because the organisation has not turned its mind to the question, or because it does not see the question as relevant.

As with use cases, business outcomes become extremely important in the age of generative AI. Business outcomes work together with use cases, to help you focus your efforts in the right areas. You can, for example, settle on an overall business outcome (e.g. retain employees and stop them leaving for our competitors) and then deploy the use cases that will achieve that business outcome, deprioritising those that don’t.
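
Continuing the hypothetical sketch from earlier, that filtering step is simple to express: tag each candidate use case with the business outcome it serves, and deprioritise the rest. The candidates and tags below are invented for illustration.

```python
# Hypothetical continuation of the earlier sketch: keep only the
# candidate use cases that serve the business outcome the firm has
# settled on. Candidates and outcome tags are invented examples.
candidates = {
    "LLM-assisted case list for legal research": "retain employees",
    "auto-drafted marketing copy": "brand visibility",
    "first-pass summaries of disclosure documents": "retain employees",
}

target_outcome = "retain employees"
prioritised = [name for name, outcome in candidates.items()
               if outcome == target_outcome]
print(prioritised)
```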

This should all be seen in light of the likely cost of many AI tools. Tools that leverage LLMs are often very expensive indeed, and they should not be purchased lightly. Many of them will also require extensive data projects in order to reduce the risk of hallucinations and to produce meaningful insights.

None of this is new, and in my view the legal teams that have historically done technology projects well have always focused on user needs, use cases and business outcomes. The impact of AI here is twofold.

First, it has created a hype cycle that has caused many to try to escape the constraints they might have operated within previously. It is easier for people not to ask questions about the value of a piece of technology when everybody is talking about AI. I hope that over time this lessens, and projects are deployed because they are meaningful, not because they just “feel” like the right thing to be doing.

Second, it is easier than ever to develop products that have an AI component. The potential for both useful and useless applications of AI therefore vastly increases. Business outcomes and use cases become crucial in separating valuable projects from wasteful ones.

When generative AI capabilities first came around, many people told me that we no longer needed to worry about use cases because the technology was so versatile. I saw people deploying AI as a solution to problems that were really process issues. As with any technology, even in the space of a year we have seen some of its limitations. These may be surmounted in time to come. But all of this has taught me that spending time examining use cases is never time wasted.



Written by Jack Shepherd

Ex-biglaw insolvency lawyer and innovation professional. Now legal practice lead at iManage. Interested in the human side of legal tech and actually getting things used.
