In this article, I identify three things that I hear regularly in conversations around legal tech. While many of these things might seem obviously true at first glance, I try to explain why there is more to them than meets the eye.
1. “Features matter the most”
When embarking on a legal tech project, it’s natural to want to know about the features of a piece of tech.
The features people tend to ask about are the exciting ones, such as anything that is AI-enabled or produces colourful graphs. I’ve seen many pieces of tech deployed in firms off the back of these kinds of “exciting” features, only for them to fall flat because nobody uses them.
To the extent these exciting features go unused, the likely reason is that they were evaluated in isolation from why they are actually necessary. Instead of starting with what features the technology has, the person evaluating a legal tech tool should start with the underlying jobs the users need to accomplish. Evaluating features is a secondary consideration, and needs to be anchored in the jobs people need to do with the system.
An obsession with exciting features often means that the boring, but crucial, features are ignored. Considerations such as where data is stored (e.g., is this tool going to create yet another information silo that makes security impossible to administer?), login workflows (e.g., is single sign-on enabled, or do users need yet another username and password?) and product analytics (e.g., is anyone actually using this tool?) are absolutely vital, but are routinely overlooked.
Interestingly, the lawyers using the tool don’t actually care about the exciting features. What they care about is how the tool helps them: how it saves them time, how it gets them home earlier and how it makes them more money. So vendors and firms should start with these higher-level considerations, not with the features.
Here are a few things that might indicate that a project or technology needs a bit of course-correction due to an obsession with features:
- During a procurement process, a firm provides a “feature checklist” to vendors, instead of a list of business outcomes and lawyer jobs the tool needs to help deliver.
- In a vendor’s marketing materials, there’s an undue emphasis on the underlying tech (e.g., everything is “AI-powered”) or on vague tech concepts such as “digitising workflows”, instead of the specific outcomes the tool can help achieve.
- A senior project stakeholder (e.g., a law firm partner on a technology approval board) asks first and foremost about what features the product has, instead of what the return on investment is likely to be.
What is needed is a change in mindset and approach: features are of secondary importance to (1) the business outcomes a tool will drive, and (2) the jobs, tasks or problems a tool can help lawyers accomplish or solve. Judge a product by what it helps achieve, not by what features it has.
2. “Everything needs to integrate with everything”
In legal tech, there are a number of initiatives in place to assist products to integrate with each other (e.g., Reynen Court and Theorem). Solutions such as “enterprise search” promise to bring all of a firm’s data sources into one place too.
Many of these kinds of integrations make complete sense. It makes sense, for example, for drafting tools to use the same tools used to store the firm’s work in progress data, because these two tools are part of the same workflow (in this case, document drafting and retrieval).
Integrating systems usually occurs through something called an API (“application programming interface”), which is basically a way for one system to speak to another system. Too often, I hear discussion of integrations based solely around APIs — as if the existence of an API for a system means that integrating it with another system is a fait accompli. Integrating two systems takes work. It isn’t just a technical question, it’s also a user experience question.
For instance, when I am done in my drafting tool, how exactly does it “integrate” with my document management tooling? Does the “save” function appear in the drafting tool or the document management tool? Does it happen automatically, or is it triggered by a user? What happens if the integration fails and the data ends up in some sort of black hole?
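The questions above can be made concrete with a short sketch. Everything here is hypothetical — `DmsClient`, its `save` method and the fallback path are illustrations of the design decisions an integration forces, not any real vendor’s API:

```python
class DmsError(Exception):
    """Raised when the (hypothetical) document management system rejects a save."""


class DmsClient:
    """Stand-in for a real DMS API client. Purely illustrative."""

    def __init__(self, fail: bool = False):
        self.fail = fail
        self.saved = []

    def save(self, name: str, content: str) -> str:
        if self.fail:
            raise DmsError("DMS unavailable")
        self.saved.append(name)
        return f"dms://matters/{name}"


def save_draft(client: DmsClient, name: str, content: str) -> str:
    """Save a finished draft from the drafting tool into the DMS.

    The integration question is not just "is there an API?" but what
    the user experiences on failure: here the draft is kept in a local
    location rather than silently vanishing into a black hole.
    """
    try:
        return client.save(name, content)
    except DmsError:
        # Fallback: keep the draft somewhere recoverable so the failed
        # integration never loses the user's work.
        return f"local://drafts/{name}"
```

Even this toy version forces answers to the user-experience questions: who triggers the save, where the document lands, and what the lawyer sees when the API call fails.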
The main problem, though, is that too much integration work is done for the sake of integration. There is little point integrating one system with another if it brings little to no value to a user. For example, is it really important for the time recording tooling to integrate with your contract review tool? If so, how important is this, and what actual need does this serve?
“Does [x system] integrate with [y system]” is, in my experience, perhaps the most asked question in a legal tech product demo. Similarly, “yes, there is an API” is a very common answer to this. Any discussion of integrations in isolation of the value or workflows in question is not time well spent.
See #1 above. But also, mapping the jobs a person does in a given workflow to the relevant tools is important here. This ensures you are not talking about integrations in the abstract, but about the phase of the activity at which different tools come in, and what the whole experience looks like to a user.
3. “If the tech is good enough, people will use it”
When I was involved in an in-house software build, the project proceeded on the assumption that on launch day, lawyers would be queuing up to use it — much like Apple enthusiasts queue outside Apple stores for the new iPhones.
Of course, this didn’t happen. I realised quite quickly that the problem couldn’t have been the quality of the software, because nobody was using it. We weren’t even getting people through the door, so the quality of the tech never came into the equation.
The question of tech being good enough only becomes relevant once you have already persuaded people that it is worth their time to use it. Sometimes, badly designed tech can look daunting to new users, requiring increased effort upfront to learn to use it. In such an event, a poor quality user experience will lead to a lack of adoption.
However, badly designed tech is rarer than zombie tech that is launched but never used. The quality of the tech is just one factor in whether or not it will get used. By far the most important factors are:
- If people actually know the tech is available to use in their firm.
- How well the benefits and incentives of the tech have been articulated and communicated to users.
- The extent to which efforts have been taken to change existing processes and help form new habits.
Most law firms have a catalogue of software that has been rolled out, but not used. Such catalogues are symptoms of unduly focusing on the quality of the tech, rather than on the incentives in place to get it adopted.
The solution to all of this is to have a decent change management and product launch strategy in place. There is a whole host of information out there on change management. In brief, it is a buzzword for the following activities:
- Assessing the need and appetite for a new piece of technology
- Articulating benefits and building incentives (either personal or business incentives) for people to change how they work and to adopt the new technology
- Building a product launch strategy that involves awareness of the new tech tool, and rolling out the tech incrementally, group by group
- Measuring the success of product rollouts, and directly addressing the reasons why the tech is not being used
These three things share a common strand: the overall value of improving a given workflow or solving a particular problem comes first. Technology is just the means through which those problems are solved. There’s no point putting technology in to solve a problem that was never worth solving. Do so, and you will struggle to get the tool used, and people will ask some difficult questions about your spending decisions.
This article was originally published in the Solicitors Journal