
What Do We Want From Legal AI? – Artificial Lawyer

What do we want from legal AI? Is it primarily there to act as an assistant, operating at the edges of a lawyer’s work, or is this really all about seeking to automate entire workflows? The choices we make in the next few years will have a profound effect on our market.

The Purpose of Having A Purpose

First, do we need a purpose? Can’t we just wait and see where things get to? Can’t we let things happen organically? Well, yes, that is one approach. But, many things don’t happen organically. Take self-driving car systems such as Waymo. That took billions of dollars in investment, the concentrated efforts of hundreds of very smart people, and many, many years of intense work, along with thousands of trial-and-error tests. But, Waymo got there and self-driving cars are now a reality. (And as mentioned in a previous AL piece – the wildest thing is that once you use one it feels natural in about 10 seconds – which in itself is an incredible achievement.)

Could Waymo have happened by just letting things happen? Would a car company that is under no specific external pressure to build something like Waymo, nor with the intention of doing so, and with lots of other ways to invest its capital and brain trust resources, have just accidentally created such a system? Probably not. Why not? Because self-driving cars that are safe and sophisticated enough to travel in a super-complex real-world environment are incredibly difficult to develop. This is not like messing about in the kitchen and accidentally inventing the Caesar Salad, or making a new cocktail.

So, to conclude this initial point: purpose is necessary if you want to accomplish very difficult things. And building autonomous legal tools that can operate in the real world, which are safe, accurate and deliver exactly what’s needed, is truly a difficult task.

Little Helper or Automated Workflows?

OK, I’m not going to stretch this one out and will get straight to the point. There are two main pathways we can go down – and perhaps we will go down both at once:

  • The AI Assistant – the goal here is to have software that operates as a ‘support tool’, i.e. which fits onto the edges of a lawyer’s daily work. E.g. ‘Which prior cases in the High Court in London are similar to this one? Show me what’s there.’ Once you have that information the lawyer continues their day, reading, drafting, and so on. That legal research capability is not meant in any way to replace a workflow, other than perhaps the workflow of going to a physical law library and spending the day wandering the aisles of reference books.
  • The Legal Automator – the goal here is to have software that can perform – as much as technically and ethically possible – an entire workflow. Such efforts may well include agentic approaches, i.e. where a ‘program’ is given agency to act on the lawyer’s behalf and perhaps continue to act across a range of tasks, bringing in data and tapping other tools in the ‘software environment’ to complete the job (a minimal sketch of such a loop follows below). This really does seek to replace a human workflow of tasks with a digitised one.
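
To make that second pathway concrete, here is a minimal, illustrative sketch of such an agent loop in Python. Every name in it (Task, TOOLS, run_agent, the stubbed ‘tools’) is hypothetical, and the tools are just placeholders – this shows the shape of the pattern, not any vendor’s actual system.

```python
# A minimal sketch of the 'Legal Automator' pattern: one instruction goes in,
# the agent works through a chain of tool calls, and a draft comes out with
# the lawyer 'hands off' throughout. All names and tools are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Task:
    instruction: str                  # the lawyer's one long, complex prompt
    steps_done: list = field(default_factory=list)

# Stub 'tools' standing in for real integrations (template library, client
# data systems, a drafting model, a self-review pass).
TOOLS = {
    "fetch_template": lambda task: "base employment contract template",
    "pull_client_data": lambda task: {"employee": "A. Smith", "start": "TBC"},
    "draft_clauses": lambda task: "clauses drafted from template + data",
    "self_review": lambda task: "0 issues flagged (never assume this!)",
}

def run_agent(task: Task) -> Task:
    """Work through each step in turn, recording results on the task."""
    for tool_name in ("fetch_template", "pull_client_data",
                      "draft_clauses", "self_review"):
        task.steps_done.append((tool_name, TOOLS[tool_name](task)))
    return task

if __name__ == "__main__":
    done = run_agent(Task("Draft an employment contract for a new hire"))
    for step, result in done.steps_done:
        print(f"{step}: {result}")
```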

There is some overlap, as there are automation elements in nearly all forms of digital tool, but one can see this as two separate end goals: tech support on one side, tech substitution on the other.

Can we have both? Yes. Even in a world where significant streams of work are fully automated, lawyers will still also be using AI assistants.

But, as noted, fully automated systems that are good enough to be trusted with very complex work, where there is little room for error, are going to be hard to build. The last mile is going to be tough. Which is why we need to have some purpose here if it’s going to happen.

The Last Mile

If the goal is primarily to have assistants, then the last mile is not such a big deal. If you want to find some similar cases heard in London’s High Court and you don’t find every possible one, it’s not the end of the world. You find, let’s say, a dozen similar cases. Most of these are very useful and they help you to do your work.

With full automation the last mile really is of critical importance: the difference between safely landing a spacecraft under automation and it crashing into the ground is a good example. You can send the rocket up, it orbits, it comes all the way back down, but then it fails in the last few moments… and explodes in a huge ball of flame. The ‘last mile’ really was make or break, despite the huge success of all the previous stages.

The same is true for legal automation (using genAI, agents, ML/NLP, or any other approach, or combination of approaches).

Say you seek to automate the drafting of a contract from start to finish – with no human oversight, at least not until the very end. E.g. the entire instruction is just one long, complex prompt, and the lawyer is ‘hands off’ beyond that.

The system misses something, or maybe misses a lot, and the contract is worse than useless. Now it’s dangerous.

Which leads to the next issue with full automation: quality checking, AKA human-in-the-loop, or HIL.

Human In The Loop (HIL)

Although an inhouse lawyer can go to ChatGPT and prompt it to give them a legal document, e.g. an employment contract for a prospective employee of the company, and then use it without checking it, they would be naïve to do so. In fact, they may well end up getting fired some time down the line, when someone realises the contract is totally unenforceable.

So, that’s a failure of the system and also a HIL failure.

Next, a lawyer uses a system that’s built for the legal world, an automated legal tech tool. It also knocks out an employment contract in moments. It is much better than the ChatGPT one as it’s based on reliable legal templates and uses the correct legal language.

Can the lawyer just send it off and not check it? Aside from the issue of legal responsibility, that lawyer is now in a very interesting situation: should they trust the system?

Personally, I’d suggest that even now, any lawyer who just assumed that a contract generated by AI is ‘totally fine’ and didn’t check it would be very reckless. It may be OK, but we are not at that point of confidence yet, to be sure.

So, to conclude this point: the last mile aspect of legal automation – using just these simple examples above – remains key. Our automated systems for legal are not like Waymo yet. We cannot just ‘prompt and go’. Not yet. We must doubt the result. We must check the result. And it will be that way for a long time to come.
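
To illustrate that HIL checkpoint in the same spirit as the sketch above – again with entirely hypothetical names, not any real product’s API – the design point is simply that an automated draft should be impossible to send out until a named human confirms they have actually checked it:

```python
# A minimal sketch of a human-in-the-loop (HIL) gate: the automated draft sits
# in limbo until a named lawyer confirms they have checked it. Hypothetical
# names throughout; the rule being illustrated is 'no sign-off, no release'.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved_by: Optional[str] = None   # None means: not yet reviewed

def sign_off(draft: Draft, lawyer: str, checked: bool) -> Draft:
    """Record a human sign-off, but only if the reviewer confirms the check."""
    if not checked:
        raise ValueError("Reviewed but not confirmed as checked")
    draft.approved_by = lawyer
    return draft

def release(draft: Draft) -> str:
    """Refuse to send any draft that no human has signed off."""
    if draft.approved_by is None:
        raise PermissionError("Cannot send an unreviewed automated draft")
    return f"Sent (signed off by {draft.approved_by})"

if __name__ == "__main__":
    d = Draft("automatically generated employment contract ...")
    # Calling release(d) here would raise PermissionError: no 'prompt and go'.
    d = sign_off(d, lawyer="A. Lawyer", checked=True)
    print(release(d))
```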

But, if dozens of legal tech companies, law firms, inhouse teams, VC funds, and more, all decide that they want to really push for legal AI-driven automation that we can trust, then we have a chance of getting there. Lawyers – for regulatory reasons – will still have to personally sign off on even the most reliable outputs from an automated system. But, a world where more and more work is ‘fully’ automated is a very different one to a world where the goal is primarily to have legal AI act in a support role to the lawyer.

Conclusion

Hard problems rarely get solved by chance. Sometimes they do, but usually they don’t. We can probably keep refining and refining the legal AI assistant approach for many years to come without really planning too hard. Iteration after iteration, improvement after improvement, will build slightly better assistants each year.

But, getting to a point where you can really ‘prompt and go’ with near-absolute confidence – and in the very tough environment of commercial law, with all its risks and liabilities – will require real focus and the desire to get there.

If we want to get there, rather like the people behind Waymo wanted a self-driving car, I am sure we will get there, whether that’s by relying on agentic approaches, or something else. The key is having the goal to achieve that, and this will not happen by accident; it will be a choice.

(More on this theme in the future.)

Richard Tromans, Founder, Artificial Lawyer, July 2025

Legal Innovators Conferences in New York and London – Both In November ’25

If you’d like to stay ahead of the legal AI curve… then come along to Legal Innovators New York, Nov 19 + 20, where the brightest minds will be sharing their insights on where we are now and where we are heading.

And also, Legal Innovators UK – Nov 4 + 5 + 6

Both events, as always, are organised by the awesome Cosmonauts team! 

Please get in contact with them if you’d like to take part. 


