Smith Business Insight Podcast | Series 4: AI Reality Check, Episode 5

Legal Disruptors


AI has the potential to take over almost half of the work in law firms and to give individuals and small businesses easier and less expensive access to pricey legal services


The legal profession is one of the first to feel the brunt of the artificial intelligence revolution. AI techniques such as machine learning are helping legal practitioners review contracts, prepare for litigation and even predict legal outcomes. ChatGPT has already been used in court cases. There’s even talk of robot judges in our future. 

This episode features Samuel Dahan, a law professor at Queen’s and Cornell universities and director of the Conflict Analytics Lab at Smith School of Business. Dr. Dahan is one of the world’s leading authorities on AI and the law and is well-placed to discuss how AI will impact the way legal services are delivered. He is joined in conversation by host Meredith Dault. 

Transcript 

[Music playing] 

Meredith Dault: I asked ChatGPT to write me a lawyer joke. Here’s what it came up with: “Why did the lawyer go to therapy? Because he had too many objections to work through.” I wasn’t overly impressed with that, so I asked for another one. Actually, what I wrote was a funnier one, please. And ChatGPT offered this: “Why don’t lawyers go to the beach? Because cats keep trying to bury them in the sand,” which took me a minute to process. And when I protested, it then offered a different ending: “Why don’t lawyers go to the beach? Because they can’t help but bring their work home with them and the sand keeps getting in their suits,” to which I replied: “Ugh.” ChatGPT was quick to apologize. “I’m sorry if the previous jokes didn’t quite hit the mark,” the system told me. “Humour can be subjective. And I understand that different people have different preferences for jokes.” 

And indeed, the world is full of lawyer jokes, even if they aren’t always very funny. We like to poke fun at lawyers for their high fees and for their legalese. But lawyers do important work. And these days they’re also feeling the brunt of the AI revolution. AI techniques such as machine learning are helping legal practitioners review contracts, prepare for litigation and even predict legal outcomes. ChatGPT has already been used in court cases, and there’s even talk of robot judges in our future. Just how big an impact will artificial intelligence have on how legal services are delivered? Will it help release the inner lawyer hidden in all of us? 

Welcome to this episode of AI Reality Check. I’m your host Meredith Dault, a journalist and media producer at Smith School of Business, and today we’re talking about AI and the law. And my guest is a trailblazer in the area. Samuel Dahan is a law professor at Queen’s and Cornell universities, and the director of the Conflict Analytics Lab, which is a consortium for AI research on law, compliance and conflict resolution based at Queen’s Law and at Smith School of Business. Dr. Dahan is also chair of the Deel Lab, a research group studying the future of global employment policy. In case you’re not sufficiently impressed, he is also a former cabinet member at the Court of Justice of the European Union. Welcome, Samuel. 

Samuel Dahan: Hi. 

MD: It’s great to have you here. 

SD: Thank you so much for the invitation. 

2:26: MD: We just heard your bio. I’m fascinated by your journey from your PhD at Cambridge to following a scholarly track to the frontiers of artificial intelligence and the law. What was the inflection point that caused you to invest so much energy into disrupting legal services using AI? 

SD: Oh, there are many moments. So when I was a PhD student, I was already interested in the computational dimensions of law. I think at the time it was called empirical legal studies — so looking at the impact of law on the economy, looking at how the economy can influence regulation. But then after the PhD, many of my colleagues would probably not like to hear that, but I was not interested in academia at all. I ended up going first to private practice, a firm called White & Case. Then I ended up working for the French government and then eventually the European Court of Justice. 

And what I found troubling at the court was the frequent inconsistency of the case law, which was sometimes not intentional. You know, judges have the right to diverge from precedent, especially at the European Court of Justice. It’s the highest authority in Europe. But the problem was that they were not even aware that there was a similar precedent. I think there was something missing in the way we did research. It’s still very much the way it works. I mean, Lexis, Westlaw and most legal databases work in a very linear fashion. It’s a Boolean search: you just insert a couple of keywords, and you see tons of decisions. 

I’ll give you a specific example. The European Court of Justice produces a lot of judgments in trademark law. Trademark is a simple legal concept, but it’s very complicated to compare two trademarks. What you do as a lawyer or as a judge, when you’re presented with a new legal question, you have to review a lot of cases and see a lot of trademarks. And we’re talking about hundreds, and the human brain doesn’t work that way. There’s a lot of lost information. So, at that point, I had learned a lot in this job, but I wanted to do something else. And I thought it would be interesting to invest in this field of information retrieval, legal data, legal prediction, and also what we call today generative AI, which had a different name back in the day. 
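
[To make that concrete, here is a toy sketch contrasting the Boolean keyword search Dahan describes with a similarity-based ranking. The case summaries, tokenization and scoring are invented for illustration; this is not how Lexis, Westlaw or the Conflict Analytics Lab’s systems are actually implemented.]

```python
# Toy contrast: strict Boolean (AND) keyword search vs. a similarity
# ranking over the same tiny, invented case base.
import math
from collections import Counter

CASES = {
    "C-1": "trademark confusion between two beverage logos",
    "C-2": "likelihood of confusion for similar clothing marks",
    "C-3": "patent dispute over battery chemistry",
}

def boolean_search(query_terms: set[str]) -> list[str]:
    """Return only the cases containing ALL query terms."""
    return [cid for cid, text in CASES.items()
            if query_terms <= set(text.split())]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def similarity_search(query: str) -> list[tuple[str, float]]:
    """Rank every case by its similarity to the query, best first."""
    q = Counter(query.split())
    scored = [(cid, cosine(q, Counter(text.split())))
              for cid, text in CASES.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# The Boolean query misses C-2 because it says "marks", not "trademark" ...
print(boolean_search({"trademark", "confusion"}))  # ['C-1']
# ... while the similarity ranking still surfaces C-2 as a near precedent.
print(similarity_search("trademark confusion"))
```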

MD: So that brought you to Queen’s in 2018, where you spearheaded the Conflict Analytics Lab. Tell us how you got there. 

SD: Yeah. Actually, my first academic job was at Cornell. I was a visiting assistant professor in 2015-16. And there I started working in this field a little bit more. And then I met our former dean here at the business school and at the law school. And we talked about this idea of setting up a research consortium in AI and law. They were interested. I was interested in moving in that direction. I very much liked the idea of helping students and working with industry to advance AI research for law. And thanks to them, I mean, they were pretty innovative at the time in 2015. Today, it sounds like, oh, obviously that makes a lot of sense. You know, AI and law. Back in the day, it was a pretty hard sell to a lot of my colleagues. So that’s how I ended up here. 

6:18: MD: OK. So tell us a little bit about the Conflict Analytics Lab. I know it does lots of different things, but you work on applying data science and machine learning to dispute resolution. And you’ve developed a number of tools that are used in certain situations — layoffs, workplace harassment. Can you pick one of those tools and give us a sense of how it works? 

SD: Of course. So the idea of the CAL (the Conflict Analytics Lab; that’s the acronym we use now, and what our students call us) was to build AI for law. We’re not looking at the new questions of the regulation of AI, which is extremely important, but it’s hard to do both. I mean, it’s hard to be the judge and the assessor and the party at the same time. So we are very much focused on building AI for law. Our very first initiative was to create what we call access-to-justice AI systems. One of them is called MyOpenCourt. The idea was to help people predict the odds of winning a case. 

So, let’s take an example. A very basic case: an employee is terminated, they’re fired, and they just don’t know whether they’re owed severance, whether they can even go to court. So the first system was a termination severance calculator. With a series of questions, a worker or even a small company could determine whether severance was owed, and the machine was, and is, capable of predicting exactly how much money the employee is supposed to be paid. It also pulls the case law depending on where the worker is located. If it’s in B.C., we’ll find the similar cases in B.C. with just a few questions. So: you worked for this company for five years, here’s how the termination took place, you’re in that industry, you worked for that person doing a specific number of tasks, and here is what you would be entitled to receive as a termination package. 
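
[For the curious, here is a minimal sketch of how a questionnaire-driven, precedent-based severance estimator could work. The features, case data and similarity weights are invented for this example; they do not reflect MyOpenCourt’s actual model or training data.]

```python
# Illustrative sketch only: a toy precedent-based severance estimator.
# Feature names, cases and weights are invented, not MyOpenCourt's model.
from dataclasses import dataclass

@dataclass
class Case:
    province: str            # jurisdiction, e.g. "BC" or "ON"
    years_of_service: float
    age: int
    managerial: bool         # did the employee supervise others?
    severance_months: float  # outcome observed in the decided case

# A tiny hand-made case base; a real system would be trained on
# thousands of decisions pulled from the case law.
CASE_BASE = [
    Case("BC", 5, 42, False, 4.0),
    Case("BC", 12, 55, True, 12.0),
    Case("ON", 3, 30, False, 2.5),
    Case("ON", 20, 58, True, 18.0),
]

def similarity(a: Case, b: Case) -> float:
    """Crude similarity: same jurisdiction matters most, then tenure and age."""
    score = 3.0 if a.province == b.province else 0.0
    score -= abs(a.years_of_service - b.years_of_service) / 5
    score -= abs(a.age - b.age) / 20
    score += 1.0 if a.managerial == b.managerial else 0.0
    return score

def estimate_severance(query: Case, k: int = 2) -> tuple[float, list[Case]]:
    """Average the outcomes of the k most similar precedents."""
    ranked = sorted(CASE_BASE, key=lambda c: similarity(query, c), reverse=True)
    nearest = ranked[:k]
    estimate = sum(c.severance_months for c in nearest) / k
    return estimate, nearest

# Outcome is unknown for the query, so 0.0 is just a placeholder field.
months, precedents = estimate_severance(Case("BC", 8, 45, False, 0.0))
print(f"Estimated severance: ~{months:.1f} months, from {len(precedents)} similar cases")
```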

MD: Right. And that’s something that lawyers would’ve calculated for X number of dollars over time. And so what’s been the uptake then if you are able to do this with a system? 

SD: Some tools are more popular than others, and I can tell you a little bit about that; that’s a limitation of AI. But take this termination calculation tool, and another tool called Layoff, which was especially popular during Covid: we’ve had about 100,000 users in Canada since the system launched in 2020. 

MD: This is broadly accessible to the public or to lawyers? 

SD: Accessible to the public. Lawyers can also use it, but I think lawyers probably don’t find it as useful, because most of them would already know this, would know how to do the research. So this tool was mainly designed for the general public. It’s only available in Canada because the system was trained and developed on Canadian law; it knows only Canadian law, if you will. And yes, we’ve had about 100,000 users, about 20,000 to 30,000 users year-over-year. It’s hard to assess how many of them find it useful. We also get a lot of users from the U.S. and, unfortunately, we have to let them know that it does not work for American law. But we have a new tool coming out that will also work with American law, so that problem will be solved eventually. 

10:12: MD: Are you getting a sense of what’s to come for the legal industry? A lot of experts have said that legal services is maybe one of the industries that’s going to be most disrupted by AI. And I’ve got some stats here: it’s been estimated that more than 40 per cent of legal work can be automated. We know that big law firms are already embracing the efficiencies of AI to review and draft contracts, to conduct legal research, to predict legal outcomes, things like that. But the Conflict Analytics Lab and others are showing that AI can really upend the business model for law firms. So how worried should they be? 

SD: OK, so, that’s a hard question. Usually, I leave predictions to my colleagues in economics [laughs]. But here’s what I can say: I’ve heard this prediction that the legal industry will radically change many times. And that was already back in 2010. Then in 2015, I think, a group of researchers at Duke University, if I’m not mistaken, managed to build a contract auditing system that was much better than what lawyers could do, in a shorter period of time. But I think there’s a big gap between what the technology can do and how we are going to implement it. So that’s one gap. The implementation component is quite a significant part of how technology works and how it changes an industry. 

There have been a lot of legal tech companies trying to make a difference in this space, and some of them have. However, it hasn’t really changed that much. I mean, lawyers shouldn’t be really worried about that. But now the situation is slightly different, because I think a lot of lawyers, and a lot of clients too, can see how AI can change some legal tasks, like drafting a contract. You can ask ChatGPT to write a contract, although that contract is not that good. Most of the time, it’s just a template that you could have found on Google. So the innovation in that regard is not as great as some people like to describe it. So that’s another problem. 

Then you really need a PhD to be good at interacting with that technology, right? I’m not talking about ChatGPT. I’m talking about the legal AI systems that will be coming out soon or that are already available to some law firms and legal professionals. It reminds me of the work of Cal Newport, a professor of computer science at Georgetown University, who has said something quite interesting about the skills required to interact with technology. And that’s also a job for us as academics and researchers: we need to be able to train our students to interact with that kind of technology. And that’s not simple, because the way you interact, the way you write questions to ChatGPT or other legal AI systems, will have a great impact on the answer. 

So, that’s a component that needs to be taken into consideration, and it’s going to be a huge part of the legal profession. So yes, some tasks are going to be replaced, and some of those tasks are, anyway, garbage. I mean, I don’t want to do the tasks I was doing when I was a junior associate — so reading contracts, just comparing documents. Who wants to do that anyway? And you really don’t need a JD for them. You don’t need a graduate degree from a top school like Queen’s or Cornell, or wherever you are. I think these tasks need to be automated anyway, so it’s not terrible news for lawyers. It’s quite the opposite. However, there might be a change in the industry. It depends on who is investing in the technology, whether some firms are going to merge to do this better, and how fast they’re moving. That is a different question. So yes, they might be worried, not because of the technology but mostly because of how they implement it. 

14:32: MD: I was going to ask if it was simply just a matter of time, like if we don’t have the technology now, but who knows what five or 10 or 20 years … 

SD: Or even in six months. 

MD: Then there’s the question of AI allowing more individuals and small businesses to access the kinds of pricey legal services they otherwise couldn’t afford. Is AI going to help them? 

SD: Yes. That’s still my main area of interest. MyOpenCourt has been one of the first, I think, open-access legal AI systems in Canada and in the world. And, yes, it has helped a lot of people. At least it has had a lot of users. However, whether it actually helped is a different question. I’m working with a couple of psychologists now to answer it. Just because someone used an application doesn’t mean it was actually helpful. So we need to answer that question. 

Another problem is that it’s hard to reach these people, because many of them don’t know they have a legal problem. And if they do have a legal problem, we are not certain they are capable of writing the right question. We’ve heard that it’s hard for people to prompt, and how you interact with ChatGPT has a great impact on the answer. 

But what about legal questions? That’s probably even more complex: if someone doesn’t have some kind of legal training, how are they going to be able to ask the right question? So I’m a little bit more cautious. For example, take our next project. We’ve been working on generative AI since 2021; it used to be called a Q&A, a question-and-answer, system. Our original idea was to make it available to the general public. Now we’ve shifted: we are making it available to every university in Canada and to pro bono legal clinics. I think there must be a third party that filters or screens the information, because most people may not be able to interact with the technology in the first place. They may not even know that they have a legal problem, or what kind of legal problem they have. 

MD: So you’re proposing a solution where there’s kind of a consultant. People can do their own legal work but have this expert in the corner to support them. 

SD: Correct. Yeah. Yeah. 

MD: OK. And you can imagine a future where that might be more possible? 

SD: Yeah, yeah. That’s definitely … I mean, we’re working with a lot of bar associations and law societies and universities and clinics where we are going to make this tool available. It’s called OpenJustice. It’s already available in more than 15 schools in Canada, mostly through legal aid clinics. And the idea is that people will come with questions and the legal professionals will steer them in the right direction. I think dumping this tool on the general public would be a bit irresponsible because, first of all, it’s not good enough. But even if it were amazing, if it were performing extremely well, there’s still the question of whether people can actually use it, and whether it is really useful for them. And I’d say the answer right now is no. 

17:39: MD: There are examples of AI tripping people up even within the legal profession. There was a story this spring in the U.S. where a lawyer used ChatGPT to generate a court filing complete with a list of relevant court decisions. And it turned out that a lot of the cases listed in the brief didn’t even exist. ChatGPT literally made them up. What do you make of that kind of situation? 

SD: That kind of situation reinforces my point from earlier. If that individual, a trained lawyer in one of the best, probably most elite, bars in the world, which is New York state, if he was fooled, the general lay person will have no idea whether the machine is providing an accurate response. However, when it comes to the legal profession, I think this is a more general problem with generative AI, and AI in general. I’m not a computer scientist, but we’ve been working on this issue for quite a while. And the problem with that example, and with generative AI in general, is that we all knew about it. The technology is not there. That’s why [ChatGPT’s creator] OpenAI carefully stipulates on its website that it is a research project. With that in mind, it didn’t prevent them from integrating GPT-3.5 and 4 into Bing, which in my opinion is a strategy to gain market share. Even one or two percentage points taken away from Google is a huge amount of money for them. 

In fact, if you think about it, Google is in a bit of a defensive position now, but they were the ones who really came up with that technology a few years ago. And you may remember another story that made the headlines: there was a scientist at Google who claimed that AI had emotions. That was a predecessor of ChatGPT that was capable of interacting with human beings and simulating human emotions. So I think the problem is that the technology is not there, and we need to invest. The legal and professional communities, like doctors, lawyers, HR professionals, need to invest in building something that’s a little bit more dependable and reliable. 

20:24: MD: Right. But that does lead to the hundred-dollar question, which is: Will algorithms ever be able to truly provide legal analysis, advice or predictions? Are there certain features of legal reasoning that are just simply not consistent with how AI works? 

SD: That’s a great question. I don’t have the answer, but it’s one of the research questions we’re trying to address with OpenJustice. Right now, it’s clear that GPT-4, or any kind of AI, cannot perform legal reasoning. Why is that? Because AI is a statistical system, right? But at the same time, and this is a deeper, maybe philosophical, discussion, human beings, and especially lawyers, are statistical beings. Our legal reasoning is very much statistical, because we think about precedent: we evaluate the new cases we’re presented with and try to determine whether they are sufficiently similar to a previous case. In that regard, it is very similar to how an AI system operates. However, for tasks that involve deep legal reasoning, there’s very little evidence that AI is capable of performing them. 

However, one of the missions of this new initiative is … we’re trying to create a foundational model in law and connect that model with other systems that have already trained their models on a lot of data. The idea is to see whether it is possible to create some form of legal reasoning. At this stage, it is not possible; these are very basic, simple tasks. But it might be possible in the future. 
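
[A minimal sketch of the retrieve-then-generate pattern he is describing: ground a generative model’s answer in precedents pulled from a database first. The precedents, the word-overlap retriever and the generate() stub below are all invented placeholders, not OpenJustice code.]

```python
# Retrieval-augmented sketch: fetch relevant precedents, then hand them
# to a generative model so its answer is grounded in real cases rather
# than free-floating text. Everything here is invented for illustration.

PRECEDENTS = [
    "Smith v. Acme (2019): 8 months' notice for a 10-year employee.",
    "Lee v. Borealis (2021): dismissal upheld; fixed-term contract.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Placeholder retriever: rank precedents by words shared with the question."""
    def overlap(doc: str) -> int:
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(PRECEDENTS, key=overlap, reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stub standing in for a legal foundational model; a real system
    would call the model here."""
    return f"[model answer conditioned on a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    """Build a grounded prompt from retrieved precedents, then generate."""
    context = "\n".join(retrieve(question))
    prompt = ("Answer using ONLY the precedents below, and cite them.\n"
              f"Precedents:\n{context}\n\nQuestion: {question}")
    return generate(prompt)

print(answer("How much notice is a 10-year employee owed?"))
```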

MD: So how realistic is the robot judge then? 

SD: A robot judge? There are many reasons why it’s not realistic. I think the reasons are more about accountability and legitimacy. And there are many reasons why we shouldn’t go in that direction, beyond the technical aspects, beyond the fact that the technology is not there. However, using AI as an assistant is definitely a solution that’s becoming more mainstream, which is good, because there are a lot of tasks that can be performed much more efficiently, if not necessarily automated. 

22:59: MD: So, final question: Tell me about the future of the Conflict Analytics Lab. Give me a five-year plan. I mean, you said in six months we can expect lots of change. What does five years out look like for you? 

SD: Our main focus now is to invest in this new technology that we’ve created. We want to make OpenJustice one of the largest open-source legal AI systems in the world. Right now we’ve only got the resources to make it available to about 10 to 15 law schools and public interest organizations. And the idea is that every organization or legal professional in the world will be able to plug in their data to create their own AI system trained on this foundational model. 

So, for example, you’re a lawyer, you’ve got a lot of documents, you’ve got a lot of data in your inbox and in your cloud. The idea is that you could plug OpenJustice into your data, and it’s going to be protected. It won’t be shared with anybody else, with any other parties in the consortium. And it will learn from how you draft documents. The idea is to make this available to as many lawyers as possible. That’s really the big plan for the Conflict Analytics Lab. 

MD: So overall, when it comes to law and AI, how are you feeling? Hopeful? Worried? 

SD: Yeah, hopeful. I think when ChatGPT came out, I wasn’t feeling good; it was like, this is not good, this is not going to go well. And when there are these AI hypes, and it’s not the first one, there are a lot of overnight new experts who say a lot of different things. Some of them are good, some are not so good. And it becomes very hard for deep-tech organizations like ours to advance, to do more work. 

But now it’s a little interesting, because potential collaborators, industry partners, are very tech-savvy and know better what they want. They have a little bit of FOMO, a fear of missing out, so they want stuff very quickly, and things that are not necessarily realistic. But at the same time, that means they’re definitely more open about it, and they are also a little bit better at finding the right partner. So that’s a good thing. I definitely feel hopeful. 

MD: OK. Well, I’m glad you feel hopeful, so all the rest of us can now too. Thank you so much for taking the time to talk with us today. 

SD: You’re very welcome. Thank you so much. 

[Music playing] 

MD: And that’s the show. I want to thank podcast writer and lead researcher Alan Morantz, my colleague Julia Lefebvre for her behind-the-scenes support and Bill Cassidy for editing support. If you’re looking for more insights for business leaders on AI and many other topics, check out Smith Business Insight at smithqueen.com/insight. Thanks for listening.