"Change in law and technology is happening. Is it a revolution? Or evolution?"


There's an enormous thirst for learning more about law and technology, driven not just by students, but also by law firms and people in government. In this interview with Prof. Anthony Niblett, PhD, we try to quench that thirst. We talk about the changing landscape, the risks and opportunities of AI, "the upheaval," and whether there can be a global approach to legal tech. He looks at the topic both from a professor's perspective and as a co-founder of Blue J Legal, a legal tech startup that helps predict legal outcomes in tax and employment law. As a lawyer and economist, he is concerned with the question, "What is the law, and what should the law be?" – Alexandra Lorch


Anthony Niblett is an Associate Professor and Canada Research Chair in Law, Economics, & Innovation at the Faculty of Law of the University of Toronto. He holds a Ph.D. in economics from Harvard as well as degrees in law and commerce from the University of Melbourne. He was a Bigelow Fellow at the University of Chicago before moving to Canada. He is also the Academic Advisor at the Future of Law Lab at the Faculty and an Affiliate Researcher with the Vector Institute for AI. In 2019, he gave a TEDx talk on machine learning and law, and he teaches as a visiting scholar at universities around the world. In addition to his academic career, he is a co-founder of Blue J, a legal tech startup that helps predict legal outcomes in tax and employment law.


How did you get from economics to "what the law is or what the law should be"?

Let me tell you a bit about my background: In Australia, it's very common to do two degrees at the same time. So I did a law degree and an economics degree, and then I thought I was going to be a lawyer. I worked in a law firm for a while before I decided to keep studying, and I went to America to do a graduate degree in economics. Partway through that degree, my advisor was getting very interested in law and economics and found out I had a law degree, so we started working on some projects together. The questions economists ask are always twofold. One is: What is the world? Very descriptive. And then there's the more normative one: What should the world be? When I think about the law, I ask in pretty much the same way: What is the law, and what should the law be? A lot of what the more traditional legal education was about is the question "Is this what the law is?" To some degree, that's still part of legal education. However, a big part of legal education should instead be: What should the law be? What should we be doing – this or that? What are the different ways of thinking about what the law is? How can the law help resolve conflicts? So I've always thought of it in those two ways from my degree in economics. I thought I had let go of the law when I started my graduate degree in economics. But because of the work with my incredible advisor, I started working on questions of law and economics. From there, I got back into law and got a job at a law school, which was not the path I had intended at all. When I think back to what I thought my life would look like, this is very different and much better.

I fully intended to be a corporate lawyer. Then I worked at a law firm and thought, "I'm just not ready for this yet." I was very young at the time. And so I just kept studying and found something else I really enjoyed. So it was good.


How did you come up with the idea of working at the interface between artificial intelligence, economics, and law?

Working on artificial intelligence in the law is something that I've only really been doing for the past five or six years. My Ph.D. thesis was on the statistical analysis of cases: turning cases into data with the goal of assessing how judges decide. Whether judges are consistent in how they decide cases, whether they bring particular biases to cases – that is what my research focused on for many, many years. I was looking at how you can statistically measure how the law changes over time. About five or six years ago, I started to look into what happens when data and computing power explode, and what happens when we do have the capability for these data-driven AI algorithms to make better decisions. That changed the way I started thinking about it. Instead of thinking about the problems with human judging, I started thinking about the opportunities for artificial intelligence.


The Faculty of Law was founded in 1887, making it one of the oldest faculties at the University of Toronto. It is home to more than 50 full-time faculty and 25 short-term visiting professors from other law schools around the world, as well as 500 undergraduate and graduate students. The Faculty is located in Toronto and its academic offerings are complemented by numerous law clinics, public interest programs, and close ties with more than 6,000 alumni.


Where did you get the motivation to think ahead? Especially since the law tends to react rather slowly.

My interest began six years ago, when there was a huge explosion of literature on big data and on applying the singularity more broadly to all kinds of far-fetched ideas. I was somewhat moving away from my research on judging anyway and was looking for something new. So, I just found myself reading these books on AI. Partly, there was just lots of new stuff out there, and it was exciting and interesting. I was discussing it with a couple of my colleagues who were very interested in these questions, too, as well as with one of my co-authors in Chicago, Tony Casey. That is how I moved away from judging and into thinking about artificial intelligence. It was a bit like the technology itself: it happened organically, yet it happened very quickly. I didn't know I was going to go into AI, but it turns out that my background in statistical analysis of law and of judging put me in a fairly good position to start thinking about some of those data-driven questions of law.


Have you been thinking of the law as an academic enterprise or the practice of law as being slow to react?

I think that makes sense when making laws, because you don't want to rush into taking steps that you're going to regret.

In terms of academics, a lot of the way we approach problems in law is by thinking about the potential harms and how we can use law to mitigate some of those harms. Thus, when people think about artificial intelligence, they wonder about the risks, the limitations, or the potential harm that could arise from it. That is why a lot of legal academics have been writing about how we can use the law to address some of the concerns about artificial intelligence, whereas I took the opposite approach. There are limitations to the way we do things right now. There are limitations to the human approach. I look instead at the opportunities that AI can provide for the law. What are the opportunities it can provide for lawyers? What can it provide for citizens or for lawmakers? This is just a different way of thinking about it. There are still limitations and challenges, but I believe in the power of technology to make a better world.


What would be the risks and, on the other hand, what would be the benefits of artificial intelligence for society?

The opportunities are to make the law better and to better understand the law. A lot of people don't know their legal rights. They don't know, when they sign a contract, what exactly is in the contract. They don't understand the background laws – the laws that govern consumer transactions or employment – and they're not going to go to a lawyer for everything. Why not start thinking about some sort of technological solution to help people understand what their legal rights are? I think there are a lot of opportunities to help people be better citizens, in the sense that they know their rights and their responsibilities. There are also a lot of opportunities for lawmakers to make better laws – laws that are more responsive and better calibrated to the needs of citizens. When I look ahead, I think there are certainly ways in which technology can reduce a lot of friction in society, and I'm not just talking about interpersonal conflicts. Just in general, friction from legal barriers around the way we behave: the way we pay our taxes, the way we invest.


Is there an upheaval in the law and a change in the traditional rule of law happening at the moment?

I think it is happening, but it's a question of to what degree. For example, we have artificial intelligence software that helps people predict what their legal situation is by using case law. It is used by tax accountants, by lawyers, and by the government. We sell our product to the government because sometimes particular auditors and particular groups within the government may have different approaches to similar legal questions. I think it is a fundamental principle of good law to be consistent, and for the government to be consistent in its approach to some of these questions – not just in tax law, but generally. Consistency is a goal we should be seeking to uphold.

I also think it is happening in terms of the upheaval. When I teach the course, for example, talking about this grand moment where machines are great and smarter than humans (and I don't know if that's going to happen in my lifetime), it's more of a theoretical exercise. In terms of what's happening in law right now, I think there are changes at the margin, and they're for the good. But in terms of a massive upheaval, we're certainly not seeing it in 2021 yet. To add one thing, when I talk about people not understanding their legal rights – there are people who, when they lose their job, don't know how much they're entitled to, and very few of them actually go to a lawyer in that situation – it would be nice to have some sort of information out there that says, "No, they're not paying you what you need or what you deserve." Or when you sign a lease for a rental agreement, it'd be great to run it through a computer that says, "No, this clause is illegal, and they're not allowed to put it in there." All sorts of those things could be enormously beneficial, because people don't know all of their legal rights. That's why I think it's incredibly important that we can use AI to help people.


Would it help if lawyers became some kind of computer scientists or at least knew how to deal with data?

It's helpful for lawyers to understand what algorithms do, where they come from, how they're built, and how to interrogate them. But I don't think the goal is to turn lawyers into computer scientists. Consider how technology helps doctors: doctors do what they do, and they use technology, like X-rays, MRIs, and all sorts of tests that chemists and physicists have developed. That doesn't mean all doctors are physicists. They have their own specialization, but they still need to know the basics of chemistry and physics. That's part of their training. If AI takes a greater role in governance and in the legal system, it's very helpful to have a background in understanding where those algorithms come from and to be able to interrogate them. It is very difficult to trust something if you don't understand it or where it's coming from.

We want to have some level of trust here. Especially if you're an advocate and some algorithm has decided against you – some decision has been based on an algorithmic assessment that is to the detriment of your client – being able to understand why the algorithm did what it did and what went wrong would be incredibly helpful.


So one of the main benefits of artificial intelligence in the law would be transparency?

People talk about AI not being transparent, but you can make it as transparent as you can. You can say how we arrived at this result: What was the algorithm doing? What data did it use? What was it optimizing? It may not be perfect, but that will give you a good sense of what the algorithm is doing. This is more transparent than how humans make their decisions. A human judge will justify a decision. She will say: "I am sentencing you to 10 years, and here's why." Those reasons may not be exactly why she came to that decision. We don't know what the black box of the human brain is actually doing. So, in some senses, AI can be far more transparent than a human judge can be. There is still the issue of how complex it is.


That is an interesting point, as the transparency argument is normally made against algorithms: you "can't see" what the AI is doing, and if the coder is biased, somehow the whole algorithm will be biased.

That is all true to the extent that different people approach problems differently; they each bring their own subjective views about things. That's not to say you can't understand where that came from – if it's the coders, you can pin down what they did. You can't do that with the human brain to the same extent when a judge or an auditor is making the decision on whether to fine you for gross negligence. It is very difficult to pin down exactly what the human brain is doing.

But it is correct that there is a black box element to AI. There are no perfect solutions to that yet, so there remain some problems with transparency. But every time I think about the risks and challenges of AI, I always ask: What are the risks and challenges of leaving it up to humans, and is that what you want to do?


Is there a limit to artificial intelligence in the law?

My optimism is being tempered daily – not so much by technological feasibility or the limits of the technology. There are some areas where, if you want to use a data-driven approach, there's just not enough data. And in certain areas of law, you wouldn't necessarily want to use a data-driven approach, as you don't want to replicate what's going on in society; maybe you want to be more aspirational about what society should be. The biggest problem is a human problem, the one we started off talking about: What should the law be? Part of the problem is that we as a society don't fully agree on what the law should be or what it should do in any given situation. An algorithm comes down with particular rules in particular circumstances, and it's maximizing along some dimension. The question is: do we, as a society, agree on what should be maximized?

The analogy I often give is a self-driving car. You get into a self-driving car and you put in the destination. The algorithm directs you the best way to get there. I used to have this view that we should just work out where we want to go as a society, put that into the self-driving law car, and then work out which laws are good for getting us to that society. The problem, however, is that we, as a society, would have to agree upon a destination. That is the biggest hurdle to AI in law – problematic, but not fatal, in the following sense. There are still opportunities, because it doesn't have to be a fully self-driving car. It's just more information for the person who ultimately makes the decision. If you want to follow a more utilitarian approach, you should do this. If you want to follow a more consistent approach, you should take that approach. If you want to maximize welfare, do this. If you want to maximize some other parameter, do that. Then leave it up to humans to decide, rather than requiring humans to set a destination upfront. So there I got all philosophical, I'm sorry.


At the end of the day, it is society that naturally sets boundaries. Will society also set the boundaries for AI in law?

Not only that. I was very optimistic six years ago, when everyone else was very skeptical about people's comfort levels with AI. I am less optimistic now, even though people's comfort level with AI has increased dramatically since then. For example, lawyers are much more comfortable using programs to help them find cases, uncover keywords in documents, predict the outcome of cases, and assess the merits of particular decisions. But the end game would be letting the computer choose our laws, and I think that is not going to happen. I had this grand vision of a utopian society where we'd get much better predictions and much better information, and therefore learn so much more. I've become less confident in that.


You also gathered experience as a co-founder. Where did the idea for Blue J Legal come from?

Together with two other law professors, I founded Blue J Legal six and a half years ago. Our CEO, Professor Benjamin Alarie, is an expert in tax law, and we started turning tax cases into code and into data. When you come with your case, the software basically compares your case to every single case that has come before and makes some kind of assessment, or prediction, based on the facts of your particular case. The first example that we did was to assess whether you were an employee or a contractor. There is no bright-line rule; it depends on a lot of different factors. However, some people, when they file their taxes, don't know whether they are employees or not. This can lead to mistakes. It turns out that you can actually use artificial intelligence to compare the facts of your case to every other case that has gone before it and come up with a prediction: it is 99 percent likely that you are an employee, and so you should file your taxes as an employee.
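To make the general idea concrete: a classifier is trained on past cases that have been coded into factual features, and it outputs a probability for a new fact pattern. Blue J's actual model is proprietary; the features, data, and choice of logistic regression in this minimal sketch are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of precedent-based prediction.
# Blue J's real model is proprietary; these features, data, and the
# choice of classifier are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Each past case is hand-coded into factual features, e.g.:
# [sets_own_hours, supplies_own_tools, multiple_clients, paid_salary]
past_cases = [
    [0, 0, 0, 1],  # classic employee fact pattern
    [1, 1, 1, 0],  # classic contractor fact pattern
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
]
outcomes = [1, 0, 1, 0, 1, 0]  # court's ruling: 1 = employee, 0 = contractor

model = LogisticRegression().fit(past_cases, outcomes)

# A new client's facts, coded the same way as the precedents
your_case = [[0, 0, 0, 1]]
p_employee = model.predict_proba(your_case)[0][1]
print(f"Probability you are an employee: {p_employee:.0%}")
```

In practice, the hard work lies in coding the cases into features and collecting enough decisions for the prediction to be reliable.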


Where do you want to go with that idea?

We've done lots of different areas of law, especially tax law. We've built it out so that you can – for particular issues – fit it in with the statute. We have sold it to the accounting firms and the law firms that work in tax, because it's just an easier way to find relevant cases and to get a sense of whether, if the matter goes to court, the courts have usually favored this side or the other – indicating how strong the opposing case is. So far, we've done it in tax law and in employment law. To return to my example from earlier, people do not know what they are entitled to when they are let go. We developed a program that compares your situation to what people were entitled to in the 5,000 cases that went before you. In the end, you can come up with a pretty good prediction about what the court would do if your case ever went to court.
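The severance example can be sketched the same way, this time as a regression: estimate the award from the most factually similar past cases. Again, the features, numbers, and the nearest-neighbours approach below are assumptions for illustration, not Blue J's actual method.

```python
# A minimal, hypothetical sketch of the severance-entitlement idea:
# predict the award from the most factually similar past cases.
from sklearn.neighbors import KNeighborsRegressor

# Each past case coded as [years_of_service, age, managerial_role]
past_cases = [
    [2, 30, 0],
    [10, 45, 1],
    [5, 38, 0],
    [20, 55, 1],
    [1, 25, 0],
]
months_awarded = [2, 12, 5, 24, 1]  # notice period the court awarded

model = KNeighborsRegressor(n_neighbors=3).fit(past_cases, months_awarded)

your_situation = [[8, 42, 1]]  # your facts, coded the same way
prediction = model.predict(your_situation)[0]
print(f"Predicted entitlement: about {prediction:.1f} months of notice")
```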


Is your target group rather the lawyers or the people in general?

We sell to tax accountants, tax lawyers, and employment lawyers in Canada, but we are also moving into tax in the United States. We also give our products away for free – to legal aid clinics that serve low-income populations and to students, so they can get a feel for how AI works in that context. So, what's the end game? Ultimately, what we're trying to do is provide clarity. There's a lot of vagueness in the law. If you have better information, you can make better predictions, which hopefully leads to a better legal system. That's the end game. A better legal system.


Could you think of other areas of the law, besides tax and employment law, where prediction could also work?

Oh, I think there are a lot! Some of the areas that I teach in are ripe for those kinds of predictions. While some will be different in the sense that there probably won't be as much case law, there are different ways that you can still make predictions. I'm sure that somebody is already doing this – maybe not in the legal tech space, but in the government space. In antitrust and competition law, the government or economic consultants could use machine learning algorithms to make predictions on questions like: "If we merge, what are the pricing predictions? Are we likely to get flagged as an anti-competitive merger?"

Another is contract law. There are a number of startups that analyze contracts already. And I think there's space for somebody to come along and – taking the example of the rental agreement again – get programs to read contracts and flag an illegal clause, one that has been struck down in the past and would not be enforced, so that it gets taken out of the lease. Also, in consumer law, when you rent a car or something else and you've got these ridiculously long contracts, it would be great to just have a high-level summary indicating whether this is a consumer-friendly contract or a company-friendly contract. Why is it so company-friendly? What's problematic? Furthermore, it could address what you, as a consumer, really value. When you know that you are going to drive a long distance, the AI might predict that for people who drive a really long distance, this is actually not a very good contract, whereas for people who drive short distances, it might be friendlier. It just gives you more information based on data, rather than you having to read that ridiculously long contract or know what the legislation says. Consumers would get a green light or a red light telling them, "You should or you shouldn't go in here." This better way of assessing contracts should help customers make choices, so that in the future they compare not only price, but also what they actually get.

Now, everything I've talked about so far was commercial. They're all commercial applications, because that's where a lot of data is and because we're pretty clear about what we want out of the applications. There are some differences in how we approach antitrust, for example, but generally we're pretty clear about what's good, what's potentially good, and what's not so good. There are areas of law where conflict is much harder to capture in data: things like freedom of expression, or conflicts that depend on changing social perceptions, like same-sex marriage. Could you have used AI to predict that? I don't know. But there are lots of areas where AI predictions have their limits.
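The clause-flagging idea lends itself to the same kind of sketch: compare each clause of a lease against clauses that courts have struck down before. The example clauses, the similarity measure, and the threshold below are illustrative assumptions; a real system would need a curated database of rulings and far richer language analysis.

```python
# A minimal, hypothetical sketch of clause flagging: match lease clauses
# against a (here, hand-made) list of clauses courts have struck down.
from difflib import SequenceMatcher

struck_down_clauses = [
    "tenant waives all rights to withhold rent for any reason",
    "landlord may enter the premises at any time without notice",
]

def flag_risky_clauses(contract_clauses, threshold=0.6):
    """Return clauses that closely resemble previously struck-down ones."""
    flagged = []
    for clause in contract_clauses:
        for precedent in struck_down_clauses:
            score = SequenceMatcher(None, clause.lower(), precedent.lower()).ratio()
            if score >= threshold:
                flagged.append((clause, precedent, score))
    return flagged

lease = [
    "Tenant shall pay rent on the first day of each month.",
    "Landlord may enter the premises at any time without notice.",
]
for clause, precedent, score in flag_risky_clauses(lease):
    print(f"Flagged ({score:.0%} similar to a struck-down clause): {clause}")
```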


Looking at other jurisdictions, such as Germany, could Blue J Legal work in the same way in civil law jurisdictions?

I mean, the thing about law is, even more than in other fields like medicine, it's very jurisdiction-specific. To apply the algorithm to tax, contract, or other areas in Germany, you would need German lawyers who understand German law – some sort of jurisdiction-specific expert input. The Blue J Legal algorithm is very much about using case law at the moment. However, that doesn't necessarily turn on precedent; precedent makes it helpful, but it isn't essential. I'm sure it's the same in Germany, where legal decisions are made all the time. Even if they don't have the same precedential value, you can still use that data, and it's still valuable. So, similar to predicting how people will behave, you can learn something about how a judge decides cases. In the end, it's the volume of cases that is important.


What do you think about our German legal tech industry?

I know that Toronto is a huge hub for legal tech. My knowledge of legal tech outside of Toronto, and of legal tech companies outside of North America, is pretty slim. Yet it seems like the same types of issues and the same types of legal problems are being solved by German legal tech companies. Part of the problem is that everything is so jurisdiction-specific at the moment: you need a Canadian version and you need a German version, rather than having one expert who is actually able to leverage what they've learned in one jurisdiction and use it in another. We're moving into the United States in tax, and we hired a whole bunch of U.S. tax experts because we needed that expert input from the U.S. jurisdiction. What we do at Blue J Legal is very jurisdiction-specific, because the algorithm helps you understand the legal situation and thereby provides clarity as to what the law is – and that itself is a very jurisdiction-specific question.


Can there be something like a global approach to it?

I think there can be, but it still needs experts on the ground. Perhaps trade law or OECD transfer pricing are global issues for which global experts might be suitable. However, issues such as document disclosure or the ability to highlight documents are not jurisdiction-specific but rather language-specific.


Do you think that universities are changing and that legal education is developing in a way that ensures future lawyers know about legal tech and artificial intelligence?

Yes! I think every law school is worried that it's going to be left behind. There are some law schools that are trying to put AI into every class: AI in the context of tort law, obligations, contracts, or property law. We have a whole stream of courses on innovation and legal technology. At the University of Toronto, we have the Future of Law Lab, bringing in speakers who are working in legal tech, holding conferences about the hot issues in tech, and just generally staying very cognizant of these developments. There's an enormous thirst for learning more about law and technology. It is driven not just by the students, but also by law firms and people in the government. I think there's a push, an enormous push, certainly in Canada and America, towards putting on courses that meet that thirst.


What do you hope to achieve through your work in legal tech for society and also for yourself?

That's a huge question. For society, I would like to think that some of the work I do is going to make a difference. I would like to think that we have come up with ways of getting better information, helping with better predictions, and just improving the legal system. There are so many issues of access, issues of vagueness, and issues of not being able to understand the law. There are solutions out there that can help improve legal outcomes for people. To the extent that I'm a small part, the tiniest part, in making that happen, that would be good.


Do you have any advice for those interested in legal tech?

I do. Follow what's interesting, do what's interesting, and ask yourself whether you're making the world a better place. And that's the key.


Interviewer

This interview was held by Alexandra Lorch. Alexandra is a law student in her final semester at the Albert-Ludwigs-University in Freiburg, with stays at Tsinghua University in Beijing and Keio University Law School in Tokyo.