
Recently, Alan Freeman wrote about the use of artificial intelligence in third party funding of litigation in his article "Intelligent Funding: Could AI Drive the Future of Litigation Finance?" Litigation funding, also known as third party funding, provides financing to plaintiffs and law firms so they can pursue their claims, in return for a share of the recovery.

For a court to approve a third party funding agreement, the party must show that (a) the agreement is necessary to provide access to justice, (b) access to justice is facilitated by the agreement in a meaningful way, (c) the agreement is fair and reasonable, enabling access to justice while protecting the interests of the defendants, (d) the third party funder is not over-compensated, and (e) the third party funder does not interfere with the solicitor-client relationship, including the duty of loyalty. Typically, in class action lawsuits, the third party funder takes about 10% or less of the recovery (Houle v St. Jude Medical Inc., 2018 ONSC 6352 at paras 34, 63-64).

By applying artificial intelligence to thousands of past cases, third party funders may be better able to determine which cases to "bet" on. Freeman writes that by using artificial intelligence programs, like Blue J Legal, third party funders may be able to determine the likely outcome of a case. He further quotes Professor Alarie (also a co-founder of Blue J Legal) as predicting that the use of such programs may become commonplace among third party funders.
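To make the idea concrete, here is a minimal sketch of case-scoring in Python, assuming a funder has reduced its historical cases to numeric features and win/loss labels. The features, synthetic data, and model choice are all hypothetical illustrations on my part; nothing here reflects how Blue J Legal or any actual funder works.

```python
# Hypothetical sketch: ranking prospective cases by predicted win probability.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented features for 1,000 past cases: log of claim size, count of
# favourable precedents found, and the judge's historical plaintiff win rate.
n_cases = 1000
X = np.column_stack([
    rng.normal(6, 1, n_cases),       # log10 of claim size
    rng.integers(0, 10, n_cases),    # favourable precedents found
    rng.uniform(0.2, 0.8, n_cases),  # judge's plaintiff win rate
])
# Synthetic "plaintiff won" labels, loosely correlated with the features.
logits = 0.3 * X[:, 1] + 3.0 * X[:, 2] - 2.0
y = rng.random(n_cases) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# A funder would rank new cases by predicted win probability and fund
# only those above some threshold.
new_case = np.array([[6.5, 7, 0.6]])
print(f"Predicted probability of success: {model.predict_proba(new_case)[0, 1]:.2f}")
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, a funder would care less about raw accuracy than about how well calibrated the predicted probabilities are, since the expected return on a funding agreement turns on whether those probabilities can be trusted.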

I too expect predictive programs to become more prevalent in the law. However, as long as humans are the judges, artificial intelligence programs will have their limitations in predicting the outcome of cases. Many factors beyond precedent influence a decision: the evidence that is admitted and how witnesses are perceived also play a major role in the outcome of a case.

Additionally, artificial intelligence programs can make mistakes. In the New Yorker article "The Hidden Costs of Automated Thinking," Jonathan Zittrain writes that machine learning systems (a subset of artificial intelligence) can be tricked into making inaccurate judgments: "Seduced by the predictive power of such systems, we may stand down the human judges whom they promise to replace. But they will remain susceptible to hijacking—and we will have no easy process for validating the answers they continue to produce."
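The "hijacking" Zittrain describes corresponds to what machine learning researchers call adversarial examples. As a hedged illustration (synthetic data, a plain linear classifier, and a gradient-sign perturbation in the spirit of the fast gradient sign method; none of this comes from Zittrain's article), the sketch below flips a model's prediction by nudging every input feature a small amount:

```python
# Hypothetical sketch: a gradient-sign adversarial perturbation against
# a linear classifier trained on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# 100 weak features, each only slightly correlated with the class label.
n, d = 400, 100
y = np.repeat([0, 1], n // 2)
X = rng.normal(0, 1, (n, d)) + 0.2 * np.where(y[:, None] == 1, 1, -1)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Take one point and compute the smallest uniform per-feature nudge that
# pushes its score across the decision boundary.
x = X[0]
w, b = model.coef_[0], model.intercept_[0]
score = w @ x + b
epsilon = 1.1 * abs(score) / np.abs(w).sum()       # just past the boundary
x_adv = x - np.sign(score) * epsilon * np.sign(w)  # gradient-sign perturbation

print(f"per-feature nudge: {epsilon:.3f} (features have standard deviation 1)")
print("prediction before:", model.predict([x])[0],
      " after:", model.predict([x_adv])[0])
```

Each individual feature barely moves, yet the effect accumulates across all of them until the score crosses the decision boundary, which is why such manipulations are hard to spot by inspecting any single input.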

(This article was originally posted on slaw.ca. Views are my own and do not reflect the views of any organization.)