By Fred Jennings


There has been much talk lately of using data to refine the criminal justice system. At first glance, this carries fantastic promise — accurate, consistent prediction of case results could hasten fair plea deals and relieve overburdened courts; automated analysis of police activity could reveal patterns of bias or prejudice; predictive casework task creation could streamline workload distribution in public defenders’ offices.

These ideas and others seem to be a recent craze among “legal tech” or “legal hacker” groups and startups, which tout analytic solutions as panaceas for a lawyer’s, client’s, or judge’s every woe.

But on closer inspection, these proposals could do more harm than the ills they seek to cure. Unless careful steps are taken and major changes made, they may never offer anything beyond data-driven injustice.

Two high-level problems plague this optimistic view of the intersection of criminal law and data science.

First, math can lie, and proponents of legal tech too often ignore this fact. Algorithms create black boxes whose results do not necessarily merit the veneer of objectivity they carry. Without transparency and careful evaluation of the underlying data and methods, unjust results are guaranteed.

The statistics, data, and arithmetic beneath these services are only as objective as their design and training permit. It is well known that biases, conscious or unconscious, drastically affect neural networks and other data-analysis methods. Even with a diverse and aware team of developers, uneven representation in training data can create racial bias. And even where race or other sensitive attributes are not directly included, their influence through correlated “hidden variables” can be just as severe.
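The “hidden variable” problem can be illustrated with a small sketch. The following is a hypothetical, deliberately simplified example using synthetic data: the model is never shown group membership, only a zip code (the zip codes, group labels, and rates here are all invented for illustration). Because zip code correlates with group, and the historical arrest labels reflect biased enforcement, the model’s risk scores still diverge by group.

```python
# Hypothetical illustration: a protected attribute ("group") is never fed
# to the model, but a correlated proxy (zip code) carries the same signal.
# All data below is synthetic; no real rates or places are implied.
import random

random.seed(0)

def make_record():
    group = random.choice(["A", "B"])
    # Residential segregation: group predicts zip code 90% of the time.
    if random.random() < 0.9:
        zip_code = "10001" if group == "A" else "10002"
    else:
        zip_code = "10002" if group == "A" else "10001"
    # Biased historical labels: group B was arrested far more often.
    arrested = random.random() < (0.2 if group == "A" else 0.6)
    return group, zip_code, arrested

data = [make_record() for _ in range(5000)]

# "Train" a model that sees only the zip code: its predicted risk is the
# historical arrest rate observed in that zip.
rates = {}
for _, z, y in data:
    n, k = rates.get(z, (0, 0))
    rates[z] = (n + 1, k + y)
risk = {z: k / n for z, (n, k) in rates.items()}

def avg(g):
    # Mean predicted risk for members of group g (never seen by the model).
    scores = [risk[z] for grp, z, _ in data if grp == g]
    return sum(scores) / len(scores)

print(f"mean predicted risk, group A: {avg('A'):.2f}")
print(f"mean predicted risk, group B: {avg('B'):.2f}")
```

Even though race never appears as an input, the gap in historical labels flows straight through the proxy, so the “race-blind” model reproduces the original bias.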

Second, misleading or flawed results carry a real human cost. Unlike badly targeted advertising or personalized news content, bad data applied in any criminal justice context can have devastating consequences. Criminal justice is highly individualized and difficult to generalize for good reason: the individuals, facts, and legal issues can vary greatly from case to case. Given that fallibility, any error in prediction can create serious injustice, and already has in some jurisdictions that have applied these methods.

The Federal Trade Commission has already voiced strong warnings about the inaccuracy of big data in a credit-reporting context. It remains hard to imagine these methods carrying sufficient reliability, accuracy, and fairness to bring into the criminal justice system.

But none of this has stopped, or even slowed, justice-system actors such as the NYPD and LAPD from adopting these flawed and unreliable methods and applications.

If there is a silver lining, it is that prominent voices, including the White House, have spoken on the need for greater transparency and accountability of data analytic systems used in the criminal justice system. A critical, evaluative approach is necessary if data analysis and algorithmic methods are to someday improve our ability to secure just results.


Frederic Jennings has been an associate at Tor Ekeland, P.C. since 2014. He litigates in federal and state courts and handles transactional matters.