A2JC Data and Legal Technology Committee hosts Conversation on ChatGPT and Similar AI Tools
ChatGPT and Similar AI Tools:
A Conversation with Professor Colin Starger, Director of the Legal Data and Design Clinic at UB Law
About the A2JC Data & Legal Technology Committee. The Committee, which advocates for increased data access and for laws that enhance civil justice data collection, is actively engaged in curating, visualizing and analyzing civil justice data through a number of initiatives, including building first-of-their-kind data tools like the Civil Justice Data Dashboard, the Civil Justice for All Story Map and the Housing Data Dashboard. The Committee also aims to understand technology barriers to justice, keep abreast of legal technology solutions to access-to-justice problems, and advocate for greater government transparency.
A2JC’s Data & Legal Technology Committee held a timely informational chat session on April 12, 2023. The event, entitled “ChatGPT and Similar Tools: New Possibilities and Pitfalls for Legal Services,” was led by Professor Colin Starger, founder and Director of UB Law’s Legal Data & Design Clinic. Before entering the legal profession, Starger, who is also currently the Associate Dean for Academic Affairs at UB Law, worked as a computer programmer for many years. “When I shifted into law, I began using principles of computer programming in order to build visualizations for the academic articles I was writing,” Starger said, “and I quickly found that there was a great need for data analysis in legal policy circles.” Recognizing the unique relationship between data and legal policy, Starger founded the Legal Data and Design Clinic (LDDC) at UB Law.
During the informational chat session, Starger discussed, among other things, the background and basics of ChatGPT and similar AI tools. Attendees were also able to ask questions and learn about the potential next steps attentive lawyers may want to consider. We caught up with Professor Starger after the event. Below are a few takeaways on ChatGPT and the possibilities and pitfalls of similar AI tools in the legal services space.
Data, Technology and Legal Services. Data and technology have been important tools in transforming the legal field and the legal services space. Until recently, data and technology tools in the legal space mostly dealt with finding ways to effectively use and present aggregated data, particularly to inform policy and, in some instances, litigation. More recently, with the rise of artificial intelligence (AI) chatbots like ChatGPT, legal service professionals and justice tech advocates have started to consider how these new data and technology tools can be used to create efficiencies that help solve real-world justice problems.
Machine Learning and Large Language Models/Chatbots. Among other things, Starger noted that while ChatGPT is often described as a generative artificial intelligence tool, “these types of technologies are really known as large language models and they are based on human intelligence, which is to say that they are machine learning algorithms that are trained on human thoughts and ideas that have been reduced to text.” For that reason, Starger noted, the type of information used to train these large language models can be a potential pitfall. “If they’re trained on bad data . . . filled with racist or classist assumptions, it’s just going to replicate that,” Starger said.
Artificial Intelligence and Human Intelligence. In describing how “large language models” work, Starger pointed to the words of legal philosopher Ronald Dworkin, who suggested that, by relying on the traditions of common law and precedent, judges can be viewed as chain-novel authors, following the law and precedent that came before them and then taking that material forward to deal with the new issues set before them. “What we see in doctrine and what we see in the law is [human] intelligence,” Starger said. “The novel gets passed on to another person who has to write the next chapter in a way that’s coherent with what came before, while also taking the story in a new direction. . . . [L]arge language models kind of do the same thing, but we should understand that they have real limitations, and those limitations are based on the fact that they do not [have any of the] things that make us human in a meaningful way.” What they do have, Starger explained, is “an incredible capacity for looking at patterns in language.”
Pattern Recognition, Early Uses and Potential Pitfalls. “What large language models are very good at doing are things that involve parsing huge amounts of text, finding essential patterns and applying essential rules. In legal contexts where similar cases may fall into [one of a few] buckets, it can be extremely efficient; it can be really good at figuring out which bucket a case fits into.” On the other hand, Starger noted that “if you have a genuinely new case or [topic], a large language model will automatically try to put it into a box based on past categories, [because] it doesn’t have true creativity or a true ability to see when something is new or think outside of the box.”
Among other things, Starger warned against deferring or relying too much on large language chatbots. “[Chatbots] don’t have judgment or compassion or any human values,” Starger noted. For those looking to incorporate the technology safely into their legal service work, Starger emphasized the importance of ensuring some type of human oversight, particularly in instances where someone might need to exercise legal judgment or balance competing values. Starger also drew a distinction between using the tool to summarize or filter large amounts of text and data, and using the tool to generate written legal arguments without fact-checking and oversight. “It’s just not a good idea [because] the bottom line is that [these models] have been known to make mistakes and make things up because they don’t have a sense of truth,” Starger said. “We’ve always had libraries, we’ve always had texts that contain useful information, and we’ve always had to have a critical view towards what’s in [those texts]. It’s important to recognize that [large language chatbots] are just another mode of text production, and they require us to tap our critical engagement skills.”
The Committee meets at 1 p.m. on the second Tuesday of every even-numbered month and will be hosting a different speaker related to legal technology at each meeting. That portion of the Committee’s meeting is open to the public and all are welcome to attend!