I have written many blog posts, articles, legal briefs, memoranda, and even a law review article. I thought I had tackled it all (outside of a novel or two) until I was asked to contribute to a whole new type of writing: an academic research paper.
It was like nothing I had ever done before.
To celebrate the paper's availability, keep reading to see what I learned from the experience and what you can learn from the finished piece.
First, this paper had eleven authors! I am used to either writing my own content or occasionally partnering with one other person. Can you imagine trying to get eleven people to agree on anything? But to my surprise and enjoyment, it was one of the most collaborative experiences I have ever had. We threw out ideas left and right, provided suggestions to one another, and often discussed topics asynchronously through Slack or live on calls when things got complex.
Second, academic papers are really a different animal when it comes to writing style. The language is very distinct from a legal brief or an entertaining article. At times, it is a bit dry (sorry, academia)—but at the same time very informative and factual. On this paper, nine of the authors were data scientists (I am going to name them here since they deserve the credit: Roshanak Omrani, Eugene Yang, Evan Curtin, Lenora Gray, Jeremy Pickens, Nathan Reff, Sean Underwood, David Lewis, and Aron Ahmadia) and two were attorneys (Tara Emory and me).
As you can imagine, the attorneys spent a lot of time editing to make sure the legal industry jargon was captured accurately, and the data scientists focused more on the experiments driving the paper and our conclusions. This dynamic made for a good balance; Tara and I would often propose language that was more easily recognized in the world of e-discovery to counter some of the more technical language that might only be understood by someone with a degree in applied science or mathematics.
Third, when it was time to edit, this exercise was like proofreading on steroids. I know lawyers like to wordsmith, but data scientists take this to a whole new level. They reviewed all the calculations over and over and over again, because—understandably!—they wanted to ensure everything was correct, beyond the slightest doubt. Even when we said “pens down” so that Tara and I could look at the paper in its near-finished state, they couldn’t help themselves: more reviewing and editing took place until we finally took a harder stance. I think if we didn’t, they would be editing to this day.
They were just so passionate about getting it all exactly right. The remarkably informative result is a credit to their dedication.
Fourth, the submission process for an academic paper is very different from that of a typical legal piece. For example, when I am writing an article or blog post, I already know it is going to be published and I am working toward a deadline. For a piece in a legal publication, you typically submit a short description of what you plan to write about, and the publication then tells you whether it is interested, so you can submit your idea somewhere else if it declines.
However, in this case, we wrote the paper without knowing whether it would be published at all. We chose a “conference” to submit it to, which included a deadline, but we won’t know for months whether it has been accepted. If it’s not, then it won’t be published by that conference. At that time, we can submit it to a new conference and hope for a better outcome—but that means it could be even longer before it is officially “published.” We also can’t share with the world which conference we submitted it to, in case it isn’t accepted. But we can share it with all of you now.
So, if you are interested in learning about our experiments with generative AI and how this technology can transform document review, your wait is over. Even if our wait to be published by a third party isn’t.
At Relativity, we recognize that generative AI comes with both opportunity and risk. We are committed to testing Relativity aiR products on real matters to ensure they deliver defensible and trustworthy results before we make them available to our users.
This academic paper outlines our disciplined approach to AI, detailing Relativity's and Redgrave Data's rigorous evaluation of early versions of Relativity aiR for Review on a real legal matter.
Read the paper to learn:
- How aiR for Review conducts defensible first-pass responsiveness review
- How attorneys, with no special training in large language models, can leverage aiR for Review to assist and automate their review processes
- The outcome of the evaluation, including the number of responsive documents found by aiR for Review
I hope you enjoy reading it as much as I enjoyed collaborating on it (even though my part was minor compared to the data scientists’). Please reach out to me if you want to discuss—I can also connect you with the brains behind the work if you want to do a deeper dive into the methodology and the statistical findings.
I can't say that I am champing at the bit to contribute to another academic research paper, but I have learned a lot from this experience, and it has made me a better writer. I think I will stick to blog posts for now though!
Cristin Traylor is the senior director of AI transformation and law firm strategy at Relativity, where she focuses on the legal technology needs of law firms. She previously served as discovery counsel at McGuireWoods LLP, where she oversaw a multi-faceted team of legal professionals providing experienced discovery assistance and strategic advice to firm clients, including white collar crime matters. Cristin currently serves as Assistant Chapter Director of Richmond Women in e-Discovery and Project Trustee of the EDRM Privilege Log Protocol. She is an active member of Sedona Conference Working Group 1 and holds the Relativity Master certification.