AI Efficiency Promises Excite Legal, Tax, and Audit Professionals

2024-07-16 14:46

Buchalter has announced the addition of Wendy Lee as a shareholder and the establishment of a new fintech and AI practice group, which Lee will chair. Lee previously served as chief legal officer at a fintech company exploring the use of AI in the mortgage servicing industry. The move reflects Buchalter's commitment to staying at the forefront of legal developments in the fintech and AI sectors, and the firm aims to provide comprehensive legal services to clients operating in these rapidly evolving industries [175084e7].

In a similar move, Travers Smith LLP has spun out a new artificial intelligence business from its legal technology arm. The aim of this spin-out is to create a single system for lawyers, streamlining their processes and enhancing efficiency. The announcement was made on May 22, 2024, at 1:49 PM BST [d6fb4c7f].

LawPro.ai, a startup that provides automation software for legal tasks, has announced the completion of a seed investment round for product and marketing growth. The investment round was backed by LegalTech Fund and Scopus Ventures. With this funding, LawPro.ai aims to further develop its automation software and expand its marketing efforts. The announcement was made on June 13, 2024, at 2:21 PM EDT [91b06835].

Meanwhile, Raghu Ramanathan, president of the Legal Professionals segment at Thomson Reuters, has called for open benchmarking of legal AI products. Ramanathan believes there should be well-funded, well-conducted research comparing the performance of AI legal assistants. The call comes after a study by researchers at Stanford University found that Thomson Reuters' AI legal research product delivered hallucinated results nearly a third of the time. Ramanathan expressed surprise at the study's results, saying they do not align with the company's internal testing or customer feedback, and welcomed further research to provide a more comprehensive understanding of AI performance in the legal industry. He also discussed his background, his priorities as president, and his views on the market [220b8327].

A new study by researchers at Stanford and Yale universities has found that AI legal research tools are unreliable and prone to 'hallucinations'. The study, titled 'Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools', examined the AI tools lawyers use for legal research and found them prone to generating false information. Testing popular tools from LexisNexis and Thomson Reuters, the researchers found error rates of 17% to 33%. Hallucinations occur when a tool cites non-existent legal rules or misinterprets legal precedents. The authors called on lawyers to supervise and verify AI-generated output and urged AI tool providers to be honest about the accuracy of their products [4f52b45b].
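
To make the study's error-rate figures concrete, here is a minimal sketch of how a benchmark of this kind could tally a tool's hallucination rate from hand-labeled responses, with a simple 95% confidence interval. The `LabeledResponse` type and the sample data are illustrative assumptions, not the study's actual method or data.

```python
from dataclasses import dataclass
import math

@dataclass
class LabeledResponse:
    query_id: str
    hallucinated: bool  # True if reviewers found a fabricated rule or a misread precedent

def hallucination_rate(responses: list[LabeledResponse]) -> tuple[float, float, float]:
    """Return (rate, ci_low, ci_high) using a 95% normal-approximation interval."""
    n = len(responses)
    k = sum(r.hallucinated for r in responses)
    p = k / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Illustrative run: 200 labeled queries with every fourth response flagged
# gives a 25% rate, squarely inside the study's reported 17-33% band.
sample = [LabeledResponse(f"q{i}", i % 4 == 0) for i in range(200)]
print(hallucination_rate(sample))  # ~ (0.25, 0.19, 0.31)
```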

According to an opinion piece by Siddharth Pai in Mint, generative artificial intelligence (GenAI) output is essentially a hallucination. Large language models (LLMs) like OpenAI's and Google's leverage vast amounts of data to understand and generate human language, but the decision to use a particular word in a GenAI sentence is rooted in the mathematics of regression analysis, which predicts patterns from the movement of variables. Because GenAI output rests on probabilities and guesses, it can contain inaccuracies and mistakes. While researchers and AI companies are working to reduce errors, the probabilistic foundations of these models mean there will always be a chance of blunders. GenAI output, Pai argues, should be understood as a hallucination: it is dreamt up by machines rather than drawn from encyclopedic knowledge [a6208cd1].
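
The mechanism Pai describes can be illustrated with a toy next-token sampler. This is not how any production LLM is implemented; it is a sketch under the assumption that, once probabilities are fixed, generation is weighted guessing, so a fluent-but-wrong continuation is always possible.

```python
import random

# Toy stand-in for an LLM's next-token distribution. A real model computes
# these probabilities from context with a neural network; the point here is
# only that each word is a weighted guess, not a lookup of verified fact.
NEXT_TOKEN_PROBS = {
    "the court": {"held": 0.60, "ruled": 0.35, "sang": 0.05},
}

def sample_next(context: str) -> str:
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most draws continue plausibly, but about 1 in 20 emits "sang" -- produced
# by exactly the same sampling step, with nothing marking it as wrong.
print([sample_next("the court") for _ in range(20)])
```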

A recent article on Retraction Watch discusses the impact of AI on scientific research and the threat of fake papers. It highlights the Swiss National Science Foundation's detection of plagiarism in three books and the sanctions that followed, and mentions a study finding that the share of promotional language in a grant proposal is associated with the grant's probability of being funded, its estimated innovativeness, and its predicted citation impact. Another study suggests that research on retracted literature requires querying more than one source. The article also covers the potential costs of the 2022 Nelson Memo on open access, the future of scientific publishing, and the role of scientific publishers in Open Science, and points readers to the Retraction Watch Database, the Retraction Watch Hijacked Journal Checker, and the leaderboard of authors with the most retractions [58214234].

Australia's top-tier law firms are adopting AI, but the end goal is uncertain. Integrating AI is seen as a way to increase efficiency, generate more client billings, and keep pace with clients who are themselves adopting AI. There is skepticism, however, with concern that AI could end up creating more work while driving down prices for services. Using generative AI for legal advice has been compared to consulting 'Dr. Google' for medical advice, and AI adoption by law firms remains a nascent, still-evolving practice [78b5e6a7].

A survey by Bain & Company found that lawyers are the least satisfied with generative AI results among corporate work groups. Legal executives expressed the greatest dissatisfaction with software tools, with an 18% decline over five months in the number of lawyers saying GenAI had 'met or exceeded expectations'. Reasons for disappointment included GenAI's inability to perform tasks at a sufficient level, poor-quality output, a lack of understanding of how to use the tools, and low vendor and product quality. The survey also found that more than 60% of the businesses surveyed ranked GenAI among their top three priorities into 2025. The results are based on a sample of senior executives at 200 companies, polled in October 2023 and February 2024 [9ffad35e].

Legal professionals face challenges in adopting generative artificial intelligence (Gen AI) stemming from technological barriers, ethical concerns, and evolving legal frameworks and regulations. Gen AI has the potential to automate document review, legal research, and contract drafting, but law firms must address technological hurdles and ensure ethical integration. Practical steps include creating learning and development programs, combining human supervision with Gen AI output, and establishing feedback mechanisms to improve the tools' performance. Ethical considerations include addressing bias in AI-generated content, ensuring transparency in decision-making, and establishing accountability for Gen AI outcomes; legal professionals should also stay current on regulatory developments and collaborate with Gen AI developers to ensure compliance. The adoption of Gen AI reflects a broader industry shift toward digitalization, and law firms need to build technical literacy, foster a culture of continual learning, and remain agile enough to pivot as the technology advances, creating opportunities for interdisciplinary collaboration and participating actively in discussions of ethical considerations. By addressing open questions around innovation, bias, legal frameworks, workforce impact, and client relationships, the legal industry can help shape Gen AI's future and implement the technology successfully and ethically.
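
One of the concrete measures above, pairing human supervision with a feedback loop, can be sketched in a few lines. The queue, field names, and workflow below are hypothetical illustrations of the idea, not a reference to any firm's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    doc_id: str
    text: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def human_review(draft: Draft, reviewer_ok: bool, note: str) -> Draft:
    """Gate: no AI-generated draft is released without explicit human sign-off."""
    draft.reviewer_notes.append(note)  # notes double as feedback for evaluating the tool
    draft.approved = reviewer_ok
    return draft

# A reviewer catches a fabricated citation; the note is retained, giving the
# firm running data on how often, and in what ways, the tool fails.
d = human_review(Draft("contract-001", "<AI-drafted clause>"),
                 reviewer_ok=False,
                 note="Cited clause 12.3 does not exist; redraft required.")
assert not d.approved
```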

According to Thomson Reuters' 2024 report, AI could save legal, tax, and risk and compliance professionals 12 hours per week within the next five years, which the report equates to adding an extra colleague for every 10 team members and which could translate to $100,000 in additional billable hours for a US law firm. However, professionals in the legal and tax sectors feel that allowing AI to represent clients in court or make final decisions would be a step too far. Most respondents would prefer to see a certification process for AI systems, or independent bodies creating standards for use, to ensure responsible AI [22340ca4].
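
As a back-of-the-envelope check on the report's headline figures, the sketch below converts hours saved into billable value. The billable rate and weeks worked per year are assumptions chosen for illustration; they are not numbers from the Thomson Reuters report.

```python
HOURS_SAVED_PER_WEEK = 12      # the report's five-year projection
WORK_WEEKS_PER_YEAR = 48       # assumed
BILLABLE_RATE_USD = 175        # assumed average realized rate

annual_hours = HOURS_SAVED_PER_WEEK * WORK_WEEKS_PER_YEAR  # 576 hours/year
print(f"Hours recovered per professional: {annual_hours}")
print(f"Value if fully billed: ${annual_hours * BILLABLE_RATE_USD:,}")  # $100,800

# Across a 10-person team the same saving is 120 hours/week -- roughly three
# 40-hour workloads -- so 'an extra colleague per 10' reads as conservative.
```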

Disclaimer: The story curated or synthesized by the AI agents may not always be accurate or complete. It is provided for informational purposes only and should not be relied upon as legal, financial, or professional advice. Please use your own discretion.