The legal landscape is undergoing significant changes as artificial intelligence (AI) becomes increasingly integrated into legal, tax, and audit professions. Buchalter, a law firm based in Seattle, has recently appointed Wendy Lee as a shareholder and established a new fintech and AI practice group. Lee, who previously served as the chief legal officer at a fintech company exploring AI in mortgage servicing, will chair this new group, reflecting Buchalter's commitment to staying at the forefront of legal developments in these sectors [175084e7].
In a similar vein, Travers Smith LLP has spun out a new artificial intelligence business from its legal technology arm, with the aim of building a streamlined system that helps lawyers work more efficiently. The move was announced on May 22, 2024, and is part of a broader trend of law firms adopting AI technologies [d6fb4c7f].
LawPro.ai, a startup specializing in automation software for legal tasks, has completed a seed investment round backed by LegalTech Fund and Scopus Ventures. This funding will support the development of its automation software and marketing efforts, highlighting the growing interest in AI solutions within the legal sector [91b06835].
However, the integration of AI into legal practice is not without its challenges. Raghu Ramanathan, president of the Legal Professionals segment at Thomson Reuters, has called for open benchmarking of legal AI products following a Stanford University study that found Thomson Reuters' AI legal research tool delivered hallucinated results nearly a third of the time. Ramanathan expressed surprise at these findings, which he believes do not align with the company's internal testing or customer feedback [220b8327].
A recent study by Stanford and Yale universities has raised concerns about the reliability of AI legal research tools, revealing that they are prone to generating false information. The study tested popular tools from LexisNexis and Thomson Reuters, finding that they made mistakes 17% to 33% of the time, leading to calls for lawyers to supervise AI-generated outputs [4f52b45b].
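To make the kind of open benchmarking Ramanathan calls for concrete, the sketch below shows one minimal way to score a tool's hallucination rate from human-labeled outputs. It is an illustration only: the `EvaluatedResponse` fields, the `is_hallucination` labels, and the sample data are assumptions, not part of either company's evaluation or the Stanford/Yale methodology.

```python
from dataclasses import dataclass

@dataclass
class EvaluatedResponse:
    """One benchmark item: a legal research query, the tool's answer,
    and a human reviewer's judgment of that answer."""
    query: str
    answer: str
    is_hallucination: bool  # True if the answer cites nonexistent or misstated authority

def hallucination_rate(results: list[EvaluatedResponse]) -> float:
    """Fraction of answers judged hallucinated, e.g. 0.33 -> 33%."""
    if not results:
        raise ValueError("no evaluated responses")
    return sum(r.is_hallucination for r in results) / len(results)

# Hypothetical labeled outputs; a real benchmark would use hundreds of
# queries reviewed by lawyers, as in the Stanford/Yale study.
tool_a = [
    EvaluatedResponse("q1", "a1", False),
    EvaluatedResponse("q2", "a2", True),
    EvaluatedResponse("q3", "a3", False),
]
print(f"Tool A hallucination rate: {hallucination_rate(tool_a):.0%}")
```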
In a related analysis, Siddharth Pai emphasized that generative AI output is essentially a 'hallucination', as it relies on probabilistic analyses rather than encyclopedic knowledge. This inherent nature of AI means there will always be a risk of inaccuracies, highlighting the need for caution in its application [a6208cd1].
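Pai's point can be illustrated with a toy sketch: a language model assigns probabilities to candidate next tokens and samples from that distribution, so a fluent but false continuation is always possible. The vocabulary, case names, and probabilities below are invented for illustration and do not describe any particular product.

```python
import random

# Toy next-token distribution a model might produce after the prompt
# "The controlling case on this point is ..." -- all values are invented.
next_token_probs = {
    "Smith v. Jones (1990)": 0.40,  # the citation the lawyer actually wants
    "Brown v. Board (1954)": 0.35,  # real case, but off-point here
    "Doe v. Acme (2011)":    0.25,  # plausible-sounding but nonexistent
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Even when the most likely continuation is correct, repeated sampling
# will sometimes surface the fabricated citation.
for _ in range(5):
    print(sample_token(next_token_probs))
```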
The challenges of accessing legal help have also come to the forefront, particularly after the FTC fined DoNotPay for falsely advertising an AI 'robot lawyer'. Many individuals lack access to affordable legal assistance, leading them to seek out dubious AI solutions. This access-to-justice crisis leaves many people without representation and further erodes trust in legal institutions as they face complex legal issues alone. AI chatbots, while potentially helpful, may provide incorrect or harmful advice without accountability, prompting calls for regulators to hold AI legal services responsible [1aa80ad0].
Despite the potential benefits of AI, a survey conducted by Bain Research found that lawyers are the least satisfied with generative AI results among corporate work groups. Legal executives reported significant dissatisfaction with the software tools, citing poor-quality output and a lack of understanding of how to use the tools effectively. Still, more than 60% of businesses surveyed ranked generative AI among their top three priorities for 2025, indicating strong interest in its potential despite the challenges [9ffad35e].
As the legal industry grapples with these changes, professionals are urged to address technological barriers, ethical concerns, and evolving legal frameworks. The adoption of generative AI reflects a broader trend toward digitalization in the legal sector, requiring law firms to foster a culture of continuous learning and adaptability [78b5e6a7].
According to Thomson Reuters' 2024 report, AI could save legal, tax, and risk and compliance professionals 12 hours per week within the next five years, the equivalent of adding an extra colleague for every 10 team members. However, many professionals remain cautious about allowing AI to represent clients in court or make final decisions, and advocate a certification process for AI systems to ensure responsible use [22340ca4].