As AI continues to transform many industries[1], including legal services, experts widely predict exponential growth in AI as a paramount technology for bringing new tools and features that improve legal services and access to justice. Already, many aspects of the estimated $786B[2] market for legal services are being digitised, automated and AI-enabled, whether in litigation discovery (e.g. Relativity), divorce (e.g. Hello Divorce), dispute resolution (e.g. DoNotPay) or contract management (e.g. Ironclad).
As with many disruptive technologies, many experts believe that AI will significantly disrupt (rather than merely extend) the legal market:
“AI will impact the availability of legal sector jobs, the business models of many law firms, and how in-house counsel leverage technology. According to Deloitte, about 100,000 legal sector jobs are likely to be automated in the next twenty years. Deloitte claims 39% of legal jobs can be automated; McKinsey estimates that 23% of a lawyer’s job could be automated. Some estimates suggest that adopting all legal technology (including AI) already available now would reduce lawyers’ hours by 13%”[3]
The real impact will be more nuanced over the long term: whilst AI will eliminate certain tasks and some legal jobs, it will also augment and extend the way legal services are provided and consumed. In doing so, it will drive new ways of working and operating for both established firms and new entrants, who will need to invest in new capabilities and skills to support the opening up of new markets, new business models and new service innovations. In the past few decades, we have seen the impact of emerging and disruptive technologies on established players across many sectors, including banking (e.g. FinTechs), media and entertainment (e.g. music, movies, gambling), publishing (e.g. news), travel (e.g. Airbnb) and transportation (e.g. Uber). It is very likely that traditional legal providers will face the same disruptive challenges from AI and AI-enabled innovations that bundle automation, analytics and cloud with new business models, including subscription, transaction or freemium pricing.
Although AI and AI-enabled solutions present tremendous opportunities to support, disrupt or extend traditional legal services, they also present extremely difficult ethical questions for society, policy-makers and legal bodies (e.g. the Law Society) to resolve.
That is the focus of this article, which summarises these issues in two parts:
- Current and future use cases and trends of AI in legal and compliance services;
- Key issues for stakeholders including legal practitioners, society, organisations, AI vendors, and policy-makers.
A few notes:
- This article is not designed to be an exhaustive, comprehensive or academically detailed review and analysis of the existing AI and legal services literature. It is a blog post first and foremost (albeit a detailed one) on a topic of personal and professional interest to me, and should be read in that context;
- Sources are referenced in the footnotes and acknowledged where possible; any errors or omissions are my own;
- Practical solutions and future research areas are touched on lightly in the conclusion, but they are not a focus of this article.
Part 1 – Current and future use cases of AI in legal and compliance services
Historically, AI in legal services has focused on automating tasks via software to achieve the same outcome as if a law practitioner had done the work. However, increasing innovation in AI and experimentation within the legal and broader ecosystem have allowed solutions to accelerate beyond this historical perspective.
The graphic below provides a helpful segmentation of four main use cases of how AI tools are being used in legal services[4]:

A wider view of use cases, which links to existing legal and business processes, is provided below:
- e-discovery;
- document and contract management;
- expertise automation;
- legal research and insight;
- predictive analytics;
- dispute resolution;
- practice automation;
- transactions and deals;
- access to justice.
Further context on a selection of these uses is summarised below (note, there is overlap between many of these areas):
- E-Discovery – Over the past few years, the market for e-discovery services has accelerated beyond the historical litigation use case and into other enterprise processes and requirements (e.g. AML remediation, compliance, cybersecurity, document management). This has allowed for the development of more powerful and integrated business solutions enabled by the convergence of technologies including cloud, AI, automation, data and analytics. Players in the legal e-discovery space include Relativity, DISCO, and Everlaw.
- Document and contract management – The rapid adoption of cloud technologies has accelerated the ability of organisations across all sectors to invest in solutions that solve, integrate and automate business process challenges, such as document and contract lifecycle management. Contracts need to be initiated (e.g. templates, precedents), shared, stored, monitored (e.g. renewals), and searched and tracked for legal, regulatory or dispute reasons – needs addressed by AI legaltech start-ups such as Kira, LawGeex and eBrevia. In terms of drafting and collaboration, the power of Microsoft Word, Power Automate and G-Suite has expanded, along with a significant number of AI-powered tools or sites (e.g. LegalZoom) that help lawyers (and businesses or consumers) find, draft and share the right documents, whether for commercial needs, transactions or litigation. New 'alternative legal service' entrants have combined these sorts of powerful solutions (and others in this list) with lower-cost labour models (using non-legal and/or lower-cost legal talent) to provide a more integrated offering for Fortune 500 legal, risk and compliance teams (e.g. Ontra, Axiom, UnitedLex, Elevate, Integreon);
- Expertise Automation – In the access-to-justice context, there are AI-powered services that automate contentious or bureaucratic situations for individuals, such as utility bill disputes, small claims, immigration filings, or fighting traffic tickets (e.g. DoNotPay). Other examples include workflow automation software that enables consumers to draft a will (for a fixed fee or subscription), and business chatbots that give employees answers to common questions in a specific area, such as employment law. It is foreseeable that extending this at scale in a B2C context (using AI voice assistants such as Siri or Alexa) with a trusted brand (Amazon Legal, perhaps?) – bundled into your Prime subscription alongside music, videos and same-day delivery – will make getting legal help as easy as checking the weather or ordering an Uber.
- Legal Research – New technologies (e.g. AI, automation, analytics, e-commerce) and business models (e.g. SaaS) have enabled the democratisation of legal knowledge beyond the historic use cases (e.g. find me an IT contract precedent or Canadian case law on limitation of liability). New solutions make it easy for clients and consumers (as well as lawyers) to find answers or solutions to legal or business challenges without interacting with a lawyer. In more recent times, legal publishing companies (e.g. LexisNexis, PLC, Westlaw) have leveraged legal sector relationships and huge databases of information including laws and regulations in multiple jurisdictions to build different AI-enabled solutions and business models for clients (or lawyers). These offerings promise fast, accurate (and therefore cost-effective) research with a variety of analytical and predictive capabilities. In the IP context, intellectual property lawyers can use AI-based software from companies like TrademarkNow and Anaqua to perform IP research, brand protection and risk assessment;
- Legal and predictive analytics – This area aims to generate insights from unstructured, fragmented and other types of data sets to improve future decision-making. A key use case is tools that analyse all the decisions in a domain (e.g. software patent litigation cases), take as input the specific issues and factors in a case (e.g. region, judge, parties), and provide a prediction of likely outcomes (a minimal sketch of this idea appears after this list). This may significantly impact how the insurance and medical industries operate in terms of risk, pricing and business models. For example, Intraspexion leverages deep learning to predict and warn users of their litigation risks, and predictive analytics company CourtQuant has partnered with two litigation financing companies to help evaluate litigation funding opportunities using AI. Another kind of analytics reviews a given piece of legal research or a legal submission to a court and helps judges (or barristers) identify missing precedents. In addition, there is a growing group of AI providers offering what are essentially do-it-yourself toolkits that law firms and corporations can use to create analytics programs customised to their specific needs;
- Transactions and deals – Although no two deals are the same, similar deals do require similar processes of pricing, project management, document due diligence and contract management. Yet for various reasons, many firms will start each transaction with a blank sheet of paper (or sale and purchase agreement), or a sparsely populated one. AI-enabled document and contract automation solutions – and other M&A/transaction tools – are providing efficiencies during each stage of the process. In more advanced cases, data room vendors, in partnership with law firms or end clients, are using AI to analyse large amounts of data created by lawyers on previous deals. This acts as an enormous data bank for future deals, from which the AI can learn in order to:
- Make clause recommendations to lawyers based on previous drafting and best practice;
- Identify "market" standards for contentious clauses;
- Spot patterns and make deal predictions;
- Benchmark clauses and documents against given criteria;
- Support pricing decisions based on key variables.
- Access to justice – Despite more lawyers in the market than ever before, the law has arguably never been more inaccessible. From a consumer perspective, there are thousands of easy-to-use, free or low-cost apps and online services which solve many simple or challenging aspects of life, whether buying property, consulting a doctor, making payments, finding on-demand transport or booking household services. However, escalating costs and increasing complexity (both of the law itself and of the institutions that apply and enforce it) mean that justice is often out of reach for many, especially the most vulnerable members of society. The accelerating convergence of various technologies and business models is starting to (i) open up the provision of legal services to a greater segment of the population and (ii) replace or augment the role of legal experts. From providing quick on-demand access to a lawyer via video call, to accelerating time to key evidence, to bringing the courtroom to the most remote corners of the world and digitising many court processes, AI, augmented intelligence and automation are dramatically improving the accessibility and affordability of legal representation. Examples include:
- Video conferencing tools, e.g. Zoom, FaceTime;
- Process apps, e.g. Your Lawyers Online, CourtNav, Hello Divorce;
- Robolawyers, e.g. DoNotPay;
- Lawyer marketplaces, e.g. Probono.net, Paladin, AsylumConnect;
- Document and knowledge automation, e.g. LegalZoom;
- ADR to ODR (online dispute resolution), e.g. eBay, Alibaba;
- Speed to evidence via cloud-based, AI-powered technology, e.g. DISCO;
- Risk detection, e.g. Legal Risk Detector, rAInbow app.
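To make the 'Legal and predictive analytics' item above concrete, the sketch below shows the general shape of such a tool: fit a simple model on case-level factors from past decisions, then score a new matter. Everything here is illustrative – the factors, data and choice of a logistic regression are assumptions made for the example, not how any named vendor actually works.

```python
# Illustrative outcome-prediction sketch (hypothetical data and features).
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
import pandas as pd

# Past decisions in one narrow domain (e.g. software patent cases),
# flattened into case-level factors plus a known outcome label.
past_cases = pd.DataFrame({
    "region":       ["CA", "TX", "CA", "NY", "TX", "NY"],
    "judge":        ["J1", "J2", "J1", "J3", "J2", "J3"],
    "party_type":   ["corp", "npe", "corp", "corp", "npe", "npe"],
    "claimant_won": [1, 0, 1, 1, 0, 0],
})

# One-hot encode the categorical factors, then fit a simple classifier.
model = Pipeline([
    ("encode", ColumnTransformer([
        ("cats", OneHotEncoder(handle_unknown="ignore"),
         ["region", "judge", "party_type"]),
    ])),
    ("classify", LogisticRegression()),
])
model.fit(past_cases.drop(columns="claimant_won"),
          past_cases["claimant_won"])

# Score a new matter described by the same factors.
new_matter = pd.DataFrame([{"region": "CA", "judge": "J2",
                            "party_type": "corp"}])
print(model.predict_proba(new_matter))  # -> [[P(lose), P(win)]]
```

In practice, such tools train on thousands of decisions with far richer features (claim type, procedural posture, counsel, damages sought), but the input/output contract is the same: case factors in, outcome probabilities out.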
Part 2 – Key issues for the future of AI-powered legal and compliance services
There are many significant issues and challenges for the legal sector when adopting AI and AI-powered solutions. Whilst every AI deployment is unique, there are some overarching issues to be explored by key stakeholders, including the legal profession, regulators, society, programmers, vendors and government.
A sample of key questions include the following:
- Will AI in the future make lawyers obsolete?
- How does AI impact the duty of competence and related professional responsibilities?
- How do lawyers, users, clients and stakeholders navigate the 'black box' challenge?
- Do the users (e.g. lawyers, legal operations, individuals) and clients trust the data and the insights the systems generate?
- How will liability be managed and apportioned in a balanced, fair and equitable way?
- How do organisations identify, procure, implement and govern the ‘right’ AI-solution for their organisation?
- Are individuals, lawyers or clients prepared to let data drive decision outcomes?
- What is the role of ethics in developing AI systems?
Other important questions include:
- How do AI users (e.g. lawyers), clients or regulators ‘audit’ an AI system?
- How can AI systems be safeguarded from cybercriminals?
- To what extent do AI legal services need to be regulated and consumers protected?
- Have leaders in businesses identified the talent/skills needed to realise the business benefits (and manage risks) from AI?
- To what extent is client consent to use data an issue in the development and scaling of AI systems?
- Are lawyers, law students, or legal service professionals receiving relevant training to prepare for how they need to approach the use of AI in their jobs?
- Are senior management and employees open to working with or alongside AI systems in their decisions and decision-making?
Below we further explore a selection of the above questions:
- Obsolescence – When technology performs better than humans at certain tasks, job losses for those tasks are inevitable. However, the dynamic role of a lawyer – one that involves strategy, negotiation, empathy, creativity, judgement and persuasion – can't be replaced by one or several AI programs. As such, the impact of AI on lawyers in the profession may not be as dire as some like to predict. In his book Online Courts and the Future of Justice, Richard Susskind discusses the 'AI fallacy': the mistaken impression that machines mimic the way humans work. In reality, many current AI systems review data using machine learning algorithms rather than cognitive processes. AI is adept at processing data, but it can't think abstractly or apply common sense as humans can. Thus, AI in the legal sector enhances the work of lawyers, but it can't replace them (see chart below[5]).

- Professional Responsibility – Lawyers in all jurisdictions have specific professional responsibilities to consider and uphold in the delivery of legal and client services. Sample questions include:
- Can a lawyer discharge professional duties of competence if they do not understand how the technology works?
- Is a legal chatbot practicing law?
- How does a lawyer provide adequate supervision where the lawyer does not understand how the work is being done or even ‘who’ is doing it?
- How will a lawyer explain decisions made if they do not even know how those decisions were derived?
To better understand these complex questions, the following summarises some of the key professional duties and how they are being navigated by various jurisdictions:
Duty of Competence – The principal ethical obligation of lawyers when advising or assisting clients is the duty of competence. Over the past decade, many jurisdictions have specifically required lawyers to understand how (and why) new technologies such as AI impact that duty (and related duties), including a requirement for lawyers to develop and maintain competence in 'relevant technologies'. In 2012, the American Bar Association (the "ABA") in the US explicitly included the obligation of "technological competence" within the general duty of competence in Rule 1.1 of its Model Rules of Professional Conduct (the "Model Rules")[6]. To date, 38 states have adopted some version of this revised comment to Rule 1.1. In Australia, most state solicitor and barrister regulators have incorporated this principle into their rules. In the future, jurisdictions may consider it unethical for lawyers or legal service professionals to avoid technologies that could benefit their clients. A key challenge is that there is no easy way to obtain objective and independent analysis of the efficacy of any given AI solution, so neither lawyers nor clients can easily determine which of several products or services actually achieves the results it promises. In the long term, it will very likely be one of the tasks of the future lawyer to assist clients in making those determinations and in selecting the most appropriate solution for a given problem. At a minimum, lawyers will need to be able to identify and access the expertise to make those judgments if they do not have it themselves.
Duty to Supervise – This supervisory duty assumes that lawyers are competent to select and oversee team members and the proper use of third parties (e.g. law firms) in the delivery of legal services[7]. However, the types of third parties used have expanded in recent times due to the liberalisation of legal practice in some markets (e.g. the UK, where ABS laws allow non-lawyers to operate legal services businesses). Alternative service providers, legal process outsourcers, tech vendors and AI vendors have historically been outside the remit of solicitor or lawyer regulators (this is changing in various jurisdictions, as discussed in later sections). By extension, the duty may be more than a matter of supervising what goes on with third parties: it may extend to how those third parties provide services, especially where technologies and tools are used. In such cases, potential liability issues arise if client outcomes are not successful: did the lawyer appropriately select the vendor, and did the lawyer properly manage the use of the solution?
Duty to Communicate – In the US, lawyers also have an explicit duty to communicate material matters to clients in connection with the lawyers' services. This duty is set out in ABA Model Rule 1.4, and other jurisdictions have adopted similar rules[8]. Thus, not only must lawyers be competent in the use of AI, but they will need to understand its use sufficiently to explain to clients the selection, use and supervision of AI tools.
Black Box Challenge
- Transparency – A basic principle of justice is transparency – the requirement to explain and justify the reasons for a decision. As AI algorithms grow more advanced and rely on increasing volumes of structured and unstructured data sets, it becomes more difficult to make sense of their inner workings or how outcomes have been derived. For example, Michael Kearns and Aaron Roth report in Ethical Algorithm Design Should Guide Technology Regulation[9]:
“Nearly every week, a new report of algorithmic misbehaviour emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered to match to patient faces and names”.
Part of the problem is that many of these types of AI systems are 'self-organising', operating inherently without external supervision or guidance. The 'secrecy' of AI vendors – especially those in a B2B and legal services context – regarding the inner workings of their algorithms and data sets does not make the transparency and trust issue any easier for customers, regulators and other stakeholders. For lawyers, to what extent must they know the inner workings of that black box to ensure they meet their ethical duties of competence and diligence? Left unaddressed, these problems will likely continue as the legal sector's reliance on technology increases, and injustices will, in all likelihood, continue to arise. Over time, many organisations will need a robust and integrated AI business strategy designed at board and management level to guide the wider organisation on these AI issues across governance, policy, risk, HR and more. For example, when procuring AI solutions, buyers, stakeholders and users (e.g. lawyers) must consider broader AI policies and mitigate these risk factors during vendor evaluation and procurement.
- Algorithms – There are many concerns that AI algorithms are inherently limited in their accuracy, reliability and impartiality[10]. These limitations may be the direct result of biased data, but they may also stem from how the algorithms are created. For example, the choices software engineers make about which variables to include in an algorithm and how to use them – say, whether to maximise profit margins or maximise loan repayments – can lead to a biased algorithm. Programmers may also struggle to understand how an AI algorithm generates its outputs: the algorithm may behave unpredictably, making it hard to validate the 'correctness' or accuracy of those outputs when piloting a new AI system. This brings up the challenge of auditing algorithms:
“More systematic, ongoing, and legal ways of auditing algorithms are needed. . . . It should be based on what we have come to call ethical algorithm design, which begins with a precise understanding of what kinds of behaviours we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviours (so that auditing does not simply become a game of whack-a-mole).”[11]
In terms of AI applications, most AI algorithms within legal services are currently able to perform only a very specific set of tasks based on data patterns and definitive answers. Conversely, they perform poorly when applied to abstract or open-ended situations requiring judgment, such as the situations lawyers often operate in[12]. In these circumstances, human expertise and intelligence are still critical to the development of AI solutions. Many systems are not sophisticated enough to understand and adapt to nuance, respond to expectations and layered meaning, or comprehend the practicalities of human experience. Thus, AI is still a long way from the 'obsolescence' scenario for lawyers raised above, and further research is needed on programmers' and product managers' decision-making processes and methodologies when ideating, designing, coding, testing and training an AI algorithm[13].
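As a concrete illustration of the 'know what to audit for' point in the Kearns and Roth quote above, the sketch below runs one narrow, pre-specified check on a model's decisions: whether favourable outcomes differ sharply across a protected group. The data, group labels and the four-fifths threshold are illustrative assumptions, not a prescribed standard.

```python
# One narrow, pre-specified audit: do favourable outcomes differ across
# a protected group? Data and the 0.8 threshold are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0],  # the model's outputs
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate-impact ratio
print(rates.to_dict(), f"ratio={ratio:.2f}")

# The 'four-fifths rule' is one common first screen, not a legal standard.
if ratio < 0.8:
    print("Flag for human review: approval rates diverge across groups")
```

The point of deciding the check in advance is exactly the one the quote makes: auditing against a precise, pre-agreed behaviour, rather than playing whack-a-mole after deployment.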
- Data – Large volumes of data are a critical part of AI algorithm development, as both training material and input material. However, data sets may be of poor quality for a variety of reasons. For example, the data an AI system is 'trained' on may well encode systemic 'human' bias, such as recruiters' gender or racial discrimination against job candidates (a minimal data check along these lines is sketched below). In terms of data quality in law firms, most are slow to adopt new technologies and tend to be "document rich, and data poor", due in large part to legacy on-premise (or hybrid cloud) systems which do not integrate with each other. As more firms and enterprises transition to the cloud, this should accelerate the automation of business processes (e.g. contract management) with more advanced data and analytics capabilities that enable AI system adoption (in theory – there are many constraints within traditional law firm business and operating models which make the adoption of AI-enabled solutions at scale unlikely). However, third-party vendors within the legal sector, including e-discovery providers, data rooms and legal process outsourcers – or new tech-powered entrants from outside the legal sector – do not have such constraints and are able to innovate more effectively using AI, cloud, automation and analytics (though other constraints exist, such as client consent and security). In the court context, public data such as judicial decisions and opinions are either not available or so varied in format as to be difficult to use effectively[14]. Beyond data quality issues, significant data privacy, client confidentiality and cybersecurity concerns exist, which raises the need to define and implement standards (including safeguards) to build confidence in the use of algorithmic systems – especially in legal contexts. As AI becomes more pervasive within law firms, legal departments, legal vendors (including managed services) and new entrants outside of legal, a foundation with strong guidelines for ethical use, transparency, privacy, cross-department sharing and more becomes even more crucial[15].
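A minimal sketch of the kind of pre-training data check the recruiting example above calls for: before historical decisions are used as training labels, compare how the label is distributed across a sensitive attribute. Column names and figures are hypothetical.

```python
# Pre-training data check: how is the historical label distributed across
# a sensitive attribute? Column names and figures are hypothetical.
import pandas as pd

history = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m", "m", "m"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})

# Count and hire-rate per group; a sharp skew here will be learned and
# reproduced by any model trained on these labels.
print(history.groupby("gender")["hired"].agg(["count", "mean"]))
```

Unlike the post-hoc audit sketched earlier, this check happens before any model exists: if the historical labels are skewed, the skew needs to be investigated (and possibly corrected) before training, not after.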
- Implementation – Within the legal sector, law firms and legal departments are laggards when it comes to adopting new technologies, transforming operations and implementing change. Business models based on hours billed (e.g. law firms) do little to incentivise the efficiency improvements that AI systems can provide. In addition:
“Effective deployment of AI requires a clearly defined use case and work process, strong technical expertise, extensive personnel and algorithm training, well-executed change management processes, an appetite for change and a willingness to work with the new technologies. Potential AI users should recognize that effectively deploying the technology may be harder than they would expect. Indeed, the greatest challenge may be simply getting potential users to understand and to trust the technology, not necessarily deploying it.”[16]
However, enterprises (e.g. Fortune500), start-ups, alternative service providers (e.g. UnitedLex) and new entrants from outside of legal do not suffer from these constraints, and are likely to be more successful – from a business model and innovation perspective – in adopting new AI-enabled solutions for use with clients (although AI-enabled providers must work to overcome client concerns as discussed above).
- Liability – There are a number of issues to consider on the topic of liability. Key questions are set out below:
- Who is responsible when things go wrong? Although AI might be more efficient than a human lawyer at performing these tasks, if the AI system misses clauses, mis-references definitions or provides incorrect outcome/price predictions, all parties risk claims, depending on how liability was apportioned between them. The role of contracts and insurance is key; however, this assumes that law firms have the contractual means of passing liability (in terms of professional duties) on to third parties. In addition, when determining relative liability between the provider of the defective solution and the lawyer, should a court consider the steps the lawyer took to determine whether the solution was appropriate for the particular client's matter?
- Should AI developers be liable for damage caused by their product? In most other fields, product liability is an established principle. But if the product is performing in ways no one could have predicted, is it still reasonable to assign blame to the developer? AI systems also often interact with other systems, making liability hard to assign. AI solutions are also fundamentally reliant on the data they were trained on, so liability may lie with the data sources. Equally, there are risks where AI systems are vulnerable to hacking.
- To what extent are, or will, lawyers be liable for when and how they use, or fail to use, AI solutions to address client needs? One example, noted above, is whether a lawyer or law firm will be liable for malpractice if the judge in a matter uses software that identifies guiding principles or precedents the lawyer failed to find or use. It is not a stretch to believe that liability should attach if the consequence of the lawyer's failure to use that kind of tool is a bad outcome for the client, and the client suffers injury as a result.
- Regulatory Issues – As discussed above, addressing the significant issues of bias and transparency in AI tools – and, in addition, advertising standards – will grow in importance as the use of AI itself grows. Whilst the wider landscape for regulating AI is fragmented across industry and political spheres, there are signs the UK, EU and US are starting to align.[17] Within the legal services sector, some jurisdictions (e.g. England and Wales, Australia and certain Canadian provinces) are in the process of adopting and implementing a broader regulatory framework, enabling legal regulators to oversee all providers of legal services, not just traditional law firms and/or lawyers. In the interim, the implications of this regulatory imbalance will become more pronounced as alternative legal service providers play an increasing role in providing clients with legal services, often without any direct involvement of lawyers. In the long run, a broader regulatory approach will be critically important in establishing appropriate standards for all providers of AI-based legal services.
- Ethics – The ethics of AI and data use remains a major concern and key topic of debate in terms of the moral implications and unintended consequences of bringing technology and humans together. Even proponents of AI, such as Elon Musk's OpenAI group, recognise the need to police AI that could be used for 'nefarious' means. A sample of current ethical challenges in this area includes:
- Big data, cloud and autonomous systems provoke questions around security, privacy, identity, and fundamental rights and freedoms;
- AI and social media challenge us to define how we connect with each other, source news, facts and information, and understand truth in the world;
- Global data centres, data sources and intelligent systems mean there is limited control of data outside our borders (although regimes such as the GDPR are addressing this);
- Is society content with AI that kills? Military applications including lethal autonomous weapons are already here;
- Facial recognition, sentiment analysis, and data mining algorithms could be used to discriminate against disfavoured groups, or invade people’s privacy, or enable oppressive regimes to more effectively target political dissidents;
- It may be necessary to develop AI systems that disobey human orders, subject to some higher-order principles of safety and protection of life.
Over the years, the private and public sectors have attempted to provide various frameworks and standards to ensure ethical AI development. For example, the Aletheia Framework[18] (developed by Rolls-Royce in an open partnership with industry) is a recent, practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. It asks system designers and relevant AI business managers to consider 32 facets of social impact, governance, and trust and transparency, and to provide evidence which can then be used to engage with approvers, stakeholders or auditors. A new module added in December 2021 provides a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process which, if comprehensively applied, tracks the decisions the AI is making to detect bias or malfunction in service and allows human intervention to control and correct it.
Within the practice of law, while AI offers cutting-edge advantages and benefits, it also raises complicated questions for lawyers around professional ethics. Lawyers must be aware of the ethical issues involved in using (and not using) AI, and they must have an awareness of how AI may be flawed or biased. In 2016, the House of Commons Science and Technology Committee (UK Parliament) recognised the issue:
“While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now”.
In a 2016 article in the Georgetown Journal of Legal Ethics, the authors Remus and Levy were concerned that:
“…the core values of legal professionalism meant that it might not always be desirable, even if feasible, to replace humans with computers because of the different way they perform the task. This assertion raises questions about what the core values of the legal profession are and what they should or could be in the future. What is the core value of a solicitor beyond reserved activities? And should we define the limit of what being a solicitor or lawyer is?”[19]
These are all extremely nuanced, complex and dynamic issues for lawyers, society, developers and regulators at large. How the law itself may need to change to deal with these issues will be a hot topic of debate in the coming years.
Conclusion
Over the next few years, there can be little doubt that AI will begin to have a noticeable impact on the legal profession and on consumers of legal services. Law firms, in-house legal departments and alternative legal services firms and vendors – plus new entrants from outside legal, perhaps unencumbered by the constraints of established legal sector firms – have opportunities to explore and challenges to address, but it is clear that there is significant change ahead. What is required of a 'lawyer' (a term that may mean something different in the future) or legal graduate today – let alone in 2025 or 2030 – will likely be transformed in many ways compared with new lawyers of a few decades ago. There are also many difficult ethical questions for society to decide, for which the legal practice regulators (e.g. the Law Society in England and Wales) may be uniquely positioned to grasp the opportunity of 'innovating the profession' and lead the debate. On the other hand, as the businesses of the future become more AI-enabled at their core (e.g. Netflix, Facebook, Google, Amazon), there is a real possibility that many legal services become commoditised or a 'feature set' within a broader business or service model in the near future.
At the same time, AI itself poses significant legal and ethical questions across all sorts of sectors and priority global challenges, from health, to climate change, to war, to cybersecurity. Further analysis on the legal and ethical implications of AI for society, legal practitioners, organisations, AI vendors, and policy-makers, plus what practical solutions can be employed to navigate the safe and ethical deployment of AI in the legal and other sectors, will be critical.
[1] AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects.
[2] https://www.statista.com/statistics/605125/size-of-the-global-legal-services-market/
[3] https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession
[4] https://www.morganlewis.com/-/media/files/publication/presentation/webinar/2020/session-11_the-ethics-of-artificial-intelligence-for-the-legal-profession_18june20.pdf
[5] https://kirasystems.com/learn/can-ai-be-problematic-in-legal-sector/
[6] https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/the-future-law-firms-and-lawyers-the-age-artificial-intelligence
[7] Australian Solicitors Conduct Rules 2012, Rule 37 Supervision of Legal Services.
[8] https://lawcat.berkeley.edu/record/1164159?ln=en
[9] https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/
[10] https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms
[11] https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/
[12] https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms
[13] https://bostonreview.net/articles/annette-zimmermann-algorithmic-political/
[14] https://www.law.com/legaltechnews/2019/10/29/uninformed-or-underwhelming-most-lawyers-arent-seeing-ais-value/
[15] https://www.crowell.com/NewsEvents/Publications/Articles/A-Tangled-Web-How-the-Internet-of-Things-and-AI-Expose-Companies-to-Increased-Tort-Privacy-and-Cybersecurity-Litigation
[16] https://www.lexisnexis.co.uk/pdf/lawyers-and-robots.pdf
[17] https://www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-regulation/
[18] https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx
[19] https://go.gale.com/ps/i.do?id=GALE%7CA514460996&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=10415548&p=AONE&sw=w&userGroupName=anon%7E138c97cd