Trends and issues of AI-enabled legal and compliance services

As AI continues to transform many industries[1], including the legal service industry, many experts predict exponential growth in AI as a paramount technology bringing new tools and features to improve legal services and access to justice. Already, many aspects of the estimated $786B[2] market for legal services are being digitised, automated and AI-enabled, whether discovery in litigation (e.g. Relativity), divorce (e.g. HelloDivorce), dispute resolution (e.g. DoNotPay) or contract management (e.g. IronClad).

As with many disruptive technologies, there are many experts who believe that AI will significantly disrupt (rather than extend) the legal market:

“AI will impact the availability of legal sector jobs, the business models of many law firms, and how in-house counsel leverage technology. According to Deloitte, about 100,000 legal sector jobs are likely to be automated in the next twenty years. Deloitte claims 39% of legal jobs can be automated; McKinsey estimates that 23% of a lawyer’s job could be automated. Some estimates suggest that adopting all legal technology (including AI) already available now would reduce lawyers’ hours by 13%”[3]

The real impact will be more nuanced over the long term: whilst AI will eliminate certain tasks and some legal jobs, it will also augment and extend the way legal services are provided and consumed. In doing so, it will drive new ways of working and operating for both established firms and new entrants, who will need to invest in new capabilities and skills to support the opening up of new markets, new business models and new service innovations. In the past few decades, we have seen the impact of emerging and disruptive technologies on established players across many sectors, including banking (e.g. FinTechs), media and entertainment (e.g. music, movies, gambling), publishing (e.g. news), travel (e.g. Airbnb) and transportation (e.g. Uber). It is very likely traditional legal providers will face the same disruptive challenges from AI and AI-enabled innovations that bundle automation, analytics and cloud with new business models, including subscription, transaction or freemium pricing.

Although AI and AI-enabled solutions present tremendous opportunities to support, disrupt or extend traditional legal services, they also present extremely difficult ethical questions for society, policy-makers and legal bodies (e.g. the Law Society) to decide.

This is the focus of this article, which sets out a summary of these issues and is structured into two parts:

  1. Current and future use cases and trends of AI in legal and compliance services;
  2. Key issues for stakeholders including legal practitioners, society, organisations, AI vendors, and policy-makers.

A few notes:

  • This article is not designed to be an exhaustive, comprehensive or academically detailed review and analysis of the existing AI and legal services literature. It is a blog post first and foremost (albeit a detailed one) on a topic of personal and professional interest to me, and should be read within this context;
  • Sources are referenced within the footnotes and acknowledged where possible; any errors or omissions are my own;
  • Practical solutions and future areas of research are lightly touched on in the conclusion, but are not a focus of this article.

Part 1 – Current and future use cases of AI in legal and compliance services

Historically, AI in legal services has focused on automating tasks via software to achieve the same outcome as if a law practitioner had done the work. However, increasing innovation in AI and experimentation within the legal and broader ecosystem have allowed solutions to accelerate beyond this historical perspective.

The graphic below provides a helpful segmentation of four main use cases of how AI tools are being used in legal services[4]:

A wider view of use cases, which links to existing legal and business processes, is provided below:

  • e-discovery;
  • document and contract management;
  • expertise automation;
  • legal research and insight;
  • contract management;
  • predictive analytics;
  • dispute resolution;
  • practice automation;
  • transactions and deals;
  • access to justice.

Further context on a selection of these uses is summarised below (note, there is overlap between many of these areas):

  • E-Discovery – Over the past few years, the market for e-discovery services has accelerated beyond the historical litigation use case and into other enterprise processes and requirements (e.g. AML remediation, compliance, cybersecurity, document management). This has allowed for the development of more powerful and integrated business solutions enabled by the convergence of technologies including cloud, AI, automation, data and analytics. Players in the legal e-discovery space include Relativity, DISCO, and Everlaw.
  • Document and contract management – The rapid adoption of cloud technologies has accelerated the ability of organisations across all sectors to invest in solutions that better solve, integrate and automate business process challenges, such as document and contract lifecycle management. Contracts need to be initiated (e.g. templates, precedents), shared, stored, monitored (e.g. renewals) or searched and tracked for legal, regulatory or dispute reasons (e.g. AI legaltech start-ups like Kira, LawGeex, and eBrevia). In terms of drafting and collaboration, the power of Microsoft Word, Power Automate and G-Suite solutions has expanded, along with a significant number of AI-powered tools or sites (e.g. LegalZoom) that help lawyers (and businesses or consumers) to find, draft and share the right documents, whether for commercial needs, transactions or litigation. New ‘alternative legal service’ entrants have combined these sorts of powerful solutions (and others in this list) with lower-cost labour models (with non-legal talent and/or lower-cost legal talent) to provide a more integrated offering for Fortune 500 legal, risk and compliance teams (e.g. Ontra, Axiom, UnitedLex, Elevate, Integreon);
  • Expertise Automation – In the access to justice context, there are AI-powered services that automate contentious or bureaucratic situations for individuals, such as utility bill disputes, small claims, immigration filing, or fighting traffic tickets (e.g. DoNotPay). Other examples include workflow automation software that enables consumers to draft a will (for a fixed fee or subscription) or chatbots in businesses that give employees access to answers to common questions in a specific area, such as employment law. It is foreseeable that extending this at scale in a B2C context (using AI voice assistants such as Siri or Alexa) with a trusted brand (e.g. Amazon Legal perhaps?) – and bundled into your Prime subscription alongside music, videos and same-day delivery – will make getting legal help as easy as checking the weather or ordering an Uber.
  • Legal Research – New technologies (e.g. AI, automation, analytics, e-commerce) and business models (e.g. SaaS) have enabled the democratisation of legal knowledge beyond the historic use cases (e.g. find me an IT contract precedent or Canadian case law on limitation of liability). New solutions make it easy for clients and consumers (as well as lawyers) to find answers or solutions to legal or business challenges without interacting with a lawyer. In more recent times, legal publishing companies (e.g. LexisNexis, PLC, Westlaw) have leveraged legal sector relationships and huge databases of information including laws and regulations in multiple jurisdictions to build different AI-enabled solutions and business models for clients (or lawyers). These offerings promise fast, accurate (and therefore cost-effective) research with a variety of analytical and predictive capabilities. In the IP context, intellectual property lawyers can use AI-based software from companies like TrademarkNow and Anaqua to perform IP research, brand protection and risk assessment;
  • Legal and predictive analytics – This area aims to generate insights from unstructured, fragmented and other types of data sets to improve future decision-making. A key use case is tools that analyse all the decisions in a domain (e.g. software patent litigation cases), take as input the specific factors in a case (e.g. region, judge, parties etc.) and provide a prediction of likely outcomes (a minimal sketch of this idea appears after this list). This may significantly impact how the insurance and medical industries operate in terms of risk, pricing, and business models. For example, Intraspexion leverages deep learning to predict and warn users of their litigation risks, and predictive analytics company CourtQuant has partnered with two litigation financing companies to help evaluate litigation funding opportunities using AI. Another kind of analytics will review a given piece of legal research or legal submission to a court and help judges (or barristers) identify missing precedents. In addition, there is a growing group of AI providers offering what are essentially do-it-yourself tool kits that law firms and corporations can use to create their own analytics programs customised to their specific needs;
  • Transactions and deals – Although no two deals are the same, similar deals do require similar processes of pricing, project management, document due diligence and contract management. Yet, for various reasons, many firms will start each transaction with a blank sheet of paper (or sale and purchase agreement), or a sparsely populated one. AI-enabled document and contract automation solutions – and other M&A/transaction tools – are providing efficiencies during each stage of the process. In more advanced cases, data room vendors, in partnership with law firms or end clients, are using AI to analyse large amounts of data created by lawyers from previous deals. This data set can act as an enormous data bank for future deals, where the AI has the ability to learn from it in order to:
    • Make clause recommendations to lawyers based on previous drafting and best practice;
    • Identify “market” standards for contentious clauses;
    • Spot patterns and make deal predictions;
    • Benchmark clauses and documents against given criteria;
    • Support pricing decisions based on key variables.
  • Access to justice – Despite more lawyers in the market than ever before, the law has arguably never been more inaccessible. From a consumer perspective, there are thousands of easy-to-use, free or low-cost apps and online services which solve many simple or challenging aspects of life, whether buying property, consulting with a doctor, making payments, finding on-demand transport, or booking household services. However, escalating costs and increasing complexity (both in the law itself and the institutions that apply and enforce it) mean that justice is often out of reach for many, especially the most vulnerable members of society. The accelerating convergence of various technologies and business models is starting to play a role in (i) opening up the provision of legal services to a greater segment of the population and (ii) replacing or augmenting the role of legal experts. From providing quick on-demand access to a lawyer via video conference (VC), to accelerating time to key evidence, to bringing the courtroom to even the most remote corners of the world and digitising many court processes, AI, augmented intelligence, and automation are dramatically improving the accessibility and affordability of legal representation. Examples include:
    • VC tools e.g. Zoom, FaceTime
    • Document and knowledge automation e.g. LegalZoom
    • ADR to ODR (online dispute resolution) e.g. eBay, Alibaba
    • Speed to evidence – Cloud-based, AI-powered technology e.g. DISCO
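
To make the predictive analytics use case above concrete, here is a minimal, hypothetical sketch of how a case-outcome predictor might work: past decisions are encoded as simple categorical factors (region, judge, claim type) and a classifier estimates the probability of a claimant win for a new matter. The field names and data are invented for illustration; commercial products of the kind named above train on far larger corpora and far richer features.

```python
# Hypothetical sketch of case-outcome prediction from categorical case factors.
# Data and field names are invented; real systems use large corpora of decisions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy training set: past cases and their outcomes (1 = claimant win, 0 = loss).
past_cases = pd.DataFrame({
    "region":  ["CA", "TX", "CA", "NY", "TX", "NY"],
    "judge":   ["A",  "B",  "A",  "C",  "B",  "C"],
    "claim":   ["patent", "patent", "contract", "patent", "contract", "contract"],
    "outcome": [1, 0, 1, 1, 0, 0],
})

features = ["region", "judge", "claim"]
model = Pipeline([
    # One-hot encode the categorical factors; ignore unseen categories at predict time.
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(handle_unknown="ignore"), features)])),
    ("classify", LogisticRegression()),
])
model.fit(past_cases[features], past_cases["outcome"])

# Estimate the likely outcome of a new matter from the same factors.
new_case = pd.DataFrame({"region": ["CA"], "judge": ["B"], "claim": ["patent"]})
print(model.predict_proba(new_case))  # [[P(loss), P(win)]]
```

Even a toy pipeline like this makes the questions in Part 2 tangible: the prediction is only as good, and as fair, as the historical decisions it was trained on.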

Part 2 – Key issues for the future of AI-powered legal and compliance services

There are many significant issues and challenges for the legal sector when adopting AI and AI-powered solutions. Whilst every use case of AI deployment is unique, there are some overarching issues to be explored by key stakeholders including the legal profession, regulators, society, programmers, vendors and government.

A sample of key questions include the following:

  • Will AI in the future make lawyers obsolete?
  • How does AI impact the duty of competence and related professional responsibilities?
  • How do lawyers, users, clients and other stakeholders navigate the ‘black box’ challenge?
  • Do the users (e.g. lawyers, legal operations, individuals) and clients trust the data and the insights the systems generate?
  • How will liability be managed and apportioned in a balanced, fair and equitable way?
  • How do organisations identify, procure, implement and govern the ‘right’ AI-solution for their organisation?
  • Are individuals, lawyers or clients prepared to let data drive decision outcomes?
  • What is the role of ethics in developing AI systems?

Other important questions include:

  • How do AI users (e.g. lawyers), clients or regulators ‘audit’ an AI system?
  • How can AI systems be safeguarded from cybercriminals?
  • To what extent do AI-legal services need to be regulated and consumers be protected?
  • Have leaders in businesses identified the talent/skills needed to realise the business benefits (and manage risks) from AI?
  • To what extent is client consent to use data an issue in the development and scaling of AI systems?
  • Are lawyers, law students, or legal service professionals receiving relevant training to prepare for how they need to approach the use of AI in their jobs?
  • Are senior management and employees open to working with or alongside AI systems in their decisions and decision-making?

Below we further explore a selection of the above questions:

  • Obsolescence – When technology performs better than humans at certain tasks, job losses for those tasks are inevitable. However, the dynamic role of a lawyer — one that involves strategy, negotiation, empathy, creativity, judgement, and persuasion — can’t be replaced by one or several AI programs. As such, the impact of AI on lawyers in the profession may not be as dire as some like to predict. In his book Online Courts and the Future of Justice, author Richard Susskind discusses the ‘AI fallacy’ which is the mistaken impression that machines mimic the way humans work. For example, many current AI systems review data using machine learning, or algorithms, rather than cognitive processes. AI is adept at processing data, but it can’t think abstractly or apply common sense as humans can. Thus, AI in the legal sector enhances the work of lawyers, but it can’t replace them (see chart below[5]).
  • Professional Responsibility – Lawyers in all jurisdictions have specific professional responsibilities to consider and uphold in the delivery of legal and client services. Sample questions include:
    • Can a lawyer discharge professional duties of competence if they do not understand how the technology works?
    • Is a legal chatbot practicing law?
    • How does a lawyer provide adequate supervision where the lawyer does not understand how the work is being done or even ‘who’ is doing it?
    • How will a lawyer explain decisions made if they do not even know how those decisions were derived?

To better understand these complex questions, the below summarises some of the key professional duties and how they are being navigated by various jurisdictions:

Duty of Competence – The principal ethical obligation of lawyers when they are developing or assisting clients is the duty of competence. Over the past decade, many jurisdictions have begun specifically requiring lawyers to understand how (and why) new technologies such as AI impact that duty (and related duties). This includes the requirement for lawyers to develop and maintain competence in ‘relevant technologies’. In 2012, the American Bar Association (the “ABA”) in the US explicitly included the obligation of “technological competence” as falling within the general duty of competence under Rule 1.1 of its Model Rules of Professional Conduct (“Model Rules”)[6]. To date, 38 states have adopted some version of this revised comment to Rule 1.1. In Australia, most state solicitor and barrister regulators have incorporated this principle into their rules. In the future, jurisdictions may consider it unethical for lawyers or legal service professionals to avoid technologies that could benefit their clients. A key challenge is that there is no easy way to provide objective and independent analysis of the efficacy of any given AI solution, so neither lawyers nor clients can easily determine which of several products or services actually achieves the results promised. In the long term, it will very likely be one of the tasks of the future lawyer to assist clients in making those determinations and in selecting the most appropriate solution for a given problem. At a minimum, lawyers will need to be able to identify and access the expertise to make those judgments if they do not have it themselves.

Duty to Supervise – This supervisory duty assumes that lawyers are competent to select and oversee team members and the proper use of third parties (e.g. law firms) in the delivery of legal services[7]. However, the types of third parties used have expanded in recent times due to the liberalisation of legal practice in some markets (e.g. the UK, where ABS laws allow non-lawyers to operate legal services businesses). For example, alternative service providers, legal process outsourcers, tech vendors, and AI vendors have historically been outside the remit of solicitor or lawyer regulators (this is changing in various jurisdictions, as discussed in the sections below). By extension, the question becomes not just whether lawyers must supervise what third parties do, but how those third parties provide services, especially where technologies and tools are used. In such cases, potential liability issues arise if client outcomes are not successful: did the lawyer appropriately select the vendor, and did the lawyer properly manage the use of the solution?

The Duty to Communicate – In the US, lawyers also have an explicit duty to communicate material matters to clients in connection with the lawyers’ services. This duty is set out in ABA Model Rule 1.4, and other jurisdictions have adopted similar rules[8]. Thus, not only must lawyers be competent in the use of AI, but they will need to understand its use sufficiently to explain to clients the selection, use, and supervision of AI tools.

Black Box Challenge  

  • Transparency – A basic principle of justice is transparency – the requirement to explain and justify the reasons for a decision. As AI algorithms grow more advanced and rely on increasing volumes of structured and unstructured data sets, it becomes more difficult to make sense of their inner workings or how outcomes have been derived. For example, Michael Kearns and Aaron Roth report in Ethical Algorithm Design Should Guide Technology Regulation[9]:

“Nearly every week, a new report of algorithmic misbehaviour emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered to match to patient faces and names”.

Part of the problem is that many of these types of AI systems are ‘self-organising’, so they inherently operate without external supervision or guidance. The ‘secrecy’ of AI vendors – especially those in a B2B and legal services context – regarding the inner workings of their AI algorithms and data sets does not make the transparency and trust issue any easier for customers, regulators and other stakeholders. For lawyers, to what extent must they know the inner workings of that black box to ensure they meet their ethical duties of competence and diligence? Without addressing this, these problems will likely continue as the legal sector’s reliance on technology increases, and injustices will, in all likelihood, continue to arise. Over time, many organisations will need a robust and integrated AI business strategy, designed at board and management level, to guide the wider organisation on these AI issues across areas including governance, policy, risk, HR and more. For example, during procurement of AI solutions, buyers, stakeholders and users (e.g. lawyers) must consider broader AI policies and mitigate these risk factors during vendor evaluation and procurement. One family of techniques for peering inside the box, model-agnostic explainability, is sketched below.
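
As a hedged illustration of how a buyer, lawyer or auditor might probe a black box without access to its internals, the sketch below uses permutation importance: each input feature is scrambled in turn, and the resulting drop in accuracy reveals how heavily the model leans on that feature. The model and data here are synthetic stand-ins, not any vendor’s actual system.

```python
# Probing a "black box" model with permutation importance: scramble one feature
# at a time and measure how much held-out accuracy drops. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for a vendor-supplied model whose internals we cannot inspect.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop when scrambled = {drop:.3f}")
```

Techniques like this do not open the box, but they give stakeholders an evidence-based view of which inputs drive outcomes – a useful starting point during vendor evaluation.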

  • Algorithms – There are many concerns that AI algorithms are inherently limited in their accuracy, reliability and impartiality[10]. These limitations may be the direct result of biased data, but they may also stem from how the algorithms are created. For example, how software engineers choose the set of variables to include in an algorithm, and decide how to use those variables – whether to maximise profit margins or maximise loan repayments, say – can lead to a biased algorithm. Programmers may also struggle to understand how an AI algorithm generates its outputs: the algorithm may be unpredictable, making it difficult to validate the “correctness” or accuracy of those outputs when piloting a new AI system. This brings up the challenge of auditing algorithms (a minimal sketch of one starting point for such an audit follows the quote below):

“More systematic, ongoing, and legal ways of auditing algorithms are needed. . . . It should be based on what we have come to call ethical algorithm design, which begins with a precise understanding of what kinds of behaviours we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviours (so that auditing does not simply become a game of whack-a-mole).”[11]
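
In the spirit of knowing ‘what to audit for’, below is a minimal, hypothetical sketch of one of the simplest checks an audit might begin with: comparing a model’s favourable-outcome rates across two groups (demographic parity) and computing the disparate impact ratio, for which US employment guidance uses a four-fifths rule of thumb. The predictions and group labels are invented; a real audit would run such checks continuously against live systems, as Kearns and Roth advocate.

```python
# Hypothetical starting point for an algorithmic audit: compare favourable-outcome
# rates across groups and compute the disparate impact ratio (four-fifths rule).
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favourable outcome
group       = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = predictions[group == "a"].mean()  # 0.60
rate_b = predictions[group == "b"].mean()  # 0.40
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rate, group a: {rate_a:.2f}")
print(f"selection rate, group b: {rate_b:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67 – below the 0.8 rule of thumb
```

Simple rate comparisons will not catch every behaviour worth avoiding, but they illustrate how ethical algorithm design turns abstract fairness goals into testable properties.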

In terms of AI applications, most AI algorithms within legal services are currently able to perform only a very specific set of tasks based on data patterns and definitive answers. Conversely, they perform poorly when applied to abstract or open-ended situations requiring judgment, such as the situations lawyers often operate in[12]. In these circumstances, human expertise and intelligence are still critical to the development of AI solutions. Many are not sophisticated enough to understand and adapt to nuance, respond to expectations and layered meaning, or comprehend the practicalities of human experience. Thus, AI is still a long way from the ‘obsolescence’ issue for lawyers raised above, and further research is necessary on programmers’ and product managers’ decision-making processes and methodologies when ideating, designing, coding, testing and training an AI algorithm[13].

  • Data – Large volumes of data are a critical part of AI algorithm development, as both training and input material. However, data sets may be of poor quality for a variety of reasons. For example, the data an AI system is ‘trained’ on may well include systemic ‘human’ bias, such as recruiters’ gender or racial discrimination against job candidates. In terms of data quality in law firms, most are slow to adopt new technologies and tend to be “document rich, and data poor” due, in large part, to legacy on-premise (or hybrid cloud) systems which do not integrate with each other. As more firms and enterprises transition to the cloud, this will accelerate the automation of business processes (e.g. contract management) with more advanced data and analytics capabilities to enable and facilitate AI system adoption (in theory; in practice there are many constraints within traditional law firm business and operating models which make the adoption of AI-enabled solutions at scale unlikely). However, third-party vendors within the legal sector, including e-discovery, data rooms, and legal process outsourcers – or new tech-powered entrants from outside the legal sector – do not have such constraints and are able to innovate more effectively using AI, cloud, automation and analytics in these contexts (although other constraints exist, such as client consent and security). In the court context, public data such as judicial decisions and opinions are either not available or so varied in format as to be difficult to use effectively[14]. Beyond data quality issues, significant data privacy, client confidentiality and cybersecurity concerns exist, which raises the need to define and implement standards (including safeguards) to build confidence in the use of algorithmic systems – especially in legal contexts. As AI becomes more pervasive within law firms, legal departments, legal vendors (including managed services) and new entrants outside of legal, a foundation with strong guidelines for ethical use, transparency, privacy, cross-department sharing and more becomes even more crucial[15].
  • Implementation – Within the legal sector, law firms and legal departments are laggards when it comes to adopting new technologies, transforming operations, and implementing change. Business models based on hours billed (e.g. at law firms) may not incentivise the efficiency improvements that AI systems can provide. In addition:

“Effective deployment of AI requires a clearly defined use case and work process, strong technical expertise, extensive personnel and algorithm training, well-executed change management processes, an appetite for change and a willingness to work with the new technologies. Potential AI users should recognize that effectively deploying the technology may be harder than they would expect. Indeed, the greatest challenge may be simply getting potential users to understand and to trust the technology, not necessarily deploying it.”[16]

However, enterprises (e.g. Fortune500), start-ups, alternative service providers (e.g. UnitedLex) and new entrants from outside of legal do not suffer from these constraints, and are likely to be more successful – from a business model and innovation perspective – in adopting new AI-enabled solutions for use with clients (although AI-enabled providers must work to overcome client concerns as discussed above).   

  • Liability – There are a number of issues to consider on the topic of liability. Key questions are set out below:
    • Who is responsible when things do go wrong? Although AI might be more efficient than a human lawyer at performing these tasks, if the AI system misses clauses, mis-references definitions, or provides incorrect outcome/price predictions, all parties risk claims depending on how liability was apportioned between them. The role of contracts and insurance is key; however, this assumes that law firms have the contractual means of passing liability (in terms of professional duties) onto third parties. In addition, when determining relative liability between the provider of the defective solution and the lawyer, should a court consider the steps the lawyer took to determine whether the solution was the appropriate one for use in the particular client’s matter?
    • Should AI developers be liable for damage caused by their product? In most other fields, product liability is an established principle. But if the product is performing in ways no one could have predicted, is it still reasonable to assign blame to the developer? AI systems also often interact with other systems, so assigning liability becomes difficult. AI solutions are also fundamentally reliant on the data they were trained on, so liability may exist with the data sources. Equally, there are risks where AI systems are vulnerable to hacking.
    • To what extent are, or will, lawyers be liable for when and how they use, or fail to use, AI solutions to address client needs? One example explained above is whether a lawyer or law firm will be liable for malpractice if the judge in a matter accesses software that identifies guiding principles or precedents that the lawyer failed to find or use. It does not seem a stretch to believe that liability should attach if the consequence of the lawyer’s failure to use that kind of tool is a bad outcome for the client and the client suffers injury as a result.
  • Regulatory Issues – As discussed above, addressing the significant issues of bias and transparency in AI tools, and, in addition, advertising standards, will grow in importance as the use of AI itself grows. Whilst the wider landscape for regulating AI is fragmented across industry and political spheres, there are signs the UK, EU and US are starting to align.[17] Within the legal services sector, some jurisdictions (e.g. England and Wales, Australia and certain Canadian provinces) are in the process of adopting and implementing a broader regulatory framework. This approach enables the legal regulators to oversee all providers of legal services, not just traditional law firms and/or lawyers. In the interim, however, the implications of this regulatory imbalance will become more pronounced as alternative legal service providers play an increasing role in providing clients with legal services, often without any direct involvement of lawyers. In the long run, a broader regulatory approach is going to be critically important in establishing appropriate standards for all providers of AI-based legal services.
  • Ethics – The ethics of AI and data use remains a key concern and topic of debate in terms of the moral implications and unintended consequences that result from the coming together of technology and humans. Even proponents of AI, such as Elon Musk’s OpenAI group, recognise the need to police AI that could be used for ‘nefarious’ means. A sample of current ethical challenges in this area includes:
    • Big data, cloud and autonomous systems provoke questions around security, privacy, identity, and fundamental rights and freedoms;
    • AI and social media challenge us to define how we connect with each other, source news, facts and information, and understand truth in the world;
    • Global data centres, data sources and intelligent systems mean there is limited control of data outside our borders (although regimes such as the GDPR are addressing this);
    • Is society content with AI that kills? Military applications, including lethal autonomous weapons, are already here;
    • Facial recognition, sentiment analysis, and data mining algorithms could be used to discriminate against disfavoured groups, invade people’s privacy, or enable oppressive regimes to more effectively target political dissidents;
    • It may be necessary to develop AI systems that disobey human orders, subject to some higher-order principles of safety and protection of life.

Over the years, the private and public sectors have attempted to provide various frameworks and standards to ensure ethical AI development. For example, the Aletheia Framework[18] (developed by Rolls-Royce in an open partnership with industry) is a recent, practical one-page toolkit that guides developers, executives and boards both prior to deploying an AI and during its use. It asks system designers and relevant AI business managers to consider 32 facets of social impact, governance, and trust and transparency, and to provide evidence which can then be used to engage with approvers, stakeholders or auditors. A new module, added in December 2021, is a tried and tested way to identify and help mitigate the risk of bias in training data and AIs. This complements the existing five-step continuous automated checking process which, if comprehensively applied, tracks the decisions the AI is making to detect bias in service or malfunction, and allows human intervention to control and correct it.

Within the practice of law, while AI offers cutting-edge advantages and benefits, it also raises complicated questions for lawyers around professional ethics. Lawyers must be aware of the ethical issues involved in using (and not using) AI, and they must have an awareness of how AI may be flawed or biased. In 2016, the House of Commons Science and Technology Committee (UK Parliament) recognised the issue:

“While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now”.

In a 2016 article in the Georgetown Journal of Legal Ethics, the authors Remus and Levy were concerned that:

“…the core values of legal professionalism meant that it might not always be desirable, even if feasible, to replace humans with computers because of the different way they perform the task. This assertion raises questions about what the core values of the legal profession are and what they should or could be in the future. What is the core value of a solicitor beyond reserved activities? And should we define the limit of what being a solicitor or lawyer is?”[19]

These are all extremely nuanced, complex and dynamic issues for lawyers, society, developers and regulators at large. How the law itself may need to change to deal with these issues will be a hot topic of debate in the coming years.

Conclusion

Over the next few years there can be little doubt that AI will begin to have a noticeable impact on the legal profession and consumers of legal services. Law firms, in-house legal departments and alternative legal services firms and vendors – plus new entrants outside of legal, perhaps unencumbered by the constraints of established legal sector firms – have opportunities to explore and challenges to address, but it is clear that there will be significant change ahead. What is required of a future ‘lawyer’ (a term that may mean something different in the future) or legal graduate today – let alone in 2025 or 2030 – versus new lawyers of a few decades ago will likely be transformed in many ways. There are also many difficult ethical questions for society to decide, for which the legal practice regulators (e.g. the Law Society in England and Wales) may be in a unique position to grasp the opportunity of ‘innovating the profession’ and lead the debate. On the other hand, as the businesses of the future become more AI-enabled at their core (e.g. Netflix, Facebook, Google, Amazon etc.), there is a real possibility that many legal services become commoditised or a ‘feature set’ within a broader business or service model in the near future.

At the same time, AI itself poses significant legal and ethical questions across all sorts of sectors and priority global challenges, from health, to climate change, to war, to cybersecurity. Further analysis on the legal and ethical implications of AI for society, legal practitioners, organisations, AI vendors, and policy-makers, plus what practical solutions can be employed to navigate the safe and ethical deployment of AI in the legal and other sectors, will be critical.


[1] AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects.

[2] https://www.statista.com/statistics/605125/size-of-the-global-legal-services-market/

[3] https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession

[4] https://www.morganlewis.com/-/media/files/publication/presentation/webinar/2020/session-11_the-ethics-of-artificial-intelligence-for-the-legal-profession_18june20.pdf

[5] https://kirasystems.com/learn/can-ai-be-problematic-in-legal-sector/

[6] https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/the-future-law-firms-and-lawyers-the-age-artificial-intelligence

[7] Australian Solicitors Conduct Rules 2012, Rule 37 Supervision of Legal Services.

[8] https://lawcat.berkeley.edu/record/1164159?ln=en

[9] https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/

[10] https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms

[11] https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/

[12] https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms

[13] https://bostonreview.net/articles/annette-zimmermann-algorithmic-political/

[14] https://www.law.com/legaltechnews/2019/10/29/uninformed-or-underwhelming-most-lawyers-arent-seeing-ais-value/

[15] https://www.crowell.com/NewsEvents/Publications/Articles/A-Tangled-Web-How-the-Internet-of-Things-and-AI-Expose-Companies-to-Increased-Tort-Privacy-and-Cybersecurity-Litigation

[16] https://www.lexisnexis.co.uk/pdf/lawyers-and-robots.pdf

[17] https://www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-regulation/

[18] https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx

[19] https://go.gale.com/ps/i.do?id=GALE%7CA514460996&sid=googleScholar&v=2.1&it=r&linkaccess=abs&issn=10415548&p=AONE&sw=w&userGroupName=anon%7E138c97cd


Think Tank Credibility

After listening to a podcast with Tristan Harris (co-founder of the think tank the Center for Humane Technology), I’ve started to take a closer look at think tanks over the past few months. Given the recent congressional hearings with tech leaders and upcoming elections in the US (and Guernsey, where I have resided since 2016), I am fascinated by the role think tanks play in democracy.

As part of this research, I recently came across a brilliant article by Andrea Baertl on the topic of ‘think tank credibility’. Credibility is obviously crucial for a think tank: to effectively inform policy and practice, they need to be – and be seen as – credible sources of information and advice.

In the current environment – where fake news, fake think tanks, and bad and fake research abound – think tanks need to be trustworthy sources of information and advice to their stakeholders.

In Andrea’s article, she provides an annotated reading list of resources that address the concept of credibility and think tanks. Below, I have provided a selection of the key resources to check out for further insight into the changing role of think tanks and the challenges of the new world:

A problematic context: post-truth, bad research and clandestine lobbying

Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017, in press). Beyond misinformation: Understanding and coping with the post-truth era. Journal of Applied Research in Memory and Cognition.

This article discusses the terms post-truth, fake news and misinformation. It outlines the societal trends that gave way to the current misinformation environment: a decline in social capital, growing economic inequality, increased polarisation, declining trust in science, and a fractionated media landscape. It also shows how misinformation influences people and the pervasive effects this can have. It finally discusses how people respond to corrections, showing how difficult changing people’s minds can be, and finishes with recommendations on how to combat misinformation.

Leach, M. (2017). Research and evidence in the ‘post-truth’ era. Institute of Development Studies. Opinion.

This opinion article discusses the role of research and evidence in the current environment, in which experts and facts are rejected by some groups. It argues that there is still a need for research and evidence, but done differently. Research, evidence and knowledge need to demonstrate how and why we should create a fairer and more sustainable world, and how research can contribute towards that goal.

Gutbrod, H. (2017). Fake news, fake tanks, and the general election: Britain’s democracy under threat? Transparency International Blog.

The author reflects on the impact that fake news and fake tanks can have on UK elections. He describes how fake tanks have effectively generated false news that is picked up by the main media outlets, arguing that, yes, there is cause for concern. Editors and journalists often cannot tell the difference between real think tanks and fake ones (which are usually fronts for lobbyists or other powers), and he gives examples of them further propagating fake news. Transparency, he argues, is a useful tool to identify whether a think tank is credible or not, and could and should be used to combat fake news and fake tanks. He finally argues that governments should not fall into the trap of more regulation, as that would stifle existing think tanks; instead the focus should be on improving the media, asking them to fact-check and to refuse to provide outlets to fake tanks and dark money groups.

The concept of credibility

Rieh, S. Y. & Danielson, D. R. (2007). Credibility: A multidisciplinary framework. In B. Cronin (Ed.), Annual Review of Information Science and Technology (Vol. 41, pp. 307-364). Medford, NJ: Information Today.

Rieh and Danielson discuss the concept of credibility and its relationship with trust, quality, authority and persuasion. They focus on identifying critical concepts and dimensions of credibility and the factors that influence its assessment. The focus is geared towards general communications, web design and information science, but the review of the concept they do as well as the framework proposed are very useful and a good introduction to understanding credibility.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing and Management, 44(4), 1467–1484.

Based on interviews, the authors propose that there are three levels of credibility judgement: 1) construct, which is the way a person defines or operationalises credibility; 2) heuristics, the rules of thumb that people use to assess credibility in particular situations; and 3) the interaction level, which is how these two interact with the cues elicited by the source. Additionally, they propose that context frames these assessments. This is a very interesting framework, useful for understanding how individuals assess credibility. The authors do a very good job of explaining how assessments at every level are made and how different aspects influence credibility judgements.

Policy research, think tanks and credibility

Judis, J.B. (2017) The credible think tank is dead. New Republic.

Judis discusses the ousting of a Google critic at the New America Foundation and argues that donors have corrupted Washington’s policy and research institutes. The author traces the history of think tanks in the United States, shows how they have transformed, and concludes – in a pessimistic tone – that a reduction of the role of money throughout American politics is needed to revive the older vision of the think tank: carrying out disinterested research.

Mendizabal, E. (2018). Is it all about credibility? On Think Tanks article.

This is a reflection on different aspects that On Think Tanks has focused on over the years – governance, business models, transparency, research quality, communications, etc. – and how these different issues all lead to a think tank strengthening and showcasing its credibility.

Doberstein, C. (2017). Whom Do Bureaucrats Believe? A Randomized Controlled Experiment Testing Perceptions of Credibility of Policy Research. Policy Studies Journal, 45: 384–405.

Highly recommended research that shows the power of heuristics when assessing the credibility of a source. Doberstein ran an experiment in which participants (government bureaucrats) were asked to read research summaries and assess their credibility; for half of respondents the affiliation/authorship of the content was randomly reassigned. The findings showed that credibility was essentially assessed via heuristics, regardless of the actual piece of research: academic research is perceived to be more credible than think tank or advocacy organisation research. The author did a follow-up study with similar findings: Doberstein, C. (2017). The Credibility Chasm in Policy Research from Academics, Think Tanks, and Advocacy Organizations. Canadian Public Policy, 43, 4. Both articles can be found on academia.edu, and the author also published an abridged version.

Rich, A. (2004) Think tanks, public policy and the politics of expertise. Cambridge University Press, New York.

This book is an excellent introduction to understanding and studying think tanks. Regarding credibility, chapter three, “Political Credibility”, is highly recommended. Rich analyses the perceptions of think tanks among US congressional staff and journalists (as key actors in policymaking), their views on the influence and credibility of think tanks, and how think tanks’ visibility and marketing efforts affect their influence and perceived credibility.

Stone, D. (2004). Private authority, scholarly legitimacy and political credibility: Think tanks and informal diplomacy. In Higgot, R., Underhill, G. R. D., & Bieler, A. (Eds.), Non-State Actors and Authority in the Global System.

The work of Stone on think tanks in general is highly recommended. This chapter is very useful for understanding the credibility of think tanks from a political viewpoint. The author describes how think tanks, as non-state actors, act as policy entrepreneurs in both domestic and international policy domains and contribute to policymaking. Despite not being fully academic actors, they operate within that world as well, which in turn lends them credibility.

Baertl, A. (2018) De-constructing credibility: factors that affect a think tank’s credibility. On Think Tanks Working Paper 4. On Think Tanks

The paper explores the concept of credibility, explaining that credibility is constructed through the interaction of the characteristics and actions of an organisation and the assessments of others in the context within which communication takes place. Stakeholders give (or take away) credibility based on their assessment of the information they have and the influence of the current context. The paper argues that the credibility of a think tank goes beyond the quality of its research, and that there is a common set of factors from which individuals draw to assess the credibility of a think tank. These are: networks, past impact, intellectual independence, transparency, credentials and expertise, communications and visibility, research quality, ideology and values, and current context.

Ensuring credibility

The following are a selection of articles that focus on specific factors that relate to the credibility of think tanks, and give ideas on how think tanks can ensure and showcase it.

Research quality

Méndez (2012). What’s in good? Evaluating IDRC Results: Research Excellence. IDRC.

Although a little dated now – and therefore missing the latest research quality frameworks (REF and RQ+) – this is an excellent overview of the literature on research quality and excellence, as well as some of its gaps. The article discusses the elusive concept of research excellence or quality and demonstrates that there are no common definitions, but several commonalities. This document is included in this credibility reading list because research quality is at the core of a think tank’s credibility and needs to be reflected on before moving any further in assessing its credibility.

McLean, R. (2018). Credibility and research quality – time for a paradigm shift? On Think Tanks article.

The author discusses the RQ+ framework of the IDRC as a way forward to measure and ensure the quality of research and support its credibility. The article starts by questioning impact indicators, arguing that they are essentially a proxy for how popular a publication is and say very little about the importance of the topic, the quality of the research, or its impact on policy or practice. The framework developed by the IDRC is a way to ensure all of this, which would in turn lead to credible research.

Transparency

Gutbrod, H. (2018). Credibility – the role of transparency. On Think Tanks article.

This short article reflects on the relationship between transparency and credibility, arguing that transparency does not guarantee credibility for a think tank, but is a necessary step towards achieving it. Gutbrod says transparency can also contribute to the debate on credibility – after all, every organisation has particular interests, motivations and affiliations. The problem for the credibility of the organisation arises when these are hidden.

Bruckner, T. (2017). Think tanks, evidence and policy: democratic players or clandestine lobbyists? LSE Impact Blog.

Think tanks are seen by some as conducting sound policy research aimed at enriching policy discussions, and by others as covert lobbyists financed by corporations to suit their needs. Bruckner discusses the role that transparency can play (and is playing) in establishing which think tanks are legitimate and credible organisations and which are not.

Communications and credibility

Fogg, B.J. (2002). Prominence-Interpretation Theory: Explaining How People Assess Credibility. A Research Report from the Stanford Persuasive Technology Lab, Stanford University

Prominence-Interpretation Theory proposes that in order to assess the credibility of something (in this case, websites) people first need to notice something (prominence) and then make a judgement about it (interpretation). People only base their credibility judgements on aspects that they notice. This highlights the importance of good communications as part of a think tank’s credibility strategy.

Williams, K. (2018). Three strategies for attaining legitimacy in policy knowledge: Coherence in identity, process and outcome. Public Administration, 1–17.

The author outlines three types of coherence that enhance the legitimacy of organisations, based on an analysis of, and interviews with, individuals from 12 development research organisations. Williams argues that the credibility of knowledge-production organisations is enhanced by demonstrating a coherent identity; showing adequate processes for maintaining independence, integrity and transparency; and creating the ‘right’ products that have an impact on their audiences.

Schwartz, J. (2018). Credibility and think tank communications. On Think Tanks article.

Schwartz argues that credibility is at the heart of all effective communications (as without credibility the message will not be adequately received by the audience). The argument is that, to build its credibility, a think tank needs to: be evidence-based, and showcase this in its communications; be brand-conscious and build consistent arguments over time; and be useful, working with and for its audiences, and making its ideas easy to find and use.

Westerman, D., Spence, P. R. and Van Der Heide, B. (2014), Social Media as Information Source: Recency of Updates and Credibility of Information. J Comput-Mediat Comm, 19: 171–183. doi:10.1111/jcc4.12041

This very interesting article analyses how information available on social media impacts the perceptions of credibility. Although not directly focused on think tanks, it does offer very interesting lessons for them. The findings showed that recency of tweets positively impacts the credibility of the source, although this process is not automatic and is mediated by cognitive elaboration.

Newman, E. J., & Schwarz, N. (in press). Good sound, Good Research: How the audio quality of talks and interviews influences perceptions of the researcher and the research. Science Communication

Although the focus of this research is not think tanks per se, the implications of the findings are important for think tank communications. The authors ran an experiment in which they presented identical conference talks in high and low audio quality and asked people to evaluate them. People evaluated the research and researcher less favourably when presented with the poor-quality audio. This has important implications for think tank communications: efforts to curate the quality of their pieces will have large implications for how their audiences perceive them.

Flanagin & Metzger (2017). Digital media and perceptions of source credibility in political communication. In Kenski, K., & Jamieson, K. H. (Eds.), The Oxford Handbook of Political Communication. Oxford University Press.

The authors compare the credibility of digital versus traditional channels, and the dynamics and nature of political information online. They also reflect on the following aspects: the link between credibility and selective exposure, the potential for group polarisation, and the role of social media in seeking and delivering credible political information. They analyse these issues and offer challenges and opportunities that can be used by think tanks to better engage with the public.

Corporate Governance And Innovation: 10 Questions for Boards

To be successful, companies must be led by leaders – the CEO, top executives and board of directors – who are deeply and irrevocably committed to innovation as their path to success. Just making innovation one of many priorities or passive support for innovation are the best ways to ensure that their company will never become a great innovator – Bill George, former CEO and Chairman of Medtronic and Professor at Harvard Business School

A few weeks back I gave a talk focused on strategic response, adaptability and innovation in a COVID world to an audience of NEDs, mainly from off-shore financial services (FS) sector firms.

Given how highly regulated and risk-averse many off-shore FS firms are, questions unsurprisingly focused on the challenges of balancing risk vs innovation, how to make change happen at board level, and how to navigate director duties.

It got me thinking….

What are the ways for boards to show their real, concrete commitment to innovation and technology, and its governance?

As I discussed in my talk, all global business and technology trends point in the same direction: there is a need for more proactive and far-sighted management of innovation. Innovation for business reinforcement and growth – and for transformation in particular – is, of course, the prime responsibility of top management. Innovation governance – a holistic approach to steering, promoting and sustaining innovation activities within a firm – is thus becoming a critical management imperative.

Boards of directors also need to be more than just observers of this renewed management interest in innovation, because so much is at stake in an increasingly pervasive digital and COVID world. In a growing number of industries and companies, innovation will determine future success or failure.

Of course, boards do not need to interfere with company leaders in the day-to-day management of innovation, but they should include a strong innovation element in their traditional corporate governance missions. For example:

  • Strategy review;
  • Auditing;
  • Performance review;
  • Risk prevention; and, last but not least,
  • CEO nomination.

It is therefore a healthy practice for boards to regularly reflect on the following questions:

  • To what extent is innovation, broadly defined, an agenda item in our board meetings?
  • What role, if any, should our board play vis-à-vis management regarding innovation?

To facilitate their self-assessment, boards should answer a number of practical questions that represent good practice in the governance of innovation. According to various innovation governance experts, including Professor Jean-Philippe Deschamps at IMD Business School, author of Innovation Governance: How Top Management Organizes and Mobilizes for Innovation (2014), below are ten good-practice questions and perspectives to incorporate into any board evaluation:

1) Have we set an innovation agenda in many, if not most, of our meetings?

Board meetings are always crowded with all kinds of statutory corporate governance questions, not to mention the need to handle unexpected events and crises. So, unless innovation issues are inserted into the board agenda, they won’t be covered. It is a good practice to include innovation as a regular and open agenda item in at least a couple of board meetings per year. It should also be a key item in the annual strategy retreat that many boards set up with the top management team. Many of the following questions will provide a focus for this open innovation agenda item.

2) Do we regularly review “make-or-break” innovation projects?

In some industries, like pharmaceuticals, automotive, energy and aerospace, company boards regularly review the big, often risky innovation projects that are expected to provide future growth. They also do so because of funding issues – some of these projects may require extraordinary and long-term investments that need board approval. But in other industries, boards may be only superficially aware of the new products or services under preparation. Arguably, there may be several projects that are still small in terms of investments but could become “game-changers,” and it would be wise for the board to review them regularly in the presence of R&D leaders and innovators.

3) Do we regularly review and discuss the company’s innovation strategy?

Boards are generally aware of – and discuss – the company’s business strategy, particularly when it involves important investments, mergers and acquisitions, and critical geopolitical moves. But what about the company’s innovation strategy (if it exists and is explicit, which is not always the case)? A company’s innovation choices involve decisions that should concern the board because of their risk level and impact: think of the adoption of innovative new business models, the creation of totally new product categories, or the conclusion of important strategic alliances and partnerships for the development, introduction and distribution of new products. Management’s adoption of a clear ‘typology’ of innovation in its board communication would greatly facilitate such reviews and discussions.

4) Do we regularly review and discuss the company’s innovation risk?

Boards usually devote a significant amount of time to risk assessment and reduction. But their focus tends to be on financial, environmental, regulatory and geopolitical risk. Innovation risk may be underestimated, except in the case of large projects involving huge investments and new technologies. But internal innovation risk is not limited to new project and technology uncertainties. It can be linked to the loss of critical staff, for example. Innovation risk can also be purely external. Will competitors introduce a new disruptive technology that will make our products and processes obsolete? Will new entrants invade our market space through different, more effective business models? Will our customers expect new solutions that we have not thought about? Assessing innovation risk is critical to avoid what Ravi Arora calls “pre-science errors” – underestimating the speed and extent of market or technology changes – and, even worse, “obstinacy errors” – sticking to one’s solution too long after markets or technologies have changed. It is the duty of the board to prevent such errors.

5) Do we set specific innovation goals for management?

Boards often exert strong pressure on management by setting performance goals. But most of these goals focus on financial performance: top- and bottom-line growth, earnings per share, capital utilization ratios, and so on. Some companies add other goals to focus management’s attention on worthwhile new objectives, such as globalization or sustainability. But what about innovation, if it is increasingly a growth driver? A number of highly innovative companies have indeed included innovation goals in the CEO’s balanced scorecard. One of the most common is the percentage of sales achieved through new products, typically products introduced in the past few years. But there are many other innovation goals that can encourage conservative management teams to take more risk – for example, the percentage of R&D spent on high-risk/high-impact projects. Innovation goals matter because they largely determine the company’s long-term financial performance. It is therefore good practice to discuss these goals with the management team and retain the most meaningful ones.
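To make this concrete, here is a minimal, purely illustrative sketch in Python of how that most common innovation goal – the share of sales from recently introduced products – might be computed. The product names, figures and three-year window are invented for illustration; none of this comes from Deschamps’ work:

```python
from datetime import date

# Hypothetical product revenue records: (product, launch_year, annual_revenue)
products = [
    ("Legacy platform",   2012, 40_000_000),
    ("Analytics add-on",  2021,  6_000_000),
    ("Subscription tier", 2022,  4_000_000),
]

def new_product_sales_ratio(products, window_years=3, current_year=None):
    """Share of total sales from products launched within the last N years."""
    current_year = current_year or date.today().year
    total = sum(revenue for _, _, revenue in products)
    recent = sum(revenue for _, year, revenue in products
                 if current_year - year < window_years)
    return recent / total if total else 0.0

# With these illustrative figures, 10M of 50M comes from recent products.
print(f"{new_product_sales_ratio(products, current_year=2023):.0%}")  # 20%
```

The point is not the arithmetic but the definition: the board and management need to agree on the window (“the past few years”) and on what counts as a “new” product before the indicator means anything.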

6) Do we review innovation management issues with the CEO?

Most sustained innovation programs raise many issues. Some are managerial – how do we keep innovators motivated and rewarded? Others are organizational – how do we decentralize R&D to tap the brains of our international staff? Many deal with intellectual property – how do we practice open innovation while maintaining our IP position? Others deal with strategic alliances and partnerships – how do we share the efforts and risks of new ventures with our partners? And there are many more. The question boards should ask is: are we aware of the most acute issues that management faces as it steers the company’s innovation program? The board’s mission is of course not to interfere or become too deeply involved in these issues. Its mission is to stay informed and help the CEO and top management team reflect on their options. This is why it is essential to keep a short, open agenda item – “innovation issues” – in board meetings with a specific innovation agenda.

7) Do we expect management to conduct innovation audits?

Many companies embarking on a major innovation-boosting program rightly start with an internal audit and, sometimes, a benchmarking exercise against best-in-class competitors. Where are we deficient in terms of strategy, process, resources and tools? Do we have the right people in R&D and marketing, and do we tap their creativity effectively? Do we cover all types of innovation, i.e. not just new technologies, products and processes? Are our projects well resourced and adequately managed? Are they under control? How good is our innovation climate? These audits are extremely effective at highlighting priority improvement areas, so it is good practice for the board to suggest that management undertake such audits and keep the board updated on the findings. These audits give the board a rich perspective on the company’s innovation performance issues.

8) Do we expect management to report on innovation performance?

This question is directly related to the questions on innovation goals (5) and innovation audits (7). Once innovation goals have been set and an audit conducted, it is natural for the board to follow up and assess innovation performance. To avoid delving into too much detail, innovation performance reviews should be carried out once or twice a year on the basis of a reasonably limited number of indicators. Good practice calls for these indicators to cover several categories. A couple should be lagging indicators, measuring the current result of past efforts – the percentage of sales achieved through new products being one of them. A couple of others should be leading indicators, measuring the level of effort being made today to ensure future innovation performance – for example, the percentage of the R&D budget devoted to high-risk/high-impact projects mentioned above. One or two others should be in-process indicators – the most common being the percentage of projects managed on schedule and on budget. Finally, it is always useful to include a learning indicator to measure the responsiveness of management and its ability to progress on key issues.
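As a purely illustrative sketch, a board-level innovation scorecard along these lines might be organised as below. The four categories come from the paragraph above; the indicator names and values are invented for illustration:

```python
# Hypothetical innovation KPI scorecard, grouped by the four indicator
# categories described above. All names and values are illustrative.
scorecard = {
    "lagging":    {"sales_from_new_products_pct": 20},         # results of past efforts
    "leading":    {"rnd_on_high_risk_projects_pct": 15},       # today's bets on the future
    "in_process": {"projects_on_schedule_and_budget_pct": 70},
    "learning":   {"audit_actions_closed_pct": 60},            # responsiveness on key issues
}

for category, indicators in scorecard.items():
    for name, value in indicators.items():
        print(f"{category:>10} | {name:<38} | {value}%")
```

The structure matters more than the tooling: a handful of indicators spanning all four categories gives the board a balanced, once- or twice-yearly view without drowning it in detail.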

9) Do we know and occasionally meet our main corporate innovators?

Nothing conveys a company’s strong innovation orientation better than a visit by the entire board to the labs and offices where innovation takes place, both locally and abroad. Such visits, common among innovative companies, have a dual advantage: they make board directors aware of the real-world issues the company’s innovators face, and they give them a good understanding of the risks and rewards of innovation. They also motivate frontline innovators, who often lack exposure to top management.

10) Do we take innovation into account when appointing new leaders?

This last question is probably the most important. The nomination of a new CEO is undoubtedly one of the board’s most visible and powerful contributions to the company. It can herald a new and positive era if the new CEO’s capabilities match the company’s strategic imperatives. But it can also lead to damaging regressive moves if the new CEO’s values are innovation-unfriendly. Management author Robert Tomasko notes that CEOs often fall into one of two broad categories: fixers and growers. The former are particularly appreciated by boards when the company needs to be restructured and brought under tighter control, but fixers often place other values and priorities ahead of innovation. Growers are more interested in innovation because of its transformational and growth characteristics. This does not mean that boards should always prefer growers over fixers; there are times when a company requires a drastic performance improvement program and an iron-handed CEO. The board should, however, reflect on the impact a new CEO will have on the company’s innovation culture and performance. This is why it is so important to look at the composition of the entire management team: how many growers does it include, and in what positions? Will these senior leaders be able to counteract excessive innovation-unfriendly moves by a new fixer CEO?

If you are interested in this topic, I suggest starting with Professor Jean-Philippe Deschamps’s book Innovation Governance: How Top Management Organizes and Mobilizes for Innovation (2014).
