Section 1: Introduction
Artificial intelligence (“AI”) is increasingly being used by large companies to enhance decision-making, increase efficiency, optimise supply chains, and uncover new growth opportunities.
As technology continues to evolve, the question is no longer “if” businesses should adopt AI, but how quickly they can deploy it effectively, particularly as “AI-first” delivery models emerge and AI regulation intensifies. Whilst there is still no technology capable of running entire transactions end to end, AI is increasingly being used in specific stages of the deal process where it can add real value by enhancing the insight and analysis lawyers contribute to complex transactions.
This article sets out:
- Key governance considerations for large groups and listed companies relating to the use and implementation of AI;
- Relevant market commentary for boards on adopting and managing AI; and
- Insights on market practice amongst the largest listed companies in the UK when reporting on AI.
The themes in this article are relevant for UK listed companies, large privately held companies and other multinationals.
Section 2: What does AI comprise?
The Organisation for Economic Co-operation and Development (the “OECD”) defines AI as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
AI is not a single technology or a “one-stop shop”; it comprises a complex and evolving set of fields, encompassing a broad, interconnected range of algorithms, models and processes. Key terms often referred to under the AI umbrella include:
- Machine learning: This refers to an artificial intelligence technique that enables computers to learn and adapt without following explicit instructions, using algorithms and statistical models to analyse and draw inferences from patterns in data.
- Large Language Models: These are AI models trained on very large volumes of text that can understand and generate natural language. They underpin Generative AI (or “GenAI”) tools, which create new content such as text, code and images.
- Agents: This refers to artificial intelligence systems that can operate autonomously, set goals, and take action with minimal human intervention to achieve those goals. Unlike other AI which primarily responds to commands, agentic AI can plan and execute tasks, and adapt its behaviour based on changing conditions and real-time data.
Other key techniques and applications include text analytics, chatbots, speech recognition, supervised learning, deep learning, robotics, and transformer architectures, amongst others.
Together, these components form the backbone of modern AI, driving innovation across industries and reshaping how businesses and individuals interact with technology.
“AI” therefore acts as an all-encompassing umbrella term describing a wide range of technologies that enable machines to engage in tasks that would otherwise require human input, and it is used accordingly in this article.
Section 3: Market commentary on listed company boards’ role regarding AI
Many private and public stakeholders are showing a growing interest in how companies are addressing the challenges and leveraging the opportunities arising from the deployment of AI. The predominant concerns relate to companies using AI in a safe, ethical, and sustainable manner, as well as to companies’ ability to effectively navigate AI-related challenges whilst maximising the benefits of its integration.
Highlighted below are the views and expectations of the UK proxy bodies and the Financial Reporting Council (the “FRC”):
- Glass Lewis UK Benchmark Policy 2026: According to Glass Lewis’ UK Benchmark Policy, companies should consider adopting robust internal frameworks that incorporate ethical considerations and ensure a sufficient level of board oversight of AI. Boards may seek to ensure effective oversight and address skills gaps through continued board education and/or by appointing directors with AI expertise. They should provide clear disclosure concerning the role of the board in overseeing AI-related issues, including how companies are ensuring directors are fully versed on this issue. Oversight may be effectively conducted by specific directors, the entire board, a separate committee, or combined with the responsibilities of a key committee. In the absence of material incidents, Glass Lewis has not issued any general voting recommendations.
- ISS Global Benchmark Policy Survey: The ISS Proxy Voting Guidelines and Benchmark Policy Recommendations do not currently refer to AI or any related voting recommendations. However, as part of its 2025 Global Benchmark Policy Survey, ISS asked whether companies should publicly share how their boards oversee AI business or AI implementation systems with the goal of managing AI-related risks. A majority of respondents, which included investors and non-investors from various countries including the UK and the US, indicated that they would consider this necessary “only in cases where AI plays a significant role in the business or business strategy (where businesses already have or plan to implement significant AI use)”.
- FRC Commentary: These findings are in line with the position of the FRC which has reiterated in its latest Annual Review of Corporate Reporting that companies should generally include only “material and relevant information”, noting that good quality reporting does not necessarily require a greater volume of disclosure. On AI specifically, the FRC set out an expectation in its updated Corporate Governance Code Guidance that boards will at least consider whether controls over emerging technologies like AI are material and should therefore be monitored, reviewed and reported on. The FRC also noted in its 2024 Review of Corporate Governance Reporting that “it is important that boards have a clear view of the responsible development and use of AI within the company and the governance around it”. This may require companies to “upskill, improve access to training or draw on the expertise of management and specific company knowledge”.
- Pensions UK’s Stewardship and Voting Guidelines 2025: According to Pensions UK’s Stewardship and Voting Guidelines 2025, investors should consider voting against the re-election of a director where there is evidence of “egregious conduct” around the development and deployment of AI. Pensions UK recommends that companies should have a governance framework for the acceptable use of AI, implement robust data anonymisation techniques and adopt a “zero-trust” approach when selecting AI tools and third-party services. Investors should assess whether companies have board-level accountability for AI, disclose responsible use frameworks, and align with emerging standards on transparency and fairness.
The growing interest and scrutiny from investors and proxy bodies in companies’ AI risk assessment and governance procedures directly reflect the increasing prevalence of AI-related disclosures and reporting. Indeed, market insights indicate a clear consensus that AI is becoming a permanent consideration for boardroom discussions and companies’ strategic outlook, as is explored in the following section.
Section 4: Emerging themes from listed UK company annual reporting
AI is already increasingly considered in the annual reports of listed UK companies, with key themes emerging on how listed companies report on AI and what is included in these reports:
Strategic report and stakeholder involvement: Almost all FTSE 30 companies mentioned AI in the FY2025 reporting year, showcasing its importance in the market, with 28 companies referencing AI in their strategic reports:
- Board Strategy: AI is increasingly a strategic priority for boards, reflected in board discussions and deep-dive sessions on generative AI. Companies report plans to integrate AI into board training programmes so that directors understand the risks associated with AI, what responsible AI use means for the future success of the company, and how AI can inform strategic decisions on growth opportunities.
- GenAI and Machine Learning: Some companies make reference to extensive AI activities, applying “advanced algorithms such as machine learning and natural language processing to provide professional customers with the actionable insights they need to do their jobs, for example, in the form of extractive AI insights to help them make speedy and accurate decisions, or generative AI output to reduce or automate their workload”.
- Pilot Initiatives: Another organisation has initiated pilot AI initiatives with a global network of specialists to monitor restoration efforts with the objective of collecting primary data through drones, audio sensors and artificial intelligence.
- Strategic Opportunities: Opportunities have been identified by large and listed companies in relation to transforming manufacturing processes, brand-building, increasing efficiencies and even increasing responsiveness to cyber, IT and data privacy risks.
- Strategic Partnerships: It is clear that listed and large companies in the UK are actively entering AI-driven partnerships to drive product development, and industry initiatives to enhance technology solutions, with the aims of improving operational efficiency and addressing ethical considerations. Companies are collaborating, or seeking to collaborate, with AI technology providers to develop, acquire and invest in AI tools that improve administrative efficiency, assist with document and biometric verification, and improve safety and reporting.
- Regulatory Development: Some organisations are actively participating in global regulatory initiatives on AI and collaborating with other organisations to gain insight into future regulation and to inform their AI governance strategies.
- Risk: A key emerging theme is the risk profile associated with AI, which is consistently identified as an emerging and material risk factor in the annual reports of UK listed companies and across other large company reporting.
- Director experience: In the previous reporting year, a small number of companies also addressed AI experience and board composition, with some reporting the number of non-executive and independent non-executive directors who are knowledgeable in AI and can provide input on AI matters. This is expected to become an increasing trend in the next reporting year.
- Governance and Training: Many organisations are moving beyond simply identifying AI-related risk factors to implementing formal governance structures to address AI concerns:
- AI Governance Frameworks: Many companies have already introduced “Responsible AI frameworks” which apply globally to prevent misuse of AI internally and ensure AI use is compliant with internal policy. Other companies have reported on their inclusion of AI in risk management and ethics programmes as a key goal for the next financial year, with companies currently developing principles-based rules, AI codes of conduct and planning to publish “AI Standard Operating Procedures”.
- Board Training and Meeting Agendas: Certain companies plan to hold board training sessions and expert-led discussions to understand responsible use of AI in the next financial year, planning deep-dive sessions on GenAI to leverage technological capabilities, including digital, data and AI in order to achieve a competitive advantage. Some listed companies already include AI as a standing item for board meetings, with AI strategy meetings also being a key agenda item on board “strategy days”.
- Employee Training: Several companies are prioritising AI capability within their organisations through employee education, including structured training programmes, learning hubs and AI awareness initiatives. One company reported that its AI training hub has been established to “democratise AI awareness and knowledge building by providing access to all colleagues to immersive learning opportunities, interactive simulations and practical case studies”.
There is a clear, concerted effort for both listed and large companies to ensure that the AI technologies used are “fair, safe, transparent, explainable, accountable and sustainable, and that they comply with existing legislation and any emerging legislation”.
- Committee work: A number of organisations have established, or are seeking to establish, specialised committees and/or governance councils to oversee AI-related matters, reflecting AI’s growing importance in ethical, operational and strategic contexts. These committees span areas including R&D, Technology and Innovation, and Audit, and monitor AI adoption, governance frameworks and strategic direction, balancing the exploitation of AI opportunities against responsible use by reference to internal and governance frameworks. Certain organisations are also using committees to acquire AI and machine-learning modelling tools for research purposes.
- Director and employee performance: AI has been reported by certain companies as forming part of board evaluation and employee performance reviews, with use of AI reported as a justification for director and employee bonuses. Certain organisations are using AI to build career frameworks for employees, identifying skills gaps and offering personalised career paths and learning opportunities. Other organisations have provided AI coaches to provide personalised and professional guidance to employees.
Section 5: How can effective board oversight be achieved?
It is the role of the board to proactively ensure that governance arrangements and internal controls keep pace with the adoption and utilisation of AI across the organisation. In particular, where AI influences strategic decision-making or consumer outcomes, such decisions will fall within the scope of the directors’ statutory duties. Responsible AI adoption requires boards to ensure that AI strategies promote long‑term company success, are aligned with corporate purpose and governance arrangements, and appropriately balance efficiency gains against wider stakeholder, ethical, environmental and reputational considerations.
Some of the topics boards may be considering include:
- Do we know where and how AI is used in our business, including through third party tools?
- Do we have AI expertise on our board?
- What is our AI strategy and use policy?
- Do we have an adequate incident response and remediation plan for when AI goes wrong?
Directors’ oversight of AI is not only reflected in board decision‑making but is also expected to be reported on through companies’ external reporting under the Companies Act 2006 and the UK Corporate Governance Code 2024. As AI use amongst companies continues to proliferate, disclosures detailing AI adoption, use and risk management are likely to become increasingly relevant.
This has also been echoed by the Institute of Directors, which has published a Business Paper on “AI Governance in the Boardroom” setting out 12 principles for how boards should manage and oversee AI.

Section 6: Concluding remarks
As is clear from the growing interest and scrutiny of both public and private stakeholders as well as indicative market trends on reporting, AI technologies are at the forefront of companies’ strategy and are being firmly – and permanently – placed on boards’ agendas. As companies adjust to the intensifying speed of development and adoption of AI and explore its potential for long-term value creation, corporate reporting is increasingly reflective of these trends and highlights the crucial role AI has assumed in the corporate landscape and beyond.
This article is substantively based on an article called Managing Machines – Governance in the Age of AI which was published by Baker McKenzie in January 2026 and can be accessed here.
Section 7: Baker McKenzie Contacts



About Baker McKenzie
Baker McKenzie is a global law firm that empowers clients to compete confidently in the international marketplace. The firm delivers comprehensive, practical legal advice that cuts through complexity with clear, actionable guidance.
