A guide to ISO 42001:2023
ISO 42001:2023 is the world’s first international standard for Artificial Intelligence Management Systems. As AI becomes increasingly integrated into business operations, products, and services, organisations face new challenges and responsibilities. ISO 42001 provides a framework to manage AI risks, ensure responsible AI use and build trust with clients, regulators and the public.
Who does ISO 42001 apply to?
ISO 42001 can be applied by any organisation, whether a developer (creates, trains or fine-tunes foundation AI models and algorithms), deployer (integrates AI into products or services for others), or user of AI systems (utilises AI systems for its own business needs). It is likely that organisations will wear more than one hat, for example being a deployer of one AI system and a user of another.
An organisation can choose to certify all the AI systems it develops, deploys or uses, or limit the scope to its more critical systems.
Why is ISO 42001 important?
AI is transforming the business landscape and society's expectations
AI technologies are rapidly changing how organisations operate, innovate and compete. With this transformation comes heightened scrutiny from customers, regulators, and society about how AI is developed and used. ISO 42001 helps organisations demonstrate their commitment to responsible and transparent AI.
It builds trust with clients and partners
Clients and partners increasingly want assurance that AI systems are safe, reliable, and aligned with their values. ISO 42001 certification is a clear signal that an organisation follows best practices for AI governance and risk management.
It can help you win business
As with other ISO standards, ISO 42001 certification is likely to become a requirement in tenders and contracts, especially where AI is a core part of an organisation's offering.
It helps organisations meet legal and ethical obligations
AI is subject to evolving regulations and ethical standards. ISO 42001 provides a structured approach to identifying and addressing legal, regulatory and societal requirements, helping organisations stay ahead of compliance risks.
How is the standard structured?
ISO 42001 is structured around a set of clauses (Clauses 4–10) that define the requirements for an AI Management System. These are supported by a complementary set of controls in Annex A of the standard. This format follows the harmonised structure shared with ISO 27001 (Information Security) and ISO 9001 (Quality), making it easier to deploy consistent, aligned management systems.
Clauses 4–10: The core requirements
The core components required to run the AI Management System are described in Clauses 4–10. Organisations must embed the requirements outlined in these clauses into their organisational processes. No two organisations will implement an AI Management System in the same way, so each organisation should interpret how to apply the requirements in the way that is most effective for them.
Clause 4 – Context of the organisation
Organisations must understand their context, including internal and external issues that affect AI systems (think SWOT or PESTLE). Stakeholders and their expectations should be identified, the scope of the AI management system defined, and the management system itself established. The contextual issues identified are a key driver of the risk assessment process.
Clause 5 – Leadership
Top management must demonstrate leadership and commitment to responsible AI. This includes setting an AI policy, defining roles and responsibilities, and ensuring accountability throughout the organisation.
Clause 6 – Planning
Plans should be made to address AI-related risks and opportunities. This involves conducting AI risk assessments, impact assessments, and setting measurable AI objectives. Planning for changes that could affect the AI management system is also necessary.
Risk is such a central part of the AI management system that it is worth pausing on this topic and exploring it a little more. Risk management takes two forms: a risk assessment (and treatment plan), and a detailed AI Impact Assessment for each AI system developed, deployed or used. These two elements are not just procedural requirements – they are fundamental to building trust, ensuring safety and supporting responsible innovation in the use of artificial intelligence.
Risk as a central consideration
The standard recognises that AI systems, while powerful and transformative, introduce a new spectrum of risks that can affect individuals, organisations, and society at large. These risks may include unintended bias, lack of transparency, security vulnerabilities, privacy breaches, or even broader societal impacts such as job displacement or ethical dilemmas.
To address these challenges, ISO 42001 requires organisations to conduct thorough risk assessments as a foundational activity. This means systematically identifying, analysing, and evaluating the risks associated with the development, deployment, and use of AI systems. The process should consider not only technical risks but also legal, ethical, and societal dimensions. By making risk assessment a central pillar, ISO 42001 ensures that organisations do not simply react to problems after they occur, but proactively anticipate and mitigate potential issues before they can cause harm.
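To make this more concrete, the sketch below shows one way a risk register entry could capture the elements described above. It is a minimal illustration in Python, not a format prescribed by the standard; the field names, five-point scales and likelihood-times-impact scoring are assumptions, and organisations are free to use any methodology they choose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    """One entry in an AI risk register (illustrative only; ISO 42001
    does not prescribe a format or a scoring method)."""
    risk_id: str
    description: str   # e.g. "training data under-represents some user groups"
    dimension: str     # technical, legal, ethical or societal
    likelihood: int    # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int        # assumed scale: 1 (negligible) to 5 (severe)
    treatment: str     # mitigate, transfer, avoid or accept
    owner: str
    review_date: date

    def risk_level(self) -> int:
        """Simple likelihood x impact score, a common (assumed) convention."""
        return self.likelihood * self.impact

# Example: an ethical risk identified while developing a screening model
risk = AIRisk(
    risk_id="R-017",
    description="Model may disadvantage applicants from under-represented groups",
    dimension="ethical",
    likelihood=3,
    impact=4,
    treatment="mitigate",
    owner="Head of Data Science",
    review_date=date(2025, 6, 30),
)
print(risk.risk_level())  # 12 -- high enough to warrant a treatment plan
```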
The requirement for AI Impact Assessments
In addition to general risk assessment, ISO 42001 introduces a specific and critical requirement: the AI Impact Assessment. This is a formal, written evaluation that examines the potential effects of an AI system on individuals, groups, and society as a whole. While risk assessments focus on identifying and managing threats, the AI Impact Assessment goes further by considering the broader consequences, both positive and negative, of developing, deploying or using AI technologies.
The AI Impact Assessment is designed to promote transparency, accountability, and ethical responsibility. It requires organisations to document the intended use of the AI system, analyse who may be affected and how, and identify any potential adverse outcomes. This assessment must be written, regularly reviewed, and updated as the AI system evolves or as new information becomes available.
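By way of illustration, the following sketch models the minimum content such a written assessment might record. The structure and field names are assumptions made for this example; the standard requires that the assessment be documented, reviewed and kept current, but does not prescribe a format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIImpactAssessment:
    """Illustrative record of an AI Impact Assessment; field names are assumed."""
    system_name: str
    intended_use: str
    affected_parties: list[str]           # individuals, groups and society
    anticipated_benefits: list[str]       # positive consequences count too
    potential_adverse_outcomes: list[str]
    mitigations: list[str]
    last_reviewed: date
    next_review: date                     # reviews recur as the system evolves

assessment = AIImpactAssessment(
    system_name="CV screening assistant",
    intended_use="Shortlist candidates for human review; no automated rejection",
    affected_parties=["job applicants", "recruiters", "the wider labour market"],
    anticipated_benefits=["faster shortlisting", "consistent first-pass criteria"],
    potential_adverse_outcomes=["indirect discrimination", "opaque decisions"],
    mitigations=["bias testing before each release", "human review of outcomes"],
    last_reviewed=date(2025, 1, 15),
    next_review=date(2025, 7, 15),
)
```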
Clause 7 – Support
Organisations must ensure that the necessary resources, skills and awareness are in place to manage AI responsibly. This includes training, communication, and maintaining up-to-date documentation.
Clause 8 – Operation
Plans should be implemented by controlling how AI systems are developed, deployed, and maintained. This includes ongoing risk and impact assessments, and ensuring operational controls are in place. Clause 8 is effectively the ‘doing’ part of the risk assessment process where controls are applied.
Clause 9 – Performance evaluation
The performance of the AI management system should be monitored and measured. Internal audits and management reviews should be conducted to ensure the system is effective and continually improving.
Clause 10 – Improvement
Continual improvement of the AI management system should be pursued by addressing nonconformities and taking corrective action when issues arise.
Annex A: Controls for responsible AI
Annex A complements Clauses 4–10 by providing a comprehensive set of controls to address the unique risks and requirements of AI systems. It can be seen as a catalogue of controls that can be selected and tailored, providing guidance on how to implement strong and compliant AI systems. Annex A is informative (not mandatory by itself), but organisations are expected to consider each control when implementing their management system and to record which controls apply, and why, in a Statement of Applicability. If any are to be excluded, there must be adequate and documented justification.
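A Statement of Applicability is often maintained as a simple table. The sketch below shows one possible representation; the control references and fields are illustrative assumptions, not taken from the standard's text.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One row of a Statement of Applicability (illustrative format)."""
    control_ref: str      # Annex A reference (example values below are assumed)
    applicable: bool
    justification: str    # document the reason for inclusion or exclusion

statement_of_applicability = [
    SoAEntry("A.5.x", True,
             "We deploy AI systems, so impact assessment controls apply"),
    SoAEntry("A.6.x", False,
             "No in-house development; development-stage controls excluded"),
]
```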
There are 38 controls broken into 9 sections:
A.2 – Policies related to AI
Annex A.2 aims to achieve management direction and backing for the AI Management System. Organisations are required to create a formal AI Policy that outlines their commitment to delivering fair, responsible, transparent, and ethical AI. The AI policy should align with other organisational policies and be regularly reviewed and updated to remain effective and relevant.
A.3 – Internal organisation
Annex A.3 focuses on establishing clear accountability for AI within the organisation. It requires organisations to define roles and responsibilities for those involved in AI activities and to set up procedures for reporting concerns or issues related to AI systems. This ensures responsible oversight and a clear escalation path for AI-related matters.
A.4 – Resources for AI systems
Annex A.4 ensures that all resources critical to the development and operation of AI systems are properly managed. Organisations must document AI components, manage data and tooling resources, oversee system and computing infrastructure, and ensure that staff working with AI have the necessary skills and competencies.
A.5 – Assessing impacts of AI systems
Annex A.5 requires organisations to evaluate the potential impacts of AI systems on individuals, society, and other systems. Organisations must conduct impact assessments, document findings, and address any identified risks or negative effects. This helps ensure that AI systems are deployed responsibly and with consideration for broader societal consequences.
A.6 – AI system life cycle
Annex A.6 covers the entire life cycle of AI systems, from design and development through deployment, maintenance, and eventual decommissioning. Organisations must implement controls at each stage to ensure that AI systems are safe, effective, and managed responsibly throughout their operational life. Verification and validation activities should be carried out at appropriate points in the life cycle to confirm that AI systems meet defined requirements and perform as intended. Additionally, organisations should maintain comprehensive documented records, including technical documentation, to support transparency, traceability, and ongoing management of AI systems.
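As a rough sketch of what life cycle records could look like, the example below tracks verification and validation activities per stage. The stage names and fields are assumptions for illustration; the standard leaves the exact format to the organisation.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Stage(Enum):
    # Assumed stage names; organisations define their own life cycle model
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MAINTENANCE = "maintenance"
    DECOMMISSIONING = "decommissioning"

@dataclass
class VVRecord:
    """Verification/validation record for one life cycle stage (illustrative)."""
    stage: Stage
    activity: str       # e.g. "accuracy evaluated against acceptance criteria"
    passed: bool
    evidence: str       # reference to the documented results
    performed_on: date

records = [
    VVRecord(Stage.DEVELOPMENT, "bias and accuracy testing on held-out data",
             True, "reports/eval-2025-03.pdf", date(2025, 3, 2)),
    VVRecord(Stage.DEPLOYMENT, "staged rollout with user acceptance testing",
             True, "reports/uat-2025-04.pdf", date(2025, 4, 10)),
]
```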
A.7 – Data for AI systems
Annex A.7 emphasises the importance of data quality, provenance, and security for AI systems. Organisations are required to establish data governance policies, manage data collection and processing, ensure data integrity, and address privacy and security concerns.
A.8 – Information for interested parties
Annex A.8 ensures that stakeholders are kept informed about AI systems. Organisations must communicate the purpose, functionality, and limitations of their AI systems, disclose associated risks, and actively engage stakeholders in the development and deployment process. This builds trust and supports responsible AI adoption.
A.9 – Use of AI systems
Annex A.9 focuses on ensuring the responsible use of AI systems within the organisation. Organisations are required to define and document clear processes that guide how AI systems are used responsibly. Objectives for responsible use must be identified and documented to provide direction and oversight. Additionally, organisations must ensure that AI systems are used strictly in accordance with their intended purposes and the accompanying technical documentation, helping to prevent misuse and support compliance with ethical and regulatory requirements.
A.10 – Third-party & customer relationships
Annex A.10 ensures that organisations remain accountable and that responsibilities and risks are clearly defined and managed when third parties are involved at any stage of the AI system life cycle. Organisations must allocate responsibilities for AI system activities between themselves, partners, suppliers, customers, and other third parties to ensure clarity and accountability. There should be a defined process to confirm that any services, products, or materials provided by suppliers align with the organisation’s standards for responsible AI development and use.
Need more guidance?
Our consultants are available to support you through the process. We can help you perform a gap analysis to understand where your strengths and weaknesses lie, host educational sessions to talk you through the requirements of the standard, or help you with a full implementation.
Need more information on how the Implementation and Certification process works?