Managing AI Technology in a Way That Makes Your Clients Trust You – Part 1

Thomas Dold | 8th October 2024

The explosion of AI technology is taking the world by storm. However, there is a lot of apprehension around the amount of data it consumes, the confidentiality of that data and the potential impacts of its use.

Enter ISO 42001:2023 – Artificial Intelligence Management System. This ISO standard looks to ensure organisations appropriately control their use or development of AI by taking into account the potential “Impacts” the AI technology could have.

In this series we will break down how ISO 42001:2023 expects organisations to manage their AI technology.

Context of the Organisation

Understanding the Context

As with most ISO standards, the first area to address is understanding how this standard (and the use of AI) applies to your business, and the intended purpose of the AI technology that is developed, provided or used by the organisation.

To understand the organisation and its context, it can be helpful to determine the organisation’s role (or roles) relative to the AI technology. These roles can include, but are not limited to, one or more of the following (a simple way of recording this determination is sketched after the list):

  • AI providers, including AI platform providers, AI product or service providers.
  • AI producers, including AI developers, AI designers, AI operators, AI testers.
  • AI customers, including AI users.
  • AI partners, including AI system integrators and data providers.
  • AI subjects, including data subjects and other subjects.
  • Relevant authorities, including policymakers and regulators.
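One lightweight way to capture this determination is a small register of which roles your organisation plays for each AI system in scope. The sketch below is purely illustrative: the role labels mirror the list above, while the system names and the register structure are assumptions for the example, not requirements of the standard.

```python
from enum import Enum


class AIRole(Enum):
    PROVIDER = "AI provider"
    PRODUCER = "AI producer"
    CUSTOMER = "AI customer"
    PARTNER = "AI partner"
    SUBJECT = "AI subject"
    AUTHORITY = "Relevant authority"


# Illustrative register: which roles the organisation plays for each AI system.
# The system names are hypothetical examples.
ai_role_register = {
    "customer-support chatbot": {AIRole.CUSTOMER, AIRole.PROVIDER},
    "internal fraud-detection model": {AIRole.PRODUCER, AIRole.CUSTOMER},
}

for system, roles in ai_role_register.items():
    print(f"{system}: " + ", ".join(sorted(r.value for r in roles)))
```

However you record it, the point is the same: the roles you play drive which requirements and impacts you need to consider.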

Internal & External Issues

Next, as part of understanding the context, you need to identify and manage the internal and external issues the organisation faces that are relevant to the AI technology, their possible impacts on achieving your intended outcomes, and how you are going to manage them.

For this I tend to use a PESTLE analysis, as it helps ensure all the appropriate areas are considered and accounted for; a simple issues register is sketched after the list. PESTLE stands for:

  • Political
  • Economic
  • Social
  • Technological
  • Legal
  • Environmental
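As a purely illustrative sketch, the output of a PESTLE exercise can be kept in a simple issues register. The fields and the example entries below are assumptions for demonstration only; the actual issues, impacts and treatments will depend entirely on your organisation and its use of AI.

```python
from dataclasses import dataclass


@dataclass
class Issue:
    category: str      # one of the PESTLE categories
    origin: str        # "internal" or "external"
    description: str   # the issue as it relates to the AI technology
    impact: str        # possible impact on achieving intended outcomes
    treatment: str     # how the organisation intends to manage it


# Hypothetical entries for illustration only.
issues = [
    Issue("Legal", "external",
          "New AI regulation may apply to our use of customer data",
          "Could restrict how the model is trained or deployed",
          "Monitor legislation; review with legal counsel quarterly"),
    Issue("Technological", "internal",
          "Limited in-house expertise in model monitoring",
          "Drift or bias may go undetected",
          "Train staff; evaluate third-party monitoring tooling"),
]

for i in issues:
    print(f"[{i.category}/{i.origin}] {i.description} -> {i.treatment}")
```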

Interested Parties

The organisation needs to understand and document who the relevant interested parties to the AI Management System are. These typically tend to be groups that fall into at least one of the following categories (an illustrative register is sketched after the descriptions below):

  • Influencers

Parties who provide input into the scope, objectives, requirements and methodology of operations.

  • Providers

Parties whose operations provide or meet the requirements of your AI Management System and are thus affected by policies, processes, procedures and standards.

  • Consumers  

Internal and external consumers of the product or services within the scope of your AI Management System.

  • Dependents

Parties whose interests are, or need to be, protected by the operations of the AI Technology, but who may not be direct consumers of the product or services.
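For illustration only, an interested-parties register keyed by these four categories might look like the sketch below. The parties and their expectations are hypothetical examples; your own entries will follow from your organisation’s context and the scope of your AI Management System.

```python
# Hypothetical interested-party register, keyed by the categories above.
interested_parties = {
    "Influencers": [
        ("Data protection regulator", "Lawful, transparent processing of personal data"),
    ],
    "Providers": [
        ("Cloud ML platform vendor", "Clear requirements for hosting and operating models"),
    ],
    "Consumers": [
        ("End users of the AI service", "Reliable, explainable outputs"),
    ],
    "Dependents": [
        ("Individuals whose data trains the model", "Protection of their rights and interests"),
    ],
}

for category, parties in interested_parties.items():
    for name, expectation in parties:
        print(f"{category}: {name} - {expectation}")
```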

Determining the Scope of the AI Management System

The scope of the AI management system needs to reflect the organisation’s activities that form the requirements around the AI management system: leadership, planning, support, operation, performance evaluation, improvement, controls and objectives.

This draws a boundary and provides a frame of reference when assessing the effectiveness of the management system you’ve put in place.
