Start governing your GenAI usage
Many businesses arrived at their current use of AI in much the same way. Someone tried ChatGPT (or similar), found it genuinely useful, shared it with colleagues, and within weeks a significant portion of the team was quietly using a range of AI tools with limited oversight.
What began as useful experimentation now requires structure. The challenge is formalising that use without undermining the productivity gains that made AI attractive in the first place.
Why formalise your approach to GenAI use?
The informal approach was understandable when AI was new and its applications were still being explored, but as AI tools have become more powerful and their functionality even more applicable to work, a new methodology is needed.
The landscape is shifting:
- procurement teams are encountering AI governance questions in supplier questionnaires;
- clients are seeking reassurance about how their data is handled;
- the EU AI Act is now in force and UK requirements are taking shape;
- ISO 42001 (AI Management System) is gaining momentum; and
- governance aside, opportunities are lost if AI tool usage is not operationalised beyond individuals.
Without clear guidance, individuals make their own judgements about what is safe to share with AI systems, and those judgements can be inconsistent. Customer data and commercially sensitive material can find its way into tools with questionable data practices; at the other end of the spectrum, team members avoid AI altogether for fear of doing something wrong.
For organisations already operating within an ISO 27001 information security management framework, an AI governance policy is a natural and important extension of that existing commitment. ISO 27001 establishes controls around how information assets are identified, protected and managed. AI tools introduce new vectors for data exposure that those controls need to address. At twoSB, we see AI governance not as a separate discipline but as a logical development of good information security practice, one that the most forward-thinking organisations are already building into their management systems.
The principal risks with uncontrolled GenAI usage
Data leakage
Consider a scenario where an estimator pastes a client’s project specifications into an AI tool to assist with drafting a response. Those specifications are now held on a third-party server. Some providers explicitly use inputs for model training. Others retain data indefinitely. Correctly configuring systems before confidential information is shared is essential.
Quality and accuracy
AI systems can present outputs with considerable confidence even when those outputs are incorrect. It can be challenging for teams to identify when documentation looks professional on the surface but omits a critical requirement, or when AI-generated analysis draws on outdated information. These are not hypothetical concerns.
Intellectual property considerations
Questions of ownership over AI-generated content remain unsettled. Whether marketing materials produced with AI assistance might inadvertently infringe third-party copyright, or whether outputs could replicate content generated by competitors using the same tools, are matters that clear internal policies can help to address. The legal landscape continues to develop, which makes establishing a governance framework sooner rather than later a sensible precaution.
GenAI usage areas that require focus
Classify your information
Publicly available information is generally suitable for use with most tools. Customer data and commercially sensitive material requires careful handling in systems you have vetted and configured. You may decide that certain categories of information should not be processed by AI tools at all.
We recommend identifying data classifications or groups and being explicit in how they can be processed and in which systems.
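The mapping between classifications and permitted systems can be made explicit and checkable. Below is a minimal sketch, assuming three hypothetical classification labels and two illustrative tool names; your own categories and approved tools would replace them.

```python
# Hypothetical data classifications mapped to the AI systems approved
# to process them. The classification and tool names are illustrative.
ALLOWED_SYSTEMS = {
    "public": {"general-chatbot", "vetted-enterprise-ai"},
    "commercial-in-confidence": {"vetted-enterprise-ai"},
    "customer-data": set(),  # not to be processed by AI tools at all
}

def is_permitted(classification: str, tool: str) -> bool:
    """Return True if the tool is approved for this data classification."""
    return tool in ALLOWED_SYSTEMS.get(classification, set())

print(is_permitted("public", "general-chatbot"))              # True
print(is_permitted("customer-data", "vetted-enterprise-ai"))  # False
```

Even if this never becomes running code, writing the policy in this tabular form forces the explicitness the recommendation above calls for: every classification gets a decision, including "none".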
Understand where data goes
Some tools process inputs transiently and delete them immediately. Others retain data for extended periods. Some use inputs to train their models; others provide contractual commitments not to do so.
Identify the retention timeframes of each AI system, including what happens to data after chats are deleted, so you can be confident it meets your requirements.
Know the processing location
Where are the servers located? Does data cross international borders? This has material implications for GDPR compliance and, increasingly, for client security requirements.
Identify where your data will be processed and whether this meets your own requirements, your legal obligations and your clients' expectations.
Require human oversight
A suitably qualified individual should review outputs before they are relied upon or distributed externally. AI performs exceptionally well in certain areas and considerably less well in others; that distinction needs to be understood and managed.
As you start to embed AI into your organisational process and workflows, identify where these human-in-the-loop reviews need to occur.
Six steps to formalise your organisation's GenAI use
The following six steps provide a practical action plan for organisations looking to move from informal AI use to a structured approach.
Survey current usage
Ask your teams which AI tools they are using and with what types of information. The results often reveal more than anticipated. Any immediate concerns should be addressed without delay.
Review the contractual terms
Examine what provider agreements actually state regarding data retention and model training. It is worth taking the time now.
Establish a concise policy
One or two pages is generally sufficient. The policy should cover which tools are approved, what categories of information may be processed, and what verification is required. The aim is to give people clear, practical guidance they can apply in the moment. We suggest writing the policy in your own language – it should not just be a set of rules, it should support usage in a positive and controlled way.
Maintain a tools registry
A shared register works well for this purpose (whether in Notion, Excel, Monday.com or any other system). Record tool names, intended use cases, data classifications, key contractual terms, and associated costs. Review it regularly, as the landscape moves quickly.
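Whatever system holds the register, the fields above amount to a simple record structure. This sketch uses assumed field names and an invented example tool ("ExampleChat") purely to illustrate the shape:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    use_cases: list[str]
    data_classifications: list[str]  # which classifications it may process
    retains_data: bool               # does the provider retain inputs?
    trains_on_inputs: bool           # are inputs used for model training?
    monthly_cost_gbp: float
    last_reviewed: date

registry = [
    AIToolRecord(
        name="ExampleChat",  # hypothetical tool, for illustration only
        use_cases=["drafting", "summarising"],
        data_classifications=["public"],
        retains_data=True,
        trains_on_inputs=False,
        monthly_cost_gbp=20.0,
        last_reviewed=date(2025, 1, 15),
    ),
]

# The register supports the "review it regularly" point: flag entries
# not looked at within, say, the last 90 days.
overdue = [t.name for t in registry
           if (date.today() - t.last_reviewed).days > 90]
```

The point is not the code but the discipline: every tool in use has one row, every row answers the retention and training questions, and a review date makes staleness visible.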
Assign clear ownership
A named individual should be responsible for maintaining the policy, responding to queries, monitoring emerging risks, and reporting to leadership. For most SMEs this will not be a full-time role, but it does need to be someone’s responsibility.
Communicate with your teams
Explain the policy and the reasoning behind it. Keep the tone constructive and supportive. Ensure people know where to direct questions. Build the guidance into onboarding for new joiners.
Looking ahead
Regulatory expectations will likely tighten. Client procurement requirements will become more exacting. Organisations that have addressed governance proactively will find adaptation considerably more straightforward than those that have not.
AI capabilities will continue to develop. A governance framework that can accommodate new applications whilst maintaining appropriate controls will be more valuable than one that needs to be rebuilt with each new development.
Transparency requirements are likely to increase. The ability to demonstrate which information was used, which tools processed it, and how outputs were verified is becoming a genuine differentiator. At twoSB, we anticipate that by the end of 2026 this kind of auditability will be a standard expectation in client and procurement conversations rather than an optional extra.
The bottom line
Approached properly, AI governance enables people to use powerful tools with confidence, whilst protecting the organisation, its people and its clients.
Start with the foundations: clear policies, approved tools, appropriate data protections, and human oversight. Involve the people who are already using AI in developing the approach; their engagement is essential to making it work.
The transition from informal to structured AI use is not a question of whether, but when.