If your SME has no AI policy, you’re already taking a risk. Staff may be using tools like ChatGPT or Microsoft Copilot to draft emails or analyse data, which means client or internal information could end up training someone else’s AI. It’s probably already happening, and you may not even know.
As a local example, Kent County Council has published a public AI policy covering definitions, risks such as data protection and bias, and mandatory checks including data‑protection impact assessments. The council insists any AI project goes through its ICT compliance team and equality checks. This matters to your business too: if a public body in Kent sets that standard, your clients will expect you to match it.
AI tools can leak sensitive information
In 2023, Samsung engineers accidentally pasted confidential source code and internal details into ChatGPT, which led to an internal ban on external AI tools. If that can happen at Samsung, it can happen at an SME. Your data, your reputation and your clients’ trust are on the line.
The council also warns that AI use in contracts can bring legal risks: unclear decision‑making paths, intellectual‑property issues, and possible bias. It recommends strong policies and supplier clauses to protect ownership and accountability.
What this means for your business
UK law demands data protection (the UK GDPR and the Data Protection Act 2018). If AI tools make decisions within your business, you must be able to show fairness, transparency and oversight, with a genuine human check on automated outcomes. At this stage, AI isn’t optional: it’s built into Word, Excel and Teams, so your business is using AI even if you never asked for it.
Legal experts in Kent can help you draft a policy that includes:
- Clear definitions (AI, LLMs, automated decisions)
- Approved tools and banned usage
- Data‑protection checks
- Procurement clauses with suppliers
- Oversight and accountability roles
Seek guidance from a trusted supplier on embedding these rules into your contracts and corporate policies.
New AI developments seem to arrive almost monthly, so start by identifying the AI already in your daily tools. Talk to staff. Define who can use what. Declare what information is off limits. Work with a contract specialist to build a simple AI policy document, link it to your data‑protection and backup strategy, and review it regularly as the tools change.
If your business lacks a clear AI policy, you’re taking on risk, not avoiding it. Jon and the team at Crosstek support Kent SMEs in auditing AI use in Microsoft 365, tightening data protection, and putting continuity and backup in place in case something goes wrong. No jargon. Just protection you can rely on.
You may also like: When AI Goes Wrong