Adopting an AI tool is more than a technology decision. When you connect a system to your email, CRM, or customer records, you are extending trust to a third party. Their infrastructure and security controls effectively become part of your attack surface.
Most businesses do not vet that trust carefully enough before signing up. This guide sets out the questions worth asking before any AI tool goes live, and the red flags that should give you pause.
Why AI Scrutiny Matters
Traditional software carries risk, but AI integrations introduce specific characteristics that warrant extra attention:
- Broad Access Permissions: AI tools often require read/write access to your entire inbox or file system to be useful, which is a significant jump in risk profile compared to standard tools.
- Opaque Decision Making: AI outputs can be difficult to audit, making it harder to detect if data has been mishandled or compromised.
- Market Immaturity: Many new AI vendors prioritise speed over security maturity, often lacking independent testing or formal incident response plans.
The Vetting Checklist
Data Handling and Storage
Confirm exactly where your data is stored. For UK businesses, data residency is critical for GDPR compliance. Ask specifically whether your business data is used to train the vendor's models; many vendors bury this in the small print.
Security Controls and Testing
Ask for ISO 27001 certification or a SOC 2 Type II report. A vendor that lacks independent validation is asking you to take their word for it. Request a summary of their most recent independent penetration test.
Incident Response
You may be legally required to notify the ICO within 72 hours of a breach. You cannot meet this obligation if your vendor takes a week to alert you. Verify their notification timelines and documentation before committing.
Red Flags to Watch For
- Vague or defensive answers to straightforward security questions.
- Permissions requests that are much broader than the stated use case requires.
- No Data Processing Agreement (DPA) available for review.
- Terms of service that allow the vendor to use your sensitive data for model training.
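The over-broad permissions red flag is one of the few on this list you can check mechanically. A minimal sketch of that check, comparing the OAuth scopes a vendor requests against the narrow set your use case actually needs (the scope URLs are real Google Workspace examples; the allowlist itself is an assumption you would tailor to your own platform and use case):

```python
# Allowlist of scopes the stated use case genuinely requires.
# Hypothetical example: an AI tool that only needs to read email.
NEEDED_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",  # read-only inbox access
}

def excessive_scopes(requested: set[str], needed: set[str] = NEEDED_SCOPES) -> set[str]:
    """Return any requested scopes beyond what the use case requires."""
    return requested - needed

# A vendor asking for full mail access when read-only would do:
requested = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://mail.google.com/",  # full read/write/delete access
}
flagged = excessive_scopes(requested)
print(flagged)  # the full-access scope is the one to question
```

Anything this flags is worth a direct question to the vendor: why does the tool need it, and what happens if you refuse?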
Periodic Reviews
The same standards apply to AI features built into software you already use, such as Microsoft 365 or your CRM. Vendors often update their terms of service to reflect new AI capabilities. A periodic review of these integrations ensures your data remains protected as tools evolve.
Need an AI Security Review?
Vetting vendors requires specialised knowledge. We help UK firms assess their integrations and secure their agentic AI setups.