2 July 2025
When AI Listens In: The Risks of Unapproved Technology and Data Sharing
By Jess Pembroke, Director of Information Law Services
I recently read a Sky News investigation that made me pause and then imagine a scene from the early days of the NHS. Picture this: it’s the 1950s, and your GP says, “I’ll respect your confidentiality, of course, but I just need to switch on this intercom so a team of transcribers in the next room can write down everything you say.” You’d probably walk straight out. Fast forward to today, and something eerily similar is happening; only now, the “intercom” is an unapproved AI tool.
Doctors across the UK are using AI software to record and transcribe patient consultations. The idea is simple: reduce paperwork, free up time, and let clinicians focus on care. But the reality? A patchwork of tools, some of which don’t meet NHS governance standards, is being used in live clinical settings without proper oversight or assurance.
As one GP put it, “We’re not dinosaurs; we’re pro-AI. But it has to be safe and secure.” [1]
In defence of my former NHS colleagues, they’re not acting recklessly. They’re being actively encouraged by both Government and NHS England to innovate and embrace technology to help manage the overwhelming demand on services. But here’s the catch: there’s no central approval system for software. That responsibility falls to individual Trusts and GP practices.
I’ve no doubt there are brilliant Information Governance professionals working hard to assess these tools and provide guidance, but the pace of AI proliferation, combined with pushy sales tactics from suppliers, inevitably leads to tools being introduced without the data protection considerations being thought through.
What Should Organisations Do?
Here are a few practical steps:
- Supplier Due Diligence: Ask tough questions. Has the tool been tested for clinical safety? Is it compliant with NHS standards? The NHS provides a Data Security and Protection Toolkit to help organisations assess and demonstrate their compliance with data protection and cyber security requirements; make sure your suppliers are using it.
- Data Protection Impact Assessment (DPIA): Use our free AI DPIA template (AI-DPIA.docx) to assess any AI products.
- Training: Ensure staff understand the risks of using AI tools, especially those not officially approved.
- Policies: Have a policy for staff on AI use; see our template (AI-Chatbots-Staff-Policy-1.docx).
- Cyber Resilience: Work with IT to ensure AI tools are secure. AI-generated cyberattacks and voice cloning scams are on the rise.
Please get in touch if you need support assessing the impact of any proposed AI solution; we’re here to help you navigate the legal, ethical, and practical considerations.
Training: AI & Information Law Course
Are you looking to learn more about the legal and regulatory requirements around AI? Our AI & Information Law course is designed to help you:
- Understand AI and data protection challenges
- Apply ethical and legal frameworks
- Promote responsible AI use
- Manage the possible effects of AI-driven decisions on individuals and their rights
- Implement tools for better internal governance and practice
Next running 15 October, 9:30am–1pm. To find out more and book your place, visit https://naomikorn.com/courses/ai-information-law/. Or we can come to you, contact us about in-house options.
If you have any queries or would like more information about our training, please get in touch. We look forward to hearing from you!
[1] “Doctors are using unapproved AI software to record patient meetings, investigation reveals”, Sky News.