I use personal AI every day to manage my hearing difficulties. It is not a luxury. It is not a productivity hack. It is how I participate fully in meetings, consultations, and conversations that would otherwise leave me guessing at half the content.
I am also a researcher in AI governance for healthcare. So I sit squarely in the tension that every CIO, CTO, and governance lead is now navigating: the tools your people need to function are the same tools your risk frameworks are designed to block.
The Shadow AI Problem Is Not What You Think
When we talk about “Shadow AI” in the boardroom, the framing is usually about rogue staff circumventing policy. The reality is more uncomfortable. Staff are using ChatGPT to summarise complex patient notes because your approved systems take four clicks too many. A colleague with dyslexia is using an AI writing assistant because the alternative is spending three times longer on every email. A patient is recording their consultation on their phone because they know they will forget the diagnosis before they reach the car park.
These are not security incidents waiting to happen. They are signals that your organisation’s digital infrastructure has left people behind.
The governance question is not “how do we stop this?” It is “how do we make this safe?”
Where the Line Actually Sits: The Meta Ray-Ban Problem
The reason AI wearables triggered alarm across healthcare and corporate governance was not the AI itself. It was the absence of what I call Bystander Consent: the principle that a device operating in shared space must make its data capture visible to, and controllable by, everyone present, not just the wearer.
Meta’s Ray-Ban glasses crossed that line on three fronts. First, pervasive recording: always-on cameras and microphones with no reliable signal to those being recorded, which in a clinical setting means an instant breach of patient confidentiality. Second, data leakage: personal AI devices routinely transmit captured data back to parent models for training, turning sensitive organisational and patient data into public training material. Third, social trust: people behave differently when they believe they are being watched, and that shift erodes the candour that clinical and professional relationships depend on.
These are legitimate governance concerns. But banning the entire category of personal AI would be like banning reading glasses because someone might hide a camera in the frames.
A Better Framework: BYO-PAT
What organisations need is not a blanket prohibition. It is a Bring Your Own Personal Assistive Technology (BYO-PAT) policy that distinguishes between modes of operation and levels of data exposure.
Contextual Permissions, Not Blanket Bans
The critical distinction is between assistive mode and recording mode.

Assistive mode means AI that processes data locally, on-device, and provides feedback only to the user: real-time captioning for a hearing-impaired staff member, text-to-speech for a visually impaired clinician, cognitive scaffolding for someone with ADHD. No data leaves the device. No bystander is recorded.

Recording mode means AI that uploads to cloud infrastructure for processing, storage, or model training. This mode requires organisational vetting, explicit consent protocols, and a physical indicator (a visible LED that cannot be obscured or disabled) so that everyone in the room knows capture is active. The distinction is also enforceable, not just aspirational: it can be managed with port routing or protocol identification within an organisation’s own networks.
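To make that concrete, here is a minimal sketch of the policy logic, assuming hypothetical device registries and vendor endpoints (none of the MAC addresses or domains below are real products or services):

```python
from ipaddress import ip_address

# Hypothetical registries -- real lists would come out of vendor vetting
# and device onboarding, not hard-coded constants.
CLOUD_AI_UPLOAD_DOMAINS = {
    "upload.example-ai-vendor.com",
    "training.example-ai-vendor.com",
}
ASSISTIVE_MODE_DEVICES = {"aa:bb:cc:dd:ee:01"}      # vetted as local-only
APPROVED_RECORDING_DEVICES = {"aa:bb:cc:dd:ee:02"}  # consent protocol in place

def egress_allowed(device_mac: str, dest_host: str, dest_ip: str) -> bool:
    """Decide whether an outbound connection from a personal AI device
    should be permitted, based on the device's registered mode."""
    stays_local = ip_address(dest_ip).is_private

    if device_mac in ASSISTIVE_MODE_DEVICES:
        # Assistive mode promises on-device processing: anything leaving
        # the local network contradicts that promise and is blocked.
        return stays_local

    if dest_host in CLOUD_AI_UPLOAD_DOMAINS:
        # Traffic to a known upload/training endpoint is recording-mode
        # behaviour: only vetted, consent-governed devices may proceed.
        return device_mac in APPROVED_RECORDING_DEVICES

    # Everything else falls through to the normal firewall policy.
    return True

# An assistive-mode device reaching a local captioning server: allowed.
print(egress_allowed("aa:bb:cc:dd:ee:01", "captions.local", "192.168.1.20"))   # True
# The same device attempting a cloud upload (placeholder public IP): blocked.
print(egress_allowed("aa:bb:cc:dd:ee:01", "upload.example-ai-vendor.com", "1.2.3.4"))  # False
```

In practice this check would sit inside a DNS filter or an SNI-inspecting egress proxy rather than a standalone script, but the point stands: the assistive/recording distinction is mechanically checkable on infrastructure the organisation already controls.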
The BYO-AI Governance Layer
Sitting above BYO-PAT is the broader BYO-AI policy question: how does the organisation govern any personal AI tool brought into the working environment? Personal subscriptions to AI tools can even become cost-efficiency vehicles for organisations, rather than merely risks to contain.
Three principles should anchor this:
- Privacy Mode Verification. Only permit personal AI devices and applications that offer a verified privacy mode where captured data is not used for model training. If the vendor cannot demonstrate this technically, the device does not enter the building.
- Transparent Usage Agreements. Staff using AI for disability support or personal productivity should operate under a clear agreement that specifies how they will protect others’ data while using their tools. This is not about surveillance of the employee. It is about making the social contract explicit.
- Organisational Substitution. If staff are reaching for Shadow AI because the approved tools are inadequate, the governance response is not enforcement. It is procurement. Provide official AI transcription, summarisation, and accessibility tools so that people are not forced to choose between doing their job and following policy.
Turning Patients into Partners
The same logic applies to patients. Banning personal AI in clinical settings does not stop patients using it. It simply ensures they use it without guidance, without safeguards, and without any organisational visibility.
A governance-mature approach looks different. Provide patients with clear, supportive guidance: “You are welcome to use AI for note-taking. Please let your clinician know so we can ensure the environment is appropriate.” Offer organisational AI alternatives, such as approved transcription tools or AI-generated consultation summaries, so patients do not feel compelled to rely on unvetted third-party applications.
The goal is digital literacy, not digital prohibition.
The Governance Reframe
The organisations that will lead on AI governance are not the ones building the highest walls. They are the ones building the most intelligent filters: blocking surveillance risk while enabling the assistive, productive, and genuinely transformative uses of personal AI.
For those of us who depend on personal AI to participate fully in professional life, this is not an abstract policy debate. It is a question of inclusion.
And if your governance framework cannot tell the difference between a surveillance device and a hearing aid, it is not fit for the era we have already entered.
