The Inventory Nobody Has
- Unika Hypolite


On January 8, 2026, Ontario’s Information and Privacy Commissioner signalled a sharper turn in the province’s AI governance, describing January 2026 as a milestone moment for how public institutions will be expected to manage artificial intelligence risk. The message was not subtle: AI is no longer a side project for innovation teams. It is becoming a compliance surface.
For CIOs and risk officers across Ontario’s public sector, the timing matters. The last two years have normalized generative AI in daily work faster than procurement cycles, policy refreshes, or security assessments could keep pace. The result is a familiar pattern with a new name: shadow AI. Unapproved copilots, browser-based chat tools, and personal accounts are quietly doing real work within government workflows, without the safeguards those workflows were designed to provide.
Shadow AI is often framed as a behavioural issue: staff are using unauthorized tools, so leaders need stricter rules. That type of framing misses the system. In most ministries and agencies, people adopt AI for the same reasons they adopted spreadsheets decades ago. It closes gaps. It speeds up drafting, summarizing, translation, stakeholder analysis, and issue triage. When the approved toolset cannot keep up with the pace or fit the work, the path of least resistance is to use whatever is available.
Compliance, meanwhile, remains document-centric. Controls are frequently expressed as policies rather than embedded into the workflow itself. Risk teams ask for attestations. IT asks for lists. Managers ask staff to “be careful.” Yet none of those mechanisms reliably answers the most basic questions: What tools are in use? What data is being pasted into them? Where is the evidence?

Why This Persists: A Systems Thinking View
Systems thinking forces an uncomfortable conclusion: shadow AI is not a temporary deviation from the operating model. It is an output of the operating model.
Start with incentives. Public sector performance is measured on delivery, responsiveness, and fiscal restraint. AI tools promise speed at near-zero marginal cost. The immediate rewards are tangible: fewer hours on routine writing, faster briefings, quicker turnaround on correspondence. The risks are delayed, probabilistic, and often diffused across multiple accountabilities.
Now add constraints. Procurement processes are designed to manage vendor risk, but they are not designed for software categories that evolve weekly. Security and privacy reviews are thorough, but the queue is long. Enterprise licenses lag behind demand. Staff training is uneven, and “approved tools” frequently arrive without redesigned workflows, leaving the most important gap unaddressed: how to use AI safely in the actual workflow.
Finally, consider feedback loops. When employees use unapproved AI and nothing visibly breaks, that success reinforces the behaviour. When leadership responds with blanket bans, it often drives the behaviour further underground. The system learns the wrong lesson: not “avoid risk,” but “avoid visibility.” The core failure is not that policies don’t exist. It is that the organization lacks operational instrumentation—mechanisms that make AI use observable, governable, and auditable without grinding work to a halt.
What’s at Stake for Ontario’s Public Sector
The stakes are not theoretical. Public institutions handle personal information, cabinet-confidential material, legal advice, sensitive procurement data, and high-impact decisions affecting benefits, enforcement, and services. Shadow AI introduces three compounding risks.
First is data exposure. Even when tools claim not to train on user inputs, the practical question for a risk officer is more basic: can we prove where data went, under what terms, and with what retention?
Second is integrity risk. AI-generated text can embed errors or confabulations that look authoritative. In a public sector context, a flawed summary in a briefing note is not a minor typo. It can alter decisions, distort consultations, or trigger avoidable reputational harm.
Third is audit and discovery risk. As enforcement expectations tighten, the inability to produce a credible inventory of AI tools, use cases, and controls becomes a governance problem in itself. The cost isn’t only remediation. It is the loss of trust that comes from appearing unaware of one’s own operational reality.

A Practical Framework: Audit, Migrate, Redesign
The solution is not another policy memo. It is operational infrastructure.
A useful model is a Shadow AI Audit & Migration Sprint: a time-boxed intervention that starts by treating AI like any other enterprise risk surface—inventory first, controls second, migration third.
Step one is discovery that does not rely on self-reporting alone. The aim is a risk-scored inventory of tools and use cases, mapped to data sensitivity and business processes.
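For teams looking for a concrete starting point, the sketch below shows one way such a risk-scored inventory could be represented. The field names, sensitivity categories, and scoring formula are illustrative assumptions for this article, not a prescribed schema.

```python
from dataclasses import dataclass

# Illustrative sensitivity weights; real classifications would come from the
# institution's own information-management and privacy standards.
SENSITIVITY_WEIGHTS = {"public": 1, "internal": 2, "personal": 4, "cabinet": 5}

@dataclass
class AIUseCase:
    tool: str                # e.g. a browser-based chat tool or an enterprise copilot
    business_process: str    # the workflow the tool supports
    data_sensitivity: str    # one of the SENSITIVITY_WEIGHTS keys
    approved: bool = False   # is the tool in an approved environment?
    uses_per_week: int = 0   # rough volume from discovery interviews or telemetry

    def risk_score(self) -> int:
        """Crude illustrative score: sensitivity times volume, doubled if unapproved."""
        base = SENSITIVITY_WEIGHTS[self.data_sensitivity] * max(self.uses_per_week, 1)
        return base * (2 if not self.approved else 1)

# Example: a discovered inventory, sorted so the riskiest use cases surface first.
inventory = [
    AIUseCase("browser chat tool", "correspondence drafting", "personal", False, 40),
    AIUseCase("enterprise copilot", "briefing summaries", "internal", True, 25),
]
for uc in sorted(inventory, key=lambda u: u.risk_score(), reverse=True):
    print(f"{uc.tool:20} {uc.business_process:25} risk={uc.risk_score()}")
```

Even a sortable list like this is enough to turn “be careful” into a prioritized queue of decisions.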
Step two is migration to approved environments, but with pragmatic triage. Not every use case warrants the same level of control. Some can be moved quickly with guardrails; others require redesign or prohibition. The deliverable is a migration playbook that links decisions to evidence.
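As a hedged illustration of what that triage might look like, the sketch below maps a risk score and the availability of an approved equivalent to a migration decision; the thresholds and decision labels are invented for the example, not recommended cut-offs.

```python
# Illustrative thresholds; actual cut-offs would be set with risk and privacy teams.
def triage(risk_score: int, has_approved_equivalent: bool) -> str:
    """Map a risk-scored use case to a migration decision for the playbook."""
    if risk_score >= 100:
        return "prohibit until redesigned"     # e.g. sensitive data in an unapproved tool
    if has_approved_equivalent:
        return "migrate now with guardrails"   # approved tool exists; add logging and templates
    return "monitor and queue for review"      # lower risk, no approved equivalent yet

# Example: decisions recorded alongside the scores and evidence that justified them.
for use_case, score, alternative in [
    ("correspondence drafting", 160, True),
    ("briefing summaries", 50, True),
    ("issue triage notes", 20, False),
]:
    print(f"{use_case:25} score={score:<4} -> {triage(score, alternative)}")
```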
Step three is workflow redesign, where compliance becomes embedded rather than aspirational. Controls such as approved prompt templates, logging, redaction steps, and escalation paths are stored within the work, not in a PDF.
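One small, hedged example of a control stored within the work rather than in a PDF: the sketch below redacts recognizably sensitive tokens and appends an audit record before a prompt ever leaves the approved environment. The redaction patterns and log format are assumptions for illustration; a real control set would be defined with privacy and security teams.

```python
import json
import re
from datetime import datetime, timezone

# Illustrative redaction patterns; a real set would cover the institution's own
# identifiers (health numbers, case IDs, client numbers) and be privacy-reviewed.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizably sensitive tokens before a prompt leaves the workflow."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def log_interaction(user: str, tool: str, prompt: str,
                    logfile: str = "ai_use_log.jsonl") -> str:
    """Redact the prompt, append an audit record, and return the safe version."""
    safe_prompt = redact(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": safe_prompt,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return safe_prompt

# Example: only the redacted prompt is passed on to the approved AI tool.
safe = log_interaction("analyst@example.gov.on.ca", "enterprise copilot",
                       "Summarize the complaint from jane.doe@example.com, 416-555-0123.")
print(safe)
```

The specific mechanism matters less than the principle: the evidence is generated as a by-product of doing the work.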
In our work as operational compliance partners at CoCr8 Labs X CINTA & Co., the consistent lesson is that governance improves fastest when teams can produce artifacts rather than intentions: audit-ready inventories, decision logs, controls mapped to workflows, and ongoing evidence generation.
The Forward Look: From AI Anxiety to AI Operations
Ontario’s AI governance is moving from guidance to expectation, and from expectation to verification. The institutions that fare best will not be the ones with the longest policies. They will be the ones that can quickly and credibly answer how AI is being used, where sensitive data flows, what controls apply, and what evidence supports those answers.
Done well, this shift is an opportunity. Shadow AI can become sanctioned AI. Improvised workarounds can become standardized workflows. And the public sector can adopt productivity gains without sacrificing trust.
The next phase of AI in government will not be won by novelty. It will be won by operations: making responsible use repeatable, observable, and resilient—especially when the tools change faster than the org chart.


