Dan Kregor - 04.02.2026

AI Governance Will Stop Being Optional

Why 2026 is the year your compliance team becomes your new best friend, whether you like it or not 

In my previous post on agentic AI, I talked about AI systems that actually do things autonomously: planning, executing, and adjusting. Exciting stuff. But here’s the uncomfortable follow-up question: who’s responsible when those autonomous systems do something they shouldn’t?

Welcome to AI governance, the trend I flagged in my six predictions for 2026 that’s about to make compliance teams very busy indeed. And before you dismiss this as just another regulatory headache, consider this: Gartner projects that by 2026, 80% of organizations will formalise AI policies addressing ethical, brand, and data risks. Gartner also warns that over 2,000 “death by AI” legal claims could emerge this year due to insufficient guardrails. The question isn’t whether you need AI governance. It’s whether you’ll build it proactively or have it imposed on you reactively.

Why Governance, Why Now?

Let’s be clear about what’s changed. In 2024, US federal agencies introduced 59 AI-related regulations, more than double the previous year’s total. Legislative mentions of AI rose across 75 countries. Meanwhile, 78% of organizations reported using AI, up sharply from 55% in 2023. The gap between AI adoption and governance maturity has become impossible to ignore.

CallMiner’s research captures the tension perfectly: 71% of organizations now have a dedicated AI governance function, yet 67% admit they’re still deploying AI without the governance structures needed to manage risk. We’re building the plane while flying it, and regulators have noticed.

To put it bluntly: AI oversight is transitioning from aspirational frameworks to binding requirements. Expectations now include documented AI inventories, risk classifications, third-party due diligence, and model lifecycle controls. Governance will be measured by clear KPIs, not just policies on paper.

Perhaps most significantly, governance boards that once guided AI adoption are giving way to compliance-led structures. As the EU AI Act and similar regulations come into force, oversight stops being advisory and becomes mandatory. Organizations will no longer ask whether they need guardrails. They’ll focus on how fast they can implement them.

The Shadow AI Problem: Your Biggest Governance Gap

Here’s a statistic that should concern every IT leader: according to UpGuard, more than 80% of workers, including nearly 90% of security professionals, use unapproved AI tools in their jobs. Half say they do so regularly. And in a remarkable twist, executives have the highest levels of regular shadow AI use.

Shadow AI isn’t new; it’s the rebellious cousin of shadow IT. But while shadow IT involved rogue Dropbox folders and unauthorised project management apps, shadow AI is fundamentally more dangerous. These aren’t just unsanctioned apps; they’re autonomous systems that learn, think, and act, often with access to sensitive data.

The risks are substantial:

Data exposure: when employees paste proprietary code or confidential strategy documents into public AI tools, that data may be logged, cached, or used for model training, permanently leaving organisational control.

Regulatory non-compliance: GDPR requires a legal basis for processing personal data and the ability to erase it. Shadow AI tools bypass these safeguards entirely.

Untracked decisions: AI models can produce biased outputs or hallucinate facts that shape business decisions, with no audit trail of how or why.

Netskope’s January 2026 report found that while the percentage of employees using personal AI apps dropped from 78% to 47% year-over-year, and company-approved account usage rose from 25% to 62%, organizations “still have work to do.” The shift toward managed accounts is encouraging, but employee behavior continues to outpace governance. 

The solution isn’t prohibition; bans tend to drive AI use further underground. Instead, organizations need to provide sanctioned alternatives that match the convenience of consumer tools, combined with clear policies, monitoring, and education. As one analyst put it: “Organizations that try to ban AI will lose momentum. Those that govern it intelligently will gain both trust and velocity.”

The Regulatory Patchwork: What’s Actually Enforceable

If you operate across borders, 2026 brings a fragmented compliance landscape with overlapping and sometimes inconsistent expectations. Here’s what’s actually enforceable: 

European Union – EU AI Act: The big one. August 2, 2026 brings full application to high-risk systems. Penalties reach €35 million or 7% of global revenue. Organizations must complete conformity assessments, establish risk management systems, and ensure human oversight mechanisms are operational. The Act classifies AI systems into four risk categories and completely bans “unacceptable risk” systems, including harmful manipulation and social scoring. High-risk systems, including those used in employment, credit, and critical infrastructure, face strict requirements around risk assessment, high-quality datasets, detailed documentation, and human oversight. Crucially, the EU AI Act is becoming the de facto global standard. Research shows organisations not directly impacted are 22-33 points behind on AI controls. The regulation spreads globally through supply chain requirements, multinational operations, and competitive benchmarking.
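
To make the four-tier model concrete, here is a minimal Python sketch of how an internal inventory might map use cases to the Act’s risk categories. The tier assignments and control lists are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of internal use cases to tiers; real
# classification requires legal review against the Act itself.
USE_CASE_TIERS = {
    "cv_screening": RiskTier.HIGH,        # employment decisions
    "credit_scoring": RiskTier.HIGH,      # access to credit
    "support_chatbot": RiskTier.LIMITED,  # must disclose AI to users
    "spam_filter": RiskTier.MINIMAL,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def required_controls(use_case: str) -> list[str]:
    """Return the minimum control set implied by a use case's tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown? treat as high
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    if tier is RiskTier.HIGH:
        return ["risk assessment", "data quality checks",
                "technical documentation", "human oversight"]
    if tier is RiskTier.LIMITED:
        return ["user-facing AI disclosure"]
    return []
```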

United States – Fragmented but Intensifying: The US lacks comprehensive federal AI legislation, leaving agencies like the FTC, NIST, and Department of Commerce to interpret compliance within existing mandates. But state-level action is accelerating. The Colorado AI Act takes effect June 2026, requiring risk management policies, impact assessments, and transparency for high-risk systems. Illinois’s AI in Employment Law (effective January 2026) mandates disclosure when AI influences employment decisions. The NIST AI Risk Management Framework provides voluntary but widely adopted guidance organized around four functions: Govern, Map, Measure, and Manage. 

United Kingdom: Taking a principles-based approach through existing regulators rather than new AI-specific legislation. Comprehensive AI regulation has been delayed, creating flexibility but also uncertainty. 

Asia-Pacific: Approaches vary significantly. Singapore has published agentic AI security guidelines and launched regulatory sandboxes. India’s DPDP Act increases data protection complexity. China’s updated Cybersecurity Law includes AI-specific provisions. The regional trend is toward sovereign AI ecosystems with localised governance requirements. 

Australia: The government’s AI plan is expected to incorporate data localisation requirements, with organizations preparing for tighter sovereignty rules and mapping digital supply chains for cross-jurisdictional flexibility. 

The practical implication? Firms that treat AI governance as a documentation exercise rather than an operational discipline will struggle to demonstrate control when regulators come calling. Compliance is becoming a technical and strategic priority, not a downstream legal task.

Building Governance That Actually Works

So what does effective AI governance actually look like in practice? It starts with recognising that AI can’t be governed from a single department; it requires collaboration across legal, compliance, data science, cybersecurity, risk management, and business stakeholders.

Start with Discovery 

Before you can govern AI, you need to know what AI you have. Conduct a comprehensive inventory of all AI technologies in use including generative AI tools, embedded AI features in existing software, and yes, shadow AI deployments. Smarsh’s guidance is direct: “Inventory AI-enabled tools and features, sanctioned or not. Map AI use cases to existing recordkeeping and supervision requirements.”
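
As a starting point, here is a minimal sketch of what one inventory record might capture. The `AIAsset` shape and its fields are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One row in an AI inventory: a tool, model, or embedded AI feature."""
    name: str                   # e.g. "Copilot in M365", "internal RAG bot"
    owner: str                  # accountable business owner ("unassigned" if none)
    sanctioned: bool            # False if discovered via a shadow AI audit
    data_categories: list[str]  # e.g. ["customer PII", "source code"]
    risk_tier: str              # per your chosen framework (e.g. EU AI Act)
    recordkeeping_mapped: bool  # mapped to supervision requirements yet?

inventory = [
    AIAsset("Copilot in M365", "IT", True, ["email", "documents"], "limited", True),
    AIAsset("personal ChatGPT accounts", "unassigned", False, ["unknown"], "high", False),
]

# Flag anything unsanctioned or unmapped for governance follow-up.
gaps = [a.name for a in inventory if not a.sanctioned or not a.recordkeeping_mapped]
print(gaps)  # ['personal ChatGPT accounts']
```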

Establish Clear Ownership 

Assign specific roles using RACI methodology. Who builds AI systems? Who approves them? Who monitors them? Who’s accountable when they fail? The emerging role of Chief AI Risk Officer is becoming standard in regulated industries bridging technical AI expertise with risk management discipline. Forrester predicts 60% of Fortune 100 companies will appoint a head of AI governance in 2026. 
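
In code terms, a RACI matrix is just a table of lifecycle activities against roles, with the invariant that every activity has exactly one Accountable owner. The roles and assignments below are illustrative assumptions.

```python
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# "CAIRO" stands in for a Chief AI Risk Officer; adapt to your own org chart.
RACI = {
    "build model":        {"data science": "R", "CAIRO": "A", "security": "C", "legal": "I"},
    "approve deployment": {"security": "R", "CAIRO": "A", "data science": "C", "legal": "C"},
    "monitor in prod":    {"data science": "R", "CAIRO": "A", "security": "C", "business": "I"},
    "respond to failure": {"security": "R", "CAIRO": "A", "data science": "R", "legal": "C"},
}

def accountable_for(activity: str) -> list[str]:
    return [role for role, code in RACI[activity].items() if code == "A"]

# Enforce the core RACI rule: one and only one Accountable per activity.
for activity in RACI:
    assert len(accountable_for(activity)) == 1, f"'{activity}' needs exactly one 'A'"
```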

Fix Your Data Foundation 

AI governance extends data governance. You cannot govern AI without governing the data that fuels it. Before launching AI initiatives, ensure the basics are in place: clear data ownership, traceable lineage, quality rules, and appropriate access controls. As one framework puts it: “Get your data house in order before you invite AI in.” 
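
One way to operationalise this is a pre-flight check that blocks AI initiatives until a dataset meets the basics. The `Dataset` shape and the thresholds are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    owner: str | None          # clear data ownership
    lineage: list[str]         # traceable upstream sources
    null_rate: float           # one simple stand-in for quality rules
    access_policy: str | None  # e.g. "role-based, PII-restricted"

def ai_readiness_gaps(ds: Dataset, max_null_rate: float = 0.05) -> list[str]:
    """Return the governance gaps blocking AI use of this dataset."""
    gaps = []
    if not ds.owner:
        gaps.append("no accountable data owner")
    if not ds.lineage:
        gaps.append("lineage not traceable")
    if ds.null_rate > max_null_rate:
        gaps.append(f"quality rule failed: null rate {ds.null_rate:.0%}")
    if not ds.access_policy:
        gaps.append("no access controls defined")
    return gaps  # empty list = ready to invite AI in
```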

Build for Accountability, Not Just Compliance 

The shift in 2026 is from governance as checkpoint to governance as circuit breaker. OneTrust’s DV Lamba describes it as “accountability-in-the-loop” making approvals and audit trails as integral as code commits. This means embedding governance into AI development workflows, not bolting it on afterward. 

The concept matters because, as FTI Consulting’s Dera Nevin notes, “In 2026, AI governance will be about much more than regulatory compliance. It will be integral to doing good business.” Organizations that build governance into how they develop and deploy AI gain competitive advantage and reduce regulatory and litigation exposure. 
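
As a concrete illustration, a deployment gate can treat a recorded approval as a hard prerequisite, the same way CI treats a failing test. The file layout, function names, and record shape here are assumptions for the sketch.

```python
import json
from datetime import datetime, timezone

def load_approval(model_id: str) -> dict | None:
    """Read the approval record your review workflow writes, if any."""
    try:
        with open(f"approvals/{model_id}.json") as f:
            return json.load(f)
    except FileNotFoundError:
        return None

def gate_deployment(model_id: str) -> None:
    """Accountability-in-the-loop: no auditable approval, no deployment."""
    approval = load_approval(model_id)
    if not approval or not approval.get("approved"):
        raise RuntimeError(f"deployment blocked: no approval on file for {model_id}")
    # Append-only audit trail entry, recorded alongside the release itself.
    with open("audit.log", "a") as log:
        log.write(json.dumps({
            "model": model_id,
            "approver": approval.get("approver"),
            "deployed_at": datetime.now(timezone.utc).isoformat(),
        }) + "\n")
```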

What Good Governance Looks Like

Mature AI governance isn’t about creating bureaucratic obstacles. It’s about enabling responsible innovation at scale. Organizations with evidence-quality audit trails show 20-32 point advantages on every AI governance metric measured, including training data recovery, human-in-the-loop processes, and incident response. 

The components of effective governance include: 

Transparency and explainability: Required by most regulations and essential for user trust. AI systems must be able to “show their work”, even for complex outputs.

Fairness and bias mitigation: Explicit assessments of bias across demographic or risk-sensitive groups, with documented testing and remediation (a minimal example follows this list).

Continuous risk management: Not static audits but continuous measurement, alerting, and escalation. Think dynamic dashboards, real-time alerts, and automated mitigation workflows.

Third-party risk management: Outsourcing AI development doesn’t outsource liability. Rigorous vendor management, ongoing compliance monitoring, and contractual provisions specifying responsibility for AI-related harms. 
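
To make the bias item above concrete, here is a minimal demographic-parity check using the four-fifths rule. The groups and data are illustrative; a real assessment would test multiple metrics across documented, risk-sensitive groups.

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, was_selected) pairs from model decisions."""
    totals: dict[str, list[int]] = {}
    for group, selected in outcomes:
        t = totals.setdefault(group, [0, 0])
        t[0] += int(selected)
        t[1] += 1
    return {g: sel / n for g, (sel, n) in totals.items()}

def four_fifths_check(rates: dict[str, float]) -> bool:
    """Flag disparate impact if any group's rate is below 80% of the highest."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(rates)                     # {'A': 0.666..., 'B': 0.333...}
print(four_fifths_check(rates))  # False -> investigate and remediate
```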

Implementation typically spans 6-12 months depending on organisational size and AI complexity. The phased approach that works: discovery and assessment (months 1-2), framework development (months 2-4), piloting with selected projects (months 4-6), and scaling across the organization (months 6-12).

What Should You Actually Do?

If you’re responsible for enterprise technology or compliance, here’s my distillation: 

Accept that governance is no longer optional. Between the EU AI Act (August 2026), Colorado AI Act (June 2026), and proliferating state and international requirements, formalised AI policies have moved from best practice to compliance obligation. If your organization touches the EU market, the compliance clock is ticking. 

Address shadow AI before it addresses you. Audit departments to identify which tools teams actually rely on. Provide sanctioned alternatives that match consumer tool convenience. Create clear acceptable use policies with categories: Approved, Limited-Use, and Prohibited (a sketch follows below). Train employees on the risks; most misuse is unintentional.
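
Encoding that policy as data keeps documentation, browser controls, and monitoring rules in sync. The tool names and category assignments below are illustrative assumptions.

```python
# A hypothetical acceptable-use policy encoded as data.
POLICY = {
    "Approved":    {"Copilot (company tenant)", "internal RAG assistant"},
    "Limited-Use": {"ChatGPT (no customer data, no source code)"},
    "Prohibited":  {"personal AI accounts", "unvetted browser extensions"},
}

def category_of(tool: str) -> str:
    for category, tools in POLICY.items():
        if tool in tools:
            return category
    return "Prohibited"  # default-deny: unknown tools need review first

print(category_of("Copilot (company tenant)"))  # Approved
print(category_of("random-ai-notetaker"))       # Prohibited
```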

Invest in your compliance team. With AI and automation reducing manual burden, compliance professionals are moving into more strategic roles, guiding decisions around ethics, risk, and corporate integrity. The organizations that treat AI governance as a strategic capability rather than compliance overhead will gain competitive advantage.

Build for multiple jurisdictions. Global harmonisation efforts are gaining momentum, but regional regulations continue to diverge. Design governance frameworks that can flex across jurisdictions while maintaining operational consistency. 

Treat culture as a compliance risk. Policies define boundaries, but culture defines behavior. The most effective organizations frame AI governance not as restriction but as responsible empowerment, turning employee creativity into lasting enterprise capability.

How Insentra Can Help

Look, I get it. “Governance” doesn’t exactly spark joy. But if you’re reading this and thinking, “Our AI governance is basically a Word document someone wrote eighteen months ago,” you’re not alone, and that’s exactly where we can help.

At Insentra, we’ve helped organizations across Australia, Asia-Pacific, and beyond navigate the gap between “we should probably have AI policies” and “we have operational governance that actually works.” Whether you’re building governance frameworks from scratch, wrestling with shadow AI visibility, preparing for EU AI Act compliance, or trying to figure out what “good” looks like in the Microsoft 365 ecosystem, we’ve been there.
 
If you’re looking for a practical starting point, we’ve also created a detailed guide to help organizations build responsible, scalable governance frameworks: download our Responsible AI Governance Whitepaper.

We focus on three things: practical frameworks that don’t require a law degree to understand, real-world implementation experience from hundreds of transformations, and a genuine commitment to making governance an enabler rather than a roadblock. 

Ready to get ahead of the compliance curve? Let’s have a conversation about where you are, what’s coming, and how to build governance that enables innovation rather than stifling it. Contact our team; we promise to make it as painless as possible.

Coming up next: Hybrid Work Gets Its Second Act – why flexible work isn’t dead, it’s just finally growing up. 
