Introducing the AI Compliance Lab — And What We Learned in Our Latest Session

Christiana A

5 mins

Here's a question most boards can't answer with any certainty: how is AI being used in your organisation right now?

Not theoretically. Not according to policy. In practice — across your teams, in daily workflows, in the decisions that end up in front of regulators. If that question makes you uncomfortable, you're not alone. And that discomfort is exactly what we discussed in last week's AI Compliance Lab.


What is the AI Compliance Lab?

The AI Compliance Lab is ChatKYC's bi-weekly YouTube Live series, hosted by co-founder Omasirichukwu Anyanwu. Each session tackles the real, practical questions around AI in regulated environments: not the theory, but what's actually happening on the ground, along with practical tips on how best to use AI in the compliance domain.


This Session: Governance, Shadow AI, and Where Firms Fall Short

In our latest session, Omasirichukwu was joined by Ayo Omogunsoye, Senior Regulatory Compliance Consultant, for a conversation that cut straight to the tension compliance professionals are living with every day: AI adoption is outpacing AI governance, and the gap is where the risk sits.

Here's what the discussion surfaced.

Most firms don't have visibility over their own AI usage. Not just at team level — at board level. As Omasirichukwu put it, if a regulator walked in today and asked you to evidence your AI governance, could you? Could you show not just a policy document, but the underlying methodology behind AI-assisted outputs? For many firms, the honest answer is no. Employees are already using tools like ChatGPT and Copilot to get work done faster, but there's often no clear inventory of what's being used, no approval process, and no audit trail.

Banning AI makes the problem worse, not better. Ayo introduced the concept of "shadow AI" — where firms that prohibit AI usage entirely end up with staff using it anyway, invisibly, with no oversight whatsoever. As he put it: you can only police what you actually know about. If you don't know where AI is being used, you don't have control. The answer isn't prohibition. It's proper oversight — an AI use case register, risk assessments for each tool, clear human oversight processes, and staff training that goes beyond a slide deck.

Risk ownership sits with everyone — not IT, not compliance, not the tool. This was one of the sharpest points in the session. Ayo was direct: risk ownership of AI sits across all three lines of defence. It's not an IT issue. It's not solely a compliance department concern. If you use an AI tool to produce a piece of work, you own the output. You can't offload accountability to an LLM. And regulators won't accept "the AI told me to" as a defence.


The Moments That Made It Real

What set this session apart from the usual AI governance conversation was the specificity. Two examples in particular landed hard.

Ayo shared a personal experience where he'd used an AI tool to research a tax question. The tool confidently told him the rate was 6%. He took that at face value — until he spoke to a subject matter expert who corrected him: the rate had changed to 8% since April of the previous year. When Ayo challenged the AI, it doubled down. It insisted on 6% until he pushed back forcefully, at which point it apologised and admitted it had been referencing outdated regulations. His takeaway was blunt: if he hadn't consulted a human expert, he'd have made a decision based on wrong information and never known.

Omasirichukwu built on this with a compliance-specific example. Nigeria was removed from the FATF grey list in late 2025 — but depending on the LLM you're using, it may still flag the country as high-risk, because its training data hasn't caught up. If you're an analyst running a screening assessment and your AI tool tells you Nigeria is still on the grey list, that sounds plausible. You'd associate the jurisdiction with higher risk. But it's wrong. And that kind of error, in a compliance context, has real consequences.

The underlying point from both examples: the data that trains these models isn't always current. Newer model versions don't automatically mean better outputs for compliance-specific use cases. Testing and verification aren't optional extras — they're the baseline.


The Practical Takeaway

The session closed with concrete guidance rather than abstract principles.

On prompting: the quality of your output is directly tied to the quality of your input. Omasirichukwu made the point that a detailed, context-specific prompt — one that includes your firm's size, jurisdiction, business model, and specific regulatory context — might take an extra 10 to 20 minutes to construct. That investment pays for itself. As he put it, Gordon Ramsay's prompt for a recipe would look completely different from yours or mine, because he is a world-class professional chef who knows exactly what he's looking for. The same applies in compliance.

On human oversight: Ayo's framing was memorable — "you don't need perfection, you just need control." Every AI output should be reviewed. Challenge the response. Ask for sources. Click on those sources. And if you're unsure, escalate to someone with domain expertise. There's no reason an AI output should go unchecked.

On getting started: for those not yet using AI tools at all, Omasirichukwu's advice was straightforward — start. Even outside work, get familiar with how these tools operate, because they will impact your role whether you're ready or not. Existing compliance tools already use machine learning; the shift to generative AI is an evolution, not a revolution. But for those with personal regulatory liability — MLROs, heads of risk, senior managers — the stakes are higher. Use AI, but use it with an extra layer of care.

The closing line from Ayo summed it up best: the organisations that win won't be the ones using the most AI tools. They'll be the ones governing them better.


Watch the Replay & Join the Next Session

Missed the session? The full replay is available on ChatKYC's YouTube channel. Follow Omasirichukwu on LinkedIn to catch the next AI Compliance Lab — new sessions drop every two weeks.

If you're a compliance professional navigating AI adoption in your firm and want to be part of the conversation, join us live. Bring your questions — that's what the Lab is for.
