ChatKYC Platform: Acceptable Use Policy

Executive Summary

This Acceptable Use Policy ("Policy") governs the use of the ChatKYC platform, its associated services, and its proprietary AI agents, including Obira™ and RegRadar™ (collectively, the "Platform"). The Platform, developed and owned by Oganiru Advisory Ltd., is a regulatory intelligence and compliance orchestration system designed to assist compliance professionals. This Policy establishes clear boundaries between permitted and prohibited activities to protect the integrity of the Platform, its users, and Oganiru Advisory Ltd.

The core principle of this Policy is that ChatKYC is an assistive tool designed to augment, not replace, professional human judgment. Permitted uses include generating initial drafts of compliance documents, conducting regulatory research, performing gap analyses of existing policies, and facilitating professional training. Conversely, this Policy strictly prohibits using the Platform to circumvent compliance obligations, submit unreviewed AI output to regulators, generate fraudulent documentation, or pursue any purpose deemed an "unacceptable risk" under the EU AI Act, such as social scoring or manipulative techniques.

A critical component of this Policy is the AI Output Disclaimer. All content generated by the Platform must be treated as a draft and is subject to rigorous review and verification by qualified professionals. The Platform does not provide legal, regulatory, or compliance advice, and the ultimate responsibility for the accuracy and appropriateness of any document rests solely with the user. This Policy outlines user responsibilities, a tiered enforcement model, and a commitment to global AI governance principles from the OECD, UK, EU, US, and Canada. Adherence to this Policy is a condition of continued access to the Platform.

1.0 INTRODUCTION (Purpose and Objectives)

1.1 What This Policy Covers

This Policy defines the terms and conditions under which users ("Users") may access and utilize the ChatKYC Platform, providing a framework for responsible, ethical, and lawful use.

The primary objectives of this Policy are to:

  • Establish Clear Guidelines: Provide unambiguous rules regarding acceptable and unacceptable conduct on the Platform.

  • Protect Platform Integrity: Safeguard the intellectual property, security, and operational stability of the ChatKYC Platform and its underlying technologies.

  • Ensure User Responsibility: Reinforce that while the Platform is a sophisticated tool, the ultimate accountability for compliance decisions and documentation rests with the User.

  • Mitigate Risk: Minimize the risk of the Platform being used for illegal, fraudulent, or unethical purposes that could harm Users, Oganiru Advisory Ltd., or the public.

  • Promote Best Practices: Encourage the adoption of a "human-in-the-loop" approach, where AI-generated content serves as a starting point for expert human review and refinement.

1.2 Agreement to Terms

By accessing, registering for, or using any part of the ChatKYC Platform, you acknowledge that you have read, understood, and agree to be bound by the terms and conditions set forth in this Policy. This Policy constitutes a binding agreement between the User and Oganiru Advisory Ltd.

1.3 Governing Principles: A Global Commitment to Responsible AI

As a UK-registered company operating globally, Oganiru Advisory Ltd. is committed to the responsible and ethical development of AI. The design, operation, and governance of the ChatKYC Platform are guided by a synthesis of leading international principles and regulatory frameworks.

1.3.1 Foundational OECD AI Principles

We align with the core principles established by the Organisation for Economic Co-operation and Development (OECD), which serve as a global reference point for trustworthy AI:

  • Human-Centred Values & Fairness: The "human-in-the-loop" requirement is central to this Policy, ensuring human oversight and ethical considerations.

  • Transparency & Explainability: We are transparent that ChatKYC is an AI system and provide citations to support explainability.

  • Robustness, Security & Safety: We are committed to building a secure and resilient platform.

  • Accountability: This Policy establishes clear lines of accountability for Oganiru Advisory Ltd. as the provider and for the User as the implementer of AI-assisted work.

1.3.2 Alignment with UK & EU Frameworks

  • United Kingdom: We are guided by the five principles of the UK's AI Regulation White Paper: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

  • European Union: We are committed to the principles of the EU AI Act, particularly its requirements for Transparency, Risk Management, and robust Data Governance. To protect user confidentiality and prevent model contamination, ChatKYC does not use user chats, uploaded documents, or any other interactions to train our public-facing AI models.

1.3.3 Alignment with North American Frameworks

  • United States: Our risk management approach is informed by the NIST AI Risk Management Framework (AI RMF), focusing on the core functions of Govern, Map, Measure, and Manage. We also embrace the principles of the White House Executive Order on Safe, Secure, and Trustworthy AI.

  • Canada: We proactively align with the principles emerging from Canada's Artificial Intelligence and Data Act (AIDA), including requirements for accountability, transparency, and risk mitigation.

1.4 Contestability and Redress: Formal Issue Resolution Process

We provide a clear, multi-step process for Users to report, contest, and seek resolution for issues encountered on the Platform.

  • Step 1: Initial Report Submission: Users must submit all issue reports via the official "Report Issue" feature or by emailing info@chatkyc.ai with required details.

  • Step 2: Acknowledgment and Triage: Users will receive a ticket number. Our team will triage the issue within one (1) business day.

  • Step 3: Investigation and Response: We will provide an initial substantive response within five (5) business days.

  • Step 4: Resolution and Escalation: We will notify the User of the resolution. If unsatisfied, the User may request an escalation to senior management for a final review.

2.0 PERMITTED USES AND USER TIERS

2.1 Permitted Activities

  • Generating Compliance Documentation Drafts: Creating initial drafts of policies, procedures, and risk assessments.

  • Research and Reference: Conducting research on regulations, standards, and best practices.

  • Gap Analysis of Existing Policies: Using ChatKYC’s agents to analyze documents against regulatory frameworks.

  • Training and Education: Developing training materials and simulating compliance scenarios.

  • Miscellaneous: Using ChatKYC and its associated underlying technologies for any other risk and compliance use case not listed above, provided the use does not contravene the prohibited uses set out in Section 3.0.

2.2 User Tiers and Access Rights

  • Enterprise Client: For large organizations. Provides the highest usage limits, full API access, and dedicated support.

  • Professional User: For individual professionals and small teams. Offers significant access but may have volume limitations depending on token use. Users in this tier may purchase additional tokens when necessary.

  • Basic User: For individual users. Offers limited access to ChatKYC features. Users in this tier may upgrade their tier if required.

3.0 PROHIBITED USES

Any use of the Platform for purposes other than those expressly permitted is strictly prohibited.

  • 3.1 Submitting AI Output Directly to Regulators or Other Authorities Without Review

  • 3.2 Circumventing Compliance Obligations

  • 3.3 Generating Fraudulent Documentation

  • 3.4 Misrepresenting AI-Generated Content as Human-Authored

  • 3.5 Reverse Engineering and System Disruption

  • 3.6 Unlawful or Unethical Activities

  • 3.7 Prohibition of Unacceptable Risk Applications (EU AI Act Alignment) Users are strictly prohibited from using the Platform for any application classified as "unacceptable risk" under Article 5 of the EU AI Act, including social scoring, manipulative techniques, or the exploitation of vulnerabilities.

4.0 AI OUTPUT DISCLAIMER

4.1 AI Disclaimer

ChatKYC provides AI-generated content for informational purposes only. All outputs should be treated as drafts requiring professional review before use. ChatKYC does not provide legal, regulatory, or compliance advice. Users are solely responsible for verifying the accuracy and appropriateness of any content before submission to regulators or any other relevant authorities.

4.2 Hallucination Acknowledgment & Citation Verification

While ChatKYC employs anti-hallucination protocols and provides citations, AI systems may occasionally generate inaccurate information ("hallucinations"). Users must verify all citations against original sources.

4.3 User Assumes Full Responsibility

The User acknowledges and agrees that they are solely responsible and liable for any actions taken based on the Platform's output. Oganiru Advisory Ltd. bears no responsibility for the consequences of any User's failure to adequately review, vet, and approve AI-generated content.

5.0 USER RESPONSIBILITIES

  • Maintain Confidentiality of Account: Safeguard login credentials and report any unauthorized use.

  • Review All Outputs Before Use: Subject every output to a critical review by a qualified subject matter expert.

  • Report Errors or Hallucinations: Report any identified inaccuracies to support the Platform's improvement.

  • Keep a Human-in-the-Loop for Decisions: Ensure all significant compliance decisions are made by a qualified human.

6.0 ENFORCEMENT

Oganiru Advisory Ltd. reserves the right to investigate any suspected violation of this Policy and apply a tiered enforcement framework.

6.1 Tier 1: Formal Warning

For minor, first-time infractions, a formal written warning will be issued.

6.2 Tier 2: Temporary Suspension

For repeated minor infractions or a single moderately severe violation, we may temporarily suspend the User's account.

6.3 Tier 3: Permanent Termination

For severe violations (e.g., fraudulent use, system disruption), we reserve the right to permanently terminate the User's access to the Platform.

6.4 No Refund for Policy Violations

If access is suspended or terminated due to a policy violation, the User will not be entitled to a refund.

7.0 DEFINITIONS

  • Platform: The entire suite of services provided by Oganiru Advisory Ltd., including the web interface, APIs, and the AI agents Obira™ and RegRadar™.

  • User: Any individual or entity that accesses or uses the ChatKYC Platform.

  • AI-Generated Content: Any text, data, analysis, or documentation produced by the ChatKYC Platform.

  • Hallucination: An instance where the AI generates information that is factually incorrect, nonsensical, or not grounded in the provided source data.