Cybiont GmbH is committed to rigorous validation of our technology stack and transparent mapping to relevant regulatory frameworks. Our architecture is designed to support compliance by design.
CybiontGov and Ω9
What is CybiontGov?
CybiontGov is the independent governance layer that monitors and verifies AI usage in regulated environments. It runs inside a hardware-secured Trusted Execution Environment (TEE), works across major clouds, and produces verifiable, tamper-evident evidence of AI interactions, independent of the hyperscaler running the model.
Core functions:
- Captures and verifies every AI interaction (input, output, metadata)
- Produces cryptographically anchored evidence for auditors, supervisors, and internal control functions (see the sketch after this list)
- Enforces risk-adaptive governance rules in real time (e.g. for borderline or critical decisions)
- Supports outsourcing compliance (e.g. FINMA Circular 2018/3) with independent oversight
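The sketch below is an illustrative, hypothetical model of how tamper-evident evidence of this kind is commonly structured: a hash chain in which every record commits to its predecessor, so any retroactive change breaks the chain. The `EvidenceLog` class and its field names are assumptions for illustration, not CybiontGov's actual schema; TEE attestation and external anchoring are out of scope here.

```python
import hashlib
import json
import time

def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class EvidenceLog:
    """Minimal hash-chained log (illustrative only): each record commits
    to its predecessor, so later tampering is detectable."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis anchor

    def append(self, prompt: str, output: str, metadata: dict) -> dict:
        record = {
            "timestamp": time.time(),
            "input_hash": _sha256(prompt.encode()),
            "output_hash": _sha256(output.encode()),
            "metadata": metadata,
            "prev_hash": self._last_hash,
        }
        record["record_hash"] = _sha256(
            json.dumps(record, sort_keys=True).encode())
        self._last_hash = record["record_hash"]
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            if rec["prev_hash"] != prev:
                return False
            if _sha256(json.dumps(body, sort_keys=True).encode()) != rec["record_hash"]:
                return False
            prev = rec["record_hash"]
        return True
```

What makes such a chain independently verifiable, rather than merely internally consistent, is anchoring the head hash outside the system (e.g. in a transparency log or via a timestamping authority), which is the role the cryptographic anchoring described above plays.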
What is Ω9 (Omega-9)?
Ω9 is the epistemic analysis engine that evaluates the reliability of AI outputs. It generates an “Epistemic Trust Score” per interaction — reflecting plausibility, consistency, and potential risk.
Core functions:
- Detects hallucinations and unstable patterns (model-agnostic)
- Assesses output coherence using ΔH-based epistemic metrics (see the sketch after this list)
- Optional governance binding: scores can trigger governance actions (review, fallback options, enhanced logging)
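Ω9's ΔH metrics are not spelled out here, so the following is only a plausible shape for an entropy-based trust score, under the assumption that ΔH compares observed per-token output entropy against a calibrated baseline. The function names, the logistic squashing, and the `baseline_entropy` parameter are all illustrative assumptions, not the actual Ω9 method.

```python
import math

def entropy(dist: list[float]) -> float:
    """Shannon entropy (bits) of one token's probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def epistemic_trust_score(token_dists: list[list[float]],
                          baseline_entropy: float) -> float:
    """Toy ΔH-style score: compare the model's mean per-token entropy
    against a calibration baseline. ΔH > 0 (more uncertain than usual)
    pushes the score below 0.5; ΔH < 0 pushes it toward 1."""
    mean_h = sum(entropy(d) for d in token_dists) / len(token_dists)
    delta_h = mean_h - baseline_entropy
    # Logistic squash of ΔH into (0, 1); 0.5 exactly at ΔH = 0.
    return 1.0 / (1.0 + math.exp(delta_h))

# Example: the last token comes from a nearly flat (uncertain) distribution.
dists = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.34, 0.33, 0.33]]
print(epistemic_trust_score(dists, baseline_entropy=0.7))  # below 0.5
```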
How CybiontGov and Ω9 work together
Together they form a runtime proof chain:
1. CybiontGov monitors and verifies every AI interaction.
2. Ω9 analyses epistemic trust and risk.
3. Both elements are cryptographically bound into an immutable evidence record (sketched below).
4. Governance rules can adapt dynamically based on Ω9 results.
5. Outcome: audit-ready, regulator-grade runtime evidence.
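Steps 3 and 4 can be illustrated by combining the two sketches above: a toy risk-adaptive rule maps the trust score to a governance action, and both are bound into one chained evidence record. The thresholds and action names are hypothetical; real policies would be calibrated per use case, and the actual binding would carry cryptographic attestation from the TEE.

```python
# Builds on the EvidenceLog and epistemic_trust_score sketches above.
# Hypothetical thresholds; real policies are risk-calibrated per use case.
REVIEW_THRESHOLD = 0.40
ENHANCED_LOGGING_THRESHOLD = 0.70

def governance_action(trust_score: float) -> str:
    """Toy risk-adaptive rule (step 4): low trust escalates to human
    review, medium trust triggers enhanced logging, high trust passes."""
    if trust_score < REVIEW_THRESHOLD:
        return "human_review"
    if trust_score < ENHANCED_LOGGING_THRESHOLD:
        return "enhanced_logging"
    return "pass"

def record_interaction(log: "EvidenceLog", prompt: str, output: str,
                       trust_score: float) -> dict:
    """Step 3: bind the Ω9-style score and the governance verdict into
    a single chained evidence record."""
    return log.append(prompt, output, {
        "trust_score": round(trust_score, 3),
        "governance_action": governance_action(trust_score),
    })
```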
Validation Milestones
Completed (Q3 2025)
- Patents Filed: Four patent-pending innovations in trustworthy and governed AI: cryptographic verification of AI interactions, risk-adaptive governance mechanisms, resilient security in confidential computing environments, and epistemic trust analysis (Ω9) for assessing AI output quality.
- Prototype Testing: Internal prototypes validated through stress testing and isolation-control efficacy tests.
- Theoretical Framework: Thermodynamic analysis of knowledge systems completed (Knowledge Series).
In Progress (Q4 2025 - Q1 2026)
- Performance Profiling: Benchmarking attestation latency and zero-knowledge proof (ZKP) aggregation throughput.
- FINMA Mapping: Detailed mapping of architecture capabilities to FINMA Guidance 08/2024 requirements (risk-proportional governance).
- Security Review: Third-party security audit of the zero-trust security infrastructure and cryptographic protocols.
Planned (2026)
- Pilot Deployment: Deployment of the integrated stack with a regulated collaborator.
- EU AI Act Assessment: Full assessment against EU AI Act requirements for high-risk systems (transparency, auditability, human oversight).