Quabbit

Safety Protocol

Last updated: March 19, 2026. How we protect users and ensure responsible use of AI simulations.

1. Simulation Labeling

All AI outputs on Quabbit are simulations, generated for informational and exploratory purposes only. They do not constitute professional medical, legal, or financial advice, and users should not rely on them for decisions that require licensed expertise. Seek qualified professionals for real-world guidance.

2. Consent for High-Risk Categories

Bots in the following categories require explicit consent before a session can begin:

  • MEDICAL — symptom analysis, treatment options, drug interactions
  • LEGAL — contract review, compliance, case research
  • FINANCE — portfolio analysis, market trends, risk assessment

Users must actively confirm (e.g., via a checkbox) that they understand the simulation disclaimer before sending messages. Consent is recorded once per session. A standard disclaimer is appended to outputs: "Not a licensed professional. This is not legal or medical advice. Consult a qualified professional."
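The consent flow above can be sketched as follows. This is a minimal, hypothetical illustration; all names (`Session`, `generate`, the category set) are assumptions for the sketch, not Quabbit's actual implementation.

```python
# Sketch: per-session consent gating for high-risk categories,
# with the standard disclaimer appended to outputs.

HIGH_RISK = {"MEDICAL", "LEGAL", "FINANCE"}
DISCLAIMER = ("Not a licensed professional. This is not legal or "
              "medical advice. Consult a qualified professional.")


def generate(prompt: str) -> str:
    # Stand-in for the actual bot backend.
    return f"[simulated answer to: {prompt}]"


class Session:
    def __init__(self, category: str):
        self.category = category
        self.consented = False  # consent is recorded once per session

    def record_consent(self) -> None:
        """Called when the user actively checks the disclaimer box."""
        self.consented = True

    def send(self, prompt: str) -> str:
        # High-risk sessions refuse messages until consent is recorded.
        if self.category in HIGH_RISK and not self.consented:
            raise PermissionError("Explicit consent required before messaging.")
        reply = generate(prompt)
        # Append the standard disclaimer to high-risk outputs.
        if self.category in HIGH_RISK:
            reply = f"{reply}\n\n{DISCLAIMER}"
        return reply
```

Because consent is stored on the session object, a user confirms once and then messages freely for the rest of that session.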

3. Content Blacklist

We block prompts that attempt to:

  • Encourage self-harm or suicide
  • Request instructions for building weapons or explosives
  • Seek illegal drugs or weapons
  • Involve child abuse or exploitation
  • Request doxxing or harassment of individuals

Blocked prompts are rejected with a generic message. We do not log or store the full content of blocked requests beyond what is necessary for abuse prevention.
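A prompt screen of this kind might look like the sketch below. The patterns and names are illustrative assumptions only, not Quabbit's production blacklist; the key properties shown are the generic rejection message and that the blocked prompt's content is not retained.

```python
import re

# Illustrative blocked-content patterns (assumed for this sketch).
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bhow to (build|make) (a |an )?(bomb|explosive|weapon)",
        r"\bdoxx?\b",
    )
]

GENERIC_REJECTION = "This request can't be processed."


def screen(prompt: str):
    """Return (allowed, message).

    Blocked prompts get a generic rejection; note that the prompt
    text itself is never stored by this function.
    """
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        return False, GENERIC_REJECTION
    return True, None
```

Real systems typically layer classifier models on top of pattern lists, but the contract is the same: reject with a generic message and retain no more than abuse prevention requires.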

4. Multi-Agent Debate

Our platform uses a Specialist, Challenger, and Judge architecture. The Challenger critiques the Specialist's answers; the Judge resolves disagreements. This design reduces single-model hallucinations and surfaces conflicting viewpoints. However, consensus does not guarantee accuracy or completeness. Users should always verify critical information through authoritative sources.
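The Specialist/Challenger/Judge flow can be wired up as a simple pipeline. In this sketch each agent is just a callable from prompt to text; the prompt wording and the `Verdict` type are assumptions for illustration, not the platform's actual interfaces.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    answer: str    # the Judge's resolved answer
    critique: str  # the Challenger's critique, surfaced to the user


def debate(prompt, specialist, challenger, judge) -> Verdict:
    """Run one Specialist -> Challenger -> Judge round."""
    # 1. The Specialist drafts an answer.
    draft = specialist(prompt)
    # 2. The Challenger critiques that draft.
    critique = challenger(f"Critique this answer to '{prompt}': {draft}")
    # 3. The Judge resolves disagreements into a final answer.
    final = judge(
        f"Question: {prompt}\nDraft: {draft}\nCritique: {critique}\n"
        "Resolve any disagreements and produce a final answer."
    )
    return Verdict(answer=final, critique=critique)
```

Keeping the critique in the returned `Verdict` is what lets conflicting viewpoints surface to the user rather than being silently resolved.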

5. Rate Limiting & Abuse

We apply rate limits to prevent abuse and ensure fair usage. Excessive requests may be throttled or blocked. We monitor for patterns that indicate abuse or attempts to circumvent safety measures.
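One common way to implement such throttling is a token bucket, sketched below. The rates and capacities are illustrative assumptions, not Quabbit's actual limits.

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    then refills at `rate` tokens per second. Parameters here are
    illustrative, not actual production limits."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request is throttled
```

A per-user bucket like this throttles sustained excessive traffic while still permitting short bursts of normal use.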

6. Audit & Support

Session conversations may be stored for quality assurance, support, and safety investigations. We use this data only for legitimate platform operations and in accordance with our Privacy Policy. We do not use conversation content to train third-party models without explicit consent.

7. Changes

We may update this protocol as we improve our safety systems. Material changes will be communicated via our website or in-app notice.

8. Contact

For safety-related questions or to report concerns, contact us at the address or email provided on our website.
