AI Risk & Explainability Disclaimer

Last Updated: January 23, 2025

QuROI services use advanced multi-agent AI systems powered by the QuROI Context Engine API. This disclaimer explains the inherent risks and limitations of AI-generated outputs.

⚠️ CRITICAL: AI Systems Are Not Infallible

AI outputs can be incorrect, misleading, biased, or harmful. Do not rely on them for decisions with legal, financial, medical, safety, or other high-stakes consequences.

1. Lack of Perfect Accuracy

AI-generated outputs may be:

  • Incorrect: Factually wrong or based on outdated information
  • Incomplete: Missing critical context or nuance
  • Misleading: Presenting plausible but false information (hallucinations)
  • Unverifiable: Making claims without citations or sources
  • Biased: Reflecting biases present in training data or model design
  • Outdated: Based on information with a knowledge cutoff date
  • Contradictory: Self-contradictory or logically inconsistent

2. Probabilistic & Non-Deterministic Behavior

QuROI's multi-agent outputs may vary across runs due to:

  • Model randomness and temperature sampling
  • Dynamic agent role assignments
  • Variations in multi-agent debate dynamics
  • Provider response variability

Repeatability is not guaranteed. Running the same prompt twice may produce different results.
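As an illustration of why repeatability cannot be guaranteed, the sketch below shows how temperature sampling (one of the sources of variability listed above) selects tokens probabilistically. This is a minimal, hypothetical example for explanatory purposes only; it does not reflect QuROI's actual implementation, and the function name and values are illustrative.

```python
import math
import random

def sample_token(logits, temperature):
    """Sample one token index from raw logits using temperature scaling.

    Higher temperature flattens the distribution (more variability);
    temperature near zero approaches deterministic argmax selection.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Two runs over identical inputs can produce different token sequences:
logits = [2.0, 1.8, 0.5]
run_a = [sample_token(logits, temperature=1.0) for _ in range(5)]
run_b = [sample_token(logits, temperature=1.0) for _ in range(5)]
```

Because each call draws from a probability distribution rather than always picking the highest-scoring option, run_a and run_b will often differ, even though the inputs are identical.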

3. No Professional Advice

AI outputs must NOT be relied upon for:

  • Legal decisions, contracts, or regulatory compliance
  • Medical diagnosis, treatment, or care decisions
  • Financial investment, tax planning, or advisory decisions
  • Immigration, residency, or visa matters
  • Safety-critical operations (aviation, nuclear, transportation)
  • Emergency response or crisis management

Always consult a qualified human expert for decisions with serious consequences.

4. No Explainability Guarantee

While QuROI provides structured multi-agent reasoning, the internal computation of third-party AI models is:

  • Non-transparent (closed-source models)
  • Not auditable by users
  • Not explainable in a deterministic way
  • Subject to vendor limitations and changes

We provide interpretable scaffolding (agent debates, role assignments) but not ground-truth explainability of model internals.

5. User Responsibility

By using QuROI services, you agree to:

  • Review outputs critically and validate them before use
  • Take full responsibility for your decisions based on AI outputs
  • Not rely on outputs in contexts where errors could cause harm
  • Understand that AI is a tool, not a replacement for human judgment

6. Third-Party Model Risks

QuROI relies on third-party AI model providers. These providers may:

  • Change model behavior without notice
  • Experience outages or degraded performance
  • Introduce new limitations or restrictions
  • Modify their terms of service

QuROI, Inc.

Suite 4805, 4th Floor, 1007 North Orange Street

Wilmington, DE 19801, United States

© 2025 QuROI, Inc. All rights reserved.