MODULE "Contradictory"

Test, audit, and secure your AI models with expert modules in compliance, robustness, and ethics. Access clear and verifiable reports.

Module Name:

"Contradictory" Module

Module Objective

The Contradictory module evaluates an AI system’s ability to handle two opposing viewpoints.
It calculates a Contradiction Index (CI) along with three complementary scores: Clarity, Neutrality, and Reasoning Balance.

How It Works

  • The user defines two contradictory prompts (A vs. B).
  • The AI generates responses (or they can be pasted manually in hybrid mode).
  • The responses are automatically evaluated by GPT-5, with the option for manual correction.
  • Results can be exported to CSV and compared across models.
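As an illustration of this workflow, one A-vs-B evaluation could be captured as a record and exported to CSV for cross-model comparison. This is a minimal sketch: the field names and types below are assumptions, not the module's actual export schema.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ContradictionResult:
    # Hypothetical field names -- the module's real CSV schema may differ.
    prompt_a: str        # first contradictory prompt
    prompt_b: str        # opposing prompt
    ci_auto: float       # automatic Contradiction Index (0 = coherent, 1 = inconsistent)
    ci_adjusted: float   # CI after optional manual correction
    clarity: int         # argument clarity, 0-100
    neutrality: int      # neutrality, 0-100
    balance: int         # logical balance / arbitration, 0-100

def export_csv(results: list[ContradictionResult], path: str) -> None:
    """Write one row per A-vs-B evaluation so runs can be compared across models."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(ContradictionResult)]
        )
        writer.writeheader()
        writer.writerows(asdict(r) for r in results)
```

Keeping one evaluation per row makes it straightforward to concatenate exports from different models and compare their indices side by side.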

Key Metrics

  • CI (auto & adjusted): degree of contradiction between the two responses (0 = coherent, 1 = inconsistent)
  • Argument clarity (0–100)
  • Neutrality (0–100)
  • Logical balance / arbitration (0–100)
  • Global score (average of all indicators)
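To show how these indicators could combine into a single global score, here is a minimal sketch. It assumes the CI is inverted and rescaled to 0–100 (so that higher always means better) before being averaged with the three other scores; the module's actual aggregation formula is not specified here and may differ.

```python
def global_score(ci: float, clarity: float, neutrality: float, balance: float) -> float:
    """Illustrative aggregation of the four indicators.

    Assumption: CI (0 = coherent, 1 = inconsistent) is mapped to a 0-100
    "coherence" score and averaged with clarity, neutrality, and balance.
    """
    if not 0.0 <= ci <= 1.0:
        raise ValueError("CI must lie in [0, 1]")
    coherence = (1.0 - ci) * 100.0  # rescale CI so higher = more coherent
    return (coherence + clarity + neutrality + balance) / 4.0
```

Under this assumption, a perfectly coherent model with full marks on all three scores reaches 100, while a fully inconsistent one with zero marks scores 0.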

Who Is It For?

  • AI law & compliance experts
  • Lawyers, compliance officers, and auditors
  • Academics, trainers, and students
  • Organizations or institutions engaged in structured debates

Practical Use Cases

  • Check the coherence of a legal assistant in litigation contexts
  • Test the neutrality of an educational or media-oriented AI
  • Audit a model used in public or political debates
  • Train students in adversarial and critical reasoning

Fields of Application

  • Law & arbitration
  • Education & research
  • AI governance
  • Media & communication

Why This Module Matters

The module lets you measure, document, and correct how an AI handles two opposing viewpoints.
This unique tool helps identify contradictions, adjust evaluations, and enhance the reliability of AI systems operating in regulated or high-stakes environments.

Available On

  • BULORΛ.ai secure portal
  • Secure token-based access
  • On-demand CSV / PDF export