Why CogniDetectAI Works

Clinical validation · Novel architecture · Honest limitations

In a world where general-purpose AI assistants answer any health question with confident fluency, CogniDetectAI takes a deliberately different approach — one grounded in clinical science, validated by practitioners, and designed to be honest rather than satisfying.

Clinically Validated

Medical Accuracy Review

Two practising medical professionals — including a consulting psychiatrist — independently reviewed CogniDetectAI's diagnostic outputs against known clinical profiles. They assessed whether the system's disorder indicators, severity classifications, and suggestions aligned with established psychiatric practice before accepting the research for publication.

Usability Assessment

Medical practitioners separately evaluated the system's usability — the clarity of questions, the appropriateness of AI interview phrasing, and whether the result presentation was meaningful in a clinical support context. CogniDetectAI was found suitable as a structured first-line screening aid for clinical intake workflows.

INSECT-2026 Publication: The CogniDetectAI system was accepted and published at the IEEE International Conference on Intelligent and Sustainable Electronics and Computing Technologies (INSECT-2026), May 2026, following peer review. This includes review of the clinical methodology, model architecture, dataset construction, and evaluation protocol.

What Makes It Strong

DSM-5 Aligned Questionnaire

All 27 screening questions are derived from Diagnostic and Statistical Manual of Mental Disorders (DSM-5) diagnostic criteria — the global standard for psychiatric classification. There is no guesswork: every item maps to a clinically established symptom domain.
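That item-to-domain mapping can be pictured as a simple lookup. The item identifiers and domain labels below are hypothetical illustrations, not CogniDetectAI's actual questionnaire:

```python
# Hypothetical sketch: each screening item is anchored to a DSM-5 symptom
# domain, so no question exists without a clinical basis.
# (Item IDs and domain labels are illustrative, not the real instrument.)
QUESTION_DOMAINS = {
    "q01_low_mood": "depressive_disorders",
    "q02_anhedonia": "depressive_disorders",
    "q03_excessive_worry": "anxiety_disorders",
    "q04_sleep_disturbance": "sleep_wake_disorders",
}

def domain_for(item_id: str) -> str:
    """Every item must resolve to a known domain; an unknown item is a bug."""
    return QUESTION_DOMAINS[item_id]
```

The point of the lookup being total (a `KeyError` on any unmapped item) is that no screening question can exist without a clinical anchor.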

Validated by Medical Professionals

Two practising medical professionals — including a consulting psychiatrist — reviewed CogniDetectAI's diagnostic output against real patient profiles. They assessed clinical accuracy, DSM-5 alignment, and appropriateness of severity classifications before the system was accepted for publication.

Usability Evaluated in Clinical Context

Beyond accuracy, medical practitioners evaluated the system's usability — assessing whether the interface, question phrasing, and result presentation meet standards appropriate for clinical support workflows. The tool was deemed suitable as a first-line screening aid.

Fully Anonymous & Zero Bias

No name, email, or identifier is collected. There is no user profile that could influence results over time. Every screening is a fresh, unbiased assessment — in contrast to AI assistants that accumulate personal context and adjust their tone to match user expectations.

Why Not Just Ask an LLM?

Tools like Gemini, ChatGPT, and similar conversational AI assistants are designed with one primary objective: to be maximally helpful and engaging to the user. In consumer health contexts, this creates structural problems that make them unsuitable for psychiatric screening.

Sycophancy

General LLMs learn to produce responses the user finds satisfying. In mental health contexts, a user who minimises their struggles will receive reassurance. A user who exaggerates will receive validation. Neither response is clinically honest.

Hallucination

LLMs generate text that sounds plausible, not text that is provably correct. Clinical misinformation — confidently delivered — is dangerous in psychiatric contexts where users may be vulnerable and seeking guidance.

No Clinical Grounding

General-purpose LLMs have no enforced DSM-5 alignment, no validated scoring methodology, and no accountability structure. Their answers vary between sessions and are not reproducible in the way a structured diagnostic tool must be.

Feature                                                     | CogniDetectAI | General LLMs
Questions grounded in DSM-5 criteria                        | Yes           | No
Deterministic scoring (same input = same output)            | Yes           | No
Clinically validated by medical professionals               | Yes           | No
Dual-stream cross-validation (RF + NLP)                     | Yes           | No
Sycophancy — tells users what they want to hear             | No            | Yes
Hallucination risk in clinical responses                    | No            | Yes
Session personalisation that skews neutrality               | No            | Yes
Fully anonymous — no account or profile                     | Yes           | No
Structured severity scoring (None / Mild / Moderate / High) | Yes           | No
Usability assessed by practising clinicians                 | Yes           | No
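The deterministic-scoring and severity-banding rows above can be sketched in a few lines. The 0–3 response scale and the threshold values here are assumptions for illustration, not CogniDetectAI's published parameters:

```python
def severity_band(responses):
    """Map questionnaire responses (each scored 0-3) to a severity label.

    Deterministic: the same responses always yield the same band.
    The thresholds below are illustrative, not the system's real cut-offs.
    """
    score = sum(responses)
    max_score = 3 * len(responses)
    ratio = score / max_score
    if ratio < 0.15:
        return "None"
    if ratio < 0.40:
        return "Mild"
    if ratio < 0.70:
        return "Moderate"
    return "High"

# Same input, same output every time, unlike a sampled LLM response.
assert severity_band([0] * 27) == "None"
assert severity_band([3] * 27) == "High"
```

This is the property the table calls "deterministic scoring": the result is a pure function of the answers, with no session state or sampling involved.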

Why This Matters

  • 1 in 5 people globally live with a mental health disorder
  • Over 70% of cases go undiagnosed or untreated in low-access regions
  • 10–20 years is the average delay between symptom onset and first treatment
  • $0 cost barrier: CogniDetectAI is completely free to use

Access to psychiatric evaluation is constrained by cost, geography, stigma, and availability. CogniDetectAI was built to close that gap at the first-contact stage — offering a clinically grounded, anonymous, multilingual screening that gives individuals and their families a structured starting point for seeking professional help. It does not replace clinicians. It ensures more people reach one.

Known Limitations & How to Work Around Them

Scientific integrity demands honesty about constraints. These limitations are known, studied, and — in most cases — addressable through deliberate use of the system.

Self-Reporting Honesty

The system's accuracy depends on the user answering questions honestly and with self-awareness. Individuals who minimise symptoms, lack insight into their condition, or feel stigmatised may provide responses that understate severity.

Mitigations

  • A trusted family member or close friend can take the screening on the user's behalf, answering based on observed day-to-day behaviour.
  • A consulting psychiatrist or therapist can use CogniDetectAI as a structured intake tool — entering responses based on what the patient communicates in session.
  • Take it slow. Answers should be considered carefully; rushed or reflexive responses reduce accuracy.
  • If possible, run two sessions — one by the user and one by someone who knows them well — and compare the outputs.

NLP Interview Requires Detailed Responses

The deep-dive interview extracts linguistic signals from written or spoken answers. Very brief responses (under 50 characters) are rejected by the system. Vague, one-word answers reduce the NLP signal quality and may lower the confidence of the final result.
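The 50-character minimum described above can be enforced with a simple guard. The function name, constant, and error message are hypothetical, used only to illustrate the rule:

```python
MIN_RESPONSE_CHARS = 50  # minimum stated for the deep-dive interview

def validate_interview_response(text: str) -> str:
    """Reject answers too short to carry usable linguistic signal."""
    cleaned = text.strip()
    if len(cleaned) < MIN_RESPONSE_CHARS:
        raise ValueError(
            f"Response must be at least {MIN_RESPONSE_CHARS} characters; "
            "please describe the experience in full sentences."
        )
    return cleaned
```

A hard floor like this filters out one-word replies before they reach the NLP stage, where they would only dilute the confidence of the final result.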

Mitigations

  • Encourage users to describe their experiences in full sentences.
  • A proxy (relative or clinician) can narrate what they have observed about the patient's behaviour in detail.

Screening, Not Diagnosis

CogniDetectAI is a probabilistic screening system. It identifies risk patterns and severity levels — it does not replace a formal clinical evaluation. A positive screening result should be followed up with a qualified mental health professional.

Mitigations

  • Use CogniDetectAI as a first step — a structured reason to seek professional consultation.
  • The downloadable PDF report is designed to support that conversation, not replace it.

Built to Be Honest, Not Comfortable

CogniDetectAI was not built to give reassuring answers. It was built to give clinically grounded ones. If that means flagging risk indicators that feel uncomfortable, that is not a flaw — it is the point. Early, honest detection is what separates a tool that actually helps from one that merely feels helpful.