Why CogniDetectAI Works
Clinical validation · Novel architecture · Honest limitations
In a world where general-purpose AI assistants answer any health question with confident fluency, CogniDetectAI takes a deliberately different approach: one grounded in clinical science, validated by practitioners, and designed to be honest rather than satisfying.
Clinically Validated
Medical Accuracy Review
Two practising medical professionals, including a consulting psychiatrist, independently reviewed CogniDetectAI's diagnostic outputs against known clinical profiles. They assessed whether the system's disorder indicators, severity classifications, and suggestions aligned with established psychiatric practice before accepting the research for publication.
Usability Assessment
Medical practitioners separately evaluated the system's usability: the clarity of questions, the appropriateness of AI interview phrasing, and whether the result presentation was meaningful in a clinical support context. CogniDetectAI was found suitable as a structured first-line screening aid for clinical intake workflows.
What Makes It Strong
DSM-5 Aligned Questionnaire
All 27 screening questions are derived from Diagnostic and Statistical Manual of Mental Disorders (DSM-5) diagnostic criteria, the global standard for psychiatric classification. There is no guesswork: every item maps to a clinically established symptom domain.
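The item-to-domain mapping described above can be sketched as a simple data structure. The class name, the two example item texts, and the domain labels below are hypothetical placeholders for illustration; the actual 27-item questionnaire is not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScreeningItem:
    """One questionnaire item tied to a named DSM-5 symptom domain."""
    item_id: int
    text: str
    dsm5_domain: str  # the clinically established domain the item maps to

# Hypothetical items illustrating the mapping (not the real questionnaire).
ITEMS = [
    ScreeningItem(1, "Little interest or pleasure in doing things?", "anhedonia"),
    ScreeningItem(2, "Feeling down, depressed, or hopeless?", "depressed mood"),
]

# The "no guesswork" property: every item carries a non-empty domain label.
assert all(item.dsm5_domain for item in ITEMS)
```

Keeping the domain label on each item, rather than inferring it at scoring time, is what makes every question traceable back to an established criterion.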
Validated by Medical Professionals
Two practising medical professionals, including a consulting psychiatrist, reviewed CogniDetectAI's diagnostic output against real patient profiles. They assessed clinical accuracy, DSM-5 alignment, and appropriateness of severity classifications before the system was accepted for publication.
Usability Evaluated in Clinical Context
Beyond accuracy, medical practitioners evaluated the system's usability, assessing whether the interface, question phrasing, and result presentation meet standards appropriate for clinical support workflows. The tool was deemed suitable as a first-line screening aid.
Fully Anonymous & Zero Bias
No name, email, or identifier is collected. There is no user profile that could influence results over time. Every screening is a fresh, unbiased assessment, in contrast to AI assistants that accumulate personal context and adjust their tone to match user expectations.
Why Not Just Ask an LLM?
Tools like Gemini, ChatGPT, and similar conversational AI assistants are designed with one primary objective: to be maximally helpful and engaging to the user. In consumer health contexts, this creates structural problems that make them unsuitable for psychiatric screening.
Sycophancy
General LLMs learn to produce responses the user finds satisfying. In mental health contexts, a user who minimises their struggles will receive reassurance. A user who exaggerates will receive validation. Neither response is clinically honest.
Hallucination
LLMs generate text that sounds plausible, not text that is provably correct. Clinical misinformation, confidently delivered, is dangerous in psychiatric contexts where users may be vulnerable and seeking guidance.
No Clinical Grounding
General-purpose LLMs have no enforced DSM-5 alignment, no validated scoring methodology, and no accountability structure. Their answers vary between sessions and are not reproducible in the way a structured diagnostic tool must be.
Why This Matters
Access to psychiatric evaluation is constrained by cost, geography, stigma, and availability. CogniDetectAI was built to close that gap at the first-contact stage, offering a clinically grounded, anonymous, multilingual screening that gives individuals and their families a structured starting point for seeking professional help. It does not replace clinicians. It ensures more people reach one.
Known Limitations & How to Work Around Them
Scientific integrity demands honesty about constraints. These limitations are known, studied, and, in most cases, addressable through deliberate use of the system.
Self-Reporting Honesty
The system's accuracy depends on the user answering questions honestly and with self-awareness. Individuals who minimise symptoms, lack insight into their condition, or feel stigmatised may provide responses that underrepresent severity.
Mitigations
- A trusted family member or close friend can take the screening on the user's behalf, answering based on observed day-to-day behaviour.
- A consulting psychiatrist or therapist can use CogniDetectAI as a structured intake tool, entering responses based on what the patient communicates in session.
- Take it slow. Answers should be considered carefully; rushed or reflexive responses reduce accuracy.
- If possible, run two sessions, one by the user and one by someone who knows them well, and compare the outputs.
NLP Interview Requires Detailed Responses
The deep-dive interview extracts linguistic signals from written or spoken answers. Very brief responses (under 50 characters) are rejected by the system. Vague, one-word answers reduce the NLP signal quality and may lower the confidence of the final result.
Mitigations
- Encourage users to describe their experiences in full sentences.
- A proxy (relative or clinician) can narrate what they have observed about the patient's behaviour in detail.
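The length gate described above (responses under 50 characters are rejected) can be sketched as follows. Only the 50-character minimum comes from the text; the function name, constant name, and message wording are assumptions.

```python
MIN_RESPONSE_CHARS = 50  # stated threshold: very brief responses are rejected

def validate_response(text: str) -> tuple[bool, str]:
    """Return (accepted, message) for an interview response.

    Illustrative gatekeeping only; the real system's interface is not shown.
    """
    cleaned = text.strip()
    if len(cleaned) < MIN_RESPONSE_CHARS:
        # Too little text to extract meaningful linguistic signals from.
        return False, "Response too brief. Please describe your experience in full sentences."
    return True, "ok"
```

A gate like this rejects one-word answers outright, while vague but sufficiently long responses would still pass and simply yield weaker NLP signals, as the section notes.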
Screening, Not Diagnosis
CogniDetectAI is a probabilistic screening system. It identifies risk patterns and severity levels; it does not replace a formal clinical evaluation. A positive screening result should be followed up with a qualified mental health professional.
Mitigations
- Use CogniDetectAI as a first step: a structured reason to seek professional consultation.
- The downloadable PDF report is designed to support that conversation, not replace it.
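As an illustration of what "probabilistic screening with severity levels" can look like, here is a minimal sketch that maps a risk score to a severity band. The band names and cut-off values are invented for illustration and are not CogniDetectAI's actual thresholds.

```python
def severity_band(risk_score: float) -> str:
    """Map a probabilistic risk score in [0, 1] to a severity label.

    Hypothetical bands; real clinical cut-offs would come from validation data.
    """
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be in [0, 1]")
    if risk_score < 0.25:
        return "minimal"
    if risk_score < 0.50:
        return "mild"
    if risk_score < 0.75:
        return "moderate"
    return "severe"
```

The point of the mapping is that the output is a risk band prompting follow-up, not a diagnosis: even a "severe" label is a reason to consult a professional, not a clinical conclusion.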
Built to Be Honest, Not Comfortable
CogniDetectAI was not built to give reassuring answers. It was built to give clinically grounded ones. If that means flagging risk indicators that feel uncomfortable, that is not a flaw; it is the point. Early, honest detection is what separates a tool that actually helps from one that merely feels helpful.