AI-powered structured assessments and adaptive time-bound interviews — with proctoring, real-time scoring, and rich narrative insights. Evaluate candidates at scale with fairness and depth.
The Problem
One-size-fits-all tests — or no test — mean panels argue from memory. You lose clarity on who actually fits the role.
Without a shared framework, "good" shifts by interviewer — and candidates get uneven treatment.
Starting from zero takes days most teams don't have — so you skip rigor when you need it most.
Unstructured notes and recency bias replace comparable scores — offers get harder to defend, and consensus gets weaker.
Capabilities
From competency map to scored results — AI-generated questions, hybrid scoring, skill radar, and executive dashboards in one flow.
MCQ, multi-select, short answer, fill-in-the-blank, matching, ordering, true/false, and writing prompts — AI-generated per candidate, not pulled from a static bank.
Competencies tied to level, function, and team — so assessments reflect real job success with difficulty multipliers and partial credit.
An AI Template Assistant drafts assessments from your role description. A Template Governance Agent validates them before saving.
Start from patterns that fit common roles — engineering, managers, sales, CS, HR, and more — with pre-configured skill weights and question mixes.
Hybrid scoring: objective auto-grading plus AI-assisted semantic grading for text answers, with partial credit (0 / 0.5 / 1) and difficulty multipliers.
Hiring teams contribute structured input in one place — full conversation transcripts, per-question scores with AI reasoning, and exportable PDF reports.
Skill radar charts, AI narrative insights, executive dashboards with aggregated analytics, comparative analysis, and CSV export.
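The hybrid scoring model above can be sketched as a weighted sum: each answer earns 0, 0.5, or 1 credit (from objective auto-grading or AI semantic grading), scaled by the question's points and its difficulty multiplier. The type names and multiplier values here are illustrative assumptions, not the product's actual API or tuning.

```typescript
type Credit = 0 | 0.5 | 1;

interface ScoredQuestion {
  credit: Credit;     // objective auto-grade or AI semantic grade
  points: number;     // base points assigned to the question
  difficulty: number; // e.g. 1.0 easy, 1.5 medium, 2.0 hard (assumed values)
}

// Weighted total: each answer's credit scaled by points and difficulty.
function hybridScore(questions: ScoredQuestion[]): { earned: number; possible: number; pct: number } {
  const earned = questions.reduce((s, q) => s + q.credit * q.points * q.difficulty, 0);
  const possible = questions.reduce((s, q) => s + q.points * q.difficulty, 0);
  return { earned, possible, pct: possible === 0 ? 0 : earned / possible };
}
```

Because partial credit and difficulty both enter the same weighted sum, two candidates who answer different question sets still land on a comparable percentage.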
Assessment Types
Pick the format that fits the moment — each shares the same scoring discipline so comparisons stay fair.
Pre-generated question widgets (MCQ, matching, ordering, etc.) delivered in a chat-style UI. AI generates unique questions per candidate from your competency framework — with auto-grading, partial credit, and rich narrative insights.
Adaptive AI-led conversational interviews with dual timers, a fairness blueprint for balanced skill coverage, real-time per-answer scoring, identity rechecks, and a comprehensive hire recommendation report.
Free-form AI interviews that adapt in real time — evaluating communication, depth, and problem-solving with scored summaries and clear hire, hold, or reject guidance.
Continuous skill measurement where difficulty adjusts to performance — brief daily checks that track skill evolution and flag development needs over time.
Question Types
Not pulled from a static bank. Each question is generated from your competency framework, tuned to the role, and scored with rubric-based evaluation.
One correct answer from options — auto-graded instantly.
Multiple correct answers — partial credit for partial matches.
Exact or semantic matching to test precise recall.
Brief text with AI-assisted semantic grading.
Extended responses evaluated for depth and reasoning.
Drag-to-reorder items to test sequencing knowledge.
Pair related items to assess association and recall.
Quick factual checks with optional justification.
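One way the "partial credit for partial matches" rule on multi-select questions could work: count correct picks, penalize wrong picks, and snap the result onto the same 0 / 0.5 / 1 scale used elsewhere. The exact rubric is an assumption for illustration.

```typescript
// Hypothetical multi-select partial-credit rule: (hits - wrong picks) / total
// correct options, clamped to [0, 1], snapped to the 0 / 0.5 / 1 scale.
function multiSelectCredit(selected: string[], correct: string[]): 0 | 0.5 | 1 {
  const correctSet = new Set(correct);
  const hits = selected.filter((s) => correctSet.has(s)).length;
  const misses = selected.length - hits; // wrong picks reduce credit
  const raw = Math.max(0, (hits - misses) / correct.length);
  if (raw >= 1) return 1;
  if (raw >= 0.5) return 0.5;
  return 0;
}
```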
Structured Assessments
HR builds templates with a question mix, skills, timing, and proctoring rules. Candidates receive a magic link to take the test in a chat-style UI. Questions are AI-generated per candidate, scored automatically, and results include rich narrative insights.
Define skills to assess, question types, counts, points, per-type time limits, sections, passing score, and proctoring toggles. Or let the AI Template Assistant auto-draft from a role description.
Select a template and one or more candidates, set expiry and schedule. The system snapshots the template version and kicks off background AI question generation with retries and tenant quotas.
Once questions are ready, a magic-link email is sent. Candidates open the assessment without creating an account — token-based access, zero friction.
Per-question scores with AI reasoning, skill radar, narrative insights, proctoring report, full conversation transcript, and PDF export — all in one results view.
No account needed. If questions are still generating, a "preparing" screen keeps the candidate informed.
Timer details, rules, proctoring consent, and camera setup — clear expectations before the first question.
Structured question widgets — MCQ, multi-select, fill-in-the-blank, ordering, matching — appear one at a time. Candidates interact via control keywords (READY, NEXT) and rendered widgets.
The server validates timing with a grace window, runs hybrid scoring, generates the skill radar and AI insights, and writes results.
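The server-side timing check with a grace window can be sketched as follows: a submission is accepted if it arrives before the deadline plus a short buffer that absorbs network latency and clock skew. The 30-second window is an assumed default, not a documented value.

```typescript
// Assumed grace window for late-arriving submissions.
const GRACE_MS = 30_000;

// Accept the submission if it lands within duration + grace of the start time.
function isSubmissionValid(startedAtMs: number, durationMs: number, submittedAtMs: number): boolean {
  return submittedAtMs <= startedAtMs + durationMs + GRACE_MS;
}
```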
Time-Bound Interviews
Unlike structured assessments, time-bound interviews feature dynamic, real-time AI-generated questions driven by a fairness blueprint — ensuring balanced skill coverage and adaptive difficulty.
Define role, level, duration, question count targets, pass threshold, difficulty mix, and anti-manipulation settings. The system auto-generates a fairness blueprint and an interviewer policy.
Create assignments with invite tokens and optional email invitations. A Session Capacity Manager controls admission — if at capacity, candidates see a waiting room.
The HR hub shows tabs for Overview, Interviews, Assignments, and Results with deep drill-down into individual sessions, integrity flags, and AI reports.
Face capture creates a reference image. Browser enters fullscreen, tab-switch enforcement activates.
Dual timers run — an overall session timer and a per-question timer dynamically adjusted by difficulty, question type, and seniority level. Juniors get more time.
AI generates questions based on a fairness blueprint, adjusting for candidate performance, coverage gaps, and remaining time. Periodic identity rechecks compare live frames to the reference image.
The interview ends when all skills are "covered" per the blueprint, time expires, or violations trigger termination. A comprehensive AI report is generated instantly.
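The per-question timer described above — adjusted by difficulty, question type, and seniority, with juniors getting more time — could look like a product of factors over a base budget. Every factor value below is an illustrative assumption, not the product's actual tuning.

```typescript
// Assumed adjustment factors; real values would come from interview config.
const DIFFICULTY: Record<string, number> = { easy: 1.0, medium: 1.3, hard: 1.6 };
const QUESTION_TYPE: Record<string, number> = { factual: 1.0, design: 1.5, coding: 2.0 };
const SENIORITY: Record<string, number> = { junior: 1.25, mid: 1.0, senior: 0.9 }; // juniors get more time

// Per-question budget = base time scaled by all three factors (unknown keys default to 1).
function questionTimeSec(baseSec: number, difficulty: string, type: string, seniority: string): number {
  const factor =
    (DIFFICULTY[difficulty] ?? 1) * (QUESTION_TYPE[type] ?? 1) * (SENIORITY[seniority] ?? 1);
  return Math.round(baseSec * factor);
}
```

The overall session timer runs independently; this only sets the per-question deadline within it.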
Key Capabilities
Hard session cap plus per-question limits, dynamically adjusted by difficulty, question type, and seniority level.
Persisted skill weights, coverage targets, difficulty mix, and adaptation rules — ensuring balanced evaluation for every candidate.
Tracks which skills have been adequately assessed and drives interview completion — no skill left untested.
AI generates questions based on performance, coverage gaps, and time remaining — deeper where the candidate is strong, broader where gaps appear.
Optional AI detection of evasion or manipulation in candidate responses, with server-side flagging and violation tracking.
Periodic AI face comparison (reference vs. live) during the session — confirming the same person throughout.
Zero score for no answer; partial credit for juniors on first timeout per skill — fair treatment that adjusts to experience level.
Each answer scored in real time with provider fallback (OpenAI → Anthropic → Gemini) for maximum resilience.
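The OpenAI → Anthropic → Gemini fallback chain reduces to: try each provider in order and return the first success. This is a minimal synchronous sketch against a hypothetical scorer interface — real provider calls would be async SDK requests.

```typescript
// Hypothetical scorer signature; stands in for a real provider SDK call.
type ScoreFn = (answer: string) => number;

// Try providers in priority order; surface the last error only if all fail.
function scoreWithFallback(answer: string, providers: ScoreFn[]): number {
  let lastError: unknown;
  for (const score of providers) {
    try {
      return score(answer);
    } catch (err) {
      lastError = err; // provider failed; fall through to the next one
    }
  }
  throw lastError ?? new Error("no providers configured");
}
```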
Compare
Both share the same proctoring, magic-link access, and ATS integration — but serve different evaluation needs.
| Feature | Structured Assessment | Time-Bound Interview |
|---|---|---|
| Format | Pre-generated structured widgets in a chat wrapper | Free-form conversational AI interview |
| Question generation | Batch-generated before session (async job queue) | Generated dynamically during the session |
| Adaptivity | Fixed question set per candidate | Adaptive based on performance and coverage |
| Timing | Overall duration with per-question time limits | Dual timers: session deadline + per-question (difficulty-adjusted) |
| Scoring | Objective + AI semantic grading, partial credit | Real-time per-answer AI scoring with provider fallback |
| Identity | Snapshot-based proctoring | Active identity rechecks (face comparison during session) |
| Completion trigger | All questions answered or time expires | All skills "covered" per blueprint, or time expires |
| Reports | Skill radar, AI insights, proctoring score | Comprehensive hire recommendation, communication assessment |
| Best for | Standardized skill testing at scale | Deep technical evaluation with adaptive depth |
Scoring & Results
Hybrid scoring, AI narrative insights, and visual radar charts — ready the moment the session ends.
Visual radar chart mapping candidate strengths across every assessed skill — instantly see where they excel and where gaps exist.
LLM-generated summaries covering strengths, weaknesses, and recommendations drawn from sample Q&A and scoring evidence.
Each answer scored in real time with evidence-based rubric evaluation and multi-provider AI fallback for resilience.
Time-bound interviews produce a comprehensive report: HR summary, hire/no-hire recommendation, communication assessment, and skill breakdown.
Complete conversation transcript, per-question scores with AI reasoning, proctoring snapshots, and PDF or CSV export.
Aggregated analytics, comparative analysis across candidates and cohorts, and exportable reports for leadership.
Integrity & Proctoring
Protect the signal in your scores without bolting on another vendor — configurable per assessment type, from lightweight to high-stakes.
Face detection and gaze tracking keep watch for presence and unusual activity throughout the session.
Tab switches trigger violation warnings — 3+ violations auto-terminate the assessment. All events are server-tracked.
A clear record of what happened and when — violation count, proctoring score, and validity flag in the results.
Face matching confirms the right person before questions unlock. Time-bound interviews add periodic rechecks during the session.
Dial monitoring rules up or down per assessment type — face detection, gaze tracking, tab-switch limits, and snapshot frequency.
Continuous client heartbeat confirms active session participation — no silent disconnects go unnoticed.
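The tab-switch rule above — violation warnings, with 3+ violations auto-terminating the assessment — amounts to a small server-side counter. The class shape is illustrative; only the threshold of 3 comes from the description.

```typescript
// Server-side violation counter with the described auto-termination threshold.
class ViolationTracker {
  private count = 0;
  constructor(private readonly limit = 3) {}

  // Record one violation; returns true when the session should terminate.
  record(): boolean {
    this.count += 1;
    return this.count >= this.limit;
  }

  get violations(): number {
    return this.count;
  }
}
```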
Platform
Magic links, ATS integration, scheduled invitations, and multi-provider AI — the infrastructure that makes assessments work at scale.
No candidate account needed — token-based authentication with zero-friction onboarding.
Server-tracked violations with configurable auto-termination thresholds.
Auto-trigger assessments from job applications or pipeline stage changes when `auto_send_assessment` is enabled.
Set future send dates with cron-based processing — plan campaigns ahead of time.
Conversational analytics assistant for HR to explore screening data with charts and trends.
Fallback chains across OpenAI, Anthropic, and Gemini for resilience — no single point of failure.
Use Cases
Hiring, promotion, screening at scale, and team development — with the same rigor.
Structured assessments test coding, systems thinking, and domain strength with auto-graded MCQ, ordering, and matching questions — before deep interview loops.
Time-bound interviews adapt questions in real time to probe judgment, communication, and strategy — with a hire recommendation and communication assessment.
Defensible, comparable records using the same rubric — skill radar and AI insights give clear evidence when someone steps up or when you say not yet.
Adaptive daily assessments track skill evolution over time — see where the group is strong, where gaps appear, and what to develop next.
ATS integration auto-triggers structured assessments when candidates apply. Magic links, batch question generation, and auto-scoring handle volume without manual effort.
Full proctoring reports, timestamped violation logs, identity verification records, and exportable transcripts — ready when compliance asks.
AI-generated questions, hybrid scoring, proctoring, and rich insights — set up in minutes, not weeks.