Evaluation Context · 17 min read · March 30, 2026

How Ovii Adds Peer Ranking Context to Assessments and Async Video

A raw score is often not enough for a recruiter to act with confidence. A 72 can be strong in one funnel and mediocre in another, and a video score without job-level context can feel just as ambiguous. Ovii addresses this by adding peer-ranking context on top of its underlying evaluation surfaces.

In the assessment stack, the aggregate service computes a weighted score from the real marks structure across MCQ, coding, and comprehension, then adds percentile, total-candidate count, min and max score range, rank position, and time context for the job. In the async video stack, the results service computes an average from evaluated answers, then compares that rounded score against other scored sessions on the same job to produce rank, candidate count, and min-max range. The UI brings those signals into recruiter and student drawers as performance position, range bars, score tiles, proctoring context, and per-question detail rather than as a naked rank label. This article explains that ranking model in plain language while staying close to what the code does today.

Why Raw Scores Need Peer Context

Recruiters rarely make decisions from absolute scores alone. A candidate scoring 68 percent may be underwhelming in one funnel, but top quartile in another where the assessment is hard or the talent pool is unusually thin. The same logic holds for video interviews: one score means very little until the recruiter knows how it sits against the rest of the pipeline.

That is why ranking context matters. It does not replace the underlying evaluation, but it helps recruiters interpret what a score actually means in the live hiring set they are dealing with.

That is the approach Ovii's current implementation takes: it adds that context on top of the evidence rather than replacing the evidence with rank alone.

Ovii uses rank as context around evaluation, not as a substitute for evaluation.

Assessment Ranking Starts With A Weighted Aggregate

The assessment stack does not rank candidates off a single question type or a shallow average. The aggregate service first builds a job-level picture of what was actually assigned: MCQ, coding, and comprehension questions, each with their own marks structure and time model.

From there it computes earned marks against total marks. MCQ uses the question marks, coding uses stored evaluation results against each coding question's max marks, and comprehension uses earned marks against the assigned comprehension questions. The final aggregate score is then calculated as total earned marks divided by total available marks across the whole assessment.

This is an important trust detail. Ovii is not pretending that a coding percentage and an MCQ correctness rate are naturally identical. It is grounding the aggregate in the real marks configuration of the assessment.

How Ovii Builds Assessment Aggregate Score
| Component | Current calculation idea | Why it matters |
| --- | --- | --- |
| MCQ | Earned MCQ marks against total assigned MCQ marks. | Prevents one-mark quizzes and higher-weight MCQs from being treated as equal. |
| Coding | Stored coding evaluation score against the coding question's max marks. | Keeps coding contribution tied to the real rubric weight of the question. |
| Comprehension | Earned comprehension marks against assigned comprehension marks. | Lets reading sections participate in the overall aggregate instead of being ignored. |
| Aggregate | Total earned marks divided by total marks across all assigned questions. | Produces one weighted score without flattening the exam structure. |
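The marks-weighted aggregate described above reduces to a simple ratio: total earned marks over total assigned marks. Here is a minimal sketch in Python; the `sections` shape and field names are hypothetical stand-ins for the stored MCQ, coding, and comprehension marks the service actually reads.

```python
def aggregate_score(sections):
    """Weighted aggregate: total earned marks / total assigned marks.

    `sections` is a hypothetical shape: each entry carries the earned and
    max marks for one component (MCQ, coding, comprehension).
    """
    earned = sum(s["earned"] for s in sections)
    total = sum(s["max"] for s in sections)
    return round(100 * earned / total, 2) if total else 0.0

# A 5-mark coding question carries more weight than a 1-mark MCQ:
score = aggregate_score([
    {"earned": 1, "max": 1},   # MCQ
    {"earned": 3, "max": 5},   # coding, scored against its max marks
    {"earned": 2, "max": 4},   # comprehension
])
# score → 60.0, i.e. 6 earned marks out of 10 assigned
```

Because the ratio is taken over marks rather than over per-question percentages, a heavily weighted coding question moves the aggregate more than a one-mark MCQ, which is exactly the behavior the table describes.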

Assessment Rank Only Counts Completed Peer Attempts

Once the aggregate score exists, Ovii adds peer context on top. The rank and total-candidate counts are not meant to compare one candidate against everyone who was merely invited. The service filters heavily before it computes that context.

For the rank position and total-candidate count, the code only considers candidates who have completed the assessment flow and who actually produced answer data. That keeps the denominator cleaner. A candidate should not be compared against a set full of untouched invites or empty shells.

This is a small but meaningful design choice. It makes the ranking signal closer to a performance position among real attempts rather than an inflated marketing number.

Assessment rank is computed against completed peers with evidence, not against the whole invite list.
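The filtering and rank logic can be sketched as follows. This is an illustration under assumptions, not the real service code: the `attempts` records and the `completed` and `has_answers` fields are hypothetical stand-ins for the completion and answer-data checks described above, and the returned field names mirror the payload the article describes.

```python
def peer_context(candidate_score, attempts):
    """Rank, percentile, and score range among completed peer attempts.

    `attempts` is a hypothetical list of records with `score`, `completed`,
    and `has_answers`; only completed attempts with answer data count.
    """
    scored = [a["score"] for a in attempts
              if a["completed"] and a["has_answers"]]
    pool = scored + [candidate_score]  # assumption: candidate included in totals
    return {
        "rank": sum(1 for s in scored if s > candidate_score) + 1,
        "totalCandidates": len(pool),
        "percentile": round(100 * sum(1 for s in pool
                                      if s <= candidate_score) / len(pool)),
        "minScore": min(pool),
        "maxScore": max(pool),
    }

ctx = peer_context(72, [
    {"score": 85, "completed": True, "has_answers": True},
    {"score": 60, "completed": True, "has_answers": True},
    {"score": 95, "completed": False, "has_answers": False},  # untouched invite, excluded
])
```

Note how the third record, despite its high score, never enters the denominator: an invite without a completed attempt and answer data does not dilute or inflate anyone's position.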

Assessment Responses Carry More Than Rank

The aggregate assessment response does not stop at one rank number. It also returns percentile rank, total candidates for the job, min and max score range, total assessment time, and elapsed candidate time where available.

That richer payload gives the UI room to tell a better story. A recruiter can see whether a candidate is high in the cohort, how wide the score spread is, how much time the candidate used, and then open detailed MCQ, coding, and comprehension evidence rather than guessing from one summary tile.

This is exactly how contextual ranking should work. Rank is useful, but it is even more useful when it arrives together with the edges of the score distribution and the component breakdown below it.

Async Video Ranking Uses Job-Level Peer Comparison Too

The async video flow follows the same broader philosophy even though the scoring mechanics are different. The results service first computes the candidate's average score from the evaluated answers in the session. If there are no evaluated answers yet, the feature shows that honestly as pending instead of inventing a position.

When an average exists, Ovii then looks across sessions for the same job and builds a peer score set from other sessions with positive average scores. The current session is excluded from the peer map, and the candidate's rank is calculated as the count of peers with a strictly higher score plus one.

The service also computes total candidates for the job in that scored set and returns the min and max scores across peers plus the current candidate. That gives the async result drawer the same kind of relative frame that the assessment stack provides.

How Ovii Derives Async Video Rank
| Step | Current behavior | Why it matters |
| --- | --- | --- |
| Session score | Average of non-null evaluation scores across the candidate's async answers. | Uses the evaluated interview performance rather than a single question. |
| Peer pool | Other sessions on the same job with a positive average score. | Keeps ranking job-specific and avoids empty sessions. |
| Rank rule | Number of higher-scoring peers plus one. | Makes the position easy to interpret and stable for recruiters. |
| Score range | Returns min and max across peers plus the current candidate. | Lets the UI show where the candidate sits inside the job cohort. |
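Put together, those steps can be sketched like this, assuming a hypothetical `sessions` map of session id to per-answer evaluation scores, with `None` marking an answer that has not been evaluated yet:

```python
def async_rank(session_id, sessions):
    """Rank an async video session against scored peers on the same job.

    `sessions` is a hypothetical map of session id -> list of answer
    scores (None = not yet evaluated). Field names are illustrative.
    """
    def avg(scores):
        done = [s for s in scores if s is not None]
        return sum(done) / len(done) if done else None

    mine = avg(sessions[session_id])
    if mine is None:
        return {"status": "pending"}  # no evaluated answers yet: shown honestly
    mine = round(mine)

    # Peer pool: other sessions on the same job with a positive average.
    peers = [round(avg(v)) for k, v in sessions.items()
             if k != session_id and avg(v) is not None and avg(v) > 0]
    pool = peers + [mine]
    return {
        "score": mine,
        "rank": sum(1 for p in peers if p > mine) + 1,  # strictly higher + 1
        "totalCandidates": len(pool),
        "minScore": min(pool),
        "maxScore": max(pool),
    }

result = async_rank("s1", {
    "s1": [80, 70, None],  # average of the two evaluated answers: 75
    "s2": [90, 88],        # peer with average 89
    "s3": [None, None],    # unscored session, excluded from the pool
})
```

The "strictly higher plus one" rule means tied candidates share a rank, which keeps the position stable when two sessions land on the same rounded score.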

Ovii Also Tries To Recover Stuck Async Results

One especially useful detail in the async results service is that it does not blindly trust the current stored state. Before building the final payload, it runs a best-effort recovery pass over answers that have media or transcript data but are still missing full evaluation.

If an answer has media but no transcript, Ovii re-triggers the transcription path. If the transcript already exists but evaluation is missing, the service reconstructs the evaluation request from the question, session, and job context and republishes it into the evaluation pipeline.
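A minimal sketch of that recovery pass, with `transcribe` and `evaluate` as hypothetical stand-ins for the transcription path and the evaluation pipeline the service republishes into:

```python
def recover_stuck_answers(answers, transcribe, evaluate):
    """Best-effort recovery before building the final result payload.

    `answers` is a hypothetical list of stored answer records;
    `transcribe` and `evaluate` stand in for the real async queues.
    """
    for a in answers:
        if a.get("media") and not a.get("transcript"):
            transcribe(a)   # media exists, transcript missing: re-run transcription
        elif a.get("transcript") and a.get("evaluation") is None:
            evaluate(a)     # transcript exists, evaluation missing: republish

transcribed, evaluated = [], []
recover_stuck_answers(
    [
        {"id": 1, "media": "m", "transcript": None, "evaluation": None},
        {"id": 2, "media": "m", "transcript": "t", "evaluation": None},
        {"id": 3, "media": "m", "transcript": "t", "evaluation": 7},  # healthy, untouched
    ],
    transcribe=lambda a: transcribed.append(a["id"]),
    evaluate=lambda a: evaluated.append(a["id"]),
)
```

Only the stalled answers are re-queued; fully evaluated answers pass through untouched, so the pass is safe to run on every result build.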

That is not just an operational nicety. It directly affects trust in the ranking surface. Recruiters are less likely to see candidates stuck forever in an incomplete state just because one earlier async step stalled.

Ovii tries to recover stuck async answers before building the result view, which makes the ranking surface more reliable.

The UI Keeps Rank Close To Evidence

The recruiter and student drawers do not present rank as a detached leaderboard tile. In the assessment dashboard, the aggregate score sits next to rank position, min and max score range, timing context, proctoring summary, domain breakdown, and the detailed MCQ, coding, and comprehension accordions.

The async video drawers follow the same approach. They show score, time used, rank position, range bars, proctoring context, and then the actual question-by-question evidence underneath, including transcript-backed feedback and evaluation detail where available.

That is the right product posture. If recruiters are going to use rank, they should be able to inspect what is behind it immediately.

Ranking Still Is Not The Hiring Decision

Ovii's implementation is strongest when rank is treated as calibration rather than verdict. A candidate can rank first because the cohort is weak. Another candidate can rank lower but still show the exact technical or behavioral strengths a hiring manager needs.

The product design already hints at that healthier use. The same surfaces that show rank also show detailed component evidence, proctoring context, transcript-backed analysis, and per-question review. In other words, Ovii gives the recruiter context first and decision power second.

That is a better trust story than saying the platform simply identifies the best candidate automatically.

Rank helps recruiters calibrate their judgment, but Ovii still keeps the underlying evidence front and center.

The Better Product Story Here

The shallow story would be to say Ovii ranks candidates. The stronger story is that Ovii adds peer context to evaluation surfaces that already have real structure behind them.

On assessments, the platform computes a weighted aggregate from the real marks design, then layers in percentile, total-candidate count, score range, and rank among completed attempts. On async video, it computes a session average, compares that score against other scored sessions for the same job, and returns the same kind of range-aware position. The UI then keeps those signals attached to evidence rather than isolating them as vanity metrics.

That is the story worth publishing, because it shows Ovii is trying to make recruiter interpretation easier without pretending that ranking is the whole evaluation.
