Article · Interview Authoring · 18 min read · March 30, 2026

How Ovii Generates Async Video Interview Questions and Evaluation Criteria From the Job

Async video authoring gets weak very quickly when teams start with a blank page, write improvised questions, and leave evaluation criteria vague. Ovii's current implementation is much more structured than that. Recruiters can still author questions manually, but the product also supports a queued AI generation path for general async questions, a separate queued path for behavioral question sets, and a job-aware criteria-generation flow for hand-written prompts.

The job context matters throughout: title, description, experience range, focus areas, difficulty, and customization instructions shape the generated output, while manual criteria generation also uses the job and question text to anchor expectations. Just as important, generated questions do not automatically become live interview content. They are returned as a reviewable set with question text, evaluation criteria, time limits, categories, competencies, follow-ups, and constraint metadata, and recruiters add them one by one into the real job question set.

Once saved, the questions carry sequence, category, time limit, retake policy, complexity, and question type for later evaluation routing. This article walks through that full authoring model in plain language, while staying close to what the code does today.

Why Async Video Authoring Should Start With Hiring Intent

A lot of one-way video products treat question creation as a lightweight setup step. The recruiter types a few prompts, maybe copies a template from another role, and hopes the reviewers will reconstruct what a good answer should sound like later.

That is a weak operating model. If the question is vague, the evaluation criteria are generic, or the question set drifts away from the role, the whole async interview becomes harder to trust. Recruiters end up watching videos without a strong scoring target, and candidates end up answering prompts that do not clearly represent the job.

Ovii is stronger when the story starts earlier. The product is trying to turn the job into a structured interview authoring context before the candidate ever sees a recording screen.

Ovii treats async video questions as part of hiring design, not as a last-minute list of prompts.

Ovii Gives Recruiters More Than One Authoring Lane

The current flow supports at least three distinct authoring modes. Recruiters can write a question manually from scratch, ask Ovii to generate a broader async question set, or generate a behavioral set aimed at soft-skill, judgment, and situation-based evaluation.

That split matters because not every role needs the same authoring posture. Some teams already know the exact prompt they want to ask and only need help creating a better rubric. Others need the platform to propose a structured first draft based on the role. Behavioral interview design is distinct enough that it deserves its own generation lane rather than being loosely mixed into every general question set.

This is a healthier product shape than one magic generate button. It gives recruiters flexibility without collapsing everything into one opaque workflow.

Ovii's Current Async Video Authoring Paths
| Authoring path | What the recruiter does | What Ovii adds |
| --- | --- | --- |
| Manual authoring | Writes question text, time limit, retake policy, and category directly. | Stores the question as a structured job-linked entity and can generate criteria from job context. |
| AI question generation | Selects count, difficulty, focus areas, language, and custom instructions. | Builds a reviewable general async question set from the job. |
| Behavioral generation | Chooses behavior topics and role context. | Builds a separate behavioral question set rather than flattening soft-skill prompts into the general lane. |

AI Generation Is Queued And Batch-Tracked

One of the strongest implementation details is that Ovii does not block the recruiter interface on long-running question generation. When the recruiter requests a generated question set, the controller creates a generation-job record with a `batchId`, marks it `QUEUED`, publishes a message, and returns immediately.

The heavy work happens in the background. The generation service flips the job to `GENERATING`, runs the long model call outside the transactional window, and then stores either the generated JSON or a failure state back onto the generation job. The frontend polls by `batchId` until the batch finishes.

That is a better product pattern than keeping the recruiter trapped in one blocking request. It gives the system room to handle slower model calls, retries, and queueing without turning authoring into a fragile spinner.
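The queued pattern described above can be sketched roughly as follows. This is an illustrative model, not Ovii's actual code: the names `GenerationJob`, `requestGeneration`, `runGeneration`, and `pollStatus` are assumptions, and the real system publishes a message to a queue rather than calling the worker directly.

```typescript
// Illustrative sketch of a queued, batch-tracked generation flow.
type JobStatus = "QUEUED" | "GENERATING" | "COMPLETED" | "FAILED";

interface GenerationJob {
  batchId: string;
  status: JobStatus;
  result?: string; // generated question set stored as JSON
  error?: string;  // failure state, if the model call fails
}

const jobs = new Map<string, GenerationJob>();

// Controller path: create the record, mark it QUEUED, return immediately.
function requestGeneration(): string {
  const batchId = `batch-${jobs.size + 1}`;
  jobs.set(batchId, { batchId, status: "QUEUED" });
  // In the real system a message would be published here for a worker.
  return batchId;
}

// Worker path: flip the job to GENERATING, run the slow model call outside
// any transactional window, then store either the JSON or a failure state.
async function runGeneration(
  batchId: string,
  modelCall: () => Promise<string>
): Promise<void> {
  const job = jobs.get(batchId)!;
  job.status = "GENERATING";
  try {
    job.result = await modelCall();
    job.status = "COMPLETED";
  } catch (e) {
    job.error = String(e);
    job.status = "FAILED";
  }
}

// Frontend path: poll by batchId until the batch finishes.
function pollStatus(batchId: string): JobStatus {
  return jobs.get(batchId)?.status ?? "FAILED";
}
```

Because the controller only writes a small record and returns, slow model calls, retries, and queue backpressure all stay out of the recruiter's request path.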

Generated question sets are batch-tracked and polled asynchronously, so authoring is resilient even when generation takes time.

Recruiters Can Shape The Question Set Before Ovii Writes It

The generation form is not a thin wrapper around a default prompt. Recruiters can currently choose a difficulty level, decide how many questions to generate, select focus areas such as technical or problem-solving, and add custom instructions for the run.

That matters because question quality is rarely just about role title. Two engineering teams hiring for the same title can still care about very different themes: architecture, debugging, communication, leadership, or customer-facing decision making. The focus-area controls let recruiters emphasize that before generation begins.

The same idea shows up in the behavioral lane too. Recruiters can specify behavioral topics and other role context so the product is not forced to invent a soft-skill interview frame from nothing.
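Put together, the recruiter-facing controls described above amount to a request shape along these lines. The field names and enum values here are assumptions for illustration, not Ovii's actual API contract.

```typescript
// Hypothetical shape of a recruiter's generation request.
interface QuestionGenerationRequest {
  jobTitle: string;
  jobDescription: string;
  experienceRange: { minYears: number; maxYears: number };
  questionCount: number;                    // how many questions to generate
  difficulty: "EASY" | "MEDIUM" | "HARD";   // recruiter-chosen difficulty
  focusAreas: string[];                     // e.g. ["technical", "problem-solving"]
  language: string;
  customInstructions?: string;              // free-form nuance for this run
}

// Example: same title, but the form biases generation toward debugging themes.
const example: QuestionGenerationRequest = {
  jobTitle: "Senior Backend Engineer",
  jobDescription: "Owns reliability and incident response for core services.",
  experienceRange: { minYears: 5, maxYears: 8 },
  questionCount: 5,
  difficulty: "MEDIUM",
  focusAreas: ["technical", "problem-solving"],
  language: "en",
  customInstructions: "Emphasize production debugging under time pressure.",
};
```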

What Shapes Ovii's Generated Question Sets
| Input | Where it is used | Why it matters |
| --- | --- | --- |
| Job title and job description | Passed into the generation request and prompt builder. | Keeps questions anchored to the real role instead of a generic title pattern. |
| Experience range | Used to adapt expectations and scenario difficulty. | Helps Ovii avoid senior-level prompts for junior roles and vice versa. |
| Difficulty and focus areas | Set in the recruiter generation form. | Lets recruiters bias the interview toward the skills they actually want to test. |
| Custom instructions | Passed directly into the generation contract. | Adds room for role-specific nuance without forcing recruiters to rewrite the whole set manually. |

The Generation Contract Asks For More Than Just Questions

The prompt contract is richer than the UI first suggests. Ovii is not only asking for a list of prompts. The generated response is expected to include question text, evaluation criteria, time limits, category, difficulty, tags, competencies, follow-up questions, constraint types, and constraint severity.

The question text itself is also guided heavily. The contract pushes toward scenario-based, realistic interview prompts with practical constraints and tradeoffs instead of flat textbook definitions. It asks for role-aware distribution across question types and expects the evaluation criteria to include both what to look for and what should count as red flags.

That makes the generated output much more usable. A recruiter gets not just a prompt, but a small evaluation package that can later survive handoff into reviewer workflow.
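The per-question package described above can be modeled as an interface like the one below. Field names, the specific time-limit values, and the `isReviewable` helper are all illustrative assumptions, not Ovii's actual schema.

```typescript
// Hypothetical shape of one generated question item in the contract.
interface GeneratedQuestion {
  questionText: string; // scenario-based prompt with context and constraints
  evaluationCriteria: {
    lookFor: string[];  // what a strong answer should demonstrate
    redFlags: string[]; // what should count against the answer
  };
  timeLimitSeconds: 60 | 120 | 180 | 300; // constrained answer windows (assumed values)
  category: string;
  difficulty: "EASY" | "MEDIUM" | "HARD";
  tags: string[];
  competencies: string[];
  followUps: string[];
  constraintType?: string;
  constraintSeverity?: "LOW" | "MEDIUM" | "HIGH";
}

// A minimal sanity check a review UI might run before showing an item.
function isReviewable(q: GeneratedQuestion): boolean {
  return q.questionText.trim().length > 0 && q.evaluationCriteria.lookFor.length > 0;
}
```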

What Ovii Expects In A Generated Question Item
| Field | Current contract | Why it matters later |
| --- | --- | --- |
| `questionText` | Scenario-based prompt with context and constraints. | Improves signal quality versus vague, definition-style prompts. |
| `evaluationCriteria` | Structured criteria plus red flags. | Gives reviewers a rubric instead of a pure impression. |
| `timeLimit` | One of a constrained set of answer windows. | Keeps candidate expectations and session timing predictable. |
| `category`, `competencies`, `tags`, `followUps` | Metadata describing what the question is testing. | Makes the question set more inspectable and reusable. |

Manual Questions Can Still Pull Job-Aware Criteria

Ovii does not force recruiters to choose between hand-authored questions and structured criteria. In the manual authoring flow, the recruiter can write the question first and then ask the system to generate evaluation criteria specifically for that prompt.

That criteria-generation call uses the job description, experience range, and job title together with the question text. The prompt builder explicitly asks the engine to keep the criteria realistic for a short recorded answer, anchored to role responsibilities, and calibrated to the expected experience level.

This is one of the better trust details in the whole flow. It means the recruiter can keep control over the prompt itself while still borrowing structure from Ovii for the scoring expectations.
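A prompt builder for that criteria-generation call might look roughly like this. The wording, `JobContext` shape, and `buildCriteriaPrompt` name are assumptions for illustration; the source only tells us the call combines job description, experience range, job title, and question text, and asks for realistic, role-anchored, experience-calibrated criteria.

```typescript
// Hypothetical context passed into manual-question criteria generation.
interface JobContext {
  title: string;
  description: string;
  experienceRange: string; // e.g. "3-5 years"
}

// Build a prompt that anchors the rubric to the role and the recruiter's
// own question text, per the behavior described in the article.
function buildCriteriaPrompt(job: JobContext, questionText: string): string {
  return [
    `Role: ${job.title} (${job.experienceRange})`,
    `Responsibilities: ${job.description}`,
    `Interview question: ${questionText}`,
    "Generate evaluation criteria that are realistic for a short recorded answer,",
    "anchored to the role's responsibilities, and calibrated to the expected",
    "experience level. Include both what to look for and red flags.",
  ].join("\n");
}
```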

Recruiters can keep their own prompt and still let Ovii generate a role-aware evaluation rubric around it.

Generated Questions Are Reviewed Before They Become Live

The most important product safeguard is what happens after generation. Ovii returns a reviewable list, but it does not automatically write that list into the live interview setup for the job.

Instead, the recruiter sees the generated set, inspects question text, criteria, category, difficulty, and time limit, and then adds questions one by one. The frontend even warns when the growing set pushes beyond the recommended total interview duration, but still leaves the final choice with the recruiter.

This review-before-save behavior is exactly the right tradeoff. Ovii helps create the first draft faster, but the recruiter remains the final author of the live interview.
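The warn-but-never-block behavior can be sketched as below. The 20-minute recommended maximum and the function names are assumptions, not Ovii's actual thresholds or code.

```typescript
// Hypothetical review-and-add step: each accepted question grows the live
// set, and a warning fires once the total passes a recommended duration.
interface ReviewedQuestion {
  questionText: string;
  timeLimitSeconds: number;
}

const RECOMMENDED_MAX_SECONDS = 20 * 60; // assumed recommended interview length

function addToLiveSet(
  liveSet: ReviewedQuestion[],
  question: ReviewedQuestion
): { liveSet: ReviewedQuestion[]; overRecommended: boolean } {
  const next = [...liveSet, question];
  const totalSeconds = next.reduce((sum, q) => sum + q.timeLimitSeconds, 0);
  // The recruiter still decides: the UI warns, it never blocks the add.
  return { liveSet: next, overRecommended: totalSeconds > RECOMMENDED_MAX_SECONDS };
}
```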

AI generation produces a review set, not a live interview. Questions become real only when the recruiter adds them.

Live Job Questions Carry Structure Forward

When a recruiter saves a question, Ovii writes it into a job-linked question entity with the details the later interview flow needs: sequence, category, complexity, time limit, retake policy, first-impression flag, and question type such as `GENERAL` or `BEHAVIORAL`.

That structure matters because the question is not only content. It is also routing and timing information. The later interview and evaluation pipeline uses those fields to preserve order, enforce timing, and decide how the answer should be interpreted.

This is where the authoring experience becomes more than a text editor. Ovii is building a structured interview object that the downstream candidate and reviewer flows can rely on.
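The saved entity described above might be modeled like this. The field names mirror the article's list but are illustrative, not Ovii's actual schema, and `inInterviewOrder` is an assumed helper showing why `sequence` matters downstream.

```typescript
// Hypothetical job-linked question entity with the fields the later
// interview and evaluation pipeline relies on.
type QuestionType = "GENERAL" | "BEHAVIORAL";

interface JobQuestion {
  jobId: string;
  sequence: number;           // preserves interview order
  questionText: string;
  category: string;
  complexity: "EASY" | "MEDIUM" | "HARD";
  timeLimitSeconds: number;   // enforced during the recorded answer
  retakesAllowed: number;     // retake policy
  isFirstImpression: boolean; // first-impression flag
  questionType: QuestionType; // routes how the answer is interpreted later
}

// Downstream flows can rely on sequence to reconstruct interview order.
function inInterviewOrder(questions: JobQuestion[]): JobQuestion[] {
  return [...questions].sort((a, b) => a.sequence - b.sequence);
}
```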

Question Sets Can Be Promoted Into Templates

Ovii also lets teams turn a job's live async question set into a reusable company template. When that happens, the template service copies question text, evaluation criteria, time limit, retake policy, category, complexity, question type, and sequence into a company-owned template record.

Applying a template to another job copies those structured questions forward again, preserving order while appending them after the existing job question set. That is a very practical trust feature. Once a team has built a good interview package for a role family, it does not have to recreate the setup every time.

That means the authoring story is not only generation. It is also standardization. Ovii can help teams move from ad hoc question writing toward reusable interview design.
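The append-after-existing behavior of template application can be sketched as below. The `applyTemplate` name and resequencing details are illustrative assumptions; the source only states that copied questions are appended after the existing job set with their relative order preserved.

```typescript
// Hypothetical template application: copied questions keep their relative
// order and are appended after the job's existing question set.
interface TemplateQuestion {
  questionText: string;
  sequence: number;
}

function applyTemplate(
  existing: TemplateQuestion[],
  template: TemplateQuestion[]
): TemplateQuestion[] {
  const offset = existing.length;
  const copied = [...template]
    .sort((a, b) => a.sequence - b.sequence)      // preserve template order
    .map((q, i) => ({ ...q, sequence: offset + i + 1 })); // renumber after the set
  return [...existing, ...copied];
}
```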

The Better Product Story Here

The shallow version of this feature would be to say Ovii can generate interview questions with AI. The stronger and more accurate story is that Ovii gives recruiters a structured async video authoring system.

That system supports multiple authoring lanes, shapes generation with real role context, runs asynchronously with job tracking, returns rich question packages instead of bare prompts, lets recruiters review before save, preserves structured metadata for downstream evaluation, and turns proven job sets into reusable templates.

That is the story worth publishing, because it shows Ovii is not trying to replace interview design with a magic button. It is trying to make interview design faster, more consistent, and easier to trust.
