Online test maker to build, deliver, and auto-grade tests

Create secure, auto-graded tests in minutes with AI and real-time analytics

Trusted by teams worldwide

Make an online test

STEP 01

Build the test your way. Add questions manually or generate a first draft with AI, then edit until it matches what you actually want to measure.

STEP 02

Set the answer key, points, pass mark, and grading rules so every attempt is scored against the same standard.

STEP 03

Publish by link, embed, or course assignment, then review results as soon as submissions come in.

Set scoring rules once and grade every attempt the same way

Mark the correct answers, assign points, and choose the pass threshold before you publish. Once those rules are in place, every submission is scored consistently.

You can also define grade bands when you want more than a simple pass or fail, which makes the result clearer for learners and easier to report on later.

When a response needs human judgment, keep that item in manual review while objective questions continue to score automatically.

Scoring settings for an online test
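
To make that concrete, here is a minimal sketch of the scoring model in Python. The answer key layout, field names, and thresholds are illustrative assumptions, not the product's actual data model:

```python
# Minimal sketch of consistent auto-grading: an answer key with points,
# a pass mark, and optional grade bands. All names are hypothetical.

ANSWER_KEY = {
    "q1": {"correct": "B", "points": 2},
    "q2": {"correct": "D", "points": 2},
    "q3": {"correct": "True", "points": 1},
}
PASS_MARK = 0.7  # pass at 70% of available points

GRADE_BANDS = [  # (minimum fraction, label), checked top down
    (0.9, "Distinction"),
    (0.7, "Pass"),
    (0.0, "Fail"),
]

def grade(responses: dict) -> dict:
    total = sum(item["points"] for item in ANSWER_KEY.values())
    earned = sum(
        item["points"]
        for qid, item in ANSWER_KEY.items()
        if responses.get(qid) == item["correct"]
    )
    fraction = earned / total
    band = next(label for cutoff, label in GRADE_BANDS if fraction >= cutoff)
    return {
        "score": earned,
        "out_of": total,
        "passed": fraction >= PASS_MARK,
        "grade": band,
    }

print(grade({"q1": "B", "q2": "A", "q3": "True"}))
# -> {'score': 3, 'out_of': 5, 'passed': False, 'grade': 'Fail'}
```

Because the key, points, and bands are fixed before anyone takes the test, two identical submissions can never receive different results.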

Choose question formats that match the evidence you need

Use multiple choice for fast grading, open text for written responses, file upload for proof of work, and ranking, matching, matrix, dropdown, or numeric questions when a simple choice is not enough.

The goal is not to use every format. It is to choose the one that fits the skill, concept, or task you want to check.

That gives you room to build short knowledge checks, end-of-module tests, and more applied assessments without forcing everything into the same template.

Different question types available in the test editor

Publish where people already learn and work

Once the test is ready, share it in the format that suits the workflow. Send a direct link, embed it on your site, or assign it through the course builder.

That makes the same test useful across classroom pages, training portals, internal learning programs, and customer education without rebuilding it for each channel.

Publishing an online test by link, embed, or assignment

See results quickly and improve weak questions

As soon as someone submits, results update with scores, completion status, and time taken. You can review individual attempts or step back and look at overall performance.

Export to CSV when you need the data elsewhere, and use the question-level breakdown to spot items that were too easy, too hard, or simply unclear.

Results dashboard for an online test
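
For question-level review, the CSV export is often enough on its own. As a rough illustration (the column layout here is an assumption, not the real export format), per-question difficulty is just the share of respondents who answered each item correctly:

```python
# Sketch: per-question difficulty from a hypothetical CSV export where
# columns such as "q1_correct" hold 1 (right) or 0 (wrong) per respondent.
import csv

def question_difficulty(path: str) -> dict:
    totals = {}  # column -> [correct_count, attempts]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for col, value in row.items():
                if col.endswith("_correct") and value != "":
                    count = totals.setdefault(col, [0, 0])
                    count[0] += int(value)
                    count[1] += 1
    return {col: right / n for col, (right, n) in totals.items()}

# Sort hardest first: very low or very high values deserve a second look.
difficulties = question_difficulty("results.csv")
for col, p in sorted(difficulties.items(), key=lambda kv: kv[1]):
    print(f"{col}: {p:.0%} answered correctly")
```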

How to build a test people can trust

Publishing fast helps. Publishing something well structured helps more. These four habits improve the quality of the test without slowing the workflow down.

1

Start with the objective, not the question list

Before you write or generate anything, decide what the test should prove. Clear objectives make it easier to choose the right questions, difficulty, and scoring rules.

2

Mix quick wins with applied questions

A strong test usually blends a few straightforward knowledge checks with questions that ask learners to rank, explain, calculate, or submit evidence. That gives you a better read on understanding.

3

Write the grading rules before you publish

Do not leave pass marks, point values, or feedback until the last minute. Decide how the test will be scored first so learners get consistent results from the first attempt onward.

4

Use the first round of results to improve the next one

After the first cohort, review which questions performed well and which ones caused confusion. Small edits to wording, distractors, or scoring can make the next version much stronger.

Online test maker FAQ

What is the fastest way to create a high-quality scored test?

Start by defining the decision the test should support, because that determines everything else. A short lesson check, a refresher quiz, and a screening test may all use similar tools, but they need different standards, question depth, and pass criteria.

Once the purpose is clear, build a first draft quickly: write questions manually or generate a draft with AI, then edit aggressively for clarity and fairness. Set the answer key, point values, and pass threshold before publishing so every respondent is measured against the same standard. In most cases, a concise test with precise questions performs better than a long test with repetitive items.

How should I choose question types instead of just picking random formats?

Choose question types based on the kind of evidence you need. Multiple choice is efficient for broad coverage and fast scoring. True/false works for quick checks but is more exposed to guessing. Matching, ranking, matrix, numeric, and dropdown formats are better when sequence, comparison, or structured reasoning matters.

Use open text when you need explanation rather than recognition, and file upload when proof of work is required. A strong test does not try to use every available format. It uses the smallest set of formats that accurately captures the skill or knowledge being assessed.

Should I keep everything objective for auto-grading, or include subjective items too?

Use objective-only tests when speed, consistency, and immediate scoring are the top priority. This is ideal for high-volume checks where clear right and wrong answers are enough to make a decision.

Include open text or file upload when judgment, reasoning, or applied work matters. In that model, objective items can still auto-grade immediately while subjective items are held for manual review. The tradeoff is operational: richer evidence usually improves decision quality, but it requires review capacity and slightly slower final results.
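
As a sketch of how that split can work (the question shapes and type names here are hypothetical), objective items produce a provisional score immediately while subjective items queue for review:

```python
# Sketch of the mixed model: objective items score on submission,
# open-text and file-upload items wait in a manual review queue.

AUTO_GRADABLE = {"multiple_choice", "true_false", "dropdown", "numeric"}

def split_attempt(questions: list, responses: dict) -> dict:
    provisional = 0
    awaiting_review = []
    for q in questions:
        if q["type"] in AUTO_GRADABLE:
            if responses.get(q["id"]) == q["correct"]:
                provisional += q["points"]
        else:
            awaiting_review.append(q["id"])  # held for a human reviewer
    return {
        "provisional_score": provisional,
        "awaiting_review": awaiting_review,
        "final": not awaiting_review,  # final only once every item is scored
    }
```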

How do pass marks, score bands, and answer keys work in practice?

Set them before launch and keep them stable during live attempts. Define correct answers, assign point values, and choose a pass threshold that reflects the required standard. If pass/fail is too blunt, use score bands to make outcomes easier to interpret.

A practical validation step is to run the test yourself with two scenarios: one fully correct and one intentionally below threshold. This confirms that scoring logic, pass rules, and labels behave as expected before real respondents see the test.
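
The two-scenario check is easy to express as a pair of assertions. This reuses the illustrative `grade()` sketch from the scoring section above, so it remains hypothetical rather than the product's API:

```python
# One attempt with every answer correct, one deliberately below the pass mark.
perfect = {"q1": "B", "q2": "D", "q3": "True"}
failing = {"q1": "A", "q2": "A", "q3": "False"}

result = grade(perfect)
assert result["passed"] and result["grade"] == "Distinction", result

result = grade(failing)
assert not result["passed"] and result["grade"] == "Fail", result

print("Scoring logic, pass rule, and band labels behave as expected.")
```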

What is the best way to publish: direct link, embed, or course assignment?

Use a direct link when speed and convenience matter most. It works well for email, chat, and quick rollouts. Use embed when the test should live inside your site, portal, or help center so the experience stays on-brand and in-context.

Use course assignment when the test is one part of a larger learning sequence and needs to sit inside a structured path. The key point is that the publishing method should follow the delivery context, not force a redesign of scoring or content.

Can I reuse one test across different audiences?

Yes, when the same standard applies. One core test can often be shared by link, embed, or assignment without rebuilding the logic.

Create separate versions when standards differ by audience, role, or risk. If pass marks, feedback, or difficulty need to change, versioning is usually better than forcing one test to fit everyone. Reuse is efficient; versioning protects fairness.

What can I learn from results besides total score?

You can review completion status, time taken, and question-level performance. Total score shows who passed; item-level data shows which questions are working and which ones are weak, ambiguous, or misaligned with the goal.

That distinction matters for improvement cycles. Good question analytics help you decide whether to tighten wording, improve distractors, rebalance difficulty, or replace an item entirely.

How should I improve weak questions after the first test run?

Treat first-run data as a calibration pass. Look for items that nearly everyone misses, that nearly everyone answers correctly, or that produce response patterns suggesting confusion. These are usually signs of poor wording, low-quality distractors, or a mismatch between question type and objective.

Edit in small steps first: tighten language, remove double-barreled prompts, improve wrong options, or switch to a better format. If the test is already live and high stakes, duplicate and version it rather than changing scoring logic mid-cycle.
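
One way to make that triage systematic is sketched below. It assumes, hypothetically, that each attempt has been reduced to per-question correctness, and it flags items by difficulty plus a crude discrimination check: whether respondents with high overall scores do better on the item than those with low scores.

```python
# Sketch: triage first-run items for editing. Each attempt maps question
# ids to 1 (right) or 0 (wrong); the layout is illustrative.

def triage(attempts: list) -> dict:
    qids = attempts[0].keys()
    ranked = sorted(attempts, key=lambda a: sum(a.values()))
    half = len(ranked) // 2  # assumes a reasonably sized cohort
    bottom, top = ranked[:half], ranked[-half:]
    flags = {}
    for q in qids:
        p = sum(a[q] for a in attempts) / len(attempts)
        discrimination = (sum(a[q] for a in top) - sum(a[q] for a in bottom)) / half
        if p > 0.95:
            flags[q] = "nearly everyone correct: consider stronger distractors"
        elif p < 0.20:
            flags[q] = "nearly everyone wrong: recheck wording and the answer key"
        elif discrimination < 0.1:
            flags[q] = "weak discrimination: possibly ambiguous or off-objective"
    return flags

print(triage([
    {"q1": 1, "q2": 1, "q3": 0},
    {"q1": 1, "q2": 0, "q3": 1},
    {"q1": 1, "q2": 1, "q3": 1},
    {"q1": 1, "q2": 0, "q3": 0},
]))
```

Thresholds like 0.95 and 0.20 are starting points, not rules; adjust them to the stakes and the cohort size.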

Where online tests work best

This page is about practical test delivery: building something clear, scoring it consistently, and learning from the results. These are the workflows where that matters most.

Education

Topic checks, revision tests, and end-of-unit assessments

Teachers and tutors can build short tests for regular classroom use or longer checks at the end of a unit. Automatic scoring saves time, while question-level reporting shows what needs to be retaught.

Training

Onboarding and refresher knowledge checks

Training teams can turn handbooks, slide decks, and learning content into scored tests that work the same way for every cohort. That makes it easier to compare outcomes over time.

Hiring

Role-specific screening and practical knowledge tests

When you need a consistent first filter, online tests help teams assess baseline knowledge before interviews or later-stage tasks. The structure stays the same even when the question set changes.

Enablement

Customer, partner, and internal learning programs

Use scored tests to confirm understanding after a lesson, module, or rollout. When the program grows beyond a single course, connect it to policy training workflows or broader reporting as needed.

Build your first test

Start with a blank test or let AI draft the first version, then publish when it is ready.