Preventing cheating in online exams requires a layered approach that combines technology, smart assessment design, and clear expectations. No single tool eliminates dishonesty on its own, but when you pair secure testing environments with questions that are genuinely hard to cheat on, you dramatically reduce the opportunity and incentive to try.
Lock Down the Testing Environment
A secure browser is the most straightforward technical barrier you can put between a student and outside resources. Tools like Respondus LockDown Browser force the exam into a full-screen window that cannot be minimized. While the assessment is open, students cannot access other applications, messaging platforms, screen-sharing tools, virtual machines, or remote desktops. Copy-paste, right-click menus, function keys, keyboard shortcuts, screen capture, and printing are all disabled. The student cannot exit the browser until they submit the exam for grading.
These browsers also block more advanced workarounds: launching applications with timers or alerts, browser cache exploits, screen recording software, and keystroke combinations that might let a student slip out of the testing window. The goal is to turn the computer into a single-purpose exam terminal for the duration of the test.
Secure browsers work best when paired with a lockdown at the operating system level. Require students to close all other applications before launching the browser, and configure the tool to detect whether a virtual machine is running underneath. Students who try to run a lockdown browser inside a virtual machine while keeping their normal desktop accessible on the host machine should be blocked from starting the exam entirely.
Add Remote Proctoring
Proctoring adds a human or algorithmic set of eyes to the exam session. There are three common models, and each sits at a different point on the cost and privacy spectrum.
- AI-based proctoring: Software uses the student’s webcam and microphone to monitor the session automatically. It flags behaviors like looking away from the screen repeatedly, a second person entering the frame, or unusual audio. No human watches in real time, so it scales easily to large classes. (A minimal sketch of this kind of automated flagging follows this list.)
- Live proctoring: A human proctor watches one or more students via video feed for the entire exam. This is the most secure option but also the most expensive and the hardest to schedule at scale.
- Hybrid proctoring: AI monitors the session continuously and alerts a live proctor only when it detects potential cheating. This balances cost with responsiveness, because the human only intervenes when something looks wrong.
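To make the automated flagging in the AI-based model concrete, here is a minimal sketch that counts faces in periodically sampled webcam frames using OpenCV's bundled Haar cascade and records a flag whenever the count is not exactly one. It is an illustration only, not any vendor's detection pipeline; the frame-sampling interval and the recorded-video input are assumptions.

```python
# Minimal sketch of automated proctoring flags: count faces per sampled frame
# and record an event when the count is not exactly one. Illustrative only --
# commercial proctoring tools use far more sophisticated models.
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def flag_frame(frame, timestamp):
    """Return a flag string for this frame, or None if nothing looks wrong."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return f"{timestamp:.0f}s: no face visible (student may have left)"
    if len(faces) > 1:
        return f"{timestamp:.0f}s: {len(faces)} faces detected (possible second person)"
    return None

def review_session(video_path, sample_every_sec=5):
    """Sample frames from a recorded session and collect flags for human review."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30
    flags, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % int(fps * sample_every_sec) == 0:
            flag = flag_frame(frame, frame_index / fps)
            if flag:
                flags.append(flag)
        frame_index += 1
    capture.release()
    return flags
```

In a hybrid setup, the list returned by a routine like this is what would page the live proctor, rather than going straight into a misconduct report.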
Modern proctoring tools include features specifically designed for new cheating methods. Mobile phone detection uses the webcam feed to identify when a phone enters the testing area. Smart speech detection listens for activation phrases like “Siri” or “OK Google” that would launch a virtual assistant. Some platforms also scan the internet for leaked test questions and let instructors submit takedown requests with one click when matches are found.
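The speech-detection idea can be illustrated in the simplest possible terms: scan a transcript of the session audio for assistant wake phrases. The snippet below assumes the transcript arrives as timestamped text segments from some prior speech-to-text step; the segment format and the phrase list are assumptions, not a description of any particular product.

```python
# Illustrative wake-phrase check over a proctoring transcript.
# Each segment is assumed to be (start_seconds, text) from a prior
# speech-to-text step; real tools analyze the audio stream directly.
WAKE_PHRASES = ("hey siri", "siri", "ok google", "hey google", "alexa")

def flag_wake_phrases(segments):
    """Return (timestamp, phrase) pairs wherever an assistant wake phrase appears."""
    flags = []
    for start, text in segments:
        lowered = text.lower()
        for phrase in WAKE_PHRASES:
            if phrase in lowered:
                flags.append((start, phrase))
                break
    return flags

# Example: flag_wake_phrases([(312.4, "Hey Siri, what causes inflation?")])
# -> [(312.4, "hey siri")]
```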
Design Exams That Are Hard to Cheat On
Technology controls what students can access during the exam. Assessment design controls whether access to outside help would even matter. The strongest anti-cheating strategy is writing questions that resist shortcuts.
Start by favoring formats that require original thinking. Essay questions, case study analyses, and open-ended problem-solving prompts are far harder to copy than multiple-choice items. When you do use multiple choice, create several versions of the exam. Vary the question order and the order of answer choices across versions. Change small but important details, like the numbers in a math problem or the variables in a scenario, so that the questions look identical at a glance but produce different correct answers. If a student copies from a neighbor’s screen or a shared answer key, the mismatched details make it obvious.
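A minimal sketch of that kind of versioning is shown below: it shuffles question order and answer order per student and swaps in different numeric values for parameterized questions, seeded on the student ID so each student reliably sees the same variant. The data model and the seeding choice are illustrative assumptions, not a feature of any specific exam platform, and a real system would also recompute the answer key for each variant.

```python
# Sketch of per-student exam versioning: shuffle question order, shuffle answer
# order, and vary numeric details in parameterized questions. Structures are
# illustrative; a real system would also recompute the answer key per variant.
import random
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                  # may contain placeholders, e.g. "A loan of {amount}..."
    choices: list
    params: dict = field(default_factory=dict)  # placeholder -> list of allowed values

def build_variant(questions, student_id):
    """Produce a deterministic, student-specific version of the exam."""
    rng = random.Random(student_id)            # same student always gets the same variant
    variant = []
    for q in rng.sample(questions, len(questions)):     # shuffle question order
        values = {name: rng.choice(options) for name, options in q.params.items()}
        variant.append({
            "text": q.text.format(**values) if values else q.text,
            "choices": rng.sample(q.choices, len(q.choices)),  # shuffle answer order
            "params": values,                  # kept so the grader can recompute the key
        })
    return variant
```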
Randomized question pools amplify this effect. Instead of giving every student the same 40 questions, build a pool of 100 or more items organized by topic and difficulty, then have the system pull a unique subset for each student. Two students sitting next to each other (or sharing a screen via video call) will see mostly different questions.
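One way such a pool could be sampled is sketched below: group items by topic and difficulty, then draw a fixed number from each group for every student so all variants have the same shape, again seeded on the student ID. The "blueprint" structure is an assumption made for illustration.

```python
# Sketch of drawing a per-student exam from a larger question pool,
# stratified by topic and difficulty so every variant covers the same ground.
import random
from collections import defaultdict

def draw_exam(pool, blueprint, student_id):
    """
    pool:      list of dicts, each with "topic", "difficulty", and question content
    blueprint: {(topic, difficulty): how_many_to_draw}, e.g. {("supply", "hard"): 3}
    """
    rng = random.Random(student_id)
    by_stratum = defaultdict(list)
    for item in pool:
        by_stratum[(item["topic"], item["difficulty"])].append(item)

    exam = []
    for stratum, count in blueprint.items():
        candidates = by_stratum[stratum]
        if len(candidates) < count:
            raise ValueError(f"Pool has only {len(candidates)} items for {stratum}")
        exam.extend(rng.sample(candidates, count))
    rng.shuffle(exam)   # so the stratum order itself gives nothing away
    return exam
```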
Time pressure also matters. Set a per-question time limit or an overall exam duration that gives a prepared student enough time to answer comfortably but leaves little room to look things up. Restrict backtracking so students cannot screenshot early questions, send them to someone else, and return later to fill in answers.
Reduce the Payoff of AI Tools
Generative AI has made traditional knowledge-recall questions especially vulnerable. A student with access to ChatGPT can produce a passable answer to “explain the causes of inflation” in seconds. The most effective defense is designing assessments that AI finds difficult to replicate convincingly.
Harvard’s Office of Academic Integrity recommends several specific strategies. Ask students to connect course material to personal experience, like reflecting on a clinical rotation, a lab experiment they personally conducted, or a workplace situation only they encountered. Require process documentation: students submit progressive drafts, outlines, or annotated bibliographies that show how their thinking developed over time. A polished final product that appears with no preceding drafts is a red flag.
Interactive assessments also work well. Oral exams, live problem-solving sessions, and simulations test a student’s ability to think on the spot, which no AI tool can do on the student’s behalf. Even adding a short follow-up question after a written submission (“walk me through how you arrived at this conclusion”) forces the student to demonstrate genuine understanding.
For written exams, include critical justification prompts where students must explain their reasoning step by step. AI can generate a plausible answer, but students who did not actually work through the problem struggle to defend the logic when the prompt demands specificity tied to course lectures, assigned readings, or in-class discussions that AI would not have access to.
Set Clear Expectations Up Front
An honor statement on the first page of every exam is a small step that research consistently shows has a measurable effect. Ask students to sign (or check a box affirming) that they will complete the exam honestly and independently. This primes the ethical mindset right before the test begins.
Beyond the exam itself, your syllabus should define what counts as cheating in specific, concrete terms. Students sometimes genuinely do not know whether using a calculator, referencing their notes, or discussing a take-home exam with a classmate crosses the line. Spell it out: which tools are permitted, whether collaboration is allowed, and what the consequences are for violations. When the rules are ambiguous, some students will exploit the gray area and argue they did not know it was prohibited.
Use Multiple Layers Together
Each strategy above has weaknesses in isolation. A lockdown browser does not stop a student from having a friend in the room reading questions aloud. Proctoring software can be fooled by a second monitor placed outside webcam range. Randomized question pools do not help if a student has access to the entire pool. The combination is what makes the system robust.
A practical setup for a high-stakes online exam might look like this: require a lockdown browser to block digital cheating, enable webcam-based proctoring to deter physical cheating, use randomized questions drawn from a large pool to make answer-sharing useless, set a time limit tight enough to discourage lookups, and include at least one open-ended question that demands original reasoning. Layer an honor pledge on top, and you have covered the most common vectors without making the experience unreasonably stressful for honest students.
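As a concrete picture of what that layering looks like, here is a hypothetical settings dictionary for such an exam together with a quick check that no layer was left off before publishing. Every field name is invented for illustration and does not correspond to any real LMS or proctoring product's API.

```python
# Hypothetical high-stakes exam settings, one entry per defensive layer.
# Field names are illustrative, not any real LMS or proctoring API.
HIGH_STAKES_EXAM = {
    "lockdown_browser_required": True,
    "webcam_proctoring": "hybrid",        # "ai", "live", or "hybrid"
    "question_pool_size": 120,
    "questions_per_student": 40,
    "time_limit_minutes": 60,
    "backtracking_allowed": False,
    "open_ended_questions": 2,
    "honor_pledge_required": True,
}

def missing_layers(settings):
    """Return a list of layers that are disabled or obviously too weak."""
    problems = []
    if not settings.get("lockdown_browser_required"):
        problems.append("no lockdown browser")
    if not settings.get("webcam_proctoring"):
        problems.append("no proctoring")
    if settings.get("question_pool_size", 0) < 2 * settings.get("questions_per_student", 0):
        problems.append("question pool too small to randomize meaningfully")
    if settings.get("open_ended_questions", 0) < 1:
        problems.append("no open-ended question")
    if not settings.get("honor_pledge_required"):
        problems.append("no honor pledge")
    return problems
```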
For lower-stakes assessments like weekly quizzes, you can afford to rely more on design than technology. Frequent, low-point-value assessments with randomized questions reduce the incentive to cheat because no single quiz is worth the risk. Spreading your grading across many smaller checkpoints also gives you a richer picture of each student’s actual knowledge, making it easier to spot anomalies when a final exam score does not match the pattern.
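One simple way to surface that kind of anomaly is to compare each student’s final exam score with their own quiz history and flag large jumps. The sketch below uses a z-score against the student’s quiz mean; the threshold of 2 is an arbitrary illustration, and a flag should prompt a closer look or a conversation, never an accusation on its own.

```python
# Sketch of spotting a final exam score that does not fit a student's quiz history.
# A flag here is only a prompt for review, never proof of cheating.
from statistics import mean, stdev

def flag_anomalies(records, threshold=2.0):
    """
    records: {student_id: {"quizzes": [scores as percentages], "final": score}}
    Returns students whose final sits more than `threshold` standard deviations
    above their own quiz average.
    """
    flagged = []
    for student, r in records.items():
        quizzes = r["quizzes"]
        if len(quizzes) < 3:
            continue                      # not enough history to judge
        spread = stdev(quizzes)
        if spread == 0:
            spread = 1.0                  # avoid dividing by zero for flat histories
        z = (r["final"] - mean(quizzes)) / spread
        if z > threshold:
            flagged.append((student, round(z, 1)))
    return flagged

# Example: a student averaging 62% on reasonably consistent quizzes who scores
# 98% on the final would be flagged for a closer look.
```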

