How to Write Test Cases: Steps and Examples

A test case is a set of conditions and steps you use to verify whether a specific feature in your software works correctly. Writing good test cases means documenting exactly what to test, how to test it, and what the correct outcome should be. The difference between a useful test case and a useless one comes down to clarity: can someone who has never seen the feature before pick up your test case and execute it without asking questions?

Anatomy of a Test Case

Every test case follows a predictable structure. The core fields you need are:

  • Test Case ID: A unique identifier so you can reference and track it. Something like TC-LOGIN-001 works better than “Test 1” because it tells you the feature area at a glance.
  • Test Objective: A one-sentence description of what you’re verifying. “Verify that a registered user can log in with valid credentials” is specific. “Test login” is not.
  • Preconditions: Anything that must be true before the test starts. This includes the state of the system, test accounts that need to exist, data that needs to be loaded, or configuration settings that must be in place.
  • Test Steps: A numbered sequence of actions the tester performs, written in the order they happen.
  • Test Data: The specific inputs used in the test, such as usernames, passwords, file names, or dollar amounts.
  • Expected Result: What the system should do after each step or at the end of the sequence. This is the most important field. Without a clear expected result, the tester has no way to judge pass or fail.
  • Pass/Fail Criteria: The rule for deciding whether the test case succeeded. Often this is simply “expected result matches actual result,” but sometimes you need to check a database record, a confirmation email, or a downstream system.

Optional fields that become useful on larger projects include priority level (critical, high, medium, low), the requirement or user story the test case traces back to, dependencies on other test cases, and any postconditions describing the system state after the test runs.
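
If you keep test cases in code or scripts rather than a spreadsheet, the same anatomy maps naturally onto a small data structure. A minimal sketch in Python; the field names are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str          # unique, feature-prefixed, e.g. "TC-LOGIN-001"
    objective: str             # one sentence: what is being verified
    preconditions: list[str]   # state that must hold before the test starts
    steps: list[str]           # numbered actions, in execution order
    test_data: dict[str, str]  # exact inputs: usernames, amounts, file names
    expected_result: str       # what the system should do; the anchor for pass/fail
    pass_fail_criteria: str    # the rule for judging success
    priority: str = "medium"   # optional on larger projects
    traces_to: str = ""        # optional requirement or user story ID
```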

Example: Login Feature

Here is a positive test case for a standard login page.

  • Test Case ID: TC-LOGIN-001
  • Objective: Verify that a registered user can log in with a valid email and password.
  • Preconditions: A user account exists with email user@example.com and password Test@1234. The user is not currently logged in.
  • Step 1: Navigate to the login page. Expected: Login form displays with email and password fields.
  • Step 2: Enter “user@example.com” in the email field. Expected: The entered email address displays in the field.
  • Step 3: Enter “Test@1234” in the password field. Expected: Input is masked with dots or asterisks.
  • Step 4: Click the “Log In” button. Expected: User is redirected to the dashboard. A welcome message displays the user’s name.
  • Pass/Fail: Pass if the dashboard loads and displays the correct user name. Fail if an error message appears, the page doesn’t redirect, or the wrong account loads.

Notice that each step has its own expected result. This granularity helps testers pinpoint exactly where a failure occurs rather than just reporting “login didn’t work.”
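
If this case later moves into an automated suite, the steps translate almost line for line. A sketch using Selenium, assuming hypothetical element IDs (email, password, login-button), a hypothetical login URL, and a /dashboard redirect; your page’s actual locators will differ:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_with_valid_credentials():
    driver = webdriver.Chrome()
    try:
        # Step 1: navigate to the login page (URL is an assumption)
        driver.get("https://example.com/login")
        # Steps 2-3: enter the exact test data from the test case
        driver.find_element(By.ID, "email").send_keys("user@example.com")
        driver.find_element(By.ID, "password").send_keys("Test@1234")
        # Step 4: submit and check the expected result
        driver.find_element(By.ID, "login-button").click()
        assert "/dashboard" in driver.current_url
    finally:
        driver.quit()
```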

Example: Payment Processing

Payment features require careful test coverage because they involve money, third-party gateways, and multiple validation points.

  • Test Case ID: TC-PAY-001
  • Objective: Verify that a user can complete a purchase using a valid credit card.
  • Preconditions: User is logged in. At least one item is in the shopping cart. A test credit card number is available (not expired, sufficient balance).
  • Step 1: Navigate to the cart and click “Proceed to Checkout.” Expected: Checkout page loads showing the correct item(s) and total amount.
  • Step 2: Select “Credit Card” as the payment method. Expected: Credit card input fields appear (card number, expiration date, CVV, cardholder name).
  • Step 3: Enter valid test card details and click “Pay Now.” Expected: A processing indicator appears. After a few seconds, the system displays an order confirmation page with an order number.
  • Step 4: Check that the deducted amount matches the total shown at checkout. Expected: The confirmation page and any email receipt show the same amount as the checkout total.
  • Pass/Fail: Pass if order confirmation displays, amount is correct, and the order appears in the user’s order history. Fail if the transaction errors out, the amount is wrong, or no confirmation is generated.

Related test cases you would write alongside this one: verify that expired cards are rejected, verify that the correct currency displays based on the user’s region, verify that the card number is masked on screen, and verify that the system prevents checkout when the cart is empty.
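
The amount check in Step 4 deserves special care in automation: comparing money as floating-point numbers invites rounding surprises. A sketch, assuming hypothetical helpers checkout_total_text() and confirmation_amount_text() that read the displayed values from the page:

```python
from decimal import Decimal

def to_money(text: str) -> Decimal:
    # "$1,234.50" -> Decimal("1234.50"); exact, unlike float
    return Decimal(text.replace("$", "").replace(",", "").strip())

def test_charged_amount_matches_checkout_total():
    checkout_total = to_money(checkout_total_text())  # hypothetical helper
    confirmed = to_money(confirmation_amount_text())  # hypothetical helper
    assert confirmed == checkout_total
```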

Writing Negative Test Cases

Positive test cases confirm the system works when everything goes right. Negative test cases confirm the system handles failure gracefully when things go wrong. Skipping negative tests is one of the most common gaps in test coverage, because real users will paste entire emails into single-line text fields, enter phone numbers where email addresses belong, and attempt to submit forms with every required field left blank.

Here is a negative test case for the same login feature:

  • Test Case ID: TC-LOGIN-005
  • Objective: Verify that the system displays an error when a user enters an invalid password.
  • Preconditions: A user account exists with email user@example.com.
  • Step 1: Navigate to the login page.
  • Step 2: Enter “user@example.com” in the email field.
  • Step 3: Enter “WrongPassword!” in the password field.
  • Step 4: Click “Log In.” Expected: The system does not log the user in. An error message such as “Invalid email or password” appears. The password field clears.
  • Pass/Fail: Pass if access is denied and a user-friendly error displays. Fail if the system crashes, reveals which field was wrong (a security issue), or grants access.

Other negative scenarios worth covering for a login page: leaving both fields blank and clicking submit, entering a valid email with extra leading or trailing spaces, attempting to log in with a deactivated account, and making multiple consecutive failed attempts to verify the system enforces an account lockout policy.
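
When these negative scenarios move into automation, a table-driven test keeps them together and makes gaps visible. A pytest sketch, assuming a hypothetical attempt_login() helper that performs the login and returns the resulting page object:

```python
import pytest

@pytest.mark.parametrize("email,password", [
    ("user@example.com", "WrongPassword!"),    # TC-LOGIN-005: wrong password
    ("", ""),                                  # both fields left blank
    ("deactivated@example.com", "Test@1234"),  # deactivated account
])
def test_login_is_rejected(email, password):
    page = attempt_login(email, password)  # hypothetical helper
    assert not page.is_logged_in()
    assert page.shows_generic_error()      # no hint about which field was wrong
```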

Boundary and Edge Case Examples

Boundary test cases target the limits of acceptable input. If a password field requires 8 to 20 characters, you write test cases for exactly 7 characters (should fail), exactly 8 (should pass), exactly 20 (should pass), and exactly 21 (should fail). These four cases catch off-by-one errors that broader testing often misses.
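
Expressed as a parametrized test, those four cases stay together and document the rule itself. A sketch assuming a hypothetical is_valid_password_length() validator:

```python
import pytest

@pytest.mark.parametrize("length,should_pass", [
    (7, False),   # one below the minimum: must be rejected
    (8, True),    # exact minimum: must be accepted
    (20, True),   # exact maximum: must be accepted
    (21, False),  # one above the maximum: must be rejected
])
def test_password_length_boundaries(length, should_pass):
    password = "a" * length
    assert is_valid_password_length(password) == should_pass  # hypothetical validator
```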

For a quantity field in a shopping cart that accepts 1 to 99 items, your boundary cases would be:

  • TC-CART-010: Enter 0 in the quantity field. Expected: System rejects the input or prompts the user to enter at least 1.
  • TC-CART-011: Enter 1. Expected: System accepts the value and updates the cart total.
  • TC-CART-012: Enter 99. Expected: System accepts the value.
  • TC-CART-013: Enter 100. Expected: System rejects the input or displays a maximum quantity message.
  • TC-CART-014: Enter -1. Expected: System rejects the input.
  • TC-CART-015: Enter “abc”. Expected: System rejects non-numeric input.

Edge cases go beyond boundaries into unexpected territory. What happens if someone pastes a 10,000-character string into the quantity field? What if they enter a decimal like 2.5? These scenarios test how robust your input validation really is.
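
A reference validator makes both the boundary cases and these edge cases concrete. The sketch below is self-contained and runnable, but the validate_quantity() function is illustrative, not your application’s actual code:

```python
import pytest

def validate_quantity(raw: str, minimum: int = 1, maximum: int = 99) -> bool:
    """Accept only whole numbers within [minimum, maximum]."""
    try:
        value = int(raw)  # rejects "abc", "2.5", and 10,000-character junk alike
    except ValueError:
        return False
    return minimum <= value <= maximum

@pytest.mark.parametrize("raw,accepted", [
    ("0", False),           # TC-CART-010: below minimum
    ("1", True),            # TC-CART-011: exact minimum
    ("99", True),           # TC-CART-012: exact maximum
    ("100", False),         # TC-CART-013: above maximum
    ("-1", False),          # TC-CART-014: negative
    ("abc", False),         # TC-CART-015: non-numeric
    ("2.5", False),         # edge case: decimal quantity
    ("x" * 10_000, False),  # edge case: pathologically long paste
])
def test_quantity_validation(raw, accepted):
    assert validate_quantity(raw) == accepted
```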

Principles That Keep Test Cases Useful

A test case that only its author can understand is a liability. Follow these principles to keep your test suite maintainable as it grows.

One test case, one thing. Each test case should verify a single behavior. If your test case checks both valid login and password reset, split it into two. When a combined test case fails, you cannot tell which behavior broke without re-investigating. This principle is sometimes called “atomic” test design.

Make test cases independent. Avoid chains where Test Case B can only run after Test Case A. If Test Case A fails, the entire chain stalls. When a test case needs specific data or system state, set that up in the preconditions rather than relying on another test to create it.
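
In an automated suite, the same rule maps onto fixtures: every test builds the state it needs instead of inheriting it from a test that ran earlier. A pytest sketch, assuming hypothetical create_user(), delete_user(), and attempt_login() helpers:

```python
import pytest

@pytest.fixture
def registered_user():
    # Precondition created fresh for this test, not left over from another one
    user = create_user(email="user@example.com", password="Test@1234")  # hypothetical
    yield user
    delete_user(user)  # clean up so later tests start from a known state

def test_login_with_valid_credentials(registered_user):
    page = attempt_login(registered_user.email, "Test@1234")  # hypothetical
    assert page.is_logged_in()
```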

Write steps a stranger could follow. “Enter the credentials” is vague. “Enter user@example.com in the email field and Test@1234 in the password field” is executable. Specify the exact data, the exact field, and the exact button or link. A new team member or someone covering for you on a release day should be able to run your tests without guessing.

Tie each test case to a requirement. If you are working from user stories with acceptance criteria, start your test cases from those acceptance criteria. Each criterion typically maps to at least one positive and one negative test case. This traceability helps you prove that every requirement has been tested and quickly identify which tests to re-run when a requirement changes.
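
One lightweight way to record traceability in an automated suite is a custom pytest marker. This sketch assumes you register the marker in pytest.ini so pytest recognizes it; the story ID is a placeholder:

```python
import pytest

# pytest.ini:
# [pytest]
# markers =
#     requirement(id): requirement or user story this test verifies

@pytest.mark.requirement("STORY-128")  # placeholder story ID
def test_login_with_valid_credentials():
    ...
```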

Using AI Tools to Draft Test Cases

Modern testing platforms can generate test cases from natural language descriptions. You describe what you want to test in plain English, and the tool produces structured test steps, sometimes even exporting them as executable code. Several platforms now offer this capability: you provide a user story or product requirement, and an AI agent parses it into test scenarios with steps, expected results, and test data.

These tools are genuinely useful for generating a first draft. They can quickly produce dozens of scenarios you might not think of, especially for repetitive form validation or CRUD operations (create, read, update, delete). Some platforms also analyze your application’s actual user paths and generate test cases that cover the most common workflows automatically.

The limitation is that AI-generated test cases still need human review. They tend to miss business logic nuances, security edge cases, and the kinds of creative misuse that real users attempt. Use them to build your initial set quickly, then refine the preconditions, sharpen the expected results, and add the negative and boundary cases that require domain knowledge.

Organizing a Test Suite

Once you have more than a handful of test cases, organization matters. Group test cases by feature area (login, payment, user profile, search) and assign priority levels. Critical test cases cover functionality that would block users entirely if broken, like login, checkout, or data saving. High-priority cases cover important features that have workarounds. Medium and low cases cover cosmetic issues, minor convenience features, and rare scenarios.

This prioritization lets you run a smaller “smoke test” suite of critical cases after every build, with the full suite reserved for milestone releases. Number your test case IDs with a feature prefix (TC-LOGIN, TC-PAY, TC-SEARCH) so anyone scanning the suite can find relevant cases quickly. Store your test cases in a shared location, whether that is a spreadsheet, a test management tool, or a wiki, and version them alongside the requirements they trace back to.
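
If the suite is automated, the same priority scheme can be expressed with markers so the smoke subset is one command away. A pytest sketch; the marker name is a team convention, not a pytest built-in, and should be registered in pytest.ini:

```python
import pytest

@pytest.mark.smoke  # critical path: runs on every build
def test_checkout_with_valid_card():
    ...

def test_currency_display_by_region():  # unmarked: full regression suite only
    ...
```

Running pytest -m smoke after each build then executes only the critical cases, while the unmarked tests wait for the full regression pass at milestone releases.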