The Swiss Cheese Model illustrates that, in complex systems, accidents rarely stem from a single, isolated malfunction or mistake. Instead, a catastrophic outcome typically results from weaknesses aligning across multiple defensive layers. The model provides a metaphor for identifying and addressing the systemic flaws that allow a hazard to bypass established safeguards and ultimately cause harm.
Origin and Purpose of the Model
Psychologist James Reason developed the model in the early 1990s, introducing its core ideas in his 1990 book Human Error. Reason sought to change how industries investigated accidents, moving away from the traditional “person approach,” which blames individuals for errors, toward a “system approach,” which examines the conditions and processes that lead to failure. The model highlights that human actions are often the consequence of upstream organizational failures, allowing investigators to trace the causal chain back to deeper, systemic flaws.
The Core Analogy: Slices, Holes, and the Trajectory of Failure
The metaphor uses a stack of Swiss cheese slices to represent the multiple protective layers, barriers, and safeguards built into a system. Each slice is an imperfect defense mechanism, such as policies, training, or physical barriers. The flaws inherent in these layers are represented by the holes, which continuously change in size and position, reflecting dynamic organizational performance. Failure occurs only when a hazard successfully navigates through all successive layers. This happens when the holes in all slices momentarily align, creating an unobstructed “trajectory of failure” from the hazard to an adverse outcome.
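Read probabilistically, the metaphor implies that harm requires a simultaneous gap in every layer. The minimal Python sketch below simulates that alignment under an assumption of independent layers with made-up hole probabilities (the layer names and numbers are illustrative only, not part of Reason's formulation): a hazard causes an accident only on trials where every slice happens to have a hole in its path.

```python
import random

# Illustrative probabilities that each layer has a hole in the hazard's path
# at a given moment; the layer names and values are assumptions for this example.
LAYERS = {"policy": 0.10, "training": 0.08, "physical barrier": 0.05}

def trajectory_of_failure(rng):
    """Return True only if the hazard finds an aligned hole in every layer."""
    return all(rng.random() < p for p in LAYERS.values())

rng = random.Random(42)
trials = 100_000
accidents = sum(trajectory_of_failure(rng) for _ in range(trials))

# Under the independence assumption, the expected rate is the product of the
# per-layer probabilities: 0.10 * 0.08 * 0.05 = 0.0004 (about 40 per 100,000).
print(f"{accidents} accidents in {trials} trials")
```

Even with only three imperfect layers, full alignment is rare; strengthening any single layer, or adding another, shrinks that product further, which is the intuition the model is meant to convey.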
Understanding the Four Levels of Defense
Reason’s model categorizes weaknesses, or holes, into four distinct levels of failure, moving from macro-level systemic issues down to micro-level individual actions. These levels must all be breached for an accident to occur. The systemic flaws are often described as latent conditions because they can lie dormant and undetected for long periods, only becoming apparent when they contribute to an accident.
Organizational Influences
This uppermost layer represents the high-level decisions, management policies, and organizational culture that create latent conditions within the system. Examples include budget cuts that lead to reduced maintenance schedules or a corporate focus on production speed over safety compliance. Decisions about organizational structure and resource allocation set the stage for downstream failures by shaping how the entire system is designed and operated. A weak safety culture, in which employees are discouraged from reporting errors, also adds to the size and number of holes at this layer.
Unsafe Supervision
This layer covers failures in the direct oversight and management of personnel and operations, which often stem from organizational influences. Supervisory failures include providing inadequate training or insufficient professional guidance on complex tasks. Poor supervision also includes an unwillingness to correct known problems, such as defective equipment or chronic staffing shortages. Scheduling personnel for excessive hours that induce fatigue is a further example of weakened defenses.
Preconditions for Unsafe Acts
This layer describes the immediate environment, psychological state, and physical conditions that directly contribute to the likelihood of an individual committing an error. These preconditions can include a worker experiencing physical fatigue or mental stress due to a heavy workload. Other factors involve technological issues, such as poorly designed human-machine interfaces, or environmental factors like noise and poor lighting that impair performance. These factors do not cause an accident directly but increase the probability of an active failure by the person at the operational end.
Unsafe Acts (Active Failures)
The final layer, closest to the accident, represents the active failures committed by people who are directly interacting with the system. These are the sharp-end errors, lapses, and violations that immediately precede an adverse event. An error might be a surgeon incorrectly calculating a drug dosage, or a pilot inadvertently selecting the wrong switch during flight. Violations involve deliberate disregard for a safety rule, such as an employee skipping a mandatory pre-use equipment check to save time.
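In investigations, these four levels often serve as a simple classification scheme: findings are tagged by level and read from the active failure back up to their organizational roots. The Python sketch below illustrates that idea; the Level enum, the ContributingFactor class, and the example findings are hypothetical and not drawn from any specific incident or framework.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """The four levels of failure, ordered from latent to active."""
    ORGANIZATIONAL_INFLUENCES = 1
    UNSAFE_SUPERVISION = 2
    PRECONDITIONS_FOR_UNSAFE_ACTS = 3
    UNSAFE_ACTS = 4

@dataclass
class ContributingFactor:
    level: Level
    description: str

# Hypothetical findings from a single investigation, tagged by level.
findings = [
    ContributingFactor(Level.UNSAFE_ACTS,
                       "Operator skipped the mandatory pre-use equipment check"),
    ContributingFactor(Level.PRECONDITIONS_FOR_UNSAFE_ACTS,
                       "Operator fatigued after repeated double shifts"),
    ContributingFactor(Level.UNSAFE_SUPERVISION,
                       "Known staffing shortage left uncorrected"),
    ContributingFactor(Level.ORGANIZATIONAL_INFLUENCES,
                       "Budget cuts reduced the maintenance schedule"),
]

# Walk the causal chain from the active failure back to its organizational roots.
for factor in sorted(findings, key=lambda f: f.level.value, reverse=True):
    print(f"{factor.level.name:<32} {factor.description}")
```

Ordering the output from the sharp end back to the blunt end mirrors how the model is used: the unsafe act is the starting point of the analysis, not its conclusion.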
Real-World Applications of the Model
Aviation Safety
Aviation safety was an early adopter of the Swiss Cheese Model, using it to trace accidents from the final active error back through preconditions, supervision, and organizational decisions. This systemic analysis helps authorities implement changes to design, training, and regulation, making the system more robust against future threats.
Healthcare and Patient Safety
In healthcare, the model is used to analyze medical mishaps, such as medication administration errors or surgical complications. For example, a nurse’s active error might be traced back to a latent condition of look-alike drug packaging and an organizational influence of inadequate staffing that causes rushed procedures.
Cybersecurity and IT Risk Management
Cybersecurity and IT risk management also use the model, viewing firewalls, employee training, and access control policies as layered defenses. This approach, known as “defense in depth,” aims to ensure that a breach of one layer, such as a successful phishing attempt, is still caught by a subsequent layer, such as endpoint detection and response software.
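As a rough sketch of that idea in code, the Python snippet below treats each control as an independent check and declares a breach only when an attack slips through a hole in every layer. The layer functions, event fields, and threshold are illustrative assumptions, not a real security product's API.

```python
# Each function returns True when the attack slips through a "hole" in that
# layer; names and fields are hypothetical, chosen only for illustration.

def passes_email_filter(event):
    # Hole: the phishing mail was not caught by the mail gateway.
    return event.get("phishing_score", 0.0) < 0.8

def passes_user_training(event):
    # Hole: the recipient clicked the malicious link despite training.
    return event.get("user_clicked_link", False)

def passes_endpoint_detection(event):
    # Hole: the payload was not flagged by endpoint detection and response.
    return not event.get("payload_flagged", False)

DEFENSE_LAYERS = [passes_email_filter, passes_user_training, passes_endpoint_detection]

def breach_succeeds(event):
    """Return True only if the attack penetrates every defensive layer."""
    return all(layer(event) for layer in DEFENSE_LAYERS)

# A phishing mail that got past the gateway and the user, but was flagged
# by endpoint detection: one intact layer is enough to stop the trajectory.
print(breach_succeeds({"phishing_score": 0.3,
                       "user_clicked_link": True,
                       "payload_flagged": True}))  # False
```

The design point is the same as in Reason's stack of slices: no single layer needs to be perfect, but each intact layer truncates the trajectory of failure.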
Moving Beyond the Cheese: Limitations and Evolution
While the Swiss Cheese Model offers a clear visualization of accident causation, it has faced criticism for being overly simplistic in its representation of complex, adaptive systems. A primary critique is that the model can portray human error as a cause rather than a consequence of deeper systemic flaws. It also fails to fully account for the dynamic nature of human performance, where people often adapt to maintain safety despite flawed procedures. Newer frameworks, such as Resilience Engineering and Safety-II, have evolved to address these limitations by focusing on how systems anticipate, adapt, and absorb disruptions.

