The ICE method is a prioritization framework used to rank ideas, features, or projects by scoring them on three criteria: Impact, Confidence, and Ease. Each criterion gets a score from 1 to 10, and the three scores are multiplied together to produce a single number that helps you decide what to work on first. It’s popular in product management, marketing, and growth teams because it’s fast to apply and easy to explain.
The term “ICE” also appears in other contexts, including emergency contact setup on smartphones and an educational assessment model. This article covers all three meanings so you can find the one you’re looking for.
The ICE Prioritization Framework
The ICE scoring model was designed to help teams stop debating priorities based on gut feelings and start using a consistent, repeatable system. You list every idea or project you’re considering, score each one on Impact, Confidence, and Ease, then multiply those three numbers. The result is the ICE score, and higher scores rise to the top of your list.
Here’s what each component measures:
- Impact: How much will this project move the needle on the key metric you’re targeting? A feature that could double conversion rates scores higher than one that might improve page load time slightly. Score it 1 (minimal effect) to 10 (transformative).
- Confidence: How certain are you that this project will actually deliver the impact you predicted? If you have data from a similar test or a prototype that performed well, confidence is high. If you’re guessing, it’s low. Score it 1 (pure speculation) to 10 (near certainty).
- Ease: How simple is this to implement? Ease accounts for time, technical complexity, cost, and the number of people or teams involved. A project one developer can finish in a week scores high. A six-month cross-team initiative scores low. Score it 1 (extremely difficult) to 10 (very easy).
The formula is straightforward: ICE Score = Impact × Confidence × Ease. A project scoring 8 for Impact, 7 for Confidence, and 9 for Ease gets an ICE score of 504. Compare that to a project scoring 9, 4, and 3, which comes out to 108. The first project wins because it combines strong expected results with high certainty and low effort.
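The formula is simple enough to sketch in a few lines of code. Here’s a minimal Python version that validates the 1-to-10 range and reproduces the two example scores above (the function name and range check are illustrative, not part of any standard library):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 scores into a single ICE score."""
    for name, score in (("Impact", impact), ("Confidence", confidence), ("Ease", ease)):
        if not 1 <= score <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {score}")
    return impact * confidence * ease

# The two projects from the example above:
print(ice_score(8, 7, 9))  # 504
print(ice_score(9, 4, 3))  # 108
```

The range check matters in practice: a stray 0 or 11 in a spreadsheet silently skews the ranking, so failing loudly is safer.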
When ICE Works Best
ICE is built for speed. It works well when you’re exploring smaller experiments, growth ideas, or early-stage features where precision matters less than momentum. Think hackathons, marketing A/B tests, or weekly sprint planning where you need to pick from a dozen small ideas quickly. The framework takes minutes per idea, not hours.
For higher-stakes decisions that affect your product roadmap, budgets, or multiple teams, many organizations switch to a more structured framework called RICE, which adds a “Reach” component to estimate how many users a project will affect and divides by Effort rather than multiplying by Ease. RICE provides more rigor for big-picture planning, while ICE helps you move faster when you’re testing and iterating.
How to Run an ICE Scoring Session
Start by listing every idea or project on the table. A spreadsheet works fine: one row per idea, with columns for Impact, Confidence, Ease, and the calculated score. Before scoring, align your team on what metric “Impact” refers to. If half the team is thinking about revenue and the other half is thinking about user engagement, your scores won’t be comparable.
Have each team member score independently first, then compare. When scores differ significantly on the same idea, that’s a signal to discuss assumptions rather than just average the numbers. Someone scoring Confidence at 3 while a colleague scores it at 9 probably has different information or a different interpretation of the evidence.
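A simple way to surface those disagreements automatically is to compare each idea’s highest and lowest individual scores. This sketch assumes a hypothetical threshold of 3 points; the idea names and votes are made up for illustration:

```python
def flag_for_discussion(scores: dict[str, list[int]], threshold: int = 3) -> list[str]:
    """Return ideas whose individual scores diverge by at least `threshold`."""
    return [idea for idea, votes in scores.items()
            if max(votes) - min(votes) >= threshold]

# Hypothetical independent Confidence scores, one entry per team member.
votes = {
    "Redesign onboarding": [7, 8, 7],
    "Self-serve billing": [3, 9, 5],  # wide spread signals different assumptions
}
print(flag_for_discussion(votes))  # ['Self-serve billing']
```

Anything flagged goes on the discussion list; everything else can simply be averaged.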
One practical tip: anchor your scale with examples. Before your first session, pick a completed project the team agrees was high-impact and one that was low-impact. Use those as reference points so everyone calibrates their 1-to-10 scores similarly. Without anchoring, one person’s 7 might be another person’s 4, and the rankings become unreliable.
After scoring, sort by ICE score from highest to lowest. The top of the list isn’t an automatic decision, but it gives you a defensible starting point. If something near the top feels wrong, revisit the individual scores and figure out which assumption is off.
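If your idea list lives in code rather than a spreadsheet, the sort step is a one-liner. The idea names and scores below are invented for illustration:

```python
# Hypothetical idea list: (name, impact, confidence, ease), each scored 1-10.
ideas = [
    ("New pricing page", 8, 7, 9),
    ("Rewrite search", 9, 4, 3),
    ("Fix signup typo", 3, 10, 10),
]

# Sort descending by the computed ICE score.
ranked = sorted(ideas, key=lambda row: row[1] * row[2] * row[3], reverse=True)

for name, impact, confidence, ease in ranked:
    print(f"{impact * confidence * ease:>4}  {name}")
# 504  New pricing page
# 300  Fix signup typo
# 108  Rewrite search
```

Note how the quick typo fix outranks the ambitious search rewrite: high Confidence and Ease more than compensate for modest Impact, which is exactly the bias toward fast wins that makes ICE useful for iteration.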
Limitations to Keep in Mind
The biggest weakness of ICE is subjectivity. All three scores come from human judgment, and there’s no built-in way to account for how many people a project affects. Two projects might both score 8 on Impact, but one reaches 500 users and the other reaches 50,000. ICE treats them the same. If audience size matters to your decision, you’ll need to factor that in separately or use a framework like RICE that includes it.
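To see how reach changes the picture, here is a sketch of RICE as it is commonly formulated (Reach × Impact × Confidence ÷ Effort, with Confidence as a fraction and Effort in person-months). The numbers are illustrative, not from any real project:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE as commonly formulated: (Reach x Impact x Confidence) / Effort.

    reach: users affected per period; impact: per-user effect multiplier;
    confidence: 0.0-1.0; effort: person-months. All values here are illustrative.
    """
    return reach * impact * confidence / effort

# Two projects identical on impact, confidence, and effort,
# but reaching very different audiences:
print(rice_score(50_000, 2, 0.8, 4))  # 20000.0
print(rice_score(500, 2, 0.8, 4))     # 200.0
```

Under ICE the two projects would tie; under RICE the larger audience wins by two orders of magnitude, which is the whole point of adding Reach.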
The 1-to-10 scale can also compress differences. People tend to cluster scores between 5 and 8, which makes many ideas look similar. Encouraging the full range of the scale, and using those anchor examples, helps spread scores out enough to be useful.
ICE as an Emergency Contact Method
Outside of business, “ICE” stands for “In Case of Emergency,” a system for storing emergency contact information on your phone so first responders can reach your family or doctor if you’re unable to communicate.
On Android, open the Safety app, sign into your Google Account, and tap “Your info.” From there, you can add emergency contacts by tapping “Emergency contacts,” then “Add contact” and selecting someone from your existing contacts. You can also add medical information like blood type, allergies, and current medications under the “Medical information” section.
On iPhones, you set this up through the Health app. Open Health, tap your profile picture, then “Medical ID.” Toggle on “Show When Locked” so the information is accessible from the lock screen without needing your passcode. Add your emergency contacts, medical conditions, allergies, and blood type here.
The key step most people skip is making this information visible from the lock screen. If a paramedic can’t unlock your phone, the emergency info is useless unless you’ve enabled lock-screen access.
ICE in Education
In teaching and curriculum design, ICE stands for Ideas, Connections, and Extensions. It’s a framework developed to help educators assess how deeply students understand material, moving beyond simple memorization.
The three stages represent a learning progression. “Ideas” covers foundational knowledge: the basic facts and vocabulary a student needs. “Connections” measures whether a student can link those ideas together or relate them to other concepts. “Extensions” looks at whether a student can apply what they’ve learned to new situations, think critically, or create something original with the knowledge.
Teachers use ICE to design assignments and rubrics that reward deeper thinking. Rather than grading only on whether a student recalled the right answer, an ICE-based rubric distinguishes between a student who memorized a definition (Ideas level), one who can explain how it relates to a broader topic (Connections level), and one who can use the concept to analyze an unfamiliar problem (Extensions level). The framework works across subjects and grade levels, from elementary school through university courses.

