Risk probability and impact assessment

Definition

A structured technique to estimate how likely each risk is to occur and how strongly it could affect project objectives. The combined results are used to rank risks and focus response planning on what matters most.

Key Points

  • Uses agreed probability and impact scales (for example, 1–5 or low–medium–high) to rate each risk.
  • Considers multiple impact dimensions such as cost, schedule, scope, quality, safety, and reputation.
  • Produces a prioritized risk list, often shown on a probability-impact matrix or heat map.
  • Applies to both threats (negative impact) and opportunities (positive impact) with consistent scales.
  • Relies on expert judgment, historical data, and calibration to reduce bias and improve consistency.
  • Feeds directly into selecting risk responses and informing contingency and management reserves.
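As a rough illustration of the scoring approach in the points above, the two ratings can be multiplied into a combined score and mapped to a heat-map zone. The band cut-offs below (red ≥ 12, amber ≥ 6) are illustrative assumptions, not a standard; each project defines its own zones.

```python
# Minimal sketch of probability-impact scoring on agreed 1-5 scales.
# Band cut-offs are illustrative assumptions, not fixed standards.

def pi_score(probability: int, impact: int) -> int:
    """Combine 1-5 probability and impact ratings into a single score."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be on the agreed 1-5 scale")
    return probability * impact

def priority_band(score: int) -> str:
    """Map a combined score to an illustrative heat-map zone."""
    if score >= 12:
        return "red"      # high priority: plan responses now
    if score >= 6:
        return "amber"    # medium priority: monitor actively
    return "green"        # low priority: watch list

print(pi_score(3, 4), priority_band(pi_score(3, 4)))  # 12 red
```

The multiplicative rule is only one convention; some organizations use lookup matrices with non-linear zones instead.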

Objective

Prioritize individual risks so the team applies limited time and budget to the most significant items. Create a common, transparent basis for comparing risks across objectives and categories. Align prioritization with stakeholder risk appetite and predefined thresholds to guide escalation and response planning.

Method Steps

  • Define and socialize probability and impact scales, including clear impact criteria across relevant dimensions.
  • Prepare the current risk list and context; gather supporting data, assumptions, and historical information.
  • Facilitate assessments with the right stakeholders and SMEs; estimate probability and impact for each risk and note assumptions and time frames.
  • Combine ratings into a score or place each risk on a probability-impact matrix; consider modifiers such as detectability, urgency, or proximity if used.
  • Prioritize and categorize risks; assign or confirm risk owners and propose initial response strategies.
  • Calibrate and review for consistency and bias; document rationale and update the risk register and risk report.
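The combine-and-prioritize steps above might be sketched as follows. The `Risk` class, the worst-case (max) impact rule across dimensions, and the sample register entries are illustrative assumptions; projects may instead weight dimensions differently.

```python
# Illustrative sketch of combining multi-dimensional impact ratings
# and prioritizing the risk list. Taking the worst (max) impact is
# one common convention, not the only one.

from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    probability: int                              # 1-5 agreed scale
    impacts: dict = field(default_factory=dict)   # dimension -> 1-5 rating

    @property
    def score(self) -> int:
        return self.probability * max(self.impacts.values())

def prioritize(risks):
    """Return risks ordered from highest to lowest combined score."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

register = [
    Risk("supplier delay", 3, {"schedule": 4, "cost": 2}),
    Risk("scope creep", 2, {"scope": 2, "quality": 3}),
]
for r in prioritize(register):
    print(r.name, r.score)
```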

Inputs Needed

  • Identified risks (risk register) and risk categories or RBS.
  • Defined probability and impact scales with impact criteria by objective.
  • Stakeholder risk appetite, thresholds, and escalation guidelines.
  • Project baselines, constraints, assumption log, and stakeholder information.
  • Historical data, lessons learned, expert judgment, and industry benchmarks.

Outputs Produced

  • Updated risk register with probability, impact, combined score or priority, rationale, and owner.
  • Probability-impact matrix or heat map tailored to the project.
  • Prioritized risk list and watch list for low-priority items.
  • Updates to the risk report and communications to stakeholders.
  • Inputs to risk response planning and contingency reserve discussions.

Thresholds/Triggers

  • Score-based threshold (e.g., combined score ≥ 12 when multiplying 1–5 probability and impact ratings) triggers immediate response planning and owner assignment.
  • High-impact risks (e.g., impact = 5) escalate to the sponsor regardless of probability.
  • Opportunities above a defined score are flagged for exploit or enhance strategies.
  • Category concentration trigger (e.g., >20% of top risks in one category) prompts root-cause analysis.
  • Time-based reassessment at a fixed cadence (e.g., sprint review or monthly cycle) and upon major baseline changes.
  • Trigger to revisit ratings when new information changes detectability, urgency, or underlying assumptions.
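Several of these triggers reduce to simple checks. In the sketch below, the threshold values (score 12, impact 5, 20% category concentration) mirror the examples above; in practice they would come from stakeholder risk appetite and agreed thresholds.

```python
# Hedged sketch of the score, escalation, and concentration triggers.
# All threshold values are illustrative, taken from the examples above.

from collections import Counter

def fired_triggers(probability: int, impact: int) -> list:
    """Return the triggers fired by a single risk's ratings."""
    fired = []
    if probability * impact >= 12:
        fired.append("immediate response planning")
    if impact == 5:
        fired.append("escalate to sponsor")
    return fired

def concentrated_categories(top_risk_categories, limit=0.20):
    """Flag categories holding more than `limit` of the top risks."""
    counts = Counter(top_risk_categories)
    total = len(top_risk_categories)
    return [c for c, n in counts.items() if n / total > limit]

print(fired_triggers(3, 4))   # ['immediate response planning']
print(concentrated_categories(
    ["supply", "supply", "tech", "legal", "tech", "supply"]))
```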

Example

A project team defines 1–5 probability and impact scales with clear criteria for cost, schedule, and quality. They assess a potential supplier delay as probability = 3 and schedule impact = 4, yielding a score of 12, placing it in the red zone. The risk is prioritized, an owner is assigned, and a mitigation plan (dual-sourcing and schedule buffer) is initiated.

They also assess an opportunity: early completion could free resources and reduce overhead. Probability = 2, impact (cost saving) = 3, score = 6, which is monitored on the watch list and considered for an enhance strategy if conditions improve.
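The arithmetic in this example, applied consistently to the threat and the opportunity, can be checked with a short sketch. The red-zone cut-off of 12 is this project's own threshold, not a universal constant.

```python
# The worked example above: the same multiplicative scoring rule is
# applied to a threat and an opportunity (consistent-scales principle).

RED_ZONE = 12  # project-specific threshold, not a universal constant

supplier_delay = 3 * 4     # probability 3, schedule impact 4
early_completion = 2 * 3   # probability 2, cost-saving impact 3

print("supplier delay:", supplier_delay,
      "red zone" if supplier_delay >= RED_ZONE else "watch list")
print("early completion:", early_completion,
      "red zone" if early_completion >= RED_ZONE else "watch list")
```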

Pitfalls

  • Using vague or inconsistent scales, leading to incomparable ratings.
  • Groupthink, optimism bias, or anchoring that skews estimates.
  • Treating relative scores as precise forecasts rather than prioritization aids.
  • Ignoring opportunities or applying threat-only criteria to positive risks.
  • Mixing inherent and residual risk without clearly documenting which is rated.
  • Allowing assessments to become stale by not revisiting them after changes.
  • Focusing on a single dimension (e.g., cost) and overlooking other impacts.

PMP Example Question

During a qualitative risk workshop, participants disagree on how to rate several risks. What should the project manager do to improve consistency before continuing?

  A. Establish and communicate clear, project-specific probability and impact scales with impact criteria.
  B. Move directly to quantitative analysis to obtain objective numbers.
  C. Average the differing ratings and proceed to response planning.
  D. Defer risk analysis until more detailed estimates are available.

Correct Answer: A — Establish and communicate clear, project-specific probability and impact scales with impact criteria.

Explanation: Clear, agreed scales and criteria enable consistent probability and impact assessments. Quantitative analysis is not a substitute for well-calibrated qualitative ratings.

AI-Prompt Engineering for Strategic Leaders

Stop managing administration and start leading the future. This course is built specifically for managers and project professionals who want to automate chaos and drive strategic value using the power of artificial intelligence.

We don't teach you how to program Python; we teach you how to program productivity. You will master the AI-First Mindset and the 'AI Assistant' model to hand off repetitive work like status reports and meeting minutes so you can focus on what humans do best: empathy, negotiation, and vision.

Learn the 5 Core Prompt Elements (Role, Goal, Context, Constraints, and Output) to get high-quality results every time. You will build chained sequences for complex tasks like auditing schedules or simulating risks, while navigating ethics and privacy with human-in-the-loop safeguards.

Move from being an administrative manager to a high-value strategic leader. Future-proof your career today with practical, management-focused AI workflows that map to your real-world challenges. Enroll now and master the language of the future.



Stop Managing Admin. Start Leading the Future!

HK School of Management helps you master AI-Prompt Engineering to automate chaos and drive strategic value. Move beyond status reports and risk logs by turning AI into your most capable assistant. Learn the core elements of prompt engineering to save hours every week and focus on high-value leadership. For the price of lunch, you get practical frameworks to future-proof your career and solve the blank-page problem immediately. Backed by a 30-day money-back guarantee: zero risk, real impact.

Enroll Now