Simulation
Simulation is an analysis technique that models uncertainty by running many trials to estimate likely outcomes for schedule, cost, or performance. It provides ranges and probabilities rather than single-point estimates to support better decisions.
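As a rough illustration, the sketch below runs a small Monte Carlo in Python over a hypothetical three-activity chain with invented three-point duration estimates; real tools model full schedule networks, but the sampling idea is the same.

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic), in days.
activities = {
    "design": (8, 10, 15),
    "build": (15, 20, 30),
    "test": (5, 7, 12),
}

TRIALS = 10_000
totals = []
for _ in range(TRIALS):
    # Sample each duration from a triangular distribution and sum the serial chain.
    totals.append(sum(random.triangular(low, high, mode)
                      for (low, mode, high) in activities.values()))

totals.sort()

def percentile(q):
    return totals[int(q * TRIALS) - 1]

print(f"P10={percentile(0.10):.1f}  P50={percentile(0.50):.1f}  P90={percentile(0.90):.1f} days")
print(f"P(finish within 40 days) = {sum(t <= 40 for t in totals) / TRIALS:.0%}")
```

Each run reports a range of likely finishes and the chance of hitting a target, rather than one deterministic total.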
Key Points
- Simulation models uncertainty using probability distributions and runs many iterations to sample possible outcomes.
- Monte Carlo simulation is most common for schedule and cost; discrete-event simulation is used for process flows.
- Results are presented as ranges, percentiles (such as P10, P50, P90), and probabilities of meeting targets.
- Assumptions, dependencies, and correlations strongly influence results and must be made explicit.
- Simulation results guide contingency and management reserves and support what-if scenario evaluation.
- Sensitivity outputs highlight the key risk drivers so risk responses focus where they matter most.
Purpose of Analysis
Use simulation to quantify how uncertainty and risk affect project objectives and to inform realistic commitments and reserves.
- Estimate the probability of meeting dates, budgets, and performance targets.
- Determine appropriate buffers and reserves based on desired confidence levels.
- Compare scenarios to evaluate risk responses and trade-offs.
- Identify activities, cost elements, or risks that drive outcome variability.
Method Steps
- Define the objective and success criteria (e.g., probability to finish by a specific date or within a budget).
- Build a model of the project (schedule network, cost breakdown, or process flow).
- Assign probability distributions to uncertain inputs (durations, costs, quantities) and define dependencies and correlations.
- Select the number of iterations and run the simulation (e.g., 5,000–20,000 trials for stable results).
- Validate the model against historical data or expert judgment and adjust assumptions as needed.
- Analyze outputs (percentiles, S-curves, tornado charts) and derive recommendations for reserves and responses; a small sketch follows this list.
- Document assumptions, communicate results, and update the model as the project or risks change.
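A sketch of the modeling, sampling, and analysis steps in Python, using a hypothetical four-activity network with two parallel paths (all names and estimates invented):

```python
import random

TRIALS = 20_000
target = 30  # target finish, in days
finishes = []
for _ in range(TRIALS):
    # Assign distributions: triangular(low, high, mode) durations in days.
    a = random.triangular(4, 9, 5)      # requirements
    b = random.triangular(10, 20, 12)   # build, after requirements
    c = random.triangular(8, 25, 10)    # procurement, parallel path
    d = random.triangular(3, 8, 4)      # integration, after both paths
    # Network logic: integration starts only when both paths are complete.
    finishes.append(max(a + b, c) + d)

# Analyze outputs: percentiles and the probability of meeting the target.
finishes.sort()
p50 = finishes[int(0.50 * TRIALS)]
p80 = finishes[int(0.80 * TRIALS)]
prob = sum(f <= target for f in finishes) / TRIALS
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days, P(finish <= {target} days) = {prob:.0%}")
```

Even in this toy model, the simulated P50 typically exceeds the deterministic most-likely path (21 days) because the distributions are right-skewed and the two paths merge.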
Inputs Needed
- Project model: schedule network with logic, calendars, and resources; and/or a cost breakdown structure.
- Estimates with uncertainty ranges (e.g., three-point estimates or distribution parameters for durations and costs); a sketch after this list shows one way to convert three-point estimates.
- Risk register with identified risks, their impacts and likelihoods, mapped to model elements where applicable.
- Assumptions, constraints, target dates/budgets, and risk thresholds.
- Dependencies and correlations among activities or cost items.
- Historical data and expert judgment to inform distributions and validate realism.
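Where only three-point estimates are available, the common PERT (beta) approximation can translate them into distribution parameters; a minimal sketch with a hypothetical cost element:

```python
def pert_parameters(optimistic, most_likely, pessimistic):
    """PERT (beta) approximation: mean = (O + 4M + P) / 6, std dev = (P - O) / 6."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Hypothetical cost element estimated at 80 / 100 / 150 (thousands).
mean, sd = pert_parameters(80, 100, 150)
print(f"PERT mean = {mean:.1f}k, std dev = {sd:.1f}k")
```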
Outputs Produced
- Probability distributions and S-curves for total project duration or cost.
- Percentile values (e.g., P50, P80, P90 dates or costs) to set realistic commitments and reserves; see the sketch after this list.
- Probability of meeting a specific target (e.g., 65% chance to finish by June 30).
- Tornado/sensitivity charts showing key drivers of variance.
- Recommended contingency and management reserves aligned to risk appetite.
- Documented assumptions and insights to guide risk responses and planning.
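A sketch of how raw trial results become these outputs (percentile table, probability of meeting a target, and S-curve points), assuming the input list holds one simulated total per iteration; the sample data here are invented:

```python
import bisect
import random

def summarize(totals, target):
    """Turn raw trial results into percentiles, a target probability, and S-curve points."""
    data = sorted(totals)
    n = len(data)
    for pct in (10, 50, 80, 90):                      # percentile table
        print(f"P{pct}: {data[int(pct / 100 * n) - 1]:,.0f}")
    prob = bisect.bisect_right(data, target) / n      # chance of meeting the target
    print(f"P(outcome <= {target:,.0f}) = {prob:.0%}")
    return [(value, (i + 1) / n) for i, value in enumerate(data)]  # S-curve, ready to plot

# Hypothetical usage with simulated total costs (thousands) from a Monte Carlo run.
s_curve = summarize([random.triangular(900, 1400, 1000) for _ in range(5_000)], target=1_100)
```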
Interpretation Tips
- Focus on ranges and confidence levels, not a single number; percentiles support risk-based decisions.
- Check that assumptions and correlations are realistic; unrealistic tails can distort reserves.
- Use sensitivity results to prioritize risk responses where they will have the most impact (illustrated after this list).
- Compare scenarios (with and without responses) to demonstrate the value of mitigation.
- Validate results against historical performance or benchmarks to build stakeholder confidence.
- Communicate simply with visuals (S-curves, tornado charts) and a clear narrative of assumptions.
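One simple way to approximate a tornado ranking is to correlate each sampled input with the simulated total, as sketched below for a hypothetical serial chain (requires Python 3.10+ for statistics.correlation; dedicated tools use richer criticality and sensitivity metrics):

```python
import random
import statistics

TRIALS = 10_000
# Hypothetical uncertain activities: triangular(low, high, mode) durations in days.
estimates = {
    "integration":   (10, 30, 14),
    "supplier lead": (15, 45, 20),
    "testing":       (8, 14, 10),
}
samples = {name: [] for name in estimates}
totals = []
for _ in range(TRIALS):
    draws = {name: random.triangular(*params) for name, params in estimates.items()}
    for name, value in draws.items():
        samples[name].append(value)
    totals.append(sum(draws.values()))

# Tornado-style ranking: correlation of each input with the total outcome.
corr = {name: statistics.correlation(samples[name], totals) for name in estimates}
for name in sorted(corr, key=corr.get, reverse=True):
    print(f"{name:14s} correlation with total: {corr[name]:+.2f}")
```

Inputs with wide ranges dominate the ranking, which is where responses typically pay off most.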
Example
A project team models a 200-activity schedule with three-point duration estimates and known dependencies. Running 10,000 Monte Carlo iterations yields a 35% chance to finish by September 30, a P50 date of October 8, and a P80 date of October 18. A tornado chart shows two integration tasks and supplier lead time as main drivers. The sponsor sets the target at P80 and approves a two-week schedule buffer plus mitigations focused on the drivers.
Pitfalls
- Garbage in, garbage out: poor or biased inputs produce misleading outputs.
- Ignoring correlations or dependencies, which can understate true risk (see the comparison sketched after this list).
- Too few iterations, leading to unstable or noisy results.
- Overly complex black-box models that stakeholders cannot understand or trust.
- Using unrealistic distributions (e.g., symmetric when data are skewed).
- Failing to update the model as risks change and new data emerge.
- Reporting only the mean without confidence ranges or probabilities.
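To see why ignoring correlation understates risk, the sketch below compares a hypothetical three-item cost total when the items vary independently versus when they move together through a shared factor (all figures invented):

```python
import random
import statistics

TRIALS = 20_000
ITEMS = (200, 300, 500)  # hypothetical cost items, in thousands

def simulate(correlated):
    totals = []
    for _ in range(TRIALS):
        shared = random.gauss(0, 1)  # one market-wide factor
        total = 0.0
        for base in ITEMS:
            # Same ~10% marginal uncertainty either way; correlated items move together.
            shock = shared if correlated else random.gauss(0, 1)
            total += base * (1 + 0.10 * shock)
        totals.append(total)
    return sorted(totals)

for label, flag in (("independent", False), ("correlated", True)):
    t = simulate(flag)
    print(f"{label:12s} std dev = {statistics.stdev(t):6.1f}  P90 = {t[int(0.90 * TRIALS)]:,.0f}")
```

In this toy comparison the correlated run shows a wider spread and a higher P90, exactly the risk an independence assumption would hide.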
PMP Example Question
A project sponsor asks for a quantified probability of meeting the current completion date and guidance on how much schedule buffer is needed. Which technique should the project manager use?
- A. Sensitivity analysis on the critical path.
- B. Monte Carlo simulation of the project schedule.
- C. Three-point estimating for each activity.
- D. Expected Monetary Value (EMV) decision tree.
Correct Answer: B — Monte Carlo simulation of the project schedule.
Explanation: Simulation models uncertainty across the schedule to produce probabilities and percentiles, enabling buffer sizing. Three-point estimates alone do not provide overall completion probabilities.