Test and inspection planning
A technique for deciding how deliverables will be verified for conformance before acceptance. It defines the methods, timing, criteria, environments, and responsibilities for tests and inspections. Doing this early aligns expectations and reduces rework and disputes.
Key Points
- Determines what will be verified, how it will be verified, and who approves results.
- Links requirements to specific verification methods such as inspection, sampling, walkthroughs, and testing.
- Establishes measurable acceptance criteria, entry and exit conditions, and defect thresholds.
- Plans logistics such as environments, tools, test data, and test accounts.
- Integrates verification activities into schedule, budget, and Definition of Done or acceptance criteria.
- Supports compliance by mapping regulatory or standards-based checks to deliverables.
- Scales by risk, using 100% inspection for critical items and sampling for low-risk items.
Purpose of Analysis
- Translate stakeholder needs into observable, testable acceptance conditions.
- Set clear expectations for quality and acceptance to minimize rework and change disputes.
- Right-size verification effort using risk and criticality to guide depth and frequency.
- Ensure resources, environments, and data are planned before validation points.
- Demonstrate compliance with standards, contracts, and regulations.
Method Steps
- Review requirements and initial acceptance criteria with stakeholders and product owner.
- Classify verification needs by risk, complexity, regulatory impact, and measurability.
- Select methods for each requirement or deliverable: inspection, review, functional testing, performance testing, sampling, or user acceptance testing.
- Define objective measures: pass or fail conditions, tolerances, sampling plans, and defect severity thresholds.
- Plan logistics: environments, test data, tools, automation scope, and independence of testers or inspectors.
- Assign roles and responsibilities for executing, witnessing, and approving results.
- Schedule verification points in the roadmap, iterations, or milestones, including lead times for environment readiness.
- Document the approach in the scope and quality planning artifacts and link each requirement to a verification method.
- Review with stakeholders and adjust based on risk, cost, and schedule trade-offs.
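The classification and method-selection steps above can be sketched as a small script. This is an illustrative assumption, not a standard: the 1–5 risk scale, the thresholds, and the method names are placeholders a team would replace with its own policy.

```python
# Hypothetical risk-based method selection. The risk scale (1-5),
# thresholds, and method names are illustrative assumptions.

def select_method(risk: int, regulatory: bool) -> str:
    """Map a risk score and regulatory flag to a verification method."""
    if regulatory or risk >= 4:
        return "100% inspection + independent test"
    if risk >= 2:
        return "functional testing"
    return "sampling"

requirements = [
    {"id": "REQ-1", "risk": 5, "regulatory": False},
    {"id": "REQ-2", "risk": 2, "regulatory": False},
    {"id": "REQ-3", "risk": 1, "regulatory": True},
]

for r in requirements:
    r["method"] = select_method(r["risk"], r["regulatory"])
    print(r["id"], "->", r["method"])
```

The point of encoding the rules, even informally, is that the mapping from risk to rigor becomes explicit and reviewable rather than decided ad hoc per deliverable.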
Inputs Needed
- Requirements and user stories with initial acceptance criteria.
- Product scope description and assumptions or constraints.
- Applicable standards, regulations, and contractual acceptance clauses.
- Organizational test policies, templates, historical checklists, and lessons learned.
- Stakeholder quality expectations and Definition of Done or acceptance definitions.
- Preliminary architecture or design concepts that influence testability.
- Risk insights that indicate areas needing deeper verification.
Outputs Produced
- Verification strategy describing methods, rigor, and independence levels.
- Updated and measurable acceptance criteria per requirement or story.
- Verification matrix linking requirements to test or inspection methods and approval authorities.
- Inspection and test checklists, sampling plans, and defect classification scheme.
- Environment and data readiness plan with lead times and responsibilities.
- Schedule and cost estimates for verification activities and resources.
- Inputs to the scope management approach and quality management planning.
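The verification matrix listed above is, at its core, a lookup from requirement to method and approver. A minimal sketch (field names and IDs are invented for illustration) also shows the traceability payoff: unverified requirements can be detected mechanically.

```python
# Illustrative verification (traceability) matrix. Requirement IDs,
# methods, and approver roles are hypothetical examples.

matrix = {
    "REQ-101": {"method": "UAT scenario", "approver": "Product Owner"},
    "REQ-102": {"method": "load test", "approver": "Performance Lead"},
    "REQ-103": {"method": "2% sampling inspection", "approver": "QA Manager"},
}

def untraced(requirement_ids, matrix):
    """Return requirements that have no planned verification method."""
    return [r for r in requirement_ids if r not in matrix]

# REQ-104 appears in scope but not in the matrix, so it surfaces as a gap.
print(untraced(["REQ-101", "REQ-104"], matrix))
```

In practice this lives in a requirements-management tool or a spreadsheet; the structure, not the tooling, is what matters.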
Interpretation Tips
- Favor objective, observable measures over subjective wording to avoid disputes.
- Use risk-based testing to concentrate effort where failure impact is highest.
- Balance 100% inspection versus sampling to optimize cost of quality.
- Ensure acceptance criteria reflect user value, not just technical correctness.
- Keep traceability so changes to requirements automatically trigger updates to verification activities.
- In adaptive approaches, embed acceptance tests in each story and include them in the Definition of Done.
Example
A team planning a new e-commerce checkout defines verification as follows.
- Functional: Scenario-based UAT for taxes, shipping, and payment declines; pass if outcomes match business rules.
- Nonfunctional: Load testing for 500 concurrent checkouts with response time under 2 seconds at 95th percentile.
- Data: Sampling plan to verify 2% of migrated customer records with zero critical defects.
- Security: Independent penetration test of payment flow; no high or critical findings allowed before release.
- Inspections: UX checklist review of accessibility criteria against WCAG standards.
- Logistics: Staging environment ready two sprints before UAT; masked production-like data prepared one week prior.
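The numeric criteria in this example can be made mechanically checkable. The sketch below, assuming the nearest-rank percentile method, evaluates the 95th-percentile response-time threshold and computes the 2% sample size; both numbers come straight from the plan above.

```python
# Pass/fail checks for the example's measurable criteria:
# 95th-percentile response time under 2 s, and a 2% record sample.
import math

def p95(samples):
    """95th percentile via the nearest-rank method (an assumed choice)."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

def load_test_passes(response_times_s, limit_s=2.0):
    """True if the 95th-percentile response time is under the limit."""
    return p95(response_times_s) < limit_s

def sample_size(total_records, rate=0.02):
    """Number of migrated records to inspect under the 2% plan."""
    return math.ceil(total_records * rate)

print(sample_size(250_000))  # e.g. 250,000 migrated records -> 5000 inspected
```

Writing the criteria this way forces the team to pin down details that prose leaves open, such as which percentile method is used and whether the threshold is strict or inclusive.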
Pitfalls
- Vague acceptance criteria that are not measurable or observable.
- Ignoring nonfunctional and compliance requirements until late in the project.
- Underestimating lead time for environments, data, and test accounts.
- Over-testing low-risk items while under-testing high-impact areas.
- Not defining approval roles, causing delays at validation gates.
- Failing to integrate verification tasks into schedule and budget.
- Missing traceability, so requirement changes do not update verification plans.
PMP Example Question
While developing the scope management approach, the team defines what will be verified, the methods to use (inspection, sampling, UAT), acceptable thresholds, and who signs off. What is the primary benefit of doing this early?
- A. It guarantees zero defects in the final product.
- B. It clarifies acceptance criteria and verification approach, reducing rework and disputes.
- C. It allows the team to skip quality assurance activities.
- D. It eliminates the need for stakeholder involvement during validation.
Correct Answer: B — It clarifies acceptance criteria and verification approach, reducing rework and disputes.
Explanation: Early test and inspection planning sets objective acceptance measures and methods, aligning stakeholders and embedding verification into the plan. It does not guarantee zero defects or replace QA or stakeholder validation.