Customer talks and tests
A collaborative validation technique in which customers and end users discuss needs and perform hands-on acceptance tests on deliverables. It yields quick feedback, objective pass/fail results, and formal acceptance decisions during Validate Scope.
Key Points
- Pairs real-time customer conversations with practical testing of the product or increment.
- Centers on acceptance criteria to drive objective pass/fail outcomes.
- Time-boxed, facilitated sessions that end in accept, reject, or change request decisions.
- Generates visible evidence such as test results, screenshots, and sign-off records.
- Can be run as user acceptance testing (UAT), demo-plus-test sessions, or focused validation workshops.
Purpose of Analysis
- Confirm that delivered scope satisfies agreed requirements and user needs.
- Expose gaps, defects, or misunderstandings before wider release.
- Translate qualitative feedback into measurable acceptance outcomes and change actions.
- Reduce rework and scope disputes through transparent, traceable results.
Method Steps
- Plan the session: objectives, scope items, participants with decision authority, and time-box.
- Prepare: align on acceptance criteria, test charters or scripts, environments, and data.
- Facilitate talks: clarify use cases, walk through expected outcomes, and confirm priorities.
- Execute tests: customers perform scenarios while the team observes and records evidence.
- Record outcomes: pass/fail per criterion, defects, usability notes, and improvement ideas (a logging sketch follows this list).
- Decide: accept deliverables, log change requests, or schedule fixes and retests.
- Close: capture sign-off; update the traceability matrix, backlog, and lessons learned.
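The recording and decision steps above can be backed by a lightweight results log. Below is a minimal Python sketch, assuming hypothetical field names and an "all criteria must pass" rule; treat it as an illustration of structuring pass/fail evidence, not a prescribed tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record of one acceptance-test outcome.
@dataclass
class AcceptanceResult:
    scenario: str          # scenario or criterion under test
    passed: bool           # objective pass/fail against the criterion
    evidence: str          # link to screenshot, log, or other proof
    notes: str = ""        # usability notes or improvement ideas
    tested_on: date = field(default_factory=date.today)

def session_decision(results: list[AcceptanceResult]) -> str:
    """Summarize a session: accept only if every criterion passed."""
    failed = [r.scenario for r in results if not r.passed]
    if not failed:
        return "ACCEPT: all criteria passed"
    return "REWORK: log defects and retest " + ", ".join(failed)
```

A facilitator might append one record per executed scenario and read the summary back to participants before the formal accept, reject, or change-request decision.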
Inputs Needed
- Requirements baseline or product backlog items with clear acceptance criteria.
- Working increment, prototype, or deliverable ready for hands-on evaluation.
- Definition of Done and quality standards.
- Test cases, charters, or scenario outlines and test data.
- Test environment access, tools for capturing evidence, and defect logging mechanisms.
- Previous feedback, open issues, and relevant risks or constraints.
Outputs Produced
- Accepted deliverables with formal sign-off records.
- Work performance information such as pass/fail counts and coverage achieved.
- Defect and issue logs with severity and owner.
- Change requests for unmet needs, new findings, or scope adjustments.
- Updated backlog, traceability matrix, and test artifacts.
- Lessons learned focused on validation efficiency and stakeholder engagement.
Interpretation Tips
- Anchor judgments to acceptance criteria to avoid subjective debates.
- Distinguish defects (the deliverable fails agreed criteria) from enhancements (new requests beyond the criteria) so each is routed correctly.
- Use coverage metrics to confirm that critical scenarios and edge cases were exercised (a simple check is sketched after this list).
- When results are mixed, negotiate conditional acceptance with a clear remediation plan and retest date.
- Capture enough evidence to support audits and future regressions.
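As one way to apply the coverage tip, the check below compares planned critical scenarios against those actually executed. The scenario names and the full-coverage threshold are illustrative assumptions, not a standard.

```python
# Hypothetical coverage check: were all planned critical scenarios exercised?
planned_critical = {"standard_flow", "invalid_input", "limit_exceeded"}
executed = {"standard_flow", "invalid_input", "network_retry"}

coverage = len(planned_critical & executed) / len(planned_critical)
print(f"Critical coverage: {coverage:.0%}")  # e.g. 67%
if coverage < 1.0:
    print("Missing:", ", ".join(sorted(planned_critical - executed)))
```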
Example
A team validates a new “Funds Transfer” feature for a mobile banking app. Product owners and client representatives join a two-hour session to walk through priority scenarios and execute tests using masked test data.
- Talks clarify daily transfer limits, error messaging, and confirmation flows.
- Tests cover standard transfer, invalid account number, exceeded limit, and network retry (the first three are sketched in code after this list).
- Results: three scenarios pass and are accepted; one fails due to missing validation, logged as a defect with a one-week fix and retest.
- Sign-off is recorded for the accepted scenarios, and the backlog is updated with the defect and an enhancement request for clearer confirmation text.
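To make the tested scenarios concrete, here is a sketch of the first three checks in pytest style. The transfer() function, its error types, and the daily limit value are hypothetical stand-ins for the app's real API; the network-retry scenario is omitted because it depends on the live environment.

```python
import pytest

DAILY_LIMIT = 10_000  # assumed value of the limit clarified during the talks

class InvalidAccountError(Exception): ...
class LimitExceededError(Exception): ...

def transfer(amount: float, to_account: str) -> str:
    """Hypothetical stand-in for the app's real transfer API."""
    if len(to_account) != 10 or not to_account.isdigit():
        raise InvalidAccountError(to_account)
    if amount > DAILY_LIMIT:
        raise LimitExceededError(amount)
    return "CONFIRMED"

def test_standard_transfer_is_confirmed():
    assert transfer(250.0, "1234567890") == "CONFIRMED"

def test_invalid_account_number_is_rejected():
    with pytest.raises(InvalidAccountError):
        transfer(250.0, "BAD-ACCT")

def test_exceeded_limit_is_rejected():
    with pytest.raises(LimitExceededError):
        transfer(DAILY_LIMIT + 1, "1234567890")
```

In the session itself, customers execute equivalent scenarios by hand against the real increment; the point of the sketch is that each scenario maps to one objective pass/fail criterion.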
Pitfalls
- Vague or missing acceptance criteria leading to subjective decisions.
- Wrong participants or no decision authority, causing delays in acceptance.
- Unstable environments or data issues that mask true product quality.
- Over-scripted sessions that ignore real user workflows and edge cases.
- Poor evidence capture, making outcomes hard to audit or retest.
- Allowing scope expansion during the session instead of logging change requests.
PMP Example Question
A project manager schedules a time-boxed session where end users discuss expected behavior and execute hands-on tests to decide acceptance of new features. Which technique is being used?
- A. Inspections.
- B. Customer talks and tests.
- C. Control charts.
- D. Benchmarking.
Correct Answer: B. Customer talks and tests.
Explanation: This technique combines stakeholder conversations with acceptance testing to confirm deliverables meet criteria and to obtain formal acceptance during Validate Scope.