Expectations for Testers During Agile Ceremonies
3 Amigos
Identify and Clarify Ambiguities in Requirements
The tester should actively look for and raise questions about any subjective or vague terminology (e.g., "quickly", "like", "similar") to improve clarity and build a shared understanding. This helps uncover the author’s intent and reduces assumptions that could lead to defects later in development.
Contribute to Defining Acceptance Criteria (ACs)
The tester is expected to help draft the initial Acceptance Criteria. Their input helps ensure the criteria are testable and that edge cases and failure scenarios are considered from the start, increasing confidence that the story can be validated effectively.
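A minimal sketch of what a testable AC can look like once it is made precise, written here as pytest-style checks. The search_products function, the catalogue contents, and the AC wording are all hypothetical and exist only for illustration, not a real implementation.

# Hypothetical stand-in for the real feature under test.
def search_products(query):
    """Assumed behaviour: return the catalogue items whose names contain the query."""
    catalogue = ["red shirt", "blue shirt", "red hat"]
    return [item for item in catalogue if query.lower() in item]

# AC 1 (happy path): given matching items exist, when the user searches for "shirt",
# then every matching item is returned.
def test_search_returns_all_matching_items():
    assert search_products("shirt") == ["red shirt", "blue shirt"]

# AC 2 (edge case raised in 3 Amigos): given nothing matches, when the user searches,
# then an empty list is returned rather than an error.
def test_search_with_no_matches_returns_empty_list():
    assert search_products("trousers") == []

Framing ACs this concretely is what makes the tester's testability input valuable at this stage: each criterion maps directly to a check that can pass or fail.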
Highlight Potential Risks and Gaps in Test Coverage
The tester should bring a quality-focused perspective by pointing out potential risks in the proposed solution, such as areas prone to defects or lacking clarity. They should also ensure that testability and validation are considered early, contributing to a shared understanding of what successful delivery looks like.
Stand-Ups
Communicate Testing Progress for Each User Story or Sub-task
The tester should clearly share what testing activities have been completed, are in progress, or are planned for each relevant story. This ensures the team has visibility into quality-related work and how it aligns with the current sprint goals.
Raise Risks and Quality Concerns Identified in the Last 24 Hours
The tester is expected to highlight any risks, blockers, or emerging issues that could impact the quality or timely delivery of a story. This includes discussing potential mitigation strategies with the team to address them early.
Promote a Team-wide Focus on Quality and Collaboration
The tester should encourage conversation around quality and Acceptance Criteria (ACs) across the team, not just from the test engineer’s perspective. This includes surfacing any new information affecting the ACs and engaging in task discussions—even if the tester isn’t directly assigned to them—to foster a shared responsibility for story completion.
Refinements
Adopt a Critical and Questioning Mindset
The tester should actively challenge assumptions, vague statements, and overconfidence in solutions that may induce a false sense of security. This includes probing the team's understanding—especially of members who did not attend the 3 Amigos session—to ensure shared clarity and alignment with the story’s intent.
Ensure Stories Align with the Definition of Ready (DoR)
The tester is expected to assess whether each story meets the team’s Definition of Ready from a quality and testability standpoint. If the DoR itself seems insufficient from a QA perspective, the tester should question it and propose improvements to strengthen it.
Contribute a Quality-Focused Perspective to Estimation and Planning
The tester should offer a critical view during estimation by considering both the fine detail of the story and its broader implications. They should assess whether potential risks, dependencies, and edge cases have been considered, ensuring quality is baked into the conversation before the story enters development.
Sprint Planning
Assess Testability and Clarity of Stories Before Commitment
The tester should verify that each selected story meets the Definition of Ready (DoR) from a quality perspective—clear requirements, agreed-upon Acceptance Criteria, and known testing scope. If any story lacks sufficient clarity or poses testability concerns, the tester should raise these before the team commits to it.
Contribute to Estimation by Highlighting Testing Effort and Risk
The tester should provide input on the testing complexity, scope, and risk level for each story. This includes considerations like automation effort, data setup, environment dependencies, or the need for exploratory testing—all of which can impact story sizing and planning.
Identify and Flag Stories That May Require Early QA Involvement or Test Planning
The tester should flag stories that will need early QA activities—such as reviewing designs, preparing test data, or collaborating on automation strategies. This ensures testing is not an afterthought and is instead woven into the sprint’s workflow from the start.
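As a hedged illustration of early QA involvement, the sketch below prepares test data ahead of development using a pytest fixture. The create_account helper, its fields, and the "premium" tier are assumptions made up for this example, not a real API.

import pytest

# Hypothetical helper; a real project might seed a database or call a test-data service.
def create_account(name, tier):
    return {"name": name, "tier": tier, "active": True}

@pytest.fixture
def premium_account():
    """Test data agreed and prepared before the story is code-complete."""
    return create_account("qa-premium-user", tier="premium")

def test_premium_account_is_active(premium_account):
    # Placeholder assertion; real checks would target the story's Acceptance Criteria.
    assert premium_account["active"] is True

Preparing fixtures like this during planning means validation can begin as soon as the first build is available, rather than waiting until the end of the sprint.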
Retrospective
Provide Insight into Quality-Related Challenges and Wins
The tester should share observations on what went well and what could be improved from a quality standpoint during the sprint. This can include successful test coverage, defect prevention, or any blockers that impacted test execution or story validation.
Raise Process Gaps or Inefficiencies That Affected Quality
The tester should highlight any process-related issues that affected quality, such as late definition of Acceptance Criteria, insufficient collaboration during 3 Amigos, or recurring problems in how defects were handled. This feedback supports continuous improvement in team practices.
Suggest Improvements to Strengthen Collaboration and Prevent Defects
The tester is expected to propose actionable, quality-focused improvements, such as earlier test involvement, more structured AC writing, pairing sessions, or enhancements to automation or test environments. The goal is to strengthen the team's overall quality mindset in the next sprint.
Throughout the Sprint
Continuously Validate Work Against Acceptance Criteria (ACs)
The tester should actively test and validate features as they are developed, ensuring that the implementation meets the agreed-upon ACs and behaves as expected from a user and business perspective. Any deviations or gaps should be communicated early.
Collaborate Closely with Developers and Business Reps
Throughout the sprint, the tester should maintain close collaboration with developers and product/business stakeholders to clarify requirements, discuss edge cases, and resolve issues quickly. This helps avoid silos and reduces the risk of late-stage surprises.
Identify, Log, and Communicate Risks or Defects Promptly
The tester should continuously monitor for quality risks, log defects clearly when needed, and escalate blockers or concerns that could affect the sprint goal. This includes being proactive about environment issues, missing data, or incomplete definitions. The tester should use the comment section of the relevant user story to explain the rationale behind identified risks and tag the stakeholders who need to know. In the end-of-day (EOD) update, these risks and concerns should be raised early so that senior testers and test leads have a clear picture of the challenges being faced.