Acceptance testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.[1]
In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.[2]
In software testing, the ISTQB defines acceptance testing as:
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether a system satisfies the acceptance criteria [3] and to enable the user, customers or other authorized entity to determine whether to accept the system.
— Standard Glossary of Terms used in Software Testing[4]: 2
The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met.[5]
Some forms of acceptance testing are user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD), and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity.[6]
Overview
Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test.[7] Each test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives, including correct implementation, error identification, quality verification, and other valued details.[7] The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software.[7]
UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must include both business logic tests as well as operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured the development is progressing in the right direction.[8]
- User acceptance test (UAT) criteria (in agile software development) are usually created by business customers and expressed in a business domain language. These are high-level tests to verify the completeness of a user story or stories 'played' during any sprint/iteration.
- Operational acceptance test (OAT) criteria (regardless of using agile, iterative, or sequential development) are defined in terms of functional and non-functional requirements; covering key quality attributes of functional stability, portability, and reliability.
Process
The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration.[9]
The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow, and the expected result following execution. The actual results are retained for comparison with the expected results.[9] If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass; if the threshold is breached, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
The anticipated result of a successful test execution:
- test cases are executed, using predetermined data
- actual results are recorded
- actual and expected results are compared, and
- test results are determined.
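The comparison and threshold logic described above can be illustrated with a minimal sketch; the test cases, expected results, and failure threshold below are invented for the example and do not come from any particular standard.

```python
# Illustrative sketch: compare actual results with expected results per test
# case, then apply a project-defined failure threshold to the whole suite.
# All data (cases, results, threshold) is hypothetical.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    expected: str  # expected result from the acceptance test procedure
    actual: str    # actual result recorded during execution

def case_passes(case: TestCase) -> bool:
    """A test case passes when the actual result matches the expected result."""
    return case.actual == case.expected

def suite_passes(cases: list[TestCase], max_failures: int) -> bool:
    """The suite passes if the number of failing cases stays within the agreed threshold."""
    failures = sum(1 for c in cases if not case_passes(c))
    return failures <= max_failures

cases = [
    TestCase("AT-001", expected="order confirmed", actual="order confirmed"),
    TestCase("AT-002", expected="invoice emailed", actual="invoice emailed"),
    TestCase("AT-003", expected="refund issued", actual="error 500"),
]
print(suite_passes(cases, max_failures=1))  # True: one failure, within threshold
```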
The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that, once it is completed and the acceptance criteria are met, the sponsors are expected to sign off on the product development/enhancement as satisfying the defined requirements (previously agreed between business and the product provider/developer).
User acceptance testing
User acceptance testing (UAT) is the process of verifying that a solution works for the user.[10] It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e., tests that the user accepts the solution); software vendors often refer to this as "Beta testing".
This testing should be undertaken by the intended end user or a subject-matter expert (SME), preferably the owner or client of the solution under test, who then provides a summary of the findings as confirmation to proceed after the trial or review. In software development, UAT, as one of the final stages of a project, often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios.[11]
The materials given to the tester must be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake.[12]
The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production.[13]
User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases.
UAT should be executed against test scenarios.[14][15] Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behavior. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes.[16]
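As a rough illustration of that structure, a scenario can be captured as an ordered set of logical "days", each owned by a different actor and described by goals rather than click-by-click steps; the journey content below is invented.

```python
# Rough sketch of a UAT scenario broken into logical "days"; each day changes
# actor or system, and steps stay goal-level rather than click-by-click.
scenario = {
    "name": "New customer places and tracks an order",
    "days": [
        {"day": 1, "actor": "customer",    "goal": "register and place an order"},
        {"day": 2, "actor": "back office", "goal": "approve and dispatch the order"},
        {"day": 3, "actor": "customer",    "goal": "track delivery and confirm receipt"},
    ],
}

for day in scenario["days"]:
    print(f"Day {day['day']}: {day['actor']} - {day['goal']}")
```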
In industry, a common UAT is a factory acceptance test (FAT). This test takes place before the installation of the equipment. Most of the time testers not only check that the equipment meets the specification but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test), and a final inspection.[17] The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.
Operational acceptance testing
Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment.[18]
Acceptance testing in extreme programming
Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase.[19]
The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress.[20]
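As a concrete illustration, the sketch below expresses acceptance tests for a hypothetical user story ("a registered user can withdraw funds up to their balance") as black-box checks of expected results; the `Account` class is only a stand-in for the system under test, not part of any cited framework.

```python
# Hypothetical black-box acceptance tests for the user story
# "a registered user can withdraw funds up to their balance".
# Account is a stand-in for the system under test.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> bool:
        if 0 < amount <= self.balance:
            self.balance -= amount
            return True
        return False  # withdrawal rejected

def test_withdraw_within_balance():
    account = Account(balance=100.0)
    assert account.withdraw(40.0) is True  # expected result: withdrawal accepted
    assert account.balance == 60.0         # expected result: balance reduced

def test_withdraw_over_balance_is_rejected():
    account = Account(balance=100.0)
    assert account.withdraw(150.0) is False  # expected result: rejected
    assert account.balance == 100.0          # expected result: balance unchanged

if __name__ == "__main__":
    test_withdraw_within_balance()
    test_withdraw_over_balance_is_rejected()
    print("acceptance tests passed")
```

Such tests can be re-run before each release, which is how the practice described above uses them as regression tests.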
Types of acceptance testing
Typical types of acceptance testing include the following:
- User acceptance testing
- This may include factory acceptance testing (FAT), i.e. the testing done by a vendor before the product or system is moved to its destination site, after which site acceptance testing (SAT) may be performed by the users at the site.[21]
- Operational acceptance testing
- Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures.[22]
- Contract and regulation acceptance testing
- In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards.[23]
- Factory acceptance testing
- Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software.[24]
- Alpha and beta testing
- Alpha testing takes place at developers' sites and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called "field testing".[25]
Acceptance criteria
According to the Project Management Institute, acceptance criteria are a "set of conditions that is required to be met before deliverables are accepted."[26] Requirements found in acceptance criteria for a given component of the system are usually very detailed.[27]
List of acceptance-testing frameworks
- Concordion, Specification by example (SbE) framework
- Concordion.NET, acceptance testing in .NET
- Cucumber, a behavior-driven development (BDD) acceptance test framework
- Capybara, Acceptance test framework for Ruby web applications
- Behat, BDD acceptance framework for PHP
- Lettuce, BDD acceptance framework for Python
- Cypress
- Fabasoft app.test for automated acceptance tests
- Framework for Integrated Test (Fit)
- Gauge (software), Test Automation Framework from Thoughtworks
- iMacros
- ItsNat, a Java Ajax web framework with built-in, server-based functional web testing capabilities
- Maveryx, a test automation framework for functional testing, regression testing, GUI testing, and data-driven and codeless testing of desktop and web applications
- Mocha, a popular web acceptance test framework based on JavaScript and Node.js
- Playwright (software)
- Ranorex
- Robot Framework
- Selenium
- Specification by example (Specs2)
- Watir
References
[edit]- ^ "BPTS - Is Business process testing the best name / description". SFIA. Retrieved February 18, 2023.
- ^ Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7.
- ^ "acceptance criteria". Innolution, LLC. June 10, 2019.
- ^ "Standard Glossary of Terms used in Software Testing, Version 3.2: All Terms" (PDF). ISTQB. Retrieved November 23, 2020.
- ^ "User Acceptance Testing (UAT) - Software Testing". GeeksforGeeks. November 24, 2022. Retrieved May 23, 2024.
- ^ ISO/IEC/IEEE International Standard - Systems and software engineering. ISO/IEC/IEEE. 2010. pp. 1–418.
- ^ a b c ISO/IEC/IEEE 29119-1:2013 Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions. ISO. 2013. Retrieved October 14, 2014.
- ^ ISO/IEC/IEEE 29119-4:2013 Software and Systems Engineering - Software Testing - Part 4: Test Techniques. ISO. 2013. Retrieved October 14, 2014.
- ^ a b ISO/IEC/IEEE 29119-2:2013 Software and Systems Engineering - Software Testing - Part 2: Test Processes. ISO. 2013. Retrieved May 21, 2014.
- ^ Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. Chapter 2. ISBN 9780132702621.
- ^ Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-Step Guide. BCS Learning & Development Limited. ISBN 9781780171678.
- ^ "2.6: Systems Testing". Engineering LibreTexts. August 2, 2021. Retrieved February 18, 2023.
- ^ Pusuluri, Nageshwar Rao (2006). Software Testing Concepts And Tools. Dreamtech Press. p. 62. ISBN 9788177227123.
- ^ "Get Reliable Usability and Avoid Risk with These Testing Scenarios". Panaya. April 25, 2022. Retrieved May 11, 2022.
- ^ Elazar, Eyal (April 23, 2018). "What is User Acceptance Testing (UAT) - The Full Process Explained". Panaya. Retrieved February 18, 2023.
- ^ Wysocka, Emilia M.; Page, Matthew; Snowden, James; Simpson, T. Ian (December 15, 2022). "Comparison of rule- and ordinary differential equation-based dynamic model of DARPP-32 signalling network". PeerJ. 10. Table 1: The specifications of the ODE and RB models can be broken down into elements, the number of which can be compared. doi:10.7717/peerj.14516. ISSN 2167-8359. PMC 9760030. PMID 36540795.
- ^ "Factory Acceptance Test (FAT)". TÜV Rheinland. Archived from the original on February 4, 2013. Retrieved September 18, 2012.
- ^ Vijay (February 2, 2018). "What is Acceptance Testing (A Complete Guide)". Software Testing Help. Retrieved February 18, 2023.
- ^ "Introduction to Acceptance/Customer Tests as Requirements Artifacts". agilemodeling.com. Agile Modeling. Retrieved December 9, 2013.
- ^ Wells, Don. "Acceptance Tests". Extremeprogramming.org. Retrieved September 20, 2011.
- ^ Prasad, Durga (March 29, 2012). "The Difference Between a FAT and a SAT". Kneat.com. Archived from the original on June 16, 2017. Retrieved July 27, 2016.
- ^ Turner, Paul (October 5, 2020). "Operational Readiness". Commissioning and Startup. Retrieved February 18, 2023.
- ^ Brosnan, Adeline (January 12, 2021). "Acceptance Testing in Information Technology Contracts". LegalVision. Archived from the original on May 3, 2025. Retrieved February 18, 2023.
- ^ "ISTQB Standard glossary of terms used in Software Testing". Archived from the original on November 5, 2018. Retrieved March 15, 2019.
- ^ Hamilton, Thomas (April 3, 2020). "Alpha Testing Vs Beta Testing – Difference Between Them". www.guru99.com. Retrieved February 18, 2023.
- ^ Project Management Institute 2021, §Glossary Section 3. Definitions.
- ^ Project Management Institute 2021, §2.6.2.1 Requirements.
Sources
- A guide to the project management body of knowledge (PMBOK guide) (7th ed.). Newtown Square, PA: Project Management Institute. 2021. ISBN 978-1-62825-664-2.
Further reading
- Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step by Step Guide. Swindon: BCS Learning and Development Ltd. ISBN 978-1-78017-167-8.
External links
- Acceptance Test Engineering Guide Archived December 23, 2017, at the Wayback Machine by Microsoft patterns & practices
- "Using Customer Tests to Drive Development" from Methods & Tools
Fundamentals
Definition and Purpose
Acceptance testing is the final phase of software testing, conducted to evaluate whether a system meets predefined business requirements, user needs, and acceptance criteria prior to deployment or operational use. This phase involves assessing the software as a complete entity to verify its readiness for production, often through simulated real-world scenarios that align with stakeholder expectations. As an incremental process throughout development or maintenance, it approves or rejects the system based on established benchmarks, ensuring alignment with contractual or operational specifications.[4]

The primary purpose of acceptance testing is to confirm the software's functionality, usability, performance, and compliance with external standards from an end-user viewpoint, thereby mitigating risks associated with deployment. Unlike unit testing, which verifies individual components in isolation by developers, or integration testing, which examines interactions between modules, acceptance testing adopts an external, holistic perspective to validate overall system behavior against user-centric requirements. This focus helps identify discrepancies between expected and actual outcomes, ensuring the software delivers value and avoids costly post-release fixes. It plays a key role in catching defects missed in earlier testing phases, reducing overall project risks.[5][4]

Key concepts in acceptance testing include its black-box approach, where testers evaluate inputs and outputs without knowledge of internal code or structure, emphasizing observable behavior over implementation details. Stakeholders such as customers, end-users, buyers, and acceptance managers play central roles, collaborating to define and apply criteria for acceptance or rejection, typically categorized into functionality, performance, interface quality, overall quality, security, and safety, each with quantifiable measures. Originating in the demonstration-oriented era of software testing during the late 1950s, when validation shifted from mere debugging to proving system adequacy, acceptance testing was initially formalized through standards like IEEE 829 in 1983 and has since evolved with the ISO/IEC/IEEE 29119 series (2013–2024), which provides the current international framework for test documentation, planning, execution, and reporting across testing phases, including recent updates such as part 5 on keyword-driven testing (2024) and guidance for AI systems testing (2025).[5][4][6][3]

Role in Software Development Lifecycle
Acceptance testing is positioned as the culminating phase of the software development lifecycle (SDLC), occurring after unit, integration, and system testing but before production deployment. This placement ensures that the software has been rigorously validated against technical specifications prior to end-user evaluation, serving as a critical gatekeeper that determines readiness for go-live by confirming alignment with business needs and user expectations.[7][8][9]

Within the SDLC, acceptance testing integrates closely with requirements gathering to maintain traceability from initial specifications through to validation, ensuring that the delivered product adheres to defined criteria and mitigates risks such as scope creep by clarifying and confirming stakeholder expectations early in the process. It also supports post-deployment maintenance by providing a baseline for ongoing validation against evolving requirements, helping to identify potential operational issues that could lead to deployment failures or extended support needs.[10][11][12]

The benefits of acceptance testing extend to enhanced quality assurance, greater stakeholder satisfaction, and improved cost efficiency, as it uncovers usability and functional gaps that earlier phases might overlook, thereby preventing expensive rework in production.[13] Effective acceptance testing presupposes the completion of preceding testing phases, with all defects from unit, integration, and system testing resolved to a predefined threshold. It further relies on strong traceability to requirements documents, such as through a requirements traceability matrix, which links test cases directly to original specifications to ensure comprehensive coverage and verifiability.[14][15]

Types of Acceptance Testing
User Acceptance Testing
User Acceptance Testing (UAT) is a type of acceptance testing performed by the intended users or their representatives to determine whether a system satisfies the specified user requirements, business processes, and expectations in a simulated operational environment.[16] This testing phase focuses on validating that the software aligns with end-user needs rather than internal technical specifications, often serving as the final validation before deployment.[17]

Key activities in UAT include scenario-based testing derived from use cases, where users execute predefined scripts to simulate real-world interactions; logging defects encountered during these scenarios; and providing formal sign-off upon successful validation.[7] These activities typically involve non-technical users, such as business stakeholders or end-users, who assess functionality from a practical perspective without deep involvement in code-level details.[18]

Unlike other testing types, such as system or integration testing, UAT emphasizes subjective user experience and usability over objective technical metrics like code coverage or performance benchmarks.[19] It relies on user-derived scripts from business use cases to evaluate fit-for-purpose outcomes, prioritizing qualitative feedback on workflow efficiency and intuitiveness.[20]

Best practices for UAT include setting up a dedicated staging environment that mirrors production to ensure realistic testing conditions, and providing training or guidance to participants to familiarize them with test scripts and tools.[7] This approach is particularly prevalent in regulated industries like finance, where it supports compliance with standards such as those from FINRA for settlement systems, and healthcare, for example in validation of electronic systems for clinical outcome assessments as outlined in best practice recommendations.[21][22]

Success in UAT is measured through metrics such as pass/fail ratios of test cases, which indicate the percentage of scenarios meeting acceptance criteria, and user feedback surveys assessing satisfaction with usability and functionality.[23] These quantitative and qualitative indicators help quantify overall readiness, with positive survey scores signaling effective user validation.[24]

Operational Acceptance Testing
Operational Acceptance Testing (OAT) is a form of acceptance testing that evaluates the operational readiness of a software system or service by verifying non-functional requirements related to reliability, recoverability, maintainability, and supportability. This testing confirms that the system can be effectively operated and supported in a production environment without causing disruptions, focusing on backend infrastructure and IT operations rather than user interactions. According to the International Software Testing Qualifications Board (ISTQB), OAT determines whether the organization responsible for operating the system—typically IT operations and systems administration staff—can accept it for live deployment.[25]

Key components of OAT encompass testing critical operational elements such as backup and restore procedures, disaster recovery mechanisms, security protocols, and monitoring and logging tools. These are assessed under simulated production conditions to replicate real-world stresses, including high loads and failure scenarios, ensuring the system maintains integrity during routine maintenance and unexpected events. In the context of ITIL 4's Service Validation and Testing practice, OAT integrates with broader service transition activities to validate that releases meet operational quality criteria before handover.[26]

Procedures for OAT typically include load and performance testing to evaluate scalability under expected volumes, failover simulations to confirm redundancy and quick recovery, and validation of maintenance processes like patching and configuration management. These activities are led by IT operations teams, using tools and environments that mirror production to identify potential issues in supportability and resource utilization. For instance, backup testing verifies data integrity and restoration times, while disaster recovery drills assess the ability to resume operations within predefined recovery time objectives.[25][26]

The importance of OAT lies in its role in mitigating risks of post-deployment downtime and operational failures, which can be costly for enterprise systems handling critical data or services. By adhering to standards like ITIL 4 (released in 2019 with ongoing updates), organizations ensure robust operational handover, reducing incident rates and enhancing service continuity. In high-stakes environments, such as financial or healthcare systems, OAT supports improved availability metrics through thorough pre-release validation.[27]

Outcomes of OAT include the creation of operational checklists, detailed handover documentation, and acceptance sign-off from operations teams, facilitating a smooth transition to live support. These deliverables provide support staff with clear guidelines for ongoing maintenance, monitoring thresholds, and escalation procedures, ensuring long-term system stability.[26]
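For illustration only, a backup-and-restore check of the kind described above might be scripted as in the following sketch, which compares measured restore time against an assumed recovery time objective (RTO) and verifies data integrity with a checksum; the `restore_backup` function is a placeholder, not a real tool's API.

```python
# Hypothetical OAT check: restore a backup, verify integrity, and confirm the
# restore completes within an assumed recovery time objective (RTO).
import hashlib
import time

RTO_SECONDS = 300  # assumed recovery time objective for this sketch

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def restore_backup(backup: bytes) -> bytes:
    """Placeholder for the real restore procedure."""
    time.sleep(0.1)  # stands in for actual restore work
    return backup

def operational_restore_check(backup: bytes, original_checksum: str) -> bool:
    start = time.monotonic()
    restored = restore_backup(backup)
    elapsed = time.monotonic() - start
    integrity_ok = checksum(restored) == original_checksum
    within_rto = elapsed <= RTO_SECONDS
    return integrity_ok and within_rto

data = b"production-like test data"
print(operational_restore_check(data, checksum(data)))  # True in this sketch
```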
Contract and Regulatory Acceptance Testing

Contract and Regulatory Acceptance Testing (CRAT) verifies that a software system meets the specific terms outlined in service-level agreements (SLAs), contractual obligations, or mandatory regulatory standards, ensuring legal and compliance adherence before deployment. This form of testing focuses on external enforceable requirements rather than internal operational fitness, distinguishing it from other acceptance variants by emphasizing verifiable fulfillment of predefined legal criteria. For instance, it confirms that the system adheres to contractual performance benchmarks, such as uptime guarantees or data handling protocols, and regulatory mandates like data privacy protections under the General Data Protection Regulation (GDPR).[4][28]

Key elements of CRAT include comprehensive audits for data privacy, detailed audit trails for traceability, and validation of performance metrics explicitly stated in contracts or regulations. These audits often involve third-party reviewers, such as independent auditors or notified bodies, to objectively assess compliance and mitigate liability risks. In regulatory contexts, testing ensures safeguards like access controls and encryption align with standards; for example, under GDPR, acceptance testing must incorporate data protection impact assessments, using anonymized test data to avoid processing real personal information without necessity. Similarly, HIPAA Security Rule compliance requires testing audit controls and contingency plans to protect electronic protected health information (ePHI), with addressable specifications evaluated for appropriateness. Performance benchmarks might include response times or error rates tied to penalty clauses in contracts, ensuring the system avoids financial repercussions for non-compliance.[29][30][4]

The process entails formal planning with quantifiable acceptance criteria, execution through structured test cases, and culminating in official sign-offs by stakeholders, often including legal representatives. This is prevalent in sectors like government and finance, where failure to comply can trigger penalties or contract termination; for example, post-2002 Sarbanes-Oxley Act (SOX) implementations require software systems supporting financial reporting to undergo acceptance testing for internal controls and auditability to prevent discrepancies in reported data. In payment processing, PCI-DSS compliance testing validates software against security standards for cardholder data, involving validated solutions lists maintained by the PCI Security Standards Council. Challenges arise from evolving regulations, such as the 2024 EU AI Act updates, which mandate risk assessments, pre-market conformity testing, and post-market monitoring for high-risk AI systems, including real-world testing plans and bias mitigation in datasets to ensure fundamental rights protection.[31][32][28]
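As a minimal sketch, a contractual performance benchmark such as the response-time clauses mentioned above might be checked like this; the assumed SLA (95th-percentile response time under 2 seconds) and the sample timings are invented.

```python
# Hypothetical contract-acceptance check: verify that the 95th-percentile
# response time stays within an assumed SLA limit of 2.0 seconds.
import statistics

SLA_P95_SECONDS = 2.0  # assumed contractual limit

def p95(samples: list[float]) -> float:
    # statistics.quantiles with n=20 returns cut points at 5%, 10%, ..., 95%
    return statistics.quantiles(samples, n=20)[-1]

def meets_sla(response_times: list[float]) -> bool:
    return p95(response_times) <= SLA_P95_SECONDS

measured = [0.4, 0.6, 0.5, 1.1, 0.7, 0.9, 1.8, 0.5, 0.6, 0.8,
            0.7, 1.2, 0.9, 0.6, 0.5, 1.0, 1.4, 0.8, 0.7, 1.9]
print(meets_sla(measured))  # True for this invented sample
```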
Alpha and Beta Testing

Alpha testing represents an internal phase of acceptance testing conducted within the developer's controlled environment, typically by quality assurance teams or internal users simulating end-user actions to identify major functional and usability issues before external release.[33] This process focuses on verifying that the software meets basic operational requirements in a lab-like setting, allowing developers to address defects such as crashes, interface inconsistencies, or performance bottlenecks without exposing the product to real-world variables.[34]

Beta testing, in contrast, involves external validation by a limited group of real users in their natural environments, aiming to collect diverse feedback on usability, compatibility, and remaining bugs that may not surface in controlled conditions.[35] Participants, often selected from early adopters or target audiences, interact with the software as they would in daily use, providing insights into real-world scenarios like hardware variations or network issues.[36] Feedback is commonly gathered through dedicated portals, surveys, or direct reports, enabling iterative improvements prior to full deployment.[37]

The primary differences lie in scope and execution: alpha testing is developer-led and confined to an in-house lab to catch foundational flaws, whereas beta testing is user-driven and field-based to validate broader applicability and gather subjective user experiences.[33][35] Alpha occurs earlier, emphasizing technical stability, while beta follows to assess user satisfaction and edge cases.[34] These practices originated from hardware testing conventions in the mid-20th century, such as IBM's use in the 1950s for product cycle checkpoints, but gained prominence in software development during the 1980s as personal computing expanded, with structured alpha and beta phases becoming standard for pre-release validation.[34][38][39]

Key metrics for both include the volume and severity of bug reports, defect resolution rates, and user satisfaction scores derived from feedback surveys, which inform the transition to comprehensive user acceptance testing upon successful completion.[37] For instance, a high defect burn-down rate during alpha signals readiness for beta, while beta satisfaction scores from feedback often indicate progression to full release.[40]

The Acceptance Testing Process
Planning and Preparation
Planning and preparation for acceptance testing involve defining the scope, assembling the necessary team, and developing detailed test plans and scripts to ensure alignment with project requirements. The scope is determined by reviewing and prioritizing requirements from earlier phases of the software development lifecycle, focusing on business objectives and user needs to avoid scope creep. According to the ISTQB Foundation Level Acceptance Testing syllabus, this step establishes the objectives and approach for testing, ensuring that only relevant functionalities are covered.[41]

Hands-on expertise in User Acceptance Testing (UAT) and Integration Acceptance Testing (IAT) planning is critical. This includes creating comprehensive test plans and realistic test scenarios. UAT scenarios validate that the system meets business requirements from an end-user perspective, while IAT scenarios focus on verifying that integrated components and interfaces function correctly together as an internal acceptance step before full UAT. Team assembly includes stakeholders such as end-users, business analysts, testers, and subject matter experts to foster collaboration; business analysts and testers work together to clarify requirements and identify potential gaps. The syllabus emphasizes this collaborative effort to enhance the quality of test preparation.[41]

Test plans outline the strategy, resources, schedule, and entry/exit criteria, while scripts detail specific test cases derived from acceptance criteria, often using traceable links to requirements for verification. Key preparation elements include conducting a risk assessment to prioritize testing efforts based on potential impacts to business processes, followed by creating representative test data that simulates real-world scenarios without compromising sensitive information. The ISTQB syllabus recommends risk-based testing to focus on high-impact areas, such as critical user workflows.[41]

Environment configuration is crucial, involving setups that mirror production conditions, including hardware, software, network configurations, and data volumes to ensure realistic validation; for instance, deploying virtualized servers or cloud-based replicas to replicate operational loads. Test data creation typically involves anonymized or synthetic datasets to support scenario-based testing, as outlined in standard practices for ensuring data integrity and compliance. Prerequisites for this phase include fully traceable requirements documented from prior SDLC stages, such as design and implementation, to enable bidirectional mapping between tests and specifications.[41]

Tools for planning often include test management software like Jira for tracking requirements and defects, and TestRail for organizing test cases and scripts, facilitating team collaboration and progress monitoring. Budget considerations encompass costs for user involvement, such as training sessions or compensated participation from business users, which can represent a significant portion of testing expenses due to their domain expertise. The ISTQB syllabus implies resource allocation for these activities to maintain project viability.[41]
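A minimal sketch of the risk-based prioritization described above: each planned test is scored by assumed likelihood and business impact, and execution is ordered by the resulting risk score; the scenarios and scores are illustrative, not drawn from the ISTQB syllabus.

```python
# Illustrative risk-based prioritization: rank planned acceptance tests by a
# simple risk score (likelihood x business impact). All values are invented.
from dataclasses import dataclass

@dataclass
class PlannedTest:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (critical business process)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

plan = [
    PlannedTest("Checkout payment flow", likelihood=4, impact=5),
    PlannedTest("Profile picture upload", likelihood=2, impact=1),
    PlannedTest("Monthly statement export", likelihood=3, impact=4),
]

for test in sorted(plan, key=lambda t: t.risk, reverse=True):
    print(f"{test.risk:>2}  {test.name}")
```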
Execution and Evaluation

Execution in acceptance testing involves hands-on running of predefined test cases to verify that the software meets the specified acceptance criteria. For User Acceptance Testing (UAT), this typically includes coordinating with business users who actively participate in executing scripted scenarios to simulate real-user interactions and validate business requirements. Integration Acceptance Testing (IAT) focuses on hands-on verification of integrated components and interfaces, often performed by internal teams before full UAT. Operational Acceptance Testing (OAT) employs simulated production setups to assess backup, recovery, and maintenance procedures.[42][43]

Defect management is a critical hands-on activity during execution. Defects are logged using specialized tools such as JIRA or Application Lifecycle Management (ALM) systems, prioritized based on severity and business impact, tracked throughout the resolution process, and verified through retesting after fixes. Defects are classified by severity—critical (system crash or data loss), major (core functionality impaired), minor (non-critical UI issues), or low (cosmetic flaws)—to prioritize resolution. This process enables iterative retesting, ensuring that resolved defects do not reoccur and that the system progressively aligns with requirements.[44][45][46]

Stakeholders, including product owners and quality assurance teams, play key roles: testers handle the hands-on execution, while reviewers assess business impacts and approve retests. Post-2020, remote execution has become prevalent, leveraging cloud platforms like AWS or Azure for distributed testing environments, which supports global teams and reduces on-site dependencies amid hybrid work trends. The execution phase duration varies depending on project complexity and test volume.[42][47][48]

Evaluation follows execution through pass/fail judgments against acceptance criteria, where tests passing indicate compliance and failures trigger defect analysis. Quantitative metrics, such as defect density (number of defects per thousand lines of code or function points), provide an objective measure of software quality, with lower densities signaling higher reliability. Severity classification guides these assessments, ensuring critical issues block release until resolved, while test summary reports aggregate results for stakeholder review.[49][46]
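The evaluation metrics above can be made concrete with a short sketch: defect density computed per thousand lines of code (KLOC) and a simple gate that blocks release while critical or major defects remain open; all figures are invented.

```python
# Illustrative evaluation metrics: defect density per KLOC and a severity-based
# release gate. All numbers are invented.

def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

def release_gate(open_defects: dict[str, int]) -> bool:
    """Block the release while critical or major defects remain open."""
    return open_defects.get("critical", 0) == 0 and open_defects.get("major", 0) == 0

print(defect_density(defect_count=18, lines_of_code=45_000))  # 0.4 defects/KLOC
print(release_gate({"critical": 0, "major": 1, "minor": 7}))  # False: a major defect is open
```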
Reporting and Closure

In the reporting phase of acceptance testing, teams generate comprehensive test summaries that outline the overall execution results, coverage achieved, and alignment with predefined criteria. These summaries often include defect reports detailing identified issues, their severity, and status, along with root cause analysis to uncover underlying factors such as requirement ambiguities or integration flaws, enabling preventive measures in future cycles.[50][51][52] Metrics dashboards are also compiled to visualize key performance indicators, such as pass/fail rates and test completion percentages, providing stakeholders with actionable insights into the testing outcomes.[53]

Closure activities formalize the end of the acceptance testing process through stakeholder sign-off, where key parties review reports and approve or reject the deliverables based on results. Lessons learned sessions are conducted to capture insights on process efficiencies, challenges encountered, and recommendations for improvement, fostering continuous enhancement in testing practices. Artifacts, including test scripts, logs, and reports, are then archived in a centralized repository to ensure traceability and compliance with organizational standards. These steps culminate in a go/no-go decision for deployment, evaluating whether the system meets readiness thresholds to proceed to production.[54][55][48][56]

The primary outcomes of reporting and closure include issuing a formal acceptance certificate upon successful validation, signifying that the software fulfills contractual or operational requirements, or documenting rejection with detailed remediation plans outlining necessary fixes and retesting timelines. This process integrates seamlessly with change management protocols, where acceptance outcomes inform controlled transitions, risk assessments, and updates to production environments to minimize disruptions.[57][58][59]

Modern approaches have shifted toward digital reporting via integrated dashboards, such as those in Azure DevOps, which provide capabilities for real-time test analytics, automated defect tracking, and collaborative visualizations, addressing limitations of traditional paper-based methods like delayed feedback and manual aggregation.[60][53]

Acceptance Criteria
Defining Effective Criteria
Effective acceptance criteria serve as the foundational standards that determine whether a software system meets stakeholder expectations during acceptance testing. These criteria must be clearly articulated to ensure unambiguous evaluation of the product's readiness for deployment or use. According to the ISTQB Certified Tester Acceptance Testing syllabus, well-written acceptance criteria are precise, measurable, and concise, focusing on the "what" of the requirements rather than the "how" of implementation.[41]

Criteria derived from user stories, business requirements, or regulatory needs provide a direct link to the project's objectives. For instance, functional aspects might include achieving a specified test coverage level, such as 95% of user scenarios, while non-functional aspects could specify performance thresholds like response times under 2 seconds under load. The ISTQB syllabus emphasizes that criteria should encompass both functional requirements and non-functional characteristics, such as usability and security, aligned with standards like ISO/IEC 25010.[41]

The development process for these criteria involves collaborative workshops and reviews with stakeholders, including business analysts, testers, and end-users, to foster shared understanding and alignment. This iterative approach, often using techniques like joint application design sessions, ensures criteria are realistic and comprehensive. Traceability matrices are essential tools in this process, mapping criteria back to requirements to verify coverage and forward to test cases for validation.[41]

Common pitfalls in defining criteria include vagueness, which can lead to interpretation disputes, scope creep, or failed tests requiring extensive rework. Such issues are best addressed by employing traceability matrices to maintain bidirectional links between requirements and tests, enabling early detection of gaps. The ISTQB guidelines recommend black-box test design techniques, such as equivalence partitioning, to derive criteria that support robust evaluation without implementation details.[41]

Examples and Templates
Practical examples of acceptance criteria illustrate how abstract principles translate into verifiable conditions for software features, ensuring alignment between user needs and system performance. These examples often draw from common domains like e-commerce and mobile applications to demonstrate measurable outcomes.[61] In an e-commerce login scenario, acceptance criteria might specify: "The user can log in with valid credentials in under 3 seconds." This ensures both functionality and performance meet user expectations under typical load.[61] Similarly, for a mobile app's offline mode, criteria could include: "The app handles offline conditions by queuing user actions locally and synchronizing them upon reconnection without data loss." This criterion verifies resilience in variable network environments.[62]

Templates provide reusable structures to standardize acceptance criteria, facilitating collaboration in behavior-driven development (BDD) and user acceptance testing (UAT). The Gherkin format, using Given-When-Then syntax, is a widely adopted template for BDD scenarios that can be automated with tools like Cucumber. For instance, a Gherkin template for the e-commerce login might read:

```gherkin
Feature: User Authentication

  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters valid username and password and clicks submit
    Then the user is redirected to the dashboard within 3 seconds
```

This structure promotes readable, executable specifications.[63]
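To show how a scenario like this can become executable, the sketch below gives Python step definitions in the style of the behave library; the helpers reached through the `context` object (an `app` client with `open`, `submit`, and `current_page`) are hypothetical application hooks assumed for the example, not part of behave itself.

```python
# Illustrative behave step definitions for the scenario above. The objects
# attached to `context` (context.app, context.login_page, context.elapsed_seconds)
# are hypothetical application hooks, not part of behave.
import time
from behave import given, when, then

@given("the user is on the login page")
def step_open_login_page(context):
    context.login_page = context.app.open("/login")  # assumed application client

@when("the user enters valid username and password and clicks submit")
def step_submit_credentials(context):
    start = time.monotonic()
    context.login_page.submit(username="demo", password="secret")
    context.elapsed_seconds = time.monotonic() - start

@then("the user is redirected to the dashboard within 3 seconds")
def step_assert_dashboard(context):
    assert context.app.current_page == "/dashboard"
    assert context.elapsed_seconds < 3.0
```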
For UAT sign-off, checklists serve as practical templates to confirm completion and stakeholder approval. A standard UAT checklist template includes items such as: verifying all test cases pass against defined criteria, documenting any defects and resolutions, obtaining sign-off from business stakeholders, and confirming the system meets exit criteria. These checklists ensure systematic closure of testing phases.[64]

Acceptance criteria vary by context, with business-oriented criteria focusing on user value and outcomes, while technical criteria emphasize system attributes like performance and security. Business criteria for an e-commerce checkout might state: "The user can complete a purchase and receive a confirmation email within 1 minute." In contrast, technical criteria could require: "The system processes transactions with 99.9% uptime and encrypts data using AES-256." This distinction allows tailored verification for different stakeholders.[65]

A sample traceability table links requirements to acceptance tests, ensuring comprehensive coverage. Below is an example in table format:

| Requirement ID | Description | Acceptance Criterion | Test Case ID | Status |
|---|---|---|---|---|
| REQ-001 | User login functionality | Login succeeds in <3s, 100% rate | TC-001 | Pass |
| REQ-002 | Offline action queuing | Actions queue and sync without loss | TC-002 | Pass |
| REQ-003 | Purchase confirmation | Email sent within 1min | TC-003 | Fail |
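A table like this can also be checked mechanically. The following sketch, using the same illustrative rows, flags requirements whose linked acceptance test has not passed, which is one simple way to drive remediation from a traceability matrix.

```python
# Illustrative coverage check over a traceability matrix: flag requirements
# whose linked acceptance test has not passed. Rows mirror the sample table.
rows = [
    {"req": "REQ-001", "test": "TC-001", "status": "Pass"},
    {"req": "REQ-002", "test": "TC-002", "status": "Pass"},
    {"req": "REQ-003", "test": "TC-003", "status": "Fail"},
]

not_yet_accepted = [r["req"] for r in rows if r["status"] != "Pass"]
print(not_yet_accepted)  # ['REQ-003'] -> remediation needed before sign-off
```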