Acceptance testing
Acceptance testing is a formal testing procedure conducted to determine whether a software system satisfies predefined acceptance criteria and meets user needs before deployment [1]. It represents the final phase of software testing, occurring after integration testing and system testing have been completed. Its primary purpose is to enable customers, end users, or other authorized parties to decide whether to accept the system for operational use. Unlike other forms of testing, which focus on finding defects, acceptance testing validates that the delivered solution actually solves the business problem it was designed to address.
Historical background
The origins of software testing can be traced to the early days of computing in the 1950s, when testing was essentially synonymous with debugging. According to the historical classification proposed by D. Gelperin and W.C. Hetzel in 1988, the period until 1956 was the debugging-oriented period, when no clear distinction existed between testing and debugging activities [2]. The demonstration-oriented period from 1957 to 1978 saw the emergence of testing as a separate discipline focused on demonstrating that software satisfied its requirements.
Charles L. Baker of the RAND Corporation is credited with making an early distinction between program testing and debugging in his 1957 review of Dan McCracken's book Digital Computer Programming [3]. Gerald M. Weinberg formed one of the first dedicated testing teams in 1958 and later contributed foundational texts on software testing methodology. In 1961, the book Computer Programming Fundamentals by Weinberg and Herbert Leeds included a dedicated chapter on software testing, which helped establish testing as a recognized phase of software development.
The period from 1979 to 1982 was the destruction-oriented period, when the explicit goal of testing shifted to finding errors rather than simply demonstrating correctness. The evaluation-oriented period of 1983 to 1987 introduced the concept of evaluating product quality throughout the software lifecycle. From 1988 to 2000, the prevention-oriented era emerged, with testing focused on demonstrating specification compliance, detecting faults, and preventing defects through improved development processes.
The publication of the Agile Manifesto in 2001 by seventeen software practitioners fundamentally changed how acceptance testing was integrated into development workflows [4]. Agile methodologies prioritized customer satisfaction, working software, and responsiveness to change, which led to acceptance testing being incorporated into iterative development cycles rather than performed as an isolated end-phase activity.
Concept and definition
According to the ISTQB glossary, acceptance testing is formal testing with respect to user needs, requirements, and business processes, conducted to determine whether a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether to accept the system [5]. This definition highlights several key aspects that distinguish acceptance testing from other testing types.
First, acceptance testing is formal in nature, meaning it follows documented procedures and produces evidence of testing outcomes. Second, it focuses on user needs and business processes rather than technical specifications alone. Third, it involves a deliberate acceptance decision by authorized parties rather than simply passing or failing test cases.
Acceptance criteria
Acceptance criteria are descriptions of the conditions that must be satisfied for a system or feature to be considered acceptable [6]. They translate user requirements into testable statements that can be objectively verified. For example, given a requirement such as allowing users to check out books from a library, an acceptance criterion might specify that the system marks the book as checked out and updates the user's borrowing record.
Well-written acceptance criteria share several characteristics. They should be specific and measurable, leaving no ambiguity about what constitutes success. They should be testable, meaning their satisfaction can be objectively verified. They should be traceable back to the original requirements they validate. Most importantly, they should be agreed upon by both the development team and business stakeholders before implementation begins.
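As an illustration, the library checkout criterion above can be expressed as an automated, objectively verifiable check. The following Python sketch is illustrative only; the Library class, its methods, and the identifiers are hypothetical stand-ins for a real system under test.

```python
# A minimal, hypothetical model of the library system under test.
class Library:
    def __init__(self):
        self.checked_out = {}        # book_id -> user_id
        self.borrowing_records = {}  # user_id -> list of book_ids

    def check_out(self, user_id, book_id):
        self.checked_out[book_id] = user_id
        self.borrowing_records.setdefault(user_id, []).append(book_id)


def test_checkout_marks_book_and_updates_record():
    """Acceptance criterion: the system marks the book as checked out
    and updates the user's borrowing record."""
    library = Library()
    library.check_out(user_id="u42", book_id="b7")
    assert library.checked_out["b7"] == "u42"
    assert "b7" in library.borrowing_records["u42"]
```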
Types of acceptance testing
Different forms of acceptance testing serve distinct purposes depending on who performs the testing and what aspects of the system are being validated [7].
User acceptance testing
User acceptance testing (UAT) is conducted by end users to verify that the system works as intended in business workflows. Testers perform realistic tasks using actual or production-like data to validate that the solution meets their operational needs. UAT is sometimes called end-user testing or beta testing and represents the final checkpoint before deployment to production. The emphasis is on usability and business value rather than technical correctness.
UAT requires careful planning to be effective. Test scenarios should reflect real user journeys rather than isolated technical test cases. Testers should be representative of the actual user population and should receive adequate training on the system being tested. The testing environment should closely replicate production conditions to ensure results are meaningful.
Operational acceptance testing
Operational acceptance testing (OAT) focuses on verifying the operational readiness of a system from a maintenance and support perspective [8]. It examines non-functional aspects such as backup and recovery procedures, disaster recovery capabilities, maintenance processes, and security controls. OAT ensures that the organization's technical staff can effectively operate and maintain the system after deployment.
Areas typically covered by OAT include installation and configuration procedures, system monitoring capabilities, performance under expected load conditions, backup and restore functionality, failover mechanisms, and security compliance. OAT is sometimes referred to as operational readiness testing or production acceptance testing.
Alpha and beta testing
Alpha testing is conducted internally within the development organization, typically by specialized testers or quality assurance staff in a development or testing environment [9]. Alpha testers should expect to encounter bugs, performance issues, crashes, and incomplete documentation. The purpose is to identify major defects before releasing the software to external users.
Beta testing follows alpha testing and involves releasing the software to a selected group of external users who test it in real-world conditions. Beta testers provide feedback on usability, performance, and any bugs they encounter. This testing typically occurs when the software is approximately 95 percent complete and usually runs for two to eight weeks before release.
Contract acceptance testing
Contract acceptance testing verifies that a delivered system meets the contractual terms, scope, and functionality agreed upon between the customer and vendor. It is a compliance-driven process, particularly important in outsourced development projects or when software is developed against a formal contract. Acceptance is contingent on satisfying the contractual obligations documented in the agreement.
Regulation acceptance testing
Regulation acceptance testing confirms that a system complies with relevant laws, regulations, and standards [10]. This type of testing is crucial for software in regulated industries such as healthcare, finance, and government, where non-compliance can result in legal penalties. Examples include validating compliance with data protection regulations, financial reporting requirements, or safety standards.
Business acceptance testing
Business acceptance testing validates that the software aligns with business objectives and solves the intended problems. It occurs after alpha and beta testing and focuses on whether the solution delivers the expected business value. Business acceptance testing ensures that the software supports the organization's strategic goals and can be integrated into existing business processes.
Acceptance testing process
The acceptance testing process follows a structured sequence of activities to ensure thorough validation [11].
Requirement analysis
The process begins with analyzing requirements to identify clear, measurable acceptance criteria. This involves reviewing business requirements, user stories, functional specifications, and any other documentation that describes what the system should do. The goal is to translate requirements into testable criteria that define what success looks like.
Test planning
Test planning determines who will perform testing, what functionality is in scope, what environments will be used, and what management processes will govern the testing effort. A test plan documents the approach, resources, schedule, and deliverables for the acceptance testing phase. Early stakeholder alignment during planning helps avoid misunderstandings later.
Test case design
Test cases are designed to verify each acceptance criterion. Unlike technical test cases, acceptance test cases typically describe user journeys or business scenarios that exercise multiple system functions. Test data must be prepared that represents realistic conditions, including edge cases and error scenarios. Both positive tests validating expected behavior and negative tests validating error handling should be included.
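For instance, a positive and a negative test for the library checkout criterion might look like the following Python sketch (using pytest; the Library class and its error behavior are hypothetical assumptions, not taken from the text):

```python
import pytest

# Hypothetical system under test: checking out an unavailable book fails.
class Library:
    def __init__(self):
        self.checked_out = {}  # book_id -> user_id

    def check_out(self, user_id, book_id):
        if book_id in self.checked_out:
            raise ValueError(f"book {book_id} is already checked out")
        self.checked_out[book_id] = user_id


def test_positive_available_book_can_be_checked_out():
    # Positive test: expected behavior on valid input.
    library = Library()
    library.check_out("u1", "b1")
    assert library.checked_out["b1"] == "u1"


def test_negative_double_checkout_is_rejected():
    # Negative test: error handling on an invalid action.
    library = Library()
    library.check_out("u1", "b1")
    with pytest.raises(ValueError):
        library.check_out("u2", "b1")
```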
Environment setup
The test environment should replicate the production environment as closely as possible to ensure meaningful results. This includes configuring servers, databases, network settings, and integrations to match production configurations. Environment inconsistencies are a common source of both testing problems and production issues.
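One simple way to guard against such inconsistencies is to diff the test configuration against production. A minimal sketch, assuming configurations are available as flat key-value maps (the keys and values below are hypothetical):

```python
# Hypothetical flat configurations for two environments.
PRODUCTION = {"db_engine": "postgres-15", "cache": "redis", "tls": True}
TEST_ENV   = {"db_engine": "postgres-15", "cache": "redis", "tls": False}

def config_drift(reference, candidate):
    """Return keys whose values differ between two configurations."""
    return {key: (reference.get(key), candidate.get(key))
            for key in reference.keys() | candidate.keys()
            if reference.get(key) != candidate.get(key)}

print(config_drift(PRODUCTION, TEST_ENV))  # {'tls': (True, False)}
```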
Test execution
During test execution, testers work through the test cases, documenting results and logging any defects discovered. Manual testing involves testers interacting directly with the system, while automated testing uses tools to execute predefined test scripts. Results are compared against the expected outcomes documented in the acceptance criteria.
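The comparison step can be as simple as the following sketch, which runs scripted cases and records pass/fail outcomes (the case names and actions are hypothetical):

```python
# Run each scripted case, compare actual to expected, record the result.
def run_acceptance_suite(cases):
    results = []
    for name, action, expected in cases:
        actual = action()
        status = "PASS" if actual == expected else "FAIL"
        results.append((name, status, expected, actual))
    return results

cases = [
    ("cart total",     lambda: 10.00 + 2.50, 12.50),   # passes
    ("price rounding", lambda: round(2.675, 2), 2.68), # fails: float representation
]
for name, status, expected, actual in run_acceptance_suite(cases):
    print(f"{status:4} {name}: expected {expected}, got {actual}")
```

A failing case like the second one would then be logged as a defect and handed to the defect management process described next.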
Defect management
Defects discovered during acceptance testing must be logged, categorized, and prioritized for resolution. A defect management process tracks each defect from discovery through resolution and verification. Critical defects typically must be resolved before acceptance can be granted, while minor defects may be accepted with documented workarounds.
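A minimal sketch of such tracking, assuming a simple severity scale and status lifecycle (all names below are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

@dataclass
class Defect:
    identifier: str
    summary: str
    severity: Severity
    status: str = "open"  # lifecycle: open -> resolved -> verified

def acceptance_blocked(defects):
    """Acceptance stays blocked while any critical defect is unverified."""
    return any(d.severity is Severity.CRITICAL and d.status != "verified"
               for d in defects)

log = [Defect("D-1", "data loss on save", Severity.CRITICAL),
       Defect("D-2", "typo on login page", Severity.MINOR)]
print(acceptance_blocked(log))  # True until D-1 is resolved and verified
```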
Sign-off and approval
The acceptance testing process concludes with formal sign-off by authorized stakeholders, indicating that the system meets the acceptance criteria and is approved for deployment. Sign-off documentation provides evidence that acceptance testing was performed and creates an audit trail for compliance purposes.
Acceptance test driven development
Acceptance test driven development (ATDD) is a development methodology that emphasizes collaboration among business customers, developers, and testers to write acceptance tests before implementing functionality [12]. ATDD shares practices with specification by example, behavior-driven development, and example-driven development.
The ATDD cycle
The ATDD cycle consists of four stages. The Discuss stage involves collaborative discussion between business stakeholders and the development team to understand user needs. The Distill stage translates these needs into specific acceptance tests and criteria. The Develop stage implements the functionality following a test-first approach. The Demo stage presents the completed functionality to business stakeholders for feedback.
Given-When-Then format
One widely used format for writing acceptance tests in ATDD is the Given-When-Then structure from the Gherkin language [13]. Given describes the initial state or preconditions, When specifies the action being performed, and Then describes the expected outcome. For example: Given items are in the shopping cart, When the user views the cart, Then the total price is displayed.
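The same three-part structure carries over directly into test code. The sketch below mirrors the shopping cart example as a plain Python test; the ShoppingCart class is a hypothetical stand-in for the system under test:

```python
class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_displays_total_price():
    # Given items are in the shopping cart
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # When the user views the cart
    displayed_total = cart.total()
    # Then the total price is displayed
    assert displayed_total == 15.00
```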
ATDD tools and frameworks
Various tools support ATDD, including Cucumber, Robot Framework, FitNesse, and Selenium. These tools enable teams to write acceptance tests in human-readable formats that can be automated and executed as part of continuous integration pipelines.
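As a sketch of how such tools bind human-readable scenarios to executable code, the example below uses behave, a Python BDD framework in the same family as Cucumber (behave itself is not among the tools named above and is used purely for illustration):

```python
# steps/cart_steps.py -- behave step definitions.
# The matching Gherkin feature file (e.g. features/cart.feature) would read:
#   Feature: Shopping cart
#     Scenario: Viewing the cart shows the total price
#       Given items are in the shopping cart
#       When the user views the cart
#       Then the total price is displayed

from behave import given, when, then

@given("items are in the shopping cart")
def step_items_in_cart(context):
    context.cart = [("book", 12.50), ("pen", 2.50)]

@when("the user views the cart")
def step_view_cart(context):
    context.total = sum(price for _, price in context.cart)

@then("the total price is displayed")
def step_total_displayed(context):
    assert context.total == 15.00
```

Running the behave command in the project root executes the scenario and reports each step's outcome, which is what allows such acceptance tests to run in a continuous integration pipeline.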
Acceptance testing in different methodologies
The timing and approach to acceptance testing varies depending on the project management methodology being used [14].
Waterfall approach
In waterfall software development, acceptance testing takes place at the final stage, immediately before deployment. Testing can only begin after the system is considered code complete, meaning business requirements have been met, the code base is finished, quality assurance activities have been completed, and previously identified bugs have been resolved.
Agile approach
In agile methodologies, particularly extreme programming, acceptance testing is integrated into each iteration or sprint. The customer specifies acceptance criteria for each user story, and these tests are executed as part of completing the story. Continuous feedback allows issues to be identified and addressed early rather than accumulating until a final testing phase.
Advantages of acceptance testing
Acceptance testing provides several important benefits for software projects [15]:
- Validates that the system solves the business problem it was built to address
- Involves end users directly in quality validation, providing valuable feedback
- Identifies gaps between delivered functionality and user expectations
- Catches defects before they reach production, reducing post-deployment issues
- Builds stakeholder confidence through formal verification and sign-off
- Reduces risk of costly production failures and rework
- Creates documentation of testing activities for audit and compliance purposes
- Helps identify future requirements through user feedback during testing
- Ensures alignment between technical implementation and business objectives
Limitations of acceptance testing
Acceptance testing also has inherent limitations that must be recognized [16]:
- Business requirements are often unclear or change during development, making test design difficult
- Limited resources, including time, budget, and skilled personnel, can constrain testing thoroughness
- Coordinating participation from busy end users can be logistically challenging
- Creating test environments that accurately replicate production is complex and costly
- The subjective nature of user acceptance can introduce variability in evaluation
- Users must possess sufficient product knowledge to participate effectively in testing
- Technical test cases may be difficult for non-technical users to understand and execute
- Late discovery of critical defects can cause schedule delays and budget overruns
- Not all defect types can be discovered through acceptance testing alone
References
- ISO/IEC/IEEE 29119-1:2022, Software and systems engineering - Software testing - Part 1: General concepts.
- ISTQB (2023), Certified Tester Foundation Level Syllabus, Version 4.0.
- Crispin L., Gregory J. (2009), Agile Testing: A Practical Guide for Testers and Agile Teams, Addison-Wesley.
- Black R. (2017), Pragmatic Software Testing, John Wiley & Sons.
- Gelperin D., Hetzel W.C. (1988), The Growth of Software Testing, Communications of the ACM, Vol. 31, No. 6.
Footnotes
1. ISTQB (2023), p. 45
2. Gelperin D., Hetzel W.C. (1988), pp. 687-695
3. Black R. (2017), pp. 15-20
4. Crispin L., Gregory J. (2009), pp. 3-12
5. ISTQB (2023), p. 46
6. Crispin L., Gregory J. (2009), pp. 95-102
7. Black R. (2017), pp. 245-260
8. ISO/IEC/IEEE 29119-1:2022, Section 7.3
9. Black R. (2017), pp. 255-258
10. ISO/IEC/IEEE 29119-1:2022, Section 7.4
11. ISTQB (2023), pp. 47-50
12. Crispin L., Gregory J. (2009), pp. 185-195
13. Crispin L., Gregory J. (2009), pp. 196-200
14. Black R. (2017), pp. 260-265
15. ISTQB (2023), pp. 48-49
16. Black R. (2017), pp. 265-270
Author: Sławomir Wawak