Manual testing
- Compare with Test automation.
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, whereby they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.
Overview
A key step in the process is testing the software for correct behavior prior to release to end users.
For small-scale engineering efforts (including prototypes), ad hoc testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely, exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is to gain an intuitive insight into how it feels to use the application.
Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps.[1]
- Choose a high level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired.
- Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes (an illustrative sketch of one such test case follows this list).
- Assign the test cases to testers, who manually follow the steps and record the results.
- Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.
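As an illustration of such a written test case, the following is a minimal sketch in Python; the login feature, identifiers, and steps are hypothetical, and in practice teams often record this in a spreadsheet or test-management tool rather than in code.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A written manual test case: ordered steps for the tester plus expected results."""
    case_id: str
    title: str
    preconditions: list
    steps: list               # (action, expected result) pairs followed in order
    actual_result: str = ""   # filled in by the tester during execution
    status: str = "Not run"   # set to "Pass" or "Fail" after execution

# Hypothetical example for a login screen; names and steps are illustrative only.
tc_login_001 = TestCase(
    case_id="TC-LOGIN-001",
    title="Valid user can log in",
    preconditions=["Test account 'demo_user' exists", "Application shows the login screen"],
    steps=[
        ("Enter 'demo_user' in the username field", "Text appears in the field"),
        ("Enter the valid password", "Input is masked"),
        ("Click the 'Log in' button", "The dashboard page is displayed"),
    ],
)

if __name__ == "__main__":
    for action, expected in tc_login_001.steps:
        print(f"Step: {action}\n  Expect: {expected}")
```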
A rigorous test case based approach is often traditional for large software engineering projects that follow a Waterfall model.[2] However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test case based testing.[3]
Testing can be performed through black-, white- or grey-box testing. In white-box testing the tester is concerned with the execution of the statements through the source code. In black-box testing the software is run to check for defects, with less concern for how the input is processed; black-box testers do not have access to the source code. Grey-box testing is concerned with running the software while having an understanding of the source code and algorithms.[4]
Static and dynamic testing approaches may also be used. Dynamic testing involves running the software. Static testing includes verifying requirements, the syntax of code, and any other activities that do not involve actually running the program.
Testing can be further divided into functional and non-functional testing. In functional testing the tester checks calculations, links on a page, or any other field for which, given an input, a particular output is expected. Non-functional testing includes testing performance, compatibility and fitness of the system under test, its security and usability, among other things.
Stages
There are several stages. They are:
- Unit testing
- This initial stage of testing is normally carried out by the developer who wrote the code, and sometimes by a peer, using the white-box testing technique.
- Integration testing
- This stage is carried out in one of two modes: as a complete package or as an increment to the earlier package. Black-box testing is used most of the time, although a combination of black- and white-box testing is sometimes also used at this stage.
- System testing
- In this stage the software is tested from all possible dimensions, for all intended purposes and platforms. Black-box testing is normally used at this stage.
- User acceptance testing
- This stage is carried out in order to obtain customer sign-off on the finished product. A 'pass' at this stage also indicates that the customer has accepted the software and that it is ready for use.
- Release or deployment testing
- An onsite team will go to the customer site to install the system in the customer-configured environment and will check the following points:
- Whether the installer (for example, SetUp.exe) runs correctly
- Whether the installation screens are easy to follow
- How much space the system occupies on the hard disk
- Whether the system is completely removed when the user opts to uninstall it
Advantages
- Low-cost operation, as no automation tools or licenses are required
- Effective at catching defects, such as visual and usability issues, that automated checks can miss
- Human testers can observe and judge nuances of behaviour better than automated tools
Comparison to automated testing
Test automation may be able to reduce or eliminate the cost of actual testing.[5] A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results.
Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice.
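To make the idea of such a test program concrete, the following is a minimal sketch using Python's unittest module; the library function being tested (parse_port) and its cases are hypothetical.

```python
import unittest

def parse_port(value: str) -> int:
    """Hypothetical library function under test: parse a TCP port number from a string."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTest(unittest.TestCase):
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_port_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_non_numeric_input_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("not-a-port")

if __name__ == "__main__":
    unittest.main()
```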
Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly.
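A rough sketch of this record-and-playback idea is shown below, assuming the third-party pyautogui library; the recorded events, screen coordinates, and baseline image name are hypothetical, and a real framework would capture events rather than hard-code them and would use a more tolerant image comparison.

```python
import pyautogui

# A previously "recorded" session: (action, argument) pairs.
recorded_events = [
    ("click", (200, 150)),          # click where the 'File' menu was when recorded
    ("write", "quarterly report"),  # type into the focused field
    ("press", "enter"),
]

def replay(events):
    """Replay the recorded keystrokes and mouse gestures in order."""
    for action, arg in events:
        if action == "click":
            pyautogui.click(*arg)
        elif action == "write":
            pyautogui.write(arg, interval=0.05)
        elif action == "press":
            pyautogui.press(arg)

if __name__ == "__main__":
    replay(recorded_events)
    # Capture the resulting screen; a regression check would compare this image
    # against a baseline. If a button has moved or been relabeled since the
    # recording, the replay or the comparison fails even when the application
    # itself may still be correct.
    pyautogui.screenshot("after_replay.png")
```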
References
[edit]- ^ ANSI/IEEE 829-1983 IEEE Standard for Software Test Documentation
- ^ Craig, Rick David; Stefan P. Jaskiel (2002). Systematic Software Testing. Artech House. p. 7. ISBN 1-58053-508-9.
- ^ Itkonen, Juha; V. Mäntylä, Mika; Lassenius, Casper (2007). "Defect Detection Efficiency: Test Case Based vs. Exploratory Testing" (PDF). First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007). pp. 61–70. doi:10.1109/ESEM.2007.56. ISBN 978-0-7695-2886-1. S2CID 5178731. Archived from the original (PDF) on October 13, 2016. Retrieved January 17, 2009.
- ^ Hamilton, Thomas (May 23, 2020). "What is Grey Box Testing? Techniques, Example". www.guru99.com. Retrieved August 7, 2022.
- ^ Atlassian. "Test Automation". Atlassian. Retrieved August 7, 2022.
Manual testing
Fundamentals
Definition and Scope
Manual testing is the process of executing test cases manually by human testers without the use of automation tools or scripts, primarily to verify that software applications function as intended, meet user requirements, and adhere to specified standards of usability and compliance.[7] In this approach, testers simulate end-user interactions with the software, observing behaviors, inputs, and outputs to identify defects, inconsistencies, or deviations from expected results. This method relies on human observation and decision-making to assess qualitative aspects that automated processes might overlook, such as intuitive user interfaces or contextual error handling.[7]
The scope of manual testing encompasses a range of activities focused on dynamic execution rather than static analysis, including functional testing to confirm that individual features operate correctly, exploratory testing where testers dynamically design and adapt tests based on real-time discoveries, and visual checks to ensure aesthetic and layout consistency across interfaces. It explicitly excludes non-testing tasks like code reviews or static inspections, which do not involve running the software. Manual testing boundaries are defined by the need for human intervention in scenarios requiring subjective evaluation, such as ad-hoc scenarios or one-off validations, but it integrates within the broader software testing lifecycle as a foundational verification step.[7]
Central to manual testing are key concepts like test cases, which consist of predefined sequences of steps, preconditions, inputs, expected outcomes, and postconditions to guide systematic verification.[8] Human judgment plays a pivotal role, enabling testers to detect subtle defects, such as edge cases or usability issues, that rigid scripts cannot capture, thereby enhancing overall software quality through intuitive and adaptive assessment. Historically, manual testing emerged as the dominant method in software engineering during the 1950s through the 1970s, when testing equated to manual debugging and demonstration of functionality, before the advent of automation tools in the 1980s introduced scripted execution options.[9]
Role in Software Testing
Manual testing plays a pivotal role in the Software Development Life Cycle (SDLC) by verifying software functionality and user experience after requirements gathering and design phases, ensuring alignment with specified needs before deployment. In the waterfall model, manual testing follows a linear sequence post-development, involving comprehensive execution to validate built features against predefined test cases.[10] In contrast, agile methodologies integrate manual testing iteratively within sprints, allowing testers to collaborate closely with developers for ongoing validation and rapid feedback loops.[11] This positioning enables early defect detection, reducing rework costs later in the process.[10]
Prerequisites for effective manual testing include well-defined software requirements, which serve as the foundation for deriving test cases, and detailed test plans outlining objectives, scope, and execution strategies.[11] Additionally, a stable test environment must be established, replicating production conditions to simulate real-world usage without introducing external variables.[12] These elements assume testers have foundational knowledge of the application's requirements, enabling focused validation rather than exploratory guesswork.[13]
As a complement to automated testing, manual testing addresses inherent blind spots in scripted automation, such as dynamic user interface changes, subjective usability assessments, and rare edge cases that demand human intuition and adaptability.[10] For instance, while automated tests excel at repetitive regression checks, manual efforts uncover intuitive issues like navigation intuitiveness or unexpected interactions in evolving features.[14] This synergy enhances overall test coverage, with manual testing often serving as the initial exploratory layer to inform subsequent automation priorities.[15]
In terms of involvement, manual testing accounts for a significant portion of total testing effort in early-stage projects, where exploratory and ad-hoc validation predominate, but this proportion evolves downward with project maturity as automation handles routine verifications. Such metrics highlight manual testing's foundational contribution to quality assurance, particularly in contexts with high variability or limited prior data.[16]
Methods and Techniques
Types of Manual Testing
Manual testing encompasses several distinct variants, each tailored to specific objectives in software quality assurance. These types differ in their approach, level of structure, and focus, allowing testers to address various aspects of software behavior and user interaction without relying on automation tools. The primary categories include black-box testing, white-box testing (in its manual form), exploratory testing, usability testing, and ad-hoc testing, each applied based on project needs such as functional validation, structural review, or rapid defect detection.[17]
Black-box testing treats the software as an opaque entity, focusing solely on inputs and expected outputs without any knowledge of the internal code structure or implementation details. This approach verifies whether the software meets specified requirements by simulating user interactions and checking results against predefined criteria. It is particularly useful for validating functional specifications from an end-user perspective. Key techniques within black-box testing include equivalence partitioning, which divides input data into classes expected to exhibit similar behavior, thereby reducing the number of test cases while maintaining coverage, and boundary value analysis, which targets the edges of input ranges where errors are most likely to occur, such as minimum and maximum values. These methods enhance efficiency in testing large input domains without exhaustive enumeration.[18][19]
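To make these two techniques concrete, the following is a minimal sketch in Python of how a tester might derive inputs for a hypothetical field that accepts ages from 18 to 65 inclusive; the field, range, and values are illustrative assumptions, and in manual testing this derivation is normally done by hand.

```python
MIN_AGE, MAX_AGE = 18, 65  # hypothetical valid range for an age input field

# Equivalence partitioning: one representative value per class of "similar" inputs.
partitions = {
    "below valid range (invalid)": 10,
    "within valid range (valid)": 40,
    "above valid range (invalid)": 80,
}

# Boundary value analysis: values at and adjacent to each edge of the range.
boundary_values = [MIN_AGE - 1, MIN_AGE, MIN_AGE + 1,
                   MAX_AGE - 1, MAX_AGE, MAX_AGE + 1]

def is_accepted(age: int) -> bool:
    """Expected behaviour of the system under test for a given age."""
    return MIN_AGE <= age <= MAX_AGE

if __name__ == "__main__":
    for label, value in partitions.items():
        print(f"Partition {label!r}: input {value}, expect accepted={is_accepted(value)}")
    for value in boundary_values:
        print(f"Boundary input {value}: expect accepted={is_accepted(value)}")
```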
White-box testing, when performed manually, involves examining the internal logic and structure of the software to ensure comprehensive path coverage, though it lacks the automation typically associated with code execution analysis. Testers manually trace code paths, decisions, and data flows to identify potential issues like unreachable branches or logical errors, often using techniques such as decision tables to map combinations of conditions and actions. This manual variant is limited to inspection-based checks rather than dynamic execution, making it suitable for early-stage reviews where developers and testers collaborate to verify structural integrity without tools. It is applied when understanding code flow is essential but automation resources are unavailable.[20][21][22]
Exploratory testing is an unscripted, improvisational approach where testers dynamically design and execute tests in real-time, leveraging their experience to uncover defects that scripted methods might miss. It emphasizes learning about the software while testing, adapting to new findings to probe deeper into potential risks. Sessions are typically time-boxed, lasting 30 to 120 minutes, to maintain focus and productivity, often structured under session-based test management with a charter outlining objectives. This type is ideal for complex or evolving applications where requirements are unclear or changing rapidly.[23][24]
Usability testing evaluates the intuitiveness and user-friendliness of the software interface through direct observation of users performing realistic tasks, focusing on how effectively and efficiently they interact with the system. Testers observe participants as they attempt to complete scenarios, measuring metrics like task success rates and completion times to identify friction points in navigation or design. This manual process aligns with standards defining usability as the extent to which a product can be used by specified users to achieve goals with effectiveness, efficiency, and satisfaction in a given context. It is essential for consumer-facing applications to ensure positive user experiences.[25][26]
Ad-hoc testing involves informal, unstructured exploration of the software to quickly spot obvious issues, without following test plans or cases, relying instead on the tester's intuition and familiarity. It serves as a rapid sanity check, often used for smoke tests to confirm basic functionality before deeper verification. While not systematic, this approach is valuable in time-constrained environments for initial defect detection and can reveal unexpected problems that formal methods overlook.[27][28]
Execution Stages
The execution of manual testing follows a structured process to ensure systematic validation of software functionality without automation tools. This process, aligned with established standards like the ISTQB test process model, typically encompasses planning, preparation, execution, and reporting and closure phases, allowing testers to methodically identify defects and verify requirements.[5]
In the planning phase, testers define testing objectives based on project requirements and select test cases prioritized by risk assessment to focus efforts on high-impact areas. A key artifact created here is the traceability matrix, which links requirements to corresponding test cases, ensuring comprehensive coverage and facilitating impact analysis if changes occur. This phase typically accounts for about 20% of the total testing effort, emphasizing upfront strategy to guide subsequent activities.[29][30]
Preparation involves developing detailed test scripts that outline steps, expected outcomes, and preconditions for each test case, alongside setting up test data, environments, and allocating roles among testers to simulate real-world conditions. Tools and resources are configured to support manual execution, such as preparing checklists or spreadsheets for tracking progress. This stage, combined with planning, often represents around 30-35% of the effort, building a solid foundation for reliable testing.[5][30]
During execution, testers manually perform the test cases, observing actual results against expected ones and logging any defects encountered, including details on severity (impact on system functionality) and priority (urgency of resolution). Defects are reported using bug tracking tools like Jira, where manual entry captures screenshots, steps to reproduce, and environmental details for developer triage. This core phase consumes approximately 50% of the testing effort, as it directly uncovers issues through hands-on interaction, including ad-hoc exploratory techniques where applicable to probe unscripted scenarios.[31][32][30]
Finally, reporting and closure entail analyzing execution results to generate defect reports, metrics on coverage and pass/fail rates, and overall test summaries for stakeholders. Retrospectives are conducted to capture lessons learned, such as process improvements or recurring defect patterns, leading to test closure activities like archiving artifacts and releasing resources. This phase, roughly 15-20% of the effort, ensures accountability and informs future testing cycles.[5][30]
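As a minimal sketch of the traceability matrix mentioned in the planning phase above, the following Python snippet links hypothetical requirement identifiers to hypothetical test case identifiers and reports coverage gaps and execution status; real projects usually maintain this in a spreadsheet or test-management tool.

```python
# Requirement -> test case identifiers (all identifiers are hypothetical).
traceability = {
    "REQ-001 User can log in":         ["TC-101", "TC-102"],
    "REQ-002 Password reset by email": ["TC-110"],
    "REQ-003 Session times out":       [],  # no test case yet -> coverage gap
}

# Execution results recorded by the testers.
executed = {"TC-101": "Pass", "TC-102": "Fail", "TC-110": "Pass"}

if __name__ == "__main__":
    for requirement, cases in traceability.items():
        if not cases:
            print(f"{requirement}: NOT COVERED")
            continue
        results = ", ".join(f"{c}={executed.get(c, 'Not run')}" for c in cases)
        print(f"{requirement}: {results}")
```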
Evaluation
Advantages
Manual testing leverages human intuition to detect subtle issues that automated scripts often overlook, such as visual inconsistencies, usability flaws, and unexpected user behaviors in complex interfaces.[33] This exploratory approach allows testers to apply creativity and judgment, uncovering defects through ad-hoc paths and contextual insights that rigid automation might miss, thereby reducing false negatives in intricate user interfaces.[34] For instance, testers can identify aesthetic discrepancies or intuitive navigation problems by simulating real-world interactions, ensuring a more holistic evaluation of software quality.[35]
A key strength of manual testing lies in its flexibility, particularly in agile environments where requirements evolve rapidly.[33] Unlike scripted automation, which requires reprogramming for changes, manual methods enable testers to adapt test scenarios on the fly without additional infrastructure, supporting iterative development cycles and quick feedback loops.[34] This adaptability is especially valuable for handling ambiguous or shifting specifications, allowing immediate incorporation of new features or modifications into the testing routine.[33]
For small-scale projects, prototypes, or one-off tests, manual testing offers cost-effectiveness by eliminating the need for expensive automation tools and setups.[34] With lower initial and short-term costs, it suits resource-constrained teams, providing rapid results and straightforward execution without the overhead of scripting or maintenance.[35] This makes it ideal for early-stage validation where thorough human oversight can be achieved economically.
Manual testing ensures comprehensive coverage by enabling exploration of unplanned execution paths, which enhances defect detection in dynamic applications.[33] Testers can deviate from predefined scripts to probe edge cases or interdependencies in complex UIs, achieving broader test scope and minimizing overlooked vulnerabilities.[34]
By mimicking end-user behaviors, manual testing simulates real-world usage scenarios, uncovering usability defects early in the development process.[35] This human-centered approach replicates how actual users interact with the software, revealing practical issues like accessibility barriers or workflow inefficiencies that scripted tests cannot fully capture.[33] As a result, it contributes to more user-friendly products by addressing experiential flaws proactively.[34]
Limitations
Manual testing is inherently time-intensive, as executing repetitive test cases can take hours or even days per testing cycle, particularly for regression testing in large-scale applications. This process scales poorly for extensive software regressions, where the volume of tests grows exponentially with project complexity, leading to prolonged development timelines.[36][37]
The approach is also prone to human error due to its subjective nature, where testers' interpretations and judgments can introduce inconsistencies in test execution and results. Fatigue from prolonged sessions further diminishes accuracy, as sustained manual effort over extended periods increases the likelihood of overlooking defects or applying uneven scrutiny across test cases.[36][37][38]
Scalability presents significant challenges, making manual testing unsuitable for high-volume scenarios such as load simulation or parallel testing across numerous environments, which require specialized tools to handle efficiently without human intervention. In growing projects, the manual execution of thousands of test cases becomes unsustainable, limiting the ability to keep pace with rapid development iterations.[38][39]
Over time, the ongoing labor expenses associated with manual testing often surpass the initial setup costs of automation, especially for frequent test runs in iterative development cycles. Skilled testers must be continually engaged for each execution, accumulating high personnel costs without the one-time investment yielding reusable benefits.[40][36]
Finally, manual testing offers limited reusability, as test cases must be re-executed from scratch for every cycle or software update, unlike automated scripts that can be run repeatedly with minimal adaptation. This necessitates rewriting or redeveloping cases for new versions, further exacerbating time and resource demands.[37][36]
Comparison with Automated Testing
Key Differences
Manual testing and automated testing represent two distinct paradigms in software quality assurance, differing fundamentally in their execution mechanisms and applicability. Manual testing relies on human testers to execute test cases through direct interaction with the software, leveraging intuition, creativity, and contextual judgment to explore and validate functionality. In contrast, automated testing employs scripts and specialized tools, such as Selenium or Appium, to perform predefined actions with minimal human intervention, emphasizing repeatability and precision in test execution.[11][41]
Regarding speed and efficiency, manual testing is inherently slower, particularly for repetitive tasks like regression testing, where human execution can take significantly longer (often 70% more time than automated counterparts[41]), making it less suitable for large-scale or frequent validations. Automated testing, however, excels in efficiency for high-volume scenarios, enabling rapid execution of extensive test suites and integration into continuous integration/continuous deployment (CI/CD) pipelines for immediate feedback. While manual testing shines in ad-hoc and exploratory scenarios requiring on-the-fly adaptations, automated testing's rigidity limits its flexibility in dynamic, unscripted environments.[34][42][11]
The cost models of these approaches also diverge notably. Manual testing involves low upfront costs, as it requires no specialized tools or scripting, but incurs high ongoing expenses due to the need for skilled human resources over extended periods, especially in projects demanding repeated testing cycles. Automated testing demands substantial initial investment in tool development, script creation, and maintenance, yet it proves more economical in the long term for mature projects by reducing labor-intensive repetitions and enabling scalable operations. For small-scale or one-off tests, manual methods remain cost-effective, whereas automation's return on investment grows with project complexity and duration.[34][41][42]
In terms of coverage types, manual testing is particularly strong for exploratory, usability, and user experience assessments, where human perception can uncover intuitive issues like interface appeal or accessibility that scripted tests might overlook. Automated testing, conversely, is superior for functional and regression coverage, systematically verifying vast arrays of inputs and outputs across multiple iterations to ensure consistency in core behaviors. This complementary coverage profile means manual efforts often address nuanced, context-dependent areas, while automation handles exhaustive, rule-based validations.[11][34][41]
Error detection capabilities further highlight these contrasts. Manual testing excels at identifying contextual defects, such as subtle usability flaws or business logic inconsistencies that require human interpretation, though it is susceptible to tester fatigue and oversight. Automated testing reliably flags exact matches against expected outcomes, providing consistent and detailed logging, but it may miss nuanced or unanticipated issues beyond its scripted parameters, such as visual inconsistencies or adaptive behaviors. Overall, manual detection prioritizes qualitative depth, while automated testing focuses on quantitative reliability.[42][34][11]
| Aspect | Manual Testing | Automated Testing |
|---|---|---|
| Approach | Human-driven execution with judgment and exploration. | Scripted execution using tools for repeatability. |
| Speed/Efficiency | Slower for regressions; ideal for ad-hoc testing. | Faster for volume; less adaptable to changes. |
| Cost Model | Low initial; high ongoing due to labor. | High initial scripting; low maintenance long-term. |
| Coverage Types | Strong in exploratory/usability. | Excels in functional/regression. |
| Error Detection | Contextual defects via human insight; prone to errors. | Exact matches; misses nuances. |
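As a concrete illustration of the scripted side of this comparison, the following is a minimal sketch using the Selenium WebDriver Python bindings (assumed to be installed along with a matching browser driver); the URL, element IDs, and credentials are hypothetical, and a manual tester would perform the same steps by hand while judging the result visually.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")                       # hypothetical application URL
    driver.find_element(By.ID, "username").send_keys("demo_user")  # hypothetical element IDs
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Automated check: an exact match against the expected outcome.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
    print("Automated login check passed")
finally:
    driver.quit()
```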
