Keyword-driven testing
from Wikipedia

Keyword-driven testing, also known as action word based testing (not to be confused with action driven testing), is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases – including both the data and functionality to use – from the prescription of the way the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The design substage covers the requirement analysis and assessment and the data analysis, definition, and population.

Overview


This methodology uses keywords (or action words) to symbolize a functionality to be tested, such as Enter Client. The keyword Enter Client is defined as the set of actions that must be executed to enter a new client in the database. Its keyword documentation would contain:

  • the starting state of the system under test (SUT)
  • the window or menu to start from
  • the keys or mouse clicks to get to the correct data entry window
  • the names of the fields to find and which arguments to enter
  • the actions to perform in case additional dialogs pop up (like confirmations)
  • the button to click to submit
  • an assertion about what the state of the SUT should be after completion of the actions

Keyword-driven testing syntax lists test cases (data and action words) using a table format (see example below). The first column (column A) holds the keyword, Enter Client, which is the functionality being tested. Then the remaining columns, B-E, contain the data needed to execute the keyword: Name, Address, Postcode and City.

A            | B          | C             | D        | E
.            | Name       | Address       | Postcode | City
Enter Client | Jane Smith | 6 High Street | SE25 6EP | London

To enter another client, the tester would create another row in the table with Enter Client as the keyword and the new client's data in the following columns. There is no need to relist all the actions included.
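To make the separation concrete, the following minimal Python sketch (not part of the original example) shows how an "Enter Client" keyword could be implemented once as a function and reused for each data row; the field handling and the second client row are purely illustrative.

python

def enter_client(name, address, postcode, city):
    """The 'Enter Client' keyword: performs every documented action
    (navigate to the entry window, fill the fields, submit, and check
    the resulting state of the system under test)."""
    # In a real implementation these steps would drive the SUT's UI or API;
    # here they are only printed to show the flow.
    print(f"Entering client: {name}, {address}, {postcode}, {city}")

# The test table: one row per client; only the data changes, never the actions.
test_rows = [
    ("Jane Smith", "6 High Street", "SE25 6EP", "London"),
    ("John Doe", "12 Station Road", "M1 2AB", "Manchester"),  # hypothetical extra row
]

for row in test_rows:
    enter_client(*row)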

Using this format, test cases are designed by:

  • Indicating the high-level steps needed to interact with the application and the system in order to perform the test.
  • Indicating how to validate and certify the features are working properly.
  • Specifying the preconditions for the test.
  • Specifying the acceptance criteria for the test.

Given the iterative nature of software development, the test design is typically more abstract (less specific) than a manual implementation of a test, but it can easily evolve into one.

Advantages


Keyword-driven testing reduces the maintenance effort caused by changes in the system or software under test (SUT). If screen layouts change or the system is migrated to another operating system, hardly any changes have to be made to the test cases: the changes are made in the keyword documentation, one document per keyword, no matter how many times the keyword is used in test cases. This does, however, presuppose a thorough test design process.

Also, because the keyword documentation describes in detail how each keyword is executed, the test can be performed by almost anyone. Keyword-driven testing can therefore be used for both manual testing and automated testing.[1]

Furthermore, this approach is an open and extensible framework that unites all the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals. And, most importantly, it provides the entire team with one place to go to determine the state of the system at any time.

Testing is the feedback mechanism in the software development process. It tells you where corrections need to be made to stay on course at any given iteration of a development effort. It also tells you about the current quality of the system being developed. The activity of implementing tests involves the design and development of reusable test scripts that implement the test case. Once a script has been implemented, it can be associated with the test case.

Implementation is different in every testing project. In one project, you might decide to build both automated test scripts and manual test scripts.[2] Designing tests, instead, is an iterative process. You can start designing tests before any system implementation by basing the test design on use case specifications, requirements, prototypes, and so on. As the system becomes more clearly specified, and you have builds of the system to work with, you can elaborate on the details of the design. The activity of designing tests answers the question, "How am I going to perform the testing?" A complete test design informs readers about what actions need to be taken with the system and what behaviors and characteristics they should expect to observe if the system is functioning properly.

A test design is different from the design work that should be done in determining how to build your test implementation.

Methodology


The keyword-driven testing methodology divides test process execution into several stages:

  1. Model basis/prototyping: analysis and assessment of requirements.
  2. Test model definition: based on the results of the requirements assessment, derive a dedicated test model of the software.
  3. Test data definition: based on the defined test model, define the keywords and the main and complementary test data.
  4. Test preparation: intake of the test basis and related materials.
  5. Test design: analysis of test basis, test case/procedure design, test data design.
  6. Manual test execution: manual execution of the test cases using keyword documentation as execution guideline.
  7. Automation of test execution: creation of automated scripts that perform actions according to the keyword documentation.
  8. Automated test execution.

Definition


A Keyword or Action Word is a defined combination of actions on a test object which describes how test lines must be executed. An action word contains arguments and is defined by a test analyst.

Testing is a key step in any development process and consists of applying a series of tests or checks to an object (the system or software under test, SUT), always keeping in mind that testing can only show the presence of errors, not their absence. When testing a real-time (RT) system, it is not sufficient to check whether the SUT produces the correct outputs; it must also be verified that the time taken to produce those outputs is as expected. Furthermore, the timing of these outputs may also depend on the timing of the inputs, and the timing of applicable future inputs is in turn determined by the outputs.[2]

Automation of the test execution


The implementation stage differs depending on the tool or framework. Often, automation engineers implement a framework that provides keywords like “check” and “enter”.[1] Testers or test designers (who do not need to know how to program) write test cases based on the keywords defined in the planning stage that have been implemented by the engineers. The test is executed using a driver that reads the keywords and executes the corresponding code.
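As an illustration of this division of labor, the sketch below (assumed names, no real tool) shows engineer-implemented "enter" and "check" keywords and a small driver that reads designer-authored rows and dispatches them to the matching code.

python

value_store = {}  # stand-in for the state of the application under test

def enter(field, value):
    """Keyword implemented by the automation engineer."""
    value_store[field] = value
    print(f"enter {value!r} into {field}")

def check(field, expected):
    """Verification keyword; raises if the observed value differs."""
    actual = value_store.get(field)
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

KEYWORDS = {"enter": enter, "check": check}  # engineer-maintained mapping

# Designer-authored test case: keyword in the first column, data in the rest.
test_case = [
    ("enter", "name", "Jane Smith"),
    ("check", "name", "Jane Smith"),
]

for keyword, *args in test_case:
    KEYWORDS[keyword](*args)  # the driver executes the corresponding code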

Other methodologies use an all-in-one implementation stage. Instead of separating the tasks of test design and test engineering, the test design is the test automation. Keywords, such as “edit” or “check” are created using tools in which the necessary code has already been written. This removes the necessity for extra engineers in the test process, because the implementation for the keywords is already a part of the tool. Examples include GUIdancer and QTP.

Pros

  • Maintenance is low in the long run:
    • Test cases are concise
    • Test cases are readable for the stakeholders
    • Test cases are easy to modify
    • New test cases can reuse existing keywords more easily
  • Keyword re-use across multiple test cases
  • Not dependent on a specific tool or programming language
  • Division of Labor
    • Test case construction needs stronger domain expertise and less tooling or programming skill
    • Keyword implementation requires stronger tooling or programming skill, with relatively less domain knowledge
  • Abstraction of Layers

Cons

  • Longer time to market (as compared to manual testing or record and replay technique)
  • Moderately high learning curve initially


References

from Grokipedia
Keyword-driven testing (KDT), also known as action word-driven testing, is a scripting technique in which test scripts contain high-level keywords and supporting files contain the low-level scripts implementing those keywords. This methodology enables the creation of test cases using predefined keywords that represent specific user actions or application functions, typically organized in a tabular format such as a keyword-driven test table, which includes columns for steps, keywords, objects, input data, and expected results. By separating test design from execution, KDT supports both manual and automated testing, making it suitable for functional and regression testing.

The core components of keyword-driven testing include a keyword-driven test table for defining test cases, a function library mapping keywords to executable code, an object repository for UI elements, data sheets for test inputs and outputs, and driver scripts to orchestrate execution. The workflow begins with identifying and developing keywords during the design phase, followed by assembling them into test cases; automation tools then interpret and run these keywords against the application under test. This approach evolved from earlier table-driven and data-driven techniques, forming the basis for modern low-code and no-code testing platforms.

Key advantages of keyword-driven testing include promoting collaboration between technical developers and non-technical stakeholders, such as business analysts and manual testers, by reducing the need for programming expertise in test case creation. It also improves the reusability and maintainability of tests, as changes to the application require updates only to the underlying keyword implementations rather than individual scripts, and it supports language- and tool-independent test planning even before the application is fully developed. Widely adopted in agile and DevOps environments, KDT facilitates faster test execution and serves as living documentation for test scenarios.

Fundamentals

Definition and Principles

Keyword-driven testing is a software testing methodology that employs predefined keywords to define and execute test cases, representing specific actions or verifications in a structured format suitable for both manual and automated testing. This approach organizes test cases into tables or spreadsheets, where keywords such as "click" or "verifyText" correspond to predefined functions or scripts, enabling a clear separation between test specifications and their underlying implementation. As a test case specification technique, it supports the development of automation frameworks by abstracting complex scripting into reusable components.

A core principle of keyword-driven testing is the decoupling of test logic from technical implementation details, allowing non-technical stakeholders, such as business analysts or domain experts, to author and maintain test cases without deep programming knowledge. Keywords act as high-level commands that map directly to modular scripts or functions in a keyword library, fostering reusability and reducing redundancy across multiple test scenarios. This modularity promotes maintainable test structures, where changes to the underlying implementation affect only the keyword mappings rather than individual test cases. For instance, a simple test case might be represented in a tabular format with columns for Keyword, Object, and Parameter, as shown below:
Keyword      | Object | Parameter
openBrowser  | Chrome | N/A
navigateTo   | URL    | https://example.com
verifyText   | Header | "Welcome"
closeBrowser | N/A    | N/A
This format encapsulates the test flow using keywords that link to executable code, illustrating the framework's emphasis on abstraction and reusability.
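A minimal, browser-free sketch of how such a table might be linked to executable code is shown below; the function bodies are stubs and the mapping is an assumption for illustration, not a specific framework's API.

python

def open_browser(obj, param):
    print(f"open {obj} browser")

def navigate_to(obj, param):
    print(f"navigate to {param}")

def verify_text(obj, param):
    print(f"verify that {obj} contains {param!r}")

def close_browser(obj, param):
    print("close browser")

# Keyword names from the table resolve to functions; changing a function's
# body never requires touching the table itself.
KEYWORD_MAP = {
    "openBrowser": open_browser,
    "navigateTo": navigate_to,
    "verifyText": verify_text,
    "closeBrowser": close_browser,
}

table = [
    ("openBrowser", "Chrome", None),
    ("navigateTo", "URL", "https://example.com"),
    ("verifyText", "Header", "Welcome"),
    ("closeBrowser", None, None),
]

for keyword, obj, param in table:
    KEYWORD_MAP[keyword](obj, param)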

Historical Development

Keyword-driven testing emerged in the 1990s as a response to the limitations of linear scripting in early test automation efforts, which often resulted in brittle and hard-to-maintain test cases due to their sequential, code-heavy nature. This approach built on modular test automation principles, allowing testers to abstract test steps into reusable components rather than writing extensive scripts for each scenario. The methodology drew initial influence from action-word-based practices in manual testing during the 1980s and 1990s, where testers documented procedures using descriptive action terms to improve clarity and reusability in test plans.

A pivotal milestone occurred in the mid-1990s when Hans Buwalda developed the foundational concepts of what became known as action-based testing, a precursor to modern keyword-driven frameworks. In 1994, Buwalda originated this method, using spreadsheets to define tests via keywords and arguments, separating test logic from implementation to handle complex, changing requirements. He formalized the approach in publications, including a 1996 paper, "Automated Testing with Action Words: Abandoning Record & Playback," which advocated abandoning rigid record-and-playback tools in favor of keyword modularity. By 2001-2002, Buwalda had presented the technique at testing conferences, gaining industry recognition and leading to its adoption in commercial tools.

The early 2000s saw keyword-driven testing integrated into proprietary tools, notably Mercury Interactive's QuickTest Professional (QTP), which introduced a Keyword View in version 8.0, released in late 2004, enabling users to build tests visually using predefined keywords for actions like clicks and verifications. This feature, now part of HP Unified Functional Testing (UFT), popularized the methodology in enterprise environments by simplifying automation for non-programmers. Open-source advancements followed with the release of Robot Framework in 2008, which provided a keyword-driven structure for test automation and further democratized the approach through its extensible library system.

By the 2010s, keyword-driven testing evolved to align with agile methodologies and continuous integration (CI) practices, facilitating faster feedback loops in iterative development. Frameworks like Robot Framework were adapted for CI tools such as Jenkins, allowing automated execution of keyword-based suites in pipelines, as demonstrated in industrial automation testing standards. Post-2020, the methodology has incorporated broader AI and machine learning integrations to improve efficiency and coverage.

Key Components

Keywords and Actions

In keyword-driven testing, keywords serve as the fundamental building blocks that represent specific actions or operations within test scripts, allowing testers to abstract complex functionalities into reusable terms. Keywords are typically categorized into three main types: high-level, low-level, and custom. High-level keywords encapsulate broader processes or user workflows, such as "completePurchase", which often combine multiple lower-level actions to simulate end-to-end scenarios. Low-level keywords focus on granular interactions with the application under test, such as "enterText" or "clickButton", corresponding to basic UI manipulations like inputting data or triggering events. Custom keywords are user-defined extensions tailored to domain-specific needs, enabling teams to create specialized actions beyond standard libraries, such as "validatePaymentGateway".

The keyword mapping process involves linking these abstract terms to concrete implementations, such as scripts, functions, or APIs, to translate high-level descriptions into executable code. This mapping is often documented in a centralized repository, like an Excel sheet or a dedicated file, where each keyword is associated with its underlying logic; for instance, the keyword "clickButton" might map directly to Selenium's click() method, including parameters for element locators and optional waits. During execution, a driver script or framework engine interprets the mapped keywords sequentially, invoking the corresponding code while handling any dependencies like object repositories for UI elements. This separation ensures that changes to the underlying implementation (e.g., updating a UI selector) only require modifying the mapping, without altering test cases.

Best practices for keyword creation emphasize reusability, parameterization, and robustness to enhance maintainability. Reusability is achieved by designing keywords as independent, self-contained units that can be applied across multiple test scenarios, potentially reducing script duplication by up to 60%. Parameterization allows keywords to accept dynamic inputs, such as variables for usernames or URLs (e.g., login(username, password)), enabling flexible adaptation to varying test data without rewriting the keyword itself. Error handling should be integrated within keywords, including try-catch blocks, validation checks, and recovery mechanisms to gracefully manage failures like element-not-found exceptions, ensuring reliable execution and clear reporting.

A typical keyword library structure organizes these elements in a tabular format for clarity and ease of maintenance, often using spreadsheets or framework-specific files. The following example illustrates a simple keyword library excerpt:
Keyword     | Description                        | Parameters                 | Associated Code Snippet (Pseudocode)
openBrowser | Launches a browser instance        | browserType (e.g., chrome) | driver = new WebDriver(browserType); driver.get("");
enterText   | Inputs text into a specified field | locator, textValue         | driver.findElement(locator).sendKeys(textValue);
clickButton | Clicks on a UI element             | locator                    | driver.findElement(locator).click();
login       | Performs a complete login workflow | username, password         | enterText(usernameField, username); enterText(passwordField, password); clickButton(loginBtn);
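The following Python sketch, using Selenium's public API, shows how two of the keywords above might be implemented with parameterization, an explicit wait, and error handling as recommended; the locator values, the KeywordError type, and the composite login keyword are illustrative assumptions.

python

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

class KeywordError(Exception):
    """Raised when a keyword cannot complete; caught and reported by the driver."""

def enter_text(driver, element_id, text_value, timeout=10):
    """Keyword 'enterText': type text_value into the element with the given id."""
    try:
        element = WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.ID, element_id)))
        element.clear()
        element.send_keys(text_value)
    except TimeoutException as exc:
        raise KeywordError(f"enterText: element '{element_id}' not found") from exc

def click_button(driver, element_id, timeout=10):
    """Keyword 'clickButton': click the element with the given id."""
    try:
        WebDriverWait(driver, timeout).until(
            EC.element_to_be_clickable((By.ID, element_id))).click()
    except TimeoutException as exc:
        raise KeywordError(f"clickButton: element '{element_id}' not clickable") from exc

def login(driver, username, password):
    """Composite keyword built entirely from lower-level keywords."""
    enter_text(driver, "username", username)
    enter_text(driver, "password", password)
    click_button(driver, "login")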

Test Case Structure

In keyword-driven testing, test cases are constructed in a structured, often tabular format to separate test logic from implementation details, enabling non-technical users to author and maintain them. Typically, these test cases are organized using spreadsheets such as Excel sheets, where each row represents a test step and columns capture essential elements such as the keyword (action to perform), object locator (identifier for the UI element, e.g., XPath or ID), input data (parameters for the action), and expected result (verification criteria).

Key components of a test case include the overall test scenario (a high-level description of the functionality being tested, such as user authentication), preconditions (initial setup actions like launching the application or navigating to a base URL), postconditions (cleanup steps such as logging out or closing the browser), and the sequencing of keywords that defines the step-by-step flow. This sequencing ensures linear execution unless modified by control structures, with each keyword invoking a predefined action from the keyword library. For handling complex cases, test cases incorporate control keywords to manage flow, such as "if" for conditional branching based on prior outcomes or "loop" (e.g., "FOR" in frameworks like Robot Framework) for repeating sequences over datasets or iterations. These control keywords allow test cases to adapt to dynamic scenarios without embedding programming logic directly into the structure; a sketch of such handling follows the example below.

A representative example is a test case for a login workflow, structured in a table as follows:
Step | Keyword       | Object Locator       | Data                      | Expected Result
1    | Open Browser  | N/A                  | Chrome                    | Browser window opens
2    | Navigate To   | N/A                  | https://example.com/login | Login page loads
3    | Input Text    | username_field (ID)  | [email protected]         | Username field populated
4    | Input Text    | password_field (ID)  | password123               | Password field populated
5    | Click         | login_button (XPath) | N/A                       | User dashboard displays
6    | Verify Text   | welcome_message (ID) | Welcome, Test User        | Welcome message matches
7    | Close Browser | N/A                  | N/A                       | Browser closes
This sequence demonstrates how keywords form a cohesive, code-free test case that can be executed by a driver script interpreting the table row by row.
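To illustrate how a driver might handle a control keyword of this kind, the sketch below (structure and names assumed, keyword bodies stubbed) runs a nested verification step only when a condition recorded by an earlier step holds.

python

def click(locator, data=None):
    print(f"click {locator}")

def verify_text(locator, expected):
    print(f"verify {locator} shows {expected!r}")

ACTIONS = {"Click": click, "Verify Text": verify_text}

# Execution context shared between steps; an earlier step sets this flag.
context = {"login_succeeded": True}

test_steps = [
    ("Click", "login_button", None),
    # "IF" control keyword: run the nested step only if the condition is true.
    ("IF", "login_succeeded", ("Verify Text", "welcome_message", "Welcome, Test User")),
]

for step in test_steps:
    if step[0] == "IF":
        _, condition, nested = step
        if context.get(condition):
            keyword, locator, data = nested
            ACTIONS[keyword](locator, data)
    else:
        keyword, locator, data = step
        ACTIONS[keyword](locator, data)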

Implementation and Methodology

Building a Keyword Library

Building a keyword library involves creating a centralized repository of reusable keywords that represent specific actions or operations in the application under test, serving as the foundational building blocks for keyword-driven testing frameworks. This library enables testers to compose test cases by sequencing keywords without delving into underlying implementation details, promoting modularity and a clean separation between test logic and execution. According to ISO/IEC/IEEE 29119-5 (2024 edition), keywords are defined by identifying sets of actions expected to occur frequently, ensuring they are named naturally and documented with parameters for clarity and reusability.

The process of constructing a keyword library begins with identifying common actions through analysis of test requirements, exploratory testing, or consultation with domain experts to pinpoint reusable operations such as navigation, verification, or data manipulation. Next, developers create scripts or functions for each keyword, typically implementing low-level interactions with the system under test in a programming language such as Python or Java, while higher-level composite keywords combine these to form more abstract actions. Documentation follows, recording each keyword's name, description, parameters, expected outcomes, and usage examples in a structured format such as tables or resource files to facilitate understanding across teams. Finally, version control is integrated using tools such as Git to track changes, manage dependencies, and enable rollback, ensuring the library evolves alongside the application.

Organization of the keyword library emphasizes categorization to enhance accessibility and scalability, often grouping keywords by application module, by layer such as user interface (UI) elements (e.g., "click_button") versus application programming interface (API) calls (e.g., "send_request"), or by domain-specific versus test interface actions. Hierarchical structures, including base keywords for atomic operations and composite ones for sequences, support this organization and accommodate new keywords without disrupting existing structures. This modular approach, as seen in frameworks like Robot Framework, permits easy import of libraries via settings files, fostering extensibility for diverse testing needs. In practice, the library is stored externally, such as in resource files or databases, independent of specific test cases to maximize reuse. The 2024 edition of ISO/IEC/IEEE 29119-5 enhances library specifications with an initial list of generic technical keywords (e.g., "inputData", "checkValue") and emphasizes hierarchical keywords at various abstraction levels.

Maintenance practices focus on keeping the library aligned with application evolution, involving regular reviews, such as monthly audits, to update keywords for UI changes or new features, deprecate obsolete ones by marking them with warnings or removing them after impact analysis, and enforce consistency through a dedicated review process. Cross-references track keyword usage across test cases, minimizing ripple effects from modifications, while continuous support requires allocated staff, budget, and training to handle ongoing refinements. In agile environments, this reduces overall maintenance effort by localizing changes to affected keywords rather than entire test suites.

Challenges in library development include balancing abstraction levels: overly low-level keywords increase complexity, while high-level ones may lack precision, potentially leading to verbose test cases or implementation mismatches.
Avoiding over-generalization is critical, as broad keywords like "select" can introduce redundancy or ambiguity if not scoped properly, complicating maintenance and requiring careful initial design to ensure uniqueness and specificity. Additionally, initial setup demands significant effort in identification and scripting, with risks of uncoordinated changes causing conflicts if version control and reviews are not rigorously applied.
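One possible way to organize such a library in Python is sketched below, with low-level generic keywords (named after the generic technical keywords mentioned above) and a composite domain keyword built on top of them; all function names and bodies are illustrative assumptions.

python

# --- base keywords: atomic, application-independent actions -----------------
def input_data(field, value):
    """Generic technical keyword: enter `value` into `field`."""
    print(f"input {value!r} into {field}")

def check_value(field, expected):
    """Generic technical keyword: compare the field's observed value with `expected`."""
    print(f"check that {field} equals {expected!r}")

# --- composite (domain) keywords: business-level actions ---------------------
def enter_client(name, postcode):
    """Domain keyword composed from base keywords; test cases call only this,
    so a UI change is absorbed by the base keywords underneath."""
    input_data("name", name)
    input_data("postcode", postcode)
    check_value("status", "saved")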

Executing Tests

In keyword-driven testing, the execution process begins with a test execution engine or parser that reads the test case, typically structured as a table or sequence of keywords with associated parameters. This engine interprets the keywords by mapping them to predefined scripts or executable code stored in a keyword library, where each keyword corresponds to a specific action or function. The tests then proceed sequentially, invoking the mapped scripts in the order specified, which allows for modular and repeatable execution across different test scenarios. Results are logged throughout, capturing outcomes such as pass/fail status, timestamps, and execution durations for each keyword step.

For automated execution, the framework integrates with drivers for web or UI interactions, where the tool bridge connects high-level keywords to low-level operations such as locating elements and handling inputs. This integration enables interactions with the application, assertions to verify expected states (e.g., element presence or text matching), and real-time reporting mechanisms that generate summaries or detailed logs in formats such as HTML or XML. The same keyword structure supports manual execution, where testers follow the sequence using a manual test assistant tool to perform and record steps without automation, though automated runs require a central driver script to orchestrate the flow and handle environment setup.

Error handling occurs at the keyword level, where the execution engine detects exceptions such as unimplemented keywords, timeouts, or assertion failures, marking the affected step as blocked or failed while allowing continuation or cleanup via predefined exception handlers. Failures trigger detailed reporting, including error messages, stack traces, and screenshots if applicable, to facilitate debugging and incident analysis without halting unrelated test portions. This granular approach supports partial test completion even in the presence of isolated issues.
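A hedged sketch of keyword-level error handling in such an engine is shown below: each step runs in isolation, unimplemented keywords are marked blocked, failed assertions are logged, and the run continues so unrelated steps still execute. The keyword names and result format are assumptions.

python

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def run_test_case(steps, keyword_map):
    """Execute (keyword, *args) steps and return a per-step result summary."""
    results = []
    for index, (keyword, *args) in enumerate(steps, start=1):
        implementation = keyword_map.get(keyword)
        if implementation is None:
            results.append((index, keyword, "BLOCKED: keyword not implemented"))
            logging.error("Step %d blocked: no implementation for %r", index, keyword)
            continue
        try:
            implementation(*args)
            results.append((index, keyword, "PASS"))
        except AssertionError as exc:          # failed verification
            results.append((index, keyword, f"FAIL: {exc}"))
            logging.error("Step %d failed: %s", index, exc)
        except Exception as exc:               # timeout, missing element, etc.
            results.append((index, keyword, f"ERROR: {exc}"))
            logging.exception("Step %d raised an unexpected error", index)
    return results

def enter(field, value):
    print(f"enter {value!r} into {field}")

def check(field, expected):
    raise AssertionError(f"{field}: expected {expected!r}")  # simulated failure

steps = [("enter", "name", "Jane"), ("check", "status", "saved"), ("close",)]
for line in run_test_case(steps, {"enter": enter, "check": check}):
    print(line)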

Advantages and Challenges

Benefits

Keyword-driven testing offers enhanced maintainability by allowing modifications to the underlying implementation of a keyword in a centralized library, which propagate across all associated test cases without requiring updates to individual scripts. This modularity reduces sensitivity to application changes, minimizing overall maintenance effort compared to monolithic scripts. Studies on evolving keyword-driven test suites have demonstrated potential reductions of approximately 70% in required changes, further underscoring its efficiency in long-term upkeep.

Reusability is a core strength, as keywords, defined as modular, generic components, can be applied across multiple test cases, projects, or even similar systems, promoting portability and reducing redundancy in test development. For instance, a keyword encapsulating a common action such as logging in can be reused in various scenarios, streamlining the creation of comprehensive test suites without duplicating effort.

The approach enhances accessibility for non-programmers, such as business analysts or domain experts, by employing English-like keywords that abstract technical details, enabling them to author and edit test cases without requiring coding expertise. This lowers the barrier to participation in testing activities, allowing contributions from stakeholders who understand business requirements but lack programming skills. Improved collaboration arises from the readable, domain-oriented nature of keyword-driven test cases, which bridge the gap between technical testers and business-level experts by using terminology familiar to all parties for review and validation. Such clarity facilitates joint efforts in test design and ensures alignment on business correctness without necessitating deep technical knowledge from non-developers.

Scalability is supported through efficient expansion of test suites, as new cases can be composed from an existing keyword library with minimal additional effort, leading to cost and schedule savings in both manual and automated contexts. This structure accommodates growing project needs without proportional increases in development or maintenance overhead.

Limitations

One significant limitation of keyword-driven testing is the high initial setup effort required to develop a comprehensive keyword library. Creating reusable, application-independent keywords demands substantial time and expertise, often delaying the first delivery of automated tests compared to simpler scripting approaches. This upfront investment can extend project timelines, especially for large-scale applications where extensive coverage is needed. To mitigate this, teams can prioritize developing a minimal viable library focused on core functionalities and leverage open-source tools for pre-built keywords, gradually scaling as benefits accrue.

Another drawback is the performance overhead introduced by the interpretation layer in keyword-driven frameworks. The need to parse and map keywords to underlying scripts during execution can result in slower test runs compared to direct code execution, particularly for GUI-intensive tests where keyword failures trigger retries and timeouts. For instance, failing keywords may significantly prolong overall execution time unless optimized. Mitigation strategies include setting appropriate time limits for keywords and optimizing the framework's parsing efficiency through streamlined library design.

Keyword-driven testing also exhibits limited flexibility for handling complex logic, such as highly conditional or dynamic scenarios, without developing custom keywords. This approach struggles with intricate decision trees or asynchronous behaviors, often requiring additional low-level scripting that undermines the framework's benefits. Poor handling of such cases can lead to brittle tests that fail unexpectedly during application evolution. To address this, practitioners can balance high-level and low-level keywords carefully, incorporating conditional logic within the library while avoiding over-customization.

The quality of the keyword library heavily influences the framework's effectiveness, as poorly designed keywords, such as those tightly coupled to specific applications, can create ongoing challenges. Inadequate libraries amplify fragility, making tests prone to breakage from even minor changes to the software under test. This dependency often results in higher long-term costs if keywords lack reusability. Mitigation involves enforcing design principles like reusability and independence during library creation, coupled with regular refactoring to ensure robustness.

Finally, keyword-driven testing presents a learning curve for advanced customization, necessitating proficiency in scripting languages and framework integration for testers. Non-technical users may initially struggle with extending libraries or keyword mappings, limiting adoption in diverse teams. This barrier can slow progress, particularly in agile environments requiring rapid adaptations. To overcome it, organizations should provide targeted training on tool-specific scripting and encourage knowledge transfer from experienced architects.

Tools and Frameworks

Several popular tools and frameworks facilitate keyword-driven testing by providing built-in support for keyword libraries, intuitive interfaces for test creation, and features like detailed reporting and cross-platform execution, which are key criteria for tool selection in this methodology.

Robot Framework is an open-source, Python-based framework that inherently employs a keyword-driven approach, using a tabular format to define tests with extensible keywords for web, mobile, and desktop applications. It emphasizes ease of keyword creation through user-defined libraries and offers rich reporting via outputs, logs, and screenshots, while supporting cross-platform testing on Windows, macOS, and Linux.

Micro Focus Unified Functional Testing (UFT), now part of OpenText, is a commercial tool that supports keyword-driven testing through its visual Keyword View, enabling users to build tests by dragging and dropping keywords without extensive scripting. It includes advanced features for keyword management, such as reusable function libraries, and provides comprehensive reporting with dashboards and integration options for CI/CD pipelines, alongside support for testing desktop, web, and mobile applications across multiple platforms.

Tricentis Tosca is a commercial, model-based tool that incorporates keyword-driven testing with visual, reusable test modules and risk-based optimization. It supports codeless keyword creation for end-to-end testing across web, mobile, API, and desktop applications, featuring AI-assisted test design, detailed execution reports, and seamless integration for agile environments as of 2025.

Selenium, an open-source framework primarily for web testing, is frequently adapted for keyword-driven testing by integrating with wrapper libraries or custom frameworks that abstract actions into reusable keywords, simplifying test maintenance for browser-based applications. This combination enhances keyword creation via predefined action words and supports reporting through plugins like Allure or ExtentReports, with cross-browser and cross-platform compatibility via drivers for Chrome, Firefox, and more.

Other notable tools include Katalon Studio, which offers built-in keyword support for web, API, mobile, and desktop testing, featuring a low-code interface for easy keyword development and broad platform coverage. TestComplete by SmartBear provides keyword-driven testing via its drag-and-drop Keyword Tests, with strong extensibility, detailed execution reports, and support for multiple application types across platforms. Appium, an open-source tool for mobile automation, extends keyword-driven capabilities through integrations with frameworks like Robot Framework, allowing keyword-based tests for iOS and Android apps with reporting via external tools and cross-device support. TestRigor, an AI-powered codeless automation tool, enables keyword-driven testing using plain-English commands for web, mobile, and API testing, with self-healing capabilities, built-in reporting, and support for cross-platform execution as of 2025.

Integration Examples

In Robot Framework, keyword-driven testing can be implemented by defining custom keywords that interact with web elements via the SeleniumLibrary. For instance, a keyword named "Verify Login" might encapsulate the process of entering credentials and asserting a successful login on a page. This keyword could be structured as follows, using arguments for username and password:

*** Keywords ***
Verify Login
    [Arguments]    ${username}    ${password}
    Input Text    id=username    ${username}
    Input Text    id=password    ${password}
    Click Button    id=login
    Page Should Contain    Welcome
    [Teardown]    Close Browser

To integrate this with web elements, the SeleniumLibrary is imported in the settings section, enabling actions like Input Text and Click Button on locators such as IDs or XPaths. A test suite can then invoke this keyword within a test case, such as:

*** Test Cases ***
Valid Login Test
    Open Browser    http://example.com/login    chrome
    Verify Login    demo    mode

Running the suite with robot test_suite.robot executes the test, producing logs and reports that detail keyword execution and outcomes. For Selenium integration in a Java-based environment, a keyword-driven framework can be built using TestNG for test execution and management. The framework typically includes an object repository for element locators, a keyword library class implementing actions like click or sendKeys, and an execution engine that maps Excel-based test steps to these methods. A Java keyword driver class, such as ActionKeywords, might define methods corresponding to keywords:

java

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ActionKeywords {
    private WebDriver driver;

    public void openBrowser(String browser) {
        if (browser.equals("firefox")) {
            driver = new FirefoxDriver();
        }
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    public void inputUsername(String username) {
        driver.findElement(By.id("log")).sendKeys(username);
    }

    public void clickLogin() {
        driver.findElement(By.id("login")).click();
    }
}

The TestNG execution class reads test data from an Excel sheet (e.g., rows specifying "inputUsername" as the keyword and "testuser" as the value) and invokes the corresponding method via reflection or a lookup table. Annotations like @Test in TestNG organize suites, allowing parallel execution and reporting. This setup automates a login flow by sequencing keywords like openBrowser, inputUsername, inputPassword, and clickLogin.

In API testing, keywords can handle HTTP calls using libraries such as RequestsLibrary in Robot Framework, enabling declarative test scripts for endpoints. For example, a keyword for a POST request to create a user might be defined as:

*** Keywords ***
Create User
    [Arguments]    ${user_data}
    POST    /users    ${user_data}
    Response Status Code Should Be    201
    Output    ${response.json()}

A test case integrates this by providing data, such as {"name": "John", "email": "[email protected]"}, and verifying the response. Similarly, a GET keyword like Retrieve User /users/1 Integer response body id 1 asserts specific values in the JSON response. These keywords abstract HTTP methods and validations, allowing suites to test full API workflows without scripting low-level details. While tools like Postman support scripting extensions via the Newman CLI for keyword-like chaining in collections, Robot Framework's tabular syntax provides native keyword support for API interactions.

Keyword-driven tests can integrate into CI pipelines using Jenkins, where suites are executed via plugins and results are reported through Allure for visual dashboards. In Jenkins, the plugin schedules builds triggered by commits, running commands like robot --listener allure_robotframework:allure_results tests/. The allure-robotframework listener generates XML outputs during execution, which Allure merges into reports accessible post-build. For example, a Jenkins pipeline script might include stages for checkout, test execution (sh 'robot test_suite.robot'), and reporting (allure serve allure_results), providing metrics like pass/fail rates and step traces. This setup delivers automated feedback with historical trend analysis in Allure.

A real-world scenario involves an e-commerce login test using hybrid keywords across web and mobile platforms in Robot Framework, leveraging SeleniumLibrary for web and AppiumLibrary for mobile. A shared keyword like "Login To E-Commerce" accepts a platform argument:

*** Keywords ***
Login To E-Commerce
    [Arguments]    ${platform}    ${username}    ${password}
    IF    '${platform}' == 'web'
        Open Browser    https://shop.example.com/login    chrome
        SeleniumLibrary.Input Text    id=email    ${username}
    ELSE IF    '${platform}' == 'mobile'
        Open Application    http://localhost:4723/wd/hub    platformName=Android
        AppiumLibrary.Input Text    id=email    ${username}
    END
    Input Text    id=password    ${password}
    Click Element    id=login-button
    Page Should Contain Element    class=welcome-message

Test cases invoke this keyword for cross-platform validation, such as Login To E-Commerce    web    [email protected]    pass123 followed by a mobile execution, ensuring consistent behavior such as secure authentication and session handling in an online store. This hybrid approach reuses keywords while adapting locators for web (e.g., CSS selectors) and mobile (e.g., IDs).

Comparisons

With Data-Driven Testing

Data-driven testing is an automation approach that separates test data from the underlying test scripts, enabling the execution of the same test logic with multiple sets of input values to validate functionality across varied scenarios. This method typically stores data in external files such as spreadsheets, CSV files, or databases, which the script reads to parameterize tests and reduce code duplication. By focusing on data variation rather than action definition, it supports exhaustive validation of inputs, such as testing form submissions with diverse user details.

In contrast to keyword-driven testing, which abstracts reusable actions through predefined keywords to promote modularity and accessibility for non-technical users, data-driven testing prioritizes the parameterization of inputs to cover edge cases and boundary conditions within a fixed script structure. For instance, keyword-driven testing might define a login action via keywords like "Enter Username" and "Click Submit", executed once per test case, whereas data-driven testing applies the same login script to numerous credential sets sourced from a CSV file to simulate different users. This distinction highlights keyword-driven testing's emphasis on action reusability across diverse scenarios, while data-driven testing excels in scenarios requiring broad input coverage without altering the core logic.

Keyword-driven testing is particularly suitable for building maintainable test suites where actions need to be shared and adapted across multiple test flows, such as workflows involving search, add-to-cart, and checkout steps. Data-driven testing, however, is ideal for applications demanding rigorous validation of data-dependent behaviors, like financial systems processing varied transaction amounts or user registrations with international formats. Selecting between them depends on project needs: keyword-driven testing for maintainability in complex, action-heavy environments, and data-driven testing for efficiency in data-intensive validations.

Hybrid approaches integrate both methodologies to leverage their strengths, such as using keyword-driven structures to define actions while incorporating data-driven elements like external data tables for parameterization, resulting in more comprehensive and flexible test suites. For example, a hybrid login test could employ keywords for the action sequence but iterate over multiple user credentials from a data file to test under various conditions, enhancing coverage without redundant scripting. This combination is increasingly adopted in agile teams to balance reusability with thorough input testing.
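A small Python sketch of such a hybrid is given below: the login action sequence is written once as keywords, while a data-driven table of credentials (all values hypothetical) feeds the same sequence repeatedly.

python

def enter_username(value):
    print(f"enter username {value!r}")

def enter_password(value):
    print(f"enter password {value!r}")

def click_submit():
    print("click submit")

# Data-driven part: in practice these rows would come from a CSV file or spreadsheet.
credential_sets = [
    {"username": "alice@example.com", "password": "pw1"},
    {"username": "bjorn@example.de", "password": "pw2"},
]

# Keyword-driven part: one fixed action sequence, parameterized by each data set.
for data in credential_sets:
    enter_username(data["username"])
    enter_password(data["password"])
    click_submit()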

With Script-Based Testing

Script-based testing, also known as linear or programmable scripting, involves directly coding test procedures in programming languages such as Java, Python, or C#, where testers write line-by-line logic to simulate user interactions, verify conditions, and handle exceptions. This approach provides full programmatic control, allowing for complex conditional statements, loops, and custom functions tailored to specific application behaviors. In contrast, keyword-driven testing abstracts the underlying code into reusable keywords that represent high-level actions, such as "login" or "navigate", stored in tables or external files and interpreted by a driver script.

The key differences lie in abstraction and accessibility: script-based testing requires programming expertise for test creation and maintenance, offering precise control but resulting in verbose, application-specific code that is less readable for non-technical stakeholders. Keyword-driven testing enhances readability and enables domain experts to contribute to test design without coding, though it relies on a predefined keyword library that must map accurately to scripted implementations.

Trade-offs between the two highlight maintenance priorities: keyword-driven testing reduces scripting overhead by promoting reusability across tests, lowering long-term costs, but introduces interpretation overhead from the driver layer. Script-based testing accelerates development for simple, one-off tests due to its directness but becomes brittle to UI changes, demanding frequent code rewrites and increasing fragility in evolving applications. For scalability, keyword-driven methods better support large test suites by decoupling test logic from implementation details.

Migration from script-based to keyword-driven testing often involves refactoring existing scripts into modular functions that serve as keywords, enabling gradual adoption for better scalability in enterprise environments. This process typically starts by identifying common patterns in scripts, extracting them into a keyword library, and replacing direct calls with keyword references, which can reduce redundancy and improve team collaboration over time.

A representative example is a login test flow. In script-based testing, the procedure might be hardcoded as follows in Python using Selenium:

python

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/login")
driver.find_element("id", "username").send_keys("user")
driver.find_element("id", "password").send_keys("pass")
driver.find_element("id", "submit").click()
assert "Welcome" in driver.page_source
driver.quit()

This embeds all logic inline, making modifications UI-dependent. Conversely, keyword-driven testing represents the same flow in a tabular format, such as:
Object      | Keyword | Value
LoginPage   | Enter   | username, user
LoginPage   | Enter   | password, pass
LoginButton | Click   |
WelcomePage | Verify  | text, Welcome
The driver interprets these keywords by calling corresponding functions from the library, promoting reusability for other tests.
