Mock object
In computer science, a mock object is an object that imitates a production object in limited ways. A programmer might use a mock object as a test double for software testing. A mock object can also be used in generic programming.
Analogy
A mock object can serve the software tester in much the same way that a crash test dummy serves a car designer: as a stand-in that simulates a human in a vehicle impact.
Motivation
In a unit test, mock objects can simulate the behavior of complex, real objects and are therefore useful when a real object is impractical or impossible to incorporate into a unit test. If an object has any of the following characteristics, it may be useful to use a mock object in its place:
- it supplies non-deterministic results (e.g. the current time or the current temperature)
- it has states that are difficult to create or reproduce (e.g. a network error)
- it is slow (e.g. a complete database, which would have to be prepared before the test)
- it does not yet exist or may change behavior
- it would have to include information and methods exclusively for testing purposes (and not for its actual task)
For example, an alarm clock program which causes a bell to ring at a certain time might get the current time from a time service. To test this, the test must wait until the alarm time to know whether it has rung the bell correctly. If a mock time service is used in place of the real time service, it can be programmed to provide the bell-ringing time (or any other time) regardless of the real time, so that the alarm clock program can be tested in isolation.
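The alarm clock scenario can be sketched with the standard library's `unittest.mock`. All names here (`AlarmClock`, `current_time`, `ring`) are illustrative, not from any particular library:

```python
from unittest.mock import Mock

# Hypothetical alarm clock that rings a bell when the injected
# time service reports the alarm time.
class AlarmClock:
    def __init__(self, time_service, bell, alarm_time):
        self.time_service = time_service
        self.bell = bell
        self.alarm_time = alarm_time

    def check(self):
        # Ring the bell once the reported time reaches the alarm time.
        if self.time_service.current_time() >= self.alarm_time:
            self.bell.ring()

# The mock time service reports the alarm time regardless of the real clock,
# so the test never has to wait.
time_service = Mock()
time_service.current_time.return_value = "07:00"
bell = Mock()

AlarmClock(time_service, bell, alarm_time="07:00").check()
bell.ring.assert_called_once()  # the bell rang without waiting for 07:00
```

Because the time service is injected rather than read from the system clock, the test is deterministic and runs instantly.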
Technical details
Mock objects have the same interface as the real objects they mimic, allowing a client object to remain unaware of whether it is using a real object or a mock object. Many available mock object frameworks allow the programmer to specify which methods will be invoked on a mock object, in what order, what parameters will be passed to them, and what values will be returned. Thus, the behavior of a complex object such as a network socket can be mimicked by a mock object, allowing the programmer to discover whether the object being tested responds appropriately to the wide variety of states such mock objects may be in.
Mocks, fakes or stubs
The definitions of mock, fake and stub are not consistent across the literature.[1][2][3][4][5][6] Nonetheless, all represent a production object in a testing environment by exposing the same interface.
Regardless of name, the simplest form returns pre-arranged responses (as in a method stub) and the most complex form imitates a production object's complete logic.
Such a test object might contain assertions to examine the context of each call. For example, a mock object might assert the order in which its methods are called, or assert consistency of data across method calls.
In the book The Art of Unit Testing,[7] mocks are described as fake objects that help decide whether a test passed or failed by verifying whether an interaction with an object occurred; everything else is defined as a stub. In that book, a fake is anything that is not real and, depending on its usage, can be either a stub or a mock.
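The stub/mock distinction can be made concrete with `unittest.mock`: the same kind of object acts as a stub when the test only consumes its canned answer, and as a mock when the test asserts on the interaction afterwards. The method names below are illustrative:

```python
from unittest.mock import Mock

# Used as a stub: the test only consumes a pre-arranged response.
stub = Mock()
stub.lookup.return_value = 42          # canned answer
assert stub.lookup("any-key") == 42    # no assertion about how it was called

# Used as a mock: the test verifies that the interaction occurred.
mock = Mock()
mock.save("record")
mock.save.assert_called_once_with("record")  # behavior, not just output
```

The object is the same in both cases; what differs is whether the test asserts on the returned value or on the recorded calls.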
Setting expectations
Consider an example where an authorization subsystem has been mocked. The mock object implements an isUserAllowed(task : Task) : boolean[8] method to match that in the real authorization class. Many advantages follow if it also exposes an isAllowed : boolean property, which is not present in the real class. This allows test code to easily set the expectation that a user will, or will not, be granted permission in the next call and therefore to readily test the behavior of the rest of the system in either case.
Similarly, mock-only settings could ensure that subsequent calls to the sub-system will cause it to throw an exception, hang without responding, or return null etc. Thus, it is possible to develop and test client behaviors for realistic fault conditions in back-end sub-systems, as well as for their expected responses. Without such a simple and flexible mock system, testing each of these situations may be too laborious for them to be given proper consideration.
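A minimal hand-written sketch of such a mock, with all names hypothetical (the section's `isUserAllowed`/`isAllowed` rendered in Python style as `is_user_allowed`/`allowed`, plus a test-only fault switch):

```python
# Hypothetical mock of the authorization subsystem. The real class exposes
# only is_user_allowed(task); the mock adds test-only settings so each test
# can dictate the next answer or simulate a back-end fault.
class MockAuthorizer:
    def __init__(self):
        self.allowed = True      # test-only setting, absent from the real class
        self.fail_with = None    # test-only: exception class to raise, if any

    def is_user_allowed(self, task):
        if self.fail_with is not None:
            raise self.fail_with("simulated back-end fault")
        return self.allowed

auth = MockAuthorizer()
auth.allowed = False
assert auth.is_user_allowed("edit_page") is False  # denial path, no real back end

auth.fail_with = ConnectionError                   # simulate a failing subsystem
handled = False
try:
    auth.is_user_allowed("edit_page")
except ConnectionError:
    handled = True
assert handled
```

Flipping `allowed` or `fail_with` lets a test reach permission-denied and fault-handling paths that would be laborious to trigger through a real authorization service.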
Writing log strings
A mock database object's save(person : Person) method may not contain much (if any) implementation code. It might check the existence and perhaps the validity of the Person object passed in for saving (see fake vs. mock discussion above), but beyond that there might be no other implementation.
This is a missed opportunity. The mock method could add an entry to a public log string. The entry need be no more than "Person saved",[9]: 146–7 or it may include some details from the person object instance, such as a name or ID. If the test code also checks the final contents of the log string after various series of operations involving the mock database, then it is possible to verify that in each case exactly the expected number of database saves have been performed. This can find otherwise invisible performance-sapping bugs, for example, where a developer, nervous of losing data, has coded repeated calls to save() where just one would have sufficed.
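The logging technique above can be sketched as follows; `MockDatabase` and the log format are illustrative:

```python
# Sketch of a mock database whose save() appends to a public log,
# letting the test count exactly how many saves occurred.
class MockDatabase:
    def __init__(self):
        self.log = []                       # public record of interactions

    def save(self, person):
        self.log.append(f"Person saved: {person}")

db = MockDatabase()
db.save("Alice")
db.save("Alice")                            # an accidental repeated save
assert len(db.log) == 2                     # the log exposes the extra call
```

A test asserting `len(db.log) == 1` after a single logical operation would fail here, surfacing the redundant `save()` call that state-based checks would miss.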
Use in test-driven development
Programmers working with the test-driven development (TDD) method make use of mock objects when writing software. Mock objects meet the interface requirements of, and stand in for, more complex real ones; thus they allow programmers to write and unit-test functionality in one area without calling complex underlying or collaborating classes.[9]: 144–5 Using mock objects allows developers to focus their tests on the behavior of the system under test without worrying about its dependencies. For example, testing a complex algorithm based on multiple objects being in particular states can be clearly expressed using mock objects in place of real objects.
Apart from complexity issues and the benefits gained from this separation of concerns, there are practical speed issues involved. Developing a realistic piece of software using TDD may easily involve several hundred unit tests. If many of these induce communication with databases, web services and other out-of-process or networked systems, then the suite of unit tests will quickly become too slow to be run regularly. This in turn leads to bad habits and a reluctance by the developer to maintain the basic tenets of TDD.
When mock objects are replaced by real ones, the end-to-end functionality will need further testing. These will be integration tests rather than unit tests.
Limitations
The use of mock objects can closely couple the unit tests to the implementation of the code that is being tested. For example, many mock object frameworks allow the developer to check the order and number of times that mock object methods were invoked by the real object being tested; subsequent refactoring of the code under test could therefore cause the test to fail even though all mocked methods still obey the contract of the previous implementation. This illustrates that unit tests should test a method's external behavior rather than its internal implementation. Over-use of mock objects as part of a suite of unit tests can result in a dramatic increase in the amount of maintenance that the tests themselves need during system evolution as refactoring takes place. Improper maintenance of such tests during evolution can allow bugs to be missed that would otherwise be caught by unit tests using instances of real classes. Conversely, simply mocking one method might require far less configuration than setting up an entire real class and therefore reduce maintenance needs.
Mock objects have to accurately model the behavior of the object they are mocking, which can be difficult to achieve if the object being mocked comes from another developer or project or if it has not even been written yet. If the behavior is not modelled correctly, then the unit tests may register a pass even though a failure would occur at run time under the same conditions that the unit test is exercising, thus rendering the unit test inaccurate.[10]
References
- ^ "Unit testing best practices with .NET Core and .NET Standard - Let's speak the same language (Fake, Stubs and Mocks)". Microsoft Docs. Archived from the original on 3 September 2022.
- ^ D'Arcy, Hamlet (21 October 2007). "Mocks and Stubs aren't Spies". behind the times. Archived from the original on 20 June 2017.
- ^ "Mocks, Fakes, Stubs and Dummies". XUnitPatterns.com. Archived from the original on 17 January 2024.
- ^ "What's the difference between a mock & stub?". Stack Overflow. Archived from the original on 4 July 2022.
- ^ "What's the difference between faking, mocking, and stubbing?".
- ^ Feathers, Michael (2005). "Sensing and separation". Working effectively with legacy code. NJ: Prentice Hall. p. 23 et seq. ISBN 0-13-117705-2.
- ^ Osherove, Roy (2009). "Interaction testing with mock objects et seq". The art of unit testing. Manning. ISBN 978-1-933988-27-6.
- ^ These examples use a nomenclature that is similar to that used in Unified Modeling Language
- ^ a b Beck, Kent (2003). Test-Driven Development By Example. Boston: Addison Wesley. ISBN 0-321-14653-0.
- ^ InJava.com to Mocking | O'Reilly Media
External links
- Tim Mackinnon (8 September 2009). "A Brief History of Mock Objects". Mockobjects.com. Archived from the original on 7 June 2023.
- Test Doubles: a section of a book on unit testing patterns.
- All about mock objects! Portal concerning mock objects
- "Using mock objects for complex unit tests". IBM developerWorks. 16 October 2006. Archived from the original on 4 May 2007.
- "Unit testing with mock objects". IBM developerWorks.
- Mocks Aren't Stubs (Martin Fowler) Article about developing tests with Mock objects. Identifies and compares the "classical" and "mockist" schools of testing. Touches on points about the impact on design and maintenance.
Fundamentals
Definition
A mock object is a test-specific implementation that simulates the behavior of a real object within software testing, allowing developers to replace dependencies with controlled substitutes to focus on the logic of the system under test (SUT). Unlike the actual object, which might involve complex operations such as network calls or database interactions, a mock object is designed to respond predictably to method invocations without executing the full underlying functionality. This simulation enables isolated testing of individual components by mimicking interfaces and return values as needed.[2]

Key characteristics of mock objects include their programmability, which permits defining specific responses to method calls in advance, and their ability to record and verify interactions for correctness. Developers configure mocks by setting expectations (such as the sequence, arguments, and frequency of calls) during test setup, then assert these expectations after executing the SUT to ensure proper collaboration between objects. This verification aspect distinguishes mocks as tools for behavioral testing, confirming not just outputs but how the SUT engages with its dependencies. Mock objects typically implement the same interface as the real object, ensuring seamless substitution while remaining lightweight and deterministic.[1][2]

Mock objects are primarily employed in unit testing to isolate code units from external systems, facilitating faster and more reliable tests. For instance, consider a unit test for a user service that queries a database for records; a mock database connection can be programmed to return predefined data, avoiding real database access. The following pseudocode illustrates this:

MockDatabaseConnection mockDb = createMock(DatabaseConnection);
when(mockDb.executeQuery("SELECT * FROM users")).thenReturn(predefinedUserList); // Predefined responses
UserService service = new UserService(mockDb);
List<User> result = service.getActiveUsers();
assertEquals(expectedActiveUsers, result); // Verify output
verify(mockDb).executeQuery("SELECT * FROM users"); // Verify interaction occurred
Historical Development
The concept of mock objects originated in the late 1990s amid the growing emphasis on unit testing in object-oriented software development, particularly as influenced by the principles of Extreme Programming (XP), which Kent Beck formalized in his 1999 book Extreme Programming Explained. This methodology stressed rigorous testing practices, including test-driven development, to ensure code quality and adaptability. Mock objects addressed the need to isolate units of code from complex dependencies during testing, building on early unit testing tools like SUnit for Smalltalk, which Beck had developed in the mid-1990s.

A pivotal milestone came in 2000 when Tim Mackinnon, Steve Freeman, and Philip Craig introduced the term and technique in their paper "Endo-Testing: Unit Testing with Mock Objects," presented at the XP2000 conference.[5] This work formalized mocks as programmable substitutes for real objects, enabling behavior verification in tests without relying on full system integration. Concurrently, Kent Beck and Erich Gamma released the first version of JUnit in 2000, a Java unit testing framework that provided the infrastructure for incorporating mock objects, though initial implementations often involved manual creation. Beck further popularized the approach in his 2002 book Test-Driven Development: By Example, where he illustrated how mocks support iterative development by allowing developers to verify interactions early.

The early 2000s saw mocks evolve from ad-hoc, hand-written classes to automated frameworks that simplified creation and verification. Shortly after the paper, its authors developed jMock, one of the first automated mock frameworks for Java, released around 2002, which provided tools for specifying and verifying object interactions.[3] In Java, Mockito emerged in 2007, offering a fluent API for dynamic mock generation and stubbing, significantly reducing boilerplate code compared to manual mocks.
Similarly, Moq for .NET followed in 2008, leveraging LINQ expression trees to enable expressive, compile-time-safe mocking.[6] By the 2010s, mock objects became integral to agile methodologies, with widespread adoption in continuous integration pipelines and behavior-driven development. In Python, the standard library incorporated built-in support via the unittest.mock module in version 3.3 (released 2012), standardizing mocking for a broader developer audience.
A key industry milestone occurred in 2013 with the publication of ISO/IEC/IEEE 29119, an international standard for software testing that references mock-like techniques, such as stubs and drivers, for isolating components in unit and integration tests, thereby endorsing their role in systematic testing processes. This formal recognition helped solidify mock objects as a cornerstone of modern software engineering practices.
Motivations and Benefits
Reasons for Use
Mock objects enable the isolation of the unit under test from its external dependencies, such as databases, APIs, or other services, by simulating their behavior without requiring the actual components to be present or operational. This approach prevents test flakiness caused by real-world variability, like network delays or external system downtime, allowing developers to focus solely on the logic of the individual unit. For instance, in testing a service that interacts with a warehouse inventory system, a mock can replace the real warehouse interface, ensuring the test examines only the service's decision-making process.[1][5][7][8]

By using mocks, tests execute more quickly and reliably compared to those involving full system integration, as they avoid the overhead of setting up and tearing down complex environments. Real dependencies, such as database connections, can significantly slow down test suites, sometimes extending execution times to minutes or hours, while mocks provide instantaneous responses, enabling faster feedback loops in development cycles. This reliability stems from the controlled nature of mocks, which deliver consistent results across runs, reducing false positives or negatives due to external factors. Empirical analysis of open-source projects shows that mocks are frequently employed for such dependencies to cut test times dramatically, with developers reporting up to 82% usage for external resources like web services.[7][8][5]

Mock objects facilitate a focus on the specific behaviors and interactions of the unit, verifying side effects and method calls rather than just the final output state. This behavioral verification ensures that the unit adheres to expected contracts with its dependencies, catching issues like incorrect parameter passing early in development.
In practice, this supports refactoring by maintaining interface compatibility, as changes to the unit's interactions with mocks reveal contract mismatches without altering the broader system. Additionally, mocks promote cost efficiency by minimizing the need for dedicated test environments or hardware, particularly in distributed systems like microservices, where provisioning real instances can be resource-intensive. Studies indicate that 45.5% of developers use mocks specifically for hard-to-configure dependencies, lowering overall testing overhead.[1][5][7][8]

Illustrative Analogies
One common analogy for mock objects likens them to stunt doubles in filmmaking. Just as a stunt double performs dangerous or complex actions on behalf of the lead actor to ensure safety and efficiency during production, a mock object simulates the behavior of a real dependency, such as a database or external service, allowing the code under test to interact with it in a controlled manner without risking real-world consequences like network failures or data corruption.[9] This approach enables developers to focus on verifying the logic of the primary component while isolating it from unpredictable external elements.[10]

Another illustrative comparison is to a flight simulator used in pilot training. Similar to how a flight simulator replicates aircraft responses and environmental conditions to prepare pilots for various scenarios without the hazards of actual flight, mock objects recreate the expected interactions and responses of dependencies in a testing environment, permitting thorough examination of code behavior under isolated, repeatable conditions.[11] This analogy highlights the value of mocks in dependency isolation, where the "simulation" allows for safe rehearsal of edge cases and failures that would be impractical or costly to reproduce with live systems.[12]

Mocks can also be thought of as stand-ins for complex props in theater productions. In a play, elaborate props like a fully functional antique clock might be replaced by a simpler replica to facilitate rehearsals and performances without the logistics of sourcing or maintaining the authentic item; likewise, mock objects serve as programmable substitutes for intricate real-world components, enabling tests to proceed smoothly by providing just enough functionality to mimic the essential interface.
While these analogies aid in conceptualizing mock objects, they inherently simplify the concept: unlike passive stunt doubles, props, or simulators, mocks are actively configurable and verifiable through code, allowing precise control over behaviors and interactions that go beyond mere imitation.

Technical Implementation
Types and Distinctions
Mock objects are distinguished from other test doubles primarily by their role in verifying interactions rather than merely providing predefined responses. In unit testing, stubs are objects that return canned or fixed responses to calls, allowing the test to isolate the unit under test without asserting whether specific methods were invoked. For instance, a stub for an email service might always return a success message regardless of input, focusing solely on enabling the test to proceed with expected outputs. Mocks, in contrast, actively record and verify that particular methods were called with the anticipated parameters and in the correct sequence, emphasizing behavioral verification of the system under test. This distinction ensures mocks are used to confirm not just the state but the expected behavior during execution.[1]

Fakes represent another category of test doubles, offering functional implementations that approximate real components but with simplifications unsuitable for production, such as an in-memory database that mimics a full relational database's behavior without persistence. Unlike stubs, which provide canned responses without verification, or mocks, which prioritize interaction checks over functionality, fakes provide a working but lightweight alternative for tests requiring more realistic interactions. Spies are similar to stubs but record calls made to them, allowing verification of interactions while executing some real methods on the object. Dummies, a simpler form, serve merely as placeholders to satisfy method signatures without any response or verification logic.
These categories collectively form test doubles, with mocks specifically targeting interaction-based assertions.[1]

The terminology originates from Gerard Meszaros' seminal work xUnit Test Patterns: Refactoring Test Code (2007), which categorizes these objects to promote clearer testing practices; there, mocks are defined as tools for behavior verification, distinguishing them from stubs' focus on state verification through predefined responses. This framework has influenced modern testing libraries like Mockito and Moq, standardizing the use of mocks for verifying collaborations between objects.

Selection among these types depends on testing goals: mocks suit behavior-driven tests where confirming method invocations is crucial, such as verifying that an API endpoint is called exactly once with specific data during a user registration flow. Stubs are preferable for simple value-return scenarios, like simulating a fixed discount calculation without checking if the method was invoked. Fakes are chosen for integration-like tests needing operational realism, such as using an in-memory queue to test message processing without external dependencies. This targeted application prevents overcomplication and aligns test doubles with the desired verification level.[1]

Configuration and Expectations
The configuration of a mock object begins with the creation of an instance that simulates the behavior of a real dependency, typically through framework-specific APIs that allow interception of method calls without altering the production code. In frameworks like Mockito for Java, this involves annotating or programmatically instantiating a mock, such as using the mock(Class) method to generate a proxy that overrides targeted methods.[13] Similarly, in jMock, mocks are created via a central Mockery context that manages their lifecycle, ensuring isolation from the system under test.[14] This setup promotes loose coupling by depending on interfaces rather than concrete implementations, enabling tests to focus on the unit's logic independently of external components.[5]
Once instantiated, behaviors are defined by specifying return values, exceptions, or side effects for intercepted methods, effectively programming the mock to respond as needed for the test scenario. For instance, Mockito employs a stubbing syntax like when(mock.method(args)).thenReturn(value) to configure a method to return a predetermined value or throw an exception on invocation, allowing precise control over simulated responses.[13] In jMock, this is achieved within an expectations block using will(returnValue(value)) to stub outcomes, which integrates seamlessly with the test's assertion context.[14] These configurations are applied prior to executing the unit under test, ensuring the mock provides consistent, predictable inputs or outputs that mimic real-world interactions without requiring full system setup.[5]
Expectations outline the anticipated interactions with the mock, such as the number of calls to specific methods, the parameters passed, or the order of invocations, to validate the unit's correct usage of dependencies. Frameworks define these via cardinality constraints, like jMock's oneOf(mock.method(args)) for exactly one call or allowing(mock.method()) for zero or more, which can include sequence ordering with inSequence() to enforce temporal relationships.[14] Mockito similarly supports expectation setup through verification modes, though primarily focused on post-interaction checks, with initial configurations aiding in anticipating call counts via stubbing chains.[13] This preemptive definition helps detect deviations early, maintaining test reliability across languages like Java or C# where similar patterns emerge in tools such as Moq.[5]
Argument matchers enhance flexibility in expectations by allowing non-exact comparisons, avoiding brittle tests tied to literal values. Common techniques include generics like anyString() in Mockito to match any string argument, or jMock's implicit matching via expected patterns in method signatures, which accommodates variable inputs while still verifying intent.[13][14] These matchers are integral to the setup, fostering robust configurations that prioritize behavioral correctness over rigid parameter equality, a pattern that generalizes to other ecosystems for improved test maintainability.[5]
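The configuration, stubbing, and matcher ideas above translate directly into the stdlib `unittest.mock`; this sketch uses illustrative names (`gateway`, `charge`, `refund`):

```python
from unittest.mock import Mock, ANY

# Configure behaviors on a mock payment gateway (names are illustrative).
gateway = Mock()
gateway.charge.return_value = "ok"          # stubbed return value
gateway.refund.side_effect = TimeoutError   # stubbed fault: raises when called

# Exercise the configured behavior.
assert gateway.charge("user-1", 9.99) == "ok"

# Argument matcher: accept any amount, but require the exact user id.
gateway.charge.assert_called_once_with("user-1", ANY)
```

`return_value` and `side_effect` play the role of Mockito's `thenReturn`/`thenThrow`, and `ANY` that of `anyString()`-style matchers, keeping the expectation loose enough to survive refactoring of incidental values.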
Interaction Verification
Interaction verification in mock objects involves checking whether the system under test (SUT) interacted with the mock as anticipated during test execution, focusing on behavior rather than state. This process, often termed behavior verification, ensures that methods on the mock were invoked with the correct arguments, frequency, and sequence, thereby validating the SUT's behavioral dependencies. Unlike state-based testing, which inspects outputs or internal states, interaction verification relies on the mock's recorded invocation history to assert expected collaborations.[1]

Modern mocking frameworks provide dedicated methods for these checks, such as Mockito's verify() function, which confirms that a specific method was called on the mock. For instance, verify(mock).method(arg) asserts at least one invocation with the given argument, while modes like times(n) specify exact call counts, never() ensures no calls occurred, and inOrder() verifies sequential interactions across mocks. These assertions leverage the framework's internal recording of all method invocations, allowing post-execution analysis without altering the SUT's logic. Frameworks like Mockito automatically capture these interactions during test runs, enabling flexible verification that adapts to complex scenarios.[15]
To inspect interaction details beyond basic assertions, developers can record call logs or traces, such as appending invocation details to lists or strings for manual review, or use specialized tools like argument captors to extract and validate passed parameters. This recording mechanism facilitates debugging by providing a traceable history of interactions, including timestamps or order indices in advanced setups. If verifications fail, frameworks raise descriptive exceptions; for example, Mockito throws WantedButNotInvoked when an expected call is missing, or VerificationInOrderFailure for sequence mismatches, highlighting discrepancies like incorrect argument types or invocation counts to guide test refinements. These errors promote test maintainability by pinpointing behavioral deviations early.[15]
For asynchronous or time-sensitive interactions, contemporary frameworks support advanced verification features, such as timeout-based checks that wait for invocations within a specified duration before failing. In Mockito, verify(mock, timeout(100).times(1)).asyncMethod() polls for the expected call up to 100 milliseconds, accommodating non-deterministic async behaviors without blocking tests indefinitely. This capability is essential for verifying interactions in concurrent environments, ensuring robustness without over-specifying thread timings.[15]
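The count, absence, and ordering checks described above look like this in `unittest.mock` (the `channel` object and its methods are illustrative):

```python
from unittest.mock import Mock, call

# Record interactions on a mock message channel (names are illustrative).
channel = Mock()
channel.send("hello")
channel.send("world")

# Post-execution verification of the recorded invocation history:
assert channel.send.call_count == 2                 # exact call count
channel.close.assert_not_called()                   # method never invoked
assert channel.send.call_args_list == [             # arguments and order
    call("hello"),
    call("world"),
]
```

`call_args_list` corresponds to Mockito's `inOrder()` checks, and `assert_not_called()` to `verify(mock, never())`; failures raise `AssertionError` with a description of the mismatch.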
Applications in Development
Role in Test-Driven Development
Mock objects play a central role in the test-driven development (TDD) process by enabling developers to isolate the system under test (SUT) from its dependencies, facilitating the iterative "red-green-refactor" cycle. In the red phase, a failing test is written first, often using a mock object to define expected interactions with collaborators, such as method calls or return values, without implementing the actual dependencies. This approach verifies interfaces and behaviors early, ensuring the test fails due to missing implementation rather than external issues. During the green phase, minimal code is added to the SUT to make the test pass, typically by satisfying the mock's expectations through simple stubs or direct implementations. In the refactor phase, the code is cleaned up while updating mocks to reflect refined behaviors, maintaining test reliability without altering expected outcomes.[2][1]

The benefits of mock objects in TDD include supporting the writing of tests before production code exists, which drives the design of loosely coupled systems by focusing on interfaces rather than concrete implementations. By verifying interactions via mocks, developers can confirm that the SUT behaves correctly in terms of collaborations, promoting modular and testable architectures from the outset. This isolation also accelerates feedback loops, as mocks eliminate the need for slow or unreliable external components, allowing rapid iteration and early detection of design flaws. Furthermore, mocks encourage a focus on observable behaviors, aligning with TDD's goal of building confidence in the system's functionality through verifiable contracts.[1][2]

A typical workflow example involves developing a user authentication module that depends on an external notification service.
In the red phase, a test is written asserting that successful authentication triggers a notification via the service; a mock is configured to expect a specific method call, like sendWelcomeEmail(user), causing the test to fail. For the green phase, the authentication class is implemented to invoke the mock's method, passing the test. During refactoring, the code is optimized, perhaps by extracting the service interaction into a dedicated method, while the mock is adjusted to verify additional parameters, such as user details, ensuring the interaction remains precise. This step-by-step process drives incremental implementation, with each cycle refining the module's interface.[1]
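A compressed sketch of that cycle in `unittest.mock`, with every name (`authenticate`, `send_welcome_email`) hypothetical; the assertions below are the red-phase test, and the function body is the green-phase minimal implementation:

```python
from unittest.mock import Mock

# Green phase: minimal implementation that satisfies the mock's expectation.
def authenticate(user, notifier):
    # The red-phase test demanded exactly this interaction; nothing more
    # is implemented yet.
    notifier.send_welcome_email(user)
    return True

# Red-phase test (now passing): the mock stands in for the notification
# service, which need not exist yet.
notifier = Mock()
assert authenticate("alice", notifier) is True
notifier.send_welcome_email.assert_called_once_with("alice")
```

Written before `authenticate` existed, the two assertions would fail (red); the one-line body makes them pass (green), after which the interaction check keeps refactoring honest.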
The use of mock objects in TDD has evolved from classic TDD, which emphasizes state-based testing with real or simple stub objects where possible, to the mockist style that systematically employs mocks for behavior verification. Classic TDD, as originally outlined by Kent Beck, focuses on inside-out development starting from core domain logic and using state checks to validate outcomes, minimizing mocks to avoid over-specification. In contrast, mockist TDD adopts an outside-in approach, using mocks extensively to test interactions across layers from the start, which helps define roles and dependencies early but can lead to more brittle tests if expectations become overly detailed. This distinction highlights mock objects' role in shifting TDD toward interaction-focused design, though practitioners often blend both styles for balanced coverage.[1]
