# Testing

One of the main reasons to control dependencies is to allow for easier testing. Learn some tips and
tricks for writing better tests with the library.

## Overview

In the article <doc:LivePreviewTest> you learned how to define a ``TestDependencyKey/testValue``
when registering your dependencies, which will be automatically used during tests. In this article
we cover more detailed information about how to actually write tests with overridden dependencies,
as well as some tips and gotchas to keep in mind.

* [Altered execution contexts](#Altered-execution-contexts)
* [Changing dependencies during tests](#Changing-dependencies-during-tests)
* [Testing gotchas](#Testing-gotchas)

## Altered execution contexts

It is possible to completely alter the execution context in which a feature's logic runs, which is
great for tests. It means your feature doesn't need to actually make network requests just to test
how your feature deals with data returned from an API, and your feature doesn't need to interact
with the file system just to test how data gets loaded or persisted.

The tool for doing this is ``withDependencies(_:operation:)-3vrqy``, which allows you to specify
which dependencies should be overridden for the test, and then construct your feature's model
in that context:

```swift
func testFeature() async {
  let model = withDependencies {
    $0.continuousClock = ImmediateClock()
    $0.date.now = Date(timeIntervalSince1970: 1234567890)
  } operation: {
    FeatureModel()
  }

  // Call methods on `model` and make assertions
}
```

As long as all of your dependencies are declared with `@Dependency` as instance properties on
`FeatureModel`, its entire execution will happen in a context in which any reference to
`continuousClock` is an `ImmediateClock` and any reference to `date.now` will always report that
the date is "Feb 13, 2009 at 3:31 PM".
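
To make this concrete, here is a minimal sketch of what such a model could look like. The
`timestamp` property and `refreshButtonTapped` method are hypothetical and only illustrate how the
overrides above apply:

```swift
import Dependencies
import SwiftUI

class FeatureModel: ObservableObject {
  // Declared as instance properties, so overrides installed by
  // `withDependencies` are visible to everything the model does.
  @Dependency(\.continuousClock) var clock
  @Dependency(\.date.now) var now

  @Published var timestamp: Date?

  func refreshButtonTapped() async {
    // With `ImmediateClock` this sleep completes immediately in tests.
    try? await clock.sleep(for: .seconds(1))
    // With the overridden date generator this always reports Feb 13, 2009.
    timestamp = now
  }
}
```

If the model reached for `Date()` or `ContinuousClock()` directly instead of going through
`@Dependency`, the overrides in the test would have no effect.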

It is important to note that if `FeatureModel` creates _other_ models inside its methods, then it
has to be careful about how it does so. In order for `FeatureModel`'s dependencies to propagate
to the new child model, it must construct the child model in an altered execution context that
passes along the dependencies. The tool for this is
``withDependencies(from:operation:file:line:)-2qx0c``, which can be used like this:

```swift
class FeatureModel: ObservableObject {
  // ...

  func buttonTapped() {
    self.child = withDependencies(from: self) {
      ChildModel()
    }
  }
}
```

This guarantees that when `FeatureModel`'s dependencies are overridden in tests, the overrides
will also trickle down to `ChildModel`.
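
For example, a test that overrides a dependency on the parent can assert on state derived from it
in the child. The `child` and `creationDate` properties below are hypothetical, assuming
`ChildModel` reads `@Dependency(\.date.now)` when it is created:

```swift
func testChildModelSeesOverrides() {
  let model = withDependencies {
    $0.date.now = Date(timeIntervalSince1970: 1234567890)
  } operation: {
    FeatureModel()
  }

  model.buttonTapped()

  // Because the child was constructed with `withDependencies(from: self)`,
  // it sees the overridden date rather than the live `Date()`.
  XCTAssertEqual(
    model.child?.creationDate,
    Date(timeIntervalSince1970: 1234567890)
  )
}
```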

## Changing dependencies during tests

While it is most common to set up all dependencies at the beginning of a test and then make
assertions, sometimes it is necessary to also change the dependencies in the middle of a test.
This can be very handy for modeling test flows in which a dependency is in a failure state at
first, but then later becomes successful.

For example, suppose we have a login feature in which a failed login attempt causes an error
message to appear, and a later, successful login makes that message go away. We can test that
entire flow, end-to-end, by starting the API client dependency in a state where login fails, and
then later changing the dependency so that it succeeds, using
``withDependencies(_:operation:)-3vrqy``:

```swift
func testRetryFlow() async {
  let model = withDependencies {
    $0.apiClient.login = { email, password in
      struct LoginFailure: Error {}
      throw LoginFailure()
    }
  } operation: {
    LoginModel()
  }

  await model.loginButtonTapped()
  XCTAssertEqual(model.errorMessage, "We could not log you in. Please try again")

  await withDependencies {
    $0.apiClient.login = { email, password in
      LoginResponse(user: User(id: 42, name: "Blob"))
    }
  } operation: {
    await model.loginButtonTapped()
    XCTAssertEqual(model.errorMessage, nil)
  }
}
```

Even though the `LoginModel` was created in the context of the API client failing, it still sees
the updated dependency when run in the new `withDependencies` context.
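
This works because the model reads the dependency through its `@Dependency` property wrapper at
the point of use. A rough sketch of how such a model might look, where the `apiClient` dependency
and the `errorMessage`, `email`, and `password` properties are assumed from the example above:

```swift
class LoginModel: ObservableObject {
  // Accessed through the property wrapper at call time, so the override
  // installed by the surrounding `withDependencies` block is respected.
  @Dependency(\.apiClient) var apiClient

  @Published var errorMessage: String?
  var email = ""
  var password = ""

  func loginButtonTapped() async {
    do {
      _ = try await apiClient.login(email, password)
      errorMessage = nil
    } catch {
      errorMessage = "We could not log you in. Please try again"
    }
  }
}
```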

## Testing gotchas

This is not well known, but when an application target runs tests it actually boots up a simulator
and runs your application's entry point in that simulator. This means that while tests are running,
your application's code is also running alongside them. This can be a huge gotcha because it means
you may unknowingly be making network requests, tracking analytics, writing data to user defaults
or to disk, and more.

This usually flies under the radar and you just won't know it's happening, which can be problematic.
But once you start using this library to control your dependencies, the problem can surface in a
very visible manner. Typically, when a dependency is used in a test context without being overridden,
a test failure occurs. This makes it possible for your test to pass, yet the test suite to fail for
a seemingly mysterious reason: the code in the _app host_ is also running in the test context, and
any dependencies it accesses will cause test failures.

This only happens when running tests in an _application target_, that is, a target that is
specifically used to launch the application in a simulator or on a device. This does not happen
when running tests for frameworks or SPM libraries, which is yet another good reason to modularize
your code base.

However, if you aren't in a position to modularize your code base right now, there is a quick
fix. Our [XCTest Dynamic Overlay][xctest-dynamic-overlay-gh] library, which is transitively included
with this library, comes with a property you can check to see if tests are currently running. If
they are, you can omit the entire entry point of your application:

```swift
import SwiftUI
import XCTestDynamicOverlay

@main
struct MyApp: App {
  var body: some Scene {
    WindowGroup {
      if !_XCTIsTesting {
        // Your real root view
      }
    }
  }
}
```

That will allow tests to run in the application target without your actual application code
interfering.

[xctest-dynamic-overlay-gh]: http://github.com/pointfreeco/xctest-dynamic-overlay