@@ -15,6 +15,12 @@ delegate to ``DependencyKey/liveValue`` if left unimplemented.
Leveraging these alternative dependency implementations allows you to run your features in safer
environments for tests, previews, and more.

* [Live value](#Live-value)
* [Test value](#Test-value)
* [Preview value](#Preview-value)
* [Separating interface and implementation](#Separating-interface-and-implementation)
* [Cascading rules](#Cascading-rules)

## Live value

The ``DependencyKey/liveValue`` static property from the ``DependencyKey`` protocol is the only
152 changes: 152 additions & 0 deletions Sources/Dependencies/Documentation.docc/Articles/Testing.md
@@ -0,0 +1,152 @@
# Testing

One of the main reasons to control dependencies is to allow for easier testing. Learn some tips and
tricks for writing better tests with the library.

## Overview

In the article <doc:LivePreviewTest> you learned how to define a ``TestDependencyKey/testValue``
when registering your dependencies, which will be automatically used during tests. In this article
we cover in more detail how to write tests with overridden dependencies,
as well as some tips and gotchas to keep in mind.

* [Altered execution contexts](#Altered-execution-contexts)
* [Changing dependencies during tests](#Changing-dependencies-during-tests)
* [Testing gotchas](#Testing-gotchas)

## Altered execution contexts

It is possible to completely alter the execution context in which a feature's logic runs, which is
great for tests. It means your feature doesn't need to actually make network requests just to test
how it deals with data returned from an API, and it doesn't need to interact with the file system
just to test how data gets loaded or persisted.

The tool for doing this is ``withDependencies(_:operation:)-3vrqy``, which allows you to specify
which dependencies should be overridden for the test, and then construct your feature's model
in that context:

```swift
func testFeature() async {
  let model = withDependencies {
    $0.continuousClock = ImmediateClock()
    $0.date.now = Date(timeIntervalSince1970: 1234567890)
  } operation: {
    FeatureModel()
  }

  // Call methods on `model` and make assertions
}
```

As long as all of your dependencies are declared with `@Dependency` as instance properties on
`FeatureModel`, its entire execution will happen in a context in which any reference to
`continuousClock` is an `ImmediateClock` and any reference to `date.now` will always report that
the date is "Feb 13, 2009 at 3:31 PM".
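
For instance, a `FeatureModel` compatible with the test above might look something like this. This
is a hypothetical sketch, not code from the library: the `refreshButtonTapped` method and `message`
property are made up for illustration, but the `\.continuousClock` and `\.date` dependencies are
the ones overridden in the test:

```swift
import Dependencies
import SwiftUI

class FeatureModel: ObservableObject {
  @Dependency(\.continuousClock) var clock
  @Dependency(\.date.now) var now

  @Published var message = ""

  func refreshButtonTapped() async throws {
    // With the `ImmediateClock` installed in the test this sleep finishes instantly,
    // and `now` reports the overridden date instead of the wall clock.
    try await self.clock.sleep(for: .seconds(1))
    self.message = "Last refreshed: \(self.now)"
  }

  // ...
}
```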

It is important to note that if `FeatureModel` creates _other_ models inside its methods, then it
has to be careful about how it does so. In order for `FeatureModel`'s dependencies to propagate
to the new child model, it must construct the child model in an altered execution context that
passes along the dependencies. The tool for this is
``withDependencies(from:operation:file:line:)-2qx0c``, which can be used like this:

```swift
class FeatureModel: ObservableObject {
  // ...

  func buttonTapped() {
    self.child = withDependencies(from: self) {
      ChildModel()
    }
  }
}
```

This guarantees that when `FeatureModel`'s dependencies are overridden in tests, the overrides
will also trickle down to `ChildModel`.
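
For example, a test along these lines would pass. This is a sketch: it assumes `ChildModel` reads
`@Dependency(\.date.now)` when it is initialized and exposes that date through a hypothetical
`creationDate` property:

```swift
func testChildModelInheritsDependencies() {
  let model = withDependencies {
    $0.date.now = Date(timeIntervalSince1970: 1234567890)
  } operation: {
    FeatureModel()
  }

  model.buttonTapped()

  // The child was constructed through `withDependencies(from: self)`, so it sees
  // the same overridden date as its parent.
  XCTAssertEqual(model.child?.creationDate, Date(timeIntervalSince1970: 1234567890))
}
```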

## Changing dependencies during tests

While it is most common to set up all dependencies at the beginning of a test and then make
assertions, sometimes it is necessary to also change the dependencies in the middle of a test.
This can be very handy for modeling test flows in which a dependency is in a failure state at
first, but then later becomes successful.

For example, suppose we have a login feature in which a failed login attempt shows an error
message, and a subsequent successful login hides it again. We can test that entire flow,
end-to-end, by starting the API client dependency in a state where login fails, and then later
changing the dependency so that it succeeds, using ``withDependencies(_:operation:)-3vrqy``:

```swift
func testRetryFlow() async {
  let model = withDependencies {
    $0.apiClient.login = { email, password in
      struct LoginFailure: Error {}
      throw LoginFailure()
    }
  } operation: {
    LoginModel()
  }

  await model.loginButtonTapped()
  XCTAssertEqual(model.errorMessage, "We could not log you in. Please try again")

  await withDependencies {
    $0.apiClient.login = { email, password in
      LoginResponse(user: User(id: 42, name: "Blob"))
    }
  } operation: {
    await model.loginButtonTapped()
    XCTAssertEqual(model.errorMessage, nil)
  }
}
```

Even though the `LoginModel` was created in a context where the API client fails, it still sees
the updated dependency when run in the new `withDependencies` context.
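
For reference, the `LoginModel` exercised by this test could be sketched like this. It is
hypothetical: the `\.apiClient` dependency, its `login` endpoint, and the published properties are
stand-ins taken from the example above, not part of the library:

```swift
import Dependencies
import SwiftUI

class LoginModel: ObservableObject {
  @Dependency(\.apiClient) var apiClient

  @Published var email = ""
  @Published var password = ""
  @Published var errorMessage: String?

  func loginButtonTapped() async {
    do {
      // Calls whichever `login` endpoint the current dependency context provides.
      _ = try await self.apiClient.login(self.email, self.password)
      self.errorMessage = nil
    } catch {
      self.errorMessage = "We could not log you in. Please try again"
    }
  }
}
```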

## Testing gotchas

This is not well known, but when an application target runs tests, it actually boots up a simulator
and runs your application's entry point in that simulator. This means that while tests are running,
your application's code is also running separately. This can be a huge gotcha because it means you
may be unknowingly making network requests, tracking analytics, writing data to user defaults or
to the disk, and more.

This usually flies under the radar and you just won't know it's happening, which can be
problematic. But once you start using this library to control your dependencies, the problem can
surface in a very visible manner. Typically, when a dependency is used in a test context without
being overridden, a test failure occurs. This makes it possible for your test to pass successfully,
yet for some mysterious reason the test suite fails. This happens because the code in the _app
host_ is now running in a test context, and accessing dependencies there causes test failures.

This only happens when running tests in an _application target_, that is, a target that is
specifically used to launch the application for a simulator or device. This does not happen when
running tests for frameworks or SPM libraries, which is yet another good reason to modularize
your code base.

However, if you aren't in a position to modularize your code base right now, there is a quick
fix. Our [XCTest Dynamic Overlay][xctest-dynamic-overlay-gh] library, which is transitively included
with this library, comes with a property you can check to see if tests are currently running. If
they are, you can omit the entire entry point of your application:

```swift
import SwiftUI
import XCTestDynamicOverlay

@main
struct MyApp: App {
  var body: some Scene {
    WindowGroup {
      if !_XCTIsTesting {
        // Your real root view
      }
    }
  }
}
```

That will allow tests to run in the application target without your actual application code
interfering.

[xctest-dynamic-overlay-gh]: https://github.com/pointfreeco/xctest-dynamic-overlay
1 change: 1 addition & 0 deletions Sources/Dependencies/Documentation.docc/Dependencies.md
@@ -64,6 +64,7 @@ This library addresses all of the points above, and much, _much_ more.
- <doc:UsingDependencies>
- <doc:RegisteringDependencies>
- <doc:LivePreviewTest>
- <doc:Testing>

### Advanced
