Unit testing with Polly
How to approach unit-testing code wrapped in Polly policies depends on what you are aiming to test.
[1] TL;DR: Polly's `NoOpPolicy` allows you to stub Polly out, to test your code as if Polly were not in the mix.
A common situation may be that you have code modules for which you already have unit tests, including success and failure test cases. You then retro-fit Polly for more robust operation. How does having the Polly policy in play here affect your existing unit tests? Do all the tests need adjusting? How do I test what my code does without Polly 'interfering'?
Suggested strategy: stub out Polly for the purposes of those tests. This can be facilitated by using dependency injection to pass policies into code. That could be with a full DI container, or just simple constructor injection or property injection, per preference.
Polly now defines a `NoOpPolicy` specifically for this scenario. `NoOpPolicy` does nothing but execute the governed delegate, as if Polly were not involved. In your tests, simply inject `NoOpPolicy` rather than the policies you use in production, and Polly is stubbed out of those tests.
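As a minimal sketch of this approach (the `WeatherClient` class and its members are hypothetical, invented for illustration), a policy is passed in by constructor injection, so production code receives a retry policy while tests receive `Policy.NoOp()`:

```csharp
using System;
using Polly;

// Hypothetical service taking its policy by constructor injection.
public class WeatherClient
{
    private readonly ISyncPolicy _policy;

    public WeatherClient(ISyncPolicy policy) => _policy = policy;

    // All outbound calls are executed through the injected policy.
    public string GetForecast(Func<string> call) => _policy.Execute(call);
}

public static class Demo
{
    public static void Main()
    {
        // Production wiring might use a retry policy:
        // var client = new WeatherClient(Policy.Handle<Exception>().Retry(3));

        // In unit tests, inject NoOpPolicy so the delegate runs as if Polly were absent:
        var client = new WeatherClient(Policy.NoOp());
        Console.WriteLine(client.GetForecast(() => "sunny")); // sunny
    }
}
```

The same pattern works with `IAsyncPolicy` and `Policy.NoOpAsync()` for asynchronous executions.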
[2] TL;DR: Bear in mind that the Polly codebase already tests this for you, extensively.
An understandable desire after introducing Polly to a project is to want to check that the Polly policy does what it says on the tin. If I configure `.Retry(3)` (maybe even with a nice exponential backoff), it would be nice to have an integration test checking it really works, right?

No-one's going to stop you coding these tests, and it's a good exercise to satisfy yourself that you understand what Polly does. But, to allow you to concentrate on delivering your business value rather than reinventing Polly's (test) wheel, it's worth noting that the Polly codebase already tests this for you extensively. As of Polly v5.0, the Polly codebase has over 1000 tests per NuGet target. Have a browse - tests are grouped by policy, and named by what they do.
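If you do want to satisfy yourself, a minimal sketch (not taken from Polly's own test suite) is to count invocations of an always-failing delegate executed through `.Retry(3)`:

```csharp
using System;
using Polly;

public static class RetryCountCheck
{
    public static void Main()
    {
        int attempts = 0;
        var policy = Policy.Handle<InvalidOperationException>().Retry(3);

        try
        {
            policy.Execute(() =>
            {
                attempts++;
                throw new InvalidOperationException("always fails");
            });
        }
        catch (InvalidOperationException)
        {
            // Expected: rethrown once the retries are exhausted.
        }

        // 1 initial try + 3 retries = 4 attempts.
        Console.WriteLine(attempts); // 4
    }
}
```

This mirrors, in miniature, what the Polly codebase's own retry tests already assert.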
[3] TL;DR: Useful as a specification for, and a regression check on, the kinds of scenario you intend Polly to handle.
Slightly different from [2] above, this angle on testing aims to check you've configured policies to match your desired resilience behaviour, not that Polly policies actually operate as specified. If somebody changes the configuration, the test provides regression value by failing. The test can be read as a specification of the resilience behaviour for that piece of code.
In this testing approach, you typically stub or mock out the underlying systems called (for instance, you might stub out a call to some endpoint to throw `TimeoutException`), then check that your configured policy does handle that.
If taking this approach, a test-helper approach such as that in Polly's `PolicyExtensions` (and similar classes) can be useful, providing convenient ways to make the stubbed underlying call raise specified exceptions.
Thoughts/questions about unit-testing? Post an issue on the issues board.