Canary guide #1348
base: main
Conversation
Left a bunch of comments!
Like the tone on this one - fits our guides!
Canary deployments are one of the ways we can release updates without violating our Service Level Agreements (SLAs) with our users. By rolling out new code to a small subset of users and looking at the results, we allow a final testing phase with live users before everyone sees our changes. If problems are detected and fixed before updates roll out to all users, we can generally prevent most users from ever knowing that we tried to release broken code.
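The rollout mechanics described above can be sketched as deterministic bucketing: hash each user's id into a percentage bucket, and only users under the rollout threshold get the canary. This is an illustrative sketch, not any particular load balancer's implementation; the hash function, the 10% figure, and the 1,000-user sample are all assumptions.

```typescript
// Illustrative sketch of canary bucketing; not a real load balancer's code.
// Each user id hashes to a stable bucket in [0, 100), so a 10% rollout
// sends the same ~10% of users to the canary on every request.
function hashUserId(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h % 100;
}

function isCanaryUser(userId: string, rolloutPercent: number): boolean {
  return hashUserId(userId) < rolloutPercent;
}

// With a 10% rollout, roughly one user in ten lands on the new code,
// and a given user always lands in the same bucket.
const users = Array.from({ length: 1000 }, (_, i) => `user-${i}`);
const canaryCount = users.filter((u) => isCanaryUser(u, 10)).length;
console.log(`${canaryCount} of ${users.length} users see the canary`);
```

Because bucketing is stable, a monitoring check that pins itself into the canary bucket will keep hitting the new code on every run.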
The second paragraph here is stronger than the first. I'm not sure about the opening sentence, but let's explain why you'd use canary deployments. Position it as a step after test automation, then go exactly where you were headed with monitoring. It's used for the same reason: it's impossible to replicate production environments, so testing in production is always essential. We facilitate that.
The reason to use canary deployments is very similar to the reason to run synthetic monitoring with Checkly: our production code running in the real world can encounter problems that can't be foreseen, no matter how much pre-release testing we do.
But here's the problem with synthetic monitoring during a canary deployment: if the deployment is broken, the signal of that failure will be effectively hidden by all the working versions of the service that aren't yet running the new code. Here's a dashboard that might give us some concern during a canary deployment:
Here, at this point, I would visibly show a canary deployment. Show a side-by-side of the UI change. Say 90% of users get version A, 10% get version B.
Then show that a single script would obviously fail if it got the 10% version B, since whatever locator it relies on is no longer there.
updated!
In our scenario, we can control whether we get the canary version of our service with a feature flag. By controlling a request's headers, we can set the user agent or add arbitrary header values. Let's set some headers in an API check, a Browser check, and a more complex Multistep API check.
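As a minimal sketch of that header-based flag, the helper below merges a canary opt-in header into whatever headers a check already sends. The `x-canary` header name and the user-agent string are assumptions for illustration; use whatever values your load balancer or feature-flag service actually inspects.

```typescript
// Minimal sketch: merge a canary opt-in header into a request's headers.
// The `x-canary` name and the user-agent value are assumptions; substitute
// whatever your routing layer keys off.
type HeaderMap = Record<string, string>;

function withCanaryHeaders(base: HeaderMap, optIn: boolean): HeaderMap {
  if (!optIn) return { ...base };
  return {
    ...base,
    'x-canary': 'true',
    // Some setups route on the user agent instead of a dedicated header:
    'user-agent': 'checkly-canary-check/1.0',
  };
}

const headers = withCanaryHeaders({ accept: 'application/json' }, true);
console.log(headers['x-canary']); // 'true'
```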
### A. Set Headers for an API Check
Maybe add in a real example? Changing versions of a database underneath an API. You don't have to use it, but make the story realistic. I didn't really think we needed an API example, but versioning is pretty necessary.
Yeah, this setup was confusing. Clarified how this should work and introduced the load balancer that was mentioned in the intro.
})
```
By adding a call to `page.route()`, we ensure that every request within this page is modified. Alternatively, we could [manage `browser.context()` so that only some requests have these new headers](https://www.checklyhq.com/docs/browser-checks/multiple-tabs/). Note that using multiple browser tabs with different `browser.newContext()` calls may have [unexpected effects on your capture of traces and video from your check runs](https://www.checklyhq.com/blog/playwright-video-troubleshooting/), which is why we've gone with the `await page.route()` method here.
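To make the pattern concrete, here is a sketch of that `page.route()` handler, factored out as a standalone function so it can be exercised without a real browser. Playwright's `route.continue()` does accept a `headers` override, but the `x-canary` header name and the stub below are assumptions for illustration.

```typescript
// Sketch of the page.route() pattern above. The handler takes anything
// route-shaped, so we can exercise it with a stub instead of a browser.
// The `x-canary` header name is an assumption for illustration.
type RouteLike = {
  request: () => { headers: () => Record<string, string> };
  continue: (overrides: { headers: Record<string, string> }) => Promise<void>;
};

async function addCanaryHeader(route: RouteLike): Promise<void> {
  await route.continue({
    headers: { ...route.request().headers(), 'x-canary': 'true' },
  });
}

// In a Browser check you would register the handler for every request:
//   await page.route('**/*', addCanaryHeader);

// Stub exercise: capture what the handler passes to continue().
const seen: Record<string, string>[] = [];
const stubRoute: RouteLike = {
  request: () => ({ headers: () => ({ accept: 'text/html' }) }),
  continue: async (overrides) => {
    seen.push(overrides.headers);
  },
};
void addCanaryHeader(stubRoute);
console.log(seen[0]); // original headers plus x-canary
```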
Again, maybe it's just the examples, but what are we showing here? It's a feature flag. In version A, there is something we are either automating against or asserting on. In version B, that's changed. It's not apparent here what exactly we are doing.
Also, maybe add a little section here on Playwright's network interceptor?
Rewrote the copy around this, simplified a bit, and connected it to the scenario in the intro.
Affected Components