This repository was archived by the owner on Nov 30, 2021. It is now read-only.

Commit f9811c6

Author: Matthew Fisher
docs(*): migrate from Helm Classic to Helm
This changes all documentation to use Helm as the default tool for installing Deis Workflow. Users of Helm Classic are urged to use https://github.com/deis/workflow-migration to migrate from Helm Classic to Helm.
1 parent f561953 commit f9811c6

File tree: 18 files changed (+175, -668 lines)


mkdocs.yml

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ pages:
   - Configuring Object Storage: installing-workflow/configuring-object-storage.md
   - Configuring Postgres: installing-workflow/configuring-postgres.md
   - Configuring the Registry: installing-workflow/configuring-registry.md
-  - Workflow Helm Charts: installing-workflow/workflow-helm-charts.md
+  - Chart Provenance: installing-workflow/chart-provenance.md
 - Users:
   - Command Line Interface: users/cli.md
   - Users and Registration: users/registration.md

src/contributing/overview.md

Lines changed: 1 addition & 13 deletions
@@ -6,19 +6,7 @@ Interested in contributing to a Deis project? There are lots of ways to help.
 
 Find a bug? Want to see a new feature? Have a request for the maintainers? Open a GitHub issue in the applicable repository and we'll get the conversation started.
 
-Our official support channels are:
-
-- GitHub issue queues:
-  - [builder](https://github.com/deis/builder/issues)
-  - [chart](https://github.com/deis/charts/issues)
-  - [database](https://github.com/deis/postgres/issues)
-  - [helm classic](https://github.com/helm/helm-classic/issues)
-  - [monitor](https://github.com/deis/monitor/issues)
-  - [registry](https://github.com/deis/registry/issues)
-  - [router](https://github.com/deis/router/issues)
-  - [workflow](https://github.com/deis/workflow/issues)
-  - [workflow-cli](https://github.com/deis/workflow-cli/issues)
-- [Deis #community Slack channel][slack]
+Our official support channel is the [Deis #community Slack channel][slack].
 
 Don't know what the applicable repository for an issue is? Open up an issue in [workflow][] or chat with a maintainer in the [Deis #community Slack channel][slack] and we'll make sure it gets to the right place.

src/installing-workflow/workflow-helm-charts.md renamed to src/installing-workflow/chart-provenance.md

Lines changed: 2 additions & 13 deletions
@@ -1,24 +1,13 @@
-# Workflow Helm charts
+# Chart Provenance
 
 As of Workflow [v2.8.0](../changelogs/v2.8.0.md), Deis has released [Kubernetes Helm][helm] charts for Workflow
 and for each of its [components](../understanding-workflow/components.md).
 
-## Installation
-
-Once [Helm][helm] is installed and its server component is running on a Kubernetes cluster, one may install Workflow with the following steps:
-```
-$ helm repo add deis https://charts.deis.com/workflow # add the workflow charts repo
-
-$ helm install deis/workflow --version=v2.8.0 --namespace=deis -f <optional values file> # injects resources into your cluster
-```
-
-## Chart Provenance
-
 Helm provides tools for establishing and verifying chart integrity. (For an overview, see the [Provenance](https://github.com/kubernetes/helm/blob/master/docs/provenance.md) doc.) All release charts from the Deis Workflow team are now signed using this mechanism.
 
 The full `Deis, Inc. (Helm chart signing key) <[email protected]>` public key can be found [here](../security/1d6a97d0.txt), as well as the [pgp.mit.edu](http://pgp.mit.edu/pks/lookup?op=vindex&fingerprint=on&search=0x17E526B51D6A97D0) keyserver and the official Deis Keybase [account][deis-keybase]. The key's fingerprint can be cross-checked against all of these sources.
 
-### Verifying a signed chart
+## Verifying a signed chart
 
 The public key mentioned above must exist in a local keyring before a signed chart can be verified.
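The verification flow described in the renamed page boils down to two commands. This is a minimal sketch, assuming `gpg` is installed locally, using the key ID from the pgp.mit.edu lookup link above, and following the Helm 2 `helm fetch` flags:

```
# Import the Deis chart signing key into a local GnuPG keyring
# (key ID taken from the pgp.mit.edu lookup URL above; keyserver availability may vary).
gpg --keyserver pgp.mit.edu --recv-keys 17E526B51D6A97D0

# Fetch the Workflow chart and verify its provenance file against that keyring.
# Adjust --keyring if your public keyring lives elsewhere.
helm fetch deis/workflow --version v2.8.0 --verify --keyring ~/.gnupg/pubring.gpg
```
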
src/installing-workflow/configuring-object-storage.md

Lines changed: 11 additions & 157 deletions
@@ -11,7 +11,7 @@ Every component that relies on object storage uses two inputs for configuration:
 1. Component-specific environment variables (e.g. `BUILDER_STORAGE` and `REGISTRY_STORAGE`)
 2. Access credentials stored as a Kubernetes secret named `objectstorage-keyfile`
 
-The helm classic chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Compute Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.
+The helm chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Compute Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.
 
 ### Step 1: Create storage buckets
 
@@ -25,172 +25,26 @@ If you provide credentials with sufficient access to the underlying storage, Wor
 
 If applicable, generate credentials that have create and write access to the storage buckets created in Step 1.
 
-If you are using AWS S3 and your Kubernetes nodes are configured with appropriate IAM API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
+If you are using AWS S3 and your Kubernetes nodes are configured with appropriate [IAM][aws-iam] API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
 
-### Step 3: Fetch Workflow charts
+### Step 3: Add Deis Repo
 
-If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
+If you haven't already added the Helm repo, do so with `helm repo add deis https://charts.deis.com/workflow`
 
-### Step 4: Configure Workflow charts
+### Step 4: Configure Workflow Chart
 
-Operators should configure object storage by either populating a set of environment variables or editing the the Helm Classic parameters file before running `helmc generate`. Both options are documented below:
+Operators should configure object storage by editing the Helm values file before running `helm install`. To do so:
 
-**Option 1:** Using environment variables
-
-After setting a `STORAGE_TYPE` environment variable to the desired object storage type ("s3", "gcs", "azure", or "swift"), set the additional variables as required by the selected object storage:
-
-| Storage Type | Required Variables | Notes |
-| --- | --- | --- |
-| s3 | `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `AWS_REGISTRY_BUCKET`, `AWS_DATABASE_BUCKET`, `AWS_BUILDER_BUCKET`, `S3_REGION` | To use [IAM credentials][aws-iam], it is not necessary to set `AWS_ACCESS_KEY` or `AWS_SECRET_KEY`. |
-| gcs | `GCS_KEY_JSON`, `GCS_REGISTRY_BUCKET`, `GCS_DATABASE_BUCKET`, `GCS_BUILDER_BUCKET` | |
-| azure | `AZURE_ACCOUNT_NAME`, `AZURE_ACCOUNT_KEY`, `AZURE_REGISTRY_CONTAINER`, `AZURE_DATABASE_CONTAINER`, `AZURE_BUILDER_CONTAINER` | |
-| swift | `SWIFT_USERNAME`, `SWIFT_PASSWORD`, `SWIFT_AUTHURL`, `SWIFT_AUTHVERSION`, `SWIFT_REGISTRY_CONTAINER`, `SWIFT_DATABASE_CONTAINER`, `SWIFT_BUILDER_CONTAINER` | To specify tenant set `SWIFT_TENANT` if the auth version is 2 or later. |
-
-!!! note
-    These environment variables should be set **before** running `helmc generate` in Step 5.
-
-**Option 2:** Using template file `tpl/generate_params.toml` available at `$(helmc home)/workspace/charts/workflow-v2.8.0`
-
-* Edit Helm Classic chart by running `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml` (make sure you have the `$EDITOR` environment variable set with your favorite text editor)
-* Update the `storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
+* Fetch the Helm values by running `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
+* Update the `global/storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
 * Find the corresponding section for your storage type and provide appropriate values including region, bucket names, and access credentials.
-* Save your changes to `tpl/generate_params.toml`.
-
-!!! note
-    You do not need to base64 encode any of these values as Helm Classic will handle encoding automatically.
-
-### Step 5: Generate manifests
-
-Generate the Workflow chart by running `helmc generate -x manifests workflow-v2.8.0` (if you have previously run this step, make sure you add `-f` to force its regeneration).
-
-### Step 6: Verify credentials
-
-Helm Classic stores the object storage configuration as a Kubernetes secret.
-
-You may check the contents of the generated file named `deis-objectstorage-secret.yaml` in the `helmc` workspace directory:
-```
-$ cat $(helmc home)/workspace/charts/workflow-v2.8.0/manifests/deis-objectstorage-secret.yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: objectstorage-keyfile
-...
-data:
-  accesskey: bm9wZSBub3BlCg==
-  secretkey: c3VwZXIgbm9wZSBub3BlIG5vcGUgbm9wZSBub3BlCg==
-  region: ZWFyZgo=
-  registry-bucket: bXlmYW5jeS1yZWdpc3RyeS1idWNrZXQK
-  database-bucket: bXlmYW5jeS1kYXRhYmFzZS1idWNrZXQK
-  builder-bucket: bXlmYW5jeS1idWlsZGVyLWJ1c2tldAo=
-```
-
-You are now ready to `helmc install workflow-v2.8.0` using your desired object storage.
-
-## Object Storage Configuration and Credentials
-
-During the `helmc generate` step, Helm Classic creates a Kubernetes secret in the Deis namespace named `objectstorage-keyfile`. The exact structure of the file depends on storage backend specified in `tpl/generate_params.toml`.
-
-```
-# Set the storage backend
-#
-# Valid values are:
-# - s3: Store persistent data in AWS S3 (configure in S3 section)
-# - azure: Store persistent data in Azure's object storage
-# - gcs: Store persistent data in Google Cloud Storage
-# - minio: Store persistent data on in-cluster Minio server
-# - swift: Store persistent data in OpenStack Swift object storage cluster
-storage = "minio"
-```
-
-Individual components map the master credential secret to either secret-backed environment variables or volumes. See below for the component-by-component locations.
-
-## Component Details
-
-### [deis/builder](https://github.com/deis/builder)
-
-The builder looks for a `BUILDER_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information from the `objectstore-creds` volume.
-
-### [deis/slugbuilder](https://github.com/deis/slugbuilder)
-
-Slugbuilder is configured and launched by the builder component. Slugbuilder reads credential information from the standard `objectstorage-keyfile` secret.
-
-If you are using slugbuilder as a standalone component the following configuration is important:
-
-- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder e.g. `home/burley-yeomanry:git-3865c987/tar`
-- `PUT_PATH` - The location to upload the finished slug, relative to the configured bucket of builder e.g. `home/burley-yeomanry:git-3865c987/push`
-- `CACHE_PATH` - The location to upload the cache, relative to the configured bucket of builder e.g. `home/burley-yeomanry/cache`
+* Save your changes.
 
 !!! note
-    These environment variables are case-sensitive.
-
-### [deis/slugrunner](https://github.com/deis/slugrunner)
-
-Slugrunner is configured and launched by the controller inside a Workflow cluster. If you are using slugrunner as a standalone component the following configuration is important:
-
-- `SLUG_URL` - environment variable containing the path of the slug, relative to the builder storage location, e.g. `home/burley-yeomanry:git-3865c987/push/slug.tgz`
-
-Slugrunner reads credential information from a `objectstorage-keyfile` secret in the current Kubernetes namespace.
-
-### [deis/dockerbuilder](https://github.com/deis/dockerbuilder)
-
-Dockerbuilder is configured and launched by the builder component. Dockerbuilder reads credential information from the standard `objectstorage-keyfile` secret.
-
-If you are using dockerbuilder as a standalone component the following configuration is important:
-
-- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder e.g. `home/burley-yeomanry:git-3865c987/tar`
-
-### [deis/controller](https://github.com/deis/controller)
-
-The controller is responsible for configuring the execution environment for buildpack-based applications. Controller copies `objectstorage-keyfile` into the application namespace so slugrunner can fetch the application slug.
-
-The controller interacts through Kubernetes APIs and does not use any environment variables for object storage configuration.
-
-### [deis/registry](https://github.com/deis/registry)
-
-The registry looks for a `REGISTRY_STORAGE` environment variable which it then uses as a key to look up the object storage location and authentication information.
-
-The registry reads credential information by reading `/var/run/secrets/deis/registry/creds/objectstorage-keyfile`.
-
-This is the file location for the `objectstorage-keyfile` secret on the Pod filesystem.
-
-### [deis/database](https://github.com/deis/postgres)
-
-The database looks for a `DATABASE_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information
-
-Minio (`DATABASE_STORAGE=minio`):
-
-* `AWS_ACCESS_KEY_ID` via /var/run/secrets/deis/objectstore/creds/accesskey
-* `AWS_SECRET_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/secretkey
-* `AWS_DEFAULT_REGION` is the Minio default of "us-east-1"
-* `BUCKET_NAME` is the on-cluster default of "dbwal"
-
-AWS (`DATABASE_STORAGE=s3`):
-
-* `AWS_ACCESS_KEY_ID` via /var/run/secrets/deis/objectstore/creds/accesskey
-* `AWS_SECRET_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/secretkey
-* `AWS_DEFAULT_REGION` via /var/run/secrets/deis/objectstore/creds/region
-* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-bucket
-
-GCS (`DATABASE_STORAGE=gcs`):
-
-* `GS_APPLICATION_CREDS` via /var/run/secrets/deis/objectstore/creds/key.json
-* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-bucket
-
-Azure (`DATABASE_STORAGE=azure`):
-
-* `WABS_ACCOUNT_NAME` via /var/run/secrets/deis/objectstore/creds/accountname
-* `WABS_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/accountkey
-* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-container
+    You do not need to base64 encode any of these values as Helm will handle encoding automatically.
 
-Swift (`DATABASE_STORAGE=swift`):
+You are now ready to run `helm install deis/workflow --namespace deis -f values.yaml` using your desired object storage.
 
-* `SWIFT_USERNAME` via /var/run/secrets/deis/objectstore/creds/username
-* `SWIFT_PASSWORD` via /var/run/secrets/deis/objectstore/creds/password
-* `SWIFT_AUTHURL` via /var/run/secrets/deis/objectstore/creds/authurl
-* `SWIFT_AUTHVERSION` via /var/run/secrets/deis/objectstore/creds/authversion
-* `SWIFT_TENANT` via /var/run/secrets/deis/objectstore/creds/tenant
-* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-container
 
 [minio]: ../understanding-workflow/components.md#object-storage
-[generate-params-toml]: https://github.com/deis/charts/blob/master/workflow-dev/tpl/generate_params.toml
 [aws-iam]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
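Taken together, the new Helm-based steps above form the following sequence. This is a minimal sketch assuming a working Helm 2 client with Tiller running; the actual `values.yaml` edits (storage type, bucket names, credentials) are yours to fill in:

```
# Step 3: add the Deis chart repository (once per client).
helm repo add deis https://charts.deis.com/workflow

# Step 4: dump the chart's default values to a local file for editing
# (the sed drops the first line of the `helm inspect values` output, as in the docs).
helm inspect values deis/workflow | sed -n '1!p' > values.yaml

# Set global/storage to s3, gcs, azure, or swift and fill in the matching
# section with your region, bucket names, and access credentials.
$EDITOR values.yaml

# Install Workflow into the deis namespace with the customized values.
helm install deis/workflow --namespace deis -f values.yaml
```
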

src/installing-workflow/configuring-postgres.md

Lines changed: 10 additions & 24 deletions
@@ -29,30 +29,16 @@ $ psql -h <host> -p <port> -d postgres -U <"postgres" or your own username>
 
 ## Configuring Workflow
 
-The Helm Classic chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.
-
-* **Step 1:** If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
-* **Step 2:** Update database connection details either by setting the appropriate environment variables _or_ by modifying the template file `tpl/generate_params.toml`. Note that environment variables take precedence over settings in `tpl/generate_params.toml`.
-    * **1.** Using environment variables:
-        * Set `DATABASE_LOCATION` to `off-cluster`.
-        * Set `DATABASE_HOST` to the hostname or public IP of your off-cluster PostgreSQL RDBMS.
-        * Set `DATABASE_PORT` to the port listened to by your off-cluster PostgreSQL RDBMS-- typically `5432`.
-        * Set `DATABASE_NAME` to the name of the database provisioned for use by Workflow's controller component-- typically `deis`.
-        * Set `DATABASE_USERNAME` to the username of the database user that owns the database-- typically `deis`.
-        * Set `DATABASE_PASSWORD` to the password for the database user that owns the database.
-    * **2.** Using template file `tpl/generate_params.toml`:
-        * Open the Helm Classic chart with `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml`
-        * Update the `database_location` parameter to `off-cluster`.
-        * Update the values in the `[database]` configuration section to properly reflect all connection details.
-        * Save your changes.
-    * Note: Whether using environment variables or `tpl/generate_params.toml`, you do not need to (and must not) base64 encode any values, as the Helm Classic chart will automatically handle encoding as necessary.
-* **Step 3:** Re-generate the Helm Classic chart by running `helmc generate -x manifests workflow-v2.8.0`
-* **Step 4:** Check the generated files in your `manifests` directory. You should see:
-    * `deis-controller-deployment.yaml` contains relevant connection details.
-    * `deis-database-secret-creds.yaml` exists and contains base64 encoded database username and password.
-    * No other database-related Kubernetes resources are defined. i.e. none of `database-database-service-account.yaml`, `database-database-service.yaml`, or `database-database-deployment.yaml` exist.
-
-You are now ready to `helmc install workflow-v2.8.0` [as usual][installing].
+The Helm chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.
+
+* **Step 1:** If you haven't already fetched the values, do so with `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
+* **Step 2:** Update database connection details by modifying `values.yaml`:
+    * Update the `database_location` parameter to `off-cluster`.
+    * Update the values in the `[database]` configuration section to properly reflect all connection details.
+    * Save your changes.
+    * Note: you do not need to (and must not) base64 encode any values, as the Helm chart will automatically handle encoding as necessary.
+
+You are now ready to `helm install deis/workflow --namespace deis -f values.yaml` [as usual][installing].
 
 [database]: ../understanding-workflow/components.md#database
 [object storage]: configuring-object-storage.md
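The same pattern applies to the off-cluster database configuration above. This is a sketch under the same Helm 2 assumptions, with connection details left as placeholders for your own environment:

```
# Fetch the chart's default values for editing.
helm inspect values deis/workflow | sed -n '1!p' > values.yaml

# Set database_location to "off-cluster" and fill in the database section with
# your host, port, name, username, and password (no base64 encoding needed; the
# chart handles encoding).
$EDITOR values.yaml

# Install Workflow with the off-cluster database configuration.
helm install deis/workflow --namespace deis -f values.yaml
```
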

0 commit comments
