**`src/contributing/overview.md`** (1 addition, 13 deletions)
```diff
@@ -6,19 +6,7 @@ Interested in contributing to a Deis project? There are lots of ways to help.
 
 Find a bug? Want to see a new feature? Have a request for the maintainers? Open a GitHub issue in the applicable repository and we'll get the conversation started.
+Our official support channel is the [Deis #community Slack channel][slack].
 
 Don't know what the applicable repository for an issue is? Open an issue in [workflow][] or chat with a maintainer in the [Deis #community Slack channel][slack] and we'll make sure it gets to the right place.
```
**`src/installing-workflow/chart-provenance.md`** (2 additions, 13 deletions)
````diff
@@ -1,24 +1,13 @@
-# Workflow Helm charts
+# Chart Provenance
 
 As of Workflow [v2.8.0](../changelogs/v2.8.0.md), Deis has released [Kubernetes Helm][helm] charts for Workflow
 and for each of its [components](../understanding-workflow/components.md).
 
-## Installation
-
-Once [Helm][helm] is installed and its server component is running on a Kubernetes cluster, one may install Workflow with the following steps:
-```
-$ helm repo add deis https://charts.deis.com/workflow # add the workflow charts repo
-
-$ helm install deis/workflow --version=v2.8.0 --namespace=deis -f <optional values file> # injects resources into your cluster
-```
-
-## Chart Provenance
-
 Helm provides tools for establishing and verifying chart integrity. (For an overview, see the [Provenance](https://github.com/kubernetes/helm/blob/master/docs/provenance.md) doc.) All release charts from the Deis Workflow team are now signed using this mechanism.
 
 The full `Deis, Inc. (Helm chart signing key) <[email protected]>` public key can be found [here](../security/1d6a97d0.txt), as well as on the [pgp.mit.edu](http://pgp.mit.edu/pks/lookup?op=vindex&fingerprint=on&search=0x17E526B51D6A97D0) keyserver and the official Deis Keybase [account][deis-keybase]. The key's fingerprint can be cross-checked against all of these sources.
 
-###Verifying a signed chart
+## Verifying a signed chart
 
 The public key mentioned above must exist in a local keyring before a signed chart can be verified.
````
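As a concrete illustration, the signing key can be imported and a release chart verified in one pass. This is a minimal sketch, assuming GnuPG writes to the default `~/.gnupg/pubring.gpg` keyring that `helm fetch --verify` reads; the key ID comes from the fingerprint linked above:

```
$ gpg --keyserver pgp.mit.edu --recv-keys 17E526B51D6A97D0   # import the Deis chart signing key
$ helm repo add deis https://charts.deis.com/workflow        # add the Workflow chart repo
$ helm fetch --verify deis/workflow --version v2.8.0         # fails unless the chart matches its .prov signature
```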
**`src/installing-workflow/configuring-object-storage.md`**

```diff
@@ -11,7 +11,7 @@ Every component that relies on object storage uses two inputs for configuration:
 1. Component-specific environment variables (e.g. `BUILDER_STORAGE` and `REGISTRY_STORAGE`)
 2. Access credentials stored as a Kubernetes secret named `objectstorage-keyfile`
 
-The Helm Classic chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Compute Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.
+The Helm chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Compute Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.
 
 ### Step 1: Create storage buckets
 
```
```diff
@@ -25,172 +25,26 @@ If you provide credentials with sufficient access to the underlying storage, Workflow
 
 If applicable, generate credentials that have create and write access to the storage buckets created in Step 1.
 
-If you are using AWS S3 and your Kubernetes nodes are configured with appropriate IAM API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
+If you are using AWS S3 and your Kubernetes nodes are configured with appropriate [IAM][aws-iam] API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
 
-### Step 3: Fetch Workflow charts
+### Step 3: Add Deis Repo
 
-If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
+If you haven't already added the Helm repo, do so with `helm repo add deis https://charts.deis.com/workflow`
 
-### Step 4: Configure Workflow charts
+### Step 4: Configure Workflow Chart
 
-Operators should configure object storage by either populating a set of environment variables or editing the Helm Classic parameters file before running `helmc generate`. Both options are documented below:
+Operators should configure object storage by editing the Helm values file before running `helm install`. To do so:
 
-**Option 1:** Using environment variables
-
-After setting a `STORAGE_TYPE` environment variable to the desired object storage type ("s3", "gcs", "azure", or "swift"), set the additional variables as required by the selected object storage:
-
-| Storage Type | Required Variables | Notes |
-| --- | --- | --- |
-| s3 | `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `AWS_REGISTRY_BUCKET`, `AWS_DATABASE_BUCKET`, `AWS_BUILDER_BUCKET`, `S3_REGION` | To use [IAM credentials][aws-iam], it is not necessary to set `AWS_ACCESS_KEY` or `AWS_SECRET_KEY`. |
-| swift | `SWIFT_USERNAME`, `SWIFT_PASSWORD`, `SWIFT_AUTHURL`, `SWIFT_AUTHVERSION`, `SWIFT_REGISTRY_CONTAINER`, `SWIFT_DATABASE_CONTAINER`, `SWIFT_BUILDER_CONTAINER` | To specify a tenant, set `SWIFT_TENANT` if the auth version is 2 or later. |
-
-!!! note
-    These environment variables should be set **before** running `helmc generate` in Step 5.
-
-**Option 2:** Using template file `tpl/generate_params.toml` available at `$(helmc home)/workspace/charts/workflow-v2.8.0`
-
-* Edit the Helm Classic chart by running `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml` (make sure you have the `$EDITOR` environment variable set with your favorite text editor)
-* Update the `storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
+* Fetch the Helm values by running `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
+* Update the `global/storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
 * Find the corresponding section for your storage type and provide appropriate values including region, bucket names, and access credentials.
-* Save your changes to `tpl/generate_params.toml`.
-
-!!! note
-    You do not need to base64 encode any of these values as Helm Classic will handle encoding automatically.
-
-### Step 5: Generate manifests
-
-Generate the Workflow chart by running `helmc generate -x manifests workflow-v2.8.0` (if you have previously run this step, make sure you add `-f` to force its regeneration).
-
-### Step 6: Verify credentials
-
-Helm Classic stores the object storage configuration as a Kubernetes secret.
-
-You may check the contents of the generated file named `deis-objectstorage-secret.yaml` in the `helmc` workspace directory:
```
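Taken together, the new values-file flow reduces to a few commands. This sketch uses only commands that appear in the chart documentation above; the `sed -n '1!p'` strips the first line of the inspected output:

```
$ helm repo add deis https://charts.deis.com/workflow            # add the Workflow chart repo
$ helm inspect values deis/workflow | sed -n '1!p' > values.yaml # save the chart's default values
$ $EDITOR values.yaml                                            # set the global/storage parameter and your backend's section
$ helm install deis/workflow --namespace deis -f values.yaml     # deploy Workflow with your object storage settings
```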
````diff
-You are now ready to `helmc install workflow-v2.8.0` using your desired object storage.
-
-## Object Storage Configuration and Credentials
-
-During the `helmc generate` step, Helm Classic creates a Kubernetes secret in the Deis namespace named `objectstorage-keyfile`. The exact structure of the file depends on the storage backend specified in `tpl/generate_params.toml`.
-
-```
-# Set the storage backend
-#
-# Valid values are:
-# - s3: Store persistent data in AWS S3 (configure in S3 section)
-# - azure: Store persistent data in Azure's object storage
-# - gcs: Store persistent data in Google Cloud Storage
-# - minio: Store persistent data on in-cluster Minio server
-# - swift: Store persistent data in OpenStack Swift object storage cluster
-storage = "minio"
-```
-
-Individual components map the master credential secret to either secret-backed environment variables or volumes. See below for the component-by-component locations.
````
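Whichever backend is selected, the rendered credentials land in the `objectstorage-keyfile` secret, so one quick sanity check is to inspect that secret directly; a sketch, assuming Workflow was installed into the `deis` namespace as above:

```
$ kubectl --namespace=deis get secret objectstorage-keyfile -o yaml   # values are base64 encoded by Kubernetes
```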
```diff
-The builder looks for a `BUILDER_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information from the `objectstore-creds` volume.
-
-Slugbuilder is configured and launched by the builder component. Slugbuilder reads credential information from the standard `objectstorage-keyfile` secret.
-
-If you are using slugbuilder as a standalone component the following configuration is important:
-
-- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`
-- `PUT_PATH` - The location to upload the finished slug, relative to the configured bucket of builder, e.g. `home/burley-yeomanry:git-3865c987/push`
-- `CACHE_PATH` - The location to upload the cache, relative to the configured bucket of builder, e.g. `home/burley-yeomanry/cache`
-
-Slugrunner is configured and launched by the controller inside a Workflow cluster. If you are using slugrunner as a standalone component the following configuration is important:
-
-- `SLUG_URL` - environment variable containing the path of the slug, relative to the builder storage location, e.g. `home/burley-yeomanry:git-3865c987/push/slug.tgz`
-
-Slugrunner reads credential information from an `objectstorage-keyfile` secret in the current Kubernetes namespace.
-
-Dockerbuilder is configured and launched by the builder component. Dockerbuilder reads credential information from the standard `objectstorage-keyfile` secret.
-
-If you are using dockerbuilder as a standalone component the following configuration is important:
-
-- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`
-
-The controller is responsible for configuring the execution environment for buildpack-based applications. The controller copies `objectstorage-keyfile` into the application namespace so slugrunner can fetch the application slug.
-
-The controller interacts through Kubernetes APIs and does not use any environment variables for object storage configuration.
-
-The registry looks for a `REGISTRY_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information.
-
-The registry reads credential information by reading `/var/run/secrets/deis/registry/creds/objectstorage-keyfile`.
-
-This is the file location for the `objectstorage-keyfile` secret on the Pod filesystem.
-
-The database looks for a `DATABASE_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information.
-
-Minio (`DATABASE_STORAGE=minio`):
-
-* `AWS_ACCESS_KEY_ID` via `/var/run/secrets/deis/objectstore/creds/accesskey`
-* `AWS_SECRET_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/secretkey`
-* `AWS_DEFAULT_REGION` is the Minio default of "us-east-1"
-* `BUCKET_NAME` is the on-cluster default of "dbwal"
-
-AWS (`DATABASE_STORAGE=s3`):
-
-* `AWS_ACCESS_KEY_ID` via `/var/run/secrets/deis/objectstore/creds/accesskey`
-* `AWS_SECRET_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/secretkey`
-* `AWS_DEFAULT_REGION` via `/var/run/secrets/deis/objectstore/creds/region`
-* `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-bucket`
-
-GCS (`DATABASE_STORAGE=gcs`):
-
-* `GS_APPLICATION_CREDS` via `/var/run/secrets/deis/objectstore/creds/key.json`
-* `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-bucket`
-
-Azure (`DATABASE_STORAGE=azure`):
-
-* `WABS_ACCOUNT_NAME` via `/var/run/secrets/deis/objectstore/creds/accountname`
-* `WABS_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/accountkey`
-* `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-container`
+You do not need to base64 encode any of these values as Helm will handle encoding automatically.
 
-Swift (`DATABASE_STORAGE=swift`):
+You are now ready to run `helm install deis/workflow --namespace deis -f values.yaml` using your desired object storage.
 
-* `SWIFT_USERNAME` via `/var/run/secrets/deis/objectstore/creds/username`
-* `SWIFT_PASSWORD` via `/var/run/secrets/deis/objectstore/creds/password`
-* `SWIFT_AUTHURL` via `/var/run/secrets/deis/objectstore/creds/authurl`
-* `SWIFT_AUTHVERSION` via `/var/run/secrets/deis/objectstore/creds/authversion`
-* `SWIFT_TENANT` via `/var/run/secrets/deis/objectstore/creds/tenant`
-* `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-container`
```
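For instance, to confirm which credential files a running database pod actually sees, you can list the mounted secret directory. A sketch only; the pod name below is a placeholder, so look up the real one first:

```
$ kubectl --namespace=deis get pods                # find the deis-database pod name
$ kubectl --namespace=deis exec <database-pod> -- ls /var/run/secrets/deis/objectstore/creds
```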
**`src/installing-workflow/configuring-postgres.md`** (10 additions, 24 deletions)
```diff
@@ -29,30 +29,16 @@ $ psql -h <host> -p <port> -d postgres -U <"postgres" or your own username>
 
 ## Configuring Workflow
 
-The Helm Classic chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.
-
-* **Step 1:** If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
-* **Step 2:** Update database connection details either by setting the appropriate environment variables _or_ by modifying the template file `tpl/generate_params.toml`. Note that environment variables take precedence over settings in `tpl/generate_params.toml`.
-    * **1.** Using environment variables:
-        * Set `DATABASE_LOCATION` to `off-cluster`.
-        * Set `DATABASE_HOST` to the hostname or public IP of your off-cluster PostgreSQL RDBMS.
-        * Set `DATABASE_PORT` to the port listened to by your off-cluster PostgreSQL RDBMS, typically `5432`.
-        * Set `DATABASE_NAME` to the name of the database provisioned for use by Workflow's controller component, typically `deis`.
-        * Set `DATABASE_USERNAME` to the username of the database user that owns the database, typically `deis`.
-        * Set `DATABASE_PASSWORD` to the password for the database user that owns the database.
-    * **2.** Using the template file `tpl/generate_params.toml`:
-        * Open the Helm Classic chart with `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml`
-        * Update the `database_location` parameter to `off-cluster`.
-        * Update the values in the `[database]` configuration section to properly reflect all connection details.
-        * Save your changes.
-    * Note: Whether using environment variables or `tpl/generate_params.toml`, you do not need to (and must not) base64 encode any values, as the Helm Classic chart will automatically handle encoding as necessary.
-* **Step 3:** Re-generate the Helm Classic chart by running `helmc generate -x manifests workflow-v2.8.0`
-* **Step 4:** Check the generated files in your `manifests` directory. You should see:
-    * `deis-database-secret-creds.yaml` exists and contains base64-encoded database username and password.
-    * No other database-related Kubernetes resources are defined, i.e. none of `database-database-service-account.yaml`, `database-database-service.yaml`, or `database-database-deployment.yaml` exist.
-
-You are now ready to `helmc install workflow-v2.8.0` [as usual][installing].
+The Helm chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.
+
+* **Step 1:** If you haven't already fetched the values, do so with `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
+* **Step 2:** Update database connection details by modifying `values.yaml`:
+    * Update the `database_location` parameter to `off-cluster`.
+    * Update the values in the `[database]` configuration section to properly reflect all connection details.
+    * Save your changes.
+    * Note: you do not need to (and must not) base64 encode any values, as the Helm chart will automatically handle encoding as necessary.
+
+You are now ready to `helm install deis/workflow --namespace deis -f values.yaml` [as usual][installing].
```
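End to end, the off-cluster database setup reduces to a few commands. This sketch reuses the `psql` connectivity check from the top of this page; the bracketed values are placeholders for your own connection details:

```
$ psql -h <host> -p <port> -d postgres -U <username>         # confirm the database is reachable first
$ helm inspect values deis/workflow | sed -n '1!p' > values.yaml
$ $EDITOR values.yaml                                        # set database_location to "off-cluster" and fill in [database]
$ helm install deis/workflow --namespace deis -f values.yaml
```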