This repository contains these plugins to support running Velero on AWS:
- An object store plugin for persisting and retrieving backups on AWS S3. The contents of a backup are Kubernetes resources, metadata files for CSI objects, and the progress of async operations. It is also used to store the result data of backups and restores, including log files, warning/error files, etc.
- A volume snapshotter plugin for creating snapshots from volumes (during a backup) and volumes from snapshots (during a restore) on AWS EBS.
  - Since v1.4.0 the snapshotter plugin can handle the volumes provisioned by the CSI driver `ebs.csi.aws.com`
Below is a listing of plugin versions and respective Velero versions that are compatible.
| Plugin Version | Velero Version |
|---|---|
| v1.13.x | v1.17.x |
| v1.12.x | v1.16.x |
| v1.11.x | v1.15.x |
| v1.10.x | v1.14.x |
| v1.9.x | v1.13.x |
| Cloud Provider | Notes | Velero Issue | Cloud Provider Issue |
|---|---|---|---|
| Google Cloud Storage | Should use GCP plugin instead | https://issuetracker.google.com/issues/256641357 | |
| Net App | operation error S3: PutObject, https response error StatusCode: 501, RequestID: , HostID: , api error NotImplemented: The s3 command you requested is not implemented. | vmware-tanzu/velero#7828 vmware-tanzu/velero#8152 | Fixed in ONTAP Release 9.15.1P2. Fixed in Net App StorageGRID® Version 11.8.0.7 |
| Oracle | | vmware-tanzu/velero#8013 | |
| IBM COS | `checksumAlgorithm=""` should work if retention is not enabled | vmware-tanzu/velero#7543 | |
| Hitachi Content Platform (HCP) | | | |
| Cloudian | | vmware-tanzu/velero#8264 | |
| Qumulo | not compatible with `x-id`, etc. | vmware-tanzu/velero#8312 | |
| Ceph S3 | `checksumAlgorithm=""` to avoid api error XAmzContentSHA256Mismatch | | |
| Backblaze B2 | `checksumAlgorithm=""` to avoid api error XAmzContentSHA256Mismatch | | |
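Several of the notes above reference setting `checksumAlgorithm=""`. As a minimal sketch (the bucket, region, and endpoint values below are placeholders for your environment), this option can be passed through the Backup Storage Location config:

```bash
# Sketch: create a Backup Storage Location against an S3-compatible store
# with the request checksum algorithm disabled.
velero backup-location create <bsl-name> \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --config region=<YOUR_REGION>,s3Url=<YOUR_S3_ENDPOINT>,checksumAlgorithm=""
```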
If you would like to file a GitHub issue for the plugin, please open the issue on the core Velero repo.
To set up Velero on AWS, you:

1. Create an S3 bucket
2. Set permissions for Velero
3. Install and start Velero
You can also use this plugin to migrate PVs across clusters or create an additional Backup Storage Location.
If you do not have the aws CLI locally installed, follow the user guide to set it up.
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the FAQ for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
BUCKET=<YOUR_BUCKET>
REGION=<YOUR_REGION>
aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
```

NOTE: us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit the bucket configuration:

```bash
aws s3api create-bucket \
    --bucket $BUCKET \
    --region us-east-1
```

For more information, see the AWS documentation on IAM users.
1. Create the IAM user:

    ```bash
    aws iam create-user --user-name velero
    ```

    If you'll be using Velero to backup multiple clusters with multiple S3 buckets, it may be desirable to create a unique username per cluster rather than the default `velero`.

2. Attach policies to give `velero` the necessary permissions (note that `s3:PutObjectTagging` is only needed if you make use of the `config.tagging` field in the `BackupStorageLocation` spec):

    ```bash
    cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:PutObjectTagging",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
    ```

    ```bash
    aws iam put-user-policy \
      --user-name velero \
      --policy-name velero \
      --policy-document file://velero-policy.json
    ```

3. Create an access key for the user:

    ```bash
    aws iam create-access-key --user-name velero
    ```

    The result should look like:

    ```
    {
      "AccessKey": {
        "UserName": "velero",
        "Status": "Active",
        "CreateDate": "2017-07-31T22:24:41.576Z",
        "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
        "AccessKeyId": <AWS_ACCESS_KEY_ID>
      }
    }
    ```

4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:

    ```
    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    ```

    where the access key id and secret are the values returned from the `create-access-key` request.
Kube2iam is a Kubernetes application that allows managing AWS IAM permissions for pods via annotations rather than operating on API keys.

This path assumes you have `kube2iam` already running in your Kubernetes cluster. If that is not the case, please install it first, following the docs here: https://github.com/jtblin/kube2iam

It can be set up for Velero by creating a role with the required permissions, and then adding the permissions annotation to the Velero deployment to define which role it should use internally.
1. Create a Trust Policy document to allow the role to be used for EC2 management and to be assumed via the kube2iam role:

    ```bash
    cat > velero-trust-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            },
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_CREATED_WHEN_INITIALIZING_KUBE2IAM>"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
    EOF
    ```

2. Create the IAM role:

    ```bash
    aws iam create-role --role-name velero --assume-role-policy-document file://./velero-trust-policy.json
    ```

3. Attach policies to give `velero` the necessary permissions (note that `s3:PutObjectTagging` is only needed if you make use of the `config.tagging` field in the `BackupStorageLocation` spec):

    ```bash
    BUCKET=<YOUR_BUCKET>
    cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:PutObjectTagging",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
    ```

    ```bash
    aws iam put-role-policy \
      --role-name velero \
      --policy-name velero-policy \
      --policy-document file://./velero-policy.json
    ```
Download Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called velero, and place a deployment named velero in it.
If using IAM user and access key:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.13.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --secret-file ./credentials-velero
```

If using kube2iam:

```bash
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.13.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --pod-annotations iam.amazonaws.com/role=arn:aws:iam::<AWS_ACCOUNT_ID>:role/<VELERO_ROLE_NAME> \
    --no-secret
```

Additionally, you can specify `--use-node-agent` to enable node agent support, and `--wait` to wait for the deployment to be ready.
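For illustration, here is a sketch (not part of the original instructions) of the same IAM-user install with both optional flags appended:

```bash
# Sketch only: identical to the IAM user install above, plus node agent
# support and waiting for the deployment to become ready.
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.13.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --secret-file ./credentials-velero \
    --use-node-agent \
    --wait
```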
```yaml
securityContext:
  fsGroup: 65534
```
- (Optional) Specify additional configurable parameters for the `--backup-location-config` flag (an example sketch follows this list).
- (Optional) Specify additional configurable parameters for the `--snapshot-location-config` flag.
- (Optional) Customize the Velero installation further to meet your needs.
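A hedged example of passing extra parameters: the `s3ForcePathStyle` and `s3Url` keys shown below assume an S3-compatible endpoint such as MinIO, and the endpoint value is a placeholder.

```bash
# Sketch: install with additional backup location parameters for an
# S3-compatible object store.
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.13.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION,s3ForcePathStyle="true",s3Url=https://<YOUR_S3_ENDPOINT> \
    --snapshot-location-config region=$REGION \
    --secret-file ./credentials-velero
```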
Velero supports using SSE-C encryption for S3 backups. This allows you to provide your own 32-byte encryption key that S3 will use to encrypt your backup data. There are two ways to provide the customer key:
- Create a Kubernetes secret containing your 32-byte encryption key:

  ```bash
  # Generate a 32-byte key (example)
  openssl rand -out customer-key.txt 32

  # Create the secret
  kubectl create secret generic velero-sse-c-key \
    -n velero \
    --from-file=customer-key=customer-key.txt
  ```

- Mount the secret in the Velero deployment:

  ```bash
  kubectl patch deployment/velero -n velero --type='json' -p='[
    {
      "op": "add",
      "path": "/spec/template/spec/volumes/-",
      "value": {
        "name": "sse-c-key",
        "secret": {
          "secretName": "velero-sse-c-key"
        }
      }
    },
    {
      "op": "add",
      "path": "/spec/template/spec/containers/0/volumeMounts/-",
      "value": {
        "name": "sse-c-key",
        "mountPath": "/credentials/sse-c",
        "readOnly": true
      }
    }
  ]'
  ```

- Configure the backup storage location to use the mounted key file:

  ```bash
  velero backup-location create default \
    --provider aws \
    --bucket $BUCKET \
    --config region=$REGION,customerKeyEncryptionFile=/credentials/sse-c/customer-key
  ```

This option allows Velero to read the encryption key directly from a Kubernetes secret without mounting it as a file.
- Create a Kubernetes secret containing your 32-byte encryption key:

  ```bash
  # Generate a 32-byte key (example)
  openssl rand 32 | kubectl create secret generic velero-sse-c-key \
    -n velero \
    --from-file=customer-key=/dev/stdin
  ```

- Configure the backup storage location to reference the secret:
  ```bash
  velero backup-location create default \
    --provider aws \
    --bucket $BUCKET \
    --config region=$REGION,customerKeyEncryptionSecret=velero-sse-c-key/customer-key
  ```

The format for `customerKeyEncryptionSecret` is `secretName/key`, where:

- `secretName` is the name of the Kubernetes secret
- `key` is the key within the secret that contains the 32-byte encryption key
The secret must exist in the same namespace as Velero (determined by the `VELERO_NAMESPACE` environment variable).

- The customer key must be exactly 32 bytes
- You cannot use SSE-C in combination with `kmsKeyId`
- You must specify either `customerKeyEncryptionFile` or `customerKeyEncryptionSecret`, not both
- Keep your encryption key secure - losing it means losing access to your backups
- The same key must be available during restore operations
For more complex installation needs, use either the Helm chart, or add the `--dry-run -o yaml` options to generate the YAML representation of the installation.
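For example, a minimal sketch (assuming the IAM-user setup above) that renders the manifests to a file without applying them:

```bash
# Sketch: generate the install manifests for review or GitOps workflows
# instead of creating the resources directly in the cluster.
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.13.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --secret-file ./credentials-velero \
    --dry-run -o yaml > velero-install.yaml
```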
If you are using Velero v1.6.0 or later, you can create additional AWS Backup Storage Locations that use their own credentials. These can also be created alongside Backup Storage Locations that use other providers.
It is not possible to use different credentials for additional Backup Storage Locations if you are using pod-based authentication such as kube2iam.
- Velero 1.6.0 or later
- AWS plugin must be installed, either at install time, or by running `velero plugin add velero/velero-plugin-for-aws:<plugin-version>`, replacing `<plugin-version>` with the corresponding value
To configure a new Backup Storage Location with its own credentials, it is necessary to follow the steps above to create the bucket to use and to generate the credentials file to interact with that bucket. Once you have created the credentials file, create a Kubernetes Secret in the Velero namespace that contains these credentials:
```bash
kubectl create secret generic -n velero bsl-credentials --from-file=aws=</path/to/credentialsfile>
```

This will create a secret named `bsl-credentials` with a single key (`aws`) which contains the contents of your credentials file.
The name and key of this secret will be given to Velero when creating the Backup Storage Location, so it knows which secret data to use.
Once the bucket and credentials have been configured, these can be used to create the new Backup Storage Location:
```bash
velero backup-location create <bsl-name> \
  --provider aws \
  --bucket $BUCKET \
  --config region=$REGION \
  --credential=bsl-credentials=aws
```

The Backup Storage Location is ready to use when it has the phase `Available`.
You can check this with the following command:
```bash
velero backup-location get
```

To use this new Backup Storage Location when performing a backup, use the flag `--storage-location <bsl-name>` when running `velero backup create`.
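For example (`<backup-name>` and `<bsl-name>` are placeholders for your own values):

```bash
# Create a backup that is stored in the additional Backup Storage Location.
velero backup create <backup-name> --storage-location <bsl-name>
```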
If you have multiple clusters and you want to support migration of resources between them, you can use `kubectl edit deploy/velero -n velero` to edit your deployment:
Add the environment variable `AWS_CLUSTER_NAME` under `spec.template.spec.env`, with the current cluster's name. When restoring a backup, this will make Velero (and the cluster it's running on) claim ownership of AWS volumes created from snapshots taken on a different cluster.
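As a sketch of an alternative to editing the deployment by hand (the cluster name value is a placeholder), the same variable can be set with `kubectl set env`:

```bash
# Sets AWS_CLUSTER_NAME on the containers of the velero deployment.
kubectl set env deployment/velero -n velero AWS_CLUSTER_NAME=<YOUR_CLUSTER_NAME>
```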
The best way to get the current cluster's name is to either check it with the deployment tool you used or to read it directly from the EC2 instance tags.
The following listing shows how to get the cluster nodes' EC2 tags. First, get the nodes' external IDs (EC2 IDs):

```bash
kubectl get nodes -o jsonpath='{.items[*].spec.externalID}'
```

Copy one of the returned IDs `<ID>` and use it with the `aws` CLI tool to search for one of the following:
- The `kubernetes.io/cluster/<AWS_CLUSTER_NAME>` tag of the value `owned`. The `<AWS_CLUSTER_NAME>` is then your cluster's name:

  ```bash
  aws ec2 describe-tags --filters "Name=resource-id,Values=<ID>" "Name=value,Values=owned"
  ```

- If the first output returns nothing, then check for the legacy tag `KubernetesCluster` of the value `<AWS_CLUSTER_NAME>`:

  ```bash
  aws ec2 describe-tags --filters "Name=resource-id,Values=<ID>" "Name=key,Values=KubernetesCluster"
  ```