gcp-sa-credential-provider

Kubelet credential provider that uses Service Account Tokens for image pulls via a GCP OIDC token exchange.

Set up provider in GCP

This follows the GCP guide for Workload Identity Federation with Kubernetes: https://cloud.google.com/iam/docs/workload-identity-federation-with-kubernetes#kubernetes

First, log into the k8s cluster and gather the following info (a shell snippet for capturing both values follows the list):

  1. ISSUER - Save the issuer field from this command: kubectl get --raw /.well-known/openid-configuration
  2. JWKS - Save the output of the following command: kubectl get --raw /openid/v1/jwks
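
For example, a minimal sketch, assuming jq is installed (the output file name is arbitrary):

# Save the issuer and the JWKS for use in the Terraform below
ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r '.issuer')
kubectl get --raw /openid/v1/jwks > cluster-jwks.json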

Next, create the provider using Terraform, swapping CLUSTERNAME for your cluster name, ISSUER for the issuer value saved above, and JWKS for the JWKS saved above:

locals {
  project_id = "PROJECT_ID"
  project_num = "PROJECT_NUMBER"
}

# Multiple clusters can share this pool but each one should have its own provider
resource "google_iam_workload_identity_pool" "k8s_pool" {
  project                   = local.project_id
  workload_identity_pool_id = "k8s-pool"
  display_name              = "k8s Identity Pool"
  description               = "Workload identity pool for k8s workloads."
}

resource "google_iam_workload_identity_pool_provider" "k8s_provider_CLUSTERNAME" {
  project                            = local.project_id
  workload_identity_pool_id          = google_iam_workload_identity_pool.k8s_pool.workload_identity_pool_id
  workload_identity_pool_provider_id = "k8s-CLUSTERNAME"
  display_name                       = "k8s IdP for cluster CLUSTERNAME"
  description                        = "Workload identity pool provider for k8s workloads."
  attribute_mapping                  = {
    "google.subject"                 = "assertion.sub"
    "attribute.namespace"            = "assertion['kubernetes.io']['namespace']"
    "attribute.service_account_name" = "assertion['kubernetes.io']['serviceaccount']['name']"
    "attribute.pod"                  = "assertion['kubernetes.io']['pod']['name']"
  }
  oidc {
    issuer_uri = ISSUER
    jwks_json  = JWKS
    allowed_audiences = [
      "//iam.googleapis.com/projects/${local.project_num}/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-CLUSTERNAME"
    ]
  }
}
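
Once the Terraform is applied, you can sanity-check the federation before touching any kubelet config. The sketch below mirrors the kind of token exchange the credential provider performs; the ServiceAccount name and namespace are just examples, jq is assumed to be installed, and PROJECT_NUMBER and CLUSTERNAME are the same placeholders as above.

# Issue a short-lived SA token bound to the provider's audience (SA name/namespace are examples)
AUDIENCE="//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-CLUSTERNAME"
SA_TOKEN=$(kubectl create token default -n default --audience="$AUDIENCE")

# Exchange it at the GCP Security Token Service
curl -s https://sts.googleapis.com/v1/token \
  -H "Content-Type: application/json" \
  -d '{
    "audience": "'"$AUDIENCE"'",
    "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
    "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "https://www.googleapis.com/auth/cloud-platform",
    "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
    "subjectToken": "'"$SA_TOKEN"'"
  }' | jq .

A successful response contains an access_token; Artifact Registry accepts OAuth access tokens with the username oauth2accesstoken, provided the federated identity has pull access.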

Set up k8s

On each cluster node, do the following:

  1. If running k8s 1.33, set the feature gate KubeletServiceAccountTokenForCredentialProviders=true. In k3s this would be in /etc/rancher/k3s/config.yaml:

    # Only needed if running k8s 1.33. Not needed in 1.34+
    kubelet-arg:
    - "feature-gates=KubeletServiceAccountTokenForCredentialProviders=true"
  2. Set the following in the credential provider config, swapping REGION for your Google Artifact Registry (GAR) region and PROJECT_NUMBER and CLUSTERNAME as in the Terraform above. In k3s this file would be at /var/lib/rancher/credentialprovider/config.yaml:

    apiVersion: kubelet.config.k8s.io/v1
    kind: CredentialProviderConfig
    providers:
      - name: gcp-sa-credential-provider
        matchImages:
          - "REGION-docker.pkg.dev"
        defaultCacheDuration: "1h"
        apiVersion: credentialprovider.kubelet.k8s.io/v1
        env:
          - name: GCP_AUDIENCE
            # This must match the full resource name of the Workload Identity Pool provider created above
            value: "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-$CLUSTERNAME"
        tokenAttributes:
          # This must be the same value as GCP_AUDIENCE just above
          serviceAccountTokenAudience: "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-CLUSTERNAME"
          # Only set this in k8s 1.34+. Can be "Token" or "ServiceAccount"
          cacheType: Token
          requireServiceAccount: true
  3. Copy the compiled gcp-sa-credential-provider Go program to a bin folder adjacent to the credential provider config. In k3s this would be at /var/lib/rancher/credentialprovider/bin/gcp-sa-credential-provider. A snippet for exercising the plugin by hand follows this list.
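
To check the plugin on a node without going through an actual image pull, you can feed it a hand-built request the way the kubelet would. This is only a sketch: the CredentialProviderRequest fields (notably serviceAccountToken) are taken from the credentialprovider.kubelet.k8s.io/v1 API with the service account token feature enabled, and the image path, ServiceAccount, and namespace are placeholders.

# GCP_AUDIENCE must match the value in the credential provider config above;
# the kubelet sets it from the config's env entries when invoking the plugin
export GCP_AUDIENCE="//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-CLUSTERNAME"

# A SA token bound to that audience, standing in for the one the kubelet would mint
SA_TOKEN=$(kubectl create token default -n default --audience="$GCP_AUDIENCE")

# Pipe a request into the plugin binary; on success it should print a
# CredentialProviderResponse with registry credentials on stdout
echo '{
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "kind": "CredentialProviderRequest",
  "image": "REGION-docker.pkg.dev/PROJECT_ID/my-repo/my-image:latest",
  "serviceAccountToken": "'"$SA_TOKEN"'"
}' | /var/lib/rancher/credentialprovider/bin/gcp-sa-credential-provider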

Set up RBAC

The following RBAC is needed to allow nodes to request service account tokens for the configured audience:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: registry-audience-access
rules:
- verbs: ["request-serviceaccounts-token-audience"]
  apiGroups: [""]
  resources: ["GCP_AUDIENCE"] # Set to audience as above or "*" for any audience
  resourceNames: ["registry-access-sa"]  # Optional: specific ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-registry-audience
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: registry-audience-access
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
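
If resourceNames is restricted to a specific ServiceAccount as above, that ServiceAccount must exist and be the one the pulling pods run as. A minimal sketch (the namespace here is just an example):

# Create the ServiceAccount named in resourceNames above
kubectl create serviceaccount registry-access-sa -n default
# Pods pulling the matched images should then set spec.serviceAccountName: registry-access-sa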
