The following sections give you an overview of the main concepts of Bank-Vaults. Most of these apply equally to the bank-vaults CLI and to the Vault operator, because under the hood the operator often uses the CLI tool with the appropriate parameters.

1 - Initialize Vault and store the root token and unseal keys

Vault starts in an uninitialized state, which means it has to be initialized with an initial set of parameters. The response to the init request is the root token and unseal keys. After that, Vault becomes initialized, but remains in a sealed state.

Bank-Vaults stores the root token and the unseal keys in one of the following:

  • AWS KMS keyring (backed by S3)
  • Azure Key Vault
  • Google Cloud KMS keyring (backed by GCS)
  • Alibaba Cloud KMS (backed by OSS)

For development and testing purposes, the following solutions are also supported. Do not use these in production environments.

  • Kubernetes Secrets (should be used only for development purposes)
  • Dev Mode (useful for Vault servers started in dev mode with vault server -dev)
  • Files (backed by files, should be used only for development purposes)

Keys stored by Bank-Vaults

Bank-Vaults stores the following keys:

  • vault-root, which is Vault’s root token.
  • vault-unseal-N unseal keys, where N is a number, starting at 0 up to the maximum defined minus 1. For example, 5 unseal keys will be vault-unseal-0 ... vault-unseal-4.
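
As a quick illustration, the stored object names for a five-key setup can be generated like this (a sketch; secret_shares is an assumed value that must match your Vault init settings):

```python
# Sketch: the object names Bank-Vaults stores for a setup with 5 unseal keys.
secret_shares = 5  # assumed; match your Vault init settings
keys = ["vault-root"] + [f"vault-unseal-{i}" for i in range(secret_shares)]
print(keys[-1])  # vault-unseal-4
```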

HashiCorp recommends revoking the root tokens after the initial setup of Vault has been completed.

Note: The vault-root token is not needed to unseal Vault, and can be removed from the storage if it was put there via the --init call to bank-vaults.

If you want to decrypt the root token for some reason, see Decrypt the root token.

Unseal Vault

Unsealing is the process of constructing the master key necessary to read the decryption key to decrypt data, allowing access to Vault. (From the official Vault documentation)

After initialization, Vault remains in a sealed state. In the sealed state, no secrets can reach or leave Vault until one or more people unseal it with the required number of unseal keys.

Vault data and the unseal keys live together: if you delete a Vault instance installed by the operator, or if you delete the Helm chart, all your data and the unseal keys to that initialized state remain untouched. For details, see the official documentation.

Note: If you change the unseal configuration after initializing Vault, you may have to move the unseal keys from the old location to the new one, or reinitialize vault.

The Bank-Vaults Init and Unseal process

Bank-Vaults runs in an endless loop and performs the following steps:

Vault Unseal Flow

  1. Bank-Vaults checks if Vault is initialized. If yes, it continues to step 2, otherwise Bank-Vaults:
    1. Calls Vault init, which returns the root token and the configured number of unseal keys.
    2. Encrypts the received token and keys with the configured KMS key.
    3. Stores the encrypted token and keys in the cloud provider’s object storage.
    4. Flushes the root token and keys from its memory with explicit garbage control as soon as possible.
  2. Bank-Vaults checks if Vault is sealed. If it isn’t, it continues to step 3, otherwise Bank-Vaults:
    1. Reads the encrypted unseal keys from the cloud provider’s object storage.
    2. Decrypts the unseal keys with the configured KMS key.
    3. Unseals Vault with the decrypted unseal keys.
    4. Flushes the keys from its memory with explicit garbage control as soon as possible.
  3. If the external configuration file was changed and an OS signal is received, then Bank-Vaults:
    1. Parses the configuration file.
    2. Reads the encrypted root token from the cloud provider’s object storage.
    3. Decrypts the root token with the configured KMS key.
    4. Applies the parsed configuration on the Vault API.
    5. Flushes the root token from its memory with explicit garbage control as soon as possible.
  4. Repeats from the second step after the configured time period.
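
The init and unseal steps of this loop can be sketched in Python against stand-in components (FakeVault, fake_kms_encrypt, and storage below are all hypothetical stand-ins for the Vault API, the cloud KMS, and the object storage; the real implementation is the bank-vaults Go binary):

```python
import base64

def fake_kms_encrypt(data: bytes) -> bytes:   # stand-in for the cloud KMS
    return base64.b64encode(data)

def fake_kms_decrypt(blob: bytes) -> bytes:
    return base64.b64decode(blob)

class FakeVault:
    """Minimal stand-in for the Vault API as used by the unseal loop."""
    def __init__(self, secret_shares=5, secret_threshold=3):
        self.initialized = False
        self.sealed = True
        self.secret_shares = secret_shares
        self.secret_threshold = secret_threshold

    def init(self):
        self.initialized = True
        keys = [f"unseal-key-{i}".encode() for i in range(self.secret_shares)]
        return b"root-token", keys

    def unseal(self, keys):
        if len(keys) >= self.secret_threshold:
            self.sealed = False

storage = {}  # stand-in for the cloud provider's object storage

def reconcile(vault: FakeVault):
    # Step 1: initialize if needed, then encrypt and store the credentials.
    if not vault.initialized:
        root_token, unseal_keys = vault.init()
        storage["vault-root"] = fake_kms_encrypt(root_token)
        for i, key in enumerate(unseal_keys):
            storage[f"vault-unseal-{i}"] = fake_kms_encrypt(key)
        # (the real tool also flushes the plaintext from memory here)
    # Step 2: unseal if needed, using the stored, decrypted keys.
    if vault.sealed:
        keys = [fake_kms_decrypt(v) for k, v in storage.items()
                if k.startswith("vault-unseal-")]
        vault.unseal(keys)

vault = FakeVault()
reconcile(vault)
print(vault.initialized, vault.sealed)  # True False
```

The real loop additionally applies the external configuration (step 3) and repeats after the configured period.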

1.1 - Decrypt the root token

If you want to decrypt the root token for some reason, see the section corresponding to the storage provider you used to store the token.


AWS

To use the KMS-encrypted root token with the Vault CLI:

Required CLI tools:

  • aws


  1. Download and decrypt the root token (and optionally the unseal keys) into files on your local file system:

    for key in "vault-root" "vault-unseal-0" "vault-unseal-1" "vault-unseal-2" "vault-unseal-3" "vault-unseal-4"
    do
        aws s3 cp s3://${BUCKET}/${key} .
        aws kms decrypt \
            --region ${REGION} \
            --ciphertext-blob fileb://${key} \
            --encryption-context Tool=bank-vaults \
            --output text \
            --query Plaintext | base64 -d > ${key}.txt
        rm ${key}
    done
  2. Save the root token in an environment variable:

    export VAULT_TOKEN="$(cat vault-root.txt)"

Google Cloud

To use the KMS-encrypted root token with the Vault CLI:

Required CLI tools:

  • gcloud
  • gsutil

export VAULT_TOKEN=$(gsutil cat gs://${BUCKET}/vault-root | gcloud kms decrypt \
                     --project ${GOOGLE_PROJECT} \
                     --location ${GOOGLE_REGION} \
                     --keyring ${KEYRING} \
                     --key ${KEY} \
                     --ciphertext-file - \
                     --plaintext-file -)


Kubernetes

Bank-Vaults also supports a Kubernetes Secret backed unseal storage. Be aware that Kubernetes Secrets are only base64-encoded unless you are using an EncryptionConfiguration in your Kubernetes cluster.


export VAULT_TOKEN=$(kubectl get secrets ${VAULT_NAME}-unseal-keys -o jsonpath={.data.vault-root} | base64 -d)
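
Because the Secret is only base64-encoded, anyone who can read it can recover the token; the decode step that base64 -d performs above is trivial (the token value here is purely illustrative):

```python
import base64

# Hypothetical value as it would appear in the Secret's data field
encoded = base64.b64encode(b"s.EXAMPLE-root-token").decode()
print(base64.b64decode(encoded).decode())  # s.EXAMPLE-root-token
```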

1.2 - Migrate unseal keys between cloud providers

Note: If you change the unseal configuration after initializing Vault, you may have to move the unseal keys from the old location to the new one, or reinitialize vault.

If you need to move your Vault instance from one provider to another, or migrate from an externally managed Vault, you have to:

  1. Retrieve and decrypt the unseal keys (and optionally the root token) in the Bank-Vaults format. For details, see Decrypt the root token.
  2. Migrate the Vault storage data to the new provider. Use the official migration command provided by Vault.

All examples assume that you have created files holding the root token and the 5 unseal keys in plaintext:

  • vault-root.txt
  • vault-unseal-0.txt
  • vault-unseal-1.txt
  • vault-unseal-2.txt
  • vault-unseal-3.txt
  • vault-unseal-4.txt


AWS

Encrypt the files with KMS, then move them to the S3 bucket:


for key in "vault-root" "vault-unseal-0" "vault-unseal-1" "vault-unseal-2" "vault-unseal-3" "vault-unseal-4"
do
    aws kms encrypt \
        --region ${REGION} --key-id ${KMS_KEY_ID} \
        --plaintext fileb://${key}.txt \
        --encryption-context Tool=bank-vaults \
        --output text \
        --query CiphertextBlob | base64 -d > ${key}

    aws s3 cp ./${key} s3://${BUCKET}/

    rm ${key} ${key}.txt
done

2 - Cloud permissions

The operator and the bank-vaults CLI command need certain cloud permissions to function properly (init, unseal, configuration).

Google Cloud

The Service Account in which the Pod is running has to have the following IAM Roles:

  • Cloud KMS Admin
  • Cloud KMS CryptoKey Encrypter/Decrypter
  • Storage Admin

A CLI example of running bank-vaults based Vault configuration on Google Cloud:

bank-vaults configure --google-cloud-kms-key-ring vault --google-cloud-kms-crypto-key bank-vaults --google-cloud-kms-location global --google-cloud-storage-bucket vault-ha --google-cloud-kms-project continual-flow-276578


Azure

The Access Policy in which the Pod is running has to have the following permissions:

  • Key Vault All Key permissions
  • Key Vault All Secret permissions


AWS

Enable the IAM OIDC provider for an EKS cluster

To allow Vault pods to assume IAM roles in order to access AWS services, the IAM OIDC provider needs to be enabled on the cluster.


# Enable the OIDC provider for the cluster with eksctl,
# or follow the AWS documentation to do it manually
eksctl utils associate-iam-oidc-provider \
    --cluster ${BANZAI_CURRENT_CLUSTER_NAME} \
    --approve

# Create a KMS key and an S3 bucket, and enter their details here
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name ${BANZAI_CURRENT_CLUSTER_NAME} --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF

# The KMS and S3 actions below match the IAM policies Bank-Vaults needs;
# set KMS_KEY_ARN and BUCKET to the key and bucket you created above
cat > vault-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt"
            ],
            "Resource": [
                "${KMS_KEY_ARN}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${BUCKET}"
        }
    ]
}
EOF

# AWS IAM role and Kubernetes service account setup
aws iam create-role --role-name vault --assume-role-policy-document file://trust.json
aws iam create-policy --policy-name vault --policy-document file://vault-policy.json
aws iam attach-role-policy --role-name vault --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/vault

# If you already have a ServiceAccount, only the annotation is needed
kubectl create serviceaccount $SERVICE_ACCOUNT_NAME --namespace $SERVICE_ACCOUNT_NAMESPACE
kubectl annotate serviceaccount $SERVICE_ACCOUNT_NAME --namespace $SERVICE_ACCOUNT_NAMESPACE eks.amazonaws.com/role-arn="arn:aws:iam::${AWS_ACCOUNT_ID}:role/vault"

# Cleanup
rm vault-policy.json trust.json

Getting the root token

After Vault is successfully deployed, you can query the root token for admin access.

# Fetch the Vault root token (check the bucket for the actual object name)
aws s3 cp s3://$s3_bucket_name/vault-root /tmp/vault-root

export VAULT_TOKEN="$(aws kms decrypt \
  --ciphertext-blob fileb:///tmp/vault-root \
  --encryption-context Tool=bank-vaults \
  --query Plaintext --output text | base64 --decode)"

The Instance profile in which the Pod is running has to have the following IAM Policies:

  • KMS: kms:Encrypt, kms:Decrypt
  • S3: s3:GetObject, s3:PutObject, s3:DeleteObject on object level and s3:ListBucket on bucket level

An example command to initialize and unseal Vault on AWS:

bank-vaults unseal --init --mode aws-kms-s3 --aws-kms-key-id 9f054126-2a98-470c-9f10-9b3b0cad94a1 --aws-s3-region eu-west-1 --aws-kms-region eu-west-1 --aws-s3-bucket bank-vaults

When using existing unseal keys, make sure to encrypt them with KMS using the proper EncryptionContext; otherwise, invoking bank-vaults triggers an InvalidCiphertextException from AWS KMS. For example, to encrypt a key (specify --profile and --region accordingly):

aws kms encrypt --key-id "alias/kms-key-alias" --encryption-context "Tool=bank-vaults" --plaintext fileb://vault-unseal-0.txt --output text --query CiphertextBlob | base64 -d > vault-unseal-0

Then copy the encrypted files to the appropriate S3 bucket. As an additional security measure, enable encryption on the S3 bucket before uploading the files.

Alibaba Cloud

A CLI example of running bank-vaults based Vault unsealing on Alibaba Cloud:

bank-vaults unseal --mode alibaba-kms-oss --alibaba-access-key-id ${ALIBABA_ACCESS_KEY_ID} --alibaba-access-key-secret ${ALIBABA_ACCESS_KEY_SECRET} --alibaba-kms-region eu-central-1 --alibaba-kms-key-id ${ALIBABA_KMS_KEY_UUID} --alibaba-oss-endpoint oss-eu-central-1.aliyuncs.com --alibaba-oss-bucket bank-vaults


Kubernetes

The Service Account in which the bank-vaults Pod is running has to have the following RBAC Role rules:

- apiGroups: [""]
  resources: ["secrets"]
  verbs:     ["get", "create", "update"]
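
For reference, these rules can be expressed as a standalone Role manifest (the metadata name and namespace below are placeholders, not names from Bank-Vaults itself):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: bank-vaults-unseal   # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "create", "update"]
```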

3 - External configuration for Vault

In addition to the standard Vault configuration, the operator and CLI can continuously configure Vault using an external YAML/JSON configuration. That way you can configure Vault declaratively using your usual automation tools and workflow.

The following sections describe the configuration sections you can use.

3.1 - Fully or partially purging unmanaged configuration in Vault

Bank-Vaults gives you full control over Vault in a declarative style by removing any unmanaged configuration.

Enabling purgeUnmanagedConfig keeps the Vault configuration up-to-date: if you added a policy using Bank-Vaults and then removed it from the configuration, Bank-Vaults removes it from Vault too. In other words, when purgeUnmanagedConfig is enabled, any change not present in the Bank-Vaults configuration will be removed, including manual changes.


This feature is destructive, so be careful when you enable it, especially for the first time, because it can delete all data in your Vault. Always test it in a non-production environment first.

This feature is disabled by default and it needs to be enabled explicitly in your configuration.


Bank-Vaults handles unmanaged configuration by comparing what is in the Bank-Vaults configuration (the desired state) with what's already in Vault (the actual state), and then removing any differences that are not in the Bank-Vaults configuration.
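
Conceptually, the purge is a set difference between the desired and actual state; a minimal sketch (the policy names are hypothetical, and the real comparison is done per Vault API object):

```python
# Sketch of the purge logic: anything in Vault (actual) that is not in the
# Bank-Vaults configuration (desired) gets removed.
desired_policies = {"allow_secrets", "readonly_secrets"}
actual_policies = {"allow_secrets", "readonly_secrets", "manually-added"}

to_purge = actual_policies - desired_policies
print(sorted(to_purge))  # ['manually-added']
```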

Fully purge unmanaged configuration

You can remove all unmanaged configuration by enabling the purge option as follows:

purgeUnmanagedConfig:
  enabled: true

Partially purge unmanaged configuration

You can also enable the purge feature for only parts of the configuration, by excluding the sections whose unmanaged configuration you don't want to purge.

To do so, explicitly exclude the Vault configuration sections that you don't want Bank-Vaults to manage:

purgeUnmanagedConfig:
  enabled: true
  exclude:
    secrets: true

This removes any unmanaged or manual changes in Vault, but leaves secrets untouched. For example, if you enabled a new secrets engine manually (and it's not in the Bank-Vaults configuration), Bank-Vaults will not remove it.

3.2 - Audit devices

You can configure Audit Devices in Vault (File, Syslog, Socket).

audit:
  - type: file
    description: "File based audit logging device"
    options:
      file_path: /tmp/vault.log
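
Other device types follow the same shape; for example, a Syslog audit device (the option values shown are illustrative):

```yaml
audit:
  - type: syslog
    description: "Syslog based audit logging device"
    options:
      facility: "AUTH"
      tag: "vault"
```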

3.3 - Authentication

You can configure Auth Methods in Vault.

Currently, the following auth methods are supported:

AppRole auth method

Allow machines/apps to authenticate with Vault-defined roles. For details, see the official Vault documentation.

auth:
  - type: approle
    roles:
    - name: default
      policies: allow_secrets
      secret_id_ttl: 10m
      token_num_uses: 10
      token_ttl: 20m
      token_max_ttl: 30m
      secret_id_num_uses: 40

AWS auth method

Create roles in Vault which can be used for AWS IAM based authentication.

auth:
  - type: aws
    # Make the auth provider visible in the web ui
    # See the Vault documentation for more information.
    config:
      listing_visibility: "unauth"
      access_key: VKIAJBRHKH6EVTTNXDHA
      secret_key: vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj
      iam_server_id_header_value: # consider setting this to the Vault server's DNS name
    # Add the cross account number and the role to assume in the cross account
    crossaccountrole:
    - sts_account: 12345671234
      sts_role: arn:aws:iam::12345671234:role/crossaccountrole
    # Add roles for AWS instances or principals
    # See the Vault documentation for more information.
    roles:
    - name: dev-role-iam
      bound_iam_principal_arn: arn:aws:iam::123456789012:role/dev-vault
      policies: allow_secrets
      period: 1h
    - name: cross-account-role
      bound_iam_principal_arn: arn:aws:iam::12345671234:role/crossaccountrole
      policies: allow_secrets
      period: 1h

Azure auth method

The Azure auth method allows authentication against Vault using Azure Active Directory credentials. For details, see the official Vault documentation.

auth:
  - type: azure
    config:
      tenant_id: 00000000-0000-0000-0000-000000000000
      client_id: 00000000-0000-0000-0000-000000000000
      client_secret: 00000000-0000-0000-0000-000000000000
    # Add roles for Azure identities
    # See the Vault documentation for more information.
    roles:
      - name: dev-mi
        policies: allow_secrets
        bound_subscription_ids:
          - "00000000-0000-0000-0000-000000000000"
        bound_service_principal_ids:
          - "00000000-0000-0000-0000-000000000000"

GCP auth method

Create roles in Vault which can be used for GCP IAM based authentication.

auth:
  - type: gcp
    # Make the auth provider visible in the web ui
    # See the Vault documentation for more information.
    config:
      listing_visibility: "unauth"
      # The credentials content is the service account's key, which you can
      # download when you create a key for the service account.
      # No need to create it manually; just paste the JSON content as multiline YAML.
      credentials: |
        {
          "type": "service_account",
          "project_id": "PROJECT_ID",
          "private_key_id": "KEY_ID",
          "private_key": "-----BEGIN PRIVATE KEY-----.....-----END PRIVATE KEY-----\n",
          "client_email": "",
          "client_id": "CLIENT_ID",
          "auth_uri": "",
          "token_uri": "",
          "auth_provider_x509_cert_url": "",
          "client_x509_cert_url": ""
        }
    # Add roles for GCP service accounts
    # See the Vault documentation for more information.
    roles:
    - name: user-role
      type: iam
      project_id: PROJECT_ID
      policies: "readonly_secrets"
      bound_service_accounts: ""
    - name: admin-role
      type: iam
      project_id: PROJECT_ID
      policies: "allow_secrets"
      bound_service_accounts: ""

GitHub auth method

Create team mappings in Vault which can be used later on for GitHub authentication.

auth:
  - type: github
    # Make the auth provider visible in the web ui
    # See the Vault documentation for more information.
    config:
      listing_visibility: "unauth"
      organization: banzaicloud
    map:
      # Map the banzaicloud GitHub team to the dev policy in Vault
      teams:
        dev: dev
      # Map my username (bonifaido) to the allow_secrets policy in Vault
      users:
        bonifaido: allow_secrets

JWT auth method

Create roles in Vault which can be used for JWT-based authentication.

auth:
  - type: jwt
    path: jwt
    roles:
    - name: role1
      bound_audiences:
        - https://vault.plugin.auth.jwt.test
      user_claim: https://vault/user
      groups_claim: https://vault/groups
      policies: allow_secrets
      ttl: 1h

Kubernetes auth method

Use the Kubernetes auth method to authenticate with Vault using a Kubernetes Service Account Token.

auth:
  - type: kubernetes
    # Use the config section below to configure a specific Kubernetes
    # service account instead of the default one
    # config:
    #   token_reviewer_jwt: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9....
    #   kubernetes_ca_cert: |
    #     -----BEGIN CERTIFICATE-----
    #     ...
    #     -----END CERTIFICATE-----
    #   kubernetes_host:
    # Allows creating roles in Vault which can be used later on for the
    # Kubernetes based authentication.
    # See the Vault documentation for more information.
    roles:
      # Allow every pod in the default namespace to use the secret kv store
      - name: default
        bound_service_account_names: default
        bound_service_account_namespaces: default
        policies: allow_secrets
        ttl: 1h

LDAP auth method

Create group mappings in Vault which can be used for LDAP based authentication.

  • To start an LDAP test server, run: docker run -it --rm -p 389:389 -e LDAP_TLS=false --name ldap osixia/openldap
  • To start an LDAP admin server, run: docker run -it --rm -p 6443:443 --link ldap:ldap -e PHPLDAPADMIN_LDAP_HOSTS=ldap -e PHPLDAPADMIN_LDAP_CLIENT_TLS=false osixia/phpldapadmin
auth:
  - type: ldap
    description: LDAP directory auth.
    # add mount options
    # See the Vault documentation for more information.
    options:
      listing_visibility: "unauth"
    config:
      url: ldap://localhost
      binddn: "cn=admin,dc=example,dc=org"
      bindpass: "admin"
      userattr: uid
      userdn: "ou=users,dc=example,dc=org"
      groupdn: "ou=groups,dc=example,dc=org"
    # Map the developers LDAP group to the allow_secrets policy in Vault
    groups:
      developers:
        policies: allow_secrets
    # Map myself to the allow_secrets policy in Vault
    users:
      bonifaido:
        groups: developers
        policies: allow_secrets

3.4 - Plugins

To register a new plugin in Vault's plugin catalog, set the plugin_directory option in the Vault server configuration to the directory where the plugin binary is located. Also, for some plugins the readOnlyRootFilesystem Pod Security Policy should be disabled to allow RPC communication between the plugin and the Vault server via a Unix socket. For details, see the HashiCorp Go plugin documentation.

plugins:
  - plugin_name: ethereum-plugin
    command: ethereum-vault-plugin --ca-cert=/vault/tls/client/ca.crt --client-cert=/vault/tls/server/server.crt --client-key=/vault/tls/server/server.key
    sha256: 62fb461a8743f2a0af31d998074b58bb1a589ec1d28da3a2a5e8e5820d2c6e0a
    type: secret

3.5 - Policies

You can create policies in Vault, and later use these policies in roles for the Kubernetes-based authentication. For details, see Policies in the official Vault documentation.

policies:
  - name: allow_secrets
    rules: path "secret/*" {
             capabilities = ["create", "read", "update", "delete", "list"]
           }
  - name: readonly_secrets
    rules: path "secret/*" {
             capabilities = ["read", "list"]
           }

3.6 - Secrets engines

You can configure Secrets Engines in Vault. The Key-Value, Database, and SSH engines are tested, but the configuration is free form, so others will probably work as well.


AWS

The AWS secrets engine generates AWS access credentials dynamically based on IAM policies.

secrets:
  - type: aws
    path: aws
    description: AWS Secrets Engine
    configuration:
      config:
        - name: root
          access_key: "${env `AWS_ACCESS_KEY_ID`}"
          secret_key: "${env `AWS_SECRET_ACCESS_KEY`}"
          region: us-east-1
      roles:
        - credential_type: iam_user
          policy_arns: arn-of-policy
          name: my-aws-role


Consul

The Consul secrets engine generates Consul ACL tokens dynamically based on policies created in Consul.

secrets:
  - path: consul
    type: consul
    description: Consul secrets
    configuration:
      config:
        - name: "access"
          address: "consul-server:8500"
          token: "${env `CONSUL_GLOBAL_MANAGEMENT_TOKEN`}" # Example of reading an environment variable
      roles:
        - name: "<application_name>-read-only-role"
          consul_policies: "<application_name>-read-only-policy"
        - name: "<application_name>-read-write-role"
          consul_policies: "<application_name>-read-write-policy"


Database

This plugin stores database credentials dynamically, based on configured roles, for the MySQL/MariaDB database.

secrets:
  - type: database
    description: MySQL Database secret engine.
    configuration:
      config:
        - name: my-mysql
          plugin_name: "mysql-database-plugin"
          connection_url: "{{username}}:{{password}}@tcp("
          allowed_roles: [pipeline]
          username: "${env `ROOT_USERNAME`}" # Example of reading an environment variable
          password: "${env `ROOT_PASSWORD`}"
      roles:
        - name: pipeline
          db_name: my-mysql
          creation_statements: "GRANT ALL ON *.* TO '{{name}}'@'%' IDENTIFIED BY '{{password}}';"
          default_ttl: "10m"
          max_ttl: "24h"

Identity Groups

Allows you to configure identity groups.


Only external groups are supported at the moment, through the use of group-aliases. For supported authentication backends (for example JWT, which automatically matches those aliases to groups returned by the backend), the configuration for the groups and group-aliases needs to be applied after the authentication backend has been mounted. Ideally, they should be in the same file to avoid errors.

groups:
  - name: admin
    policies:
      - admin
    metadata:
      admin: "true"
      priviliged: "true"
    type: external

group-aliases:
  - name: admin
    mountpath: jwt
    group: admin


Key-Value

This plugin stores arbitrary secrets within the configured physical storage for Vault.

secrets:
  - path: secret
    type: kv
    description: General secrets.
    options:
      version: 2
    configuration:
      config:
        - max_versions: 100

Non-default plugin path

Mounts a non-default plugin’s path.

secrets:
  - path: ethereum-gateway
    type: plugin
    plugin_name: ethereum-plugin
    description: Immutability's Ethereum Wallet


PKI

The PKI secrets engine generates X.509 certificates.

secrets:
  - type: pki
    description: Vault PKI Backend
    config:
      default_lease_ttl: 168h
      max_lease_ttl: 720h
    configuration:
      config:
      - name: urls
        issuing_certificates: https://vault.default:8200/v1/pki/ca
        crl_distribution_points: https://vault.default:8200/v1/pki/crl
      root/generate:
      - name: internal
        common_name: vault.default
      roles:
      - name: default
        allowed_domains: localhost,pod,svc,default
        allow_subdomains: true
        generate_lease: true
        ttl: 30m


RabbitMQ

The RabbitMQ secrets engine generates user credentials dynamically, based on configured permissions and virtual hosts.

To start a RabbitMQ test server, run: docker run -it --rm -p 15672:15672 rabbitmq:3.7-management-alpine

secrets:
  - type: rabbitmq
    description: local-rabbit
    configuration:
      config:
        - name: connection
          connection_uri: "http://localhost:15672"
          username: guest
          password: guest
      roles:
        - name: prod_role
          vhosts: '{"/web":{"write": "production_.*", "read": "production_.*"}}'


SSH

Create a named Vault role for signing SSH client keys.

secrets:
  - type: ssh
    path: ssh-client-signer
    description: SSH Client Key Signing.
    configuration:
      config:
        - name: ca
          generate_signing_key: "true"
      roles:
        - name: my-role
          allow_user_certificates: "true"
          allowed_users: "*"
          key_type: "ca"
          default_user: "ubuntu"
          ttl: "24h"
          default_extensions:
            permit-pty: ""
            permit-port-forwarding: ""
            permit-agent-forwarding: ""

3.7 - Startup secrets

Allows writing some secrets to Vault (useful for development purposes). For details, see the Key-Value secrets engine.

startupSecrets:
  - type: kv
    path: secret/data/accounts/aws
    data:
      data:
        AWS_ACCESS_KEY_ID: secretId
        AWS_SECRET_ACCESS_KEY: s3cr3t