Vault operator

The Vault operator builds on Bank-Vaults features such as:

  • external, API based configuration (secret engines, auth methods, policies) to automatically re/configure a Vault cluster
  • automatic unsealing (AWS, GCE, Azure, Alibaba, Kubernetes Secrets (for dev purposes), Oracle)
  • TLS support

The operator flow is the following:

(Figure: the operator flow)

The source code can be found in the vault-operator repository.

The operator requires certain cloud permissions; for the detailed list, see the cloud permissions page of the documentation.

Deploy a local Vault operator

This is the simplest scenario: you install the Vault operator on a simple cluster. The following commands install a single-node Vault instance that stores unseal and root tokens in Kubernetes secrets. If you want to customize the Helm chart, see the list of vault-operator Helm chart values.

  1. Install the Bank-Vaults operator:

    helm upgrade --install --wait vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator
    

    Expected output:

    Release "vault-operator" does not exist. Installing it now.
    Pulled: ghcr.io/bank-vaults/helm-charts/vault-operator:1.20.0
    Digest: sha256:46045be1c3b215f0c734908bb1d4022dc91eae48d2285382bb71d63f72c737d1
    NAME: vault-operator
    LAST DEPLOYED: Thu Jul 27 11:22:55 2023
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    
  2. Create a Vault instance using the Vault custom resources. This will create a Kubernetes CustomResource called vault and a PersistentVolumeClaim for it:

    kubectl kustomize https://github.com/bank-vaults/vault-operator/deploy/rbac | kubectl apply -f -
        

    Expected output:

    serviceaccount/vault created
    role.rbac.authorization.k8s.io/vault created
    role.rbac.authorization.k8s.io/leader-election-role created
    rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
    rolebinding.rbac.authorization.k8s.io/vault created
    clusterrolebinding.rbac.authorization.k8s.io/vault-auth-delegator created

    kubectl apply -f https://raw.githubusercontent.com/bank-vaults/vault-operator/v1.21.0/deploy/examples/cr-raft.yaml

    Expected output:

    vault.vault.banzaicloud.com/vault created

    Note: If needed, you can install the latest CustomResource from the main branch, but that’s usually under development and might not be stable.

    kubectl kustomize https://github.com/bank-vaults/vault-operator/deploy/crd | kubectl apply -f -
  3. Wait a few seconds, then check the operator and the vault pods:

    kubectl get pods
    

    Expected output:

    NAME                                                        READY     STATUS    RESTARTS   AGE
    vault-0                                                     3/3       Running   0          10s
    vault-configurer-6c545cb6b4-dmvb5                           1/1       Running   0          10s
    vault-operator-788559bdc5-kgqkg                             1/1       Running   0          23s
    
  4. Configure your Vault client to access the Vault instance running in the vault-0 pod.

    1. Port-forward into the pod:

      kubectl port-forward vault-0 8200 &
      
    2. Set the address of the Vault instance.

      export VAULT_ADDR=https://127.0.0.1:8200
      
    3. Import the CA certificate of the Vault instance by running the following commands (otherwise, you’ll get x509: certificate signed by unknown authority errors):

      kubectl get secret vault-tls -o jsonpath="{.data.ca\.crt}" | base64 --decode > $PWD/vault-ca.crt
      export VAULT_CACERT=$PWD/vault-ca.crt
      

      Alternatively, you can instruct the Vault client to skip verifying the certificate of Vault by running: export VAULT_SKIP_VERIFY=true

    4. If you already have the Vault CLI installed, check that you can access the Vault:

      vault status
      

      Expected output:

      Key             Value
      ---             -----
      Seal Type       shamir
      Initialized     true
      Sealed          false
      Total Shares    5
      Threshold       3
      Version         1.5.4
      Cluster Name    vault-cluster-27ecd0e6
      Cluster ID      ed5492f3-7ef3-c600-aef3-bd77897fd1e7
      HA Enabled      false
      
    5. To authenticate to Vault, you can access its root token by running:

      export VAULT_TOKEN=$(kubectl get secrets vault-unseal-keys -o jsonpath={.data.vault-root} | base64 --decode)
      

      Note: Using the root token is recommended only in test environments. In a production environment, create dedicated, time-limited tokens.

    6. Now you can interact with Vault. For example, add a secret by running vault kv put secret/demosecret/aws AWS_SECRET_ACCESS_KEY=s3cr3t. If you want to access the Vault web interface, open https://127.0.0.1:8200 in your browser and log in with the root token (to reveal the token, run echo $VAULT_TOKEN).
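
      For example, a quick sketch of storing a secret and reading it back with the Vault CLI:

      vault kv put secret/demosecret/aws AWS_SECRET_ACCESS_KEY=s3cr3t
      vault kv get secret/demosecret/aws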

For other configuration examples of the Vault CustomResource, see the YAML files in the deploy/examples and test/deploy directories of the vault-operator repository. After you are done experimenting with Bank-Vaults and you want to delete the operator, you can delete the related CRs:

kubectl kustomize https://github.com/bank-vaults/vault-operator/deploy/rbac | kubectl delete -f -
kubectl delete -f https://raw.githubusercontent.com/bank-vaults/vault-operator/v1.21.0/deploy/examples/cr-raft.yaml

HA setup with Raft

In a production environment, you want to run Vault as a cluster. The following CR creates a 3-node Vault instance that uses the Raft storage backend (an abbreviated sketch of the CR is shown after these steps):

  1. Install the Bank-Vaults operator:

    helm upgrade --install --wait vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator
    
  2. Create a Vault instance using the cr-raft.yaml custom resource. This will create a Kubernetes CustomResource called vault that uses the Raft backend:

    kubectl kustomize https://github.com/bank-vaults/vault-operator/deploy/rbac | kubectl apply -f -
        

    Expected output:

    serviceaccount/vault created
    role.rbac.authorization.k8s.io/vault created
    role.rbac.authorization.k8s.io/leader-election-role created
    rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
    rolebinding.rbac.authorization.k8s.io/vault created
    clusterrolebinding.rbac.authorization.k8s.io/vault-auth-delegator created

    kubectl apply -f https://raw.githubusercontent.com/bank-vaults/vault-operator/v1.21.0/deploy/examples/cr-raft.yaml

    Expected output:

    vault.vault.banzaicloud.com/vault created

    Note: If needed, you can install the latest CustomResource from the main branch, but that’s usually under development and might not be stable.

    kubectl kustomize https://github.com/bank-vaults/vault-operator/deploy/crd | kubectl apply -f -

CAUTION:

Make sure to set up a solution for backing up the storage backend to prevent data loss. Bank-Vaults doesn’t do this automatically. We recommend using Velero for backups.
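
For reference, the heart of the cr-raft.yaml example is a three-member cluster using the raft storage backend. An abbreviated sketch follows (consult the repository for the full, authoritative resource):

spec:
  size: 3
  config:
    storage:
      raft:
        path: /vault/file
    listener:
      tcp:
        address: "0.0.0.0:8200"
        tls_cert_file: /vault/tls/server.crt
        tls_key_file: /vault/tls/server.key
    cluster_addr: "https://${.Env.POD_NAME}:8201"
    ui: true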

Pod anti-affinity

If you want to set up pod anti-affinity, set the podAntiAffinity field of the Vault custom resource to a topologyKey value. For example, you can use failure-domain.beta.kubernetes.io/zone to force Kubernetes to deploy Vault across multiple availability zones, as sketched below.
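
A minimal sketch in the Vault custom resource (assuming the podAntiAffinity field takes the topology key directly, as in the repository examples):

spec:
  podAntiAffinity: failure-domain.beta.kubernetes.io/zone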

Delete a resource created by the operator

If you manually delete a resource that the Bank-Vaults operator has created (for example, the Ingress resource), the operator automatically recreates it every 30 seconds. If it doesn’t, then something went wrong, or the operator is not running. In this case, check the logs of the operator.
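
For example, assuming the operator runs as a Deployment named vault-operator in the current namespace:

kubectl logs -f deployment/vault-operator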

1 - Running Vault with external end to end encryption

This document assumes that you have:

  • a working Kubernetes cluster
  • a working install of Vault
  • a working knowledge of Kubernetes
  • a working install of Helm
  • a working knowledge of Kubernetes ingress
  • a valid external SSL certificate (for example, for www.example.com), verified by your provider and stored as a Kubernetes secret

Background

The bank-vaults operator takes care of creating and maintaining internal cluster communications, but if you wish to use your Vault install outside of your Kubernetes cluster, what is the best way to maintain a secure state? Creating a standard Ingress object will reverse proxy these requests to your Vault instance, but this involves a hand-off between the external SSL connection and the internal one. This might not be acceptable under some circumstances, for example, if you have to adhere to strict security standards.

Workflow

Here we will create a separate TCP listener for Vault using a custom SSL certificate on an external domain of your choosing. We will then install a dedicated ingress-nginx controller that allows SSL passthrough. SSL passthrough comes with a performance hit, so you would not use it on a production website or on an ingress controller that handles a lot of traffic.

Install

ingress-nginx

values.yaml

controller:
  electionID: vault-ingress-controller-leader
  ingressClass: nginx-vault
  extraArgs:
    enable-ssl-passthrough:
  publishService:
    enabled: true
  scope:
    enabled: true
  replicaCount: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: release
            operator: In
            values: ["vault-ingress"]
        topologyKey: kubernetes.io/hostname

Install ingress-nginx via Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx -f values.yaml

Configuration

SSL Secret example:

apiVersion: v1
data:
  tls.crt: LS0tLS1......=
  tls.key: LS0tLS.......==
kind: Secret
metadata:
  labels:
    ssl: "true"
    tls: "true"
  name: wildcard.example.com
type: Opaque

CR Vault Config:

---
apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
  namespace: secrets
spec:
  size: 2
  image: hashicorp/vault:1.14.1
  bankVaultsImage: ghcr.io/bank-vaults/bank-vaults:latest

  # A YAML representation of a final vault config file.
  # See https://developer.hashicorp.com/vault/docs/configuration for more information.
  config:
    listener:
      - tcp:
          address: "0.0.0.0:8200"
          tls_cert_file: /vault/tls/server.crt
          tls_key_file: /vault/tls/server.key
      - tcp:
          address: "0.0.0.0:8300"
          tls_cert_file: /etc/ingress-tls/tls.crt
          tls_key_file: /etc/ingress-tls/tls.key
    api_addr: https://vault:8200
    cluster_addr: https://vault:8201
    ui: true

CR Service:

  # Specify the Service's type where the Vault Service is exposed
  serviceType: ClusterIP
  servicePorts:
    api-port: 8200
    cluster-port: 8201
    ext-api-port: 8300
    ext-clu-port: 8301

Mount the secret into your vault pod

  volumes:
    - name: wildcard-ssl
      secret:
        defaultMode: 420
        secretName: wildcard.example.com

  volumeMounts:
    - name: wildcard-ssl
      mountPath: /etc/ingress-tls

CR Ingress:

  # Request an Ingress controller with the default configuration
  ingress:
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      kubernetes.io/ingress.class: "nginx-vault"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      nginx.ingress.kubernetes.io/whitelist-source-range: "127.0.0.1"

    spec:
      rules:
        - host: vault.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: vault
                  servicePort: 8300

2 - Using templates for injecting dynamic configuration

Background

When configuring a Vault object via the externalConfig property, sometimes it’s convenient (or necessary) to inject settings that are only known at runtime, for example:

  • secrets that you don’t want to store in source control
  • dynamic resources managed elsewhere
  • computations based on multiple values (string or arithmetic operations).

For these cases, the operator supports parameterized templating. The vault-configurer component evaluates the templates and injects the rendered configuration into Vault.

This templating is based on Go templates, extended by Sprig, with some custom functions available specifically for bank-vaults (for example, to decrypt strings using the AWS Key Management Service or the Cloud Key Management Service of the Google Cloud Platform).

Using templates

To avoid confusion and potential parsing errors (and interference with other templating tools like Helm), the templates don’t use the default delimiters that Go templates use ({{ and }}). Instead, use the following characters:

  • ${ for the left delimiter
  • } for the right one.
  • To quote parameters being passed to functions, surround them with backticks (`) instead.

For example, to call the env function, you can use this in your manifest:

password: "${ env `MY_ENVIRONMENT_VARIABLE` }"

In this case, vault-configurer evaluates the value of MY_ENVIRONMENT_VARIABLE at runtime (assuming it was properly injected), and sets the result as the value of the password field.

Note that you can also use Sprig functions and custom Kubernetes-related functions in your templates.
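
For instance, a Sprig string function can wrap the env call shown above (a small sketch; upper is a standard Sprig function):

password: "${ upper (env `MY_ENVIRONMENT_VARIABLE`) }"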

For a detailed example, see the Using templates for injecting dynamic configuration in Vault blog post.

Sprig functions

In addition to the default functions in Go templates, you can also use Sprig functions in your configuration.

CAUTION:

Use only functions that return a string, otherwise the generated configuration is invalid.

Custom functions

To provide functionality that’s more Kubernetes-friendly and cloud-native, bank-vaults provides a few additional functions not available in Sprig or Go. The functions and their parameters (in the order in which they must be passed to the function) are documented below.

awskms

Takes a base64-encoded, KMS-encrypted string and returns the decrypted string. Additionally, the function takes an optional second parameter for any encryption context that might be required for decrypting. If any encryption context is required, the function will take any number of additional parameters, each of which should be a key-value pair (separated by a = character), corresponding to the full context.

Note: This function assumes that the vault-configurer pod has the appropriate AWS IAM credentials and permissions to decrypt the given string. You can inject the AWS IAM credentials by using Kubernetes secrets as environment variables, an EC2 instance role, kube2iam, or EKS IAM roles, and so on.

Parameter           Type                       Required
encodedString       Base64-encoded string      Yes
encryptionContext   Variadic list of strings   No

For example:

password: '${ awskms (env `ENCRYPTED_DB_CREDS`) }'

You can also nest functions in the template, for example:

password: '${ awskms (blob `s3://bank-vaults/encrypted/db-creds?region=eu-west-1`) }'
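
If the ciphertext was encrypted with an encryption context, pass the key=value pairs as additional parameters (a sketch; the context key and value here are hypothetical):

password: '${ awskms (env `ENCRYPTED_DB_CREDS`) `purpose=database` }'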

gcpkms

Takes a base64-encoded string, encrypted with a Google Cloud Platform (GCP) symmetric key and returns the decrypted string.

Note: This function assumes that the vault-configurer pod has the appropriate GCP IAM credentials and permissions to decrypt the given string. You can inject the GCP IAM credentials by using Kubernetes secrets as environment variables, or they can be acquired via a service account authentication, and so on.

Parameter       Type                    Required
encodedString   Base64-encoded string   Yes
projectId       String                  Yes
location        String                  Yes
keyRing         String                  Yes
key             String                  Yes
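
For example (a sketch; the project, location, key ring, and key names are hypothetical, passed in the parameter order above):

password: '${ gcpkms (env `ENCRYPTED_DB_CREDS`) `my-project` `global` `my-key-ring` `my-key` }'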

blob

Reads the content of a blob from disk (file) or from cloud blob storage services (object storage) at the given URL and returns it. This assumes that the path exists, and is readable by vault-configurer.

Valid values for the URL parameter are listed below; for more fine-grained options, check the documentation of the underlying library:

  • file:///path/to/dir/file
  • s3://my-bucket/object?region=us-west-1
  • gs://my-bucket/object
  • azblob://my-container/blob

Note: This function assumes that the vault-configurer pod has the appropriate rights to access the given cloud service. For details, see the awskms and gcpkms functions.

Parameter   Type     Required
url         String   Yes

For example:

password: '${ blob `s3://bank-vaults/encrypted/db-creds?region=eu-west-1` }'

You can also nest functions in the template, for example:

password: '${ awskms (blob `s3://bank-vaults/encrypted/db-creds?region=eu-west-1`) }'

file

Reads the content of a file from disk at the given path and returns it. This assumes that the file exists, is mounted, and is readable by vault-configurer.

Parameter   Type     Required
path        String   Yes
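
For example (a sketch with a hypothetical mounted file path):

password: '${ file `/etc/config/db-password` }'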

accessor

Looks up the accessor ID of the given auth path and returns it. This function is only useful in templated policies, to generalize the <mount accessor> field.

Parameter   Type     Required
path        String   Yes

For example:

policies:
  - name: allow_secrets
    rules: path "secret/data/{{identity.entity.aliases.${ accessor `kubernetes/` }.metadata.service_account_namespace}}/*" {
             capabilities = ["read"]
           }

3 - Environment variables

You can add environment variables to the different containers of the Bank-Vaults pod using the following configuration options:

  • envsConfig: Adds environment variables to all Bank-Vaults pods.
  • sidecarEnvsConfig: Adds environment variables to Vault sidecar containers.
  • vaultEnvsConfig: Adds environment variables to the Vault container.

For example:

envsConfig:
  - name: ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mysql-login
        key: user
  - name: ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-login
        key: password

See the database secret engine section for usage.
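
Similarly, a minimal sketch of vaultEnvsConfig (VAULT_LOG_LEVEL is a standard Vault environment variable; the value is just an example):

vaultEnvsConfig:
  - name: VAULT_LOG_LEVEL
    value: debug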

4 - Upgrade strategies

Upgrade Vault

To upgrade Vault, complete the following steps.

  1. Check the release notes of Vault for any special upgrade instructions. Usually there are no instructions, but it’s better to be safe than sorry.
  2. Adjust the spec.image field in the Vault custom resource (see the sketch after these steps). If you are using the Vault Helm chart, adjust the image.tag field in the values.yaml.
  3. The Vault Helm chart updates the StatefulSet. It does not take the HA leader into account in HA scenarios, but this has never caused any issues so far.
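
A sketch of the change described in step 2 (the version shown is an example; use the version you are upgrading to):

spec:
  image: hashicorp/vault:1.14.1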

Upgrade Vault operator

v1.20.0 upgrade guide

The release of the Vault Operator v1.20.0 marks the beginning of a new chapter in the development of the Bank-Vaults ecosystem, as this is the first release across the project since it was split out of the original banzaicloud/bank-vaults repository into its own bank-vaults organization. We paid attention not to introduce breaking changes during the process; however, the following changes are now in effect:

  • All Helm charts will now be distributed via OCI registry on GitHub.
  • All releases will be tagged with a v prefix, starting from v1.20.0.

Upgrade to the new Vault Operator by changing the helm repository to the new OCI registry, and specifying the new version numbers:

helm upgrade vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator \
	--set image.tag=v1.20.0 \
	--set bankVaults.image.tag=v1.20.0 \
	--wait

Make sure to also change the bank-vaults image in the Vault CR’s spec.bankVaultsImage field to ghcr.io/bank-vaults/bank-vaults:1.20.x.

5 - Operator Configuration for Functioning Webhook Secrets Mutation

You can find several examples of the vault operator CR manifest in the vault-operator repository. The following examples use only this vanilla CR to demonstrate some main points about how to properly configure the operator for secrets mutations to function.

This document does not attempt to explain every possible scenario with respect to the CRs in the aforementioned directory, but instead attempts to explain at a high level the important aspects of the CR, so that you can determine how best to configure your operator.

Main points

Some important aspects of the operator and its configuration with respect to secrets mutation are:

  • The vault operator instantiates:
    • the vault configurer pod(s),
    • the vault pod(s),
    • the vault-secrets-webhook pod(s).
  • The vault configurer:
    • unseals the vault,
    • configures vault with policies, roles, and so on.
  • vault-secrets-webhook does nothing more than:
    • monitor the cluster for resources with specific annotations for secrets injection, and
    • integrate with the Vault API to answer secrets requests from those resources.
    • For pods using environment secrets, it injects a vault-env binary into the pod and updates the ENTRYPOINT to run vault-env CMD instead of CMD. At runtime, vault-env intercepts requests for environment secrets and, for each such request, calls the Vault API and injects the requested secret into the environment variable, so CMD works with the proper secrets.
  • Vault
    • the secrets workhorse
    • surfaces a RESTful API for secrets management

CR configuration properties

This section goes over some important properties of the CR and their purpose.

Vault’s service account

This is the service account that Vault runs under. The Configurer runs in the same namespace and should have the same service account. The operator assigns this service account to Vault.

  # Specify the ServiceAccount where the Vault Pod and the Bank-Vaults configurer/unsealer is running
  serviceAccount: vault

caNamespaces

In order for Vault communication to be encrypted, valid TLS certificates need to be used. The following property automatically creates TLS certificate secrets for the namespaces specified here. Notice that this is a list, so you can specify multiple namespaces (one per line), or use the wildcard asterisk ("*") to specify all namespaces:

  # Support for distributing the generated CA certificate Secret to other namespaces.
  # Define a list of namespaces or use ["*"] for all namespaces.
  caNamespaces:
    - "*"

Vault Config

The following is simply a YAML representation (as the comment says) of the Vault configuration you want to run. This is the configuration that vault-configurer uses to configure your running Vault:

  # A YAML representation of a final vault config file.
  config:
    api_addr: https://vault:8200
    cluster_addr: https://${.Env.POD_NAME}:8201
    listener:
      tcp:
        address: 0.0.0.0:8200
        # Commenting the following line and deleting tls_cert_file and tls_key_file disables TLS
        tls_cert_file: /vault/tls/server.crt
        tls_key_file: /vault/tls/server.key
    storage:
      file:
        path: "${ .Env.VAULT_STORAGE_FILE }"
    ui: true
  credentialsConfig:
    env: ""
    path: ""
    secretName: ""
  etcdSize: 0
  etcdVersion: ""
  externalConfig:
    policies:
      - name: allow_secrets
        rules: path "secret/*" {
          capabilities = ["create", "read", "update", "delete", "list"]
          }
    auth:
      - type: kubernetes
        roles:
          # Allow every pod in the default namespace to use the secret kv store
          - name: default
            bound_service_account_names:
              - external-secrets
              - vault
              - dex
            bound_service_account_namespaces:
              - external-secrets
              - vault
              - dex
              - auth-system
              - loki
              - grafana
            policies:
              - allow_secrets
            ttl: 1h

          # Allow mutation of secrets using secrets-mutation annotation to use the secret kv store
          - name: secretsmutation
            bound_service_account_names:
              - vault-secrets-webhook
            bound_service_account_namespaces:
              - vault-secrets-webhook
            policies:
              - allow_secrets
            ttl: 1h

externalConfig

The externalConfig portion of this CR example correlates to Kubernetes configuration as specified by .auth[].type.

This YAML representation of configuration is flexible enough to work with any auth methods available to Vault as documented in the Vault documentation. For now, we’ll stick with this kubernetes configuration.

externalConfig.purgeUnmanagedConfig

Deletes any configuration that is present in Vault but not in externalConfig, as shown below. For more details, see Purge unmanaged configuration.
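
A minimal sketch of enabling it in the Vault CR (field names follow the vault-operator CR schema):

  externalConfig:
    purgeUnmanagedConfig:
      enabled: true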

externalConfig.policies

Correlates 1:1 to the creation of the specified policy in conjunction with Vault policies.

externalConfig.auth[].type

- type: kubernetes - specifies to configure Vault to use Kubernetes authentication

Other types are yet to be documented with respect to the operator configuration.

externalConfig.auth[].roles[]

Correlates to Creating Kubernetes roles. Some important nuances here are:

  • Vault does not respect inline secrets serviceaccount annotations, so the namespace of any serviceaccount annotation for secrets is irrelevant to getting inline secrets mutations functioning.
  • Instead, the serviceaccount of the vault-secrets-webhook pod(s) should be used to configure the bound_service_account_names and bound_service_account_namespaces for inline secrets to mutate.
  • Pod serviceaccounts, however, are respected, so bound_service_account_namespaces and bound_service_account_names for environment mutations must identify those of the running pods.

Note: There are two roles specified in the YAML example above: one for pods, and one for inline secrets mutations. While this was not strictly required, it makes for cleaner implementation.

6 - TLS

Bank-Vaults tries to automate as much as possible for handling TLS certificates.

  • The vault-operator automates the creation and renewal of TLS certificates for Vault.
  • The vault Helm Chart automates only the creation of TLS certificates for Vault via Sprig.

Both the operator and the chart generate a Kubernetes Secret holding the TLS certificates; it is named ${VAULT_CR_NAME}-tls. For most examples in the vault-operator repository, the name of the secret is vault-tls.

The Secret data keys are:

  • ca.crt
  • ca.key
  • server.crt
  • server.key

Note: The operator doesn’t overwrite this Secret if it already exists, so you can provide this certificate in any other way, for example using cert-manager or by simply placing it there manually.

Operator custom TLS settings

The following attributes influence the TLS settings of the operator. The ca.crt key is mandatory in existingTlsSecretName, otherwise the Bank-Vaults components can’t verify the Vault server certificate.

CANamespaces

The list of namespaces where the generated CA certificate for Vault should be distributed. Use ["*"] for all namespaces.

Default value: []

ExistingTLSSecretName

The name of the secret that contains a TLS server certificate, key, and the corresponding CA certificate. The secret must be in the kubernetes.io/tls type secret keys + ca.crt key format. If the attribute is set, the operator uses the certificate already set in the secret, otherwise it generates a new one.

The ca.crt key is mandatory, otherwise the Bank-Vaults components can’t verify the Vault server certificate.

Default value: ""

TLSAdditionalHosts

A list of hostnames or IP addresses to add to the SAN of the automatically generated TLS certificate.

Default value: []
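
A sketch of this attribute in the Vault custom resource spec (the hostname is an example):

  tlsAdditionalHosts:
    - vault.example.com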

TLSExpiryThreshold

The expiration threshold of the Vault TLS certificate in Go Duration format.

Default value: 168h

Helm chart custom TLS settings

Starting with version 1.20, the Vault Helm chart allows you to set custom TLS settings. The following attributes influence the TLS settings of the Helm chart. The ca.crt key is mandatory in secretName, otherwise the Bank-Vaults components can’t verify the Vault server certificate.

SecretName

The name of the secret that contains a TLS server certificate, key, and the corresponding CA certificate. The secret must be in the kubernetes.io/tls type secret keys + ca.crt key format. If the attribute is set, the operator uses the certificate already set in the secret, otherwise it generates a new one.

The ca.crt key is mandatory, otherwise the Bank-Vaults components can’t verify the Vault server certificate.

Default value: ""

CANamespaces

The list of namespaces where the generated CA certificate for Vault should be distributed.

Default value: []

Using the generated custom TLS certificate with vault-operator

To use an existing secret which contains the TLS certificate, define existingTlsSecretName in the Vault custom resource.
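
For example, a minimal sketch in the Vault CR spec, reusing the secret name generated in the cert-manager example later in this section:

  existingTlsSecretName: selfsigned-cert-tls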

Generate custom certificates with CFSSL

If you don’t want to use the certificates generated by Helm or the Bank-Vaults operator, the easiest way to create a custom certificate for Bank-Vaults is using CFSSL.

The TLS directory in the documentation holds a set of custom CFSSL configurations, which are prepared for the Helm release name vault in the default namespace. Of course, you can put any other certificates into the Secret below; this is just an example.

  1. Install CFSSL.

  2. Create a CA:

    cfssl genkey -initca csr.json | cfssljson -bare ca
    
  3. Create a server certificate:

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=config.json -profile=server server.json | cfssljson -bare server
    
  4. Put these certificates (and the server key) into a Kubernetes Secret:

    kubectl create secret generic vault-tls --from-file=ca.crt=ca.pem --from-file=server.crt=server.pem --from-file=server.key=server-key.pem
    
  5. Install the Vault instance:

    • With the chart which uses this certificate:
    helm upgrade --install vault ../charts/vault --set tls.secretName=vault-tls
    
    • With the operator, create a Vault custom resource, and apply it:
    kubectl apply -f vault-cr.yaml
    

Generate custom certificates with cert-manager

You can use the following cert-manager custom resource to generate a certificate for Bank-Vaults.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
spec:
  commonName: vault
  usages:
    - server auth
  dnsNames:
    - vault
    - vault.default
    - vault.default.svc
    - vault.default.svc.cluster.local
  ipAddresses:
    - 127.0.0.1
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
EOF

7 - Backing up Vault

You can configure the vault-operator to create backups of the Vault cluster with Velero.

Prerequisites

  • The Velero CLI must be installed on your computer.
  • To create Persistent Volume (PV) snapshots, you need access to object storage. The following example uses an Amazon S3 bucket called bank-vaults-velero in the Stockholm region.

Install Velero

To configure the vault-operator to create backups of the Vault cluster, complete the following steps.

  1. Install Velero on the target cluster with Helm.

    1. Add the Velero Helm repository:

      helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
      
    2. Create a namespace for Velero:

      kubectl create namespace velero
      
    3. Install Velero with Restic so you can create PV snapshots as well:

      BUCKET=bank-vaults-velero
      REGION=eu-north-1
      KMS_KEY_ID=alias/bank-vaults-velero
      SECRET_FILE=~/.aws/credentials
      
      helm upgrade --install velero --namespace velero \
                --set "configuration.backupStorageLocation[0].name"=aws \
                --set "configuration.backupStorageLocation[0].provider"=aws \
                --set "configuration.backupStorageLocation[0].bucket"=YOUR_BUCKET_NAME \
                --set "configuration.backupStorageLocation[0].config.region"=${REGION} \
                --set "configuration.backupStorageLocation[0].config.kmsKeyId"=${KMS_KEY_ID} \
                --set "configuration.volumeSnapshotLocation[0].name"=aws \
                --set "configuration.volumeSnapshotLocation[0].provider"=aws \
                --set "configuration.volumeSnapshotLocation[0].config.region"=${REGION} \
                --set "initContainers[0].name"=velero-plugin-for-aws \
                --set "initContainers[0].image"=velero/velero-plugin-for-aws:v1.7.0 \
                --set "initContainers[0].volumeMounts[0].mountPath"=/target \
                --set "initContainers[0].volumeMounts[0].name"=plugins \
                vmware-tanzu/velero
      
  2. Install the vault-operator to the cluster:

    helm upgrade --install vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator
    
    kubectl apply -f operator/deploy/rbac.yaml
    kubectl apply -f operator/deploy/cr-raft.yaml
    

    Note: The Vault CR in cr-raft.yaml has a special flag called veleroEnabled. This is useful for file-based Vault storage backends (file, raft); see the Velero documentation:

      # Add Velero fsfreeze sidecar container and supporting hook annotations to Vault Pods:
      # https://velero.io/docs/v1.2.0/hooks/
      veleroEnabled: true
    
  3. Create a backup with the Velero CLI or with the predefined Velero Backup CR:

    velero backup create --selector vault_cr=vault vault-1
    
    # OR
    
    kubectl apply -f https://raw.githubusercontent.com/bank-vaults/bank-vaults.dev/main/content/docs/operator/backup/backup.yaml
    

    Note: For a daily scheduled backup, see schedule.yaml.

  4. Check that the Velero backup got created successfully:

    velero backup describe --details vault-1
    

    Expected output:

    Name:         vault-1
    Namespace:    velero
    Labels:       velero.io/backup=vault-1
                  velero.io/pv=pvc-6eb4d9c1-25cd-4a28-8868-90fa9d51503a
                  velero.io/storage-location=default
    Annotations:  <none>
    
    Phase:  Completed
    
    Namespaces:
      Included:  *
      Excluded:  <none>
    
    Resources:
      Included:        *
      Excluded:        <none>
      Cluster-scoped:  auto
    
    Label selector:  vault_cr=vault
    
    Storage Location:  default
    
    Snapshot PVs:  auto
    
    TTL:  720h0m0s
    
    Hooks:  <none>
    
    Backup Format Version:  1
    
    Started:    2020-01-29 14:17:41 +0100 CET
    Completed:  2020-01-29 14:17:45 +0100 CET
    
    Expiration:  2020-02-28 14:17:41 +0100 CET
    

Test the backup

  1. To emulate a catastrophe, remove Vault entirely from the cluster:

    kubectl delete vault -l vault_cr=vault
    kubectl delete pvc -l vault_cr=vault
    
  2. Now restore Vault from the backup.

    1. Scale down the vault-operator, so it won’t reconcile during the restore process:

      kubectl scale deployment vault-operator --replicas 0
      
    2. Restore all Vault-related resources from the backup:

      velero restore create --from-backup vault-1
      
    3. Check that the restore has finished properly:

      velero restore get
      NAME                    BACKUP   STATUS      WARNINGS   ERRORS   CREATED                         SELECTOR
      vault1-20200129142409   vault1   Completed   0          0        2020-01-29 14:24:09 +0100 CET   <none>
      
    4. Check that the Vault cluster got actually restored:

      kubectl get pods
      NAME                                READY   STATUS    RESTARTS   AGE
      vault-0                             4/4     Running   0          1m42s
      vault-1                             4/4     Running   0          1m42s
      vault-2                             4/4     Running   0          1m42s
      vault-configurer-5499ff64cb-g75vr   1/1     Running   0          1m42s
      
    5. Scale the operator back after the restore process:

      kubectl scale deployment vault-operator --replicas 1
      
  3. Delete the backup if you don’t wish to keep it anymore:

    velero backup delete vault-1
    

8 - Running the Bank-Vaults secret webhook alongside Istio

Both the vault-operator and the vault-secrets-webhook can work on Istio-enabled clusters.

We support the following three scenarios, described in the subsections below:

Prerequisites

  1. Install the Istio operator.

  2. Make sure you have mTLS enabled in the Istio mesh through the operator. Enable mTLS if it is not set to STRICT:

    kubectl patch istio -n istio-system mesh --type=json -p='[{"op": "replace", "path": "/spec/meshPolicy/mtlsMode", "value": "STRICT"}]'
    
  3. Check that the mesh is configured with mTLS turned on, which applies to all applications in the cluster in Istio-enabled namespaces. You can change this if you would like to use another policy.

    kubectl get meshpolicy default -o yaml
    

    Expected output:

    apiVersion: authentication.istio.io/v1alpha1
    kind: MeshPolicy
    metadata:
      name: default
      labels:
        app: security
    spec:
      peers:
      - mtls: {}
    

Now your cluster is properly running on Istio with mTLS enabled globally.

Install the Bank-Vaults components

  1. We recommend creating a separate namespace for Bank-Vaults called vault-system. You can enable Istio sidecar injection here as well, but Kubernetes won’t be able to call back the webhook properly, since mTLS is enabled (and Kubernetes itself is outside of the Istio mesh). To overcome this, apply a PERMISSIVE Istio authentication policy to the vault-secrets-webhook Service itself, so Kubernetes can call it back without Istio mutual TLS authentication.

    kubectl create namespace vault-system
    kubectl label namespace vault-system name=vault-system istio-injection=enabled
    
    kubectl apply -f - <<EOF
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: vault-secrets-webhook
      namespace: vault-system
      labels:
        app: security
    spec:
      targets:
      - name: vault-secrets-webhook
      peers:
      - mtls:
          mode: PERMISSIVE
    EOF
    
  2. Now you can install the operator and the webhook to the prepared namespace:

    helm upgrade --install --wait vault-secrets-webhook oci://ghcr.io/bank-vaults/helm-charts/vault-secrets-webhook --namespace vault-system --create-namespace
    helm upgrade --install --wait vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator --namespace vault-system
    

Soon the webhook and the operator will be up and running. Check that the istio-proxy got injected into all Pods in vault-system.

Proceed to the description of your scenario:

8.1 - Scenario 1 - Vault runs outside, the application inside the mesh

In this scenario, Vault runs outside an Istio mesh, whereas the namespace where the application runs and the webhook injects secrets has Istio sidecar injection enabled.

Running Vault outside the Istio mesh

First, complete the Prerequisites, then install Vault outside the mesh, and finally install an application within the mesh.

Install Vault outside the mesh

  1. Provision a Vault instance with the Bank-Vaults operator in a separate namespace:

    kubectl create namespace vault
    
  2. Apply the RBAC and CR files to the cluster to create a Vault instance in the vault namespace with the operator:

    kubectl apply -f rbac.yaml -f cr-istio.yaml
    
    kubectl get pods -n vault
    

    Expected output:

    NAME                               READY   STATUS    RESTARTS   AGE
    vault-0                            3/3     Running   0          22h
    vault-configurer-6458cc4bf-6tpkz   1/1     Running   0          22h
    

    If you are writing your own Vault CR, make sure that istioEnabled: true is configured; this influences port naming so that Istio detects the Vault service port protocols correctly.

  3. The vault-secrets-webhook can’t inject Vault secrets into initContainers in an Istio-enabled namespace when the STRICT authentication policy is applied to the Vault service, because Istio needs a sidecar container to do mTLS properly, and in the phase when initContainers run, the Pod doesn’t have a sidecar yet. If you wish to inject into initContainers as well, apply a PERMISSIVE authentication policy in the vault namespace. Since Vault has its own TLS certificate outside of Istio’s scope, this is safe to do from a network security point of view.

    kubectl apply -f - <<EOF
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: default
      namespace: vault
      labels:
        app: security
    spec:
      peers:
      - mtls:
          mode: PERMISSIVE
    EOF
    

Install the application inside a mesh

In this scenario, Vault is running outside the Istio mesh (as installed in the previous steps), and our demo application runs within the Istio mesh. To install the demo application inside the mesh, complete the following steps:

  1. Create a namespace first for the application and enable Istio sidecar injection:

    kubectl create namespace app
    kubectl label namespace app istio-injection=enabled
    
  2. Install the application manifest to the cluster:

    kubectl apply -f app.yaml
    
  3. Check that the application is up and running. It should have two containers, the app itself and the istio-proxy:

    kubectl get pods -n app
    

    Expected output:

    NAME                  READY   STATUS    RESTARTS   AGE
    app-5df5686c4-sl6dz   2/2     Running   0          119s
    
    kubectl logs -f -n app deployment/app app
    

    Expected output:

    time="2020-02-18T14:26:01Z" level=info msg="Received new Vault token"
    time="2020-02-18T14:26:01Z" level=info msg="Initial Vault token arrived"
    s3cr3t
    going to sleep...
    

8.2 - Scenario 2 - Running Vault inside the mesh

To run Vault inside the mesh, complete the following steps.

Running Vault inside the Istio mesh

Note: These instructions assume that you have Scenario 1 up and running; here we modify it to run Vault inside the mesh.

  1. Turn off Istio in the app namespace by removing the istio-injection label, and enable it in the vault namespace:

    kubectl label namespace app istio-injection-
    kubectl label namespace vault istio-injection=enabled
    
  2. Delete the Vault pods in the vault namespace, so they will get recreated with the istio-proxy sidecar:

    kubectl delete pods --all -n vault
    
  3. Check that they both come back with an extra container (4/4 and 2/2 now):

    kubectl get pods -n vault
    

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    vault-0                             4/4     Running   0          1m
    vault-configurer-6d9b98c856-l4flc   2/2     Running   0          1m
    
  4. Delete the application pods in the app namespace, so they will get recreated without the istio-proxy sidecar:

    kubectl delete pods --all -n app
    

The app pod got recreated with only the app container (1/1) and Vault access still works:

kubectl get pods -n app

Expected output:

NAME                  READY   STATUS    RESTARTS   AGE
app-5df5686c4-4n6r7   1/1     Running   0          71s

kubectl logs -f -n app deployment/app

Expected output:

time="2020-02-18T14:41:20Z" level=info msg="Received new Vault token"
time="2020-02-18T14:41:20Z" level=info msg="Initial Vault token arrived"
s3cr3t
going to sleep...

8.3 - Scenario 3 - Both Vault and the app are running inside the mesh

In this scenario, both Vault and the app are running inside the mesh.

Both Vault and the app are running inside the mesh

  1. Complete the Prerequisites.

  2. Enable sidecar auto-injection for both namespaces:

    kubectl label namespace app   istio-injection=enabled
    kubectl label namespace vault istio-injection=enabled
    
  3. Delete all pods so they get injected with the proxy:

    kubectl delete pods --all -n app
    kubectl delete pods --all -n vault
    
  4. Check the logs in the app container. It should still show success:

    kubectl logs -f -n app deployment/app
    

    Expected output:

    time="2020-02-18T15:04:03Z" level=info msg="Initial Vault token arrived"
    time="2020-02-18T15:04:03Z" level=info msg="Renewed Vault Token"
    s3cr3t
    going to sleep...
    

9 - HSM Support

Bank-Vaults offers several alternatives for encrypting and storing the unseal keys and the root token for Vault. One of the encryption techniques is the HSM (Hardware Security Module). HSMs offer an industry-standard way to encrypt your data in on-premises environments.

You can use a Hardware Security Module (HSM) to generate and store the private keys used by Bank-Vaults. Some articles still point out the speed of HSM devices as their main selling point, but an average PC can perform more cryptographic operations; the main benefit is security. An HSM protects your private keys and handles cryptographic operations, which allows the encryption of protected information without exposing the private keys (they are not extractable). Bank-Vaults currently supports the PKCS11 software standard to communicate with an HSM. Fulfilling compliance requirements (for example, PCI DSS) is also a great benefit of HSMs, so from now on you can achieve that with Bank-Vaults.

Implementation in Bank-Vaults

Vault HSM

To support HSM devices for encrypting unseal keys and root tokens, Bank-Vaults:

  • implements an encryption/decryption service named hsm in the bank-vaults CLI,
  • includes the SoftHSM (for testing) and the OpenSC tooling in the bank-vaults Docker image,
  • makes the operator aware of HSM and its nature.

The HSM offers an encryption mechanism, but the unseal keys and root token have to be stored somewhere after they are encrypted. Currently, there are two possible solutions for that:

  • Some HSM devices can store a limited quantity of arbitrary data (like Nitrokey HSM), and Bank-Vaults can store the unseal-keys and root-token here.
  • If the HSM does not support that, Bank-Vaults uses the HSM to encrypt the unseal-keys and root-token, then stores them in Kubernetes Secrets. We believe that it is safe to store these keys in Kubernetes Secrets in encrypted format.

Bank-Vaults offers the ability to use pre-created cryptographic encryption keys on the HSM device, or to generate a key pair on the fly if there isn’t one with the specified label in the specified slot.

Since Bank-Vaults is written in Go, it uses the github.com/miekg/pkcs11 wrapper to pull in the PKCS11 library; more precisely, the p11 high-level wrapper.

Supported HSM solutions

Bank-Vaults currently supports the following HSM solutions:

  • SoftHSM, recommended for testing
  • NitroKey HSM
  • AWS CloudHSM, which supports the PKCS11 API as well, so it probably works, though it needs a custom Docker image

9.1 - NitroKey HSM support (OpenSC)

Nitrokey HSM is a USB HSM device based on the OpenSC project. We are using NitroKey to develop real hardware-based HSM support for Bank-Vaults. This device is not a cryptographic accelerator: only key generation and private key operations (sign and decrypt) are supported. Public key operations are done by extracting the public key and performing the operation on the computer, and this is how it is implemented in Bank-Vaults. It is not possible to extract private keys from NitroKey HSM; the device is tamper-resistant.

This device supports only RSA-based encryption/decryption, so that is what Bank-Vaults currently implements. The device supports ECC keys as well, but only for sign/verify operations.

To start using a NitroKey HSM, complete the following steps.

  1. Install OpenSC and initialize the NitroKey HSM stick:

    brew install opensc
    sc-hsm-tool --initialize --label bank-vaults --pin banzai --so-pin banzaicloud
    pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so --keypairgen --key-type rsa:2048 --pin banzai --token-label bank-vaults --label bank-vaults
    
  2. Check that you got a keypair object in slot 0:

    pkcs11-tool --list-objects
    
    
    Using slot 0 with a present token (0x0)
    Public Key Object; RSA 2048 bits
      label:      bank-vaults
      ID:         a9548075b20243627e971873826ead172e932359
      Usage:      encrypt, verify, wrap
      Access:     none
    
    pkcs15-tool --list-keys
    
    Using reader with a card: Nitrokey Nitrokey HSM
    Private RSA Key [bank-vaults]
      Object Flags   : [0x03], private, modifiable
      Usage          : [0x0E], decrypt, sign, signRecover
      Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
      ModLength      : 2048
      Key ref        : 1 (0x01)
      Native         : yes
      Auth ID        : 01
      ID             : a9548075b20243627e971873826ead172e932359
      MD:guid        : a6b2832c-1dc5-f4ef-bb0f-7b3504f67015
    
  3. If you are testing the HSM on macOS, setup minikube. Otherwise, continue with the next step.

  4. Configure the operator to use NitroKey HSM for unsealing.

    You must adjust the unsealConfig section in the vault-operator configuration, so the operator can communicate with OpenSC HSM devices correctly. Adjust your configuration based on the following snippet:

      # This example relies on an OpenSC HSM (NitroKey HSM) device initialized and plugged in to the Kubernetes Node.
      unsealConfig:
        hsm:
          # OpenSC daemon is needed in this case to communicate with the device
          daemon: true
          # The HSM SO module path (opensc is built into the bank-vaults image)
          modulePath: /usr/lib/opensc-pkcs11.so
          # For OpenSC slotId is the preferred way instead of tokenLabel
          # (OpenSC appends/prepends some extra stuff to labels)
          slotId: 0
          pin: banzai # This can be specified in the BANK_VAULTS_HSM_PIN environment variable as well, from a Secret
          keyLabel: bank-vaults
    
  5. Configure your Kubernetes node that has the HSM attached so Bank-Vaults can access it.

Setup on Minikube for testing (optional)

On macOS, where you run Docker in VMs, you need to do some extra steps before mounting your HSM device into Kubernetes. Complete the following steps to mount NitroKey into the minikube Kubernetes cluster:

  1. Make sure that the Oracle VM VirtualBox Extension Pack for USB 2.0 support is installed.

    VBoxManage list extpacks
    
  2. Remove the HSM device from your computer if it is already plugged in.

  3. Specify VirtualBox as the VM backend for Minikube.

    minikube config set vm-driver virtualbox
    
  4. Create a minikube cluster with the virtualbox driver and stop it, so you can modify the VM.

    minikube start
    minikube stop
    
  5. Enable USB 2.0 support for the minikube VM.

    VBoxManage modifyvm minikube --usbehci on
    
  6. Find the vendorid and productid for your Nitrokey HSM device.

    VBoxManage list usbhost
    
    VENDORID=0x20a0
    PRODUCTID=0x4230
    
  7. Create a filter for it.

    VBoxManage usbfilter add 1 --target minikube --name "Nitrokey HSM" --vendorid ${VENDORID} --productid ${PRODUCTID}
    
  8. Restart the minikube VM.

    minikube start
    
  9. Plug in the USB device.

  10. Check that minikube captured your NitroKey HSM.

    minikube ssh lsusb | grep ${VENDORID:2}:${PRODUCTID:2}
    

Now your minikube Kubernetes cluster has access to the HSM device through USB.

Kubernetes node setup

Some HSM vendors offer network daemons to enhance the reach of their HSM equipment to different servers. Unfortunately, there is no networking standard defined for PKCS11 access and thus currently Bank-Vaults has to be scheduled to the same node where the HSM device is attached directly (if not using a Cloud HSM).

Since the HSM is a hardware device connected to a physical node, Bank-Vaults has to find its way to that node. To make this work, create an HSM extended resource on the Kubernetes nodes to which the HSM device is plugged in. Extended resources must be advertised in integer amounts; for example, a Node can advertise four HSM devices, but not 4.5.

  1. You need to patch the node to specify that it has an HSM device as a resource. Because of the integer constraint, and because all Bank-Vaults related Pods have to land on a Node where an HSM resource is available, we need to advertise two units for one device: one is allocated by each Vault Pod and one by the Configurer. If you would like to run Vault in HA mode (multiple Vault instances on different nodes), you will need multiple HSM devices plugged into those nodes, with the same key and slot setup.

    kubectl proxy &
    
    NODE=minikube
    
    curl --header "Content-Type: application/json-patch+json" \
        --request PATCH \
        --data '[{"op": "add", "path": "/status/capacity/nitrokey.com~1hsm", "value": "2"}]' \
        http://localhost:8001/api/v1/nodes/${NODE}/status
    

    From now on, you can request the nitrokey.com/hsm resource in the PodSpec.

  2. Include the nitrokey.com/hsm resource in your PodSpec:

      # If using the NitroKey HSM example, that resource has to be part of the resource scheduling request.
      resources:
        hsmDaemon:
          requests:
            cpu: 100m
            memory: 64Mi
            nitrokey.com/hsm: 1
          limits:
            cpu: 200m
            memory: 128Mi
            nitrokey.com/hsm: 1
    
  3. Apply the modified setup from scratch:

    kubectl delete vault vault
    kubectl delete pvc vault-file-vault-0
    kubectl delete secret vault-unseal-keys
    kubectl apply -f https://raw.githubusercontent.com/bank-vaults/vault-operator/v1.20.0/deploy/examples/cr-hsm-nitrokey.yaml
    
  4. Check the logs that unsealing uses the NitroKey HSM device. Run the following command:

    kubectl logs -f vault-0 bank-vaults
    

    The output should be something like:

    time="2020-03-04T13:32:29Z" level=info msg="HSM Information {CryptokiVersion:{Major:2 Minor:20} ManufacturerID:OpenSC Project Flags:0 LibraryDescription:OpenSC smartcard framework LibraryVersion:{Major:0 Minor:20}}"
    time="2020-03-04T13:32:29Z" level=info msg="HSM Searching for slot in HSM slots [{ctx:0xc0000c0318 id:0}]"
    time="2020-03-04T13:32:29Z" level=info msg="found HSM slot 0 in HSM by slot ID"
    time="2020-03-04T13:32:29Z" level=info msg="HSM TokenInfo {Label:bank-vaults (UserPIN)\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 ManufacturerID:www.CardContact.de Model:PKCS#15 emulated SerialNumber:DENK0200074 Flags:1037 MaxSessionCount:0 SessionCount:0 MaxRwSessionCount:0 RwSessionCount:0 MaxPinLen:15 MinPinLen:6 TotalPublicMemory:18446744073709551615 FreePublicMemory:18446744073709551615 TotalPrivateMemory:18446744073709551615 FreePrivateMemory:18446744073709551615 HardwareVersion:{Major:24 Minor:13} FirmwareVersion:{Major:3 Minor:3} UTCTime:\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}"
    time="2020-03-04T13:32:29Z" level=info msg="HSM SlotInfo for slot 0: {SlotDescription:Nitrokey Nitrokey HSM (DENK02000740000         ) 00 00 ManufacturerID:Nitrokey Flags:7 HardwareVersion:{Major:0 Minor:0} FirmwareVersion:{Major:0 Minor:0}}"
    time="2020-03-04T13:32:29Z" level=info msg="found objects with label \"bank-vaults\" in HSM"
    time="2020-03-04T13:32:29Z" level=info msg="this HSM device doesn't support encryption, extracting public key and doing encrytion on the computer"
    time="2020-03-04T13:32:29Z" level=info msg="no storage backend specified for HSM, using on device storage"
    time="2020-03-04T13:32:29Z" level=info msg="joining leader vault..."
    time="2020-03-04T13:32:29Z" level=info msg="vault metrics exporter enabled: :9091/metrics"
    [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
    - using env:	export GIN_MODE=release
    - using code:	gin.SetMode(gin.ReleaseMode)
    
    [GIN-debug] GET    /metrics                  --> github.com/gin-gonic/gin.WrapH.func1 (3 handlers)
    [GIN-debug] Listening and serving HTTP on :9091
    time="2020-03-04T13:32:30Z" level=info msg="initializing vault..."
    time="2020-03-04T13:32:30Z" level=info msg="initializing vault"
    time="2020-03-04T13:32:31Z" level=info msg="unseal key stored in key store" key=vault-unseal-0
    time="2020-03-04T13:32:31Z" level=info msg="unseal key stored in key store" key=vault-unseal-1
    time="2020-03-04T13:32:32Z" level=info msg="unseal key stored in key store" key=vault-unseal-2
    time="2020-03-04T13:32:32Z" level=info msg="unseal key stored in key store" key=vault-unseal-3
    time="2020-03-04T13:32:33Z" level=info msg="unseal key stored in key store" key=vault-unseal-4
    time="2020-03-04T13:32:33Z" level=info msg="root token stored in key store" key=vault-root
    time="2020-03-04T13:32:33Z" level=info msg="vault is sealed, unsealing"
    time="2020-03-04T13:32:39Z" level=info msg="successfully unsealed vault"
    
  5. Find the unseal keys and the root token on the HSM:

    pkcs11-tool --list-objects
    

    Expected output:

    Using slot 0 with a present token (0x0)
    Public Key Object; RSA 2048 bits
      label:      bank-vaults
      ID:         a9548075b20243627e971873826ead172e932359
      Usage:      encrypt, verify, wrap
      Access:     none
    Data object 2168561792
      label:          'vault-test'
      application:    'vault-test'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168561168
      label:          'vault-unseal-0'
      application:    'vault-unseal-0'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168561264
      label:          'vault-unseal-1'
      application:    'vault-unseal-1'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168561360
      label:          'vault-unseal-2'
      application:    'vault-unseal-2'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168562304
      label:          'vault-unseal-3'
      application:    'vault-unseal-3'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168562400
      label:          'vault-unseal-4'
      application:    'vault-unseal-4'
      app_id:         <empty>
      flags:           modifiable
    Data object 2168562496
      label:          'vault-root'
      application:    'vault-root'
      app_id:         <empty>
      flags:           modifiable
    
  6. If you don’t need the encryption keys or the unseal keys on the HSM anymore, you can delete them with the following commands:

    PIN=banzai
    
    # Delete the unseal keys and the root token
    for label in "vault-test" "vault-root" "vault-unseal-0" "vault-unseal-1" "vault-unseal-2" "vault-unseal-3" "vault-unseal-4"
    do
      pkcs11-tool --delete-object --type data --label ${label} --pin ${PIN}
    done
    
    # Delete the encryption key
    pkcs11-tool --delete-object --type privkey --label bank-vaults --pin ${PIN}
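
    To verify the deletion, list the objects again; the vault-unseal-N and vault-root data objects should be gone:

    pkcs11-tool --list-objects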
    

9.2 - SoftHSM support for testing

You can use SoftHSMv2 to test software that interacts with PKCS#11 implementations without physical HSM hardware. You can install it on macOS by running the following commands:

# Initialize SoftHSM to create a working example (development only);
# sharing the HSM device is emulated with a pre-created keypair in the image.
brew install softhsm
softhsm2-util --init-token --free --label bank-vaults --so-pin banzai --pin banzai
pkcs11-tool --module /usr/local/lib/softhsm/libsofthsm2.so --keypairgen --key-type rsa:2048 --pin banzai --token-label bank-vaults --label bank-vaults
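
To verify that the token and the keypair were created (adjust the module path to where SoftHSM is installed on your platform):

pkcs11-tool --module /usr/local/lib/softhsm/libsofthsm2.so --list-objects --pin banzai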

To interact with SoftHSM when using the vault-operator, include the following unsealConfig snippet in the Vault CR:

  # This example relies on the SoftHSM device initialized in the Docker image.
  unsealConfig:
    hsm:
      # The HSM SO module path (softhsm is built into the bank-vaults image)
      modulePath: /usr/lib/softhsm/libsofthsm2.so 
      tokenLabel: bank-vaults
      pin: banzai
      keyLabel: bank-vaults
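
For reference, a minimal Vault CR embedding this snippet could look like the following sketch; the fields outside unsealConfig are illustrative (see the API reference in the last section for their defaults):

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  size: 1
  image: hashicorp/vault:latest
  unsealConfig:
    hsm:
      modulePath: /usr/lib/softhsm/libsofthsm2.so
      tokenLabel: bank-vaults
      pin: banzai
      keyLabel: bank-vaults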

To run the whole SoftHSM based example in Kubernetes, run the following commands:

kubectl create namespace vault-infra
helm upgrade --install vault-operator oci://ghcr.io/bank-vaults/helm-charts/vault-operator --namespace vault-infra
kubectl apply -f https://raw.githubusercontent.com/bank-vaults/vault-operator/v1.21.0/deploy/examples/cr-hsm-softhsm.yaml

10 - Monitoring

You can use Prometheus to monitor Vault. You can configure Vault to expose metrics through StatsD. Both the Helm chart and the Vault Operator install the Prometheus StatsD exporter and annotate the pods correctly with Prometheus annotations so that Prometheus can discover and scrape them. All you have to do is put the telemetry stanza into your Vault configuration:

    telemetry:
      statsd_address: localhost:9125
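
In the Vault custom resource, the telemetry stanza belongs under the spec.config block; a minimal sketch, with the surrounding listener settings purely illustrative:

spec:
  config:
    listener:
      tcp:
        address: "0.0.0.0:8200"
    telemetry:
      statsd_address: localhost:9125
    ui: true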

You may find the generic Prometheus Kubernetes client Go process runtime monitoring dashboard useful for monitoring the webhook or any other Go process.

To monitor the mutating webhook, see Monitoring the Webhook with Grafana and Prometheus.

11 - Annotations and labels

Annotations

The Vault Operator supports annotating most of the resources it creates using a set of fields in the Vault custom resource spec:

Common Vault Resources annotations

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  annotations:
    example.com/test: "something"

These annotations are common to all of the following resources created for Vault:

  • Vault StatefulSet
  • Vault Pods
  • Vault Configurer Deployment
  • Vault Configurer Pod
  • Vault Services
  • Vault Configurer Service
  • Vault TLS Secret

Vault StatefulSet Resources annotations

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  vaultAnnotations:
    example.com/vault: "true"

These annotations are common to all resources created for the Vault StatefulSet:

  • Vault StatefulSet
  • Vault Pods
  • Vault Services
  • Vault TLS Secret

These annotations override any annotations with the same key defined in the common set.
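
For example, combining the common set with the StatefulSet-specific set (the annotation keys are illustrative), the Vault StatefulSet resources end up with example.com/test: "vault-override", while the Vault Configurer resources keep example.com/test: "something":

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  annotations:
    example.com/test: "something"
  vaultAnnotations:
    example.com/test: "vault-override"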

Vault Configurer deployment Resources annotations

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  vaultConfigurerAnnotations:
    example.com/vaultConfigurer: "true"

These annotations are common to all resources created for the Vault Configurer Deployment:

  • Vault Configurer Deployment
  • Vault Configurer Pod
  • Vault Configurer Service

These annotations override any annotations with the same key defined in the common set.

ETCD CRD Annotations

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  etcdAnnotations:
    etcd.database.coreos.com/scope: clusterwide

These annotations are set only on the etcdcluster resource.

ETCD PODs Annotations

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  etcdPodAnnotations:
    backup.velero.io/backup-volumes: "YOUR_VOLUME_NAME"

These annotations are set only on the etcd pods created by the etcd-operator.

Labels

The Vault Operator supports labeling most of the resources it creates using a set of fields in the Vault custom resource spec:

Vault StatefulSet Resources labels

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  vaultLabels:
    example.com/log-format: "json"

These labels are common to all resources created for the Vault StatefulSet:

  • Vault StatefulSet
  • Vault Pods
  • Vault Services
  • Vault TLS Secret

Vault Configurer deployment Resources labels

apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  vaultConfigurerLabels:
    example.com/log-format: "string"

These labels are common to all resources created for the Vault Configurer Deployment:

  • Vault Configurer Deployment
  • Vault Configurer Pod
  • Vault Configurer Service

12 - Watching External Secrets

In some cases, you might have to restart the Vault StatefulSet when secrets that are not under the operator's control change. For example:

  • Cert-Manager managing a public Certificate for vault using Let’s Encrypt.
  • Cloud IAM Credentials created with an external tool (like Terraform) to allow Vault to interact with cloud services.

The operator can watch a set of secrets in the namespace of the Vault resource using either a list of label selectors or an annotations selector. When the content of any of those secrets changes, the operator updates the StatefulSet and triggers a rolling restart.

Configure label selectors

Set the secrets to watch using the watchedSecretsAnnotations and watchedSecretsLabels fields in your Vault custom resource.

Note: For cert-manager 0.11 or newer, use the watchedSecretsAnnotations field.

In the following example, the Vault StatefulSet is restarted when:

  • A secret with label certmanager.k8s.io/certificate-name: vault-letsencrypt-cert changes its contents (cert-manager 0.10 and earlier).
  • A secret with label test.com/scope: gcp AND test.com/credentials: vault changes its contents.
  • A secret with annotation cert-manager.io/certificate-name: vault-letsencrypt-cert changes its contents (cert-manager 0.11 and newer).

watchedSecretsLabels:
  - certmanager.k8s.io/certificate-name: vault-letsencrypt-cert
  - test.com/scope: gcp
    test.com/credentials: vault

watchedSecretsAnnotations:
  - cert-manager.io/certificate-name: vault-letsencrypt-cert
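
To check which secrets a given label selector currently matches (a quick sketch; the namespace and labels follow the example above):

kubectl get secrets -n vault -l test.com/scope=gcp,test.com/credentials=vault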

The operator controls the restart of the StatefulSet by adding an annotation to the spec.template of the vault StatefulSet:

kubectl get -n vault statefulset vault -o json | jq .spec.template.metadata.annotations
{
  "prometheus.io/path": "/metrics",
  "prometheus.io/port": "9102",
  "prometheus.io/scrape": "true",
  "vault.banzaicloud.io/watched-secrets-sum": "ff1f1c79a31f76c68097975977746be9b85878f4737b8ee5a9d6ee3c5169b0ba"
}

13 - API Reference

Packages

vault.banzaicloud.com/v1alpha1

Package v1alpha1 contains API Schema definitions for the vault.banzaicloud.com v1alpha1 API group

AWSUnsealConfig

AWSUnsealConfig holds the parameters for AWS KMS based unsealing

Appears in:

  • UnsealConfig

kmsKeyId (string)

kmsRegion (string)

kmsEncryptionContext (string)

s3Bucket (string)

s3Prefix (string)

s3Region (string)

s3SSE (string)
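
A sketch of an AWS KMS based unsealConfig using these fields; every value below is illustrative:

unsealConfig:
  aws:
    kmsKeyId: "9f054126-2a98-470c-9f10-9b3b0cad94a1"
    kmsRegion: "eu-west-1"
    s3Bucket: "bank-vaults"
    s3Prefix: "vault/"
    s3Region: "eu-west-1"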

AlibabaUnsealConfig

AlibabaUnsealConfig holds the parameters for Alibaba Cloud KMS based unsealing, for example:

  --alibaba-kms-region eu-central-1 --alibaba-kms-key-id 9d8063eb-f9dc-421b-be80-15d195c9f148 --alibaba-oss-endpoint oss-eu-central-1.aliyuncs.com --alibaba-oss-bucket bank-vaults

Appears in:

  • UnsealConfig

kmsRegion (string)

kmsKeyId (string)

ossEndpoint (string)

ossBucket (string)

ossPrefix (string)

AzureUnsealConfig

AzureUnsealConfig holds the parameters for Azure Key Vault based unsealing

Appears in:

  • UnsealConfig

keyVaultName (string)

CredentialsConfig

CredentialsConfig configuration for a credentials file provided as a secret

Appears in:

  • VaultSpec

env (string)

path (string)

secretName (string)
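
A sketch of a CredentialsConfig that mounts a cloud credentials file from a Secret and exposes its path in an environment variable; the Secret name, path, and variable are illustrative:

credentialsConfig:
  env: GOOGLE_APPLICATION_CREDENTIALS
  path: /etc/gcp/service-account.json
  secretName: kms-vault-unseal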

EmbeddedObjectMetadata

EmbeddedObjectMetadata contains a subset of the fields included in k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta Only fields which are relevant to embedded resources are included. controller-gen discards embedded ObjectMetadata type fields, so we have to overcome this.

Appears in:

  • EmbeddedPersistentVolumeClaim

name (string)

Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names

labels (object (keys:string, values:string))

Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels

annotations (object (keys:string, values:string))

Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations

EmbeddedPersistentVolumeClaim

EmbeddedPersistentVolumeClaim is an embeddable and controller-gen friendly version of k8s.io/api/core/v1.PersistentVolumeClaim. It contains TypeMeta and a reduced ObjectMeta.

Appears in:

  • VaultSpec

metadata (EmbeddedObjectMetadata)

Refer to Kubernetes API documentation for fields of metadata.

spec (PersistentVolumeClaimSpec)

Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

EmbeddedPodSpec

EmbeddedPodSpec is a description of a pod, which allows containers to be missing, almost like k8s.io/api/core/v1.PodSpec.

Appears in:

  • VaultSpec

volumes (Volume array)

List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes

initContainers (Container array)

List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

containers (Container array)

List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.

ephemeralContainers (EphemeralContainer array)

List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod’s ephemeralcontainers subresource.

restartPolicy (RestartPolicy)

Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

terminationGracePeriodSeconds (integer)

Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.

activeDeadlineSeconds (integer)

Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer.

dnsPolicy (DNSPolicy)

Set DNS policy for the pod. Defaults to “ClusterFirst”. Valid values are ‘ClusterFirstWithHostNet’, ‘ClusterFirst’, ‘Default’ or ‘None’. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to ‘ClusterFirstWithHostNet’.

nodeSelector (object (keys:string, values:string))

NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

serviceAccountName (string)

ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

serviceAccount (string)

DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead.

automountServiceAccountToken (boolean)

AutomountServiceAccountToken indicates whether a service account token should be automatically mounted.

nodeName (string)

NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements.

hostNetwork (boolean)

Host networking requested for this pod. Use the host’s network namespace. If this option is set, the ports that will be used must be specified. Default to false.

hostPID (boolean)

Use the host’s pid namespace. Optional: Default to false.

hostIPC (boolean)

Use the host’s ipc namespace. Optional: Default to false.

shareProcessNamespace (boolean)

Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false.

securityContext (PodSecurityContext)

SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.

imagePullSecrets (LocalObjectReference array)

ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod

hostname (string)

Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value.

subdomain (string)

If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all.

affinity (Affinity)

If specified, the pod’s scheduling constraints

schedulerName (string)

If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler.

tolerations (Toleration array)

If specified, the pod’s tolerations.

hostAliases (HostAlias array)

HostAliases is an optional list of hosts and IPs that will be injected into the pod’s hosts file if specified. This is only valid for non-hostNetwork pods.

priorityClassName (string)

If specified, indicates the pod’s priority. “system-node-critical” and “system-cluster-critical” are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.

priority (integer)

The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority.

dnsConfig (PodDNSConfig)

Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy.

readinessGates (PodReadinessGate array)

If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates

runtimeClassName (string)

RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the “legacy” RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class

enableServiceLinks (boolean)

EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true.

preemptionPolicy (PreemptionPolicy)

PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset.

overhead (ResourceList)

Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md

topologySpreadConstraints (TopologySpreadConstraint array)

TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.

setHostnameAsFQDN (boolean)

If true the pod’s hostname will be configured as the pod’s FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false.

os (PodOS)

Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup

hostUsers (boolean)

Use the host’s user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.

schedulingGates (PodSchedulingGate array)

SchedulingGates is an opaque list of values that if specified will block scheduling the pod. More info: https://git.k8s.io/enhancements/keps/sig-scheduling/3521-pod-scheduling-readiness. This is an alpha-level feature enabled by PodSchedulingReadiness feature gate.

resourceClaims (PodResourceClaim array)

ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable.

GoogleUnsealConfig

GoogleUnsealConfig holds the parameters for Google KMS based unsealing

Appears in:

  • UnsealConfig

kmsKeyRing (string)

kmsCryptoKey (string)

kmsLocation (string)

kmsProject (string)

storageBucket (string)

HSMUnsealConfig

HSMUnsealConfig holds the parameters for remote HSM based unsealing

Appears in:

  • UnsealConfig

daemon (boolean)

modulePath (string)

slotId (integer)

tokenLabel (string)

pin (string)

keyLabel (string)

Ingress

Ingress specification for the Vault cluster

Appears in:

  • VaultSpec

annotations (object (keys:string, values:string))

spec (IngressSpec)

KubernetesUnsealConfig

KubernetesUnsealConfig holds the parameters for Kubernetes based unsealing

Appears in:

  • UnsealConfig

secretNamespace (string)

secretName (string)

Resources

Resources holds different container’s ResourceRequirements

Appears in:

  • VaultSpec

vault (ResourceRequirements)

bankVaults (ResourceRequirements)

hsmDaemon (ResourceRequirements)

prometheusExporter (ResourceRequirements)

fluentd (ResourceRequirements)

UnsealConfig

UnsealConfig represents the UnsealConfig field of a VaultSpec Kubernetes object

Appears in:

  • VaultSpec

options (UnsealOptions)

kubernetes (KubernetesUnsealConfig)

google (GoogleUnsealConfig)

alibaba (AlibabaUnsealConfig)

azure (AzureUnsealConfig)

aws (AWSUnsealConfig)

vault (VaultUnsealConfig)

hsm (HSMUnsealConfig)

UnsealOptions

UnsealOptions represents the common options to all unsealing backends

Appears in:

  • UnsealConfig

preFlightChecks (boolean)

storeRootToken (boolean)

secretThreshold (integer)

secretShares (integer)
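
A sketch combining these options with the Kubernetes Secret based unseal backend; the values are illustrative:

unsealConfig:
  options:
    preFlightChecks: true
    storeRootToken: true
    secretShares: 5
    secretThreshold: 3
  kubernetes:
    secretNamespace: default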

Vault

Vault is the Schema for the vaults API

Appears in:

  • VaultList

apiVersion string vault.banzaicloud.com/v1alpha1

kind string Vault

metadata (ObjectMeta)

Refer to Kubernetes API documentation for fields of metadata.

spec (VaultSpec)

VaultList

VaultList contains a list of Vault

apiVersion string vault.banzaicloud.com/v1alpha1

kind string VaultList

metadata (ListMeta)

Refer to Kubernetes API documentation for fields of metadata.

items (Vault array)

VaultSpec

VaultSpec defines the desired state of Vault

Appears in:

  • Vault

size (integer)

Size defines the number of Vault instances in the cluster (>= 1 means HA) default: 1

image (string)

Image specifies the Vault image to use for the Vault instances default: hashicorp/vault:latest

bankVaultsImage (string)

BankVaultsImage specifies the Bank Vaults image to use for Vault unsealing and configuration default: ghcr.io/bank-vaults/bank-vaults:latest

bankVaultsVolumeMounts (VolumeMount array)

BankVaultsVolumeMounts define some extra Kubernetes Volume mounts for the Bank Vaults Sidecar container. default:

statsdDisabled (boolean)

StatsDDisabled specifies if StatsD based metrics should be disabled default: false

statsdImage (string)

StatsDImage specifies the StatsD image to use for Vault metrics export default: prom/statsd-exporter:latest

statsdConfig (string)

StatsdConfig specifies the StatsD mapping configuration default:

fluentdEnabled (boolean)

FluentDEnabled specifies if FluentD based log export should be enabled default: false

fluentdImage (string)

FluentDImage specifies the FluentD image to use for Vault log export default: fluent/fluentd:edge

fluentdConfLocation (string)

FluentDConfLocation is the location of the fluent.conf file default: “/fluentd/etc”

fluentdConfFile (string)

FluentDConfFile specifies the FluentD configuration file name to use for Vault log export default:

fluentdConfig (string)

FluentDConfig specifies the FluentD configuration to use for Vault log export default:

watchedSecretsLabels (object array)

WatchedSecretsLabels specifies a set of Kubernetes label selectors which select Secrets to watch. If these Secrets change, the Vault cluster gets restarted. For example, a Secret in which Cert-Manager manages a public Certificate for Vault using Let's Encrypt. default:

watchedSecretsAnnotations (object array)

WatchedSecretsAnnotations specifies a set of Kubernetes annotation selectors which select Secrets to watch. If these Secrets change, the Vault cluster gets restarted. For example, a Secret in which Cert-Manager manages a public Certificate for Vault using Let's Encrypt. default:

annotations (object (keys:string, values:string))

Annotations define a set of common Kubernetes annotations that will be added to all operator managed resources. default:

vaultAnnotations (object (keys:string, values:string))

VaultAnnotations define a set of Kubernetes annotations that will be added to all Vault Pods. default:

vaultLabels (object (keys:string, values:string))

VaultLabels define a set of Kubernetes labels that will be added to all Vault Pods. default:

vaultPodSpec (EmbeddedPodSpec)

VaultPodSpec is a Kubernetes Pod specification snippet (spec: block) that will be merged into the operator generated Vault Pod specification. default:

vaultContainerSpec (Container)

VaultContainerSpec is a Kubernetes Container specification snippet that will be merged into the operator generated Vault Container specification. default:

vaultConfigurerAnnotations (object (keys:string, values:string))

VaultConfigurerAnnotations define a set of Kubernetes annotations that will be added to the Vault Configurer Pod. default:

vaultConfigurerLabels (object (keys:string, values:string))

VaultConfigurerLabels define a set of Kubernetes labels that will be added to the Vault Configurer Pod. default:

vaultConfigurerPodSpec (EmbeddedPodSpec)

VaultConfigurerPodSpec is a Kubernetes Pod specification snippet (spec: block) that will be merged into the operator generated Vault Configurer Pod specification. default:

config (JSON)

Config is the Vault Server configuration. See https://www.vaultproject.io/docs/configuration/ for more details. default:

externalConfig (JSON)

ExternalConfig is higher level configuration block which instructs the Bank Vaults Configurer to configure Vault through its API, thus allows setting up: - Secret Engines - Auth Methods - Audit Devices - Plugin Backends - Policies - Startup Secrets (Bank Vaults feature) A documented example: https://github.com/bank-vaults/vault-operator/blob/main/vault-config.yml default:

unsealConfig (UnsealConfig)

UnsealConfig defines where the Vault cluster’s unseal keys and root token should be stored after initialization. See the type’s documentation for more details. Only one method may be specified. default: Kubernetes Secret based unsealing

credentialsConfig (CredentialsConfig)

CredentialsConfig defines an external Secret for Vault and how it should be mounted to the Vault Pod, for example for accessing Cloud resources. default:

envsConfig (EnvVar array)

EnvsConfig is a list of Kubernetes environment variable definitions that will be passed to all Bank-Vaults pods. default:

securityContext (PodSecurityContext)

SecurityContext is a Kubernetes PodSecurityContext that will be applied to all Pods created by the operator. default:

serviceType (string)

ServiceType is a Kubernetes Service type of the Vault Service. default: ClusterIP

loadBalancerIP (string)

LoadBalancerIP is an optional setting for allocating a specific address for the entry service object of type LoadBalancer default: ""

serviceRegistrationEnabled (boolean)

serviceRegistrationEnabled enables the injection of the service_registration Vault stanza. This requires elaborated RBAC privileges for updating Pod labels for the Vault Pod. default: false

raftLeaderAddress (string)

RaftLeaderAddress defines the leader address of the raft cluster in multi-cluster deployments. (In single cluster (namespace) deployments it is automatically detected). “self” is a special value which means that this instance should be the bootstrap leader instance. default: ""

servicePorts (object (keys:string, values:integer))

ServicePorts is an extra map of ports that should be exposed by the Vault Service. default:

affinity (Affinity)

Affinity is a group of affinity scheduling rules applied to all Vault Pods. default:

podAntiAffinity (string)

PodAntiAffinity is the TopologyKey in the Vault Pod’s PodAntiAffinity. No PodAntiAffinity is used if empty. Deprecated. Use Affinity. default:

nodeAffinity (NodeAffinity)

NodeAffinity is a Kubernetes NodeAffinity definition that should be applied to all Vault Pods. Deprecated. Use Affinity. default:

nodeSelector (object (keys:string, values:string))

NodeSelector is a Kubernetes NodeSelector definition that should be applied to all Vault Pods. default:

tolerations (Toleration array)

Tolerations is Kubernetes Tolerations definition that should be applied to all Vault Pods. default:

serviceAccount (string)

ServiceAccount is the Kubernetes ServiceAccount that the Vault Pods run as. default: default

volumes (Volume array)

Volumes define some extra Kubernetes Volumes for the Vault Pods. default:

volumeMounts (VolumeMount array)

VolumeMounts define some extra Kubernetes Volume mounts for the Vault Pods. default:

volumeClaimTemplates (EmbeddedPersistentVolumeClaim array)

VolumeClaimTemplates define some extra Kubernetes PersistentVolumeClaim templates for the Vault Statefulset. default:

vaultEnvsConfig (EnvVar array)

VaultEnvsConfig is a list of Kubernetes environment variable definitions that will be passed to the Vault container. default:

sidecarEnvsConfig (EnvVar array)

SidecarEnvsConfig is a list of Kubernetes environment variable definitions that will be passed to Vault sidecar containers. default:

resources (Resources)

Resources defines the resource limits for all the resources created by the operator. See the type for more details. default:

ingress (Ingress)

Ingress, if it is specified the operator will create an Ingress resource for the Vault Service and will annotate it with the correct Ingress annotations specific to the TLS settings in the configuration. See the type for more details. default:

serviceMonitorEnabled (boolean)

ServiceMonitorEnabled enables the creation of Prometheus Operator specific ServiceMonitor for Vault. default: false

existingTlsSecretName (string)

ExistingTLSSecretName is the name of the secret that contains a TLS server certificate, its key, and the corresponding CA certificate. Required secret format: a kubernetes.io/tls type secret with an additional ca.crt key. If it is set, certificate generation is disabled. default: ""

tlsExpiryThreshold (string)

TLSExpiryThreshold is the Vault TLS certificate expiration threshold in Go’s Duration format. default: 168h

tlsAdditionalHosts (string array)

TLSAdditionalHosts is a list of additional hostnames or IP addresses to add to the SAN on the automatically generated TLS certificate. default:

caNamespaces (string array)

CANamespaces define a list of namespaces where the generated CA certificate for Vault should be distributed, use ["*"] for all namespaces. default:

istioEnabled (boolean)

IstioEnabled describes if the cluster has a Istio running and enabled. default: false

veleroEnabled (boolean)

VeleroEnabled describes if the cluster has a Velero running and enabled. default: false

veleroFsfreezeImage (string)

VeleroFsfreezeImage specifies the Velero fsfreeze image to use in Velero backup hooks default: velero/fsfreeze-pause:latest

vaultContainers (Container array)

VaultContainers add extra containers

vaultInitContainers (Container array)

VaultInitContainers add extra initContainers

VaultUnsealConfig

VaultUnsealConfig holds the parameters for remote Vault based unsealing

Appears in:

  • UnsealConfig

address (string)

unsealKeysPath (string)

role (string)

authPath (string)

tokenPath (string)

token (string)
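
A sketch of a remote Vault based unsealConfig using these fields, with Kubernetes auth against a central Vault; all values are illustrative:

unsealConfig:
  vault:
    address: "https://central-vault.vault-infra:8200"
    unsealKeysPath: "secret/data/vault-unseal-keys"
    role: "vault-unsealer"
    authPath: "kubernetes"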