The following sections give you an overview of the main concepts of Bank-Vaults. Most of these apply equally to the bank-vaults
CLI and to the Vault operator, because under the hood the operator often uses the CLI tool with the appropriate parameters.
Concepts
- 1: Initialize Vault and store the root token and unseal keys
- 2: Cloud permissions
- 3: External configuration for Vault
- 3.1: Fully or partially purging unmanaged configuration in Vault
- 3.2: Audit devices
- 3.3: Authentication
- 3.4: Plugins
- 3.5: Policies
- 3.6: Secrets engines
- 3.7: Startup secrets
1 - Initialize Vault and store the root token and unseal keys
Vault starts in an uninitialized state, which means it has to be initialized with an initial set of parameters. The response to the init request is the root token and unseal keys. After that, Vault becomes initialized, but remains in a sealed state.
Bank-Vaults stores the root token and the unseal keys in one of the following:
- AWS KMS keyring (backed by S3)
- Azure Key Vault
- Google Cloud KMS keyring (backed by GCS)
- Alibaba Cloud KMS (backed by OSS)
For development and testing purposes, the following solutions are also supported. Do not use these in production environments.
- Kubernetes Secrets (should be used only for development purposes)
- Dev Mode (useful for vault server -dev dev-mode Vault servers)
- Files (backed by files, should be used only for development purposes)
Keys stored by Bank-Vaults
Bank-Vaults stores the following keys:
- vault-root, which is Vault’s root token.
- vault-unseal-N unseal keys, where N is a number, starting at 0 up to the maximum defined minus 1. For example, 5 unseal keys will be vault-unseal-0 ... vault-unseal-4.
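As a quick illustration, the key names listed above can be generated for a given number of shares like this (SECRET_SHARES=5 mirrors Vault’s default; where the keys are stored depends on your unseal mode):

```shell
# Generate the key names Bank-Vaults stores for a given number of shares.
SECRET_SHARES=5
keys="vault-root"
for i in $(seq 0 $((SECRET_SHARES - 1))); do
  keys="${keys} vault-unseal-${i}"
done
echo "${keys}"
# prints: vault-root vault-unseal-0 vault-unseal-1 vault-unseal-2 vault-unseal-3 vault-unseal-4
```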
HashiCorp recommends revoking the root tokens after the initial setup of Vault has been completed.
Note: The vault-root token is not needed to unseal Vault, and can be removed from the storage if it was put there via the --init call to bank-vaults.
If you want to decrypt the root token for some reason, see Decrypt the root token.
Unseal Vault
Unsealing is the process of constructing the master key necessary to read the decryption key to decrypt data, allowing access to Vault. (From the official Vault documentation)
After initialization, Vault remains in a sealed state. In the sealed state, no secrets can reach or leave Vault until someone (possibly more than one person) unseals it with the required number of unseal keys.
Vault data and the unseal keys live together: if you delete a Vault instance installed by the operator, or if you delete the Helm chart, all your data and the unseal keys to that initialized state should remain untouched. For details, see the official documentation.
Note: If you change the unseal configuration after initializing Vault, you may have to move the unseal keys from the old location to the new one, or reinitialize Vault.
The Bank-Vaults Init and Unseal process
Bank-Vaults runs in an endless loop and does the following:
1. Bank-Vaults checks if Vault is initialized. If yes, it continues to step 2, otherwise Bank-Vaults:
   - Calls Vault init, which returns the root token and the configured number of unseal keys.
   - Encrypts the received token and keys with the configured KMS key.
   - Stores the encrypted token and keys in the cloud provider’s object storage.
   - Flushes the root token and keys from its memory with explicit garbage control as soon as possible.
2. Bank-Vaults checks if Vault is sealed. If it isn’t, it continues to step 3, otherwise Bank-Vaults:
   - Reads the encrypted unseal keys from the cloud provider’s object storage.
   - Decrypts the unseal keys with the configured KMS key.
   - Unseals Vault with the decrypted unseal keys.
   - Flushes the keys from its memory with explicit garbage control as soon as possible.
3. If the external configuration file was changed and an OS signal is received, then Bank-Vaults:
   - Parses the configuration file.
   - Reads the encrypted root token from the cloud provider’s object storage.
   - Decrypts the root token with the configured KMS key.
   - Applies the parsed configuration on the Vault API.
   - Flushes the root token from its memory with explicit garbage control as soon as possible.
4. Repeats from the second step after the configured time period.
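The loop above can be sketched roughly as follows. This is a stubbed illustration only, not the real implementation (which talks to the Vault API, the KMS service, and object storage); the function names and simulated state are assumptions for the sake of the example:

```shell
# Stubbed sketch of the Bank-Vaults watch loop; state is simulated locally.
INITIALIZED=false
SEALED=true

init_vault() {
  # Real flow: call Vault init, encrypt the root token and unseal keys
  # with the KMS key, store them in object storage, then flush from memory.
  INITIALIZED=true
  echo "initialized"
}

unseal_vault() {
  # Real flow: fetch the encrypted keys, decrypt with KMS, unseal Vault,
  # then flush the keys from memory.
  SEALED=false
  echo "unsealed"
}

i=0
while [ $i -lt 3 ]; do  # the real loop runs forever; 3 iterations here
  [ "$INITIALIZED" = true ] || init_vault
  [ "$SEALED" = true ] && unseal_vault
  i=$((i + 1))
done
```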
1.1 - Decrypt the root token
If you want to decrypt the root token for some reason, see the section corresponding to the storage provider you used to store the token.
AWS
To use the KMS-encrypted root token with the Vault CLI:
Required CLI tools:
- aws
Steps:
1. Download and decrypt the root token (and the unseal keys, but that is not mandatory) into a file on your local file system:
BUCKET=bank-vaults-0
REGION=eu-central-1
for key in "vault-root" "vault-unseal-0" "vault-unseal-1" "vault-unseal-2" "vault-unseal-3" "vault-unseal-4"
do
    aws s3 cp s3://${BUCKET}/${key} .
    aws kms decrypt \
        --region ${REGION} \
        --ciphertext-blob fileb://${key} \
        --encryption-context Tool=bank-vaults \
        --output text \
        --query Plaintext | base64 -d > ${key}.txt
    rm ${key}
done
2. Save it as an environment variable:
export VAULT_TOKEN="$(cat vault-root.txt)"
Google Cloud
To use the KMS-encrypted root token with the Vault CLI:
Required CLI tools:
- gcloud
- gsutil
GOOGLE_PROJECT="my-project"
GOOGLE_REGION="us-central1"
BUCKET="bank-vaults-bucket"
KEYRING="beta"
KEY="beta"
export VAULT_TOKEN=$(gsutil cat gs://${BUCKET}/vault-root | gcloud kms decrypt \
--project ${GOOGLE_PROJECT} \
--location ${GOOGLE_REGION} \
--keyring ${KEYRING} \
--key ${KEY} \
--ciphertext-file - \
--plaintext-file -)
Kubernetes
Bank-Vaults also supports a Kubernetes Secret backed unseal storage. Be aware that Kubernetes Secrets are only base64-encoded unless you use an EncryptionConfiguration in your Kubernetes cluster.
VAULT_NAME="vault"
export VAULT_TOKEN=$(kubectl get secrets ${VAULT_NAME}-unseal-keys -o jsonpath={.data.vault-root} | base64 -d)
1.2 - Migrate unseal keys between cloud providers
Note: If you change the unseal configuration after initializing Vault, you may have to move the unseal keys from the old location to the new one, or reinitialize Vault.
If you need to move your Vault instance from one provider to another, or from an externally managed Vault, you have to:
- Retrieve and decrypt the unseal keys (and optionally the root token) in the Bank-Vaults format. For details, see Decrypt the root token.
- Migrate the Vault storage data to the new provider. Use the official migration command provided by Vault.
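For the storage migration, Vault’s vault operator migrate command takes an HCL config naming the source and destination backends. A minimal sketch (the backend types and bucket names here are placeholders for your actual setup):

```shell
# Write an illustrative migration config; adjust the backends and their
# parameters to your actual source and destination storage.
cat > migrate.hcl <<'EOF'
storage_source "s3" {
  bucket = "old-vault-bucket"
  region = "eu-central-1"
}

storage_destination "gcs" {
  bucket = "new-vault-bucket"
}
EOF
echo "wrote migrate.hcl"

# Then run the migration while Vault is offline:
# vault operator migrate -config=migrate.hcl
```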
All examples assume that you have created files holding the root-token and the 5 unseal keys in plaintext:
vault-root.txt
vault-unseal-0.txt
vault-unseal-1.txt
vault-unseal-2.txt
vault-unseal-3.txt
vault-unseal-4.txt
AWS
Encrypt the above-mentioned files with KMS, then upload them to the AWS bucket:
REGION=eu-central-1
KMS_KEY_ID=02a2ba49-42ce-487f-b006-34c64f4b760e
BUCKET=bank-vaults-1
for key in "vault-root" "vault-unseal-0" "vault-unseal-1" "vault-unseal-2" "vault-unseal-3" "vault-unseal-4"
do
aws kms encrypt \
--region ${REGION} --key-id ${KMS_KEY_ID} \
--plaintext fileb://${key}.txt \
--encryption-context Tool=bank-vaults \
--output text \
--query CiphertextBlob | base64 -d > ${key}
aws s3 cp ./${key} s3://${BUCKET}/
rm ${key} ${key}.txt
done
2 - Cloud permissions
The operator and the bank-vaults CLI command need certain cloud permissions to function properly (init, unseal, configuration).
Google Cloud
The Service Account in which the Pod is running has to have the following IAM Roles:
- Cloud KMS Admin
- Cloud KMS CryptoKey Encrypter/Decrypter
- Storage Admin
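The roles above can be granted with gcloud. In the following sketch the project and service account are placeholders, and the commands are only echoed so you can review them before running (drop the echo to actually apply them):

```shell
# Placeholders: substitute your project and the Pod's service account.
PROJECT="my-project"
SA="vault@${PROJECT}.iam.gserviceaccount.com"

# Role IDs corresponding to the IAM Roles listed above.
for role in roles/cloudkms.admin \
            roles/cloudkms.cryptoKeyEncrypterDecrypter \
            roles/storage.admin
do
  echo gcloud projects add-iam-policy-binding "${PROJECT}" \
    --member "serviceAccount:${SA}" --role "${role}"
done
```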
A CLI example of how to run bank-vaults-based Vault configuration on Google Cloud:
bank-vaults configure --google-cloud-kms-key-ring vault --google-cloud-kms-crypto-key bank-vaults --google-cloud-kms-location global --google-cloud-storage-bucket vault-ha --google-cloud-kms-project continual-flow-276578
Azure
The Access Policy in which the Pod is running has to have the following IAM Roles:
- Key Vault All Key permissions
- Key Vault All Secret permissions
AWS
Enable IAM OIDC provider for an EKS cluster
To allow Vault pods to assume IAM roles in order to access AWS services, the IAM OIDC provider needs to be enabled on the cluster.
BANZAI_CURRENT_CLUSTER_NAME="mycluster"
# Enable OIDC provider for the cluster with eksctl
# Follow the docs here to do it manually https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
eksctl utils associate-iam-oidc-provider \
--cluster ${BANZAI_CURRENT_CLUSTER_NAME} \
--approve
# Create a KMS key and S3 bucket and enter details here
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
REGION="eu-west-1"
KMS_KEY_ID="9f054126-2a98-470c-9f10-9b3b0cad94a1"
KMS_KEY_ARN="arn:aws:kms:${REGION}:${AWS_ACCOUNT_ID}:key/${KMS_KEY_ID}"
BUCKET="bank-vaults"
OIDC_PROVIDER=$(aws eks describe-cluster --name ${BANZAI_CURRENT_CLUSTER_NAME} --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
SERVICE_ACCOUNT_NAME="vault"
SERVICE_ACCOUNT_NAMESPACE="vault"
cat > trust.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDC_PROVIDER}:sub": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
}
}
}
]
}
EOF
cat > vault-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt"
],
"Resource": [
"${KMS_KEY_ARN}"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::${BUCKET}/*"
]
},
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::${BUCKET}"
}
]
}
EOF
# AWS IAM role and Kubernetes service account setup
aws iam create-role --role-name vault --assume-role-policy-document file://trust.json
aws iam create-policy --policy-name vault --policy-document file://vault-policy.json
aws iam attach-role-policy --role-name vault --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/vault
# If you already have a ServiceAccount, only the annotation is needed
kubectl create serviceaccount $SERVICE_ACCOUNT_NAME --namespace $SERVICE_ACCOUNT_NAMESPACE
kubectl annotate serviceaccount $SERVICE_ACCOUNT_NAME --namespace $SERVICE_ACCOUNT_NAMESPACE eks.amazonaws.com/role-arn="arn:aws:iam::${AWS_ACCOUNT_ID}:role/vault"
# Cleanup
rm vault-policy.json trust.json
Getting the root token
After Vault is successfully deployed, you can query the root-token for admin access.
# Fetch Vault root token, check bucket for actual name based on unsealConfig.aws.s3Prefix
aws s3 cp s3://$s3_bucket_name/vault-root /tmp/vault-root
export VAULT_TOKEN="$(aws kms decrypt \
--ciphertext-blob fileb:///tmp/vault-root \
--encryption-context Tool=bank-vaults \
--query Plaintext --output text | base64 --decode)"
The Instance profile in which the Pod is running has to have the following IAM Policies:
- KMS: kms:Encrypt, kms:Decrypt
- S3: s3:GetObject, s3:PutObject, s3:DeleteObject on object level and s3:ListBucket on bucket level
An example of how to init and unseal Vault on AWS:
bank-vaults unseal --init --mode aws-kms-s3 --aws-kms-key-id 9f054126-2a98-470c-9f10-9b3b0cad94a1 --aws-s3-region eu-west-1 --aws-kms-region eu-west-1 --aws-s3-bucket bank-vaults
When using existing unseal keys, you need to make sure to KMS-encrypt them with the proper EncryptionContext. If this is not done, the invocation of bank-vaults will trigger an InvalidCiphertextException from AWS KMS.
An example of how to encrypt the keys (specify --profile and --region accordingly):
aws kms encrypt --key-id "alias/kms-key-alias" --encryption-context "Tool=bank-vaults" --plaintext fileb://vault-unseal-0.txt --output text --query CiphertextBlob | base64 -d > vault-unseal-0
From this point on, copy the encrypted files to the appropriate S3 bucket. As an additional security measure, make sure to turn on encryption of the S3 bucket before uploading the files.
Alibaba Cloud
A CLI example of how to run bank-vaults-based Vault unsealing on Alibaba Cloud:
bank-vaults unseal --mode alibaba-kms-oss --alibaba-access-key-id ${ALIBABA_ACCESS_KEY_ID} --alibaba-access-key-secret ${ALIBABA_ACCESS_KEY_SECRET} --alibaba-kms-region eu-central-1 --alibaba-kms-key-id ${ALIBABA_KMS_KEY_UUID} --alibaba-oss-endpoint oss-eu-central-1.aliyuncs.com --alibaba-oss-bucket bank-vaults
Kubernetes
The ServiceAccount under which the bank-vaults Pod runs has to have a Role with the following rules:
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update"]
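For example, a minimal Role granting these rules might look like the following manifest (the name and namespace are assumptions for your deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: bank-vaults     # illustrative name
  namespace: vault      # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update"]
```

Bind it to the ServiceAccount with a matching RoleBinding.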
3 - External configuration for Vault
In addition to the standard Vault configuration, the operator and CLI can continuously configure Vault using an external YAML/JSON configuration. That way you can configure Vault declaratively using your usual automation tools and workflow.
The following sections describe the configuration sections you can use.
3.1 - Fully or partially purging unmanaged configuration in Vault
Bank-Vaults gives you full control over Vault in a declarative style by removing any unmanaged configuration.
By enabling purgeUnmanagedConfig you keep the Vault configuration up-to-date.
So if you added a policy using Bank-Vaults and then removed it from the configuration,
Bank-Vaults removes it from Vault too. In other words, if you enable purgeUnmanagedConfig,
any changes that are not in the Bank-Vaults configuration will be removed (including manual changes).
WARNING: This feature is destructive, so be careful when you enable it, especially for the first time, because it can delete all data in your Vault. Always test it in a non-production environment first.
This feature is disabled by default and needs to be enabled explicitly in your configuration.
Mechanism
Bank-Vaults handles unmanaged configuration by comparing what is in the Bank-Vaults configuration (the desired state) with what’s already in Vault (the actual state), then removing any differences that are not in the Bank-Vaults configuration.
Fully purge unmanaged configuration
You can remove all unmanaged configuration by enabling the purge option as follows:
purgeUnmanagedConfig:
enabled: true
Partially purge unmanaged configuration
You can also enable the purge feature for only part of the configuration by explicitly excluding the sections whose unmanaged configuration you do not want to purge:
purgeUnmanagedConfig:
enabled: true
exclude:
secrets: true
This removes any unmanaged or manual changes in Vault, but leaves secrets untouched.
So if you enabled a new secrets engine manually (and it’s not in the Bank-Vaults configuration),
Bank-Vaults will not remove it.
3.2 - Audit devices
You can configure Audit Devices in Vault (File, Syslog, Socket).
audit:
- type: file
description: "File based audit logging device"
options:
file_path: /tmp/vault.log
3.3 - Authentication
You can configure Auth Methods in Vault.
Currently the following auth methods are supported:
AppRole auth method
Allow machines/apps to authenticate with Vault-defined roles. For details, see the official Vault documentation.
auth:
- type: approle
roles:
- name: default
policies: allow_secrets
secret_id_ttl: 10m
token_num_uses: 10
token_ttl: 20m
token_max_ttl: 30m
secret_id_num_uses: 40
AWS auth method
Create roles in Vault which can be used for AWS IAM-based authentication.
auth:
- type: aws
# Make the auth provider visible in the web ui
# See https://developer.hashicorp.com/vault/api-docs/system/auth#config for more
# information.
options:
listing_visibility: "unauth"
config:
access_key: VKIAJBRHKH6EVTTNXDHA
secret_key: vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj
iam_server_id_header_value: vault-dev.example.com # consider setting this to the Vault server's DNS name
crossaccountrole:
# Add cross account number and role to assume in the cross account
# https://developer.hashicorp.com/vault/api-docs/auth/aws#create-sts-role
- sts_account: 12345671234
sts_role: arn:aws:iam::12345671234:role/crossaccountrole
roles:
# Add roles for AWS instances or principals
# See https://developer.hashicorp.com/vault/api-docs/auth/aws#create-role
- name: dev-role-iam
bound_iam_principal_arn: arn:aws:iam::123456789012:role/dev-vault
policies: allow_secrets
period: 1h
- name: cross-account-role
bound_iam_principal_arn: arn:aws:iam::12345671234:role/crossaccountrole
policies: allow_secrets
period: 1h
Azure auth method
The Azure auth method allows authentication against Vault using Azure Active Directory credentials. For details, see the official Vault documentation.
auth:
- type: azure
config:
tenant_id: 00000000-0000-0000-0000-000000000000
resource: https://vault-dev.example.com
client_id: 00000000-0000-0000-0000-000000000000
client_secret: 00000000-0000-0000-0000-000000000000
roles:
# Add roles for azure identities
# See https://developer.hashicorp.com/vault/api-docs/auth/azure#create-role
- name: dev-mi
policies: allow_secrets
bound_subscription_ids:
- "00000000-0000-0000-0000-000000000000"
bound_service_principal_ids:
- "00000000-0000-0000-0000-000000000000"
GCP auth method
Create roles in Vault which can be used for GCP IAM based authentication.
auth:
- type: gcp
# Make the auth provider visible in the web ui
# See https://developer.hashicorp.com/vault/api-docs/system/auth#config for more
# information.
options:
listing_visibility: "unauth"
config:
# Credentials context is service account's key. Can download when you create a key for service account.
# No need to manually create it. Just paste the json context as multiline yaml.
credentials: |
{
"type": "service_account",
"project_id": "PROJECT_ID",
"private_key_id": "KEY_ID",
"private_key": "-----BEGIN PRIVATE KEY-----.....-----END PRIVATE KEY-----\n",
"client_email": "SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com",
"client_id": "CLIENT_ID",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT%40PROJECT_ID.iam.gserviceaccount.com"
}
roles:
# Add roles for gcp service account
# See https://developer.hashicorp.com/vault/api-docs/auth/gcp#create-role
- name: user-role
type: iam
project_id: PROJECT_ID
policies: "readonly_secrets"
bound_service_accounts: "USER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com"
- name: admin-role
type: iam
project_id: PROJECT_ID
policies: "allow_secrets"
bound_service_accounts: "ADMIN_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com"
GitHub auth method
Create team mappings in Vault which can be used later for GitHub authentication.
auth:
- type: github
# Make the auth provider visible in the web ui
# See https://developer.hashicorp.com/vault/api-docs/system/auth#config for more
# information.
options:
listing_visibility: "unauth"
config:
organization: banzaicloud
map:
# Map the banzaicloud GitHub team on to the dev policy in Vault
teams:
dev: dev
# Map my username (bonifaido) to the allow_secrets policy in Vault
users:
bonifaido: allow_secrets
JWT auth method
Create roles in Vault which can be used for JWT-based authentication.
auth:
- type: jwt
path: jwt
config:
oidc_discovery_url: https://myco.auth0.com/
roles:
- name: role1
bound_audiences:
- https://vault.plugin.auth.jwt.test
user_claim: https://vault/user
groups_claim: https://vault/groups
policies: allow_secrets
ttl: 1h
Kubernetes auth method
Use the Kubernetes auth method to authenticate with Vault using a Kubernetes Service Account Token.
auth:
- type: kubernetes
# If you want to configure with specific kubernetes service account instead of default service account
# https://developer.hashicorp.com/vault/docs/auth/kubernetes
# config:
# token_reviewer_jwt: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9....
# kubernetes_ca_cert: |
# -----BEGIN CERTIFICATE-----
# ...
# -----END CERTIFICATE-----
# kubernetes_host: https://192.168.64.42:8443
# Allows creating roles in Vault which can be used later on for the Kubernetes based
# authentication.
# See https://developer.hashicorp.com/vault/docs/auth/kubernetes#creating-a-role for
# more information.
roles:
# Allow every pod in the default namespace to use the secret kv store
- name: default
bound_service_account_names: default
bound_service_account_namespaces: default
policies: allow_secrets
ttl: 1h
LDAP auth method
Create group mappings in Vault which can be used for LDAP based authentication.
- To start an LDAP test server, run: docker run -it --rm -p 389:389 -e LDAP_TLS=false --name ldap osixia/openldap
- To start an LDAP admin server, run: docker run -it --rm -p 6443:443 --link ldap:ldap -e PHPLDAPADMIN_LDAP_HOSTS=ldap -e PHPLDAPADMIN_LDAP_CLIENT_TLS=false osixia/phpldapadmin
auth:
- type: ldap
description: LDAP directory auth.
# add mount options
# See https://developer.hashicorp.com/vault/api-docs/system/auth#config for more
# information.
options:
listing_visibility: "unauth"
config:
url: ldap://localhost
binddn: "cn=admin,dc=example,dc=org"
bindpass: "admin"
userattr: uid
userdn: "ou=users,dc=example,dc=org"
groupdn: "ou=groups,dc=example,dc=org"
groups:
# Map the developers LDAP group to the allow_secrets policy in Vault
developers:
policies: allow_secrets
# Map myself to the allow_secrets policy in Vault
users:
bonifaido:
groups: developers
policies: allow_secrets
3.4 - Plugins
To register a new plugin in Vault’s plugin catalog, set the plugin_directory option in the Vault server configuration to the directory where the plugin binary is located. Also, for some plugins readOnlyRootFilesystem Pod Security Policy should be disabled to allow RPC communication between plugin and Vault server via Unix socket. For details, see the Hashicorp Go plugin documentation.
plugins:
- plugin_name: ethereum-plugin
command: ethereum-vault-plugin --ca-cert=/vault/tls/client/ca.crt --client-cert=/vault/tls/server/server.crt --client-key=/vault/tls/server/server.key
sha256: 62fb461a8743f2a0af31d998074b58bb1a589ec1d28da3a2a5e8e5820d2c6e0a
type: secret
3.5 - Policies
You can create policies in Vault, and later use these policies in roles for the Kubernetes-based authentication. For details, see Policies in the official Vault documentation.
policies:
- name: allow_secrets
rules: path "secret/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
- name: readonly_secrets
rules: path "secret/*" {
capabilities = ["read", "list"]
}
3.6 - Secrets engines
You can configure Secrets Engines in Vault. The Key-Value, Database, and SSH values are tested, but the configuration is free-form, so other engines probably work as well.
AWS
The AWS secrets engine generates AWS access credentials dynamically based on IAM policies.
secrets:
- type: aws
path: aws
description: AWS Secrets Engine
configuration:
config:
- name: root
access_key: "${env `AWS_ACCESS_KEY_ID`}"
secret_key: "${env `AWS_SECRET_ACCESS_KEY`}"
region: us-east-1
roles:
- credential_type: iam_user
policy_arns: arn-of-policy
name: my-aws-role
Consul
The Consul secrets engine generates Consul ACL tokens dynamically based on policies created in Consul.
secrets:
- path: consul
type: consul
description: Consul secrets
configuration:
config:
- name: "access"
address: "consul-server:8500"
token: "${env `CONSUL_GLOBAL_MANAGEMENT_TOKEN`}" # Example how to read environment variables
roles:
- name: "<application_name>-read-only-role"
consul_policies: "<application_name>-read-only-policy"
- name: "<application_name>-read-write-role"
consul_policies: "<application_name>-read-write-policy"
Database
This plugin generates database credentials dynamically based on configured roles for the MySQL/MariaDB database.
secrets:
- type: database
description: MySQL Database secret engine.
configuration:
config:
- name: my-mysql
plugin_name: "mysql-database-plugin"
connection_url: "{{username}}:{{password}}@tcp(127.0.0.1:3306)/"
allowed_roles: [pipeline]
username: "${env `ROOT_USERNAME`}" # Example how to read environment variables
password: "${env `ROOT_PASSWORD`}"
roles:
- name: pipeline
db_name: my-mysql
creation_statements: "GRANT ALL ON *.* TO '{{name}}'@'%' IDENTIFIED BY '{{password}}';"
default_ttl: "10m"
max_ttl: "24h"
Identity Groups
Allows you to configure identity groups.
Note:
Only external groups are supported at the moment, through the use of group-aliases. For supported authentication backends (for example JWT, which automatically matches those aliases to groups returned by the backend), the configuration files for the groups and group-aliases need to be parsed after the authentication backend has been mounted. Ideally, they should be in the same file to avoid errors.
groups:
- name: admin
policies:
- admin
metadata:
admin: "true"
privileged: "true"
type: external
group-aliases:
- name: admin
mountpath: jwt
group: admin
Key-Values
This plugin stores arbitrary secrets within the configured physical storage for Vault.
secrets:
- path: secret
type: kv
description: General secrets.
options:
version: 2
configuration:
config:
- max_versions: 100
Non-default plugin path
Mounts a non-default plugin’s path.
- path: ethereum-gateway
type: plugin
plugin_name: ethereum-plugin
description: Immutability's Ethereum Wallet
PKI
The PKI secrets engine generates X.509 certificates.
secrets:
- type: pki
description: Vault PKI Backend
config:
default_lease_ttl: 168h
max_lease_ttl: 720h
configuration:
config:
- name: urls
issuing_certificates: https://vault.default:8200/v1/pki/ca
crl_distribution_points: https://vault.default:8200/v1/pki/crl
root/generate:
- name: internal
common_name: vault.default
roles:
- name: default
allowed_domains: localhost,pod,svc,default
allow_subdomains: true
generate_lease: true
ttl: 30m
RabbitMQ
The RabbitMQ secrets engine generates user credentials dynamically based on configured permissions and virtual hosts.
To start a RabbitMQ test server, run: docker run -it --rm -p 15672:15672 rabbitmq:3.7-management-alpine
secrets:
- type: rabbitmq
description: local-rabbit
configuration:
config:
- name: connection
connection_uri: "http://localhost:15672"
username: guest
password: guest
roles:
- name: prod_role
vhosts: '{"/web":{"write": "production_.*", "read": "production_.*"}}'
SSH
Create a named Vault role for signing SSH client keys.
secrets:
- type: ssh
path: ssh-client-signer
description: SSH Client Key Signing.
configuration:
config:
- name: ca
generate_signing_key: "true"
roles:
- name: my-role
allow_user_certificates: "true"
allowed_users: "*"
key_type: "ca"
default_user: "ubuntu"
ttl: "24h"
default_extensions:
permit-pty: ""
permit-port-forwarding: ""
permit-agent-forwarding: ""
3.7 - Startup secrets
Allows writing some secrets to Vault (useful for development purposes). For details, see the Key-Value secrets engine.
startupSecrets:
- type: kv
path: secret/data/accounts/aws
data:
data:
AWS_ACCESS_KEY_ID: secretId
AWS_SECRET_ACCESS_KEY: s3cr3t