
Set up a data broker group to use an external HashiCorp Vault

When you create a sync relationship that requires Amazon S3, Azure, or Google Cloud credentials, you need to specify those credentials through the BlueXP copy and sync user interface or API. An alternative is to set up the data broker group to access the credentials (or secrets) directly from an external HashiCorp Vault.

This feature is supported through the BlueXP copy and sync API with sync relationships that require Amazon S3, Azure, or Google Cloud credentials.

1. Prepare the vault

Prepare the vault to supply credentials to the data broker group by setting up the URLs. The URLs to the secrets in the vault must end with Creds.

2. Prepare the data broker group

Prepare the data broker group to fetch credentials from the external vault by modifying the local config file for each data broker in the group.

3. Create a sync relationship using the API

Now that everything is set up, you can send an API call to create a sync relationship that uses your vault to get the secrets.

Prepare the vault

You'll need to provide BlueXP copy and sync with the URL to the secrets in your vault. Prepare the vault by setting up those URLs. You need to set up URLs to the credentials for each source and target in the sync relationships that you plan to create.

The URL must be set up as follows:

/<path>/<requestid>/<endpoint-protocol>Creds

Path

The prefix path to the secret. This can be any value that's unique to you.

Request ID

A request ID that you need to generate. You'll need to provide the ID in one of the headers in the API POST request when you create the sync relationship.

Endpoint protocol

One of the following protocols, as defined in the post relationship v2 documentation: S3, AZURE, or GCP (each must be in uppercase).

Creds

The URL must end with Creds.

Examples

The following examples show URLs to secrets.

Example for the full URL and path for source credentials

http://example.vault.com:8200/my-path/all-secrets/hb312vdasr2/S3Creds

In this example, the prefix path is /my-path/all-secrets/, the request ID is hb312vdasr2, and the source endpoint protocol is S3.

Example for the full URL and path for target credentials

http://example.vault.com:8200/my-path/all-secrets/n32hcbnejk2/AZURECreds

The prefix path is /my-path/all-secrets/, the request ID is n32hcbnejk2, and the target endpoint protocol is AZURE.
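
If you manage the vault programmatically, you can store the secrets at these URLs yourself. The following is a minimal sketch in Python using the requests library, assuming token authentication and a prefix path that resolves to a writable KV version 1 secret (in practice the prefix path typically includes the API version and mount, for example v1/secret, as in the data broker configuration examples later on this page). The accessKey and secretKey field names are placeholders; store whichever credential fields the post relationship v2 documentation defines for your endpoint protocol.

import requests

VAULT_ADDR = "http://example.vault.com:8200"
VAULT_TOKEN = "<vault-token>"  # a token with write access to the prefix path

# The secret URL must follow /<path>/<request-id>/<endpoint-protocol>Creds,
# matching the source example above.
secret_url = f"{VAULT_ADDR}/my-path/all-secrets/hb312vdasr2/S3Creds"

# Placeholder field names; store whichever credential fields the
# post relationship v2 documentation defines for your endpoint protocol.
secret_data = {
    "accessKey": "<s3-access-key>",
    "secretKey": "<s3-secret-key>",
}

response = requests.post(
    secret_url,
    headers={"X-Vault-Token": VAULT_TOKEN},
    json=secret_data,
    timeout=30,
)
response.raise_for_status()  # a successful KV v1 write returns 204 No Content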

Prepare the data broker group

Prepare the data broker group to fetch credentials from the external vault by modifying the local config file for each data broker in the group.

Steps
  1. SSH to a data broker in the group.

  2. Edit the local.json file that resides in /opt/netapp/databroker/config.

  3. Set enabled to true and set the config parameter fields under external-integrations.hashicorp as follows:

    enabled
    • Valid values: true/false

    • Type: Boolean

    • Default value: false

    • True: The data broker gets secrets from your own external HashiCorp Vault

    • False: The data broker stores credentials in its local vault

    url
    • Type: string

    • Value: The URL to your external vault

    path
    • Type: string

    • Value: Prefix path to the secret with your credentials

    reject-unauthorized
    • Determines whether the data broker should reject an unauthorized external vault

    • Type: Boolean

    • Default: false

    auth-method
    • The authentication method that the data broker should use to access credentials from the external vault

    • Type: string

    • Valid values: "aws-iam" / "app-role" / "gcp-iam"

    role-name
    • Type: string

    • Your role name (if you use aws-iam or gcp-iam)

    secret_id & root_id
    • Type: string

    • Your secret ID and root ID (if you use app-role)

    namespace
    • Type: string

    • Your namespace (sent as the X-Vault-Namespace header, if needed)

  4. Repeat these steps for any other data brokers in the group.

Example for aws-iam authentication

{
  "external-integrations": {
    "hashicorp": {
      "enabled": true,
      "url": "https://example.vault.com:8200",
      "path": "my-path/all-secrets",
      "reject-unauthorized": false,
      "auth-method": "aws-iam",
      "aws-iam": {
        "role-name": "my-role"
      }
    }
  }
}

Example for gcp-iam authentication

{
  "external-integrations": {
    "hashicorp": {
      "enabled": true,
      "url": "http://ip-10-20-30-55.ec2.internal:8200",
      "path": "v1/secret",
      "namespace": "",
      "reject-unauthorized": true,
      "auth-method": "gcp-iam",
      "aws-iam": {
        "role-name": ""
      },
      "app-role": {
        "root_id": "",
        "secret_id": ""
      },
      "gcp-iam": {
        "role-name": "my-iam-role"
      }
    }
  }
}
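
For app-role authentication, the same structure applies. The following is a minimal sketch based on the fields shown above; the root_id and secret_id values are placeholders for the role ID and secret ID issued by your vault.

Example for app-role authentication (sketch)

{
  "external-integrations": {
    "hashicorp": {
      "enabled": true,
      "url": "https://example.vault.com:8200",
      "path": "my-path/all-secrets",
      "reject-unauthorized": true,
      "auth-method": "app-role",
      "app-role": {
        "root_id": "<your-role-id>",
        "secret_id": "<your-secret-id>"
      }
    }
  }
}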

Set up permissions when using gcp-iam authentication

If you're using the gcp-iam authentication method, then the data broker must have the following GCP permission:

- iam.serviceAccounts.signJwt

Create a sync relationship using secrets from the vault

Now that everything is set up, you can send an API call to create a sync relationship that uses your vault to get the secrets.

Post the relationship using the BlueXP copy and sync REST API.

Headers:
Authorization: Bearer <user-token>
Content-Type: application/json
x-account-id: <accountid>
x-netapp-external-request-id-src: The request ID that's part of the path for the source credentials
x-netapp-external-request-id-trg: The request ID that's part of the path for the target credentials
Body: The post relationship v2 body

Example

Example for the POST request:

url: https://api.cloudsync.netapp.com/api/relationships-v2
headers:
"x-account-id": "CS-SasdW"
"x-netapp-external-request-id-src": "hb312vdasr2"
"Content-Type": "application/json"
"Authorization": "Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6Ik…"
Body:
{
  "dataBrokerId": "5e6e111d578dtyuu1555sa60",
  "source": {
    "protocol": "s3",
    "s3": {
      "provider": "sgws",
      "host": "1.1.1.1",
      "port": "443",
      "bucket": "my-source"
    }
  },
  "target": {
    "protocol": "s3",
    "s3": {
      "bucket": "my-target-bucket"
    }
  }
}
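
If you script the call, the same request can be sent as shown in the following minimal sketch, which uses Python with the requests library and the placeholder values from the example above. Add the x-netapp-external-request-id-trg header as well if the target credentials also come from the vault.

import requests

API_URL = "https://api.cloudsync.netapp.com/api/relationships-v2"

headers = {
    "Authorization": "Bearer <user-token>",  # your BlueXP user token
    "Content-Type": "application/json",
    "x-account-id": "CS-SasdW",
    # Request ID that's part of the vault path for the source credentials
    "x-netapp-external-request-id-src": "hb312vdasr2",
}

body = {
    "dataBrokerId": "5e6e111d578dtyuu1555sa60",
    "source": {
        "protocol": "s3",
        "s3": {
            "provider": "sgws",
            "host": "1.1.1.1",
            "port": "443",
            "bucket": "my-source"
        }
    },
    "target": {
        "protocol": "s3",
        "s3": {
            "bucket": "my-target-bucket"
        }
    }
}

response = requests.post(API_URL, headers=headers, json=body, timeout=60)
response.raise_for_status()
print(response.json())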