Create a volume using NFS

You can use this workflow to create a volume accessed through the NFS protocol.

Note If the aggregateName and maxNumOfDisksApprovedToAdd properties are not provided in the REST API call, the call fails and the response includes a suggested aggregate name and the number of disks needed to fulfill the request.

Choose the workflow to use based on the type of Cloud Volumes ONTAP deployment:

Create a volume using NFS for single node

You can use this workflow to create a volume using NFS for a single node system.

1. Select a working environment

Perform the workflow Get working environments and choose the publicId and the svmName values for the workingEnvironmentId and the svmName parameters.
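
The exact request is defined in the Get working environments workflow. As a sketch, assuming that workflow exposes GET /occm/api/working-environments and that Cloud Volumes ONTAP systems are listed under a vsaWorkingEnvironments array (an assumption; confirm against that workflow's response), you could list the values with curl and, optionally, filter them with jq:

curl example (sketch)
curl --location --request GET 'https://cloudmanager.cloud.netapp.com/occm/api/working-environments' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>' | jq '.vsaWorkingEnvironments[] | {publicId, svmName}'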

2. Select an aggregate

Perform the workflow Get aggregates and choose the name value of the aggregate for the name parameter.

Note If the named aggregate does not exist and the createAggregateIfNotFound query parameter is set to true, the create volume request is allowed even though the named aggregate is not found.
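
As a sketch, assuming the Get aggregates workflow for single node systems exposes GET /occm/api/vsa/aggregates with a workingEnvironmentId query parameter (confirm the path in that workflow), the lookup could look like this:

curl example (sketch)
curl --location --request GET 'https://cloudmanager.cloud.netapp.com/occm/api/vsa/aggregates?workingEnvironmentId=<WORKING_ENV_ID>' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>'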

3. Select a virtual private cloud

Perform the workflow Get virtual private clouds and choose the cidrBlock value of the required VPC for the ips parameter or fill in the desired exportPolicyInfo value manually.
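
For example, if the selected VPC reports a cidrBlock of 172.31.0.0/16 (a hypothetical value), the exportPolicyInfo block could be filled in as follows; the policy name shown is also hypothetical:

JSON fragment example (sketch)
"exportPolicyInfo": {
  "name": "export_policy_nfs",
  "policyType": "custom",
  "ips": ["172.31.0.0/16"],
  "nfsVersion": ["nfs3", "nfs4"]
}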

4. Choose the size for the disk

Choose the size value for the size:size parameter. The size:unit must be one of the following: TB, GB, MB, KB, or Byte.

5. Select the rules

Choose values for the exportPolicyInfo→rules→ruleAccessControl and exportPolicyInfo→rules→superUser parameters.

6. Create a quote

Perform the workflow Create quote. This step is recommended but not mandatory.
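
As a sketch, assuming the Create quote workflow for single node systems accepts the same JSON body at POST /occm/api/vsa/volumes/quote (an assumption; confirm the path in that workflow), the call could look like this:

curl example (sketch)
curl --location --request POST 'https://cloudmanager.cloud.netapp.com/occm/api/vsa/volumes/quote' --header 'Content-Type: application/json' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>' -d @JSONinput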

7. Create a volume

HTTP method   Path

POST          /occm/api/vsa/volumes

curl example
curl --location --request POST 'https://cloudmanager.cloud.netapp.com/occm/api/vsa/volumes' --header 'Content-Type: application/json' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>' -d @JSONinput
Input

The JSON input example includes the minimum list of input parameters:

  • <WORKING_ENV_ID> (workingEnvironmentId)

  • <SVM_NAME> (svmName)

  • <AGGR_NAME> (aggregateName)

If the aggregate name does not exist, you can set the createAggregateIfNotFound query parameter to true, which allows the request to proceed even though the aggregate is not found.

JSON input example
{
  "workingEnvironmentId": "vsaworkingenvironment-sfrf3wvj",
  "svmName": "svm_zivgcp01we02",
  "aggregateName": "ziv01agg01",
  "name": "zivagg01vol01",
  "size": {
    "size": 100,
    "unit": "GB"
  },
  "snapshotPolicyName": "default",
  "enableThinProvisioning": true,
  "enableCompression": true,
  "enableDeduplication": true,
  "maxNumOfDisksApprovedToAdd": 0,
  "exportPolicyInfo": {
    "name": "rule",
    "policyType": "custom",
    "ips": ["x.0.0.0"],
    "nfsVersion": [
      "nfs3",
      "nfs4"
      ],
    "rules": [
      {
        "index": 1,
        "ruleAccessControl": "readwrite",
        "ips":  ["1.2.3.4"],
        "nfsVersion": [
          "nfs3",
          "nfs4"
          ],
        "superUser": True
      }
    ]
  }
}
Output

None

Create a volume using NFS for high availability pair

You can use this workflow to create a volume using NFS for an HA working environment.

1. Select a working environment

Perform the workflow Get working environments and choose the publicId and the svmName values for the workingEnvironmentId and the svmName parameters.

2. Select an aggregate

Perform the workflow Get aggregates and choose the name value of the aggregate for the name parameter.

Note If the named aggregate does not exist and the createAggregateIfNotFound query parameter is set to true, the create volume request is allowed even though the named aggregate is not found.
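
As a sketch, assuming the Get aggregates workflow for HA working environments exposes GET /occm/api/aws/ha/aggregates with a workingEnvironmentId query parameter (an assumption; confirm the path in that workflow), the lookup could look like this:

curl example (sketch)
curl --location --request GET 'https://cloudmanager.cloud.netapp.com/occm/api/aws/ha/aggregates?workingEnvironmentId=<WORKING_ENV_ID>' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>'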

3. Select a virtual private cloud

Perform the workflow Get virtual private clouds and choose the cidrBlock value of the required VPC for the ips parameter or fill in the desired exportPolicyInfo value manually.

4. Choose the size for the disk

Choose the size value for the size:size parameter. The size:unit must be one of the following: TB, GB, MB, KB, or Byte.

5. Select the rules

Choose values for the exportPolicyInfo→rules→ruleAccessControl and exportPolicyInfo→rules→superUser parameters.

6. Create a quote

Perform the workflow Create quote. This step is recommended but not mandatory.
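
As a sketch, assuming the Create quote workflow for HA working environments accepts the same JSON body at POST /occm/api/aws/ha/volumes/quote (an assumption; confirm the path in that workflow), the call could look like this:

curl example (sketch)
curl --location --request POST 'https://cloudmanager.cloud.netapp.com/occm/api/aws/ha/volumes/quote' --header 'Content-Type: application/json' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>' -d @JSONinput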

7. Create a volume

HTTP method   Path

POST          /occm/api/aws/ha/volumes

curl example
curl --location --request POST 'https://cloudmanager.cloud.netapp.com/occm/api/aws/ha/volumes' --header 'Content-Type: application/json' --header 'x-agent-id: <AGENT_ID>' --header 'Authorization: Bearer <ACCESS_TOKEN>' -d @JSONinput
Input

The JSON input example includes the minimum list of input parameters:

  • <WORKING_ENV_ID> (workingEnvironmentId)

  • <SVM_NAME> (svmName)

  • <AGGR_NAME> (aggregateName)

If the aggregate name does not exist, you can set the createAggregateIfNotFound query parameter to true, which allows the request to proceed even though the aggregate is not found.

JSON input example
{
  "workingEnvironmentId": "vsaworkingenvironment-sfrf3wvj",
  "svmName": "svm_zivgcp01we02",
  "aggregateName": "ziv01agg01",
  "name": "zivagg01vol01",
  "size": {
    "size": 100,
    "unit": "GB"
  },
  "snapshotPolicyName": "default",
  "enableThinProvisioning": true,
  "enableCompression": true,
  "enableDeduplication": true,
  "maxNumOfDisksApprovedToAdd": 0,
  "exportPolicyInfo": {
    "name": "rule",
    "policyType": "custom",
    "ips": ["x.0.0.0"],
    "nfsVersion": [
      "nfs3",
      "nfs4"
      ],
    "rules": [
      {
        "index": 1,
        "ruleAccessControl": "readwrite",
        "ips":  ["1.2.3.4"],
        "nfsVersion": [
          "nfs3",
          "nfs4"
          ],
        "superUser": True
      }
    ]
  }
}
Output

None