SnapDrive for UNIX

Error message values


It is helpful for you to be aware of some of the more common error messages that you might see when using SnapDrive for UNIX, and to know how to address them.

The following table gives you detailed information about the most common errors that you might encounter when using SnapDrive for UNIX:

Error code Return code Type Message Solution

0000-001

NA

Admin

Datapath has been configured for the storage system <STORAGE-SYSTEM-NAME>. Please delete it using snapdrive config delete -mgmtpath command and retry.

Before deleting the storage system, delete the management path configured for the storage system by using the snapdrive config delete -mgmtpath command.

0001-242

NA

Admin

Unable to connect using https to storage system: 10.72.197.213. Ensure that 10.72.197.213 is a valid storage system name/address, and if the storage system that you configure is running on a Data ONTAP operating in 7-Mode, add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system 10.72.197.213 or modify the snapdrive.conf to use http for communication and restart the snapdrive daemon. If the storage system that you configure is running on clustered Data ONTAP, ensure that the Vserver name is mapped to IP address of the Vserver's management LIF.

Check the following conditions:

  • Ensure that the storage system you are connecting to is a valid storage system.

  • If the storage system that you are trying to configure is running on Data ONTAP operating in 7-Mode, add the host to the trusted hosts, and enable SSL on the storage system or modify the snapdrive.conf file to use HTTP for communication; then restart the SnapDrive daemon (see the sketch after this list).

  • If the storage system that you are trying to configure is running on clustered Data ONTAP, ensure that the Vserver name is mapped to the IP address of the Vserver's management logical interface (LIF).
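A minimal sketch of the 7-Mode HTTP fallback is shown below. The use-https-to-filer variable and the snapdrived commands are standard SnapDrive for UNIX names; the storage system option and host name are illustrative:

  # On the storage system (7-Mode): add the host to the trusted hosts
  options trusted.hosts host1

  # On the host: switch snapdrive.conf to HTTP, then restart the daemon
  use-https-to-filer=off
  snapdrived stop
  snapdrived start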

0003-004

NA

Admin

Failed to deport LUN <LUN-NAME> on storage system <STORAGE-SYSTEM-NAME> from the Guest OS. Reason: No mapping device information populated from CoreOS

This happens when you execute the snapdrive snap disconnect operation in the guest operating system.

Check if there is any RDM LUN mapping in the ESX server or stale RDM entry in the ESX server.

Delete the RDM mapping manually in the ESX server as well as in the guest operating system.

0001-019

3

Command

invalid command line — duplicate filespecs: <dg1/vol2 and dg1/vol2>

This happens when the command executed has multiple host entities on the same host volume.

For example, the command explicitly specified the host volume and the file system on the same host volume.

Complete the following steps:

  1. Remove all the duplicate instances of the host entities.

  2. Execute the command again.

0001-023

11

Admin

Unable to discover all LUNs in Disk Group dg1.Devices not responding: dg1 Please check the LUN status on the storage system and bring the LUN online if necessary or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use (http/https) for storage system communication and restarting snapdrive daemon.

This happens when a SCSI inquiry on the device fails. A SCSI inquiry on the device can fail for multiple reasons.

Execute the following steps:

  1. Set the device-retries configuration variable to a higher value.

    For example, set it to 10 (the default value is 3) and execute the command again (see the sketch after this list).

  2. Use the snapdrive storage show command with the -all option to get information about the device.

  3. Check if the FC or iSCSI service is up and running on the storage system.

    If not, contact the storage administrator to bring the storage system online.

  4. Check if the FC or iSCSI service is up on the host.

If the preceding solutions do not solve the issue, contact technical support.
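For step 1, a sketch of the snapdrive.conf change is shown below; device-retries is a standard SnapDrive for UNIX configuration variable, and restarting the daemon afterward is an assumption about when the change takes effect:

  device-retries=10
  snapdrived stop
  snapdrived start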

0001-218

Admin

Device /dev/mapper - SCSI Inquiry has failed. LUN not responding. Please check the LUN status on the storage system and bring the LUN online if necessary.

This occurs when the SCSI inquiry on the device fails in SLES10 SP2 with lvm2-2.02.17-7.27.8, where the filter setting is assigned as ["a|/dev/mapper/.*|", "a|/dev/cciss/.*|", "r/.*/"] in the lvm.conf file.

Set the filter setting as ["r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/cciss/.*|", "a/.*/"] in the lvm.conf file.

0001-395

NA

Admin

No HBAs on this host!

This occurs if you have a large number of LUNs connected to your host system.

Check if the variable enable-fcp-cache is set to on in the snapdrive.conf file.
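A sketch of the check is shown below; the snapdrive.conf path varies by platform, so the one used here is an assumption:

  grep enable-fcp-cache /opt/NetApp/snapdrive/snapdrive.conf
  # expected: enable-fcp-cache=on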

0001-389

NA

Admin

Cannot get HBA type for HBA assistant solarisfcp

This occurs if you have a large number of LUNs connected to your host system.

Check if the variable enable-fcp-cache is set to on in the snapdrive.conf file.

0001-389

NA

Admin

Cannot get HBA type for HBA assistant vmwarefcp

Check the following conditions:

  • Before you create storage, ensure that you have configured the virtual interface by using the following command:

    snapdrive config set -viadmin <user> <virtual_interface_IP or name>

  • If the storage system exists for a virtual interface and you still encounter the same error message, restart SnapDrive for UNIX for the storage create operation to be successful.

  • Check whether you meet the configuration requirements of Virtual Storage Console, as documented in the NetApp Virtual Storage Console for VMware vSphere documentation.

0001-682

NA

Admin

Host preparation for new LUNs failed: This functionality checkControllers is not supported.

Execute the command again for the SnapDrive operation to be successful.

0001-859

NA

Admin

None of the host's interfaces have NFS permissions to access directory <directory name> on storage system <storage system name>

In the snapdrive.conf file, ensure that the check-export-permission-nfs-clone configuration variable is set to off.
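A sketch of the snapdrive.conf line, using the variable name given above:

  check-export-permission-nfs-clone=off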

0002-253

Admin

Flex clone creation failed

This is a storage system side error. Collect the sd-trace.log and storage system logs to troubleshoot it.

0002-264

Admin

FlexClone is not supported on filer <filer name>

FlexClone is not supported with the current Data ONTAP version of the storage system. Upgrade the storage system's Data ONTAP version to 7.0 or later and then retry the command.

0002-265

Admin

Unable to check flex_clone license on filer <filername>

It is a storage system side error. Collect the sd-trace.log and storage system logs to troubleshoot it.

0002-266

NA

Admin

FlexClone is not licensed on filer <filername>

FlexClone is not licensed on the storage system. Retry the command after adding the FlexClone license on the storage system.

0002-267

NA

Admin

FlexClone is not supported on root volume <volume-name>

FlexClones cannot be created for root volumes.

0002-270

NA

Admin

The free space on the aggregate <aggregate-name> is less than <size> MB(megabytes) required for diskgroup/flexclone metadata

For connecting to raw LUNs using FlexClones, 2 MB of free space on the aggregate is required. Ensure that this much space is free on the aggregate, and then retry the command.
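A sketch of checking aggregate free space from the 7-Mode storage system prompt; the -h flag for human-readable sizes is an assumption:

  df -A -h <aggregate-name>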

0002-332

NA

Admin

SD.SnapShot.Restore access denied on qtree storage_array1:/vol/vol1/qtree1 for user lnx197-142\john

Contact the Operations Manager administrator to grant the required capability to the user.

0002-364

NA

Admin

Unable to contact DFM: lnx197-146, please change user name and/or password.

Verify and correct the user name and password of the sd-admin user.

0002-268

NA

Admin

<volume-Name> is not a flexible volume

FlexClones cannot be created for traditional volumes.

0003-003

Admin

Failed to export LUN <LUN_NAME> on storage system <STORAGE_NAME> to the Guest OS.

  • Check if there is any RDM LUN mapping or a stale RDM entry in the ESX server.

  • Delete the RDM mapping manually in the ESX server as well as in the guest operating system.

0003-012

Admin

Virtual Interface Server win2k3-225-238 is not reachable.

NIS is not configured for the host or guest OS.

You must provide the name and IP mapping in the /etc/hosts file.

For example:

  # cat /etc/hosts
  10.72.225.238 win2k3-225-238.eng.org.com win2k3-225-238

0001-552

NA

Command

Not a valid Volume-clone or LUN-clone

A clone split cannot be performed on traditional volumes.

0001-553

NA

Command

Unable to split “FS-Name” due to insufficient storage space in <Filer-Name>

The clone split starts, but then stops because sufficient storage space is not available on the storage system.

0003-002

Command

No more LUN's can be exported to the guest OS.

As the number of devices supported by the ESX server for a controller has reached the maximum limit, you must add more controllers for the guest operating system.

NOTE: The ESX server limits the maximum number of controllers to four per guest operating system.

9000-023

1

Command

No arguments for keyword -lun

This error occurs when the command with the -lun keyword does not have the lun_name argument.

What to do: Do either of the following:

  1. Specify the lun_name argument for the command with the -lun keyword (see the sketch after this list).

  2. Check the SnapDrive for UNIX help message.
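For step 1, a sketch of a storage create command with the lun_name argument supplied; the storage system, volume, LUN name, and size are illustrative:

  snapdrive storage create -lun exocet:/vol/vol1/lun1 -lunsize 2g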

0001-028

1

Command

File system </mnt/qa/dg4/vol1> is of a type (hfs) not managed by snapdrive. Please resubmit your request, leaving out the file system <mnt/qa/dg4/vol1>

This error occurs when a non-supported file system type is part of a command.

What to do: Exclude or update the file system type and then use the command again.

For the latest software compatibility information see the Interoperability Matrix.

9000-030

1

Command

-lun may not be combined with other keywords

This error occurs when you combine the -lun keyword with the -fs or -dg keyword. This is a syntax error and indicates invalid usage of the command.

What to do: Execute the command again only with the -lun keyword.

0001-034

1

Command

mount failed: mount: <device name> is not a valid block device

This error occurs only when the cloned LUN is already connected to the same filespec present in Snapshot copy and then you try to execute the snapdrive snap restore command.

The command fails because the iSCSI daemon remaps the device entry for the restored LUN when you delete the cloned LUN.

What to do: Do either of the following:

  1. Execute the snapdrive snap restore command again.

  2. Delete the connected LUN (if it is mounted on the same filespec as in Snapshot copy) before trying to restore a Snapshot copy of an original LUN.

0001-046 and 0001-047

1

Command

Invalid snapshot name: </vol/vol1/NO_FILER_PREFIX> or Invalid snapshot name: NO_LONG_FILERNAME - filer volume name is missing

This is a syntax error which indicates invalid use of command, where a Snapshot operation is attempted with an invalid Snapshot name.

What to do: Complete the following steps:

  1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies.

  2. Execute the command with the long_snap_name argument.

9000-047

1

Command

More than one -snapname argument given

SnapDrive for UNIX cannot accept more than one Snapshot name in the command line for performing any Snapshot operations.

What to do: Execute the command again, with only one Snapshot name.

9000-049

1

Command

-dg and -vg may not be combined

This error occurs when you combine the -dg and -vg keywords. This is a syntax error and indicates invalid usage of commands.

What to do: Execute the command either with the -dg or -vg keyword.

9000-050

1

Command

-lvol and -hostvol may not be combined

This error occurs when you combine the -lvol and -hostvol keywords. This is a syntax error and indicates invalid usage of commands. What to do: Complete the following steps:

  1. Change the -lvol option to the -hostvol option, or vice versa, in the command line.

  2. Execute the command.

9000-057

1

Command

Missing required -snapname argument

This is a syntax error that indicates an invalid usage of command, where a Snapshot operation is attempted without providing the snap_name argument.

What to do: Execute the command with an appropriate Snapshot name.

0001-067

6

Command

Snapshot hourly.0 was not created by snapdrive.

These are the automatic hourly Snapshot copies created by Data ONTAP.

0001-092

6

Command

snapshot <non_existent_24965> doesn't exist on a filervol exocet: </vol/vol1>

The specified Snapshot copy was not found on the storage system. What to do: Use the snapdrive snap list command to find the Snapshot copies that exist in the storage system.

0001-099

10

Admin

Invalid snapshot name: <exocet:/vol2/dbvol:NewSnapName> doesn't match filer volume name <exocet:/vol/vol1>

This is a syntax error that indicates invalid use of commands, where a Snapshot operation is attempted with an invalid Snapshot name.

What to do: Complete the following steps:

  1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies.

  2. Execute the command with the correct format of the Snapshot name that is qualified by SnapDrive for UNIX. The qualified formats are: long_snap_name and short_snap_name.

0001-122

6

Admin

Failed to get snapshot list on filer <exocet>: The specified volume does not exist.

This error occurs when the specified storage system (filer) volume does not exist.

What to do: Complete the following steps:

  1. Contact the storage administrator to get the list of valid storage system volumes.

  2. Execute the command with a valid storage system volume name.

0001-124

111

Admin

Failed to remove snapshot <snap_delete_multi_inuse_24374> on filer <exocet>: LUN clone

The Snapshot delete operation failed for the specified Snapshot copy because the LUN clone was present.

What to do: Complete the following steps:

  1. Use the snapdrive storage show command with the -all option to find the LUN clone for the Snapshot copy (part of the backing Snapshot copy output).

  2. Contact the storage administrator to split the LUN from the clone.

  3. Execute the command again.

0001-155

4

Command

Snapshot <dup_snapname23980> already exists on <exocet:/vol/vol1>. Please use -f (force) flag to overwrite existing snapshot

This error occurs if the Snapshot copy name used in the command already exists.

What to do: Do either of the following:

  1. Execute the command again with a different Snapshot name.

  2. Execute the command again with the -f (force) flag to overwrite the existing Snapshot copy.

0001-158

84

Command

diskgroup configuration has changed since <snapshot exocet:/vol/vol1:overwrite_noforce_25078> was taken. removed hostvol </dev/dg3/vol4> Please use '-f' (force) flag to override warning and complete restore

The disk group can contain multiple LUNs and when the disk group configuration changes, you encounter this error. For example, when creating a Snapshot copy, the disk group consisted of X number of LUNs and after making the copy, the disk group can have X+Y number of LUNs.

What to do: Use the command again with the -f (force) flag.

0001-185

NA

Command

storage show failed: no NETAPP devices to show or enable SSL on the filers or retry after changing snapdrive.conf to use http for filer communication.

This problem can occur for any of the following reasons:

  • The iSCSI daemon or the FC service on the host has stopped or is malfunctioning; the snapdrive storage show -all command then fails, even if there are configured LUNs on the host.

    What to do: Resolve the malfunctioning iSCSI or FC service.

  • The storage system on which the LUNs are configured is down or is undergoing a reboot.

    What to do: Wait until the LUNs are up.

  • The value set for the use-https-to-filer configuration variable might not be a supported configuration.

    What to do: Complete the following steps:

    1. Use the sanlun lun show all command to check if there are any LUNs mapped to the host.

    2. If there are any LUNs mapped to the host, follow the instructions mentioned in the error message: change the value of the use-https-to-filer configuration variable (to “on” if the value is “off”; to “off” if the value is “on”).

0001-226

3

Command

'snap create' requires all filespecs to be accessible Please verify the following inaccessible filespec(s): File System: </mnt/qa/dg1/vol3>

This error occurs when the specified host entity does not exist.

What to do: Use the snapdrive storage show command again with the -all option to find the host entities which exist on the host.

0001-242

18

Admin

Unable to connect to filer: <filername>

SnapDrive for UNIX attempts to connect to a storage system through the secure HTTP protocol. The error can occur when the host is unable to connect to the storage system.

What to do: Complete the following steps:

  1. Check for network problems:

    1. Use the nslookup command to check the DNS name resolution for the storage system that works through the host.

    2. Add the storage system to the DNS server if it does not exist.

    You can also use an IP address instead of a host name to connect to the storage system.

  2. Check the storage system configuration:

    1. For SnapDrive for UNIX to work, you must have the license key for secure HTTP access.

    2. After the license key is set up, check if you can access the storage system through a Web browser.

  3. Execute the command after performing either Step 1 or Step 2 or both.

0001-243

10

Command

Invalid dg name: <SDU_dg1>

This error occurs when the disk group is not present in the host and subsequently the command fails. For example, SDU_dg1 is not present in the host.

What to do: Complete the following steps:

  1. Use the snapdrive storage show -all command to get all the disk group names.

  2. Execute the command again, with the correct disk group name.

0001-246

10

Command

Invalid hostvolume name: </mnt/qa/dg2/BADFS>, the valid format is <vgname/hostvolname>, i.e. <mygroup/vol2>

What to do: Execute the command again, with the following appropriate format for the host volume name: vgname/hostvolname

0001-360

34

Admin

Failed to create LUN </vol/badvol1/nanehp13_unnewDg_fve_SdLun> on filer <exocet>: No such volume

This error occurs when the specified path includes a storage system volume which does not exist.

What to do: Contact your storage administrator to get the list of storage system volumes which are available for use.

0001-372

58

Command

Bad lun name:: </vol/vol1/sce_lun2a> - format not recognized

This error occurs if the LUN names that are specified in the command do not adhere to the pre-defined format that SnapDrive for UNIX supports. SnapDrive for UNIX requires LUN names to be specified in the following pre-defined format: <filer-name>:/vol/<volname>/<lun-name>

What to do: Complete the following steps:

  1. Use the snapdrive help command to know the pre-defined format for LUN names that SnapDrive for UNIX supports.

  2. Execute the command again.

0001-373

6

Command

The following required 1 LUN(s) not found: exocet:</vol/vol1/NotARealLun>

This error occurs when the specified LUN is not found on the storage system.

What to do: Do either of the following:

  1. To see the LUNs connected to the host, use the snapdrive storage show -dev command or snapdrive storage show -all command.

  2. To see the entire list of LUNs on the storage system, contact the storage administrator to get the output of the lun show command from the storage system.

0001-377

43

Command

Disk group name <name> is already in use or conflicts with another entity.

This error occurs when the disk group name is already in use or conflicts with another entity.

What to do: Do either of the following:

  1. Execute the command with the -autorename option.

  2. Use the snapdrive storage show command with the -all option to find the names that the host is using. Execute the command specifying another name that the host is not using.

0001-380

43

Command

Host volume name <dg3/vol1> is already in use or conflicts with another entity.

This error occurs when the host volume name is already in use or conflicts with another entity.

What to do: Do either of the following:

  1. Execute the command with the -autorename option.

  2. Use the snapdrive storage show command with the -all option to find the names that the host is using. Execute the command specifying another name that the host is not using.

0001-417

51

Command

The following names are already in use: <mydg1>. Please specify other names.

What to do: Do either of the following:

  1. Execute the command again with the -autorename option.

  2. Use the snapdrive storage show -all command to find the names that exist on the host. Execute the command again to explicitly specify another name that the host is not using.

0001-422

NA

Command

LVM initialization of luns failed: c2t500A09818667B9DAd0 VxVM vxdisksetup ERROR V-5-2-5241 Cannot label as disk geometry cannot be obtained.

What to do: Ensure that you have installed the latest patch, 146019-02, for Solaris Scalable Processor Architecture (SPARC).

0001-430

51

Command

You cannot specify both -dg/vg dg and -lvol/hostvol dg/vol

This is a syntax error which indicates an invalid usage of commands. The command line can accept either the -dg/vg keyword or the -lvol/hostvol keyword, but not both.

What to do: Execute the command with only the -dg/vg or -lvol/hostvol keyword.

0001-434

6

Command

snapshot exocet:/vol/vol1:NOT_EXIST doesn't exist on a storage volume exocet:/vol/vol1

This error occurs when the specified Snapshot copy is not found on the storage system.

What to do: Use the snapdrive snap list command to find the Snapshot copies that exist in the storage system.

0001-435

3

Command

You must specify all host volumes and/or all file systems on the command line or give the -autoexpand option.

The following names were missing on the command line but were found in snapshot <snap2_5VG_SINGLELUN_REMOTE>: Host Volumes: <dg3/vol2> File Systems: </mnt/qa/dg3/vol2>

The specified disk group has multiple host volumes or file systems, but the complete set is not mentioned in the command.

What to do: Do either of the following:

  1. Re-issue the command with the -autoexpand option (see the sketch after this list).

  2. Use the snapdrive snap show command to find the entire list of host volumes and file systems. Execute the command specifying all the host volumes or file systems.
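For option 1, a sketch of a snap connect command with -autoexpand; the disk group and Snapshot names follow the example message above and are illustrative:

  snapdrive snap connect -dg dg3 -snapname exocet:/vol/vol1:snap2_5VG_SINGLELUN_REMOTE -autoexpand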

0001-440

6

Command

snapshot snap2_5VG_SINGLELUN_REMOTE does not contain disk group 'dgBAD'

This error occurs when the specified disk group is not part of the specified Snapshot copy.

What to do: To find out whether there is any Snapshot copy for the specified disk group, complete the following steps:

  1. Use the snapdrive snap list command to find the Snapshot copies in the storage system.

  2. Use the snapdrive snap show command to find the disk groups, host volumes, file systems, or LUNs that are present in the Snapshot copy.

  3. If a Snapshot copy exists for the disk group, execute the command with the Snapshot name.

0001-442

1

Command

More than one destination - <dis> and <dis1> specified for a single snap connect source <src>. Please retry using separate commands.

What to do: Execute a separate snapdrive snap connect command, so that the new destination disk group name (which is part of the snap connect command) is not the same as what is already part of the other disk group units of the same snapdrive snap connect command.

0001-465

1

Command

The following filespecs do not exist and cannot be deleted: Disk Group: <nanehp13_ dg1>

The specified disk group does not exist on the host, therefore the deletion operation for the specified disk group failed.

What to do: See the list of entities on the host by using the snapdrive storage show command with the -all option.

0001-476

NA

Admin

Unable to discover the device associated with <long lun name> If multipathing in use, there may be a possible multipathing configuration error. Please verify the configuration and then retry.

There can be many reasons for this failure.

  • Invalid host configuration:

    The iSCSI, FC, or multipathing solution is not set up properly.

  • Invalid network or switch configuration:

    The IP network is not set up with the proper forwarding rules or filters for iSCSI traffic, or the FC switches are not configured with the recommended zoning configuration.

The preceding issues are very difficult to diagnose in an algorithmic or sequential manner.

What to do: NetApp recommends that before you use SnapDrive for UNIX, you follow the steps recommended in the Host Utilities Setup Guide (for the specific operating system) for discovering LUNs manually.

After you discover LUNs, use the SnapDrive for UNIX commands.

0001-486

12

Admin

LUN(s) in use, unable to delete. Please note it is dangerous to remove LUNs that are under Volume Manager control without properly removing them from Volume Manager control first.

SnapDrive for UNIX cannot delete a LUN that is part of a volume group.

What to do: Complete the following steps:

  1. Delete the disk group using the command snapdrive storage delete -dg <dgname>.

  2. Delete the LUN (see the sketch after these steps).
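A sketch of the sequence; the disk group, storage system, and LUN names are illustrative:

  snapdrive storage delete -dg mydg
  snapdrive storage delete -lun exocet:/vol/vol1/lun1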

0001-494

12

Command

Snapdrive cannot delete <mydg1>, because 1 host volumes still remain on it. Use -full flag to delete all file systems and host volumes associated with <mydg1>

SnapDrive for UNIX cannot delete a disk group until all the host volumes on the disk group are explicitly requested to be deleted.

What to do: Do either of the following:

  1. Specify the -full flag in the command.

  2. Complete the following steps:

    1. Use the snapdrive storage show -all command to get the list of host volumes that are on the disk group.

    2. Mention each of them explicitly in the SnapDrive for UNIX command.

0001-541

65

Command

Insufficient access permission to create a LUN on filer, <exocet>.

SnapDrive for UNIX uses the sd-hostname.prbac or sdgeneric.prbac file on the root storage system (filer) volume for its pseudo access control mechanism.

What to do: Do either of the following:

  1. Modify the sd-hostname.prbac or sdgeneric.prbac file in the storage system to include the following requisite permissions (can be one or many):

    1. NONE

    2. SNAP CREATE

    3. SNAP USE

    4. SNAP ALL

    5. STORAGE CREATE DELETE

    6. STORAGE USE

    7. STORAGE ALL

    8. ALL ACCESS

      NOTE:

      • If you do not have the sd-hostname.prbac file, then modify the sdgeneric.prbac file in the storage system.

      • If you have both the sd-hostname.prbac and sdgeneric.prbac files, then modify the settings only in the sd-hostname.prbac file in the storage system.

  2. In the snapdrive.conf file, ensure that the all-access-if-rbac-unspecified configuration variable is set to “on” (see the sketch after this list).
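For option 2, a sketch of the snapdrive.conf line, using the variable name given above:

  all-access-if-rbac-unspecified=on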

0001-559

NA

Admin

Detected I/Os while taking snapshot. Please quiesce your application. See Snapdrive Admin. Guide for more information.

This error occurs if you try to create a Snapshot copy, while parallel input/output operations occur on the file specification and the value of snapcreate-cg-timeout is set to urgent.

What to do: Increase the value of consistency groups time out by setting the value of snapcreate-cg-timeout to relaxed.
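A sketch of the snapdrive.conf change, using the variable and value given above:

  snapcreate-cg-timeout=relaxed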

0001-570

6

Command

Disk group <dg1> does not exist and hence cannot be resized

This error occurs when the disk group is not present in the host and subsequently the command fails.

What to do: Complete the following steps:

  1. Use the snapdrive storage show -all command to get all the disk group names.

  2. Execute the command with the correct disk group name.

0001-574

1

Command

<VmAssistant> lvm does not support resizing LUNs in disk groups

This error occurs when the volume manager that is used to perform this task does not support LUN resizing.

SnapDrive for UNIX depends on the volume manager solution to support the LUN resizing, if the LUN is part of a disk group.

What to do: Check if the volume manager that you are using supports LUN resizing.

0001-616

6

Command

1 snapshot(s) NOT found on filer: <exocet:/vol/vol1:MySnapName>

SnapDrive for UNIX cannot accept more than one Snapshot name in the command line for performing any Snapshot operations. To rectify this error, re-issue the command with one Snapshot name.

This is a syntax error which indicates invalid use of command, where a Snapshot operation is attempted with an invalid Snapshot name. To rectify this error, complete the following steps:

  1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies.

  2. Execute the command with the long_snap_name argument.

0001-640

1

Command

Root file system / is not managed by snapdrive

This error occurs when the root file system on the host is not supported by SnapDrive for UNIX. This is an invalid request to SnapDrive for UNIX.

0001-684

45

Admin

Mount point <fs_spec> already exists in mount table

What to do: Do either of the following:

  1. Execute the SnapDrive for UNIX command with a different mountpoint.

  2. Check that the mountpoint is not in use, and then manually (using any editor) delete the entry from the following file:

Solaris: /etc/vfstab

0001-796 and 0001-767

3

Command


SnapDrive for UNIX does not support more than one LUN in the same command with the -nolvm option.

What to do: Do either of the following:

  1. Use the command again to specify only one LUN with the -nolvm option.

  2. Use the command without the -nolvm option. This will use the supported volume manager present in the host, if any.

2715

NA

NA

Volume restore zephyr not available for the filer <filename>. Please proceed with lun restore

For older Data ONTAP versions, the volume restore ZAPI is not available. Reissue the command with single-file SnapRestore (SFSR).

2278

NA

NA

SnapShots created after <snapname> do not have volume clones … FAILED

Split or delete the clones.

2280

NA

NA

LUNs mapped and not in active or SnapShot <filespec-name> FAILED

Unmap or storage disconnect the host entities.

2282

NA

NA

No SnapMirror relationships exist … FAILED

  1. Either delete the relationships, or

  2. If SnapDrive for UNIX RBAC with Operations Manager is configured, ask the Operations Manager administrator to grant SD.Snapshot.DisruptBaseline capability to the user.

2286

NA

NA

LUNs not owned by <fsname> are application consistent in snapshotted volume … FAILED. Snapshot luns not owned by <fsname> which may be application inconsistent

Verify that the LUNs mentioned in the check results are not in use. Only after that, use the -force option.

2289

NA

NA

No new LUNs created after snapshot <snapname> … FAILED

Verify that the LUNs mentioned in the check results are not in use. Only after that, use the -force option.

2290

NA

NA

Could not perform inconsistent and newer Luns check. Snapshot version is prior to SDU 4.0

This happens with SnapDrive 3.0 for UNIX Snapshot copies when used with volume-based SnapRestore (-vbsr). Manually check that any newer LUNs created will not be used anymore, and then proceed with the -force option.

2292

NA

NA

No new SnapShots exist … FAILED. SnapShots created will be lost.

Check that the Snapshot copies mentioned in the check results will no longer be used, and if so, proceed with the -force option.

2297

NA

NA

Both normal file(s) and LUN(s) exist … FAILED

Ensure that the files and LUNs mentioned in the check results will not be used anymore, and if so, proceed with the -force option.

2302

NA

NA

NFS export list does not have foreign hosts … FAILED

Contact the storage administrator to remove the foreign hosts from the export list or ensure that the foreign hosts are not using the volumes through NFS.

9000-305

NA

Command

Could not detect type of the entity /mnt/my_fs. Provide a specific option (-lun, -dg, -fs or -lvol) if you know the type of the entity

Verify whether the entity already exists in the host. If you know the type of the entity, provide the file-spec type.

9000-303

NA

Command

Multiple entities with the same name - /mnt/my_fs exist on this host. Provide a specific option (-lun, -dg, -fs or -lvol) for the entity you have specified.

The user has multiple entities with the same name. In this case, the user must provide the file-spec type explicitly.

9000-304

NA

Command

/mnt/my_fs is detected as keyword of type file system, which is not supported with this command.

Operation on the auto-detected file_spec is not supported with this command. Verify with the respective help for the operation.

9000-301

NA

Command

Internal error in auto detection

Auto detection engine error. Provide the trace and daemon log for further analysis.

NA

NA

Command

snapdrive.dc tool unable to compress data on RHEL 5Ux environment

The compression utility is not installed by default. You must install the compression utility ncompress, for example, ncompress-4.2.4-47.i386.rpm.

To install the compression utility, enter the following command: rpm -ivh ncompress-4.2.4-47.i386.rpm

NA

NA

Command

Invalid filespec

This error occurs when the specified host entity does not exist or is inaccessible.

NA

NA

Command

Job Id is not valid

This message is displayed for the clone split status, result, or stop operation if the specified job ID is invalid or the result of the job has already been queried. You must specify a valid or available job ID and retry the operation.

NA

NA

Command

Split is already in progress

This message is displayed when:

  • Clone split is already in progress for the given volume clone or LUN clone.

  • Clone split is completed but the job is not removed.

NA

NA

Command

Not a valid Volume-Clone or LUN-Clone

Specified filespec or LUN pathname is not a valid volume clone or LUN clone.

NA

NA

Command

No space to split volume

This error occurs because the storage space required to split the volume is not available. Free enough space in the aggregate to split the volume clone.

NA

NA

NA

filer-data:junction_dbsw information not available — LUN may be offline

This error can occur when the /etc/fstab file is incorrectly configured. In this case, the mount paths are NFS but are treated as LUNs by SnapDrive for UNIX.

What to do: Add "/" between the filer name and the junction path.

0003-013

NA

Command

A connection error occurred with Virtual Interface server. Please check if Virtual Interface server is up and running.

This error can occur when the license on the ESX server has expired and the VSC service is not running.

What to do: Install the ESX Server license and restart the VSC service.

0002-137

NA

Command

Unable to get the fstype and mntOpts for 10.231.72.21:/vol/ips_vol3 from snapshot 10.231.72.21:/vol/ips_vol3:t5120-206-66_nfssnap.

What to do: Do one of the following:

  1. Add the IP address of the datapath interface or specific IP address as the host name into the /etc/hosts file.

  2. Create an entry for your datapath interface or host name IP address in the DNS.

  3. Configure the data LIFs of the Vserver to support Vserver management (with firewall-policy=mgmt):

    net int modify -vserver <Vserver_name> -lif <LIF_name> -firewall-policy mgmt

  4. Add the host's management IP address to the export rules of the Vserver.

13003

NA

Command

Insufficient privileges: user does not have read access to this resource.

This issue is seen in SnapDrive for UNIX 5.2.2. Prior to SnapDrive for UNIX 5.2.2, the vsadmin user configured in SnapDrive for UNIX needed to have the vsadmin_volume role. From SnapDrive for UNIX 5.2.2 onward, the vsadmin user needs elevated access roles; otherwise, the snapmirror-get-iter ZAPI fails.

What to do: Create the role vsadmin instead of vsadmin_volume and assign it to the vsadmin user.

0001-016

NA

Command

Could not acquire lock file on storage system.

Snapshot creation fails due to insufficient space in the volume, or due to the existence of a .snapdrive_lock file on the storage system.

What to do: Do either of the following:

  1. Delete the file /vol/<volname>/.snapdrive_lock on the storage system and retry the snap create operation. To delete the file, log in to the storage system, enter advanced privilege mode, and execute the command rm /vol/<volname>/.snapdrive_lock at the storage system prompt (see the sketch after these steps).

  2. Ensure that sufficient space is available in the volume before taking a Snapshot copy.
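For step 1, a sketch from the 7-Mode storage system prompt; the volume name is illustrative, and priv set admin returns to the normal privilege level:

  priv set advanced
  rm /vol/vol1/.snapdrive_lock
  priv set admin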

0003-003

NA

Admin

Failed to export LUN on storage system <controller name> to the Guest OS. Reason: FLOW-11019: Failure in MapStorage: No storage system configured with interface.

This error occurs when the storage controllers are not configured in the ESX server.

What to do: Add the storage controllers and their credentials in the ESX server.

0001-493

NA

Admin

Error creating mount point: Unexpected error from mkdir: mkdir: cannot create directory: Permission denied Check whether mount point is under automount paths.

Clone operations fail when the destination file spec is under the automount paths.

What to do: Make sure that the destination filespec/mount point is not under the automount paths.

0009-049

NA

Admin

Failed to restore from snapshot on storage system: Failed to restore file from Snapshot copy for volume on Vserver.

This error occurs when the volume is full or the volume has crossed the autodelete threshold.

What to do: Increase the volume size and ensure that the threshold value for a volume is maintained below the autodelete value.

0001-682

NA

Admin

Host preparation for new LUNs failed: This functionality is not supported.

This error occurs when the new LUN IDs creation fails.

What to do: Increase the number of LUNs to be created by using the following command:

snapdrive config prepare luns -count count_value

0001-060

NA

Admin

Failed to get information about Diskgroup: Volume Manager linuxlvm returned vgdisplay command failed.

This error occurs when SnapDrive for UNIX 4.1.1 or earlier is used on RHEL 5 or later.

What to do: Upgrade SnapDrive for UNIX and retry, because SnapDrive for UNIX 4.1.1 and earlier versions are not supported from RHEL 5 onwards.

0009-045

NA

Admin

Failed to create snapshot on storage system: Snapshot operation not allowed due to clones backed by snapshots. Try again after sometime.

This error occurs during Single-file Snap Restore (SFSR) operation followed by immediate snapshot creation.

What to do: Retry the Snapshot create operation after sometime.

0001-304

NA

Admin

Error creating disk/volume group: Volume manager failed with: metainit: No such file or directory.

This error occurs while performing snapdrive storage create operations for disk groups, host volumes, and file systems on Solaris in a Sun Cluster environment.

What to do: Uninstall the Sun Cluster software and retry the operations.

0001-122

NA

Admin

Failed to get snapshot list on filer the specified volume <volname> does not exist.

This error occurs when SnapDrive for UNIX tries to create a Snapshot copy using the exported active file system path of the volume (the actual path) rather than the dummy exported volume path.

What to do: Use volumes with the exported active file system path.

0001-476

NA

Admin

Unable to discover the device. If multipathing in use, there may be a possible multipathing configuration error. Please verify the configuration and then retry.

This error can occur for multiple reasons. Check the following conditions:

  • Before you create the storage, ensure that zoning is proper.

  • Check the transport protocol and multipathing-type in the snapdrive.conf file and ensure that proper values are set.

  • Check the multipath daemon status; if multipathing-type is set to nativempio, start multipathd and restart the snapdrived daemon.

NA

NA

NA

FS fails to be mounted after reboot due to unavailability of LV.

This happens when the LV is not available after the reboot; hence, the file system is not mounted.

What to do: After the reboot, run vgchange to bring the LV up, and then mount the file system (see the sketch below).
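A sketch of the recovery, assuming an LVM volume group named myvg with logical volume mylv normally mounted on /mnt/myfs (all names illustrative):

  vgchange -a y myvg
  mount /dev/myvg/mylv /mnt/myfs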

NA

NA

NA

Status call to SDU daemon failed.

There are multiple reasons for this error to occur. It indicates that the SnapDrive for UNIX job related to a specific operation failed abruptly (the child daemon ended) before the operation could be completed.

If storage creation or deletion fails with "Status call to SnapDrive for UNIX daemon failed", it could be because of a failing call to ONTAP to get the volume information; the volume-get-iter ZAPI might fail. Retry the snapdrive operations after some time.

A SnapDrive for UNIX operation might fail while executing "kpartx -l" when creating partitions, or while executing other operating system commands, because of inappropriate multipath.conf values. Ensure that proper values are set and that no duplicate keywords exist in the multipath.conf file.

While performing SFSR, SnapDrive for UNIX creates a temporary Snapshot copy, which might fail if the maximum number of Snapshot copies has been reached. Delete the older Snapshot copies and retry the restore operation.

NA

NA

NA

map in use; can't flush

This error occurs if any stale devices are left behind when SnapDrive for UNIX tries to flush the multipath device during the storage delete or disconnect operations.

What to do: Check if there are any stale devices by executing the multipath -ll | egrep -i fail command, and ensure that flush_on_last_del is set to 'yes' in the multipath.conf file (see the sketch below).
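A sketch of the check and the multipath.conf setting described above; the pipe into egrep reflects the assumed intent of the original command:

  multipath -ll | egrep -i fail

  # in /etc/multipath.conf
  defaults {
      flush_on_last_del yes
  }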
