
system node coredump trigger


Make the node dump system core and reset

Availability: This command is available to cluster administrators at the advanced privilege level.

Description

This command triggers a non-maskable interrupt (NMI) on the specified node via the Service Processor of that node, causing a dirty shutdown of the node. This operation forces a dump of the kernel core when halting the node. LIF migration or storage takeover occurs as normal in a dirty shutdown. This command differs from the -dump parameter of the system node shutdown, system node halt, and system node reboot commands in that it uses a control flow through the Service Processor of the remote node, whereas the -dump parameter uses a communication channel between the Data ONTAP instances running on the nodes. This command is helpful when Data ONTAP on the remote node is hung or unresponsive. After the panicked node reboots, the generated coredump can be viewed by using the system node coredump show command. This command works on a single node only, and the full name of the node must be entered exactly.
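For comparison, a graceful reboot that also dumps core goes through Data ONTAP itself by using the -dump parameter, which requires the target node to be responsive. A minimal sketch, assuming a target node named node2:

cluster1::*> system node reboot -node node2 -dump true

The trigger command documented here bypasses Data ONTAP entirely by driving the node's Service Processor, which is why it still works when the node is hung.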

Parameters

-node {<nodename>|local} - Node (privilege: advanced)

This parameter specifies the node for which you want to trigger a coredump.
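As a sketch, the local keyword can be used to target the node where the command is run, while the Examples section below shows targeting a remote node by name:

cluster1::*> system node coredump trigger -node local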

Examples

The following example triggers an NMI via the Service Processor and causes node2 to panic and generate a coredump. After node2 reboots, the system node coredump show command can be used to display the generated coredump.

cluster1::> set advanced

Warning: These advanced commands are potentially dangerous; use them only when
         directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> system node coredump trigger -node node2

Warning: The Service Processor is about to perform an operation that will cause
         a dirty shutdown of node "node2". This operation can
         cause data loss. Before using this command, ensure that the cluster
         will have enough remaining nodes to stay in quorum. To reboot or halt
         a node gracefully, use the "system node reboot" or "system node halt"
         command instead. Do you want to continue? {yes|no}: yes

Warning: This operation will reboot the current node. You will lose this login
         session. Do you want to continue? {y|n}: y

cluster1::*>
cluster1::> system node coredump show
Node:Type Core Name                                   Saved Panic Time
--------- ------------------------------------------- ----- -----------------
node2:kernel
          core.1786429481.2013-10-04.11_18_37.nz      false 10/4/2013 11:18:37
              Partial Core: false
              Number of Attempts to Save Core: 0
              Space Needed To Save Core: 3.60GB
1 entries were displayed.

cluster1::>
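After the coredump is generated (note the Saved column shows false in the output above), it typically needs to be saved before it can be collected for analysis. A minimal sketch using the system node coredump save command, with the core name taken from the example output above; verify the exact parameters for your Data ONTAP release:

cluster1::> system node coredump save -node node2 -corename core.1786429481.2013-10-04.11_18_37.nz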