Install the Reference Configuration File (RCF)
After setting up the Nexus 3132Q-V switch for the first time, install the Reference Configuration File (RCF).
Verify the following installations and connections:

- A current backup of the switch configuration.
- A fully functional cluster (no errors in the logs or similar issues).
- The current RCF file.
- A console connection to the switch, which is required when installing the RCF.

This procedure requires the use of both ONTAP commands and Cisco Nexus 3000 series switch commands; ONTAP commands are used unless otherwise indicated.

No operational inter-switch link (ISL) is needed during this procedure. This is by design, because RCF version changes can affect ISL connectivity temporarily. To ensure nondisruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.
Step 1: Install the RCF on the switches
1. Display the cluster ports on each node that are connected to the cluster switches:

   network device-discovery show

   Show example:

   cluster1::*> network device-discovery show
   Node/       Local  Discovered
   Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
   ----------- ------ ------------------------- ----------------- ------------
   cluster1-01/cdp
               e0a    cs1                       Ethernet1/7       N3K-C3132Q-V
               e0d    cs2                       Ethernet1/7       N3K-C3132Q-V
   cluster1-02/cdp
               e0a    cs1                       Ethernet1/8       N3K-C3132Q-V
               e0d    cs2                       Ethernet1/8       N3K-C3132Q-V
   cluster1-03/cdp
               e0a    cs1                       Ethernet1/1/1     N3K-C3132Q-V
               e0b    cs2                       Ethernet1/1/1     N3K-C3132Q-V
   cluster1-04/cdp
               e0a    cs1                       Ethernet1/1/2     N3K-C3132Q-V
               e0b    cs2                       Ethernet1/1/2     N3K-C3132Q-V
   cluster1::*>
2. Check the administrative and operational status of each cluster port.

   a. Verify that all the cluster ports are up with a healthy status:

      network port show -ipspace Cluster

      Show example:

      cluster1::*> network port show -ipspace Cluster

      Node: cluster1-01
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/100000 healthy  false
      e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false

      Node: cluster1-02
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/100000 healthy  false
      e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false

      Node: cluster1-03
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
      e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

      Node: cluster1-04
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
      e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false
      8 entries were displayed.
      cluster1::*>

   b. Verify that all cluster interfaces (LIFs) are on their home ports:

      network interface show -vserver Cluster

      Show example:

      cluster1::*> network interface show -vserver Cluster
                  Logical            Status     Network           Current      Current Is
      Vserver     Interface          Admin/Oper Address/Mask      Node         Port    Home
      ----------- ------------------ ---------- ----------------- ------------ ------- ----
      Cluster
                  cluster1-01_clus1  up/up      169.254.3.4/23    cluster1-01  e0a     true
                  cluster1-01_clus2  up/up      169.254.3.5/23    cluster1-01  e0d     true
                  cluster1-02_clus1  up/up      169.254.3.8/23    cluster1-02  e0a     true
                  cluster1-02_clus2  up/up      169.254.3.9/23    cluster1-02  e0d     true
                  cluster1-03_clus1  up/up      169.254.1.3/23    cluster1-03  e0a     true
                  cluster1-03_clus2  up/up      169.254.1.1/23    cluster1-03  e0b     true
                  cluster1-04_clus1  up/up      169.254.1.6/23    cluster1-04  e0a     true
                  cluster1-04_clus2  up/up      169.254.1.7/23    cluster1-04  e0b     true
      cluster1::*>

   c. Verify that the cluster displays information for both cluster switches:

      system cluster-switch show -is-monitoring-enabled-operational true

      Show example:

      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ---------------
      cs1                         cluster-network    10.0.0.1         NX3132QV
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 9.3(4)
          Version Source: CDP
      cs2                         cluster-network    10.0.0.2         NX3132QV
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 9.3(4)
          Version Source: CDP
      2 entries were displayed.

      Note: For ONTAP 9.8 and later, use the command system switch ethernet show -is-monitoring-enabled-operational true.
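The "Is Home" check above can be automated when the command output has been captured (for example, over an SSH session). The following is a minimal sketch, not part of the official procedure; the function name `lifs_not_home` and the capture mechanism are assumptions.

```python
# Hypothetical helper: scan captured output of
# "network interface show -vserver Cluster" and report any cluster
# LIF whose "Is Home" column is not "true".
def lifs_not_home(show_output: str) -> list[str]:
    stray = []
    for line in show_output.splitlines():
        fields = line.split()
        # Data rows end with the Is Home column ("true"/"false"); the
        # LIF name is always six fields from the end, whether or not
        # the row begins with the vserver name.
        if len(fields) >= 6 and fields[-1] in ("true", "false"):
            if fields[-1] == "false":
                stray.append(fields[-6])
    return stray

sample = """\
Cluster     cluster1-01_clus1  up/up      169.254.3.4/23    cluster1-01  e0a     true
            cluster1-01_clus2  up/up      169.254.3.5/23    cluster1-01  e0d     true
"""
print(lifs_not_home(sample))  # [] when every LIF is on its home port
```

An empty list means all LIFs are home; any names returned identify LIFs that still need to be reverted.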
3. Disable auto-revert on the cluster LIFs:

   cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false

   After running this command, make sure that auto-revert is disabled.
4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes:

   cs2> enable
   cs2# configure
   cs2(config)# interface eth1/1/1-2,eth1/7-8
   cs2(config-if-range)# shutdown
   cs2(config-if-range)# exit
   cs2# exit

   The ports displayed depend on the number of nodes in the cluster.
5. Verify that the cluster ports have failed over to the ports hosted on cluster switch cs1. This might take a few seconds.

   network interface show -vserver Cluster

   Show example:

   cluster1::*> network interface show -vserver Cluster
               Logical            Status     Network           Current      Current Is
   Vserver     Interface          Admin/Oper Address/Mask      Node         Port    Home
   ----------- ------------------ ---------- ----------------- ------------ ------- ----
   Cluster
               cluster1-01_clus1  up/up      169.254.3.4/23    cluster1-01  e0a     true
               cluster1-01_clus2  up/up      169.254.3.5/23    cluster1-01  e0a     false
               cluster1-02_clus1  up/up      169.254.3.8/23    cluster1-02  e0a     true
               cluster1-02_clus2  up/up      169.254.3.9/23    cluster1-02  e0a     false
               cluster1-03_clus1  up/up      169.254.1.3/23    cluster1-03  e0a     true
               cluster1-03_clus2  up/up      169.254.1.1/23    cluster1-03  e0a     false
               cluster1-04_clus1  up/up      169.254.1.6/23    cluster1-04  e0a     true
               cluster1-04_clus2  up/up      169.254.1.7/23    cluster1-04  e0a     false
   cluster1::*>
6. Verify that the cluster is healthy:

   cluster show

   Show example:

   cluster1::*> cluster show
   Node                 Health  Eligibility   Epsilon
   -------------------- ------- ------------  -------
   cluster1-01          true    true          false
   cluster1-02          true    true          false
   cluster1-03          true    true          true
   cluster1-04          true    true          false
   cluster1::*>
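The cluster health check above can likewise be scripted against captured output. This is a minimal sketch under the same assumptions as before (the helper name `unhealthy_nodes` is hypothetical):

```python
# Hypothetical helper: scan captured "cluster show" output and return
# the names of any nodes whose Health column is not "true".
def unhealthy_nodes(show_output: str) -> list[str]:
    bad = []
    for line in show_output.splitlines():
        fields = line.split()
        # Data rows have exactly four columns: Node, Health,
        # Eligibility, Epsilon. The header and separator rows are
        # skipped because their second field is not "true"/"false".
        if len(fields) == 4 and fields[1] in ("true", "false"):
            if fields[1] != "true":
                bad.append(fields[0])
    return bad

sample = """\
cluster1-01          true    true          false
cluster1-02          true    true          false
cluster1-03          true    true          true
cluster1-04          true    true          false
"""
print(unhealthy_nodes(sample))  # [] -> every node reports healthy
```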
7. If you have not already saved the current switch configuration, copy the output of the following command to a text file:

   show running-config
8. Record any custom additions between the current running configuration and the RCF file in use.

   Make sure the following are configured:
   * Username and password
   * Management IP address
   * Default gateway
   * Switch name
9. Save the basic configuration details to the `write_erase.cfg` file on the bootflash.

   When upgrading or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch serial console port to set up the switch again.

   cs2# show run | section "switchname" > bootflash:write_erase.cfg
   cs2# show run | section "hostname" >> bootflash:write_erase.cfg
   cs2# show run | i "username admin password" >> bootflash:write_erase.cfg
   cs2# show run | section "vrf context management" >> bootflash:write_erase.cfg
   cs2# show run | section "interface mgmt0" >> bootflash:write_erase.cfg
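For orientation, the resulting `write_erase.cfg` would contain roughly the following sections. This is an illustrative sketch only; the hostname, route, address, and password hash below are placeholders, and your file will reflect your own switch settings:

```
switchname cs2
username admin password 5 $5$<hash>  role network-admin
vrf context management
  ip route 0.0.0.0/0 10.0.0.254
interface mgmt0
  vrf member management
  ip address 10.0.0.2/24
```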
10. When installing RCF 1.12 or later, run the following commands:

    cs2# echo "hardware access-list tcam region vpc-convergence 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region racl 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region e-racl 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region qos 256" >> bootflash:write_erase.cfg

    See the Knowledge Base article "How to clear configuration on a Cisco interconnect switch while retaining remote connectivity" for more details.
11. Verify that the `write_erase.cfg` file is populated as expected:

    show file bootflash:write_erase.cfg
12. Issue the `write erase` command to erase the current saved configuration:

    cs2# write erase
    Warning: This command will erase the startup-configuration.
    Do you wish to proceed anyway? (y/n)  [n] y
13. Copy the previously saved basic configuration into the startup configuration:

    cs2# copy bootflash:write_erase.cfg startup-config
14. Reboot the switch:

    cs2# reload
    This command will reboot the system. (y/n)?  [n] y
15. Repeat steps 7 through 14 on switch cs1.

16. Connect the cluster ports of all nodes in the ONTAP cluster to switches cs1 and cs2.
Step 2: Verify the switch connections
1. Verify that the switch ports connected to the cluster ports are up:

   show interface brief | grep up

   Show example:

   cs1# show interface brief | grep up
   .
   .
   Eth1/1/1      1       eth  access up      none                    10G(D) --
   Eth1/1/2      1       eth  access up      none                    10G(D) --
   Eth1/7        1       eth  trunk  up      none                   100G(D) --
   Eth1/8        1       eth  trunk  up      none                   100G(D) --
   .
   .
2. Verify that the ISL between cs1 and cs2 is functional:

   show port-channel summary

   Show example:

   cs1# show port-channel summary
   Flags:  D - Down        P - Up in port-channel (members)
           I - Individual  H - Hot-standby (LACP only)
           s - Suspended   r - Module-removed
           b - BFD Session Wait
           S - Switched    R - Routed
           U - Up (port-channel)
           p - Up in delay-lacp mode (member)
           M - Not in use. Min-links not met
   --------------------------------------------------------------------------------
   Group Port-       Type     Protocol  Member Ports
         Channel
   --------------------------------------------------------------------------------
   1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
   cs1#
3. Verify that the cluster LIFs have reverted to their home ports. If auto-revert is still disabled from earlier in this procedure, re-enable it so that the LIFs can return home: network interface modify -vserver Cluster -lif * -auto-revert true

   network interface show -vserver Cluster

   Show example:

   cluster1::*> network interface show -vserver Cluster
               Logical            Status     Network           Current      Current Is
   Vserver     Interface          Admin/Oper Address/Mask      Node         Port    Home
   ----------- ------------------ ---------- ----------------- ------------ ------- ----
   Cluster
               cluster1-01_clus1  up/up      169.254.3.4/23    cluster1-01  e0a     true
               cluster1-01_clus2  up/up      169.254.3.5/23    cluster1-01  e0d     true
               cluster1-02_clus1  up/up      169.254.3.8/23    cluster1-02  e0a     true
               cluster1-02_clus2  up/up      169.254.3.9/23    cluster1-02  e0d     true
               cluster1-03_clus1  up/up      169.254.1.3/23    cluster1-03  e0a     true
               cluster1-03_clus2  up/up      169.254.1.1/23    cluster1-03  e0b     true
               cluster1-04_clus1  up/up      169.254.1.6/23    cluster1-04  e0a     true
               cluster1-04_clus2  up/up      169.254.1.7/23    cluster1-04  e0b     true
   cluster1::*>
4. Verify that the cluster is healthy:

   cluster show

   Show example:

   cluster1::*> cluster show
   Node                 Health  Eligibility   Epsilon
   -------------------- ------- ------------  -------
   cluster1-01          true    true          false
   cluster1-02          true    true          false
   cluster1-03          true    true          true
   cluster1-04          true    true          false
   cluster1::*>
Step 3: Set up the ONTAP cluster
NetApp recommends that you use System Manager to set up new clusters.
System Manager provides a simple, guided workflow for cluster setup and configuration, including assigning node management IP addresses, initializing the cluster, creating a local tier, configuring protocols, and provisioning initial storage.
See "Set up ONTAP on a new cluster with System Manager" for setup instructions.
After installing the RCF, you can "Verify the SSH configuration".