Some Commvault HyperScale X Snippets
Set the Time on HyperScale X Nodes
Networks often block Red Hat's internet-based time servers, leaving the nodes out of sync. Correct this by pointing the nodes at internal time servers or at reachable external ones, since the nodes are expected to stay in time sync with one another.
# Pause ransomware protection or it won't be possible to modify the time
cd /opt/commvault/MediaAgent64
./cvsecurity.py pause_protection
# Verify SELinux is now in permissive mode
sestatus
# Modify time server configuration file
vim /etc/chrony.conf
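# Example server entries inside chrony.conf (hostnames here are hypothetical; substitute your own time servers)
#   server ntp1.example.internal iburst
#   server ntp2.example.internal iburst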
# Restart chronyd
systemctl restart chronyd.service
# Verify change in place
chronyc sources
# Check time
date
# If the time is still not correct, run the following to force the change
chronyc -a makestep
# If the time zone is incorrect, run the following to set the correct time zone (America/Chicago used as an example)
timedatectl set-timezone America/Chicago
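# If unsure of the exact zone name, valid time zone strings can be listed with
timedatectl list-timezones | grep -i america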
# Verify again
date
# Restart Commvault services
commvault restart
# Verify services are up
commvault list
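Ransomware protection was paused at the start of this procedure and should be turned back on once the time is correct and services are up. A minimal sketch, assuming the enable_protection subcommand of cvsecurity.py (verify the exact command against the Commvault documentation for your release):
# Re-enable ransomware protection that was paused earlier
cd /opt/commvault/MediaAgent64
./cvsecurity.py enable_protection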
Helpful HyperScale X Commands
To get the versions
#To get the CDS version
/usr/local/hedvig/scripts/whichCommit.sh
#To get the kernel version
uname -a
Below are some helpful example looper commands; modify them as necessary. Log in to one of the HyperScale nodes to run them, adjusting the hostname pattern and node count to match your cluster. The examples below loop through 12 nodes with hostnames myHsxNode001 through myHsxNode012.
#To check the time on all nodes and verify they are in sync. Ex. myHsxNode001-12
for i in {01..12}; do ssh -q myHsxNode0$i "date"; done
#To verify the chrony.conf on all nodes
for i in {01..12}; do ssh -q myHsxNode0$i "date; cat /etc/chrony.conf"; done
#To get time configuration per node
for i in {01..02}; do ssh -q myHsxNode0$i "echo XXXXXX; hostname; date; head -5 /etc/chrony.conf; echo XXXXXX"; done
#To get the CDS version and hostname of each node
for i in {01..12}; do ssh -q myHsxNode0$i "echo XXXXXX; hostname; /usr/local/hedvig/scripts/whichCommit.sh; echo XXXXXX"; done
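The same pattern extends to other checks; for example, the following collects the kernel version from each node (hostnames again assumed to follow the myHsxNode001-12 convention):
#To get the kernel version and hostname of each node
for i in {01..12}; do ssh -q myHsxNode0$i "echo XXXXXX; hostname; uname -a; echo XXXXXX"; done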
To get the MAC addresses and check whether the links are up
ip link show
To verify network speed
ethtool bond2 | grep -i speed
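The looper pattern above can also be combined with these checks to verify link speed cluster-wide; bond2 and the myHsxNode001-12 hostnames are assumed as in the earlier examples:
#To check the bond speed on every node
for i in {01..12}; do ssh -q myHsxNode0$i "hostname; ethtool bond2 | grep -i speed"; done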
Set Up a Routable Backup Network on HyperScale X Nodes
By default, the backup network is not routable, and adding a second default gateway is not a good idea. The way to address this is with static routes, which send traffic destined for specific networks through specific routers. Define a static route for the remote backup networks, pointing it at the gateway on the backup network. This gives you multiple routable interfaces without introducing the problem of multiple default gateways.
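As a quick illustration, a one-off static route (which does not persist across reboots) could be added with the ip command; the addresses here are hypothetical, and bond3 is the backup bond configured in the steps below:
ip route add 10.20.30.0/24 via 10.10.10.1 dev bond3
The rest of this section sets up the equivalent route permanently.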
To do this, create the network device and bond files. You can copy the existing files; I recommend copying those from the storage bond, since the backup network will not have a default gateway.
NOTE: Your bond type may be different
Example:
cp /etc/sysconfig/network-scripts/ifcfg-enpsf3 /etc/sysconfig/network-scripts/ifcfg-enpsf5
vim /etc/sysconfig/network-scripts/ifcfg-enpsf5
The device file for the backup network looks like the following. The other device file would be bond3-slave2; a sketch of it appears after this example.
NAME=bond3-slave1
DEVICE=enpsf5
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
MASTER=bond3
SLAVE=yes
NM_CONTROLLED=no
ETHTOOL_OPTS="-G enpsf5 tx 4096 rx 4096"
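The second slave file is nearly identical; a sketch follows, assuming the second backup interface is named enpsf6 (your device name will differ):
NAME=bond3-slave2
DEVICE=enpsf6
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
MASTER=bond3
SLAVE=yes
NM_CONTROLLED=no
ETHTOOL_OPTS="-G enpsf6 tx 4096 rx 4096"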
cp /etc/sysconfig/network-scripts/ifcfg-bond2 /etc/sysconfig/network-scripts/ifcfg-bond3
nano /etc/sysconfig/network-scripts/ifcfg-bond3
The bond file for the backup network looks like the following.
DEVICE=bond3
BOOTPROTO=none
IPADDR=<Your IP>
NETMASK=255.255.255.240
ONBOOT=yes
USERCTL=no
NM_CONTROLLED=no
IPV6INIT=no
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="fail_over_mac=0 miimon=100 mode=802.3ad"
Create the route file so that the route persists across reboots
vim /etc/sysconfig/network-scripts/route-bond3
The new entry will look as follows:
<CIDR Target Network Address> via <Gateway> dev bond3
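For example, if the clients to be backed up live on a hypothetical 10.20.30.0/24 network and the gateway on the backup subnet is 10.10.10.1, the entry would read:
10.20.30.0/24 via 10.10.10.1 dev bond3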
Restart the network
systemctl restart network
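To confirm the bond came up with its address and the static route is in place, the following can be checked:
# Verify the bond address and the new route
ip addr show bond3
ip route | grep bond3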