Supported Versions
Veritas InfoScale 7.4.2

Introduction

Veritas Cluster Server (VCS) is high-availability cluster software for Unix, Linux, and Microsoft Windows computer systems. It is an availability clustering solution that manages the availability of application services, such as a DB2 database, by enabling application failover. The state of each cluster node and its associated software services is monitored regularly.

Failover

Failover is the process by which a standby system takes over operations whenever a primary system, network, or database fails or is abnormally terminated, allowing those operations to resume with minimal interruption.

Failover Cluster

A failover cluster is a set of servers that work together to provide high availability (HA) or continuous availability (CA). As mentioned earlier, if one of the servers goes down, another node in the cluster can take over its workload with minimal or no downtime. Some failover clusters use physical servers, whereas others involve virtual machines (VMs).

CA clusters let users keep accessing and working on services and applications without any timeouts (100% availability) when a server fails. HA clusters, on the other hand, may cause a short interruption in service, but the system recovers automatically with minimal downtime and no data loss.

A cluster is a set of two or more nodes (servers) that exchange data for processing over cables or a dedicated secure network. Other clustering technologies also enable load balancing, shared storage, and concurrent/parallel processing.

The image above shows an application running on a primary or master server. A dedicated redundant server is present to take over on any failure. The redundant server is not configured to perform any other functions; it remains on standby with full performance capability.

Veritas Cluster Server (VCS)

Veritas Cluster Server is high-availability cluster software for Unix, Linux, and Microsoft Windows computer systems.

Veritas Cluster Server connects multiple independent systems into a management framework for increased availability. Each system, or node, runs its own operating system and cooperates at the software level to form a cluster. VCS links commodity hardware with intelligent software to provide application failover and control: when a node or a monitored application fails, other nodes take over and bring up the services elsewhere in the cluster.

How VCS detects failure

VCS detects failure by issuing specific commands or scripts that monitor the overall health of an application. VCS also determines the health of the underlying resources that support the application, such as network interfaces and file systems.

VCS uses a redundant network heartbeat to differentiate between the loss of a system and the loss of communication between systems.
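
These checks can also be performed by hand with the command-line utilities that ship with VCS. The sketch below assumes the standard VCS binaries (installed under /opt/VRTSvcs/bin by default) are on the PATH; output formats vary by version.

    # Summary of cluster systems and service group states
    hastatus -sum

    # Per-node state of each configured service group and resource
    hagrp -state
    hares -state

    # Status of the redundant LLT heartbeat links between nodes
    lltstat -nvv configured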

How VCS ensures application availability

When VCS detects a node or application failure, VCS brings application services up on a different node in the cluster. VCS virtualizes IP addresses and system names, so client systems continue to access the application without interruption.

Integration with Veritas Cluster is done over SSH using shell scripts.
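
Conceptually, each collection cycle resembles the sketch below. The user name, host name, and command selection are illustrative placeholders, not the integration's actual implementation.

    # Hypothetical example: collect service group and node states over SSH
    # (user "content" and host "centos-node1" are placeholders)
    ssh content@centos-node1 '/opt/VRTSvcs/bin/hagrp -state'
    ssh content@centos-node1 '/opt/VRTSvcs/bin/hasys -state'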

Pre-requisites

  • OpsRamp Classic Gateway 10.0 and above.

  • Ensure that the “Adapter Integrations” add-on is enabled in the client configuration. Once it is enabled, you can see the Veritas Cluster integration under Setup -> Integrations -> Adapter.

  • It is mandatory to provide administrator or operator level VCS credentials in the input configuration. VCS credentials are required to fetch the veritas_cluster_group_State, veritas_cluster_group_Status, veritas_cluster_node_State, veritas_cluster_resource_State, veritas_cluster_resource_Status, and veritas_cluster_group_failover_Status metrics.

  • To collect the additional metrics (veritas_cluster_lltLinks_State, veritas_cluster_lltInterface_Status) with a non-root SSH user, you need to grant that user permissions by making an entry such as the following in “/etc/sudoers” (see the verification sketch after this list):

        cat /etc/sudoers

        ## Allow root to run any commands anywhere
        root    ALL=(ALL)     ALL
        ## Allow the monitoring user (example: content) to run the LLT
        ## status commands without a password
        content    ALL=NOPASSWD:/usr/sbin/lltstat -n,/usr/sbin/lltstat -nvv configured
    
        
  • If root SSH credentials are provided, the sudoers entry above is not required.

  • All node IP addresses should be publicly resolvable, or they should be configured in /etc/hosts on all the existing nodes in the Veritas cluster. For example:

        [root@centos-node1 ~]# cat /etc/hosts
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        172.26.1.25 centos-node1
        172.26.1.26 centos-node2
    
        
  • The non-root user running VCS commands must have a home directory on every system on which the VCS commands will be run. For example:

        [content@centos-node1 ~]$ pwd
        /home/content
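
The prerequisites above can be verified from a shell on each node. The sketch below reuses the example user (content) and host names (centos-node1, centos-node2) from the snippets above; adjust them to your environment.

        # Confirm the sudoers entry lets the non-root user run the LLT commands
        sudo -l | grep lltstat
        sudo /usr/sbin/lltstat -n

        # Confirm that every peer node resolves (via DNS or /etc/hosts)
        getent hosts centos-node2

        # Confirm the SSH user has a usable home directory
        echo "$HOME" && cd "$HOME" && pwd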
    
        

Resource Hierarchy

    Veritas Cluster
                → Server

Default monitoring configurations

The Veritas Cluster application has default Global Device Management Policies, Global Templates, Global Monitors, and Global Metrics in OpsRamp. Users can customize these default monitoring configurations for their business use cases by cloning the respective Global Templates and Global Device Management Policies. OpsRamp recommends doing this before installing the application, to avoid noisy alerts and unwanted data.

  1. Default Global Device Management Policies

    OpsRamp has a Global Device Management Policy for each Native Type of VERITAS-CLUSTER. You can find these Device Management Policies at Setup -> Resources -> Device Management Policies by searching with the suggested names in the global scope. Each Device Management Policy follows the naming convention below:

    {appName nativeType - version}

    Ex: veritas-cluster Veritas Cluster - 1 (i.e., appName = veritas-cluster, nativeType = Veritas Cluster, version = 1)

  2. Default Global Templates

    OpsRamp has a Global Template for each Native Type of VERITAS-CLUSTER. You can find these templates at Setup -> Monitoring -> Templates by searching with the suggested names in the global scope. Each template follows the naming convention below:

    {appName nativeType 'Template' - version}

    Ex: veritas-cluster Veritas Cluster Template - 1 (i.e., appName = veritas-cluster, nativeType = Veritas Cluster, version = 1)

  3. Default Global Monitors

    OpsRamp has a Global Monitor for each Native Type that has monitoring support. You can find these monitors at Setup -> Monitoring -> Monitors by searching with the suggested names in the global scope. Each monitor follows the naming convention below:

    {monitorKey appName nativeType - version}

    Ex: Veritas Cluster Monitor veritas-cluster Veritas Cluster 1 (i.e., monitorKey = Veritas Cluster Monitor, appName = veritas-cluster, nativeType = Veritas Cluster, version = 1)

Configure and install the integration

  1. From All Clients, select a client.
  2. Go to Setup > Integrations and Apps.
  3. Click Manage Apps.
    Notes:
    • If there are already installed applications, you are redirected to the INSTALLED APPS page, where all the installed applications are displayed.
    • If there are no installed applications, you are navigated to the ADD APP page.
  4. Click + ADD on the INSTALLED INTEGRATIONS page. The AVAILABLE INTEGRATIONS AND APPS page displays all the available applications along with the newly created application and its version.
    Note: You can search for the application using the search option. You can also use the All Categories option to filter.
  5. Click ADD in the Veritas Cluster application.
  6. In the Configuration page, click + ADD. The Add Configuration page appears.
  7. Enter the following BASIC INFORMATION:
| Functionality | Description |
| --- | --- |
| Name | Enter the name for the configuration. |
| IP Address | IP address of the target. |
| SSH Credentials | Select the SSH credentials from the drop-down list. Note: Click + Add to create a credential. |
| VCS Credentials | Select the VCS credentials from the drop-down list. Note: Click + Add to create a credential. |
| Veritas Cluster Path | Enter the Veritas cluster path. Note: By default, /opt/VRTSvcs/bin/ is displayed. |

Note: If App Failure Notifications is turned on, you are notified of application failures such as a Connectivity Exception or an Authentication Exception.

  8. Select the following Custom Attribute:
| Functionality | Description |
| --- | --- |
| Custom Attribute | Select the custom attribute from the drop-down list. |
| Value | Select the value from the drop-down list. |

Note: The custom attribute that you add here is assigned to all the resources created by the integration. You can add a maximum of five custom attributes (key and value pairs).

  9. In the RESOURCE TYPE section, select:
    • ALL: All the existing and future resources will be discovered.
    • SELECT: You can select one or multiple resources to be discovered.
  10. In the DISCOVERY SCHEDULE section, select Recurrence Pattern to add one of the following patterns:
    • Minutes
    • Hourly
    • Daily
    • Weekly
    • Monthly
  11. Click ADD.
  12. The configuration is saved and displayed on the configurations page. From the same page, you may Edit and Remove the created configuration.
  13. Click Next.
  14. In the Installation page, select an existing registered profile, and click FINISH.

The application is now installed and displayed on the INSTALLED INTEGRATIONS page. Use the search field to find the installed application.

Modify the Configuration

View the Veritas Cluster details

The discovered resource(s) are displayed in Infrastructure > Resources > Cluster, with Native Resource Type as Veritas Cluster. You can navigate to the Attributes tab to view the discovery details, and the Metrics tab to view the metric details for Veritas Cluster.

View Resource Metrics

To confirm Veritas Cluster monitoring, review the following:

  • Metric graphs: A graph is plotted for each metric that is enabled in the configuration.
  • Alerts: Alerts are generated for metrics according to the alert conditions configured for the integration.

Supported Metrics

Resource Type: Cluster

| Metric Name | Display Name | Unit | Description |
| --- | --- | --- | --- |
| veritas_cluster_group_State | Veritas Cluster Group State | | Veritas cluster service group state on each node. Possible values: 0-OFFLINE, 1-ONLINE, 2-FAULTED, 3-PARTIAL, 4-STARTING, 5-STOPPING, 6-MIGRATING, 7-OFFLINE\|FAULTED, 8-OFFLINE\|STARTING, 9-PARTIAL\|FAULTED, 10-PARTIAL\|STARTING, 11-PARTIAL\|STOPPING, 12-ONLINE\|STOPPING. |
| veritas_cluster_group_Status | Veritas Cluster Group Status | | Veritas cluster service group status. Possible values: 0 - service group is not online on any cluster node, 1 - service group is online on a cluster node. |
| veritas_cluster_node_State | Veritas Cluster Node Status | | Veritas cluster node's status. Possible values: 0-RUNNING, 1-ADMIN_WAIT, 2-CURRENT_DISCOVER_WAIT, 3-CURRENT_PEER_WAIT, 4-EXITING, 5-EXITED, 6-EXITING_FORCIBLY, 7-FAULTED, 8-INITING, 9-LEAVING, 10-LOCAL_BUILD, 11-REMOTE_BUILD, 12-STALE_ADMIN_WAIT, 13-STALE_DISCOVER_WAIT, 14-STALE_PEER_WAIT, 15-UNKNOWN. |
| veritas_cluster_resource_State | Veritas Cluster Resource State | | Veritas cluster resource state on each node. Possible values: 0-OFFLINE, 1-ONLINE, 2-FAULTED, 3-PARTIAL, 4-STARTING, 5-STOPPING, 6-MIGRATING, 7-OFFLINE\|FAULTED, 8-OFFLINE\|STARTING, 9-PARTIAL\|FAULTED, 10-PARTIAL\|STARTING, 11-PARTIAL\|STOPPING, 12-ONLINE\|STOPPING. |
| veritas_cluster_resource_Status | Veritas Cluster Resource Status | | Veritas cluster resource status. Possible values: 0 - resource is not online on any cluster node, 1 - resource is online on a cluster node. |
| veritas_cluster_group_failover_Status | Veritas Cluster Group Failover Status | | Veritas cluster service group failover status. Possible values: 0 - no change, 1 - cluster group moved from one node to another due to failover, 2 - the specific cluster group is not online on any cluster node. |
| veritas_cluster_service_status_LLT | Veritas Cluster LLT Service Status | | Low Latency Transport (LLT) status, used for communication between nodes in the cluster. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_service_status_GAB | Veritas Cluster GAB Service Status | | Group Membership and Atomic Broadcast (GAB) service status, used for creating membership between all the nodes. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_service_status_Fencing | Veritas Cluster Fencing Service Status | | Fencing service status. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_highAvailability_daemon_Status | Veritas Cluster HighAvailability Daemon Status | | High availability daemon status; the main VCS engine, which manages the agents and service groups. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_highAvailabilityCompanion_daemon_Status | Veritas Cluster HighAvailability Companion Daemon Status | | High availability companion daemon (hashadow) status. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_resourceAgent_daemon_Status | Veritas Cluster ResourceAgent Daemon Status | | Resource agent daemon status. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_clusterMgmt_daemon_Status | Veritas Cluster ClusterManagement Daemon Status | | Web console cluster management daemon status. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_volumeManager_daemon_Status | Veritas Cluster VolumeManager Daemon Status | | Volume manager daemon status; manages disk configurations at the Veritas level. Possible values: 1-Active, 0-Inactive. |
| veritas_cluster_RunningMode | Veritas Cluster Running Mode | | Veritas cluster running mode of the configuration (/etc/VRTSvcs/conf/config/main.cf). Possible values: 1-ReadOnly, 0-WriteMode. |
| veritas_cluster_active_NodeCount | Veritas Cluster Active Node Count | count | Count of the running cluster nodes at that instance. |
| veritas_cluster_running_NodeCount | Veritas Cluster Running Node Count | count | Count of the running cluster nodes at that instance. |
| veritas_cluster_node_Health | Veritas Cluster Node Health | % | Cluster health: the percentage of running nodes. |
| veritas_cluster_system_os_Uptime | System Uptime | m | Time elapsed since the last reboot, in minutes. |
| veritas_cluster_system_cpu_Load | System CPU Load | | Monitors the system's last 1 min, 5 min, and 15 min load averages. Sends the per-CPU-core load average. |
| veritas_cluster_system_cpu_UsageStats | System CPU Usage Statistics | % | Monitors the percentage of CPU time spent in various program spaces: User - time spent running user-space processes; System - time the CPU spent running the kernel; IOWait - time the CPU spends idle while waiting for an I/O operation to complete; Idle - time the processor spends idle; Steal - time a virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; Kernel Time; Total Time. |
| veritas_cluster_system_disk_inode_Utilization | System Disk Inode Utilization | % | Collects disk inode metrics for all physical disks in a server. |
| veritas_cluster_system_disk_FreeSpace | System Disk FreeSpace | GB | Monitors free disk space in GB. |
| veritas_cluster_system_disk_UsedSpace | System Disk UsedSpace | GB | Monitors used disk space in GB. |
| veritas_cluster_system_disk_Utilization | System Disk Utilization | % | Monitors disk utilization in percentage. |
| veritas_cluster_system_cpu_Utilization | System CPU Utilization | % | The percentage of elapsed time the processor spends executing non-idle threads (this does not include CPU steal time). |
| veritas_cluster_system_memory_UsedSpace | System Memory Used Space | GB | Physical and virtual memory usage in GB. |
| veritas_cluster_system_memory_Utilization | System Memory Utilization | % | Physical and virtual memory usage in percentage. |
| veritas_cluster_system_network_interface_OutTraffic | System Network Out Traffic | Kbps | Monitors outbound traffic on each interface for Linux devices. |
| veritas_cluster_system_network_interface_InDiscards | System Network In Discards | packets/sec | Monitors inbound discards on each interface for Linux devices. |
| veritas_cluster_system_network_interface_OutPackets | System Network Out Packets | packets/sec | Monitors outbound packets on each interface for Linux devices. |
| veritas_cluster_system_network_interface_OutErrors | System Network Out Errors | errors/sec | Monitors outbound errors on each interface for Linux devices. |
| veritas_cluster_system_network_interface_OutDiscards | System Network Out Discards | packets/sec | Monitors outbound discards on each interface for Linux devices. |
| veritas_cluster_system_network_interface_InPackets | System Network In Packets | packets/sec | Monitors inbound packets on each interface for Linux devices. |
| veritas_cluster_system_network_interface_InErrors | System Network In Errors | errors/sec | Monitors inbound errors on each interface for Linux devices. |
| veritas_cluster_system_network_interface_InTraffic | System Network In Traffic | Kbps | Monitors inbound traffic on each interface for Linux devices. |
| veritas_cluster_lltLinks_State | Veritas Cluster LLT Links State | | Low Latency Transport link state on each node. Possible values: 0-CONNWAIT, 1-OPEN. |

Resource Type: Server

| Metric Name | Display Name | Unit | Description |
| --- | --- | --- | --- |
| veritas_cluster_node_lltInterface_Status | Veritas Node LLT Interface Status | | Low Latency Transport interface status on each node. Possible values: 0-DOWN, 1-UP. |
| veritas_cluster_node_system_os_Uptime | System Uptime | m | Time elapsed since the last reboot, in minutes. |
| veritas_cluster_node_system_cpu_Load | System CPU Load | | Monitors the system's last 1 min, 5 min, and 15 min load averages. Sends the per-CPU-core load average. |
| veritas_cluster_node_system_cpu_UsageStats | System CPU Usage Statistics | % | Monitors the percentage of CPU time spent in various program spaces: User - time spent running user-space processes; System - time the CPU spent running the kernel; IOWait - time the CPU spends idle while waiting for an I/O operation to complete; Idle - time the processor spends idle; Steal - time a virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; Kernel Time; Total Time. |
| veritas_cluster_node_system_disk_inode_Utilization | System Disk Inode Utilization | % | Collects disk inode metrics for all physical disks in a server. |
| veritas_cluster_node_system_disk_FreeSpace | System Disk FreeSpace | GB | Monitors free disk space in GB. |
| veritas_cluster_node_system_disk_UsedSpace | System Disk UsedSpace | GB | Monitors used disk space in GB. |
| veritas_cluster_node_system_disk_Utilization | System Disk Utilization | % | Monitors disk utilization in percentage. |
| veritas_cluster_node_system_cpu_Utilization | System CPU Utilization | % | The percentage of elapsed time the processor spends executing non-idle threads (this does not include CPU steal time). |
| veritas_cluster_node_system_memory_UsedSpace | System Memory Used Space | GB | Physical and virtual memory usage in GB. |
| veritas_cluster_node_system_memory_Utilization | System Memory Utilization | % | Physical and virtual memory usage in percentage. |
| veritas_cluster_node_system_network_interface_OutTraffic | System Network Out Traffic | Kbps | Monitors outbound traffic on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_InDiscards | System Network In Discards | packets/sec | Monitors inbound discards on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_OutPackets | System Network Out Packets | packets/sec | Monitors outbound packets on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_OutErrors | System Network Out Errors | errors/sec | Monitors outbound errors on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_OutDiscards | System Network Out Discards | packets/sec | Monitors outbound discards on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_InPackets | System Network In Packets | packets/sec | Monitors inbound packets on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_InErrors | System Network In Errors | errors/sec | Monitors inbound errors on each interface for Linux devices. |
| veritas_cluster_node_system_network_interface_InTraffic | System Network In Traffic | Kbps | Monitors inbound traffic on each interface for Linux devices. |

Risks, Limitations & Assumptions

  • Currently, only Linux-based Veritas failover clusters are supported.
  • When two configurations are added with the same end-device details (such as IP address and credentials), gaps may appear in the graphs because of internal VCS logins and logouts running in parallel on the same device.
  • Component-level threshold configuration is not possible.
  • Resource-level metric threshold customization and frequency settings are not possible.
  • There are usability issues on the app configuration page while adding/editing a configuration.
  • Optional configuration parameters cannot be defined.
  • Application upgrade is a manual process, without a version change.