The NextGen gateway collector has been introduced for users who want a High Availability (HA) gateway in their Kubernetes environment. The gateway consists of a single Pod, which contains a set of containers running in the k3s environment.

Refer to OpsRamp’s Collector Bootstrap Tool for general guidelines on installing and registering the NextGen gateway.


To deploy the HA gateway in your Kubernetes environment, make sure the environment meets these requirements:

  • 8 GB memory
  • 50 GB disk
  • 4 CPU cores
  • AMD64 architecture
  • 3 nodes are recommended for High Availability
  • An additional IP is required for the gateway in HA mode
  • An additional IP is required to run squid-proxy in cluster mode
    (Refer to the MetalLB IP Range document to learn how to add additional IPs.)
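Before installing, the requirements above can be verified on each candidate node with a quick shell check. This is only a sketch: the thresholds come from the list above, and the commands assume a Linux host with GNU coreutils.

```shell
# Quick prerequisite check (sketch; thresholds from the requirements list).
# Run on each candidate node before installing.
ARCH=$(uname -m)                                                      # expect x86_64 (AMD64)
CPUS=$(nproc)                                                         # expect >= 4
MEM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)  # expect >= 8
DISK_GB=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')           # expect >= 50

echo "arch=${ARCH} cpus=${CPUS} mem=${MEM_GB}GB disk=${DISK_GB}GB"
[ "${ARCH}" = "x86_64" ] || echo "WARN: AMD64 architecture required"
[ "${CPUS}" -ge 4 ]      || echo "WARN: at least 4 CPU cores required"
```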

Install k3s and Enable HA for the NextGen Gateway

Follow the steps below to install the HA gateway in your Kubernetes environment.

  1. Use the following command to list the available options under the setup command.

    opsramp-collector-start setup --help
      opsramp-collector-start setup [command]
    Available Commands:
      init               Install Kubernetes on your host machine and configure high availability
      node               Kubernetes node options
      updatehostname     Update the host machine name
      -h, --help         help for setup
    Use "opsramp-collector-start setup [command] --help" for more information about a command.

  2. Update the hostname before installing k3s.
    Make sure each node has a unique hostname.

    opsramp-collector-start setup updatehostname {hostname}

  3. Run the following command to list the available flags for installing k3s:

    opsramp-collector-start setup init --help
    Available Flags:

      -E, --enable-ha                 Enable high availability (true/false)
      -L, --loadbalancer-ip           IP for the load balancer
      -R, --repository                Pull Helm charts and images from a custom repository (default "us-docker.pkg.dev")
      -a, --repo-user                 Repository username
      -s, --repo-password             Repository password
      -f, --read-repopass-from-file   Read the repository password from a file
  4. Install k3s:
    • If you want to pull Docker images and Helm charts from your own repository (public or private) instead of the OpsRamp repository, follow these steps on all available nodes:

      • Open the following YAML template and uncomment the “configs” section:
        vi /var/cgw/asserts_k3s/registries.yaml.template
      • Provide your repository details as follows, and ensure proper YAML indentation:
            - "https://us-docker.pkg.dev"
              username: "{user}"
              password: "{password}"
    • Install k3s and initialize the gateway on the first node:

      opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp}

    • To install k3s with a custom pod/service IP range, use the following command:

      opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp} --cluster-cidr <cluster-cidr-ip> --service-cidr <service-cidr-ip>
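When passing custom CIDRs, it is worth confirming beforehand that the pod (cluster) and service ranges do not overlap. A minimal sketch using Python's standard `ipaddress` module; the example ranges shown are the k3s defaults, so substitute your own:

```shell
# Sanity-check that the cluster (pod) and service CIDRs do not overlap.
# 10.42.0.0/16 and 10.43.0.0/16 are the k3s defaults; substitute your own.
CLUSTER_CIDR="10.42.0.0/16"
SERVICE_CIDR="10.43.0.0/16"

CIDR_CHECK=$(python3 -c "
import ipaddress
c = ipaddress.ip_network('${CLUSTER_CIDR}')
s = ipaddress.ip_network('${SERVICE_CIDR}')
print('overlap' if c.overlaps(s) else 'ok')
")
echo "CIDR check: ${CIDR_CHECK}"
```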

Load balancer format
The load-balancer address pool can be provided in the following IP-range formats:

  • A single IP adds one MetalLB IP (for example, 192.25.254.45).
  • A CIDR block adds a set of IPs from the given IP (for example, a /30 mask on 192.25.254.44 adds 4 IPs).
  • A start and end IP pair adds a custom range (for example, 192.25.254.44 - {endIp}).
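To see exactly which load-balancer IPs a given CIDR provides, a quick illustrative check (again using Python's standard `ipaddress` module; a /30 network always contains 4 addresses):

```shell
# Expand a CIDR into its concrete address range (illustrative).
RANGE_INFO=$(python3 -c "
import ipaddress
net = ipaddress.ip_network('192.25.254.44/30')
addrs = [str(a) for a in net]
print(len(addrs), 'IPs:', addrs[0], '-', addrs[-1])
")
echo "${RANGE_INFO}"
```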
  5. To add a new node to the cluster, use the following two commands.
    • Run the following command on the first node to generate the k3s token.
      opsramp-collector-start setup node token
    • To join the new VM to the existing cluster, run the following command on the new node.
      Here, {NodeIp} is the first node’s IP and {token} is the token generated in the previous step.
      opsramp-collector-start setup node add -u https://{NodeIp}:6443 -t {token}
  6. K3s is now installed on the new node.
  7. Repeat step 5 to add the 2nd and 3rd nodes to the cluster. This completes the setup of the three-node HA NextGen gateway.
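The join flow above can be sketched as a small script. The first-node IP and node hostnames below are hypothetical, and the token placeholder stands in for the value printed by the token command on the first node:

```shell
# Sketch: build the join command for each additional node.
FIRST_NODE_IP="192.25.254.41"        # hypothetical; use your first node's IP
K3S_URL="https://${FIRST_NODE_IP}:6443"

# The token comes from running, on the first node:
#   opsramp-collector-start setup node token
TOKEN="{token}"                      # placeholder; substitute the generated token

for NODE in node2 node3; do          # hypothetical hostnames of the new nodes
    echo "run on ${NODE}: opsramp-collector-start setup node add -u ${K3S_URL} -t ${TOKEN}"
done
```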

After K3s is installed, register the gateway with the OpsRamp Cloud.

Register the gateway

Refer to the gateway registration documentation to learn how to register the gateway with the OpsRamp Cloud.

Commands to check the status of the cluster

  1. Make sure all the nodes are in the Ready state.

    kubectl get nodes 
    Sample output:
    NAME    STATUS   ROLES                       AGE     VERSION
    nodea   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
    nodeb   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
    nodec   Ready    control-plane,etcd,master   4m5s    v1.23.5+k3s1
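This check can be scripted. The sketch below counts nodes that are not Ready, using the sample output above as stand-in input; in practice, pipe `kubectl get nodes --no-headers` in its place.

```shell
# Count nodes that are not in the Ready state (sketch).
# Replace the sample text with: kubectl get nodes --no-headers
SAMPLE='nodea   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
nodeb   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
nodec   Ready    control-plane,etcd,master   4m5s    v1.23.5+k3s1'

# Column 2 of the output is STATUS; anything other than "Ready" is counted.
NOT_READY=$(printf '%s\n' "${SAMPLE}" | awk '$2 != "Ready" {n++} END {print n+0}')
echo "Nodes not Ready: ${NOT_READY}"
```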

  2. Make sure Longhorn and MetalLB are deployed successfully.

    helm list -A
    Sample output:
    NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
    longhorn        longhorn-system 1               2023-01-11 06:17:37.412149576 +0000 UTC deployed        longhorn-1.0.0          v1.2.4
    metallb         kube-system     1               2023-01-11 06:17:34.252217507 +0000 UTC deployed        metallb-1.0.0           0.9.5

  3. Make sure all the pods are in the Running state.

    kubectl get pods -A
    Sample output:
    NAMESPACE         NAME                                        READY   STATUS    RESTARTS   AGE
    default           nextgen-gw-0                                4/4     Running   0          23h
    default           stan-0                                      2/2     Running   0          23h
    kube-system       coredns-d76bd69b-n8jhh                      1/1     Running   0          23h
    kube-system       metallb-controller-7954c9c84d-pm89k         1/1     Running   0          23h
    kube-system       metallb-speaker-j69tp                       1/1     Running   0          23h
    kube-system       metallb-speaker-mddqj                       1/1     Running   0          23h
    kube-system       metallb-speaker-n45g4                       1/1     Running   0          23h
    kube-system       metrics-server-7cd5fcb6b7-tvnps             1/1     Running   0          23h
    longhorn-system   csi-attacher-76c9f797d7-2jg5w               1/1     Running   0          23h
    longhorn-system   csi-attacher-76c9f797d7-qhs85               1/1     Running   0          23h
    longhorn-system   csi-attacher-76c9f797d7-qn9pr               1/1     Running   0          23h
    longhorn-system   csi-provisioner-b749dbdf9-chjs9             1/1     Running   0          23h
    longhorn-system   csi-provisioner-b749dbdf9-hbwx2             1/1     Running   0          23h
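As with the node check, this can be scripted. The sketch below counts pods that are not Running, using a few lines of the sample output above as stand-in input; in practice, pipe `kubectl get pods -A --no-headers` in its place.

```shell
# Count pods that are not in the Running state (sketch).
# Replace the sample text with: kubectl get pods -A --no-headers
POD_SAMPLE='default           nextgen-gw-0                     4/4   Running   0   23h
default           stan-0                           2/2   Running   0   23h
kube-system       coredns-d76bd69b-n8jhh           1/1   Running   0   23h'

# Column 4 of the -A output is STATUS; anything other than "Running" is counted.
NOT_RUNNING=$(printf '%s\n' "${POD_SAMPLE}" | awk '$4 != "Running" {n++} END {print n+0}')
echo "Pods not Running: ${NOT_RUNNING}"
```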