A LoadBalancer service is the standard way to expose a service running inside a Kubernetes cluster to external clients. Setting the service type to LoadBalancer in the service definition is all that is required to request an external address. Kubernetes supports this service type natively, making it the most direct way to route external traffic to a service within the cluster.
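
For example, a minimal Service manifest of this type might look like the following sketch; the service name, labels, and ports are illustrative placeholders rather than the actual OpsRamp resource names.

  apiVersion: v1
  kind: Service
  metadata:
    name: nextgen-gateway          # illustrative name
  spec:
    type: LoadBalancer             # asks Kubernetes for an external load balancer address
    selector:
      app: nextgen-gateway         # illustrative label selector for the gateway pods
    ports:
      - name: https
        port: 443                  # port exposed on the load balancer
        targetPort: 443            # container port traffic is forwarded to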

There are three ways to set up the Kubernetes cluster:

  • Single-Node Deployments
  • Multi-Node Deployments (High Availability - HA)
  • Customer-Owned Kubernetes Clusters (Cloud-Native Deployment / Helm-Based Deployment)

Single-Node Deployment

Single-node deployment is based on an ISO/OVA file provided by OpsRamp. In this setup, the default K3S load balancer known as ServiceLB is used.

To enable the ServiceLB option during the K3S cluster setup, use the following command:

opsramp-collector-start setup init

How ServiceLB Works

  • Monitoring Services: The ServiceLB controller monitors Kubernetes services with spec.type set to LoadBalancer.
  • DaemonSet Creation: For each LoadBalancer service, a DaemonSet is created in the kube-system namespace. This DaemonSet creates pods with a svc- prefix on each node.
  • Traffic Forwarding: These pods use iptables to forward traffic from the pod’s NodePort to the service’s ClusterIP address and port.
  • External vs. Internal IP: If the ServiceLB pod runs on a node with an external IP, that IP is populated in the service’s status.loadBalancer.ingress address list. Otherwise, the node’s internal IP is used. (The example commands after this list show how to check the published address.)
  • Multiple Services: Separate DaemonSets are created for each LoadBalancer service, and multiple services can be exposed on the same node as long as different ports are used.
  • Port Conflicts: If a LoadBalancer service tries to listen on a port that is already in use (e.g., port 80), ServiceLB looks for another node where that port is still free. If no such node is available, the load balancer remains in a Pending state.
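
As a quick check on a single-node K3S cluster, the commands below show how this behavior can be verified; the service name nextgen-gateway is an illustrative placeholder, and on recent K3S releases the ServiceLB pods are typically named with an svclb- prefix.

  # Show the service and the external address ServiceLB published for it
  kubectl get svc nextgen-gateway -o wide

  # List the pods created by the ServiceLB DaemonSet in kube-system
  kubectl get pods -n kube-system | grep svclb

  # Print the address recorded under status.loadBalancer.ingress
  kubectl get svc nextgen-gateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

If the EXTERNAL-IP column shows <pending>, check for the port conflict described above.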

Multi-Node Deployment

Multi-node deployment, also built on the ISO/OVA file provided by OpsRamp, uses MetalLB for load balancing across the K3S cluster.

How MetalLB Works

  • MetalLB Integration: MetalLB integrates with your Kubernetes cluster to provide a network load balancer. It enables the creation of LoadBalancer services in clusters that are not hosted on a cloud provider.
  • Configuration: Follow the OpsRamp documentation to configure MetalLB in your multi-node cluster and update or add the MetalLB IP address in the NextGen Gateway.

MetalLB is well suited to HA setups because it distributes traffic across multiple nodes, improving availability and reliability.
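
As a rough sketch, a Layer 2 configuration on MetalLB 0.13 or later might look like the following; the pool name and address range are placeholders, and the OpsRamp documentation remains the authoritative reference for the exact values to use.

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: gateway-pool                    # placeholder pool name
    namespace: metallb-system
  spec:
    addresses:
      - 192.168.1.240-192.168.1.250       # placeholder range MetalLB may assign from
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: gateway-l2                      # placeholder advertisement name
    namespace: metallb-system
  spec:
    ipAddressPools:
      - gateway-pool                      # advertise addresses from the pool above

MetalLB then assigns an address from this pool to the LoadBalancer service, and that address is what you update or add as the MetalLB IP in the NextGen Gateway.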

Customer-Owned Kubernetes Cluster

In this scenario, the customer manages their own Kubernetes cluster, which could be hosted with a cloud provider (e.g., EKS, GKE, AKS) or run on bare metal (e.g., K3S, MicroK8s, kubeadm).

Load Balancing Options

  • Cloud-Provider Kubernetes Clusters:
    • Automatic Load Balancing: If your Kubernetes cluster is provided by a cloud provider, the provider typically provisions and manages the load balancer for you; you only need to set the service type to LoadBalancer (you can confirm the assigned address with the command shown after this list).
  • Bare-Metal Kubernetes Clusters:
    • Choose Your Load Balancer: On bare-metal setups, you will need to choose and configure a suitable load balancer service based on whether your cluster is single-node or multi-node.
    • Options: Consider using MetalLB, ServiceLB, or other third-party load balancer solutions tailored to your Kubernetes setup.
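
In either case, a quick way to confirm that a load balancer was actually provisioned is to check the service's external address; the service name and namespace below are placeholders.

  # EXTERNAL-IP shows the address or hostname assigned by the cloud provider
  # (or by MetalLB/ServiceLB on bare metal); <pending> means no load balancer
  # has been provisioned yet.
  kubectl get svc nextgen-gateway -n <namespace> -o wide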