Prerequisites:
HAProxy Configurations:
SSH to the nodes that will function as the load balancers and run the following command to install HAProxy.
apt update && apt install -y haproxy
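To confirm the installation succeeded, you can print the installed version:

haproxy -v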
Edit haproxy.cfg to connect it to the master nodes: replace <kube-masterX-ip>, <kube-masterY-ip>, and <kube-masterZ-ip> with the IP addresses of your master nodes, and add an extra server entry for each additional master (an example of such an entry follows the config below):
vim /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m

listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats show-node
    stats auth admin:adminpass

frontend kubernetes-api
    bind *:6443
    default_backend kubernetes-masters

backend kubernetes-masters
    balance roundrobin
    option tcp-check
    server k8s-master-0 <kube-masterX-ip>:6443 check
    server k8s-master-1 <kube-masterY-ip>:6443 check
    server k8s-master-2 <kube-masterZ-ip>:6443 check
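If you run more than three masters, each additional one gets its own server line in the kubernetes-masters backend. A minimal sketch, where <kube-masterW-ip> is a hypothetical placeholder for the extra node's address:

server k8s-master-3 <kube-masterW-ip>:6443 check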
Verify the HAProxy configuration and restart HAProxy:
haproxy -f /etc/haproxy/haproxy.cfg -c
{
    systemctl daemon-reload
    systemctl enable haproxy
    systemctl restart haproxy
    systemctl status haproxy
}
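As a quick sanity check (assuming the stats credentials from the config above and a shell on the load balancer itself), confirm that HAProxy is listening on the API and stats ports and that the stats page answers:

# curl may need to be installed first: apt install -y curl
ss -lntp | grep -E ':6443|:8404'
curl -u admin:adminpass http://localhost:8404/stats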
On both nodes (master and backup), run the following command to install Keepalived:
apt update && apt install -y keepalived libipset13
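The HAProxy health-check script used below relies on killall, which on Debian/Ubuntu is shipped in the psmisc package; if it is not already present, install it on both nodes:

apt install -y psmisc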
Keepalived Configurations:
On the Master/Primary node:
vim /etc/keepalived/keepalived.conf
# Define the script used to check if haproxy is still working
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

# Configuration for the virtual interface
vrrp_instance LB_VIP {
    interface ens6
    state MASTER                   # set to BACKUP on the peer machine
    priority 301                   # set to 300 on the peer machine
    virtual_router_id 51

    authentication {
        auth_type PASS
        auth_pass UGFzcwo=         # Password for accessing vrrpd. Must be the same on all devices
    }

    unicast_src_ip <lb-master-ip>  # IP address of the master-lb
    unicast_peer {
        <lb-backup-ip>             # IP address of the backup-lb
    }

    # The virtual IP address shared between the two load balancers
    virtual_ipaddress {
        <lb-vip>                   # VIP
    }

    # Use the defined script to decide whether to initiate a failover
    track_script {
        chk_haproxy
    }
}
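To confirm the check command behaves as expected on this node (exit code 0 while HAProxy is running, non-zero after it stops), you can run it by hand:

/usr/bin/killall -0 haproxy; echo $?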
On the Backup/Secondary node:
vim /etc/keepalived/keepalived.conf
# Define the script used to check if haproxy is still working
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

# Configuration for the virtual interface
vrrp_instance LB_VIP {
    interface ens6
    state BACKUP                   # set to MASTER on the peer machine
    priority 300                   # set to 301 on the peer machine
    virtual_router_id 51

    authentication {
        auth_type PASS
        auth_pass UGFzcwo=         # Password for accessing vrrpd. Must be the same on all devices
    }

    unicast_src_ip <lb-backup-ip>  # IP address of the backup-lb
    unicast_peer {
        <lb-master-ip>             # IP address of the master-lb
    }

    # The virtual IP address shared between the two load balancers
    virtual_ipaddress {
        <lb-vip>                   # VIP
    }

    # Use the defined script to decide whether to initiate a failover
    track_script {
        chk_haproxy
    }
}
Enable and restart the keepalived service on both nodes:
{
    systemctl enable keepalived
    systemctl restart keepalived
    systemctl status keepalived
}
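To verify the failover end to end (assuming the <lb-vip> and ens6 values used in the configs above), check that the VIP is attached on the primary, that it answers on the API port once the control plane is up, and that the address moves to the backup when HAProxy is stopped on the primary:

# On the primary load balancer: the VIP should be attached to ens6
ip -4 addr show dev ens6

# From any machine that can reach the VIP, once the masters are serving on 6443
curl -k https://<lb-vip>:6443/version

# Simulate a failure on the primary, then re-run the ip command on the backup:
# the VIP should appear there within a few seconds
systemctl stop haproxy
# Afterwards, start HAProxy again on the primary to restore it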
Reference Links: