Table of Contents
nginx load balancing
Introduction to nginx load balancing
Reverse proxy and load balancing
Configuring nginx load balancing
Keepalived high availability for the nginx load balancer
Modify the Web servers' default pages
Enable nginx load balancing and reverse proxying
Install Keepalived
Configure Keepalived
Write scripts to monitor the state of Keepalived and nginx
Add the monitoring scripts to the keepalived configuration
Introduction to nginx load balancing
Load balancing is one of nginx's classic use cases. Under heavy traffic, it spreads incoming requests across multiple servers, so the load that a single server would otherwise carry is shared by several machines, raising the system's overall throughput. It also means that if one of those servers goes down, the others keep serving requests, improving the system's scalability and reliability.
The figure below illustrates load balancing: a user's request first reaches the load-balancing server, which then forwards it to one of the web servers according to the configured rules.
Reverse proxy and load balancing
nginx is commonly used as a reverse proxy in front of backend servers, which makes it easy to implement static/dynamic separation as well as load balancing, greatly increasing a site's serving capacity.
Static/dynamic separation in nginx simply means that, while reverse-proxying, requests for static resources are read directly from the path nginx publishes, with no round trip to the backend servers.
Note that in this setup the static content on the frontend must be kept consistent with the backend application; you can use Rsync for server-side synchronization, or shared storage such as NFS or the MFS distributed filesystem.
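As a rough illustration, a minimal sketch of such a split is shown below; the document root, the file-extension list and the backend address are illustrative assumptions, not part of the original setup:

server {
    listen 80;

    # Static assets are answered directly from nginx's own document root
    # (hypothetical path; keep it in sync with the backend, e.g. via Rsync or NFS).
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root /usr/local/nginx/html;
        expires 7d;
    }

    # Everything else is reverse-proxied to the backend application server
    # (hypothetical upstream address).
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}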
The Http Proxy module offers many features; the most commonly used are proxy_pass and proxy_cache.
To use proxy_cache you also need the third-party ngx_cache_purge module, which clears the cache for a specified URL. It has to be compiled in when nginx is built, e.g.:
./configure --add-module=../ngx_cache_purge-1.0 ......
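With the module compiled in, caching plus URL purging can be wired up roughly as follows. This is only a sketch: the cache path, the zone name mycache, the sizes and the backend address are all assumed values.

http {
    # Cache storage: on-disk path, directory levels, shared-memory zone,
    # maximum disk usage and inactivity timeout (all values are examples).
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;    # hypothetical backend
            proxy_cache mycache;
            proxy_cache_valid 200 302 10m;       # cache successful answers for 10 minutes
            proxy_cache_valid 404 1m;
        }

        # Handled by ngx_cache_purge: GET /purge/<uri> evicts <uri> from the cache.
        location ~ /purge(/.*) {
            proxy_cache_purge mycache $host$1$is_args$args;
        }
    }
}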
nginx implements simple load balancing through the upstream module; the upstream block must be defined inside the http block.
Inside the upstream block you define a list of servers. The default scheduling is round robin; if you need requests from the same visitor to always be handled by the same backend server, enable ip_hash, e.g.:
upstream idfsoft.com {
    ip_hash;
    server 127.0.0.1:9080 weight=5;
    server 127.0.0.1:8080 weight=5;
    server 127.0.0.1:1111;
}
Note: under the hood this is still round-robin-style scheduling, and because a client's IP address can change (dynamic IPs, proxies, VPNs and the like), ip_hash cannot fully guarantee that the same client is always handled by the same server.
With the upstream defined, add the following inside the server block:
server {
    location / {
        proxy_pass http://idfsoft.com;
    }
}
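Besides the default round robin and ip_hash, the upstream block accepts several other commonly used options. The sketch below (server addresses are placeholders) shows weighted servers, least-connection scheduling, basic failure detection and a hot spare:

upstream backend {
    least_conn;                                       # pick the backend with the fewest active connections
    server 10.0.0.1:80 weight=3;                      # receives roughly three times the traffic
    server 10.0.0.2:80 max_fails=2 fail_timeout=10s;  # taken out for 10s after 2 consecutive failures
    server 10.0.0.3:80 backup;                        # only used when the servers above are down
}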
Configuring nginx load balancing
Environment
System | IP | Role | Service |
---|---|---|---|
centos8 | 192.168.222.250 | nginx load balancer | nginx |
centos8 | 192.168.222.137 | Web1 server | apache |
centos8 | 192.168.222.138 | Web2 server | nginx |
The nginx load balancer has nginx installed from source; the other two Web servers install apache and nginx respectively via yum.
For a detailed walkthrough of building nginx from source, see my earlier nginx post.
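For reference, the build boils down to roughly the following. This is a minimal sketch; the version number, download URL and configure options are assumptions, and the post mentioned above has the full details.

# Build dependencies (CentOS 8)
dnf -y install gcc make pcre-devel zlib-devel openssl-devel wget

# Download, unpack, configure, compile and install (version is an example)
wget http://nginx.org/download/nginx-1.22.0.tar.gz
tar xf nginx-1.22.0.tar.gz
cd nginx-1.22.0
./configure --prefix=/usr/local/nginx
make && make install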
Modify the Web servers' default pages
Web1:
[root@Web1 ~]# yum -y install httpd    // install the service
[root@Web1 ~]# systemctl stop firewalld.service    // stop the firewall
[root@Web1 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@Web1 ~]# setenforce 0
[root@Web1 ~]# systemctl disable --now firewalld.service
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web1 ~]# cd /var/www/html/
[root@Web1 html]# ls
[root@Web1 html]# echo "apache" > index.html    // write the site's test page
[root@Web1 html]# cat index.html
apache
[root@Web1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@Web1 html]# ss -antl
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port  Process
LISTEN  0       128           0.0.0.0:22          0.0.0.0:*
LISTEN  0       128              [::]:22             [::]:*
LISTEN  0       128                 *:80                *:*
Browsing to Web1 now shows the apache test page.
Web2:
[root@Web2 ~]# yum -y install nginx    // install the service
[root@Web2 ~]# systemctl stop firewalld.service    // stop the firewall
[root@Web2 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@Web2 ~]# setenforce 0
[root@Web2 ~]# systemctl disable --now firewalld.service
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web2 ~]# cd /usr/share/nginx/html/
[root@Web2 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@Web2 html]# echo "nginx" > index.html    // write the site's test page
[root@Web2 html]# cat index.html
nginx
[root@Web2 html]# systemctl enable --now nginx.service
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@Web2 html]# ss -antl
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port  Process
LISTEN  0       128           0.0.0.0:111         0.0.0.0:*
LISTEN  0       128           0.0.0.0:80          0.0.0.0:*
LISTEN  0       32      192.168.122.1:53          0.0.0.0:*
LISTEN  0       128           0.0.0.0:22          0.0.0.0:*
LISTEN  0       128              [::]:111            [::]:*
LISTEN  0       128              [::]:80             [::]:*
LISTEN  0       128              [::]:22             [::]:*
Browsing to Web2 shows the nginx test page.
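Without a browser, the same check can be done with curl; each backend should return the page just written:

[root@nginx ~]# curl 192.168.222.137
apache
[root@nginx ~]# curl 192.168.222.138
nginx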
Enable nginx load balancing and reverse proxying
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
...
upstream webserver {    // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...
location / {    // modify inside the server block
    root html;
    proxy_pass http://webserver;
}
[root@nginx ~]# systemctl reload nginx.service    // reload the configuration
Test:
Enter the nginx load balancer's IP address in a browser; with the default round robin, refreshing should alternate between the two backend pages.
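Equivalently, from the command line (a quick sketch; with the default round robin the two pages should alternate on successive requests):

[root@nginx ~]# for i in 1 2 3 4; do curl -s 192.168.222.250; done
apache
nginx
apache
nginx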
Edit the nginx configuration on the load balancer:
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
upstream webserver {    // modify inside the http block
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service    // reload the configuration
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
// apache answers three times for every one nginx answer: weight=3 makes the scheduler send three consecutive requests to Web1 before moving on to Web2 for the next round. Assigning lower weights to older or less powerful servers in a cluster is a simple way to ease their load.
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
upstream webserver {    // modify inside the http block
    ip_hash;
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service    // reload the configuration
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
// every request now returns nginx: with ip_hash, once a client is mapped to a server, that server keeps answering the client. As noted earlier, the underlying mechanism is still round-robin-like, so it cannot absolutely guarantee that one client is always served by the same backend.
Keepalived high availability for the nginx load balancer
Lab environment
System | Role | Service | IP |
---|---|---|---|
centos8 | nginx load balancer, master | nginx, keepalived | 192.168.222.250 |
centos8 | nginx load balancer, backup | nginx, keepalived | 192.168.222.139 |
centos8 | Web1 server | apache | 192.168.222.137 |
centos8 | Web2 server | nginx | 192.168.222.138 |
Again, see my earlier nginx post for the detailed source install.
VIP: 192.168.222.133
Modify the Web servers' default pages
Web1 and Web2 are the same machines as in the previous experiment and are configured in exactly the same way (default pages reading "apache" and "nginx" respectively), so the steps are not repeated here.
Enable nginx load balancing and reverse proxying
On the Keepalived master node, nginx must be enabled to start at boot.
master:
[root@master ~]# systemctl status nginx.service
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 21:27:54 CST; 1h 1min ago
  Process: 46768 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 46769 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─46769 nginx: master process /usr/local/nginx/sbin/nginx
           └─46770 nginx: worker process

Oct 18 21:27:54 nginx systemd[1]: Starting nginx server daemon...
Oct 18 21:27:54 nginx systemd[1]: Started nginx server daemon.
[root@master ~]# vim /usr/local/nginx/conf/nginx.conf
...
upstream webserver {    // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...
location / {    // modify inside the server block
    root html;
    proxy_pass http://webserver;
}
[root@master ~]# systemctl reload nginx.service    // reload the configuration
Test:
Enter the nginx load balancer's IP address in a browser.
backup:
On the Keepalived backup node, nginx is not enabled at boot: if it were running all the time, later requests to the VIP might not get through correctly. Start it manually whenever you need to test the backup.
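A quick way to confirm the intended difference between the two nodes (matching the "enabled"/"disabled" states visible in the status output above and below):

[root@master ~]# systemctl is-enabled nginx.service
enabled
[root@backup ~]# systemctl is-enabled nginx.service
disabled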
[root@backup ~]# systemctl status nginx.service
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 22:25:31 CST; 1s ago
  Process: 73641 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 73642 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.7M
   CGroup: /system.slice/nginx.service
           ├─73642 nginx: master process /usr/local/nginx/sbin/nginx
           └─73643 nginx: worker process

Oct 18 22:25:31 backup systemd[1]: Starting nginx server daemon...
Oct 18 22:25:31 backup systemd[1]: Started nginx server daemon.
[root@backup ~]# vim /usr/local/nginx/conf/nginx.conf
...
upstream webserver {    // add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...
location / {    // modify inside the server block
    root html;
    proxy_pass http://webserver;
}
[root@backup ~]# systemctl reload nginx.service    // reload the configuration
Access:
Enter the nginx load balancer's IP address in a browser.
Install Keepalived
master:
[root@master ~]# dnf list all | grep keepalived    // check whether the package is available
Failed to set locale, defaulting to C.UTF-8
keepalived.x86_64    2.1.5-6.el8    AppStream
[root@master ~]# dnf -y install keepalived
backup:
[root@backup ~]# dnf list all | grep keepalived    // check whether the package is available
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
keepalived.x86_64    2.1.5-6.el8    AppStream
[root@backup ~]# dnf -y install keepalived
Configure Keepalived
master:
[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}    // back up the stock configuration file
[root@master keepalived]# ls
keepalived.conf-bak
[root@master keepalived]# vim keepalived.conf    // write a new configuration file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb01
}

vrrp_instance VI_1 {    // this block must be consistent between master and backup
    state BACKUP
    interface ens33    // network interface
    virtual_router_id 51
    priority 100    // higher than the backup node's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu    // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {    // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.222.139 80 {    // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl enable --now keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
backup:
[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak}    // back up the stock configuration file
[root@backup keepalived]# ls
keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf    // write a new configuration file
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb02
}

vrrp_instance VI_1 {    // this block must be consistent between master and backup
    state BACKUP
    interface ens33    // network interface
    virtual_router_id 51
    priority 90    // lower than the master node's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu    // password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {    // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.222.139 80 {    // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived.service
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup keepalived]# systemctl start nginx    // start nginx now so the backup can be tested
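Before (re)starting the service it can help to syntax-check the file. Recent keepalived releases, including the 2.1.x packaged here, support a config-test mode; treat the exact flag as an assumption and verify with keepalived --help on your build:

keepalived -t -f /etc/keepalived/keepalived.conf    # exit status 0 means the configuration parsed cleanly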
Check the VIP
master:
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
backup:
[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link
       valid_lft forever preferred_lft forever
// The VIP sits on the master host. In the Keepalived configuration we gave the master a higher priority than the backup, so this is exactly what we expect.
Access:
master:
[root@master keepalived]# curl 192.168.222.133
apache
[root@master keepalived]# curl 192.168.222.133
nginx
Now stop nginx and keepalived on the master:
[root@master keepalived]# systemctl stop nginx.service
[root@master keepalived]# systemctl stop keepalived.service
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
// the master no longer holds the VIP
backup:
[root@backup keepalived]# systemctl enable --now keepalived
[root@backup keepalived]# systemctl start nginx.service
[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link
       valid_lft forever preferred_lft forever
// the VIP now appears on the backup: the backup node has taken over as master
[root@backup keepalived]# curl 192.168.222.133
apache
[root@backup keepalived]# curl 192.168.222.133
nginx
Access:
As you can see, even with one of the nginx load balancers down, normal access is unaffected; this is exactly what the highly available nginx load-balancer configuration is for.
Restart nginx and keepalived on the master:
[root@master keepalived]# systemctl enable --now keepalived
[root@master keepalived]# systemctl enable --now nginx
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
// the VIP is back on the master node
Write scripts to monitor the state of Keepalived and nginx
master:
[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
case "$1" in
  master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
  ;;
  backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
  ;;
  *)
        echo "Usage:$0 master|backup VIP"
  ;;
esac
[root@master scripts]# chmod +x notify.sh
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
-rwxr-xr-x. 1 root root 399 Oct 19 00:35 notify.sh
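check_nginx.sh can be exercised by hand before keepalived ever calls it. A quick sanity check, with the caveat that the script really does stop keepalived when nginx is down:

[root@master scripts]# systemctl stop nginx.service
[root@master scripts]# /scripts/check_nginx.sh
[root@master scripts]# systemctl is-active keepalived.service
inactive
[root@master scripts]# systemctl start nginx.service keepalived.service    # restore both services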
backup:
Create the directory that will hold the script ahead of time:
[root@backup keepalived]# cd
[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts/
Copy the script from the master node into the directory just created on the backup node:
[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password:
notify.sh                                      100%  399   216.0KB/s   00:00
[root@backup scripts]# ls
notify.sh
[root@backup scripts]# cat notify.sh
#!/bin/bash
case "$1" in
  master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
  ;;
  backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
  ;;
  *)
        echo "Usage:$0 master|backup VIP"
  ;;
esac
Add the monitoring scripts to the keepalived configuration
master:
[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb01
}

vrrp_script nginx_check {                      // added
    script "/scripts/check_nginx.sh"           // added
    interval 1                                 // added
    weight -20                                 // added
}                                              // added

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {                             // added
        nginx_check                            // added
    }                                          // added
    notify_master "/scripts/notify.sh master"  // added
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
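One detail worth spelling out: weight -20 means that while check_nginx.sh reports failure, the master's effective VRRP priority drops from 100 to 100 - 20 = 80, below the backup's 90, so the backup wins the election even if keepalived on the master keeps running. In this setup the script goes further and stops keepalived outright when nginx is gone, so either mechanism on its own is enough to move the VIP, and notify_master restarts nginx whenever the node regains the MASTER role.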
backup:
The backup has no need to check nginx's health itself: when it is promoted to MASTER it starts nginx, and when it drops back to BACKUP it stops nginx.
[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"    // added
    notify_backup "/scripts/notify.sh backup"    // added
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service
Test
Check the state under normal operation:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link
       valid_lft forever preferred_lft forever
[root@master ~]# curl 192.168.222.133
apache
[root@master ~]# curl 192.168.222.133
nginx
// the VIP is on the master node
Stop nginx on the master:
[root@master ~]# systemctl stop nginx.service
[root@master ~]# ss -antl
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port  Process
LISTEN  0       128           0.0.0.0:22          0.0.0.0:*
LISTEN  0       128              [::]:22             [::]:*
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
// the VIP is gone: check_nginx.sh saw nginx down and stopped keepalived
backup:
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
apache
[root@backup ~]# curl 192.168.222.133
nginx
// the backup node has taken over as the master
Bring nginx and keepalived on the master back up:
[root@master ~]# systemctl restart keepalived.service
[root@master ~]# systemctl restart nginx.service
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link
       valid_lft forever preferred_lft forever
[root@master ~]# curl 192.168.222.133
apache
[root@master ~]# curl 192.168.222.133
nginx
// the VIP has returned to the master