

An introduction to nginx load balancing configuration

馬哥Linux運維 · Source: 馬哥Linux運維 · 2024-11-10 13:39

Contents

nginx load balancing

Introduction to nginx load balancing

Reverse proxy and load balancing

nginx load balancing configuration

Keepalived high availability for the nginx load balancer

Modify the Web servers' default pages

Enable nginx load balancing and reverse proxy

Install Keepalived

Configure Keepalived

Write scripts to monitor Keepalived and nginx status

Add the monitoring scripts to the Keepalived configuration

Introduction to nginx load balancing

Load balancing is one of nginx's main use cases. When traffic is heavy, load balancing spreads incoming requests across multiple servers, so work that would otherwise fall on a single machine is shared by several, which raises overall throughput. It also means that if one server goes down, the others keep serving requests, improving the system's scalability and reliability.

The figure below illustrates this: a user's request first reaches the load balancer, which then forwards it to one of the web servers according to the configured rules.
[Figure: load balancing example diagram]

Reverse proxy and load balancing

nginx is commonly used as a reverse proxy in front of backend servers, which makes it easy to implement dynamic/static separation and load balancing and greatly improves overall serving capacity.

Dynamic/static separation with nginx simply means that, while reverse proxying, requests for static resources are served directly from a path published by nginx rather than being fetched from the backend servers.

Note that in this setup the static content published by nginx must be kept consistent with the backend application; this can be done with Rsync for server-side synchronization, or with shared storage such as NFS or MFS. A sketch of such a split follows.
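As a rough illustration of the split (a minimal sketch; the static path /data/www/static and the file extensions are assumptions, and idfsoft.com refers to an upstream like the one defined later in this article):

server {
    listen 80;

    # static assets are answered directly from a path published by nginx
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root /data/www/static;   # assumed path, kept in sync via rsync/NFS/MFS
        expires 30d;
    }

    # everything else is handed to the backend application servers
    location / {
        proxy_pass http://idfsoft.com;
    }
}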

The Http Proxy module provides many features; the most commonly used are proxy_pass and proxy_cache.

To use proxy_cache together with the ability to purge a specific URL from the cache, the third-party ngx_cache_purge module has to be compiled in when nginx is built, for example:

./configure --add-module=../ngx_cache_purge-1.0 ......
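Once built with the module, a hedged sketch of how proxy_cache and the purge location typically fit together (the zone name, cache path and sizes here are assumptions rather than values from the original setup):

# in the http context
proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=static_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_pass http://idfsoft.com;
        proxy_cache static_cache;
        proxy_cache_key $uri$is_args$args;
        proxy_cache_valid 200 302 10m;
    }

    # provided by the third-party ngx_cache_purge module:
    # requesting /purge/some/path removes the cached copy of /some/path
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        proxy_cache_purge static_cache $1$is_args$args;
    }
}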

nginx implements simple load balancing through the upstream module; upstream blocks must be defined inside the http context.

Inside the upstream block you define a list of servers. The default scheduling method is round robin. If you need requests from the same visitor to always be handled by the same backend server, you can enable ip_hash, for example:

upstream idfsoft.com {
  ip_hash;
  server 127.0.0.1:9080 weight=5;
  server 127.0.0.1:8080 weight=5;
  server 127.0.0.1:1111;
}

Note: this method is still essentially round robin under the hood, and because a client's IP can keep changing (dynamic IPs, proxies, VPNs and so on), ip_hash cannot fully guarantee that the same client is always handled by the same backend server.
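If session stickiness matters and client IPs are too unstable for ip_hash, one alternative in open-source nginx is the hash directive (available since 1.7.2), keyed on something the client keeps constant such as a session cookie. A sketch, with the cookie name being an assumption:

upstream idfsoft.com {
    # hash on an application session cookie instead of the client IP;
    # "consistent" enables consistent hashing, so adding or removing a
    # server remaps as few clients as possible
    hash $cookie_jsessionid consistent;
    server 127.0.0.1:9080;
    server 127.0.0.1:8080;
}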

Once the upstream is defined, add the following inside the server block:

server {
  location / {
    proxy_pass http://idfsoft.com;
  }
}
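In practice the proxied location usually also forwards the original host and client address, so the backend servers can see and log the real client. A minimal hedged sketch of the commonly used headers:

server {
    location / {
        proxy_pass http://idfsoft.com;
        # pass the original host and client address through to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}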

nginx load balancing configuration

Environment

System     IP               Role                 Service
centos8    192.168.222.250  nginx load balancer  nginx
centos8    192.168.222.137  Web1 server          apache
centos8    192.168.222.138  Web2 server          nginx

The nginx load balancer has nginx installed from source, while the two Web servers install nginx and apache respectively via yum.

For a detailed walkthrough of installing nginx from source, see my separate nginx blog post; a condensed sketch is given below.
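For completeness, a condensed source-install sketch that matches the paths used in this article (/usr/local/nginx and a systemd unit named nginx.service); the nginx version and the dependency list are assumptions:

dnf -y install gcc make pcre-devel zlib-devel openssl-devel wget
wget http://nginx.org/download/nginx-1.22.0.tar.gz
tar xf nginx-1.22.0.tar.gz && cd nginx-1.22.0
./configure --prefix=/usr/local/nginx
make && make install

# a matching systemd unit so that systemctl start/reload nginx.service works
cat > /usr/lib/systemd/system/nginx.service <<'EOF'
[Unit]
Description=nginx server daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload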

Modify the Web servers' default pages
Web1:

[root@Web1 ~]# yum -y install httpd   //install the service
[root@Web1 ~]# systemctl stop firewalld.service  //stop the firewall
[root@Web1 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web1 ~]# setenforce 0
[root@Web1 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web1 ~]# cd /var/www/html/
[root@Web1 html]# ls
[root@Web1 html]# echo "apache" > index.html  //write the test page content
[root@Web1 html]# cat index.html 
apache
[root@Web1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@Web1 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                         *:80                        *:*                    

Access:
[screenshot: the browser shows the "apache" page]

Web2:

[root@Web2 ~]# yum -y install nginx  //install the service
[root@Web2 ~]# systemctl stop firewalld.service //stop the firewall
[root@Web2 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web2 ~]# setenforce 0
[root@Web2 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web2 ~]# cd /usr/share/nginx/html/
[root@Web2 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@Web2 html]# echo "nginx" > index.html  //write the test page content
[root@Web2 html]# cat index.html 
nginx
[root@Web2 html]# systemctl enable --now nginx.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@Web2 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

Access:
[screenshot: the browser shows the "nginx" page]

Enable nginx load balancing and reverse proxy

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              //add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               //modify inside the server block
            root   html;
             proxy_pass http://webserver;
        }

[root@nginx ~]# systemctl reload nginx.service 
//reload the configuration

Test:
Enter the nginx load balancer's IP address in a browser.
[screenshots: repeated requests return the apache and nginx pages in turn]
Next, edit the nginx configuration on the load balancer to weight the backends:

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {      //modify inside the http block
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
//reload the configuration
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
//Three requests reach apache for every one that reaches nginx: with weight=3, weighted round robin sends three consecutive requests to Web1 before moving on to Web2. Weights are useful when the cluster contains older or lower-spec servers whose share of the traffic you want to reduce.
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {    //modify inside the http block
     ip_hash; 
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
//reload the configuration
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
//Every request now returns nginx: with ip_hash, once this client has been mapped to a backend, that backend keeps answering its requests, which is why the same server responds every time. As noted earlier, though, this is still hashing on the client IP, so it cannot absolutely guarantee that one client is always served by the same backend.
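To quantify the distribution rather than eyeballing individual curl calls, a simple loop works (a sketch; 192.168.222.250 is the load balancer address from the environment table). With weight=3 the counts should come out roughly 3:1 in favour of apache, and with ip_hash all requests from this client should land on a single backend:

for i in $(seq 1 100); do curl -s 192.168.222.250; done | sort | uniq -c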

Keepalived high availability for the nginx load balancer

Experiment environment

System     Role                          Service             IP
centos8    nginx load balancer, master   nginx, keepalived   192.168.222.250
centos8    nginx load balancer, backup   nginx, keepalived   192.168.222.139
centos8    Web1 server                   apache              192.168.222.137
centos8    Web2 server                   nginx               192.168.222.138

For a detailed walkthrough of installing nginx from source, see my separate nginx blog post.
VIP: 192.168.222.133

Modify the Web servers' default pages

Web1 and Web2 are set up exactly as in the previous section: Web1 serves "apache" via httpd and Web2 serves "nginx" via nginx, with firewalld stopped and SELinux disabled on both, so those steps are not repeated here.

Enable nginx load balancing and reverse proxy

On the Keepalived master node, nginx must be enabled to start at boot.
master:

[root@master ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 21:27:54 CST; 1h 1min ago
  Process: 46768 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 46769 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─46769 nginx: master process /usr/local/nginx/sbin/nginx
           └─46770 nginx: worker process

Oct 18 21:27:54 nginx systemd[1]: Starting nginx server daemon...
Oct 18 21:27:54 nginx systemd[1]: Started nginx server daemon.
[root@master ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              //add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               //modify inside the server block
            root   html;
            proxy_pass http://webserver;
        }

[root@master ~]# systemctl reload nginx.service 
//reload the configuration

Test:
Enter the nginx load balancer's IP address in a browser.
[screenshots: repeated requests return the apache and nginx pages in turn]

backup:
On the Keepalived backup node, nginx is not enabled to start at boot; if it were always running, later access to the VIP might fail, so start it manually only when it is needed for testing.

[root@backup ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 22:25:31 CST; 1s ago
  Process: 73641 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 73642 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.7M
   CGroup: /system.slice/nginx.service
           ├─73642 nginx: master process /usr/local/nginx/sbin/nginx
           └─73643 nginx: worker process

Oct 18 22:25:31 backup systemd[1]: Starting nginx server daemon...
Oct 18 22:25:31 backup systemd[1]: Started nginx server daemon.
[root@backup ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              //add inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               //modify inside the server block
            root   html;
            proxy_pass http://webserver;
        }
[root@backup ~]# systemctl reload nginx.service 
//reload the configuration

Access:
Enter the nginx load balancer's IP address in a browser.
[screenshots: repeated requests return the apache and nginx pages in turn]

Install Keepalived

master:

[root@master ~]# dnf list all |grep keepalived  //check that the package is available
Failed to set locale, defaulting to C.UTF-8
keepalived.x86_64                                      2.1.5-6.el8                                            AppStream 
[root@master ~]# dnf -y install keepalived

backup:

[root@backup ~]# dnf list all |grep keepalived //check that the package is available
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]   (the same warning repeated several times)
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
[root@backup ~]# dnf -y install keepalived

Configure Keepalived

master:

[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}  //back up the original configuration file
[root@master keepalived]# ls
keepalived.conf-bak
[root@master keepalived]# vim keepalived.conf  //write a new configuration file
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        //must be consistent between the master and backup nodes
    state BACKUP
    interface ens33      //network interface
    virtual_router_id 51
    priority 100     //higher than on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   //password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    //the high-availability virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {  //master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   //backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

backup:

[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak} //back up the original configuration file
[root@backup keepalived]# ls
keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf //write a new configuration file
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02    
}

vrrp_instance VI_1 {       //must be consistent between the master and backup nodes
    state BACKUP
    interface ens33      //network interface
    virtual_router_id 51
    priority 90     //lower than on the master node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   //password (can be randomly generated)
    }
    virtual_ipaddress {
        192.168.222.133    //the high-availability virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {   //master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   //backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup keepalived]# systemctl start nginx
//nginx can be started on the backup now that we are ready to test

Check the VIP
master:

[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever

backup:

[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever

//The VIP sits on the master because the Keepalived configuration gives the master a higher priority than the backup, so this is exactly what we expect.
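Another way to confirm which node currently owns the VIP is to watch the VRRP advertisements on the wire (a sketch; VRRP is IP protocol 112, and the instance above runs on ens33 with advert_int 1, so the current master should be seen advertising to 224.0.0.18 about once per second):

tcpdump -i ens33 -nn 'ip proto 112'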

Access:
[screenshots: requests to the VIP return the apache and nginx pages in turn]

master:

[root@master keepalived]# curl 192.168.222.133
apache
[root@master keepalived]# curl 192.168.222.133
nginx

Now stop nginx and keepalived on the master:

[root@master keepalived]# systemctl stop nginx.service 
[root@master keepalived]# systemctl stop keepalived.service 
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
//the master no longer holds the VIP

backup:

[root@backup keepalived]# systemctl enable --now keepalived
[root@backup keepalived]# systemctl start nginx.service 
[root@backup keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
//the VIP has appeared on the backup, which has taken over as master

[root@backup keepalived]# curl 192.168.222.133
apache
[root@backup keepalived]# curl 192.168.222.133
nginx

Access:
[screenshots: requests to the VIP still return the apache and nginx pages]

As you can see, even with one of the nginx load balancers down, access is not affected; this is what the high-availability configuration for the nginx load balancer buys you.
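To watch the failover from a client's point of view, a simple polling loop against the VIP is enough (a sketch; 192.168.222.133 is the VIP defined above). During the switchover you should see at most a brief gap before responses resume:

while true; do
    # --max-time keeps the loop from hanging while the VIP moves between nodes
    curl -s --max-time 1 http://192.168.222.133/ || echo "request failed"
    sleep 1
done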

Restart nginx and keepalived on the master:

[root@master keepalived]# systemctl enable --now keepalived
[root@master keepalived]# systemctl enable --now nginx
[root@master keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
//the VIP is back on the master node

Write scripts to monitor Keepalived and nginx status

master:

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
    if [ $nginx_status -lt 1 ];then
            systemctl stop keepalived
    fi
[root@master scripts]# chmod +x check_nginx.sh 
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh 
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
         echo "Usage:$0 master|backup VIP"
    ;;
esac

[root@master scripts]# chmod +x notify.sh 
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
-rwxr-xr-x. 1 root root 399 Oct 19 00:35 notify.sh
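Before wiring the scripts into Keepalived, it is worth a quick manual sanity check (a sketch):

# syntax-check both scripts without running them
bash -n /scripts/check_nginx.sh && bash -n /scripts/notify.sh && echo "syntax OK"

# roughly what the check script counts: nginx master and worker processes
ps -ef | grep -Ev "grep" | grep nginx | wc -l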

backup:
Create the directory that will hold the scripts ahead of time:

[root@backup keepalived]# cd
[root@backup ~]# mkdir  /scripts
[root@backup ~]# cd /scripts/

Copy the script from the master node into the directory just created on the backup node:

[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
notify.sh                                                          100%  399   216.0KB/s   00:00    
[root@backup scripts]# ls
notify.sh
[root@backup scripts]# cat notify.sh 
#!/bin/bash
case "$1" in
    master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep 'nginx'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
    ;;
    *)
         echo "Usage:$0 master|backup VIP"
    ;;
esac

Add the monitoring scripts to the Keepalived configuration

master:

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb01
}
 
vrrp_script nginx_check {                               //added
    script "/scripts/check_nginx.sh"                    //added
    interval 1                                          //added
    weight -20                                          //added
}                                                       //added
 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
     track_script {                     //added
        nginx_check                     //added
    }                                   //added
    notify_master "/scripts/notify.sh master"  //added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service 
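After the restart it is worth confirming that Keepalived parsed the new vrrp_script and track_script blocks cleanly (a sketch; exact log wording varies by version):

# recent keepalived releases can also dry-run the configuration first:
#   keepalived -t -f /etc/keepalived/keepalived.conf
journalctl -u keepalived.service --since "5 minutes ago" | grep -iE "nginx_check|error|warn"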

backup:
The backup node does not need to check whether nginx is healthy; it simply starts nginx when it is promoted to MASTER and stops it when it drops back to BACKUP.

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb02
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"           //added
    notify_backup "/scripts/notify.sh backup"           //added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service 

Test
Check the state during normal operation:

[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
[root@master]# curl 192.168.222.133
apache
[root@master]# curl 192.168.222.133
nginx
//the VIP is on the master node

Stop nginx on the master:

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0528 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
//no VIP here

backup:

[root@backup ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:31f9 brd ffffff:ff
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ffaff9/64 scope link 
       valid_lft forever preferred_lft forever
[root@backup ~]# curl 192.168.222.133
apache
[root@backup ~]# curl 192.168.222.133
nginx
//the backup node has become the master

Start nginx on the master again (along with keepalived, which the check script stopped when nginx went down):

[root@master ~]# systemctl restart keepalived.service 
[root@master ~]# systemctl restart nginx.service 
[root@master ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:2983:57 brd ffffff:ff
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.222.133/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff8357/64 scope link 
       valid_lft forever preferred_lft forever
[root@master]# curl 192.168.222.133
apache
[root@master]# curl 192.168.222.133
nginx
//the VIP has returned to the master
