
This article walks through deploying and configuring HAProxy in production for both Layer 4 (TCP) and Layer 7 (HTTP/HTTPS) load balancing. It covers basic installation, fine-grained configuration, and advanced features such as health checks, session persistence, SSL termination, performance tuning, and building a highly available cluster with Keepalived. Whether you need read load balancing for MySQL replicas or a highly available entry point for web services, this guide provides a complete, actionable solution.
If you run into problems during deployment, you are welcome to discuss them with peers in the Ops/DevOps section of the 云栈社区 community.
Applicable Scenarios & Prerequisites
Applicable workloads: web cluster entry point, microservice gateway, MySQL/Redis read load balancing, SSL offloading proxy
Prerequisites:
- HAProxy ≥ 2.0 (2.4+ recommended, for HTTP/2 and dynamic backend updates)
- OS: RHEL 7/8, Ubuntu 18.04/20.04/22.04
- Network: at least 2 NICs (frontend/backend separation recommended) or a single NIC with multiple IPs
- Privileges: root or sudo, able to bind ports 80/443
- Backend services: at least 2 available backend instances
Environment and Version Matrix
| Component | Version Requirement | OS Support | Key Features |
| --- | --- | --- | --- |
| HAProxy | 2.0+ (2.4-2.8 recommended) | RHEL 7/8, Ubuntu 18.04/20.04/22.04 | HTTP/2, dynamic backends, Runtime API |
| OpenSSL | 1.1.1+ (TLS 1.3) | Same as above | ALPN, SNI, OCSP stapling |
| Keepalived | 2.0+ | Same as above | VIP failover (high-availability scenarios) |
| System resources | 2C/4G/20G (minimum) | - | Handles ~10,000 concurrent connections |
Quick Checklist
- Install HAProxy and verify the version
- Configure Layer 4 TCP load balancing (MySQL/Redis)
- Configure Layer 7 HTTP/HTTPS load balancing (web services)
- Configure backend health checks (TCP/HTTP/SSL)
- Configure load balancing algorithms (round-robin / least connections / consistent hashing)
- Configure session persistence (cookie / source IP)
- Configure SSL/TLS termination and SNI
- Enable the statistics page and monitoring
- Test failover and graceful server removal
- Configure high availability (Keepalived VIP)
Implementation Steps
Step 1: Install HAProxy and Verify the Version
RHEL/CentOS:
# RHEL 8
sudo dnf install -y haproxy
# RHEL 7 (requires EPEL)
sudo yum install -y epel-release
sudo yum install -y haproxy
Ubuntu/Debian:
sudo apt update
sudo apt install -y haproxy
Install the latest version (from the official PPA):
# Ubuntu
sudo add-apt-repository ppa:vbernat/haproxy-2.8 -y
sudo apt update
sudo apt install -y haproxy=2.8.*
# Or build from source
wget https://www.haproxy.org/download/2.8/src/haproxy-2.8.3.tar.gz
tar xzf haproxy-2.8.3.tar.gz
cd haproxy-2.8.3
make TARGET=linux-glibc USE_OPENSSL=1 USE_PCRE=1 USE_SYSTEMD=1
sudo make install
Verify the installation:
haproxy -v
Expected output:
HAProxy version 2.8.3 2023/11/23
Check the build options:
haproxy -vv | grep -E "OpenSSL|PCRE|epoll"
Expected output:
Built with OpenSSL version : OpenSSL 1.1.1
Support for PCRE2 regex
Built with Linux epoll support
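The version check above can be scripted for provisioning pipelines. A minimal sketch; the sample output line is hardcoded for illustration and would normally come from `haproxy -v`:

```shell
# Extract the version from `haproxy -v` output and enforce a minimum.
# In practice: ver_line=$(haproxy -v | head -n 1)
ver_line='HAProxy version 2.8.3 2023/11/23'
version=$(echo "$ver_line" | awk '{print $3}')   # e.g. 2.8.3
major=${version%%.*}
rest=${version#*.}
minor=${rest%%.*}
if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 4 ]; }; then
  echo "version $version OK (>= 2.4)"
else
  echo "version $version too old, need >= 2.4" >&2
fi
```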
Step 2: Configure Layer 4 TCP Load Balancing (MySQL Reads)
Scenario: a MySQL primary/replica architecture, with read traffic distributed across multiple replicas
Create /etc/haproxy/haproxy.cfg:
global
log /dev/log local0 info
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
# Performance tuning
maxconn 40000 # maximum concurrent connections
# nbproc was removed in HAProxy 2.5; run a single process with nbthread instead
nbthread 4 # thread count = number of CPU cores
cpu-map auto:1/1-4 0-3 # pin threads to CPU cores
# SSL optimization
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
tune.ssl.default-dh-param 2048
defaults
log global
mode tcp # Layer 4 mode by default
option tcplog # TCP log format
option dontlognull # do not log health-check connections
timeout connect 5s # backend connect timeout
timeout client 50s # client idle timeout
timeout server 50s # backend server timeout
timeout check 5s # health-check timeout
retries 3 # retries on backend connection failure
maxconn 30000
# ========== MySQL read load balancing (Layer 4 TCP) ==========
listen mysql-read
bind 0.0.0.0:3307 # listen port (distinct from the primary's 3306)
mode tcp
balance leastconn # least-connections algorithm (best for long-lived connections)
# Health check: the MySQL server sends its handshake greeting first, so
# connect and check the protocol version byte (0x0a) without sending anything.
# For a protocol-aware check, `option mysql-check user <user>` is an alternative.
option tcp-check
tcp-check connect port 3306
tcp-check expect binary 0a # expect protocol version 10 in the server greeting
# Backend servers
server mysql-slave-01 10.0.1.101:3306 check inter 3s rise 2 fall 3 maxconn 1000
server mysql-slave-02 10.0.1.102:3306 check inter 3s rise 2 fall 3 maxconn 1000
server mysql-slave-03 10.0.1.103:3306 check inter 3s rise 2 fall 3 maxconn 1000 backup
# ========== Redis load balancing (Layer 4 TCP) ==========
listen redis-cluster
bind 0.0.0.0:6380
mode tcp
balance roundrobin # round-robin algorithm
# Health check (Redis PING command)
option tcp-check
tcp-check send PING\r\n
tcp-check expect string +PONG
# Backend servers
server redis-01 10.0.1.201:6379 check inter 2s rise 2 fall 3
server redis-02 10.0.1.202:6379 check inter 2s rise 2 fall 3
server redis-03 10.0.1.203:6379 check inter 2s rise 2 fall 3
Key parameters explained:
balance leastconn: pick the backend with the fewest current connections (suits long-lived MySQL connections)
check inter 3s: probe backend health every 3 seconds
rise 2 fall 3: mark UP after 2 consecutive successes, DOWN after 3 consecutive failures
maxconn 1000: maximum connections per backend (prevents overload)
backup: standby server, used only when all primary servers are DOWN
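The inter/rise/fall settings bound how quickly failures are detected. A quick sketch of the arithmetic, using the values from the server lines above:

```shell
# Values from the `check inter 3s rise 2 fall 3` server options above
inter=3   # seconds between health checks
fall=3    # consecutive failures before marking DOWN
rise=2    # consecutive successes before marking UP
# A server that fails right after a successful check is marked DOWN after
# `fall` more checks, i.e. up to inter*fall seconds later (plus check timeout):
down_worst=$(( inter * fall ))
# Recovery needs at least `rise` successful checks:
up_min=$(( inter * rise ))
echo "worst-case DOWN detection: ${down_worst}s, minimum recovery: ${up_min}s"
```

Shortening `inter` detects failures faster but multiplies check traffic across all backends; 2-5s is a common compromise.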
Validate the configuration syntax:
haproxy -c -f /etc/haproxy/haproxy.cfg
Expected output:
Configuration file is valid
Start HAProxy:
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
Test Layer 4 load balancing:
# Test a MySQL connection
mysql -h 127.0.0.1 -P 3307 -u test -ppassword -e "SELECT @@hostname;"
# Run several times and observe the returned hostname (should rotate across backends)
for i in {1..10}; do
mysql -h 127.0.0.1 -P 3307 -u test -ppassword -e "SELECT @@hostname;" 2>/dev/null | tail -1
done
Expected output (load balancing working):
mysql-slave-01
mysql-slave-02
mysql-slave-01
mysql-slave-02
Step 3: Configure Layer 7 HTTP/HTTPS Load Balancing (Web Services)
Scenario: multiple web backends, routed by domain/path, with SSL termination
Add to /etc/haproxy/haproxy.cfg:
# ========== HTTP frontend (Layer 7) ==========
frontend http-in
bind *:80
mode http
option httplog # HTTP log format
option forwardfor # add X-Forwarded-For header
option http-server-close # close server-side connections after each request (prevents connection pileup)
# ACL rules (domain-based routing)
acl is_api hdr(host) -i api.example.com
acl is_web hdr(host) -i www.example.com
acl is_admin hdr(host) -i admin.example.com
# ACL rules (path-based routing)
acl is_static path_beg /static /images /css /js
# Route to backends
use_backend api-backend if is_api
use_backend web-backend if is_web
use_backend admin-backend if is_admin
use_backend static-backend if is_static
# Default backend
default_backend web-backend
# Redirect HTTP to HTTPS
# redirect scheme https code 301 if !{ ssl_fc }
# ========== HTTPS frontend (SSL termination) ==========
frontend https-in
bind *:443 ssl crt /etc/haproxy/certs/example.com.pem alpn h2,http/1.1
mode http
option httplog
option forwardfor
# Add SSL-related headers
http-request set-header X-Forwarded-Proto https
http-request set-header X-Forwarded-Port 443
# Domain routing (same as HTTP)
acl is_api hdr(host) -i api.example.com
acl is_web hdr(host) -i www.example.com
use_backend api-backend if is_api
default_backend web-backend
# ========== Web backend (round-robin + cookie session persistence) ==========
backend web-backend
mode http
balance roundrobin # round-robin algorithm
cookie SERVERID insert indirect nocache # cookie-based session persistence
# Health check (HTTP GET)
option httpchk GET /health
http-check expect status 200
# Backend servers
server web-01 10.0.2.11:8080 check cookie web01 maxconn 500
server web-02 10.0.2.12:8080 check cookie web02 maxconn 500
server web-03 10.0.2.13:8080 check cookie web03 maxconn 500
# ========== API backend (least connections + source-IP persistence) ==========
backend api-backend
mode http
balance leastconn # least-connections algorithm
stick-table type ip size 100k expire 30m # source-IP persistence table
stick on src # pin clients to a backend by source IP
# Health check (HTTP POST)
option httpchk POST /api/health
http-check expect status 200
server api-01 10.0.2.21:8080 check maxconn 1000
server api-02 10.0.2.22:8080 check maxconn 1000
server api-03 10.0.2.23:8080 check maxconn 1000
# ========== Static assets backend (consistent hashing) ==========
backend static-backend
mode http
balance uri # URI hashing (same URL hits the same backend, improving cache hit rate)
hash-type consistent # consistent hashing (minimal disruption when backends change)
option httpchk HEAD /favicon.ico
http-check expect status 200
server static-01 10.0.2.31:8080 check
server static-02 10.0.2.32:8080 check
# ========== Admin backend (single node with standby) ==========
backend admin-backend
mode http
balance roundrobin
option httpchk GET /admin/health
http-check expect status 200
server admin-01 10.0.2.41:8080 check
server admin-02 10.0.2.42:8080 check backup # standby server
Key parameters explained:
option forwardfor: adds X-Forwarded-For (real client IP) to request headers
cookie SERVERID insert: HAProxy inserts a cookie for session persistence
stick-table + stick on src: source-IP persistence (suits stateless API services)
balance uri: hashes the URI (optimizes cache hit rates in CDN-style setups)
alpn h2,http/1.1: enables HTTP/2 protocol negotiation
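The domain-routing ACLs above are easy to sanity-check offline before touching a live proxy. A minimal sketch that mirrors the frontend's routing table in plain shell (the function is illustrative, not part of HAProxy; hostnames and backend names come from the config above):

```shell
# Mirror of the frontend's use_backend / default_backend rules,
# for reasoning about routing without a running HAProxy.
route_backend() {
  case "$1" in
    api.example.com)   echo "api-backend" ;;
    www.example.com)   echo "web-backend" ;;
    admin.example.com) echo "admin-backend" ;;
    *)                 echo "web-backend" ;;   # default_backend
  esac
}
route_backend api.example.com     # api-backend
route_backend unknown.example.com # web-backend (falls through to the default)
```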
Prepare the SSL certificate (concatenate cert + key + chain):
# Create the certificate directory
sudo mkdir -p /etc/haproxy/certs
# Concatenate the certificate files (the redirection must run as root, hence sh -c)
sudo sh -c 'cat /etc/ssl/certs/example.com.crt \
/etc/ssl/private/example.com.key \
/etc/ssl/certs/ca-chain.crt \
> /etc/haproxy/certs/example.com.pem'
# Restrict permissions
sudo chmod 600 /etc/haproxy/certs/example.com.pem
sudo chown haproxy:haproxy /etc/haproxy/certs/example.com.pem
Validate the configuration and reload:
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy
Test Layer 7 load balancing:
# Test HTTP load balancing
for i in {1..10}; do
curl -s http://www.example.com/ | grep "Server:"
done
# Test HTTPS SSL termination
curl -kv https://www.example.com/ 2>&1 | grep -E "SSL|Server:"
# Test domain routing
curl -H "Host: api.example.com" http://127.0.0.1/
curl -H "Host: www.example.com" http://127.0.0.1/
# Test session persistence (cookie)
curl -c /tmp/cookie.txt http://www.example.com/
curl -b /tmp/cookie.txt http://www.example.com/ # should hit the same backend
Step 4: Configure Advanced Health Checks
HTTP health check (with custom headers and response validation):
backend advanced-health-check
mode http
balance roundrobin
# Custom health-check request (http-check send is the HAProxy 2.2+ syntax;
# the older inline \r\n header form is deprecated)
option httpchk
http-check send meth GET uri /health ver HTTP/1.1 hdr Host health.example.com hdr User-Agent HAProxy-Health-Check
# Validate response status code and body
http-check expect status 200
http-check expect string healthy
server app-01 10.0.3.11:8080 check port 8080 inter 5s rise 2 fall 3
server app-02 10.0.3.12:8080 check port 8080 inter 5s rise 2 fall 3
SSL health check:
backend ssl-backend
mode tcp
balance roundrobin
# SSL handshake check
option ssl-hello-chk
server secure-01 10.0.3.21:443 check check-ssl verify none
server secure-02 10.0.3.22:443 check check-ssl verify none
MySQL replication lag check (via an HTTP-proxied script):
# Deploy a health-check script on each MySQL replica
# /usr/local/bin/mysql-health-check.sh
#!/bin/bash
DELAY=$(mysql -u monitor -ppassword -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print $2}')
if [ "$DELAY" == "NULL" ] || [ "$DELAY" -gt 30 ]; then
echo "unhealthy"
exit 1
else
echo "healthy"
exit 0
fi
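The script's threshold logic is worth factoring into a testable function. A minimal sketch with the lag value passed in as an argument (the 30-second threshold comes from the script above; "NULL" is what MySQL reports when the replication thread is stopped):

```shell
# Decide health from a Seconds_Behind_Master value.
# $1 = lag in seconds or "NULL"; $2 = optional threshold (default 30s)
replica_health() {
  local delay="$1" threshold="${2:-30}"
  if [ "$delay" = "NULL" ] || [ "$delay" -gt "$threshold" ]; then
    echo "unhealthy"
    return 1
  fi
  echo "healthy"
  return 0
}
replica_health 5     # healthy
replica_health NULL  # unhealthy (replication thread stopped)
replica_health 120   # unhealthy (lag above the 30s threshold)
```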
Expose the health-check port via xinetd:
# Install xinetd
sudo apt install -y xinetd
# Create the service definition /etc/xinetd.d/mysql-health
service mysql-health
{
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /usr/local/bin/mysql-health-check.sh
log_on_failure += USERID
disable = no
only_from = 10.0.0.0/8
}
# Restart xinetd
systemctl restart xinetd
(For `option httpchk` to parse the reply, the script should also emit an HTTP status line and a blank line before the body, e.g. `HTTP/1.1 200 OK` for healthy and `HTTP/1.1 503 Service Unavailable` otherwise.)
HAProxy configuration using the HTTP health check:
backend mysql-smart-health
mode tcp
balance leastconn
option httpchk GET /
http-check expect string healthy
server mysql-slave-01 10.0.1.101:3306 check port 9200 inter 5s
server mysql-slave-02 10.0.1.102:3306 check port 9200 inter 5s
Step 5: Load Balancing Algorithm Comparison
| Algorithm | Configuration | Suitable Scenarios | Trade-offs |
| --- | --- | --- | --- |
| Round-robin | balance roundrobin | Web services with evenly matched backends | Simple and even, but ignores backend load |
| Least connections | balance leastconn | Long-lived connections (databases/WebSocket) | Adapts to load dynamically, with some bookkeeping overhead |
| Source-IP hash | balance source | Stateless services needing session affinity | Same client always hits the same backend, but load may be uneven |
| URI hash | balance uri | CDN/caching scenarios | Improves cache hit rate, but skewed URL distribution skews load |
| Consistent hash | balance uri + hash-type consistent | Backends that scale in and out dynamically | Minimal disruption on backend changes (~1/N of keys remapped) |
| Random (power of two choices) | balance random(2) (HAProxy 1.9+) | Large farms with many proxies or dynamic backends | Picks the less loaded of 2 random draws; near-leastconn results with less contention |
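The ~1/N remap claim for consistent hashing is easiest to appreciate by contrast with plain modulo hashing, where changing the server count remaps almost every key. A small sketch using `cksum` as a stand-in hash (illustrative only; HAProxy's internal hash function differs):

```shell
# Modulo-hash a URI into one of N buckets (cksum as a stand-in hash).
bucket() {  # usage: bucket <uri> <server_count>
  local h
  h=$(printf '%s' "$1" | cksum | awk '{print $1}')
  echo $(( h % $2 ))
}
# Count how many of 20 sample URIs land in a different bucket when
# scaling from 3 to 4 servers: with plain modulo, most keys move
# (expected ~3/4), while consistent hashing would move only ~1/4.
moved=0
for i in $(seq 1 20); do
  if [ "$(bucket "/asset/$i" 3)" -ne "$(bucket "/asset/$i" 4)" ]; then
    moved=$((moved + 1))
  fi
done
echo "modulo hashing: $moved of 20 keys remapped"
```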
Consistent hashing example:
backend consistent-hash-backend
mode http
balance uri
hash-type consistent # enable consistent hashing
hash-balance-factor 150 # bounded-load factor: no server takes more than 150% of the average load
server cache-01 10.0.4.11:8080 check weight 100
server cache-02 10.0.4.12:8080 check weight 100
server cache-03 10.0.4.13:8080 check weight 100
Step 6: Configure Session Persistence
Method 1: cookie insertion (recommended, stateless)
backend cookie-persistence
mode http
balance roundrobin
cookie SERVERID insert indirect nocache httponly secure
server web-01 10.0.5.11:8080 check cookie srv01
server web-02 10.0.5.12:8080 check cookie srv02
Key parameters:
insert: HAProxy inserts the cookie
indirect: the cookie is stripped before reaching the backend
httponly secure: mitigates XSS and exposure over plaintext HTTP
Method 2: source-IP stickiness (simple, but affected by NAT)
backend source-ip-persistence
mode http
balance roundrobin
stick-table type ip size 100k expire 30m
stick on src
server web-01 10.0.5.11:8080 check
server web-02 10.0.5.12:8080 check
Method 3: URL-parameter stickiness (suits API token scenarios)
backend url-param-persistence
mode http
balance roundrobin
stick-table type string len 32 size 100k expire 1h
stick on url_param(session_id)
server api-01 10.0.5.21:8080 check
server api-02 10.0.5.22:8080 check
Test session persistence:
# Cookie method
curl -c /tmp/cookies.txt http://www.example.com/
grep SERVERID /tmp/cookies.txt
curl -b /tmp/cookies.txt http://www.example.com/ # should hit the same backend
# Source-IP method
for i in {1..10}; do
curl http://www.example.com/ | grep "Server:"
done # should always return the same backend
Step 7: Enable the Statistics Page and Monitoring
Configure the statistics page:
listen stats
bind *:8404
mode http
stats enable
stats uri /haproxy-stats # access path
stats refresh 30s # auto-refresh interval
stats realm HAProxy\ Statistics
stats auth admin:StrongPassword123 # authentication (username:password)
stats admin if TRUE # enable admin actions (manual backend removal)
Access the statistics page:
# In a browser
http://your-haproxy-ip:8404/haproxy-stats
# From the command line
curl -u admin:StrongPassword123 http://localhost:8404/haproxy-stats
Manage backends via the Runtime API:
# Show the state of all backend servers
echo "show servers state" | socat stdio /run/haproxy/admin.sock
# Disable a backend server (maintenance mode)
echo "set server web-backend/web-01 state maint" | socat stdio /run/haproxy/admin.sock
# Re-enable a backend server
echo "set server web-backend/web-01 state ready" | socat stdio /run/haproxy/admin.sock
# Show current connection counts
echo "show stat" | socat stdio /run/haproxy/admin.sock | grep "^web-backend"
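`show stat` emits CSV (pxname in column 1, svname in column 2, status in column 18), which is convenient to post-process in scripts. A minimal sketch that counts UP servers in a backend; it is run here against an embedded sample of the output (the sample rows are illustrative and trimmed to 18 columns; real output has more):

```shell
# Count UP servers for a given backend from `show stat` CSV output.
# Real usage: echo "show stat" | socat stdio /run/haproxy/admin.sock | count_up web-backend
count_up() {
  awk -F, -v px="$1" \
    '$1 == px && $2 != "FRONTEND" && $2 != "BACKEND" && $18 ~ /^UP/ { n++ } END { print n+0 }'
}
sample='web-backend,web-01,0,0,1,5,500,100,0,0,0,0,0,0,0,0,0,UP
web-backend,web-02,0,0,2,6,500,120,0,0,0,0,0,0,0,0,0,UP
web-backend,web-03,0,0,0,0,500,80,0,0,0,0,0,0,0,0,0,DOWN
web-backend,BACKEND,0,0,3,11,1000,300,0,0,0,0,0,0,0,0,0,UP'
printf '%s\n' "$sample" | count_up web-backend   # prints 2
```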
Integrate Prometheus monitoring (using haproxy_exporter):
# Install haproxy_exporter
wget https://github.com/prometheus/haproxy_exporter/releases/download/v0.15.0/haproxy_exporter-0.15.0.linux-amd64.tar.gz
tar xzf haproxy_exporter-0.15.0.linux-amd64.tar.gz
sudo cp haproxy_exporter-0.15.0.linux-amd64/haproxy_exporter /usr/local/bin/
# Create the systemd unit
sudo tee /etc/systemd/system/haproxy_exporter.service << EOF
[Unit]
Description=HAProxy Exporter
After=network.target
[Service]
Type=simple
User=haproxy
ExecStart=/usr/local/bin/haproxy_exporter --haproxy.scrape-uri="unix:/run/haproxy/admin.sock"
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Start the service
sudo systemctl daemon-reload
sudo systemctl enable haproxy_exporter
sudo systemctl start haproxy_exporter
# Verify the metrics
curl http://localhost:9101/metrics | grep haproxy_backend_up
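Note that HAProxy 2.0+ also ships a built-in Prometheus endpoint, which removes the need for a separate exporter process (it must be compiled in, which recent distribution packages typically do; the port and `/metrics` path below are choices, not requirements):

```
frontend prometheus
    bind *:8405
    mode http
    # Built-in exporter service (HAProxy 2.0+)
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```

Point Prometheus at `http://<haproxy-ip>:8405/metrics`; metric names differ slightly from those of haproxy_exporter.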
Step 8: Test Failover and Graceful Server Removal
Simulate a backend failure:
# Method 1: stop the backend service
ssh web-01 "systemctl stop nginx"
# Method 2: block the health checks with iptables
ssh web-01 "iptables -A INPUT -p tcp --dport 8080 -j DROP"
# Watch the HAProxy log
journalctl -u haproxy -f | grep web-01
Expected log line:
Server web-backend/web-01 is DOWN, reason: Layer4 connection problem
Verify traffic failover:
# Send requests continuously (there should be no interruption)
while true; do
curl -s http://www.example.com/ | grep "Server:"
sleep 0.5
done
Gracefully remove a backend (zero-downtime maintenance):
# Put the server in DRAIN mode (no new connections; existing ones finish)
echo "set server web-backend/web-01 state drain" | socat stdio /run/haproxy/admin.sock
# Wait for connections to drop to zero
watch -n 1 'echo "show stat" | socat stdio /run/haproxy/admin.sock | grep web-01'
# Take it fully offline
echo "set server web-backend/web-01 state maint" | socat stdio /run/haproxy/admin.sock
# Restore after maintenance
echo "set server web-backend/web-01 state ready" | socat stdio /run/haproxy/admin.sock
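The drain-then-maint sequence can be scripted so maintenance proceeds only once connections reach zero. A minimal sketch; the connection counter is mocked here for illustration, while a real script would parse `scur` (column 5) for web-01 out of `show stat`:

```shell
# Mocked counter: pretends the server drains over three polls.
# Real version:
#   echo "show stat" | socat stdio /run/haproxy/admin.sock | awk -F, '$2=="web-01"{print $5}'
drain_counts="5 2 0"
current_conns() {
  set -- $drain_counts   # consume the next mocked value
  c=$1
  shift
  drain_counts="$*"
}
polls=0
while :; do
  current_conns
  polls=$((polls + 1))
  [ "$c" -eq 0 ] && break
  # sleep 1   # poll interval in a real script
done
echo "drained after $polls polls"   # with the mock above: drained after 3 polls
```

After the loop reports zero connections, issue `state maint` safely.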
Step 9: Configure High Availability (Keepalived VIP)
Scenario: two HAProxy nodes with a floating VIP
Install Keepalived:
# RHEL/CentOS
sudo yum install -y keepalived
# Ubuntu/Debian
sudo apt install -y keepalived
Primary node /etc/keepalived/keepalived.conf:
global_defs {
router_id HAProxy-Master
}
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy" # check that the HAProxy process is alive
interval 2
weight -20 # lower the priority by 20 on failure
}
vrrp_instance VI_1 {
state MASTER
interface eth0 # NIC name
virtual_router_id 51
priority 100 # higher priority on the primary
advert_int 1
authentication {
auth_type PASS
auth_pass SecurePassword123
}
virtual_ipaddress {
10.0.0.100/24 # VIP address
}
track_script {
chk_haproxy
}
# Run when this node becomes MASTER
notify_master "/usr/local/bin/haproxy_master.sh"
}
Backup node /etc/keepalived/keepalived.conf:
global_defs {
router_id HAProxy-Backup
}
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 90 # lower priority on the backup
advert_int 1
authentication {
auth_type PASS
auth_pass SecurePassword123
}
virtual_ipaddress {
10.0.0.100/24
}
track_script {
chk_haproxy
}
}
Start Keepalived:
# Run on both nodes
systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
Verify the VIP:
# On the primary node, check for the VIP
ip addr show eth0 | grep 10.0.0.100
# Test access through the VIP
curl http://10.0.0.100/
# Simulate a primary failure
systemctl stop haproxy
# The backup node should take over the VIP (within 10-20 seconds)
ip addr show eth0 | grep 10.0.0.100
Monitoring and Alerting
Key HAProxy metrics
# Show backend status
echo "show stat" | socat stdio /run/haproxy/admin.sock | column -t -s ','
# Show current session counts
echo "show info" | socat stdio /run/haproxy/admin.sock | grep -E "CurrConns|MaxConn"
# Show error statistics
echo "show errors" | socat stdio /run/haproxy/admin.sock
Prometheus alerting rules
# Backend down
haproxy_backend_up == 0
# Backend response time above threshold
haproxy_backend_response_time_average_seconds > 1
# 5xx error rate
rate(haproxy_backend_http_responses_total{code="5xx"}[5m]) / rate(haproxy_backend_http_responses_total[5m]) > 0.05
# Queue depth (backend overloaded)
haproxy_backend_current_queue > 10
Suggested Grafana panels
| Panel | Query | Threshold |
| --- | --- | --- |
| Backend availability | haproxy_backend_up | < 1 |
| Current concurrent connections | haproxy_frontend_current_sessions | > 25000 |
| Backend response time P95 | histogram_quantile(0.95, haproxy_backend_response_time) | > 500ms |
| HTTP error rate | rate(haproxy_frontend_http_responses_total{code=~"4..\|5.."}[5m]) | > 5% |
Performance and Capacity
Performance benchmarks
# Layer 4 TCP performance: wrk only speaks HTTP, so benchmark the MySQL
# path through the proxy with a database tool such as sysbench (after `prepare`)
sysbench oltp_read_only --mysql-host=10.0.0.100 --mysql-port=3307 --mysql-user=test --mysql-password=password run
# Layer 7 HTTP performance
wrk -t 8 -c 2000 -d 60s --latency http://10.0.0.100/
# HTTPS performance
wrk -t 8 -c 1000 -d 60s --latency https://10.0.0.100/
Expected performance (4C/8G server):
| Scenario | QPS | P99 Latency | CPU Usage |
| --- | --- | --- | --- |
| Layer 4 TCP | 50000 | < 5ms | 40% |
| Layer 7 HTTP | 30000 | < 10ms | 60% |
| Layer 7 HTTPS | 15000 | < 20ms | 80% |
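The table above translates directly into capacity planning. A minimal sketch of the arithmetic, using the HTTPS row and a hypothetical peak load (the 40000 QPS peak and the 80% headroom factor are assumptions for illustration):

```shell
per_node_qps=15000   # HTTPS QPS per 4C/8G node, from the table above
peak_qps=40000       # hypothetical peak traffic
headroom=80          # plan to run each node at no more than 80% of capacity
usable=$(( per_node_qps * headroom / 100 ))      # usable QPS per node: 12000
nodes=$(( (peak_qps + usable - 1) / usable ))    # ceiling division
echo "need $nodes nodes for ${peak_qps} QPS"     # need 4 nodes for 40000 QPS
```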
Tuning parameters
global
maxconn 100000 # raise the maximum concurrent connections
nbthread 8 # more threads (= number of CPU cores)
tune.bufsize 32768 # larger buffers (default 16384)
tune.maxrewrite 8192 # larger rewrite buffer
tune.ssl.cachesize 100000 # SSL session cache entries
Security and Compliance
SSL/TLS hardening
global
# Disable weak cipher suites
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
# OCSP stapling (automatic OCSP updates require HAProxy 2.8+)
tune.ssl.ocsp-update.mode on
DDoS protection
frontend http-in
# Track per-IP rates; one table stores both connection and request rates
stick-table type ip size 1m expire 30s store conn_rate(10s),http_req_rate(10s)
tcp-request connection track-sc0 src
tcp-request connection reject if { sc_conn_rate(0) gt 100 } # reject IPs exceeding 100 connections per 10s
# Request-rate limit (uses the same tracked entry)
http-request deny if { sc_http_req_rate(0) gt 200 } # reject IPs exceeding 200 requests per 10s
Audit logging
global
log /dev/log local0 info
log /dev/log local1 notice # route error logs separately
frontend https-in
option httplog # detailed HTTP logging
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
Syslog-ng integration:
# /etc/syslog-ng/conf.d/haproxy.conf
source s_haproxy { unix-dgram("/dev/log"); };
filter f_haproxy { facility(local0) and program("haproxy"); };
destination d_haproxy { file("/var/log/haproxy/haproxy.log"); };
log { source(s_haproxy); filter(f_haproxy); destination(d_haproxy); };
Common Failures and Troubleshooting
| Symptom | Diagnostic Command | Likely Root Cause | Quick Fix | Permanent Fix |
| --- | --- | --- | --- | --- |
| Backend marked DOWN | echo "show stat" \| socat stdio /run/haproxy/admin.sock | Health check failing | Check the backend service and network | Tune health-check parameters |
| 503 Service Unavailable | journalctl -u haproxy -f | All backends unavailable | Restore at least 1 backend | Add backends / configure backup |
| SSL handshake failure | openssl s_client -connect localhost:443 | Expired certificate / misconfiguration | Renew the certificate | Set up certificate-expiry monitoring |
| Connection timeouts | echo "show info" \| socat stdio /run/haproxy/admin.sock | Slow backends / maxconn limit reached | Raise timeout/maxconn | Scale out the backends |
| VIP unreachable | ip addr \| grep VIP | Keepalived not running / VRRP traffic blocked | Restart Keepalived | Check firewall rules |
| Session persistence broken | curl -v to inspect the cookie | Cookie not inserted / overwritten | Check the cookie directive | Debug the backend application |
Change and Rollback Playbook
Maintenance window
Recommended time: 02:00 - 04:00
Pre-change requirements:
- [ ] Back up the configuration file (/etc/haproxy/haproxy.cfg)
- [ ] Validate the new configuration in a test environment
- [ ] Prepare rollback commands
- [ ] Notify business stakeholders
Staged rollout
Stage 1: validate the configuration
haproxy -c -f /etc/haproxy/haproxy.cfg.new
Stage 2: hot reload (zero downtime)
systemctl reload haproxy
Stage 3: monitor for 5 minutes
watch -n 1 'echo "show stat" | socat stdio /run/haproxy/admin.sock | grep DOWN'
Rollback conditions and commands
Trigger conditions:
- All backends DOWN
- 5xx error rate > 10%
- Client connection failure rate > 5%
Rollback:
cp /etc/haproxy/haproxy.cfg.backup /etc/haproxy/haproxy.cfg
systemctl reload haproxy
Best Practices
- Make health checks meaningful: probe the real service (e.g. /health), not a static file
- Avoid single points of failure: deploy at least 2 HAProxy nodes + a Keepalived VIP
- Prefer cookie-based session persistence: source IP is unreliable behind NAT/proxies
- Set timeouts conservatively: timeout client/server should exceed the slowest backend endpoint's response time
- Enable access logging: option httplog records full requests for troubleshooting
- Cap per-backend connections: set maxconn to prevent backend overload (suggested: 0.8 × the backend's own maximum)
- Automate SSL certificates: use Let's Encrypt + certbot for automatic renewal
- Monitor backend response times: alert early when P95/P99 latency exceeds thresholds
- Rehearse failover regularly: manually drain a backend and verify traffic shifts
- Version-control the configuration: keep it in Git, with changes reviewed via PR
Appendix
Complete production configuration template
/etc/haproxy/haproxy.cfg:
global
log /dev/log local0 info
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
maxconn 50000
nbthread 4
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5s
timeout client 50s
timeout server 50s
retries 3
maxconn 40000
frontend http-in
bind *:80
redirect scheme https code 301
frontend https-in
bind *:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
option forwardfor
http-request set-header X-Forwarded-Proto https
default_backend web-backend
backend web-backend
balance roundrobin
cookie SERVERID insert indirect nocache httponly secure
option httpchk GET /health
http-check expect status 200
server web-01 10.0.1.11:8080 check cookie web01
server web-02 10.0.1.12:8080 check cookie web02
listen stats
bind *:8404
mode http
stats enable
stats uri /
stats refresh 30s
stats auth admin:password
Tested on: RHEL 8.8 / Ubuntu 22.04, HAProxy 2.8.3
Test date: 2025-10-31
Maintenance cadence: configuration reviewed quarterly; SSL certificate expiry checked monthly