FastDFS Explained: Principles, Single-Node Deployment, and Distributed High-Availability Clustering

Getting to Know FastDFS

What is a distributed file system?

As file data keeps growing, the static resources served through Tomcat or nginx no longer fit on a single server node. Spreading them across multiple nodes works, but managing and maintaining them by hand does not scale, so we need a system that manages file data across many machines: a distributed file system.

A distributed file system lets files be shared across multiple nodes over the network. The nodes together form a single whole that offers file sharing and storage space to many users. Although the system is distributed, it is transparent to its users: using it feels just like accessing a local disk.

A distributed file system provides redundant replicas, so its fault tolerance is high. If some nodes go down, the file service as a whole keeps running and no data is lost. It also scales well: adding or removing nodes is simple and does not disturb the live service; once a new node is ready it joins the cluster and starts serving users.

It also provides load balancing: reads of file replicas can be served by several nodes at once, and horizontal scaling raises throughput while spreading the load.

FastDFS is an open-source, lightweight distributed file system. It addresses large-scale storage and load balancing, and is particularly suited to online services built around small and medium-sized files, such as photo and video sites.

Features:

Files are stored without chunking; each uploaded file maps one-to-one to a file in the OS file system

Files with identical content can be stored only once, saving disk space

Downloads over HTTP, either via the built-in web server (dropped after 5.0) or in combination with an external web server such as nginx

Online capacity expansion

Slave files (files linked to a master file, e.g. thumbnails) are supported

Storage servers can keep file attributes (meta-data)

Since V2.0 network I/O uses libevent, supporting high concurrency with good overall performance

Official site: https://github.com/happyfish100/

Configuration docs: https://github.com/happyfish100/fastdfs/wiki/

1. Architecture

Tracker server

The tracker mainly does scheduling and acts as the load balancer. It keeps the state of every storage group and storage server in memory and is the hub between clients and storage servers. It holds no file index information, so its memory footprint is tiny.

It manages all storage servers and groups. Each storage connects to the tracker on startup, reports which group it belongs to, and keeps a periodic heartbeat. From these heartbeats the tracker builds a group ==> [storage server list] mapping table.

The metadata a tracker manages is small and lives entirely in memory. It is all derived from what the storages report, so the tracker itself persists nothing. That makes trackers trivially scalable: just add tracker machines to form a tracker cluster. All trackers in the cluster are fully equal peers; each receives the storage heartbeats and builds its own metadata to serve reads and writes.

Storage server

Files and their metadata live on the storage servers, which access and manage them directly through the operating system's file system. Storage servers are organized in groups (also called volumes); a group contains multiple storage machines whose data mirror each other. A group's capacity is that of its smallest member, so the storages within a group should be provisioned as identically as possible to avoid wasting space.

Groups enable data isolation, load balancing, and application isolation: putting different applications' data in different groups isolates them, and data can be assigned to groups according to access patterns to balance load. The downsides are that a group's capacity is limited by a single machine's storage, and when a machine in a group fails, recovery can only pull data from the other members of the same group, which makes recovery slow.

Each storage in a group relies on the local file system and can be configured with multiple data directories. For example, with ten disks mounted at /data/disk1 through /data/disk10, all ten can be configured as storage data directories; on a write, the storage picks one of them according to its configured rules. To keep any single directory from accumulating too many files, on first startup the storage creates two levels of subdirectories in every data directory, 256 at each level, 65536 leaf directories in total. A new file is routed to one of these subdirectories by hash and written there as an ordinary local file (see the sketch below).
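A minimal sketch of what this layout looks like on disk, assuming store_path0 = /opt/fdfs/storage/data as configured later in this article (listings abbreviated, for illustration only):

# After fdfs_storaged first starts, each store path holds 256 first-level
# directories (00..FF), each containing 256 second-level directories:
ls /opt/fdfs/storage/data
# 00  01  02  ...  FE  FF
ls /opt/fdfs/storage/data/02
# 00  01  02  ...  FE  FF
# An uploaded file is hashed into one leaf directory, e.g.:
# /opt/fdfs/storage/data/02/44/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg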

Client

The client initiates business requests and talks to the tracker or storage servers over TCP/IP through a dedicated interface.

FastDFS exposes basic file-access operations such as upload, download, append, and delete, packaged as a client library.

group: a group, also called a volume. Files on the servers of one group are identical; the storage servers within a group are peers, and operations such as upload and delete can be performed on any of them.

meta data: file attributes, expressed as key-value pairs, e.g. width=1024, height=768

Tracker cluster

A FastDFS cluster can contain multiple tracker servers. They are equal peers serving simultaneously, so there is no single point of failure at the tracker level. Clients pick a tracker by round-robin; if the chosen tracker cannot serve the request, they switch to another.

Every storage server connects to all tracker servers in the cluster and periodically reports its status: remaining disk space, file synchronization progress, upload/download counters, and other statistics.

Storage cluster

To support large capacity, storage servers are organized in groups. The storage system consists of one or more groups whose files are independent of each other; the system's total capacity is the sum of the capacities of all groups.

A group contains one or more storage servers, which are peers. Storages in different groups never talk to each other, while storages within the same group connect to one another to synchronize files, keeping every storage in the group fully consistent.

The multiple storages in a group provide both redundancy and load balancing.

When a storage is added to a group, the existing files are synchronized to it automatically; once synchronization finishes, the new storage is brought online to serve traffic. When storage space runs low or is about to be exhausted, groups can be added dynamically: just set up one or more storages and configure them into the same new group.

2. FastDFS upload


When the tracker receives an upload request from a client, it first assigns a group that can store the file, then decides which storage within that group the client should use.

Once a storage is chosen, the client's request is directed to it. The storage picks one of its data directories for the file, allocates a fileid, and generates the final file name from this information. After writing the file to disk, the storage returns the path information to the client, which can later locate the uploaded file by that path.

3. FastDFS download

Storages send periodic heartbeats to the tracker to report that they are still alive; with that in place, FastDFS can serve downloads.

The client sends a download request to a tracker, which looks up the address of a storage holding the file and returns it to the client.

With the storage address in hand, the client asks that storage for the file.

The storage returns the file to the client.

group1/M00/02/44/Swtdssdsdfsdf.txt

1. From the group name, the tracker quickly determines that the client needs the storage group group1 and picks a suitable storage for the client to access.

2. Using the "virtual disk path" and the "two-level data directory", the storage quickly locates the file's directory and then finds the requested file by name (see the annotated breakdown below).
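To make the parts of such a file ID concrete, here is the example above annotated (this follows the usual FastDFS naming scheme; the trailing name is just the placeholder from the example, not a real generated name):

# group1/M00/02/44/Swtdssdsdfsdf.txt
# group1            -> name of the storage group holding the file
# M00               -> virtual disk path, mapped to store_path0 (M01 would be store_path1)
# 02/44             -> the two-level data directory selected by hash
# Swtdssdsdfsdf.txt -> file name generated by the storage, encoding among other
#                      things the source storage, a timestamp and the file size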

4. Extension module

Most deployments need to serve the files stored in FastDFS over HTTP. Although FastDFS built HTTP servers into both storage and tracker, their performance was underwhelming.

In later versions the author, Yu Qing, added extension modules for mainstream web servers (nginx/apache), so the web server can serve the local storage's data files over HTTP directly, improving download performance.

Initialization process (figure omitted)

File download process (figure omitted)

FastDFS in Practice

1. Single-node deployment

Note: always start the tracker first, then the storage.

Package directory: /usr/local/

Packages to install:

1. libfastcommon: common utility functions split out of FastDFS

2. FastDFS: FastDFS itself

3. fastdfs-nginx-module: the module that glues FastDFS to nginx

4. nginx: serves download traffic

Install the build environment

yum -y install gcc pcre-devel zlib-devel openssl-devel libxml2-devel \
libxslt-devel gd-devel GeoIP-devel jemalloc-devel libatomic_ops-devel \
perl-devel perl-ExtUtils-Embed git gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl-devel

Fetch the sources

mkdir /usr/local/fastdfs

cd /usr/local/fastdfs

wget --no-check-certificate https://github.com/happyfish100/libfastcommon/archive/refs/heads/master.zip -O libfastcommon-master.zip

wget --no-check-certificate https://github.com/happyfish100/fastdfs/archive/refs/heads/master.zip -O fastdfs.zip

wget --no-check-certificate https://github.com/happyfish100/fastdfs-nginx-module/archive/refs/heads/master.zip -O fastdfs-nginx-module.zip

unzip libfastcommon-master.zip

unzip fastdfs.zip

unzip fastdfs-nginx-module.zip

rm -rf *.zip

Build and install

cd /usr/local/fastdfs/libfastcommon-master && sh make.sh clean && sh make.sh && sh make.sh install

cd /usr/local/fastdfs/fastdfs-master && sh make.sh clean && sh make.sh && sh make.sh install

cd /usr/local/fastdfs/fastdfs-master && sh setup.sh /etc/fdfs

Install the stock configuration files

cp -r -a /usr/local/fastdfs/fastdfs-master/conf/* /etc/fdfs

cp -r -a /usr/local/fastdfs/fastdfs-master/systemd/* /usr/lib/systemd/system

Tracker and storage configuration

# Create the working directories

mkdir /opt/fdfs/{tracker,storage,client} -pv && mkdir /opt/fdfs/storage/data

# Switch to the config directory

cd /etc/fdfs/

## Tracker config and its systemd unit

cp tracker.conf tracker.conf.bak

vim tracker.conf

base_path = /opt/fdfs/tracker

vim /usr/lib/systemd/system/fdfs_trackerd.service

PIDFile=/opt/fdfs/tracker/data/fdfs_trackerd.pid

## Storage config and its systemd unit

cp storage.conf storage.conf.bak

vim storage.conf

group_name = group1
base_path = /opt/fdfs/storage
store_path0 = /opt/fdfs/storage/data
tracker_server = 192.168.149.131:22122
http.server_port = 8888

vim /usr/lib/systemd/system/fdfs_storaged.service

PIDFile=/opt/fdfs/storage/data/fdfs_storaged.pid

# Client config

cp client.conf client.conf.bak

vim client.conf

base_path = /opt/fdfs/client

tracker_server = 192.168.149.131:22122

# Reload systemd

systemctl daemon-reload

# Start the services

systemctl start fdfs_trackerd

systemctl start fdfs_storaged

# Set up fdfs shortcut aliases (each tool takes the client config as its first argument)

alias fdfs_delete_file='fdfs_delete_file /etc/fdfs/client.conf'

alias fdfs_download_file='fdfs_download_file /etc/fdfs/client.conf'

alias fdfs_file_info='fdfs_file_info /etc/fdfs/client.conf'

alias fdfs_monitor='fdfs_monitor /etc/fdfs/client.conf'

alias fdfs_upload_file='fdfs_upload_file /etc/fdfs/client.conf'

alias fdfs_test='fdfs_test /etc/fdfs/client.conf'

Verification

fdfs_upload_file 1.jpg

group1/M00/00/00/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg
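With the aliases above, the stored file can be inspected and fetched back as a quick sanity check (the file ID is the one returned by the upload; the local target path is arbitrary):

fdfs_file_info group1/M00/00/00/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg
fdfs_download_file group1/M00/00/00/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg /tmp/check.jpg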

External access via nginx

Out of the box, FastDFS files cannot be fetched over HTTP. That is where nginx comes in: build the FastDFS third-party module into nginx and HTTP access works.

Install nginx

cd /usr/local/

wget http://nginx.org/download/nginx-1.15.4.tar.gz

tar -zxvf nginx-1.15.4.tar.gz

# Build and install

cd nginx-1.15.4/

./configure --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module --with-compat --with-pcre-jit --add-module=/usr/local/fastdfs/fastdfs-nginx-module-master/src

make

make install

cd ..

Edit the configuration

cp nginx-1.15.4/conf/nginx.conf nginx-1.15.4/conf/nginx.conf.bak

vim nginx-1.15.4/conf/nginx.conf

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" "$upstream_addr"';
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 8080;
        server_name localhost;
        # hand /groupN/M00/... requests to the FastDFS module
        location ~ /group([0-9])/M00 {
            ngx_fastdfs_module;
        }
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

# Copy the config into the installed nginx

cp nginx-1.15.4/conf/nginx.conf /usr/local/nginx/conf/

# Configure the nginx FastDFS module

cp /usr/local/fastdfs/fastdfs-nginx-module-master/src/mod_fastdfs.conf /etc/fdfs/

cp /etc/fdfs/mod_fastdfs.conf /etc/fdfs/mod_fastdfs.conf.bak

vim /etc/fdfs/mod_fastdfs.conf

tracker_server = 192.168.149.131:22122
url_have_group_name = true
store_path0 = /opt/fdfs/storage/data

Create the nginx systemd unit

# Register nginx as a systemd service
vim /usr/lib/systemd/system/nginx.service

[Unit]
Description=The nginx HTTP and reverse proxy server
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
# Nginx will fail to start if /run/nginx.pid already exists but has the wrong
# SELinux context. This might happen when running `nginx -t` from the cmdline.
# https://bugzilla.redhat.com/show_bug.cgi?id=1268621
ExecStartPre=/usr/bin/rm -f /usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true

systemctl daemon-reload

Symlink /usr/local/nginx/conf/ to /etc/nginx (more convenient to edit)

ln -s /usr/local/nginx/conf/ /etc/nginx

Start nginx

systemctl start nginx

Verification: open http://<server IP>:8080/ followed by the file ID returned earlier (e.g. http://192.168.149.131:8080/group1/M00/00/00/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg) and the file should be served.

2. Distributed deployment

As the (omitted) topology diagram shows, this deployment uses two storage groups, group1 and group2, carrying different business loads. Two trackers behind keepalived provide a highly available write path (nginx could also be used to load-balance them), while two nginx proxies, again paired with keepalived, handle reads. The failure of any single node within a group does not interrupt service. Read requests are load-balanced by nginx, which distributes them to the storages round-robin.

# Server roles
tracker1: 192.168.149.128 (primary)
tracker2: 192.168.149.130 (backup)
storage1: 192.168.149.128 (salt master and minion)
storage2: 192.168.149.129 (minion)
storage3: 192.168.149.130 (minion)
storage4: 192.168.149.131 (minion)
nginx1: 192.168.149.128 (backup)
nginx2: 192.168.149.130 (primary)

# Package versions
FastDFS: 6.0.8
NGINX: 1.15.4
Keepalived: 1.3.5

Environment setup (driven from the salt master)

Download the packages and dependencies

# Run on the salt master

mkdir /srv/salt/fastdfs

cd /srv/salt/fastdfs

wget --no-check-certificate https://github.com/happyfish100/libfastcommon/archive/refs/heads/master.zip -O libfastcommon-master.zip

wget --no-check-certificate https://github.com/happyfish100/fastdfs/archive/refs/heads/master.zip -O fastdfs.zip

wget --no-check-certificate https://github.com/happyfish100/fastdfs-nginx-module/archive/refs/heads/master.zip -O fastdfs-nginx-module.zip

unzip libfastcommon-master.zip

unzip fastdfs.zip

unzip fastdfs-nginx-module.zip

rm -rf *.zip

Push from the salt master

# Pre-build preparation

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cp.get_dir salt://fastdfs /usr/local

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'mkdir /usr/local/fastdfs -p'

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'mkdir /opt/fdfs/{tracker,storage,client} -p && mkdir /opt/fdfs/storage/data'

# Build and install libfastcommon and FastDFS

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'cd /usr/local/fastdfs/libfastcommon-master && sh make.sh clean && sh make.sh && sh make.sh install'

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'cd /usr/local/fastdfs/fastdfs-master && sh make.sh clean && sh make.sh && sh make.sh install'

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'cd /usr/local/fastdfs/fastdfs-master && sh setup.sh /etc/fdfs'

Tracker and storage configuration

Prepare the configs

cd /srv/salt/fastdfs

cp -r -a fastdfs-master/conf fdfs

cp -r -a fastdfs-master/systemd system

cp fastdfs-nginx-module-master/src/mod_fastdfs.conf fdfs/

[root@master /srv/salt/fastdfs]# ll fdfs/
anti-steal.jpg
client.conf
http.conf
mime.types
mod_fastdfs.conf
storage.conf
storage_ids.conf
tracker.conf

# Tracker config

vim fdfs/tracker.conf

base_path = /opt/fdfs/tracker

vim system/fdfs_trackerd.service  # systemd unit

PIDFile=/opt/fdfs/tracker/data/fdfs_trackerd.pid

# Storage config

vim fdfs/storage.conf

group_name = group1
base_path = /opt/fdfs/storage
store_path0 = /opt/fdfs/storage/data
tracker_server = 192.168.149.128:22122
tracker_server = 192.168.149.130:22122
http.server_port = 8888

vim system/fdfs_storaged.service  # systemd unit

PIDFile=/opt/fdfs/storage/data/fdfs_storaged.pid

# Client config

vim fdfs/client.conf

base_path = /opt/fdfs/client
tracker_server = 192.168.149.128:22122
tracker_server = 192.168.149.130:22122

# nginx FastDFS module config

vim fdfs/mod_fastdfs.conf

tracker_server = 192.168.149.128:22122
tracker_server = 192.168.149.130:22122
url_have_group_name = true
store_path0 = /opt/fdfs/storage/data
group_count = 2

[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/opt/fdfs/storage/data

[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/opt/fdfs/storage/data

# Register nginx as a systemd service
vim system/nginx.service

[Unit]
Description=The nginx HTTP and reverse proxy server
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
# Nginx will fail to start if /run/nginx.pid already exists but has the wrong
# SELinux context. This might happen when running `nginx -t` from the cmdline.
# https://bugzilla.redhat.com/show_bug.cgi?id=1268621
ExecStartPre=/usr/bin/rm -f /usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true

Distribute the configs

# Push the configs to all nodes first, then flip the two group2 nodes
salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cp.get_dir salt://fastdfs/fdfs /etc

# group2: 130 and 131 belong to group2, so rewrite group_name after the push
salt -L "192.168.149.130,192.168.149.131" cmd.run cmd="sed -i 's/group_name = group1/group_name = group2/g' /etc/fdfs/storage.conf"
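A quick sanity check (illustrative) that every node ended up in the intended group after the push and the sed:

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'grep "^group_name" /etc/fdfs/storage.conf'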

Distribute the systemd units

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cp.get_dir salt://fastdfs/system /usr/lib/systemd

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'systemctl daemon-reload'

Start the services

#tracker

salt -L '192.168.149.128,192.168.149.130' cmd.run 'systemctl start fdfs_trackerd'

salt -L '192.168.149.128,192.168.149.130' cmd.run 'systemctl status fdfs_trackerd'

#storage

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'systemctl start fdfs_storaged'

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run 'systemctl status fdfs_storaged'

Set up shortcut aliases

# fdfs shortcut aliases

alias fdfs_delete_file='fdfs_delete_file /etc/fdfs/client.conf'

alias fdfs_download_file='fdfs_download_file /etc/fdfs/client.conf'

alias fdfs_file_info='fdfs_file_info /etc/fdfs/client.conf'

alias fdfs_monitor='fdfs_monitor /etc/fdfs/client.conf'

alias fdfs_upload_file='fdfs_upload_file /etc/fdfs/client.conf'

alias fdfs_test='fdfs_test /etc/fdfs/client.conf'

nginx service

Build and install nginx

cd /srv/salt/fastdfs

wget http://nginx.org/download/nginx-1.15.4.tar.gz

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cp.get_file salt://fastdfs/nginx-1.15.4.tar.gz /usr/local

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run "cd /usr/local && tar -zxvf nginx-1.15.4.tar.gz"

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run "cd /usr/local/nginx-1.15.4 && ./configure --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module --with-compat --with-pcre-jit --add-module=/usr/local/fastdfs/fastdfs-nginx-module-master/src"

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run "cd /usr/local/nginx-1.15.4 && make && make install"

Distribute the nginx config

# Edit the config: nginx serves the download interface on behalf of storage

vim /srv/salt/fastdfs/nginx-1.15.4/conf/nginx.conf

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" "$upstream_addr"';
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 8080;
        server_name localhost;
        # hand /groupN/M00/... requests to the FastDFS module
        location ~ /group([0-9])/M00 {
            ngx_fastdfs_module;
        }
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

# Push the config

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cp.get_file salt://fastdfs/nginx-1.15.4/conf/nginx.conf /usr/local/nginx/conf/nginx.conf

# Symlink /usr/local/nginx/conf/ to /etc/nginx

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run cmd="ln -s /usr/local/nginx/conf/ /etc/nginx"

# Start nginx

salt -L "192.168.149.128,192.168.149.129,192.168.149.130,192.168.149.131" cmd.run "systemctl start nginx "

Verification

/usr/bin/fdfs_upload_file anti-steal.jpg

group2/M00/00/00/wKiVgmL7PG6AedB3AABdreSfEnY870.jpg
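It is also worth confirming that both groups registered with the trackers; fdfs_monitor prints per-group and per-storage details (output below is illustrative and heavily truncated):

fdfs_monitor /etc/fdfs/client.conf
# group count: 2
# Group 1: group name = group1, storages 192.168.149.128/129
# Group 2: group name = group2, storages 192.168.149.130/131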

3. Distributed high availability

Load balancing

# Configure nginx on 130 and 128 (the primary/backup proxy pair);
# the upstreams and server block below go inside the existing http {} context
vim /etc/nginx/nginx.conf

# group1
upstream fdfs_group1 {
    server 192.168.149.128:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.149.129:8080 weight=1 max_fails=2 fail_timeout=30s;
}
# group2
upstream fdfs_group2 {
    server 192.168.149.130:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.149.131:8080 weight=1 max_fails=2 fail_timeout=30s;
}

server {
    listen 8888;
    location /group1/M00 {
        proxy_pass http://fdfs_group1;
    }
    location /group2/M00 {
        proxy_pass http://fdfs_group2;
    }
}

systemctl restart nginx
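A quick way to exercise the proxy path (illustrative; the file ID is the one uploaded during verification above):

curl -I http://192.168.149.130:8888/group2/M00/00/00/wKiVgmL7PG6AedB3AABdreSfEnY870.jpg
# expect HTTP/1.1 200 OK: nginx matches /group2/M00 and forwards to a group2 storage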

Configure nginx high availability

192.168.149.128 (backup)
192.168.149.130 (master)
192.168.149.100 (VIP)

Install keepalived

yum install -y keepalived

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

keepalived config for the nginx pair

# 192.168.149.128 (backup: lower priority, matching the roles above)
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script nginx_check {
    script "/tools/nginx_check.sh"
    interval 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test
    }
    virtual_ipaddress {
        192.168.149.100
    }
    track_script {
        nginx_check
    }
    notify_master /tools/master.sh
    notify_backup /tools/backup.sh
    notify_fault /tools/fault.sh
    notify_stop /tools/stop.sh
}

# 192.168.149.130 (master)
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script nginx_check {
    script "/tools/nginx_check.sh"
    interval 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test
    }
    virtual_ipaddress {
        192.168.149.100
    }
    track_script {
        nginx_check
    }
    notify_master /tools/master.sh
    notify_backup /tools/backup.sh
    notify_fault /tools/fault.sh
    notify_stop /tools/stop.sh
}

keepalived helper scripts

mkdir /tools

cd /tools

# keepalived notification scripts: each simply logs which hook fired
cat master.sh
#!/bin/bash
ip=$(ip addr | grep inet | grep 192.168 | awk '{print $2}')
dt=$(date +'%Y%m%d %H:%M:%S')
echo "$0--${ip}--${dt}" >> /tmp/kp.log

cat backup.sh
#!/bin/bash
ip=$(ip addr | grep inet | grep 192.168 | awk '{print $2}')
dt=$(date +'%Y%m%d %H:%M:%S')
echo "$0--${ip}--${dt}" >> /tmp/kp.log

cat fault.sh
#!/bin/bash
ip=$(ip addr | grep inet | grep 192.168 | awk '{print $2}')
dt=$(date +'%Y%m%d %H:%M:%S')
echo "$0--${ip}--${dt}" >> /tmp/kp.log

cat stop.sh
#!/bin/bash
ip=$(ip addr | grep inet | grep 192.168 | awk '{print $2}')
dt=$(date +'%Y%m%d %H:%M:%S')
echo "$0--${ip}--${dt}" >> /tmp/kp.log

## keepalived health-check script: exit 0 if nginx is running, 1 otherwise
cat nginx_check.sh
#!/bin/bash
result=`pidof nginx`
if [ ! -z "${result}" ]; then
    exit 0
else
    exit 1
fi

mv tracker.sh tracker_check.sh

## same idea for the tracker: exit 0 if fdfs_trackerd is running
cat tracker_check.sh
#!/bin/bash
result=`pidof fdfs_trackerd`
if [ ! -z "${result}" ]; then
    exit 0
else
    exit 1
fi

# Make the scripts executable, then restart keepalived

cd /tools/ && chmod +x *.sh

systemctl restart keepalived.service
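With both nodes running, a simple failover test (illustrative) is to stop nginx on the master and watch the VIP move:

# on 192.168.149.130 (master): stop nginx so nginx_check.sh starts failing
systemctl stop nginx
# on 192.168.149.128 (backup): the VIP should appear within a second or two
ip a | grep 192.168.149.100
# /tmp/kp.log on both nodes records which notify_* hooks fired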

Configure tracker high availability

192.168.149.128 (master)
192.168.149.130 (backup)
192.168.149.200 (VIP)

# 192.168.149.128 (tracker master) -- append to the existing /etc/keepalived/keepalived.conf;
# note the separate instance name and virtual_router_id so it coexists with the nginx instance
vim /etc/keepalived/keepalived.conf

vrrp_script tracker_check {
    script "/tools/tracker_check.sh"
    interval 1
}
vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test
    }
    virtual_ipaddress {
        192.168.149.200
    }
    track_script {
        tracker_check
    }
    notify_master /tools/master.sh
    notify_backup /tools/backup.sh
    notify_fault /tools/fault.sh
    notify_stop /tools/stop.sh
}

# 192.168.149.130 (tracker backup) -- likewise appended to the existing keepalived.conf
vim /etc/keepalived/keepalived.conf

vrrp_script tracker_check {
    script "/tools/tracker_check.sh"
    interval 1
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test
    }
    virtual_ipaddress {
        192.168.149.200
    }
    track_script {
        tracker_check
    }
    notify_master /tools/master.sh
    notify_backup /tools/backup.sh
    notify_fault /tools/fault.sh
    notify_stop /tools/stop.sh
}

keepalived scripts

The notification scripts and tracker_check.sh under /tools are the same ones created in the nginx HA section above; reuse them here.

# Make the scripts executable, then restart keepalived

cd /tools/ && chmod +x *.sh

systemctl restart keepalived.service

4. Cluster management

The seven FastDFS storage states

INIT: initializing; no source server has yet been assigned to sync existing data from
WAIT_SYNC: waiting to sync; a source server for the existing data has been assigned
SYNCING: synchronizing
DELETED: deleted; the server has been removed from the group
OFFLINE: offline
ONLINE: online, but not yet ready to serve
ACTIVE: online and serving
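fdfs_monitor shows which state each storage is currently in; an illustrative, truncated excerpt:

fdfs_monitor /etc/fdfs/client.conf
# ...
# Storage 1:
#     ip_addr = 192.168.149.131  ACTIVE
# ...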

Adding a storage node

# After installing the packages, configure storage.conf and mod_fastdfs.conf on the new node
vim /etc/fdfs/storage.conf

tracker_server = xxx.xxx.xxx.xxx:22122
tracker_server = xxx.xxx.xxx.xxx:22122
group_name = group1

vim /etc/fdfs/mod_fastdfs.conf

tracker_server = xxx.xxx.xxx.xxx:22122
tracker_server = xxx.xxx.xxx.xxx:22122
group_count = 2

[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/opt/fdfs/storage/data

# Start the storage service; it automatically syncs its group's existing data and registers with the trackers

systemctl start fdfs_storaged.service

Removing a storage node

# Stop the storage node being removed

systemctl stop fdfs_storaged.service

# Then remove it on the tracker side
# Format: fdfs_monitor /etc/fdfs/client.conf delete <group_name> <storage IP>

fdfs_monitor /etc/fdfs/client.conf delete group2 192.168.149.131

# Check the cluster: the removed storage should now show state DELETED

fdfs_monitor /etc/fdfs/client.conf

Adding a tracker node

# Add a tracker_server entry for the new tracker to storage.conf and mod_fastdfs.conf on every storage node

tracker_server=xxx.xxx.xxx.xxx:22122

tracker_server=xxx.xxx.xxx.new:22122

# Add the extra tracker_server entry to client.conf as well, then run fdfs_monitor /etc/fdfs/client.conf; the tracker server count should now be 2

Refreshing FastDFS node information

To rebuild the node information reported by fdfs_monitor:

# Stop the storage and tracker services across the cluster

systemctl stop fdfs_storaged.service

systemctl stop fdfs_trackerd.service

# Clear the tracker base_path

rm -rf /opt/fdfs/tracker/*

# Restart the services

systemctl start fdfs_trackerd.service

systemctl start fdfs_storaged.service

FAQ

"http.mime_types_filename" not exist or is empty

Symptom

Accessing the storage locally works fine: uploads and downloads both succeed.

[root@localhost ~]# fdfs_file_info group1/M00/00/00/wKiVg2L8Uq6AO3LzAADZ-GROavg913.jpg
GET FROM SERVER: false
file type: normal
source storage id: 0
source ip address: 192.168.149.131
file create timestamp: 2022-08-17 10:30:06
file size: 55800
file crc32: 1682860792 (0x644e6af8)

But after starting nginx, the storage's resources cannot be reached from outside.

The nginx error log shows the following:

[root@localhost ~]# cat /usr/local/nginx/logs/error.log
param "http.mime_types_filename" not exist or is empty
# i.e. the parameter "http.mime_types_filename" does not exist or is empty

Troubleshooting

First check whether /etc/fdfs/ contains http.conf and mime.types (the file-type mapping table). Both are there. Strange: if they exist, why the error?

Next look at /etc/fdfs/mod_fastdfs.conf, the FastDFS extension module's configuration file.

# I had already modified mod_fastdfs.conf, but backed it up first, so inspect the backup

[root@localhost ~]# cat /etc/fdfs/mod_fastdfs.conf.bak

# use "#include" directive to include HTTP config file

# NOTE: #include is an include directive, do NOT remove the # before include

#include http.conf

Now it becomes clear. The original config documents the http.conf line like this:

# NOTE: #include is an include directive, do NOT remove the # before include

In other words: do not remove the # in front of "#include http.conf" — it is an include directive, not a comment.

While editing the config I had deleted every line starting with # along with the blank lines, which is exactly what caused the error.

Fix

Restore from the backup and edit again, changing values in place this time (there is no need to strip the #-prefixed lines or blank lines).

removing protocol iptable drop rule

Symptom

While building the high-availability distributed FastDFS cluster, requests to the keepalived VIP got no response.

Check keepalived's status:

[root@localhost ~]# systemctl status keepalived.service

One log line stands out:

VRRP_Instance(VI_1) removing protocol iptable drop rule

Troubleshooting

Since the VIP is unreachable, the first suspect is the network.

First confirm the VIP was actually assigned:

[root@localhost ~]# ip a
inet 192.168.149.100/32 scope global ens33

The VIP is there.

In test environments I normally disable the firewall and SELinux; check each in turn:

[root@localhost ~]# getenforce

Disabled

SELinux is disabled.

Now check whether iptables has any rules. There is one: a rule that DROPs all traffic whose destination matches the keepalived dst ipset.

[root@localhost ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 match-set keepalived dst

The picture is now clear: keepalived added a firewall rule automatically, which is why requests to the VIP got no response.

Look at the keepalived configuration:

[root@localhost ~]# cat /etc/keepalived/keepalived.conf

vrrp_strict

The vrrp_strict option is present. With vrrp_strict, keepalived enforces the VRRP protocol strictly, and the service will not start when:

1. no VIP address is configured
2. unicast peers are configured
3. IPv6 addresses are used with VRRP version 2

Moreover, when vrrp_strict is enabled without vrrp_iptables, keepalived automatically adds iptables rules, which by default makes the VIP unreachable; it is best not to set vrrp_strict at all.

vrrp_iptables: when enabled together with vrrp_strict, no firewall rule is added. If vrrp_strict is not configured, vrrp_iptables is unnecessary.

Fix

Remove the vrrp_strict line and restart keepalived; the VIP becomes reachable.
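A quick confirmation of the fix (illustrative):

systemctl restart keepalived.service
# the automatic DROP rule should be gone now
iptables -L -n | grep -i keepalived
# and the VIP should respond
curl -I http://192.168.149.100:8888/group2/M00/00/00/wKiVgmL7PG6AedB3AABdreSfEnY870.jpg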
