Articles in the category "Ops"

If you have money to spare, skip this post: you can simply buy shared bandwidth, multi-core CPUs, plenty of RAM, cloud disks, and so on.
This article is about making full use of several low-spec hosts. Why buy low-spec machines? Because the author is broke and cares about price/performance.
During the Double 11 sale, cloud hosts were heavily discounted, so I picked up 10 entry-level instances, each with 1 CPU core, 1 GB RAM, 2 Mbps bandwidth, and a 50 GB disk. Together that gives 10 independent IPs, 20 Mbps of bandwidth, and 500 GB of disk. 20 Mbps of bandwidth is not cheap, so I wanted to squeeze everything out of it.

Below is the nginx load-balancing configuration for a static file server; suggestions are welcome.

Define the upstream servers with consistent hashing (`hash $uri consistent`), which makes good use of the disks: files for different request URIs end up cached on different servers.

upstream backend {
    hash $uri consistent;
    server 10.5.245.1:30018;
    server 10.5.245.2:30018;
    server 10.5.245.3:30018;
    server 10.5.245.4:30018;
    server 10.5.245.5:30018;
    server 10.5.245.6:30018;
    server 10.5.245.7:30018;
    server 10.5.245.8:30018;
    server 10.5.245.9:30018;
    server 10.5.245.10:30018;
}
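To see why consistent hashing helps here, a minimal ketama-style sketch in Python (illustrative only: nginx's `hash $uri consistent` uses its own internal ketama implementation, and the vnode count below is an assumption):

```python
# Sketch of ketama-style consistent hashing over the 10 backends above.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=160):
        # place vnodes points per node on a hash ring
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                h = int(hashlib.md5(f"{node}#{i}".encode()).hexdigest(), 16)
                self._ring.append((h, node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    def get(self, key):
        # a key maps to the first ring point at or after its hash
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        idx = bisect.bisect(self._keys, h) % len(self._ring)
        return self._ring[idx][1]

servers = [f"10.5.245.{i}:30018" for i in range(1, 11)]
ring = ConsistentHashRing(servers)
# the same URI always lands on the same backend, so its cached copy is reused
assert ring.get("/oss/a.jpg") == ring.get("/oss/a.jpg")
# removing one backend only remaps the keys that lived on it (~1/10),
# not the whole key space as plain "hash % n" would
smaller = ConsistentHashRing(servers[:-1])
moved = sum(ring.get(f"/f{i}") != smaller.get(f"/f{i}") for i in range(1000))
print(moved)  # roughly a tenth of the keys move
```

This stability under node loss is why the per-server disk caches stay warm when one of the 10 hosts goes down.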

nginx reverse proxy: if the request comes from an internal IP, cache the file.

#PROXY-START/

location ^~ /oss/
{
    proxy_pass http://www.liugang.net/;
    proxy_set_header Host www.liugang.net;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header REMOTE-HOST $remote_addr;

    add_header X-Cache $upstream_cache_status;
    # nginx cache settings (the cache_one zone must be defined by a
    # proxy_cache_path directive at http level)
    proxy_ignore_headers Set-Cookie Cache-Control Expires;
    proxy_cache cache_one;
    # fall back to the request URI when $ckey was not set by the "/" location
    if ($ckey = "") {
        set $ckey $uri;
    }
    proxy_cache_key $ckey;
    proxy_cache_valid 200 304 301 302 30d;

}
location ^~ /  
{
    # requests forwarded from the other 10.5.245.x nodes are rewritten
    # to /oss/ so the file is fetched from the origin and cached locally
    if ($proxy_add_x_forwarded_for ~* "10\.5\.245\.")
    {
        set $ckey $uri;
        rewrite ^/(.*) /oss/$1 last;
    }
    proxy_pass http://backend/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header REMOTE-HOST $remote_addr;
    expires 24h;

}
#PROXY-END/
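The `cache_one` zone referenced by `proxy_cache` is not defined in this snippet; it would come from a `proxy_cache_path` directive in the `http` block. A sketch (the path, zone size, and limits are illustrative assumptions, not from the original config):

```nginx
# http-level cache definition assumed by "proxy_cache cache_one" above;
# path and sizes are placeholders, tune them to the 50 GB disks
proxy_cache_path /www/cache levels=1:2 keys_zone=cache_one:200m
                 inactive=30d max_size=40g;
```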

To make full use of the total bandwidth, add 10 A records for the domain, one per IP, so DNS round-robin spreads clients across all the hosts.

PUT /my_index
{
    "settings": {
        "analysis": {
            "char_filter": {
                "&_to_and": {
                    "type": "mapping",
                    "mappings": [ "&=> and " ]
                }
            },
            "filter": {
                "my_stopwords": {
                    "type": "stop",
                    "stopwords": [ "the", "a" ]
                },
                "my_synonym_filter": {
                    "type": "synonym",
                    "synonyms": [
                        "british,english",
                        "queen,monarch"
                    ]
                }
            },
            "analyzer": {
                "my_analyzer": {
                    "type": "custom",
                    "char_filter": [ "html_strip", "&_to_and" ],
                    "tokenizer": "standard",
                    "filter": [ "lowercase", "my_stopwords", "my_synonym_filter" ]
                }
            }
        }
    }
}
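The custom analyzer can be exercised with the `_analyze` API; for example (the sample text is made up, and the token list below is reasoned from the filter chain, not verified against a live cluster):

```
POST /my_index/_analyze
{
    "analyzer": "my_analyzer",
    "text": "The & British Queen"
}
```

The mapping char filter first turns `&` into `and`, the standard tokenizer and `lowercase` filter normalize the words, `the` is dropped by the stop filter, and `british`/`queen` are expanded by the synonym filter, so the resulting token stream should contain `and`, `british`, `english`, `queen`, and `monarch`.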

On the master:

--binlog-do-db    databases to record in the binary log (note: MySQL does not split comma-separated lists here; repeat the option once per database)
--binlog-ignore-db    databases to exclude from the binary log (likewise, one option per database)

Example of keeping the grant tables out of master-slave replication:

On the master:
binlog-do-db=YYY    database to replicate; omit this line to log all databases
binlog-ignore-db = mysql    do not write the mysql schema to the binlog, so the slave keeps its own privileges
binlog-ignore-db = performance_schema
binlog-ignore-db = information_schema

On the slave (the options are spelled replicate-*, not replication-*):
--replicate-do-db    databases to replicate (one option per database)
--replicate-ignore-db    databases to ignore during replication (one option per database)
--replicate-do-table    tables to replicate
--replicate-ignore-table    tables to ignore during replication
--replicate-wild-do-table    like replicate-do-table, but accepts wildcards
--replicate-wild-ignore-table    like replicate-ignore-table, but accepts wildcards
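Putting the slave-side options together, a minimal my.cnf sketch using the wildcard form (the `app` schema name and `server-id` value are illustrative assumptions):

```ini
# slave my.cnf fragment; "app" and server-id are placeholders
[mysqld]
server-id = 2
replicate-wild-do-table = app.%
replicate-ignore-db     = mysql
replicate-ignore-db     = performance_schema
replicate-ignore-db     = information_schema
```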

Step 1: enable the cache modules

LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so

Step 2: create the directory /www/wwwroot/proxycache and grant the Apache user read/write permission on it.
Step 3: configure the proxy and the cache

<IfModule mod_proxy.c>
    ProxyRequests Off
    SSLProxyEngine on
    ProxyPass / http://www.baidu.com/
    ProxyPassReverse / http://www.baidu.com/
    
    <IfModule mod_cache.c>
        # disk cache: default TTL 86400 s, bodies between 1 byte and 1 MB
        CacheDefaultExpire 86400
        CacheEnable disk /
        CacheRoot /www/wwwroot/proxycache
        CacheDirLevels 4
        CacheDirLength 4
        CacheMaxFileSize 1048576
        CacheMinFileSize 1
    </IfModule>
</IfModule>

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.15.1
docker pull docker.elastic.co/kibana/kibana:7.15.1
docker network create elastic
docker run --name es01 --net elastic -p 9200:9200 -p 9300:9300  -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms2g -Xmx2g" -v /www/server/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /www/server/elasticsearch/data:/usr/share/elasticsearch/data -v /www/server/elasticsearch/plugins:/usr/share/elasticsearch/plugins -d docker.elastic.co/elasticsearch/elasticsearch:7.15.1
docker run --name kib01 --net elastic -p 5601:5601 -e "I18N_LOCALE=zh-CN" -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.15.1
# run this after the logstash container below has been created;
# docker cp needs an existing target container
docker cp /www/server/logstash/mysql-connector-java-8.0.27.jar logstash:/usr/share/logstash/config/jars/mysql-connector-java-8.0.27.jar
docker run \
--name logstash \
--net elastic \
--restart=always \
-p 5044:5044 \
-p 9600:9600 \
-e ES_JAVA_OPTS="-Duser.timezone=Asia/Shanghai" \
-v /www/server/logstash/config:/usr/share/logstash/config \
-v /www/server/logstash/data:/usr/share/logstash/data \
-v /www/server/logstash/pipeline:/usr/share/logstash/pipeline \
-d docker.elastic.co/logstash/logstash:7.15.1
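The MySQL connector jar copied into the container suggests a JDBC input pipeline. A sketch of what /www/server/logstash/pipeline/logstash.conf might contain (the host, credentials, query, and index name are all illustrative assumptions):

```
input {
  jdbc {
    jdbc_driver_library    => "/usr/share/logstash/config/jars/mysql-connector-java-8.0.27.jar"
    jdbc_driver_class      => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://db-host:3306/app"
    jdbc_user              => "user"
    jdbc_password          => "password"
    schedule               => "* * * * *"   # poll once a minute
    statement              => "SELECT * FROM articles"
  }
}
output {
  elasticsearch {
    hosts => ["http://es01:9200"]
    index => "articles"
  }
}
```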