Deploying Frontend and Backend Projects to a Server (Traditional Deployment and Docker Deployment)

Internal vs. external network

The development environment connects over the external network (8.140.26.187); the test/production environments connect over the internal network (172.20.59.17).
The internal and external addresses differ, but they point to the same database.
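A quick way to check which address is reachable from a given machine (a sketch only; it assumes the database listens on port 3306 — swap in the real port if it differs — and nc requires netcat to be installed):

    # reachability of the external (dev) and internal (test/prod) addresses
    ping -c 2 8.140.26.187
    ping -c 2 172.20.59.17
    # optionally check the database port itself
    nc -zv 8.140.26.187 3306
    nc -zv 172.20.59.17 3306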
Private (internal) IP address ranges include:

10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255

Notes (using a Spring Boot project on a Linux server as the example):
1. Install the software the project needs on Linux, e.g. JDK, MySQL, Redis.
2. Package the Spring Boot project as a jar and upload it to the Linux server.
3. Run it (see the sketch after this list):
   Foreground: java -jar xxx.jar
   Background: nohup java -jar xxx.jar &
   (How to stop it? 1. Find the PID with ps -ef; 2. kill -9 PID)
4. Pay attention to the logging configuration; once deployed, info/error output can be checked in the log files.
5. For debugging, you can start the application locally and attach a token issued by the server.
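A minimal start/stop sketch for the background option in step 3 (the jar name, log file, and PID file are placeholders; adjust them to the actual service):

    # start the jar in the background and redirect all output to a log file
    nohup java -jar xxx.jar > app.log 2>&1 &
    echo $! > app.pid        # remember the PID for later

    # follow the log
    tail -f app.log

    # stop the service using the recorded PID
    kill -9 "$(cat app.pid)"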
Installing the JDK (for the other software, see the CSDN favorites – Deployment), using JDK 11 as the example

1. Download from the official site

https://www.oracle.com/java/technologies/downloads/#java11

2. Upload the archive to Linux and extract it

tar -zxvf xxxx.tar.gz

3. Adjust the configuration file

sudo vim /etc/profile

JAVA_HOME=/myself/soft/jdk-11.0.25
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME PATH

source /etc/profile

(Note: JDK 11 no longer ships a separate jre directory or dt.jar/tools.jar/rt.jar, so JAVA_HOME and PATH are the settings that matter; the JRE_HOME and CLASS_PATH lines are leftovers from JDK 8-style setups and can be omitted.)

4. Verify

java -version

Traditional deployment

In this setup, the frontend and the backend are deployed on separate servers.
Backend deployment

- Environment preparation: a Linux server with the required runtime environment installed in advance, e.g. MySQL/Redis/MQ/Nacos/Nginx (managed cloud services such as Alibaba Cloud or Baidu Cloud can be used instead).
- Modify the code's configuration files: point MySQL/Redis/MQ etc. at the server's instances; multiple profile files can be used to separate environments.
- Maven packaging: run Maven's package goal to produce the .jar file and upload it to the Linux server.
- Start from the command line: nohup java -jar xxx.jar &

Frontend deployment

- Web packaging: build the Vue.js project and upload the generated dist directory to the relevant directory on the Linux server, e.g. /myself/vue_code/dist-linkwe.
- Configure an nginx reverse proxy: serve the static files and proxy requests to the backend API.

user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    server {
        listen 8077;             # port for the frontend
        server_name localhost;   # if a domain is configured, put it here and change the port to 80 or 443

        location / {
            root /myself/vue_code/dist-linkwe;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
            proxy_read_timeout 150;
            # CORS handling
            add_header Access-Control-Allow-Origin '*' always;
            add_header Access-Control-Allow-Headers '*';
            add_header Access-Control-Allow-Methods '*';
            add_header Access-Control-Allow-Credentials 'true';
            if ($request_method = 'OPTIONS') {
                return 204;
            }
        }

        location ^~/linkwechat-api/ {
            proxy_buffer_size 1024k;          # buffer size nginx uses for the upstream response headers
            proxy_buffers 16 1024k;           # proxy buffers; suited to pages averaging under 32k
            proxy_busy_buffers_size 2048k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 2048k; # temp-file write size; larger responses are spooled from the upstream
            proxy_pass http://localhost:6180/;
        }

        error_page 404 401 403 500 502 503 504 $host/404;
    }

    server {
        listen 8088;
        server_name localhost;

        location / {
            root /myself/vue_code/dist-iYque;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
            proxy_read_timeout 150;
            # CORS handling
            add_header Access-Control-Allow-Origin '*' always;
            add_header Access-Control-Allow-Headers '*';
            add_header Access-Control-Allow-Methods '*';
            add_header Access-Control-Allow-Credentials 'true';
            if ($request_method = 'OPTIONS') {
                return 204;
            }
        }

        location ^~/iYque-api/ {
            proxy_buffer_size 1024k;          # buffer size nginx uses for the upstream response headers
            proxy_buffers 16 1024k;           # proxy buffers; suited to pages averaging under 32k
            proxy_busy_buffers_size 2048k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 2048k; # temp-file write size; larger responses are spooled from the upstream
            proxy_pass http://localhost:8085/;
        }

        error_page 404 401 403 500 502 503 504 $host/404;
    }
}

Docker deployment (the frontend is deployed the same way as in traditional deployment)

Common commands

# pull an image
docker pull <image>:<tag>
# remove an image
docker rmi <image>:<tag>
# build an image
docker build -t <new image name>:<TAG> -f <Dockerfile path> <build context directory>
# start a container
docker run -d (detached) -p <host port>:<container port> --name <container name> -v <host path>:<container path> -e <env var> <image>:<tag>
# enter a container
docker exec -it <container name> /bin/bash
# stop a container
docker stop <container name>
# start a container
docker start <container name>
# restart a container
docker restart <container name>
# copy files
docker cp <container name>:<path> <host path>   (or: docker cp <host path> <container name>:<path>)
# remove a container
docker rm <container name> -f (force)
# view container logs
docker logs <container name> -n <lines>
# compose orchestration
docker-compose up
# stop the services temporarily, keeping all data and state
docker-compose stop
# stop the services and remove the containers and networks; data volumes are kept
docker-compose down
# full cleanup: remove the containers, networks, and data volumes
docker-compose down -v

Environment preparation

Install docker/docker-compose. For a single monolithic service, docker alone is enough; for microservices, docker-compose orchestration is recommended, saving you from starting each container one by one.
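A quick check that the environment is ready (a minimal sketch; it assumes Docker and the classic docker-compose binary were already installed through the distribution's package manager or Docker's install script, so the exact versions printed will differ):

    # confirm the client, the compose binary, and the running daemon
    docker -v
    docker-compose -v
    systemctl status docker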
Installing MySQL with docker

Pull the image:

docker pull mysql:5.7

Start the container:

docker run -d -p 3307:3306 --name mysql57 -v /docker/volumn/mysql57/log:/var/log/mysql -v /docker/volumn/mysql57/data:/var/lib/mysql -v /docker/volumn/mysql57/conf:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7

Modify the configuration file:

cd /docker/volumn/mysql57/conf
vim my.cnf

[client]
default_character_set=utf8
[mysqld]
collation_server=utf8_general_ci
character_set_server=utf8

docker restart mysql57

Installing Redis with docker

Create the Redis configuration file used for the volume mount:

mkdir -p /docker/volumn/redis608/conf
vim /docker/volumn/redis608/conf/redis.conf

# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#bind 127.0.0.1
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
always-show-logo yes
#save 900 1
#save 300 10
#save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes

Pull the image:

docker pull redis:6.0.8

Start the container:

docker run -p 6380:6379 --name redis608 -v /docker/volumn/redis608/data:/data -v /docker/volumn/redis608/conf/redis.conf:/etc/redis/redis.conf -d redis:6.0.8 redis-server /etc/redis/redis.conf

Installing Nacos with docker

Create the database nacos_config and run the SQL script:

/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info */ /******************************************/ CREATE TABLE `config_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) DEFAULT NULL, `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `c_desc` varchar(256) DEFAULT NULL, `c_use` varchar(64) DEFAULT NULL, `effect` varchar(64) DEFAULT NULL, `type` varchar(64) DEFAULT NULL, `c_schema` text, `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_aggr */ /******************************************/ CREATE TABLE `config_info_aggr` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `datum_id` varchar(255) NOT NULL COMMENT 'datum_id', `content` longtext NOT NULL COMMENT '内容', `gmt_modified` datetime NOT NULL COMMENT '修改时间', `app_name` varchar(128) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_beta */ /******************************************/ CREATE TABLE `config_info_beta` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_info_tag */ /******************************************/ CREATE TABLE `config_info_tag` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `tag_id` varchar(128) NOT NULL COMMENT 'tag_id', `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL COMMENT 'content', `md5` varchar(32) DEFAULT NULL COMMENT 'md5', 
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', `src_user` text COMMENT 'source user', `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip', PRIMARY KEY (`id`), UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = config_tags_relation */ /******************************************/ CREATE TABLE `config_tags_relation` ( `id` bigint(20) NOT NULL COMMENT 'id', `tag_name` varchar(128) NOT NULL COMMENT 'tag_name', `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type', `data_id` varchar(255) NOT NULL COMMENT 'data_id', `group_id` varchar(128) NOT NULL COMMENT 'group_id', `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id', `nid` bigint(20) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`nid`), UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`), KEY `idx_tenant_id` (`tenant_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = group_capacity */ /******************************************/ CREATE TABLE `group_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群', `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值', `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量', `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值', `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值', `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值', `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量', `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), UNIQUE KEY `uk_group_id` (`group_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = his_config_info */ /******************************************/ CREATE TABLE `his_config_info` ( `id` bigint(20) unsigned NOT NULL, `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `data_id` varchar(255) NOT NULL, `group_id` varchar(128) NOT NULL, `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name', `content` longtext NOT NULL, `md5` varchar(32) DEFAULT NULL, `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `src_user` text, `src_ip` varchar(50) DEFAULT NULL, `op_type` char(10) DEFAULT NULL, `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段', `encrypted_data_key` text NOT NULL COMMENT '秘钥', PRIMARY KEY (`nid`), KEY `idx_gmt_create` (`gmt_create`), KEY `idx_gmt_modified` (`gmt_modified`), KEY `idx_did` (`data_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造'; /******************************************/ /* 数据库全名 = nacos_config */ /* 表名称 = tenant_capacity */ /******************************************/ CREATE TABLE `tenant_capacity` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID', `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID', `quota` int(10) unsigned NOT 
NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';

CREATE TABLE `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
  `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE `users` (
  `username` varchar(50) NOT NULL PRIMARY KEY,
  `password` varchar(500) NOT NULL,
  `enabled` boolean NOT NULL
);

CREATE TABLE `roles` (
  `username` varchar(50) NOT NULL,
  `role` varchar(50) NOT NULL,
  UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE `permissions` (
  `role` varchar(50) NOT NULL,
  `resource` varchar(255) NOT NULL,
  `action` varchar(8) NOT NULL,
  UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);

INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');

Create the Nacos configuration file:

cd /docker/volumn/nacos220/conf
vim application.properties

Note: change the database-related settings (spring.sql.init.platform, db.num, db.url.0, db.user.0, db.password.0 — highlighted in the original post) to your own database configuration.
# # Copyright 1999-2021 Alibaba Group Holding Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http:// .apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # #*************** Spring Boot Related Configurations ***************# ### Default web context path: server.servlet.contextPath=/nacos ### Include message field server.error.include-message=ALWAYS ### Default web server port: server.port=8848 #*************** Network Related Configurations ***************# ### If prefer hostname over ip for Nacos server addresses in cluster.conf: # nacos.inetutils.prefer-hostname-over-ip=false ### Specify local server's IP: # nacos.inetutils.ip-address= #*************** Config Module Related Configurations ***************# ### If use MySQL as datasource: # spring.datasource.platform=mysql ### Count of DB: # db.num=1 ### Connect URL of DB: # db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC # db.user.0=nacos # db.password.0=nacos spring.sql.init.platform=mysql db.num=1 db.url.0=jdbc:mysql://192.168.120.128:3307/nacos_config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC db.user.0=root db.password.0=123456 ### Connection pool configuration: hikariCP db.pool.config.connectionTimeout=30000 db.pool.config.validationTimeout=10000 db.pool.config.maximumPoolSize=20 db.pool.config.minimumIdle=2 #*************** Naming Module Related Configurations ***************# ### If enable data warmup. If set to false, the server would accept request without local data preparation: # nacos.naming.data.warmup=true ### If enable the instance auto expiration, kind like of health check of instance: # nacos.naming.expireInstance=true ### Add in 2.0.0 ### The interval to clean empty service, unit: milliseconds. # nacos.naming.clean.empty-service.interval=60000 ### The expired time to clean empty service, unit: milliseconds. # nacos.naming.clean.empty-service.expired-time=60000 ### The interval to clean expired metadata, unit: milliseconds. # nacos.naming.clean.expired-metadata.interval=5000 ### The expired time to clean metadata, unit: milliseconds. # nacos.naming.clean.expired-metadata.expired-time=60000 ### The delay time before push task to execute from service changed, unit: milliseconds. # nacos.naming.push.pushTaskDelay=500 ### The timeout for push task execute, unit: milliseconds. # nacos.naming.push.pushTaskTimeout=5000 ### The delay time for retrying failed push task, unit: milliseconds. # nacos.naming.push.pushTaskRetryDelay=1000 ### Since 2.0.3 ### The expired time for inactive client, unit: milliseconds. 
# nacos.naming.client.expired.time=180000 #*************** CMDB Module Related Configurations ***************# ### The interval to dump external CMDB in seconds: # nacos.cmdb.dumpTaskInterval=3600 ### The interval of polling data change event in seconds: # nacos.cmdb.eventTaskInterval=10 ### The interval of loading labels in seconds: # nacos.cmdb.labelTaskInterval=300 ### If turn on data loading task: # nacos.cmdb.loadDataAtStart=false #*************** Metrics Related Configurations ***************# ### Metrics for prometheus #management.endpoints.web.exposure.include=* ### Metrics for elastic search management.metrics.export.elastic.enabled=false #management.metrics.export.elastic.host=http://localhost:9200 ### Metrics for influx management.metrics.export.influx.enabled=false #management.metrics.export.influx.db=springboot #management.metrics.export.influx.uri=http://localhost:8086 #management.metrics.export.influx.auto-create-db=true #management.metrics.export.influx.consistency=one #management.metrics.export.influx pressed=true #*************** Access Log Related Configurations ***************# ### If turn on the access log: server.tomcat.accesslog.enabled=true ### The access log pattern: server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i ### The directory of access log: server.tomcat.basedir=file:. #*************** Access Control Related Configurations ***************# ### If enable spring security, this option is deprecated in 1.2.0: #spring.security.enabled=false ### The ignore urls of auth nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/** ### The auth system to use, currently only 'nacos' and 'ldap' is supported: nacos.core.auth.system.type=nacos ### If turn on auth system: nacos.core.auth.enabled=false ### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay. nacos.core.auth.caching.enabled=true ### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version. nacos.core.auth.enable.userAgentAuthWhite=false ### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false. ### The two properties is the white list for auth and used by identity the request from other server. 
nacos.core.auth.server.identity.key=serverIdentity nacos.core.auth.server.identity.value=security ### worked when nacos.core.auth.system.type=nacos ### The token expiration in seconds: nacos.core.auth.plugin.nacos.token.expire.seconds=18000 ### The default token (Base64 String): nacos.core.auth.plugin.nacos.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789 ### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username #nacos.core.auth.ldap.url=ldap://localhost:389 #nacos.core.auth.ldap.basedc=dc=example,dc=org #nacos.core.auth.ldap.userDn=cn=admin,${nacos.core.auth.ldap.basedc} #nacos.core.auth.ldap.password=admin #nacos.core.auth.ldap.userdn=cn={0},dc=example,dc=org #nacos.core.auth.ldap.filter.prefix=uid #nacos.core.auth.ldap.case.sensitive=true #*************** Istio Related Configurations ***************# ### If turn on the MCP server: nacos.istio.mcp.server.enabled=false #*************** Core Related Configurations ***************# ### set the WorkerID manually # nacos.core.snowflake.worker-id= ### Member-MetaData # nacos.core.member.meta.site= # nacos.core.member.meta.adweight= # nacos.core.member.meta.weight= ### MemberLookup ### Addressing pattern category, If set, the priority is highest # nacos.core.member.lookup.type=[file,address-server] ## Set the cluster list with a configuration file or command-line argument # nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809 ## for AddressServerMemberLookup # Maximum number of retries to query the address server upon initialization # nacos.core.address-server.retry=5 ## Server domain name address of [address-server] mode # address.server.domain=jmenv.tbsite.net ## Server port of [address-server] mode # address.server.port=8080 ## Request address of [address-server] mode # address.server.url=/nacos/serverlist #*************** JRaft Related Configurations ***************# ### Sets the Raft cluster election timeout, default value is 5 second # nacos.core.protocol.raft.data.election_timeout_ms=5000 ### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute # nacos.core.protocol.raft.data.snapshot_interval_secs=30 ### raft internal worker threads # nacos.core.protocol.raft.data.core_thread_num=8 ### Number of threads required for raft business request processing # nacos.core.protocol.raft.data.cli_service_thread_num=4 ### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat # nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe ### rpc request timeout, default 5 seconds # nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000 #*************** Distro Related Configurations ***************# ### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second. # nacos.core.protocol.distro.data.sync.delayMs=1000 ### Distro data sync timeout for one sync data, default 3 seconds. # nacos.core.protocol.distro.data.sync.timeoutMs=3000 ### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds. # nacos.core.protocol.distro.data.sync.retryDelayMs=3000 ### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds. # nacos.core.protocol.distro.data.verify.intervalMs=5000 ### Distro data verify timeout for one verify, default 3 seconds. 
# nacos.core.protocol.distro.data.verify.timeoutMs=3000
### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000
### enable to support prometheus service discovery
#nacos.prometheus.metrics.enabled=true

Pull the image:

docker pull nacos/nacos-server:v2.2.0

Start the container:

docker run -d --name nacos220 -e MODE=standalone -p 8849:8848 -v /docker/volumn/nacos220/conf/application.properties:/home/nacos/conf/application.properties -v /docker/volumn/nacos220/logs:/home/nacos/logs nacos/nacos-server:v2.2.0

Installing nginx with docker

Pull the image:

docker pull nginx:1.18.0

Copy out the files that will be mounted:

# start a temporary nginx container
docker run -p 81:80 --name nginx1180 -d nginx:1.18.0
# copy the data to be mounted out of the container
docker cp nginx1180:/etc/nginx/conf.d /docker/volumn/nginx1180/conf
docker cp nginx1180:/etc/nginx/nginx.conf /docker/volumn/nginx1180/conf
docker cp nginx1180:/usr/share/nginx/html /docker/volumn/nginx1180/html
# remove the temporary container
docker rm -f nginx1180

Start the container:

docker run -p 81:80 -p 8088:8088 -p 8077:8077 --name nginx1180 -v /docker/volumn/nginx1180/conf/nginx.conf:/etc/nginx/nginx.conf -v /docker/volumn/nginx1180/conf/conf.d:/etc/nginx/conf.d -v /docker/volumn/nginx1180/html:/usr/share/nginx/html -d nginx:1.18.0

Access test: http://<server IP>:81, e.g. 192.168.120.128:81

Modify the configuration file. Note: replace the IP with the server's actual address and do not use localhost, because inside the container localhost refers to the nginx container itself.
server {
    listen 8077;             # port for the frontend
    server_name localhost;   # if a domain is configured, put it here and change the port to 80 or 443

    location / {
        root /usr/share/nginx/html/dist-linkwe;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        proxy_read_timeout 150;
        # CORS handling
        add_header Access-Control-Allow-Origin '*' always;
        add_header Access-Control-Allow-Headers '*';
        add_header Access-Control-Allow-Methods '*';
        add_header Access-Control-Allow-Credentials 'true';
        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }

    location ^~/linkwechat-api/ {
        proxy_buffer_size 1024k;          # buffer size nginx uses for the upstream response headers
        proxy_buffers 16 1024k;           # proxy buffers; suited to pages averaging under 32k
        proxy_busy_buffers_size 2048k;    # buffer size under high load (proxy_buffers * 2)
        proxy_temp_file_write_size 2048k; # temp-file write size; larger responses are spooled from the upstream
        proxy_pass http://192.168.120.128:6180/;
    }

    error_page 404 401 403 500 502 503 504 $host/404;
}

server {
    listen 8088;
    server_name localhost;

    location / {
        root /usr/share/nginx/html/dist-iYque;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        proxy_read_timeout 150;
        # CORS handling
        add_header Access-Control-Allow-Origin '*' always;
        add_header Access-Control-Allow-Headers '*';
        add_header Access-Control-Allow-Methods '*';
        add_header Access-Control-Allow-Credentials 'true';
        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }

    location ^~/iYque-api/ {
        proxy_buffer_size 1024k;          # buffer size nginx uses for the upstream response headers
        proxy_buffers 16 1024k;           # proxy buffers; suited to pages averaging under 32k
        proxy_busy_buffers_size 2048k;    # buffer size under high load (proxy_buffers * 2)
        proxy_temp_file_write_size 2048k; # temp-file write size; larger responses are spooled from the upstream
        proxy_pass http://192.168.120.128:8085/;
    }

    error_page 404 401 403 500 502 503 504 $host/404;
}

Summary: because running nginx in docker requires the port mappings to be declared when the container is created, adding a new port mapping means stopping the original container and running a new one, so deploying nginx with docker is generally not recommended.

Backend deployment

Building each image individually with docker

Change the IPs and ports in the configuration files to the server's values (if Nacos is used, change the Redis/MySQL IPs and related settings in the Nacos config center instead), then add the Maven packaging plugin to the pom:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>${springboot.maven.version}</version>
      <configuration>
        <skip>true</skip>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

After packaging, upload the jar to the server, e.g. /docker/dockerTest. In the same directory as the service jar, write a Dockerfile for building the image. Different Dockerfiles only differ in the highlighted values (the service jar name, the exposed port, and the JDK version, set per project).
# base image that already contains a JDK
FROM openjdk:11
# author
MAINTAINER nuanfeng
# copy the jar under a simpler name/path
COPY iyque-code-1.0-SNAPSHOT.jar /app.jar
# run java -jar when the container starts (CMD runs at container start; RUN runs while the image is being built)
CMD java -jar /app.jar
# document the port the service listens on
EXPOSE 8085

Note: upload the jar and the Dockerfile to the server and keep them together, preferably one directory per service containing that service's jar and Dockerfile, which makes them easier to manage (see the layout sketch below).
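For example, a layout along these lines (the directories match the build commands that follow; the jar names inside the linkwe folders are illustrative):

/docker/
├── dockerTest/
│   ├── iyque-code-1.0-SNAPSHOT.jar
│   └── Dockerfile
└── linkwe/
    ├── linkwe-api/
    │   ├── linkwe-api.jar        # illustrative jar name
    │   └── Dockerfile
    └── linkwe-gateway/
        ├── linkwe-gateway.jar    # illustrative jar name
        └── Dockerfile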
Build the images:

docker build -t <new image name>:<TAG> -f <Dockerfile path> <build context directory>

Examples:

docker build -t iyque:1.0 -f /docker/dockerTest/Dockerfile /docker/dockerTest/
docker build -t linkwe-api:1.0 -f /docker/linkwe/linkwe-api/Dockerfile /docker/linkwe/linkwe-api
docker build -t linkwe-auth:1.0 -f /docker/linkwe/linkwe-auth/Dockerfile /docker/linkwe/linkwe-auth
docker build -t linkwe-file:1.0 -f /docker/linkwe/linkwe-file/Dockerfile /docker/linkwe/linkwe-file
docker build -t linkwe-gateway:1.0 -f /docker/linkwe/linkwe-gateway/Dockerfile /docker/linkwe/linkwe-gateway
docker build -t linkwe-wecom:1.0 -f /docker/linkwe/linkwe-wecom/Dockerfile /docker/linkwe/linkwe-wecom
docker build -t linkwe-wx:1.0 -f /docker/linkwe/linkwe-wx/Dockerfile /docker/linkwe/linkwe-wx

Start the containers:

docker run --name lw-api -p 6091:6091 -d linkwe-api:1.0
docker run --name lw-auth -p 6880:6880 -d linkwe-auth:1.0
docker run --name lw-file -p 9101:9101 -d linkwe-file:1.0
docker run --name lw-gateway -p 6180:6180 -d linkwe-gateway:1.0
docker run --name lw-wecom -p 6093:6093 -d linkwe-wecom:1.0
docker run --name lw-wx -p 6094:6094 -d linkwe-wx:1.0

docker-compose orchestration

Building an image and starting a container for every service one by one is tedious, so for easier debugging you can configure docker-compose to create the containers and start them all together.
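As a side note, docker-compose can also build the images itself when a service declares a build context; a minimal sketch of what one service could look like (this build stanza is not part of the project's compose file below, and the paths follow the per-service directories assumed earlier):

  lw-api:
    build:
      context: /docker/linkwe/linkwe-api   # directory containing the Dockerfile and the jar
      dockerfile: Dockerfile
    image: linkwe-api:1.0
    ports:
      - "6091:6091"

With such a stanza in place, docker-compose build (or docker-compose up --build) rebuilds the image before starting the container.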
Write docker-compose.yml:

version: '3.8'
services:
  # MySQL service, version 5.7
  mysql57:
    image: mysql:5.7
    ports:
      - "3307:3306"                                   # map container port 3306 to host port 3307
    volumes:
      - /docker/volumn/mysql57/log:/var/log/mysql     # MySQL log directory
      - /docker/volumn/mysql57/data:/var/lib/mysql    # MySQL data directory
      - /docker/volumn/mysql57/conf:/etc/mysql/conf.d # MySQL configuration directory
    environment:
      MYSQL_ROOT_PASSWORD: 123456                     # password of the MySQL root user

  # Redis service, version 6.0.8
  redis608:
    image: redis:6.0.8
    ports:
      - "6380:6379"                                   # map container port 6379 to host port 6380
    volumes:
      - /docker/volumn/redis608/data:/data            # Redis data directory
      - /docker/volumn/redis608/conf/redis.conf:/etc/redis/redis.conf # Redis configuration file
    command: redis-server /etc/redis/redis.conf       # start Redis with the mounted configuration file

  # Nacos service, version 2.2.0
  nacos220:
    image: nacos/nacos-server:v2.2.0
    ports:
      - "8849:8848"                                   # map container port 8848 to host port 8849
    volumes:
      - /docker/volumn/nacos220/conf/application.properties:/home/nacos/conf/application.properties # Nacos configuration file
      - /docker/volumn/nacos220/logs:/home/nacos/logs # Nacos log directory
    environment:
      MODE: standalone                                # run Nacos in standalone mode
    depends_on:
      - mysql57                                       # Nacos depends on MySQL

  # API service
  lw-api:
    image: linkwe-api:1.0
    container_name: lw-api
    ports:
      - "6091:6091"
    depends_on:                                       # depends on MySQL, Redis, and Nacos
      - mysql57
      - redis608
      - nacos220

  # auth service
  lw-auth:
    image: linkwe-auth:1.0
    container_name: lw-auth
    ports:
      - "6880:6880"
    depends_on:
      - mysql57
      - redis608
      - nacos220

  # file service
  lw-file:
    image: linkwe-file:1.0
    container_name: lw-file
    ports:
      - "9101:9101"
    depends_on:
      - mysql57
      - redis608
      - nacos220

  # gateway service
  lw-gateway:
    image: linkwe-gateway:1.0
    container_name: lw-gateway
    ports:
      - "6180:6180"
    depends_on:
      - mysql57
      - redis608
      - nacos220

  # WeCom service (WeChat Work integration)
  lw-wecom:
    image: linkwe-wecom:1.0
    container_name: lw-wecom
    ports:
      - "6093:6093"
    depends_on:
      - mysql57
      - redis608
      - nacos220

  # WeChat service
  lw-wx:
    image: linkwe-wx:1.0
    container_name: lw-wx
    ports:
      - "6094:6094"
    depends_on:
      - mysql57
      - redis608
      - nacos220

Orchestrate the containers (from the directory that contains docker-compose.yml). Note: build the images on the host beforehand, and create the directories and configuration files needed for the volume mounts in advance.
# check the compose file before orchestrating; no output means it is fine
docker-compose config -q
# orchestrate
docker-compose up

Viewing logs

# compose logs
cd <directory containing docker-compose.yml>
docker-compose logs
# logs of a single container
docker logs <container name> -n <lines>

Starting/stopping containers

# create and start the containers
docker-compose up -d
# start the containers
docker-compose start
# stop the services temporarily, keeping all data and state
docker-compose stop
# stop the services and remove the containers and networks; data volumes are kept
docker-compose down
# full cleanup: remove the containers, networks, and data volumes
docker-compose down -v