A Hands-On Guide to Building a Complete CI/CD Pipeline from Scratch
- Software Development
- 2025-08-24 11:09:02

Table of Contents
I. Foreword
II. Environment Preparation
1. Breaking in the server (192.168.1.200)
2. Offline resource checklist (copy onto a USB drive ahead of time)
III. Hardcore Installation: Steps More Detailed Than Tightening Screws
Step 1: Set up GitLab (careful — this thing devours memory)
Step 2: Jenkins takes the stage (keep the Java environment clean)
Step 3: Harbor image registry
Step 4: Integrate Jenkins with Harbor
Step 5: Image test
IV. Afterword
I. Foreword
Folks, I don't know whether you've hit these frustrating situations at work: the security audit forbids containers, nobody dares touch the foundations of legacy systems, and in an air-gapped environment you can't even pull a Docker image. Today I'll walk you through standing up the GitLab + Jenkins + Harbor combo on the limited server resources you have (an Ubuntu/CentOS box), in the most primitive but most reliable way.
II. Environment Preparation
1. Breaking in the server (192.168.1.200)
First we prepare the server's base environment so the deployment goes smoothly. First task: turn off the annoying automatic updates (don't let apt stir up trouble), to avoid dependency conflicts or services being killed by memory pressure in the middle of the deployment.
root@master01:/opt/cicd# sed -i 's/^Prompt=.*/Prompt=never/' /etc/update-manager/release-upgrades
root@master01:/opt/cicd# systemctl stop apt-daily.timer
Second task: loosen the file-handle limit (so GitLab doesn't blow up).
root@master01:/opt/cicd# echo "fs.file-max = 65535" | sudo tee -a /etc/sysctl.conf
fs.file-max = 65535
root@master01:/opt/cicd# sysctl -p
vm.max_map_count = 262144
fs.file-max = 65535
Third task: not enough memory? SWAP to the rescue (GitLab alone can eat 4 GB). So let's allocate a 4 GB swap file up front, to prevent memory blow-ups once GitLab is running.
root@master01:/opt/cicd# fallocate -l 4G /swapfile
root@master01:/opt/cicd# chmod 600 /swapfile
root@master01:/opt/cicd# mkswap /swapfile
mkswap: /swapfile: warning: wiping old swap signature.
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=fb37a0d6-71f1-4c16-a582-c6e06bf3bcfb
root@master01:/opt/cicd# swapon /swapfile
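Note that swapon only lasts until the next reboot. If you want the swap file to survive reboots, a minimal sketch (assuming you do want it permanent, and the swappiness value is only an example) is to register it in /etc/fstab and make the kernel less eager to swap:
# persist the swap file across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
# optional: only swap when memory is really tight (value is an example, tune to taste)
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p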
2. Offline resource checklist (copy onto a USB drive ahead of time)
Next, upload the required offline installation packages to the server. Roughly the following:
root@master01:/opt/cicd# ls -lh
total 2.2G
-rw-r--r-- 1 root root 8.7M Feb 16 18:53 apache-maven-3.9.9-bin.tar.gz
-rw-r--r-- 1 root root 1.3G Feb 16 19:01 gitlab-ce_17.6.5-ce.0_amd64.deb
-rw-r--r-- 1 root root 599M Feb 16 18:59 harbor-offline-installer-v2.11.2.tgz
-rw-r--r-- 1 root root  89M Feb 16 18:54 jenkins_2.492_all.deb
-rw-r--r-- 1 root root 1.3M Feb 16 18:53 nginx-1.27.4.tar.gz
-rw-r--r-- 1 root root 219M Feb 16 18:55 openlogic-openjdk-17.0.12+7-linux-x64-deb.deb
There are quite a few of them, so I won't paste every download link here; grab them from my resources page if you need them. Now let's install, step by step.
III. Hardcore Installation: Steps More Detailed Than Tightening Screws
Step 1: Set up GitLab (careful — this thing devours memory)
GitLab has a lot of dependency packages, and we need to obtain them, plus all of their sub-dependencies, in advance.
With Internet access you can simply install the dependencies online:
# refresh the package lists
root@master01:/opt/cicd# sudo apt update
# install the required dependencies
root@master01:/opt/cicd# sudo apt install -y curl openssh-server ca-certificates tzdata perl
But what do we do in a genuinely offline environment?
On an Internet-connected machine running the same Ubuntu 22.04 release, you can run the following command to simulate the installation and list the required dependencies:
sudo apt-get install --print-uris --yes ./gitlab-ce_17.6.5-ce.0_amd64.deb | grep ^\' | cut -d\' -f2 > packages.list
This generates a packages.list file containing the download URLs of GitLab and all of its dependency packages.
Then use wget to download every package listed in packages.list:
while read -r line; do
    wget "$line"
done < packages.list
This downloads all the required packages into the current directory. Next, copy all of the downloaded .deb files to the offline Ubuntu 22.04 machine, using a USB stick, portable drive, or similar.
Then, on the offline system, install the dependency packages one by one with dpkg. A simple script can batch-install them:
for deb in *.deb; do
    sudo dpkg -i "$deb"
done
If you hit dependency problems during installation, running sudo apt-get -f install usually sorts them out.
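An alternative — just a rough sketch, where the /opt/debs directory and the presence of the dpkg-dev package are my own assumptions — is to turn the copied .deb files into a small local apt repository, so apt can work out the installation order by itself:
# on the offline machine, with the .deb files copied to /opt/debs (path is only an example)
sudo apt-get install -y dpkg-dev                 # needs the dpkg-dev .deb copied over as well
cd /opt/debs && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo 'deb [trusted=yes] file:/opt/debs ./' | sudo tee /etc/apt/sources.list.d/local-offline.list
sudo apt-get update
sudo apt-get install -y curl openssh-server ca-certificates tzdata perl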
Now for the actual GitLab installation: install the gitlab-ce_17.6.5-ce.0_amd64.deb package with dpkg. Keep an eye on memory here — add more SWAP if it runs short:
root@master01:/opt/cicd# dpkg -i gitlab-ce_17.6.5-ce.0_amd64.deb
Selecting previously unselected package gitlab-ce.
(Reading database ... 218253 files and directories currently installed.)
Preparing to unpack gitlab-ce_17.6.5-ce.0_amd64.deb ...
Unpacking gitlab-ce (17.6.5-ce.0) ...
Setting up gitlab-ce (17.6.5-ce.0) ...
It looks like GitLab has not been configured yet; skipping the upgrade script.
(... GitLab ASCII-art logo ...)
Thank you for installing GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your GitLab instance by setting `external_url` configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your GitLab instance by running the following command:
  sudo gitlab-ctl reconfigure
For a comprehensive list of configuration options please see the Omnibus GitLab readme
gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
Help us improve the installation experience, let us know how we did with a 1 minute survey:
gitlab.fra1.qualtrics.com/jfe/form/SV_6kVqZANThUQ1bZb?installation=omnibus&release=17-6
If the installer complains about missing dependencies, run sudo apt-get -f install just as before. Then we need to adjust the configuration:
# key settings (don't copy blindly — use your own IP)
root@master01:/opt/cicd# vim /etc/gitlab/gitlab.rb
---
external_url 'http://192.168.1.200'
nginx['listen_port'] = 8001            # stay clear of Jenkins' 8080
postgresql['shared_buffers'] = "256MB" # a must on low-memory machines!
---
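If the box is really tight on memory, a few more knobs in /etc/gitlab/gitlab.rb can help. This is only a hedged sketch — the settings below are standard Omnibus GitLab options, but the exact values are assumptions and depend on your workload:
# optional low-memory tweaks in /etc/gitlab/gitlab.rb (values are examples, adjust to your box)
puma['worker_processes'] = 2              # fewer Rails workers
sidekiq['max_concurrency'] = 10           # fewer background-job threads
prometheus_monitoring['enable'] = false   # drop the built-in monitoring stack if you don't need it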
Finally, reload the configuration and restart GitLab:
## apply the configuration (go grab a coffee, this step is painfully slow)
root@master01:/opt/cicd# gitlab-ctl reconfigure
......
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
gitlab Reconfigured!
root@master01:/opt/cicd# gitlab-ctl restart
ok: run: alertmanager: (pid 153324) 0s
ok: run: gitaly: (pid 153354) 0s
ok: run: gitlab-exporter: (pid 153374) 0s
ok: run: gitlab-kas: (pid 153590) 1s
ok: run: gitlab-workhorse: (pid 153600) 0s
ok: run: logrotate: (pid 153632) 0s
ok: run: nginx: (pid 153638) 0s
ok: run: node-exporter: (pid 153646) 0s
ok: run: postgres-exporter: (pid 153679) 1s
ok: run: postgresql: (pid 153720) 0s
ok: run: prometheus: (pid 153729) 0s
ok: run: puma: (pid 153948) 0s
ok: run: redis: (pid 153953) 1s
ok: run: redis-exporter: (pid 153970) 0s
ok: run: sidekiq: (pid 154099) 1s
Once it has started, the log tells you the admin account is root and the password is stored in /etc/gitlab/initial_root_password. Log in with that.
For day-to-day convenience, remember to switch the UI to Chinese, and change the initial password while you're at it. Setting the language:
Resetting the password:
Finally you can create groups and projects, add members, and properly start storing development code and doing ops work with it.
Step 2: Jenkins takes the stage (keep the Java environment clean)
For deploying Jenkins itself, see my earlier round-up of Jenkins deployment methods; it covers offline, online, containerized and k8s deployments very clearly, so I won't repeat that here. Instead I'll focus on the caveats and the plugins to install.
Caveats
When Jenkins and GitLab are deployed on the same server, port conflicts are very easy to hit, because one of GitLab's services, puma, listens on port 8080 while GitLab is running.
root@master01:/opt/cicd# gitlab-ctl status
run: alertmanager: (pid 2676) 666s; run: log: (pid 2659) 666s
run: gitaly: (pid 2655) 666s; run: log: (pid 2647) 666s
run: gitlab-exporter: (pid 2679) 666s; run: log: (pid 2663) 666s
run: gitlab-kas: (pid 2677) 666s; run: log: (pid 2664) 666s
run: gitlab-workhorse: (pid 2682) 666s; run: log: (pid 2660) 666s
run: logrotate: (pid 2665) 666s; run: log: (pid 2650) 666s
run: nginx: (pid 2651) 666s; run: log: (pid 2645) 666s
run: node-exporter: (pid 2666) 666s; run: log: (pid 2653) 666s
run: postgres-exporter: (pid 2668) 666s; run: log: (pid 2656) 666s
run: postgresql: (pid 2654) 666s; run: log: (pid 2648) 666s
run: prometheus: (pid 2681) 666s; run: log: (pid 2667) 666s
run: puma: (pid 2646) 666s; run: log: (pid 2644) 666s
run: redis: (pid 2673) 666s; run: log: (pid 2658) 666s
run: redis-exporter: (pid 2675) 666s; run: log: (pid 2657) 666s
run: sidekiq: (pid 2674) 666s; run: log: (pid 2662) 666s
root@master01:/opt/cicd# ps -ef |grep puma
root        2636    2625  0 11:01 ?        00:00:00 runsv puma
root        2644    2636  0 11:01 ?        00:00:00 svlogd -tt /var/log/gitlab/puma
git         2646    2636  8 11:01 ?        00:00:55 puma 6.4.3 (unix:///var/opt/gitlab/gitlab-rails/sockets/gitlab.socket,tcp://127.0.0.1:8080) [gitlab-puma-worker]
git         3166    2646  0 11:02 ?        00:00:03 puma: cluster worker 0: 2646 [gitlab-puma-worker]
git         3168    2646  0 11:02 ?        00:00:02 puma: cluster worker 1: 2646 [gitlab-puma-worker]
root       11972    3502  0 11:12 pts/0    00:00:00 grep puma
Jenkins also defaults to port 8080, so we need to take extra care that the two don't collide. If Jenkins is deployed first on 8080, GitLab's puma fails to run properly and GitLab starts throwing 502 errors. So if resources allow, deploy GitLab and Jenkins on separate machines; otherwise, change one of the ports to avoid the conflict!
root@master01:/opt/cicd# sed -i 's/HTTP_PORT=8080/HTTP_PORT=8091/' /etc/default/jenkins
root@master01:/opt/cicd# sed -i 's/Environment="JENKINS_PORT=8080"/Environment="JENKINS_PORT=8091"/' /lib/systemd/system/jenkins.service
root@master01:/opt/cicd# systemctl daemon-reload
root@master01:/opt/cicd# systemctl restart jenkins
root@master01:/opt/cicd# systemctl status jenkins
● jenkins.service - Jenkins Continuous Integration Server
     Loaded: loaded (/lib/systemd/system/jenkins.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2025-02-18 11:29:57 CST; 23s ago
   Main PID: 85440 (java)
      Tasks: 53 (limit: 4546)
     Memory: 704.1M
        CPU: 17.772s
     CGroup: /system.slice/jenkins.service
             └─85440 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8091
Feb 18 11:29:42 master01 jenkins[85440]: 886befd6733c4715b2e136ce9a5531a0
Feb 18 11:29:42 master01 jenkins[85440]: This may also be found at: /var/lib/jenkins/secrets/initialAdminPassword
Feb 18 11:29:42 master01 jenkins[85440]: *************************************************************
Feb 18 11:29:42 master01 jenkins[85440]: *************************************************************
Feb 18 11:29:42 master01 jenkins[85440]: *************************************************************
Feb 18 11:29:57 master01 jenkins[85440]: 2025-02-18 03:29:57.941+0000 [id=35] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
Feb 18 11:29:57 master01 jenkins[85440]: 2025-02-18 03:29:57.964+0000 [id=24] INFO hudson.lifecycle.Lifecycle#onReady: Jenkins is fully up and running
Feb 18 11:29:57 master01 systemd[1]: Started Jenkins Continuous Integration Server.
Feb 18 11:29:58 master01 jenkins[85440]: 2025-02-18 03:29:58.232+0000 [id=53] INFO h.m.DownloadService$Downloadable#load: Obtained the updated data file for hudson.t>
Feb 18 11:29:58 master01 jenkins[85440]: 2025-02-18 03:29:58.233+0000 [id=53] INFO hudson.util.Retrier#start: Performed the action check updates server successfully >
Then open the new port 8091 in the firewall and Jenkins becomes reachable.
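How you open the port depends on your firewall; as a minimal sketch, on Ubuntu with ufw (or on CentOS with firewalld) it would look roughly like this:
# Ubuntu (ufw)
ufw allow 8091/tcp
# CentOS (firewalld)
firewall-cmd --permanent --add-port=8091/tcp
firewall-cmd --reload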
Finally, grab the initial admin password (eyes wide open):
root@master01:/opt/cicd# cat /var/lib/jenkins/secrets/initialAdminPassword
886befd6733c4715b2e136ce9a5531a0
That takes you to the plugin installation screen, but let's skip it for now. Change the initial password first:
Because Jenkins downloads plugins from the official site by default, which is very slow and prone to failures, I recommend skipping plugin installation for the moment and first pointing Jenkins at a domestic update mirror — here the Aliyun mirror as an example:
root@master01:/opt/cicd# sed -i 's|https://updates.jenkins.io/download|https://mirrors.aliyun.com/jenkins|g' /var/lib/jenkins/updates/default.json && sed -i 's|https://www.google.com|https://www.baidu.com|g' /var/lib/jenkins/updates/default.json
Then, in the Jenkins admin UI under Dashboard > Manage Jenkins > Plugins, also point the Update Site option at the domestic mirror:
Click Submit to finish, then open http://192.168.1.200:8091/restart in the browser to restart Jenkins so the change takes effect.
Installing the GitLab integration plugins
First install the Git plugin. It is the bridge between Jenkins and the Git version control system, letting Jenkins talk to Git repositories and thereby automate builds, tests and deployments. Just search for it in the plugin manager and install it.
Then, pulling code from GitLab requires a credentials-management tool, so install the Credentials Binding plugin:
On this 2.49x release it should already be installed by default. Before Jenkins can connect to GitLab you need to add a GitLab credential; here we simply add it as a GitLab username and password:
Verifying the integration
First create our project group on GitLab, create the project, and push the development code up to it:
On the Jenkins main page, pick an existing job or create a new one.
The key configuration is the GitLab repository URL, the credential, and the branch. Once that's done, open the job and click "Build Now"; the Builds list below will then show the checkout progress. If the GitLab project is pulled successfully, you will see it in the job's workspace, which proves the basic Jenkins–GitLab integration works. The subsequent packaging flow I'll fill in gradually below.
Installing Maven
Once we have the Java project's code we still need Maven to package it, so let's install Maven now:
root@master01:/opt/cicd# tar -xzf apache-maven-3.9.9-bin.tar.gz
root@master01:/opt/cicd# vim /etc/profile
# add the Maven environment variables
export MAVEN_HOME=/opt/cicd/apache-maven-3.9.9
export PATH=$PATH:$MAVEN_HOME/bin
root@master01:/opt/cicd# source /etc/profile
root@master01:/opt/cicd# echo $MAVEN_HOME
/opt/cicd/apache-maven-3.9.9
# verify Maven
root@master01:/opt/cicd# mvn -version
Apache Maven 3.9.9 (8e8579a9e76f7d015ee5ec7bfcdc97d260186937)
Maven home: /opt/cicd/apache-maven-3.9.9
Java version: 17.0.12, vendor: OpenLogic-OpenJDK, runtime: /usr/lib/jvm/openlogic-openjdk-17-hotspot-amd64
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "6.8.0-52-generic", arch: "amd64", family: "unix"
If the version check looks good, Maven is installed. Next, configure the Aliyun mirror in /opt/cicd/apache-maven-3.9.9/conf/settings.xml:
<mirrors>
    <mirror>
        <id>aliyunmaven</id>
        <mirrorOf>*</mirrorOf>
        <name>Aliyun public repository</name>
        <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
</mirrors>
Then hook the JDK and Maven up to Jenkins under Dashboard -> Manage Jenkins -> Tools, and configure Maven, the JDK and Git there in full.
Now let's verify it in the project: add the packaging command to the job's build step:
If you haven't set the Maven environment variables, you can invoke mvn by its full path, <maven install dir>/bin/mvn. Once packaging finishes, the project's jar appears in the target directory of the workspace.
With the Spring Boot jar in hand we can move on: either run the jar directly, or install the Publish Over SSH plugin and ship it to a remote test server to run. These require adding a post-build action in Jenkins, and I won't test them here; going forward I'd rather drive the whole flow with a pipeline script, which I'll come back to later — a rough sketch of the simple "copy and run" route follows below.
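For the simple "copy the jar to a test box and run it" route, a minimal sketch looks like the following; the host 192.168.1.202, the user and the paths are assumptions — substitute your own:
# copy the freshly built jar to the test server (host/paths are placeholders)
scp target/test-audio-websocket-1.0.jar root@192.168.1.202:/opt/apps/
# restart it remotely
ssh root@192.168.1.202 '
  pkill -f test-audio-websocket-1.0.jar || true
  nohup java -jar /opt/apps/test-audio-websocket-1.0.jar > /opt/apps/app.log 2>&1 &
'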
Step 3: Harbor image registry
This part mainly targets projects that need containerized deployments. For projects that run container services, or a set of microservices, from images built out of the jar, this step matters a lot. Harbor provides a unified registry: once integrated, it centralizes the storage, management and distribution of container images, improves efficiency, and keeps image management from descending into chaos. The CI/CD pipeline then pulls images from Harbor automatically for deployment, cutting manual steps, speeding up delivery, and making both dev and ops more efficient. Below is the Harbor deployment and integration flow:
1. Install the container tooling
For an offline Harbor deployment, Docker is mandatory: Harbor runs on Docker container technology, and image building, storage and distribution all depend on the Docker environment. On top of that I also recommend installing docker-compose; both are easy to install from offline packages. I won't go through the installation process here — my earlier articles cover it in detail (a quick offline-install sketch follows right after these links):
Simple container orchestration with Docker-Compose
Docker basics and beyond, organized
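If you just need a quick reminder, an offline install on Ubuntu is roughly the following sketch — the exact .deb and binary filenames are assumptions and depend on which versions you downloaded:
# offline Docker install from .deb packages copied onto the box (filenames are examples)
dpkg -i containerd.io_*.deb docker-ce-cli_*.deb docker-ce_*.deb docker-buildx-plugin_*.deb
systemctl enable --now docker
# docker-compose as a standalone binary
install -m 0755 docker-compose-linux-x86_64 /usr/local/bin/docker-compose
docker -v && docker-compose -v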
2. Install Harbor
Note: we install Harbor on a second server, IP 192.168.1.201, separate from the 192.168.1.200 that hosts GitLab and Jenkins. Upload the Harbor offline installer to the target directory on that server and install:
[root@node01 opt]# tar -zxf harbor-offline-installer-v2.11.2.tgz
[root@node01 opt]# cd harbor/ && ls -l /opt/harbor
total 616552
-rw-r--r-- 1 root root      3646 Nov 14 14:50 common.sh
-rw-r--r-- 1 root root 631306450 Nov 14 14:50 harbor.v2.11.2.tar.gz
-rw-r--r-- 1 root root     14270 Nov 14 14:50 harbor.yml.tmpl
-rwxr-xr-x 1 root root      1975 Nov 14 14:50 install.sh
-rw-r--r-- 1 root root     11347 Nov 14 14:50 LICENSE
-rwxr-xr-x 1 root root      1882 Nov 14 14:50 prepare
[root@node01 harbor]# docker -v
Docker version 26.1.4, build 5650f9b
[root@node01 harbor]# docker-compose -v
Docker Compose version v2.28.1
# copy the template to a new config file
[root@node01 harbor]# cp harbor.yml.tmpl harbor.yml
# edit the Harbor config
[root@node01 harbor]# vi harbor.yml
The main things to change are the IP and the port for your environment; if you have other preferences, such as where logs are stored, adjust those too:
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.1.201

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  # (remember to open this port in the firewall)
  port: 8002

# https related config - we are not using HTTPS here, so it stays commented out
#https:
#  # https port for harbor, default is 443
#  port: 443
#  # The path of cert and key files for nginx
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path

# The initial password of Harbor admin
# It only works the first time Harbor is installed.
# Remember to change the admin password from the UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
  conn_max_lifetime: 5m
  conn_max_idle_time: 0

# The default data volume
data_volume: /data

# Log configurations
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 50M
    # The directory on your host that stores the logs
    location: /mnt/log/harbor

# This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.11.0

# The rest of the file (Trivy scanner, jobservice, webhook notification, proxy, cache,
# external database/Redis, metrics, trace, upload purging, etc.) is left at the template
# defaults and is omitted here for brevity.
Then run the install.sh script shipped in the offline package. One thing to watch out for: in a fully offline environment, since Harbor itself is installed on top of Docker, it ultimately depends on the Harbor images:
[root@node01 harbor]# ./install.sh
[Step 0]: checking if docker is installed ...
Note: docker version: 26.1.4
[Step 1]: checking docker-compose is installed ...
Note: Docker Compose version v2.27.1
[Step 2]: loading Harbor images ...
(... per-layer "Loading layer" progress lines omitted ...)
Loaded image: goharbor/registry-photon:v2.11.2
Loaded image: goharbor/harbor-portal:v2.11.2
Loaded image: goharbor/harbor-core:v2.11.2
Loaded image: goharbor/harbor-log:v2.11.2
Loaded image: goharbor/harbor-db:v2.11.2
Loaded image: goharbor/harbor-jobservice:v2.11.2
Loaded image: goharbor/harbor-registryctl:v2.11.2
Loaded image: goharbor/nginx-photon:v2.11.2
Loaded image: goharbor/trivy-adapter-photon:v2.11.2
Loaded image: goharbor/prepare:v2.11.2
Loaded image: goharbor/harbor-exporter:v2.11.2
Loaded image: goharbor/redis-photon:v2.11.2
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
Note: stopping existing Harbor instance ...
WARN[0000] /opt/harbor/docker-compose.yml: `version` is obsolete
[Step 5]: starting Harbor ...
WARN[0000] /opt/harbor/docker-compose.yml: `version` is obsolete
[+] Running 10/10
 ✔ Network harbor_harbor        Created
 ✔ Container harbor-log         Started
 ✔ Container redis              Started
 ✔ Container harbor-db          Started
 ✔ Container harbor-portal      Started
 ✔ Container registryctl        Started
 ✔ Container registry           Started
 ✔ Container harbor-core        Started
 ✔ Container harbor-jobservice  Started
 ✔ Container nginx              Started
✔ ----Harbor has been installed and started successfully.----
[root@node01 harbor]# cat prepare
#!/bin/bash
set -e
# If compiling source code this dir is harbor's make dir.
# If installing harbor via package, this dir is harbor's root dir.
if [[ -n "$HARBOR_BUNDLE_DIR" ]]; then
    harbor_prepare_path=$HARBOR_BUNDLE_DIR
else
    harbor_prepare_path="$( cd "$(dirname "$0")" ; pwd -P )"
fi
echo "prepare base dir is set to ${harbor_prepare_path}"
# Clean up input dir
rm -rf ${harbor_prepare_path}/input
# Create a input dirs
mkdir -p ${harbor_prepare_path}/input
input_dir=${harbor_prepare_path}/input
# Copy harbor.yml to input dir
if [[ ! "$1" =~ ^\-\- ]] && [ -f "$1" ]
then
    cp $1 $input_dir/harbor.yml
    shift
else
    if [ -f "${harbor_prepare_path}/harbor.yml" ];then
        cp ${harbor_prepare_path}/harbor.yml $input_dir/harbor.yml
    else
        echo "no config file: ${harbor_prepare_path}/harbor.yml"
        exit 1
    fi
fi
data_path=$(grep '^[^#]*data_volume:' $input_dir/harbor.yml | awk '{print $NF}')
# If previous secretkeys exist, move it to new location
previous_secretkey_path=/data/secretkey
previous_defaultalias_path=/data/defaultalias
if [ -f $previous_secretkey_path ]; then
    mkdir -p $data_path/secret/keys
    mv $previous_secretkey_path $data_path/secret/keys
fi
if [ -f $previous_defaultalias_path ]; then
    mkdir -p $data_path/secret/keys
    mv $previous_defaultalias_path $data_path/secret/keys
fi
# Create secret dir
secret_dir=${data_path}/secret
config_dir=$harbor_prepare_path/common/config
# Run prepare script
docker run --rm -v $input_dir:/input \
    -v $data_path:/data \
    -v $harbor_prepare_path:/compose_location \
    -v $config_dir:/config \
    -v /:/hostfs \
    --privileged \
    goharbor/prepare:v2.11.2 prepare $@
echo "Clean up the input dir"
# Clean up input dir
rm -rf ${harbor_prepare_path}/input
So if, in an offline environment, Docker does not already have the goharbor/prepare:v2.11.2 image, you need to obtain the image archive and load it into Docker before the script can run; with Internet access you don't have to worry about this.
Then you can reach Harbor in the browser at ip:port:
Step 4: Integrate Jenkins with Harbor
First switch back to the server that hosts Jenkins, log in to Jenkins, and add a credential for Harbor in the same place where we added the GitLab one:
Then install Docker on the Jenkins server as well, and adjust the Docker configuration to point at the Harbor registry address so that Docker can log in to it.
root@master01:~# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://umvonce3.mirror.aliyuncs.com", "https://yxzrazem.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries": ["192.168.1.201:8002"]
}
# the "insecure-registries" entry is the Harbor address we added
root@master01:~# systemctl daemon-reload
root@master01:~# systemctl restart docker
# test a login to the private Docker registry hosted by Harbor
root@master01:~# docker login 192.168.1.201:8002
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Next, Jenkins needs to turn the jar it pulled and packaged into an image, so add a Dockerfile to the project directory in GitLab:
# use a domestic mirror of the base image
FROM swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/openjdk:17.0.2-jdk-slim
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN mkdir -p /opt/projects/
WORKDIR /opt/projects/
ADD ./target/test-audio-websocket-1.0.jar /opt/projects/
EXPOSE 8021
CMD ["java", "-jar", "test-audio-websocket-1.0.jar"]
Running the Jenkins automation
Adjust the script below to match how your code is laid out in GitLab.
# alternative if mvn is not on the PATH:
#/opt/cicd/apache-maven/bin/mvn clean package -DskipTests
cd test-audio-websocket
# package with Maven
mvn clean package -DskipTests
# build the image from the Dockerfile
docker build -t springboot-test:v1.0 .
# log in to the Harbor registry
docker login -u admin -p Harbor12345 192.168.1.201:8002
# re-tag the image with the Harbor project/repository address
docker tag springboot-test:v1.0 192.168.1.201:8002/jenkins-test/springboot-test:v1.0
# push the image to Harbor
docker push 192.168.1.201:8002/jenkins-test/springboot-test:v1.0
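As a side note, a slightly safer variant of the login and tagging steps — just a sketch, using Jenkins' built-in BUILD_NUMBER variable and feeding the password on stdin instead of on the command line — could look like this:
# tag each build uniquely and avoid putting the password on the command line
IMAGE=192.168.1.201:8002/jenkins-test/springboot-test:v1.0-${BUILD_NUMBER}
echo 'Harbor12345' | docker login -u admin --password-stdin 192.168.1.201:8002
docker build -t "${IMAGE}" .
docker push "${IMAGE}"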
Once it's saved you can kick off a build with "Build Now". Note that if the build hits a permission problem between Jenkins and Docker, for example:
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.489 s
[INFO] Finished at: 2025-02-20T16:35:50+08:00
[INFO] ------------------------------------------------------------------------
+ docker build -t springboot-test:v1.0 .
ERROR: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/_ping": dial unix /var/run/docker.sock: connect: permission denied
Build step 'Execute shell' marked build as failure
Finished: FAILURE
then the user Jenkins runs as needs to be added to the docker group, like so:
# find the user Jenkins runs as
root@master01:/opt# ps -ef | grep jenkins
jenkins   131194       1  1 15:10 ?        00:01:44 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8081
root      199587    2613  0 16:39 pts/0    00:00:00 grep jenkins
# add the jenkins user to the docker group
root@master01:/opt# sudo usermod -aG docker jenkins
# restart so the new group membership takes effect
root@master01:/opt# systemctl restart docker
root@master01:/opt# systemctl restart jenkins
While the build runs you can follow the pipeline's real-time output in the job's console log:
Once it succeeds, the freshly built image we pushed shows up in the Harbor repository.
Step 5: Image test
Once the testers pull the image from Harbor and spin up a container from it with Docker, they can start checking the interfaces that were developed:
root@master03:/opt# docker login -u admin -p Harbor12345 192.168.1.201:8002
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
root@master03:/opt# docker pull 192.168.1.201:8002/jenkins-test/springboot-test:v1.0
v1.0: Pulling from jenkins-test/springboot-test
1fe172e4850f: Already exists
44d3aa8d0766: Already exists
6ce99fdf16e8: Already exists
a2352eb54222: Already exists
eaf39c4ea3ef: Already exists
4f4fb700ef54: Already exists
10448af52808: Already exists
Digest: sha256:cb5c4dc3a7d2daf81f9fb6c4b7cd9c399413818f70c150331ee00bb76d95bb04
Status: Downloaded newer image for 192.168.1.201:8002/jenkins-test/springboot-test:v1.0
192.168.1.201:8002/jenkins-test/springboot-test:v1.0
root@master01:/opt# docker run -d --name jenkins-test -p 8021:8021 192.168.1.201:8002/jenkins-test/springboot-test:v1.0
e36d8a817a05dcd441c0d363b0b486958b3adf658b06cb798a96c54321e88110
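To quickly confirm the container actually answers, a simple smoke test could look like this — the /health path is only an assumed example, use whatever endpoint your service exposes:
# check the container is up and the port responds (endpoint path is an assumption)
docker ps --filter name=jenkins-test
curl -i http://127.0.0.1:8021/health
# tail the application log if something looks off
docker logs -f jenkins-test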
IV. Afterword
Folks, K8s may rule the world these days, but anyone who has lived through an IDC relocation or a Level-3 MLPS compliance audit knows: the simpler, the more reliable. This setup might get roasted on Zhihu as "not cloud-native enough", but inside bank intranets and defense contractors it is a lifesaver.
Remember three things:
Backups above all: back up /var/opt/gitlab and JENKINS_HOME on a daily schedule (a small cron sketch follows after this list);
Logs are justice: configure access logging for nginx; when things go wrong, it lets you prove where the blame lies;
Tighten permissions: don't run Jenkins as root, and enable two-factor authentication in GitLab (which is why this article should only be taken as a reference for those just starting out);
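For the backup point above, a minimal sketch of a daily cron job follows; the /backup target directory and the 7-day retention are assumptions, while gitlab-backup create is the standard Omnibus backup command and /var/lib/jenkins is the default JENKINS_HOME of the .deb install:
# /etc/cron.d/cicd-backup - run every night (paths/retention are examples, adjust to taste)
0 2 * * *  root gitlab-backup create CRON=1
0 3 * * *  root tar -czf /backup/jenkins-$(date +\%F).tar.gz /var/lib/jenkins
30 3 * * * root find /backup -name 'jenkins-*.tar.gz' -mtime +7 -delete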
I'll put the complete set of offline resources in my downloads area later; grab them if you need them. As for Jenkins' more advanced features, I'll cover them separately later if time permits and there's a need.