Instance Segmentation | Training YOLOv11 on Your Own Dataset
Preface

For work I have mostly used models from the YOLOv5 family. Today I am learning the latest YOLOv11 and recording the environment setup and training process.

1. Project Download and Environment Setup

Source code: yolov11. As you can see, it requires Python >= 3.8; here I install Python 3.10.

conda create -n yolov11 python=3.10
conda activate yolov11
pip install ultralytics -i https://pypi.tuna.tsinghua.edu.cn/simple
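To confirm the environment was set up correctly before moving on, a quick sanity check can be run (a minimal sketch, assuming only that the install above succeeded):

# check_env.py -- minimal sanity check of the new environment
import ultralytics

# Prints the ultralytics version, PyTorch/CUDA status, and basic system info
ultralytics.checks()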

2. Labeling Your Own Dataset

There are many tools for annotating instance-segmentation datasets; I suggest picking either labelme or AnyLabeling. The annotations produced by these tools are saved as JSON files, which need to be converted into the txt format required by the YOLO family. JSON-to-txt conversion code:

# json2txt.py
import cv2
import os
import json
import glob
import numpy as np

class_names = ["cls1_name", "cls2_name", "cls3_name", "cls4_name", "cls5_name"]


def convert_json_label_to_yolov_seg_label():
    json_path = "F:/Desktop/hand/labels"  # local JSON folder
    json_files = glob.glob(json_path + "/*.json")
    # print(json_files)
    # output folder for the txt labels
    output_folder = "F:/Desktop/hand/labels_txt"
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    for json_file in json_files:
        # print(json_file)
        with open(json_file, 'r') as f:
            json_info = json.load(f)
        img = cv2.imread(os.path.join(json_path, json_info["imagePath"]))
        height, width, _ = img.shape
        np_w_h = np.array([[width, height]], np.int32)
        txt_file = os.path.join(output_folder, os.path.basename(json_file).replace(".json", ".txt"))
        with open(txt_file, "w") as f:
            for point_json in json_info["shapes"]:
                txt_content = ""
                np_points = np.array(point_json["points"], np.int32)
                label = point_json["label"]
                index = class_names.index(label)
                # print(type(label))
                norm_points = np_points / np_w_h
                norm_points_list = norm_points.tolist()
                txt_content += str(index) + " " + " ".join(
                    [" ".join([str(cell[0]), str(cell[1])]) for cell in norm_points_list]) + "\n"
                f.write(txt_content)


convert_json_label_to_yolov_seg_label()

After conversion, each image has a corresponding .txt label file with one line per annotated instance, in the normalized polygon format that YOLO expects; a small sketch for reading one back is shown below.
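As an illustration of that format (a hedged sketch, not from the original post; the file name and image size below are made-up placeholders), each line is "class_index x1 y1 x2 y2 ..." with coordinates normalized by the image width and height:

# read_seg_label.py -- illustrative sketch only; the path and image size are assumptions
import numpy as np

label_file = "F:/Desktop/hand/labels_txt/example.txt"  # hypothetical file produced by json2txt.py
img_w, img_h = 1920, 1080                              # assumed size of the matching image

with open(label_file) as f:
    for line in f:
        values = line.split()
        class_index = int(values[0])
        # the remaining values are x1 y1 x2 y2 ... normalized to [0, 1]
        polygon = np.array(values[1:], dtype=float).reshape(-1, 2)
        pixel_polygon = polygon * np.array([img_w, img_h])
        print(class_index, pixel_polygon.shape)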

Next we split the converted dataset into training, validation, and test sets. Here is the split script:

# txt_split.py
# Split the images and label files into training / validation / test sets by ratio
import shutil
import random
import os

# original paths
image_original_path = "hhh/images/"
label_original_path = "hhh/labels_txt/"

cur_path = os.getcwd()
# cur_path = 'D:/image_denoising_test/denoise/'

# training set paths
train_image_path = os.path.join(cur_path, "datasets/images/train/")
train_label_path = os.path.join(cur_path, "datasets/labels/train/")
# validation set paths
val_image_path = os.path.join(cur_path, "datasets/images/val/")
val_label_path = os.path.join(cur_path, "datasets/labels/val/")
# test set paths
test_image_path = os.path.join(cur_path, "datasets/images/test/")
test_label_path = os.path.join(cur_path, "datasets/labels/test/")

# list files for each split
list_train = os.path.join(cur_path, "datasets/train.txt")
list_val = os.path.join(cur_path, "datasets/val.txt")
list_test = os.path.join(cur_path, "datasets/test.txt")

train_percent = 0.8
val_percent = 0.1
test_percent = 0.1


def del_file(path):
    for i in os.listdir(path):
        file_data = path + "\\" + i
        os.remove(file_data)


def mkdir():
    if not os.path.exists(train_image_path):
        os.makedirs(train_image_path)
    else:
        del_file(train_image_path)

    if not os.path.exists(train_label_path):
        os.makedirs(train_label_path)
    else:
        del_file(train_label_path)

    if not os.path.exists(val_image_path):
        os.makedirs(val_image_path)
    else:
        del_file(val_image_path)

    if not os.path.exists(val_label_path):
        os.makedirs(val_label_path)
    else:
        del_file(val_label_path)

    if not os.path.exists(test_image_path):
        os.makedirs(test_image_path)
    else:
        del_file(test_image_path)

    if not os.path.exists(test_label_path):
        os.makedirs(test_label_path)
    else:
        del_file(test_label_path)


def clearfile():
    if os.path.exists(list_train):
        os.remove(list_train)
    if os.path.exists(list_val):
        os.remove(list_val)
    if os.path.exists(list_test):
        os.remove(list_test)


def main():
    mkdir()
    clearfile()

    file_train = open(list_train, 'w')
    file_val = open(list_val, 'w')
    file_test = open(list_test, 'w')

    total_txt = os.listdir(label_original_path)
    num_txt = len(total_txt)
    list_all_txt = range(num_txt)

    num_train = int(num_txt * train_percent)
    num_val = int(num_txt * val_percent)
    num_test = num_txt - num_train - num_val

    # take num_train random indices from list_all_txt for the training set
    train = random.sample(list_all_txt, num_train)
    # the remaining indices go to val_test; sample num_val of them for val, the rest is test
    val_test = [i for i in list_all_txt if not i in train]
    val = random.sample(val_test, num_val)

    print("Training set size: {}, validation set size: {}, test set size: {}".format(
        len(train), len(val), len(val_test) - len(val)))

    for i in list_all_txt:
        name = total_txt[i][:-4]

        srcImage = image_original_path + name + '.jpg'
        srcLabel = label_original_path + name + ".txt"

        if i in train:
            dst_train_Image = train_image_path + name + '.jpg'
            dst_train_Label = train_label_path + name + '.txt'
            shutil.copyfile(srcImage, dst_train_Image)
            shutil.copyfile(srcLabel, dst_train_Label)
            file_train.write(dst_train_Image + '\n')
        elif i in val:
            dst_val_Image = val_image_path + name + '.jpg'
            dst_val_Label = val_label_path + name + '.txt'
            shutil.copyfile(srcImage, dst_val_Image)
            shutil.copyfile(srcLabel, dst_val_Label)
            file_val.write(dst_val_Image + '\n')
        else:
            dst_test_Image = test_image_path + name + '.jpg'
            dst_test_Label = test_label_path + name + '.txt'
            shutil.copyfile(srcImage, dst_test_Image)
            shutil.copyfile(srcLabel, dst_test_Label)
            file_test.write(dst_test_Image + '\n')

    file_train.close()
    file_val.close()
    file_test.close()


if __name__ == "__main__":
    main()
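After the script finishes, a quick file count per split can confirm that images and labels ended up where expected (a minimal sanity-check sketch, assuming the datasets/ layout created by txt_split.py above):

# count_split.py -- sanity check, assumes the datasets/ folders created by txt_split.py
import os

for split in ("train", "val", "test"):
    image_dir = os.path.join("datasets/images", split)
    label_dir = os.path.join("datasets/labels", split)
    print(f"{split}: {len(os.listdir(image_dir))} images, {len(os.listdir(label_dir))} labels")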

3. Writing the Training Code and Training

I am used to training from a Python script; training from the command line is also supported, and interested readers can look it up on the official site.

# train.py
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO(r'ultralytics/cfg/models/11/yolo11-seg.yaml')
    model.train(data=r'config.yaml',
                imgsz=640,
                epochs=800,
                single_cls=True,
                batch=16,
                workers=10,
                device='0',
                )
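If you would rather start from pretrained weights than train the architecture from scratch, Ultralytics also lets you build the model from the yaml and then transfer a checkpoint into it. This is a hedged sketch, not the original author's setup; yolo11n-seg.pt is assumed to be the name of an official pretrained segmentation checkpoint:

# train_pretrained.py -- hedged variant of train.py: build from the yaml, then load pretrained weights
from ultralytics import YOLO

if __name__ == '__main__':
    # assumption: yolo11n-seg.pt exists locally or can be downloaded automatically
    model = YOLO(r'ultralytics/cfg/models/11/yolo11-seg.yaml').load('yolo11n-seg.pt')
    model.train(data=r'config.yaml', imgsz=640, epochs=800, batch=16, device='0')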

Configuration file:

# config.yaml
path: ../datasets/images  # dataset root path
train: train              # train split under path
val: val                  # val split under path
test: test                # test split under path

# Classes
names:
  0: class1_name
  1: class2_name
  2: class3_name
  3: class4_name
  4: class5_name

Change path here to the location of your dataset. If txt_split.py was run in the project root directory, the paths do not need to be changed and you only need to update the class names. After editing, simply run python train.py.

Test code:

# test.py
from ultralytics import YOLO

# Load the trained model; change this to your own path
model = YOLO('runs/train/exp22/weights/best.pt')  # path to your trained weights

source = '11.jpg'  # path/filename of your own image

# Run inference with extra arguments
model.predict(source, save=True, imgsz=640)
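If you need the predictions programmatically rather than just saved images, predict() returns Results objects whose masks and boxes can be inspected. A brief sketch (attribute availability may depend on the installed Ultralytics version):

# inspect_results.py -- hedged sketch for reading prediction outputs
from ultralytics import YOLO

model = YOLO('runs/train/exp22/weights/best.pt')  # same trained weights as above
results = model.predict('11.jpg', imgsz=640)

for result in results:
    if result.masks is not None:
        # result.masks.xy is a list of polygons in pixel coordinates, one per detected instance
        print("instances:", len(result.masks.xy))
    if result.boxes is not None:
        print("classes:", result.boxes.cls.tolist())
        print("confidences:", result.boxes.conf.tolist())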

Export to an ONNX model and run it:

yolo export model=runs/segment/train11/weights/best.pt imgsz=640 format=onnx opset=12 simplify
python examples/YOLOv8-Segmentation-ONNXRuntime-Python/main.py --model runs/segment/train5n/weights/bestv8.onnx
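Before running the example script, it can help to confirm that the exported ONNX file loads and to check its input and output shapes. A minimal sketch, assuming onnxruntime is installed and that the export above wrote best.onnx next to best.pt:

# check_onnx.py -- minimal sketch; assumes onnxruntime is installed (pip install onnxruntime)
import onnxruntime as ort

session = ort.InferenceSession('runs/segment/train11/weights/best.onnx',
                               providers=['CPUExecutionProvider'])

for inp in session.get_inputs():
    print("input :", inp.name, inp.shape)
for out in session.get_outputs():
    print("output:", out.name, out.shape)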

4. Common Errors

RuntimeError: Trying to create tensor with negative dimension -37: [0, -37]
This error appeared when running YOLOv8-Segmentation-ONNXRuntime-Python; it went away after fixing the dataset configuration file.

References

Semantic segmentation: training the YOLOv11 segmentation model on your own dataset (from code download to example testing)

Note: the dataset configuration files are located under ultralytics/cfg/datasets/; if you keep getting a "can't find file" error there, just use an absolute path.

Summary

Since the project is not finished yet and most of my energy is going into it, this write-up is a bit rushed. I will gradually improve the article and fill in the missing parts later.

