tensorflow-serving Docker model deployment (MNIST example)

Contents

Preface
I. Environment
II. Installing tensorflow-serving with Docker
III. Single-model deployment (official demo saved_model_half_plus_two_cpu)
  1. Deploying the model with Docker
  2. Prediction with Python requests
IV. Multi-model deployment (MNIST example)
  1. The models.config configuration file
  2. Deploying the models with Docker
  3. Prediction with Python requests
  4. Prediction over gRPC
V. Inspecting TensorFlow model parameters
  1. saved_model_cli
  2. Fetching the model metadata
Summary

Preface
Once a TensorFlow model has been trained, deploying it to production requires a model-serving framework, and tensorflow-serving is one of the most widely used. Below is a brief introduction to deploying tensorflow-serving with Docker.
I. Environment
Ubuntu 18.04.6 LTS 64-bit (under WSL)
docker 20.10.21
Python 3.9.13
tensorflow 2.11.0
tensorflow-serving 2.5.1
II. Installing tensorflow-serving with Docker

```
docker pull tensorflow/serving   # pull the latest tensorflow-serving image
service docker start             # start the Docker daemon
```

Note: docker pull tensorflow/serving:2.5.1 pulls a specific version; without a tag, the latest version is pulled.
III. Single-model deployment (official demo saved_model_half_plus_two_cpu)

The tensorflow-serving source tree ships a number of pre-trained test models; saved_model_half_plus_two_cpu is used here as the example.
tensorflow-serving repository: github.com/tensorflow/serving
The model directory layout is as follows:
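For reference, a model served by tensorflow-serving sits under a numbered version directory. The half_plus_two test model in the official repository looks roughly like this (sketched from the standard SavedModel layout; exact file names may differ):

```
saved_model_half_plus_two_cpu/
└── 00000123/
    ├── assets/
    ├── saved_model.pb
    └── variables/
        ├── variables.data-00000-of-00001
        └── variables.index
```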
1. Deploying the model with Docker

```
docker run -p 8501:8501 \
  --mount type=bind,source=/mnt/d/projects/Tests/tensorflow/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,target=/tensorflow/models/half_plus_two \
  -e MODEL_NAME=half_plus_two \
  -t tensorflow/serving &
```

Notes:
-p 8501:8501 maps container port 8501 to host port 8501; 8501 is tensorflow-serving's HTTP port, which exposes the RESTful API.
source is the absolute path to the model on the host, up to (but not including) the version directory.
target is the directory the model is mounted to inside the container.
-e passes an environment variable, here MODEL_NAME=half_plus_two, the name the model is served under.
-t allocates a pseudo-TTY; tensorflow/serving is the image to run.
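To check that the container is actually serving the model, you can query the model-status endpoint; a minimal sketch, assuming the container above is reachable from the client machine (replace the host with your own). As an aside not covered in the original article: the official image's default model base path is /models, so if the status request reports the model as not found, mounting to target=/models/half_plus_two, or additionally passing -e MODEL_BASE_PATH=/tensorflow/models, may be required.

```python
import requests

# Query tensorflow-serving's model status endpoint for half_plus_two.
res = requests.get("http://192.168.2.110:8501/v1/models/half_plus_two")
print(res.text)  # the version should be reported with state "AVAILABLE"
```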
2. Prediction with Python requests

```python
import requests
import json

pdata = {"instances": [1, 2, 3]}
param = json.dumps(pdata)
res = requests.post('http://192.168.2.110:8501/v1/models/half_plus_two:predict', data=param)
print(res.text)
```

Output:
{ "predictions": [2.5, 3.0, 3.5 ] } 四、多模型部署 (以mnist为例)前面介绍了单模型部署的方法,如果有多个模型我们该怎么办,如果每加一个模型要新开一个容器,那样未免也太麻烦了,这里介绍一个一次部署多个模型的方法。
1. The models.config configuration file

First, a configuration file is needed to describe the models to serve, for example:
```
model_config_list {
  config {
    name: "half_plus_two"
    base_path: "/tensorflow/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu"
    model_platform: "tensorflow"
    model_version_policy {
      specific {
        versions: 123
      }
    }
  }
  config {
    name: "mnist"
    base_path: "/tensorflow/models/mnist"
    model_platform: "tensorflow"
    model_version_policy {
      specific {
        versions: 1
      }
    }
  }
}
```

Notes:
The file configures two models: half_plus_two from the previous section, and mnist.
name is the name the model is served under.
base_path is the model directory as seen inside the container (i.e. under the target of the bind mount).
versions selects which model version(s) to serve.
2. Deploying the models with Docker

```
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/mnt/d/projects/Tests/tensorflow/,target=/tensorflow \
  -t tensorflow/serving \
  --model_config_file=/tensorflow/models.config \
  > /mnt/d/projects/Tests/tensorflow/model.log 2>&1 &
```

Notes:
-p 8500:8500 -p 8501:8501 maps container ports 8500 and 8501 to the same host ports; 8500 is tensorflow-serving's gRPC port and 8501 is its RESTful (HTTP) port.
source is the root directory containing the models and models.config.
-t allocates a pseudo-TTY; tensorflow/serving is the image to run.
--model_config_file points to the model configuration file (as seen inside the container).
> ... 2>&1 redirects standard output and standard error to a log file.
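As a quick sanity check (a sketch, assuming the same host and ports as above), both models should now be reported as available by the model-status endpoint:

```python
import requests

# Query tensorflow-serving's model status endpoint for each deployed model.
for name in ("half_plus_two", "mnist"):
    res = requests.get(f"http://192.168.2.110:8501/v1/models/{name}")
    print(name, res.text)
```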
3. Prediction with Python requests

Only the mnist prediction is demonstrated here; the other model works the same way.
```python
import tensorflow as tf
from tensorflow import keras
from keras import layers, optimizers, datasets
import requests
import json
import numpy

# Load the MNIST dataset
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets', x.shape, y.shape, x.min(), y.min())

# Take one sample from the dataset to use for prediction
idx = 1234
img = x_val[idx, :, :]
label = y_val[idx]

# Reshape the sample image to the model's input shape
img = tf.cast(img.reshape(-1, 784), tf.float32)
# Convert the tensor back to a numpy array
img = numpy.asarray(img)

# Build the tensorflow-serving request body. signature_name and the input key
# (dense_input) are defined by the model itself -- see the section on
# inspecting model parameters below.
pdata = {"signature_name": "serving_default", "inputs": {"dense_input": img.tolist()}}
param = json.dumps(pdata)
header = {"content-type": "application/json"}

# Send the prediction request
res = requests.post("http://192.168.2.110:8501/v1/models/mnist:predict", data=param, headers=header)
print(res.json()['outputs'][0])

# The result is a vector of 10 floats, one score per digit;
# the index of the largest score is the predicted digit.
float_vals = numpy.array(res.json()['outputs'][0])
prediction = numpy.argmax(float_vals)
print(prediction)
print(label)
```

Output:
```
datasets (60000, 28, 28) (60000,) 0 0
[12.0071669, 3.22835922, -12.5186815, -17.6188622, -13.0628424, -1.33576834, 9.98286, -6.54291344, 53.3233299, -14.6254959]
8
8
```

4. Prediction over gRPC

```python
import numpy
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
from keras.datasets import mnist

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(numpy.shape(x_train))

# Take one sample from the test set to use for prediction
idx = 1234
img = x_test[idx, :, :]
label = y_test[idx]

# Reshape the sample image to the model's input shape
img = tf.cast(img.reshape(-1, 784), tf.float32)
# Convert the tensor back to a numpy array
img = numpy.asarray(img)

# Basic tensorflow-serving connection parameters
host = '192.168.2.110'
port = 8500                # the gRPC port
model_name = 'mnist'       # model name
model_version = 1
request_timeout = 20

# tensorflow-serving gRPC address
url = '%s:%s' % (host, port)

# Convert the sample image into a TensorProto
features_tensor_proto = tf.make_tensor_proto(img, dtype=tf.float32, shape=img.shape)

channel = grpc.insecure_channel(url)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the prediction request
request = predict_pb2.PredictRequest()
# Model name
request.model_spec.name = model_name
# Model version
request.model_spec.version.value = model_version
# Input tensor; the key dense_input is defined by the model -- see the section
# on inspecting model parameters below
request.inputs['dense_input'].CopyFrom(features_tensor_proto)
# Signature name
request.model_spec.signature_name = 'serving_default'

# Run the prediction
result = stub.Predict(request, request_timeout)

# Read the result; the output key dense_2 is also defined by the model
response = numpy.array(result.outputs['dense_2'].float_val)
# The index of the largest score is the predicted digit
prediction = numpy.argmax(response)
print(result)
print(prediction)
print(label)
```

Output:
```
(60000, 28, 28)
(28, 28)
outputs {
  key: "dense_2"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 10
      }
    }
    float_val: 0.07472114264965057
    float_val: 9.907261848449707
    float_val: -38.84326934814453
    float_val: 158.68698120117188
    float_val: -41.261207580566406
    float_val: -28.248952865600586
    float_val: -19.231698989868164
    float_val: -14.337512969970703
    float_val: -44.73484420776367
    float_val: 2.877924680709839
  }
}
model_spec {
  name: "minist"
  version {
    value: 1
  }
  signature_name: "serving_default"
}
3
3
```

V. Inspecting TensorFlow model parameters

After the sections above you should have a feel for the prediction workflow. The key pieces of information are the model's input/output tensors and its signature; below are two ways to look them up.
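As an aside not covered in the original article, the signature can also be read directly with the TensorFlow Python API; a minimal sketch, assuming the mnist SavedModel path used earlier:

```python
import tensorflow as tf

# Load the SavedModel locally and print its serving signature's inputs and outputs.
model = tf.saved_model.load("/mnt/d/projects/Tests/tensorflow/models/mnist/1")
sig = model.signatures["serving_default"]
print(sig.structured_input_signature)
print(sig.structured_outputs)
```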
1. saved_model_cli

```
saved_model_cli show --dir model-savedmodel --all
```

Output:
```
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['dense_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 784)
        name: serving_default_dense_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_2'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 10)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
2022-11-24 14:55:37.147887: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

Concrete Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 784), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          dense_input: TensorSpec(shape=(None, 784), dtype=tf.float32, name='dense_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          dense_input: TensorSpec(shape=(None, 784), dtype=tf.float32, name='dense_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 784), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          dense_input: TensorSpec(shape=(None, 784), dtype=tf.float32, name='dense_input')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 784), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          dense_input: TensorSpec(shape=(None, 784), dtype=tf.float32, name='dense_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          dense_input: TensorSpec(shape=(None, 784), dtype=tf.float32, name='dense_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 784), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
```

2. Fetching the model metadata

The metadata describes the model's basic information. It can be fetched with Python requests as follows:
```python
import requests

res = requests.get("http://192.168.2.110:8501/v1/models/mnist/metadata")
print(res.text)
```

Output:
{ "model_spec":{ "name": "minist", "signature_name": "", "version": "1" } , "metadata": {"signature_def": { "signature_def": { "serving_default": { "inputs": { "dense_input": { "dtype": "DT_FLOAT", "tensor_shape": { "dim": [ { "size": "-1", "name": "" }, { "size": "784", "name": "" } ], "unknown_rank": false }, "name": "serving_default_dense_input:0" } }, "outputs": { "dense_2": { "dtype": "DT_FLOAT", "tensor_shape": { "dim": [ { "size": "-1", "name": "" }, { "size": "10", "name": "" } ], "unknown_rank": false }, "name": "StatefulPartitionedCall:0" } }, "method_name": "tensorflow/serving/predict" }, "__saved_model_init_op": { "inputs": {}, "outputs": { "__saved_model_init_op": { "dtype": "DT_INVALID", "tensor_shape": { "dim": [], "unknown_rank": true }, "name": "NoOp" } }, "method_name": "" } } } } }总结
This article covered installing and deploying tensorflow-serving under Docker and making predictions against it, specifically:
Installing the tensorflow-serving Docker image
Single-model and multi-model deployment with tensorflow-serving
Making predictions against tensorflow-serving, via Python requests and via gRPC
Two ways to inspect a TensorFlow model's signature information

If you found this helpful or enjoyed the article, please follow the author; your follow keeps the writing going. If you have any questions, leave a comment or send a private message and the author will reply as soon as possible.
[Intermittent effort and muddling along just reset all the effort that came before.]
Reprinting is welcome; please credit the source: blog.csdn.net/xxm524/article/details/128060790