Serving Deep Learning Models with TensorFlow Serving – part 2

In case you missed it, see part 1: Serving Deep Learning Models with TensorFlow Serving – part 1

Now that our model from part 1 is ready, we can serve it with TensorFlow Serving!

First, let’s pull the Docker image of TensorFlow Serving:

docker pull tensorflow/serving

Almost there! Now we just need to run the container with a few parameters: the path of our model on the host (source), the path where it will be mounted inside the container (target), and the model name.

docker run -t --rm -p 8501:8501 --mount type=bind,source=I:\PROJECTS\tensorflow_serving\linear_model,target=/models/linear_model -e MODEL_NAME=linear_model tensorflow/serving
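If everything went well, the container is now up and serving the model. As a quick sanity check, we can ask the server for the model’s status over its REST API (a minimal sketch, assuming the container is running and port 8501 is mapped as above):

curl http://localhost:8501/v1/models/linear_model

The server should answer with a small JSON document reporting the loaded version of the model and a state of AVAILABLE.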

Voila! Our model can now be requested at the URL: http://localhost:8501/v1/models/linear_model:predict

Note: to request a prediction from our model, you must send an HTTP POST request with the Content-Type header set to application/json, and the input data must be placed inside the instances field of the JSON body. I created a YAML file that can be used as an example with VSCode and the apitester extension.
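For reference, here is what such a request looks like with plain curl instead of the VSCode extension (a minimal sketch: the payload [[1.0], [2.0]] assumes a simple linear model that takes a single numeric feature, as built in part 1; adjust the instances to match your model’s input signature):

curl -X POST http://localhost:8501/v1/models/linear_model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0], [2.0]]}'

If the request matches the model’s signature, the server replies with a JSON body containing a predictions field, with one prediction per instance.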
