Nice! That means you've successfully:

  1. ✅ Loaded a YOLO11 pose model (yolo11m-pose.pt)
  2. ✅ Exported it to ONNX format (yolo11m-pose.onnx)
  3. ✅ Converted the ONNX model to a TensorFlow SavedModel (yolo11m-pose_tf/)
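
(For reference, both export steps can also be reproduced straight from the Ultralytics API. This is just a sketch assuming the ultralytics package is installed; note that its built-in SavedModel export writes to its own output folder rather than yolo11m-pose_tf/.)

from ultralytics import YOLO

model = YOLO("yolo11m-pose.pt")
model.export(format="onnx")         # writes yolo11m-pose.onnx
model.export(format="saved_model")  # writes a TensorFlow SavedModel directory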

🧠 What's Next?

Depending on your goal, you’ve got a few options:


🟢 If you want to use this model in a TensorFlow app:

You can now load and run it like this:

import tensorflow as tf

model = tf.saved_model.load("yolo11m-pose_tf")
# Use model.signatures to see available functions
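
Once it's loaded, a quick dummy forward pass will confirm the input/output shapes. A minimal sketch: the signature key, the input name ("images" here), and the 640x640 NHWC layout are assumptions, so check what the printed signature actually reports:

import numpy as np
import tensorflow as tf

model = tf.saved_model.load("yolo11m-pose_tf")
infer = model.signatures["serving_default"]   # key may differ; print(model.signatures) to check
print(infer.structured_input_signature)       # shows the expected input name, shape, and dtype

# Assumed input: a single 640x640 NHWC float32 tensor named "images"; swap in what the printout reports
dummy = np.zeros((1, 640, 640, 3), dtype=np.float32)
outputs = infer(images=tf.constant(dummy))
for name, tensor in outputs.items():
    print(name, tensor.shape)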


📱 If you're deploying to mobile (TFLite):

You can convert it to TFLite:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("yolo11m-pose_tf")
tflite_model = converter.convert()

with open("yolo11m-pose.tflite", "wb") as f:
    f.write(tflite_model)

You can now run this .tflite model on Android/iOS with TensorFlow Lite.
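
Before wiring it into an app, it's worth sanity-checking the .tflite file on your desktop with TensorFlow's built-in interpreter. A quick sketch (the exact input shape and dtype come from whatever the converter produced, so feed exactly what get_input_details() reports):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo11m-pose.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]["shape"], input_details[0]["dtype"])  # confirm what the model expects

# Feed a dummy tensor matching the reported shape/dtype, then run one inference
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)  # raw pose output; decoding depends on your export settings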


📸 If you're using it in your webcam app (from your GP project):

You can plug this TFLite model into your OpenCV pipeline (via the TensorFlow Lite runtime, alongside or instead of MediaPipe) to get keypoints and calculate the distances between them.
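
Here's a rough sketch of that wiring: an OpenCV webcam loop feeding the TFLite interpreter. It assumes the model takes a float32 NHWC input (e.g. 640x640, values scaled to 0-1) and leaves the keypoint decoding/NMS step as a placeholder, since the raw output layout depends on how the model was exported:

import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo11m-pose.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
size = inp["shape"][1]  # assumes a square NHWC input, e.g. 640

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess: BGR -> RGB, resize to the model's input size, scale to [0, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    blob = cv2.resize(rgb, (size, size)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], blob[np.newaxis, ...])
    interpreter.invoke()

    raw = interpreter.get_tensor(out["index"])
    # TODO: decode `raw` into per-person keypoints (confidence filtering / NMS),
    # map them back to frame coordinates, and compute your distances from there.

    cv2.imshow("pose", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()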

Let me know which direction you're going — mobile deployment? Web-based inference? Real-time webcam overlay? I can help wire that part up.