The models behind Azure Custom Vision are quite different from YOLOv8. You can export Azure Custom Vision models as TensorFlow/PyTorch models, but I don't think it is good practice to train a model in the Custom Vision framework and then transfer it to a YOLOv8 model.
Best practice would be to fine-tune a YOLOv8 model in Azure ML Studio (GPU environment) with a custom dataset.
https://github.com/airacingtech/YOLOv8-Fine-Tune
https://yolov8.org/how-to-use-fine-tune-yolov8/
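For illustration, a minimal fine-tuning sketch using the Ultralytics package could look like this; the data.yaml path, model variant, and hyperparameters are placeholders for your own setup:

from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint (nano variant shown)
model = YOLO("yolov8n.pt")

# Train on a custom dataset; data.yaml is an assumed dataset config
# listing train/val image paths and class names
model.train(
    data="data.yaml",
    epochs=50,
    imgsz=640,
    device=0,  # GPU index; use "cpu" if no GPU is available
)
# The best weights are saved under runs/detect/train/weights/best.pt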
Fine-tuning scripts can be run on an Azure ML GPU compute or a local GPU. After fine-tuning, we can register the model in the Azure ML model registry.
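As a rough sketch with the Azure ML Python SDK v2, registering the fine-tuned weights could look like the following; the subscription, workspace, path, and model names are placeholders:

from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers)
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Register the folder that contains the fine-tuned weights (e.g. best.pt)
registered_model = ml_client.models.create_or_update(
    Model(
        path="runs/detect/train/weights",
        type=AssetTypes.CUSTOM_MODEL,
        name="yolov8-finetuned",
        description="YOLOv8 fine-tuned on a custom dataset",
    )
)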
https://learn.microsoft.com/en-us/azure/machine-learning/tutorial-deploy-model?view=azureml-api-2
You can follow the tutorial above on authoring a scoring script to deploy the fine-tuned model as an online or batch endpoint in the Azure ML environment instead.
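For example, a batch endpoint deployment might look roughly like the sketch below, reusing the ml_client and registered_model from the registration sketch above; the endpoint name, compute cluster, environment image, and score.py (the scoring script described next) are assumptions:

from azure.ai.ml.entities import (
    BatchDeployment,
    BatchEndpoint,
    CodeConfiguration,
    Environment,
)

# Create the batch endpoint (placeholder name)
ml_client.batch_endpoints.begin_create_or_update(
    BatchEndpoint(name="yolov8-batch")
).result()

# Deploy the registered model with the scoring script described below
deployment = BatchDeployment(
    name="yolov8-batch-deploy",
    endpoint_name="yolov8-batch",
    model=registered_model,
    code_configuration=CodeConfiguration(code="scoring", scoring_script="score.py"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
        conda_file="scoring/conda.yaml",
    ),
    compute="gpu-cluster",  # assumed name of an existing AmlCompute cluster
    instance_count=1,
    mini_batch_size=10,
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()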
Scoring on the endpoint happens in two steps (a complete YOLOv8-specific sketch of the scoring script follows the list):
- Load the fine-tuned model from the model registry
import os

def init():
    global model
    # AZUREML_MODEL_DIR is an environment variable created during deployment
    # The path "model" is the name of the registered model's folder
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    # load the model
    model = load_model(model_path)
- Load the dataset and run predictions against the endpoint in mini-batches
import pandas as pd
from typing import List, Any, Union

def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    results = []
    for file in mini_batch:
        (...)
    return pd.DataFrame(results)
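Putting the two steps together, a YOLOv8-specific scoring script (score.py) could look roughly like this sketch; the "model"/"best.pt" layout and the returned columns are assumptions about how the model was registered and what output you need:

import os
from typing import Any, List, Union

import pandas as pd
from ultralytics import YOLO


def init():
    global model
    # AZUREML_MODEL_DIR points to the registered model's folder inside the deployment;
    # "model"/"best.pt" is an assumed layout for the fine-tuned YOLOv8 weights
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model", "best.pt")
    model = YOLO(model_path)


def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    results = []
    for file in mini_batch:
        # Run inference on one image and collect each detection as a row
        prediction = model(file)[0]
        for box in prediction.boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            results.append(
                {
                    "file": os.path.basename(file),
                    "class_id": int(box.cls[0]),
                    "confidence": float(box.conf[0]),
                    "x1": x1,
                    "y1": y1,
                    "x2": x2,
                    "y2": y2,
                }
            )
    return pd.DataFrame(results)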
Hope it helps.
Thank you