Conversation

SahilChachra (Contributor) commented Apr 11, 2022

While running inference with a YOLOv3 ONNX model, I got the following error:

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

On checking the code in models/common.py, I found that after line 315 the cuda = torch.cuda.is_available() check was missing, and no providers were passed when creating the ONNX Runtime session. Referring to the ONNX inference code in YOLOv5 (yolov5/models/common.py), I added the following lines:

cuda = torch.cuda.is_available()
check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
import onnxruntime
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
session = onnxruntime.InferenceSession(w, providers=providers)

I have tested the code on both CPU and GPU, and inference now runs correctly.
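The CUDA-dependent choices in the patch can be sketched as a small standalone helper; the function name here is illustrative, not part of the PR:

```python
def ort_package_and_providers(cuda: bool):
    """Mirror the two decisions the patch makes from CUDA availability:
    which pip package to require and which execution providers to request.

    ONNX Runtime tries providers in the order given, so listing
    CUDAExecutionProvider first means the GPU is used when available,
    with CPUExecutionProvider always kept as a fallback.
    """
    package = "onnxruntime-gpu" if cuda else "onnxruntime"
    providers = (["CUDAExecutionProvider", "CPUExecutionProvider"]
                 if cuda else ["CPUExecutionProvider"])
    return package, providers
```

With cuda=True this yields ('onnxruntime-gpu', ['CUDAExecutionProvider', 'CPUExecutionProvider']); with cuda=False it falls back to the CPU-only package and provider, which is what makes the same code path work on machines without a GPU.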

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Improves ONNX Runtime inference for the YOLOv3 model by selectively enabling GPU support.

📊 Key Changes

  • Detected whether CUDA (GPU support) is available and set the variable cuda accordingly.
  • Modified requirement checks to use onnxruntime-gpu when CUDA is available, otherwise defaulting to onnxruntime.
  • Specified ONNX Runtime inference providers to include GPU support if CUDA is present.

🎯 Purpose & Impact

  • Increased Flexibility: By automatically detecting GPU availability, the model can now leverage accelerated computation when possible, leading to performance improvements.
  • Smoother User Experience: Users no longer need to manually specify whether they're using a GPU; the system intelligently adapts, simplifying the setup process.
  • Potential Impact: Improved performance and usability can enhance the applicability of YOLOv3 across various environments, benefiting those requiring efficient real-time object detection.


@github-actions github-actions bot left a comment


👋 Hello @SahilChachra, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with upstream/master. If your PR is behind upstream/master an automatic GitHub actions rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov3.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git merge upstream/master
git push -u origin -f
  • ✅ Verify all Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee

@glenn-jocher glenn-jocher changed the title Fixed Onnx inference code Fix ONNX inference code Apr 11, 2022
@glenn-jocher glenn-jocher merged commit ae37b2d into ultralytics:master Apr 11, 2022
glenn-jocher (Member) commented:

@SahilChachra PR is merged. Thank you for your contributions to YOLOv3 🚀 and Vision AI ⭐
