Issues: microsoft/onnxruntime

Issues list

Please add bucketize op (labels: feature request)
#8297 opened Jul 5, 2021 by xuhuisheng
More supported ai.onnx operators for WebGL backend (labels: feature request, platform:web)
#11872 opened Jun 16, 2022 by FabioRomagnolo
High Output Difference between ONNX model with different optimizer settings (labels: ep:CUDA)
#18959 opened Dec 29, 2023 by okorokovnikita
Linux CI pipelines can't test unreleased versions of ONNX (labels: core runtime)
#11693 opened May 31, 2022 by garymm
How to force specific type of operators to run on CPU? (labels: ep:CUDA, feature request)
#14864 opened Mar 1, 2023 by SineStriker
OrtCustomOp does not support all types of attributes (labels: core runtime, feature request)
#6409 opened Jan 22, 2021 by xadupre
[Web] executionProviders chain for webnn fallback does not work on init error (labels: ep:WebNN, platform:web, platform:windows, stale)
#20729 opened May 19, 2024 by sansmoraxz
I want use gpu on my jetson nx2 platform with c++, how should i do? (labels: platform:jetson)
#11240 opened Apr 17, 2022 by xinsuinizhuan
Missing Bfloat16 support in DLPack converter code (labels: training)
#9920 opened Dec 2, 2021 by abhinavPodili
Unable to build onnxruntime with "--build_wheel" and "--enable_pybind" options (labels: build)
#6841 opened Feb 27, 2021 by ashutoshnaik
Conv2d_transpose requires asymmetric padding which the CUDA EP currently does not support (labels: core runtime, ep:CUDA, feature request)
#11312 opened Apr 22, 2022 by ultimatedigiman
Large Memory Allocations When Loading RandomForestRegressor Model (labels: api)
#7067 opened Mar 19, 2021 by mejas
import torch greatly increase ONNX model gpu memory (labels: api, stale)
#8823 opened Aug 24, 2021 by lck1201
Make ONNX graphs with fused ONNXRuntime plugins runnable by TensorRT execution provider? (labels: core runtime, ep:TensorRT, stale)
#10509 opened Feb 9, 2022 by borisfom
DnnlExecutionProvider is not visible in python API (labels: ep:oneDNN)
#10275 opened Jan 13, 2022 by magicaltoast
float16 support in web runtimes (labels: platform:web, stale)
#9758 opened Nov 14, 2021 by josephrocca
Why CUDAExecutionProvider doesn't support ConvInteger Op? (labels: ep:CUDA, feature request)
#5127 opened Sep 11, 2020 by lewisword
onnxruntime-web is 11-17x times slower than native inference (labels: platform:web)
#11181 opened Apr 12, 2022 by CanyonWind