Hello, I am trying to port the YoloDotNet library to VL (GitHub - dani217s/VL.YoloDotNet: A VL wrapper for YoloDotNet v2.0).
This library uses Microsoft.ML.OnnxRuntime, but it seems that gamma is not able to load the DLLs provided by Microsoft.ML.OnnxRuntime.Gpu.Windows.
Everything works if I manually copy these DLLs (onnxruntime.dll, onnxruntime_providers_cuda.dll, onnxruntime_providers_shared.dll, onnxruntime_providers_tensorrt.dll) into the gamma installation folder.
Am I missing something in the project that makes gamma unable to load onnxruntime.dll etc.?
This is a portion of Gamma error log:
2024/09/11 10:32:18.159 [CRT] (App) 0 Help YoloV10 object detection Unexpected exception during Update: Unable to find an entry point named 'OrtGetApiBase' in DLL 'onnxruntime'.
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.TypeInitializationException: The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.
---> System.EntryPointNotFoundException: Unable to find an entry point named 'OrtGetApiBase' in DLL 'onnxruntime'.
at Microsoft.ML.OnnxRuntime.NativeMethods.OrtGetApiBase()
at Microsoft.ML.OnnxRuntime.NativeMethods..cctor() in C:\a_work\1\s\csharp\src\Microsoft.ML.OnnxRuntime\NativeMethods.shared.cs:line 313
--- End of inner exception stack trace ---
Hi @dani21s, unfortunately you have it right: gamma won't pick up onnxruntime.dll without modification because it's a native DLL. There is a little trick you can try. See here: https://thegraybook.vvvv.org/reference/libraries/referencing.html#unmanagednative-dependencies
where it explains that you can set a path to those DLLs when you open vvvv so it picks them up.
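In case it helps to see what the "set a path" trick amounts to: Windows resolves native DLLs for P/Invoke by searching (among other places) the process PATH, so prepending the package's native folder to PATH before the first ONNX Runtime call is made has the same effect as copying the DLLs next to vvvv.exe. A minimal sketch, assuming a conventional `runtimes\win-x64\native` layout (the folder path is an assumption, adjust to your package):

```csharp
using System;
using System.IO;

// Sketch only: prepend the package's native folder to the process PATH
// before the first call into Microsoft.ML.OnnxRuntime triggers its
// DllImport. The folder below is a placeholder for your package layout.
static class NativePathSetup
{
    public static void Register()
    {
        var nativeDir = Path.Combine(
            AppContext.BaseDirectory, "runtimes", "win-x64", "native");

        var path = Environment.GetEnvironmentVariable("PATH") ?? "";
        if (!path.Contains(nativeDir))
        {
            // Affects only the current process; native LoadLibrary calls
            // made afterwards will search this folder first.
            Environment.SetEnvironmentVariable(
                "PATH", nativeDir + Path.PathSeparator + path);
        }
    }
}
```

This must run before any ONNX Runtime type is touched, otherwise the runtime has already been resolved (or has already failed to resolve).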
One small note about your important efforts: how about using the DirectML runtime so the acceleration works on a wider range of hardware? The GPU package is CUDA/NVIDIA-only.
Hi @Hadasi , thanks for the reply.
Sorry, it is not possible to reference onnxruntime.dll and the other native libraries included in the NuGet packages.
I followed your suggestion to use DirectML, but the problem of native libraries remains: in this case I have to copy onnxruntime.dll and DirectML.dll into the gamma folder.
The problem does not appear when exporting the application.
I saw that something similar has already been addressed with onnxruntime (BUG with native .NET lib dependencies in editor), and if a solution could be found it would be possible to publish a VL.YoloDotNet NuGet package.
@dani21s the DML suggestion was just about enabling a wider user base to get acceleration, but yes, the problem will persist. @gregsn what could you recommend developers do in the meanwhile? Have a lib-native folder in your NuGet?
Indeed, you seem to face the same issue as was described in the other thread. Back there the solution was that we internally check whether Microsoft.ML.OnnxRuntime is referenced by any VL document or package, and if so trigger a native library load which ensures that the native DLLs are loaded from the package folder and not from the Windows/System32 folder. Now it seems we wrote that check a little too strictly, testing for the presence of Microsoft.ML.OnnxRuntime only instead of widening the query a bit so that, for example, Microsoft.ML.OnnxRuntime.Managed is also included. A possible workaround for you would be to reference Microsoft.ML.OnnxRuntime from your main VL document directly. This should trigger the correct native library load call. Once referenced, you will need to restart vvvv to see the effect. Please tell us if that workaround works for you.
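For anyone who wants to understand what that internal "native library load" does: once a native module with a given name is loaded into the process, subsequent DllImports with that name resolve to it rather than searching Windows/System32. So explicitly loading the package's copy first is enough. A rough sketch of that idea, where the native folder path is an assumption, not vvvv's actual internal code:

```csharp
using System.IO;
using System.Runtime.InteropServices;

// Sketch: force the onnxruntime.dll from a known folder to be loaded
// first, so later DllImports bind to it instead of a copy found via the
// default Windows search order (e.g. in System32).
// "packageNativeDir" is a placeholder for wherever the NuGet package
// unpacked its runtimes/win-x64/native content.
static class OnnxRuntimePreload
{
    public static void EnsureLoaded(string packageNativeDir)
    {
        var dll = Path.Combine(packageNativeDir, "onnxruntime.dll");
        if (File.Exists(dll))
            NativeLibrary.Load(dll); // throws if the native load fails
    }
}
```

As with the PATH trick, this only works if it runs before any Microsoft.ML.OnnxRuntime type initializer fires.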
In upcoming versions of vvvv we'll relax the check a little so that the presence of Microsoft.ML.OnnxRuntime.Managed triggers the native lib load as well.
Hi @Elias , with 6.7.148, after referencing Microsoft.ML.OnnxRuntime.Gpu.Windows in the main VL document, everything works as expected.
But I have another question: as suggested by @Hadasi, to allow a wider user base to get acceleration I started integrating the DirectML execution provider (everything is in the develop-directml branch), which refers to Microsoft.ML.OnnxRuntime.DirectML, which in turn depends on Microsoft.AI.DirectML.
In this case the onnxruntime.dll from Microsoft.ML.OnnxRuntime.DirectML is resolved correctly, but DirectML.dll (from Microsoft.AI.DirectML) is not. I tried to put this library in the .\lib folder and also in the .\runtimes\etc. folders, but the only working solution is to put DirectML.dll in the vvvv.exe folder.
Is there anything else I can try?
Thanks a lot for your support!
Sadly, these packages don't deliver those native DLLs in the corresponding runtimes folders but instead use some custom mechanism which probably only works when used from within Visual Studio. So you'll have to come up with your own solution for loading those native DLLs when running inside the vvvv editor, and probably also when exporting an application. Well, chances are the export might just work, since it goes through the normal dotnet build system, but you'd still have to test.
Possible solutions:
a) Copy those DLLs into your own package under runtimes/win-x64/native and remove the dependency on the original package
b) Keep the dependency, but write your own native DLL loader. See my attached patch: if you place that file in your package and simply reference it from your main document, it should load the DLL from the custom package location when the user patch starts.
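For readers who don't have the attachment, a loader of that kind is typically built on `NativeLibrary.SetDllImportResolver`: it hooks DLL resolution for the managed assembly that P/Invokes into DirectML.dll and redirects it to the package's copy. A hedged sketch of the idea only — folder layout and wiring are assumptions, not the actual patch:

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

// Sketch of a custom native DLL loader: register a resolver on the
// managed assembly whose DllImports reference DirectML.dll, pointing it
// at the copy shipped inside the package. Names/paths are illustrative.
static class DirectMLLoader
{
    public static void Register(Assembly onnxAssembly, string nativeDir)
    {
        NativeLibrary.SetDllImportResolver(onnxAssembly,
            (libraryName, assembly, searchPath) =>
        {
            // Try the package's native folder first.
            var candidate = Path.Combine(nativeDir, libraryName);
            if (File.Exists(candidate) &&
                NativeLibrary.TryLoad(candidate, out var handle))
                return handle;

            // IntPtr.Zero falls back to the default resolution logic.
            return IntPtr.Zero;
        });
    }
}
```

Note that a resolver can only be registered once per assembly, and it must be set before the first P/Invoke into that assembly's native dependencies is made.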
Hi @Elias ,
solution b) is very powerful and cool.
I was not aware of the functions included in AppHost; they are also very useful for loading default models from the package files.
I hope to be able to release a NuGet package in the next few weeks, personal commitments permitting.
Many thanks to you and also @Hadasi.
DS