I've been trying to find a tutorial on how to make YOLO in C# use the GPU instead of the CPU. Everything I find says it works on both CPU and GPU, but nobody ever explains how to actually enable the GPU; it always runs on the CPU for me. Here's my code with YOLOv5 in C#. It doesn't really matter to me whether it uses YOLOv5, just that it uses the GPU. I found a tutorial, but I can't even find the download for NVIDIA cuDNN v7.6.3 for CUDA 10.1. It's very unclear how to use this with the GPU. Please help me :D
var image = pictureBox1.Image;
var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx");
List<YoloPrediction> predictions = scorer.Predict(image);
var graphics = Graphics.FromImage(image);
foreach (var prediction in predictions) // iterate predictions to draw results
{
    using (MemoryStream ms = new MemoryStream())
    {
        pictureBox1.Image.Save(ms, ImageFormat.Png);
        prediction.Label.Color = Color.FromArgb(255, 255, 0, 0);
        double score = Math.Round(prediction.Score, 2);
        graphics.DrawRectangles(new Pen(prediction.Label.Color, 1),
            new[] { prediction.Rectangle });
        var (x, y) = (prediction.Rectangle.X - 3, prediction.Rectangle.Y - 23);
        graphics.DrawString($"{prediction.Label.Name} ({score})",
            new Font("Consolas", 16, GraphicsUnit.Pixel), new SolidBrush(prediction.Label.Color),
            new PointF(x, y));
        pictureBox1.Image = image;
    }
}
Before you use the scorer, you need to set up session options.
//https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
bool initResult = false;
var cudaProviderOptions = new Microsoft.ML.OnnxRuntime.OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["device_id"] = "0";
providerOptionsDict["gpu_mem_limit"] = "2147483648";
providerOptionsDict["arena_extend_strategy"] = "kSameAsRequested";
/*
cudnn_conv_algo_search
The type of search done for cuDNN convolution algorithms.
Value Description
EXHAUSTIVE (0) expensive exhaustive benchmarking using cudnnFindConvolutionForwardAlgorithmEx
HEURISTIC (1) lightweight heuristic based search using cudnnGetConvolutionForwardAlgorithm_v7
DEFAULT (2) default algorithm using CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM
Default value: EXHAUSTIVE
*/
providerOptionsDict["cudnn_conv_algo_search"] = "DEFAULT";
/*
do_copy_in_default_stream
Whether to do copies in the default stream or use separate streams. The recommended setting is true. If false, there are race conditions and possibly better performance.
Default value: true
*/
providerOptionsDict["do_copy_in_default_stream"] = "1";
/*
cudnn_conv_use_max_workspace
See "Tuning performance for convolution-heavy models" in the ONNX Runtime docs for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when used via the C API.
Default value: 0
*/
providerOptionsDict["cudnn_conv_use_max_workspace"] = "1";
/*
cudnn_conv1d_pad_to_nc1d
See "Convolution input padding in the CUDA EP" in the ONNX Runtime docs for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when used via the C API.
Default value: 0
*/
providerOptionsDict["cudnn_conv1d_pad_to_nc1d"] = "1";
cudaProviderOptions.UpdateOptions(providerOptionsDict);
options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
if (options != null)
{
    // check that the YOLO model file is accessible
    if (File.Exists(yoloModelFile))
    {
        scorer = new YoloScorer<YoloCocoP5Model>(yoloModelFile, options);
        initResult = true;
    }
    else
    {
        DebugMessage("Yolo model ONNX file (" + yoloModelFile + ") is missing!\r\n", 2);
    }
}
else
{
    DebugMessage("Yolo instance initializing error! Session options are empty!\r\n", 2);
}
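If the CUDA provider cannot be created (wrong CUDA/cuDNN version, or the CPU-only ONNX Runtime package installed), session creation throws rather than silently falling back to the CPU. A minimal sketch of initialization with a CPU fallback, assuming the Microsoft.ML.OnnxRuntime.Gpu package is referenced:

using System;
using Microsoft.ML.OnnxRuntime;

SessionOptions options;
try
{
    // Try to create session options that use the CUDA execution provider on GPU 0.
    options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
}
catch (OnnxRuntimeException ex)
{
    // CUDA or cuDNN is missing/mismatched: fall back to the default CPU provider.
    Console.WriteLine("CUDA provider unavailable, falling back to CPU: " + ex.Message);
    options = new SessionOptions();
}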
Related
I have spent the last few days trying to get YOLO to work on my GPU. I tried https://github.yuuza.net/mentalstack/yolov5-net and followed their guide to use the GPU, but it didn't work. All the other C# YOLO wrappers don't use the YOLOv5 version, and I want to use this one. So my question is: how can I use YOLOv5 in C# (.NET 5.0) with my GPU? Here is the code I used with yolov5-net:
using var image = Image.FromFile(path);
using var scorer = new YoloScorer<YoloCocoP5Model>("tinyyolov2-8.onnx");
List<YoloPrediction> predictions = scorer.Predict(image);
using var graphics = Graphics.FromImage(image);
foreach (var prediction in predictions)
{
    double score = Math.Round(prediction.Score, 2);
    graphics.DrawRectangles(new Pen(prediction.Label.Color, 8),
        new[] { prediction.Rectangle });
    var (x, y) = (prediction.Rectangle.X - 3, prediction.Rectangle.Y - 23);
    graphics.DrawString($"{prediction.Label.Name} ({score})",
        new Font("Arial", 40, GraphicsUnit.Pixel), new SolidBrush(prediction.Label.Color),
        new PointF(x, y));
}
Console.WriteLine(outputPath);
image.Save(outputPath);
The above code works, but it maxes out my CPU, and it's clearly not feasible to process many images quickly this way.
You can try this:
using var image = Image.FromFile(path);
var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
using var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx", options);
Make sure you have installed the CUDA toolkit and cuDNN.
I'm not sure whether this will work in your setup, but it's worth trying; that's all the help I can offer.
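One assumption behind both of the above: the CPU-only Microsoft.ML.OnnxRuntime NuGet package must be replaced by Microsoft.ML.OnnxRuntime.Gpu, and the installed CUDA/cuDNN versions must match what that ONNX Runtime release was built against. As a sanity check, recent Microsoft.ML.OnnxRuntime versions can list the available execution providers (a sketch; the GetAvailableProviders call may not exist in older releases):

using System;
using Microsoft.ML.OnnxRuntime;

// Print the execution providers compiled into the loaded ONNX Runtime.
// If "CUDAExecutionProvider" is not listed, the GPU package is not in use.
foreach (var provider in OrtEnv.Instance().GetAvailableProviders())
    Console.WriteLine(provider);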
I get the below error when trying to use DXGI to capture the built-in screen on my laptop, which runs on an Intel 630 HD with the latest driver. The code works when I capture the external screen attached to my GTX 1070.
SharpDX.SharpDXException
HResult=0x80070057
Message=HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.
The code in my form:
desktopDuplicator = new DesktopDuplicatorD11(1, 0, DesktopDuplicatorD11.VSyncLevel.None);
The section of the code that errors:
private bool RetrieveFrame()
{
    if (desktopImageTexture == null)
        desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
    frameInfo = new OutputDuplicateFrameInformation();
    try
    {
        mDeskDuplication.AcquireNextFrame(500, out frameInfo, out desktopResource);
    }
    catch (SharpDXException ex)
    {
        if (ex.ResultCode.Code == SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
        {
            return true;
        }
        if (ex.ResultCode.Failure)
        {
            throw new DesktopDuplicationException("Failed to acquire next frame.");
        }
    }
    using (var tempTexture = desktopResource.QueryInterface<Texture2D>())
    {
        mDevice.ImmediateContext.CopyResource(tempTexture, desktopImageTexture);
    }
    return false;
}
It errors specifically on the line:
desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
What is causing the error when using the internal display and the Intel 630?
Edit #1:
mTextureDescription creation:
this.mTextureDescription = new Texture2DDescription()
{
    CpuAccessFlags = CpuAccessFlags.Read,
    BindFlags = BindFlags.None,
    Format = Format.B8G8R8A8_UNorm,
    Width = this.mOutputDescription.DesktopBounds.Right,
    Height = this.mOutputDescription.DesktopBounds.Bottom,
    OptionFlags = ResourceOptionFlags.None,
    MipLevels = 1,
    ArraySize = 1,
    SampleDescription = { Count = 1, Quality = 0 },
    Usage = ResourceUsage.Staging
};
The whole Desktop Duplication process is done on the same thread.
Update #2:
On the Intel 630, Width = this.mOutputDescription.DesktopBounds.Right returns 0, whereas on my 1070 it returns 1920.
The simplest explanation is usually the actual problem.
Intel's final WDDM 2.6 drivers do not work properly with switchable graphics, update to the DCH WDDM 2.7 driver.
First, to get hints from the API about the invalid argument (which is exactly what you have), you need to enable the Direct3D debug layer. The article explains it for C++, and a similar trick is possible with C# as well.
Second, what matters is the effective arguments in the failing call, not just the code.
The code is about right, but if the coordinates in mOutputDescription are zero or invalid, the mentioned API call will fail as well. You need to set a breakpoint and inspect the variable.
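Both suggestions as a hedged C# sketch (variable names follow the question's code; note that DesktopBounds is a raw rectangle, so width and height must be computed from its edges):

// Create the device with the debug layer so invalid arguments produce readable
// messages in the native debug output (requires the "Graphics Tools" optional
// feature on Windows 10 and later).
var device = new SharpDX.Direct3D11.Device(
    SharpDX.Direct3D.DriverType.Hardware,
    SharpDX.Direct3D11.DeviceCreationFlags.Debug);

// Validate the output description before building the texture description.
int width = mOutputDescription.DesktopBounds.Right - mOutputDescription.DesktopBounds.Left;
int height = mOutputDescription.DesktopBounds.Bottom - mOutputDescription.DesktopBounds.Top;
if (width <= 0 || height <= 0)
    throw new InvalidOperationException(
        "Output reports empty desktop bounds; cannot duplicate this display.");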
I need to perform Source In composition on 2 images.
For example this image:
and a mask image (tested with black-transparent and black-white):
should produce result:
I am trying to do this with ImageSharp:
img.Mutate(imgMaskIn =>
{
    using (var mask = Image.Load(maskImageFileName))
    {
        imgMaskIn.DrawImage(mask, new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn });
    }
});
but the result is just the mask image. It should work, based on the merge request that added this feature.
Am I using the library incorrectly, or is this a bug?
Is there any other way to do this in ASP.NET Core?
Unfortunately, the syntax for this in ImageSharp is changing between the current preview version and the development version, which should be the final API.
With 1.0.0-beta0005, you can blend these images like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
    var options = new GraphicsOptions { BlenderMode = PixelBlenderMode.In };
    using (var result = pattern.Clone(x => x.DrawImage(options, texture)))
    {
        result.Save("img_out.png");
    }
}
Note that you have to use a pattern image with alpha transparency for this. You cannot use keyed transparency (at least not with this solution).
I’ve made the pattern transparent for that purpose (you can get the one I used here) and got this result:
In the final release, it will look like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
    var options = new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn };
    using (var result = pattern.Clone(x => x.DrawImage(texture, options)))
    {
        result.Save("img_out.png");
    }
}
A good way to figure this out, by the way, is to look at the PorterDuffCompositorTests file, which contains the tests for this feature and as such will always reflect the current API.
As of March 2022, the accepted answer no longer works with ImageSharp 2. The following code doesn't solve the exact problem in the question, but it's how I got something similar to work:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing.Processors.Drawing;
...
using (var inImg = Image.Load<RgbaVector>(imagePath)) // has some transparent pixels
using (var background = new Image<RgbaVector>(inImg.Width, inImg.Height, new RgbaVector(1, 0, 0, 1))) // a solid red image of the same size as the loaded image, but it could be any image
{
    var processorCreator = new DrawImageProcessor(
        inImg,
        Point.Empty,
        PixelColorBlendingMode.Normal,
        PixelAlphaCompositionMode.SrcAtop, // the setting to play with to get the behavior from the original question
        1f
    );
    var pxProcessor = processorCreator.CreatePixelSpecificProcessor(
        Configuration.Default,
        background,
        inImg.Bounds());
    pxProcessor.Execute(); // writes to the image passed into CreatePixelSpecificProcessor, in this case background
    background.Save("some_path.png");
}
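For completeness: ImageSharp 2 also exposes blending through the regular Mutate/DrawImage pipeline, which avoids constructing the processor by hand. A hedged sketch, assuming the DrawImage overload that takes blending and composition modes (overloads vary between releases):

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

using (var pattern = Image.Load<Rgba32>("img_pattern.png"))
using (var texture = Image.Load<Rgba32>("img_texture.png"))
{
    // SrcIn keeps the drawn texture only where the pattern already has alpha coverage.
    pattern.Mutate(x => x.DrawImage(texture, PixelColorBlendingMode.Normal,
        PixelAlphaCompositionMode.SrcIn, 1f));
    pattern.Save("img_out.png");
}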
I am trying to upgrade my code for multi-label classification from version 3.0.2 to 3.3.0. I did the upgrade, but the classification results are much worse now: the algorithm fails to classify many more instances than before. Can you please tell me if there is an issue in my new code? How can I use the estimated parameters for the Gaussian kernel as I did before?
Old code:
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
// Create the machine.
this.machine = new MultilabelSupportVectorMachine(inputs[0].Length, gauss, outputs[0].Length);
// Train the model.
var teacher = new MultilabelSupportVectorLearning(this.machine, inputs, outputs);
teacher.Algorithm = (svm, classInputs, classOutputs, i, j) => new SequentialMinimalOptimization(svm, classInputs, classOutputs) { UseComplexityHeuristic = true, CacheSize = 1000 };
teacher.SubproblemFinished += SubproblemFinished;
var error = teacher.Run(true);
// Save the model.
this.machine.Save(modelPath);
New code:
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
// Create the machine.
this.machine = new MultilabelSupportVectorMachine<Gaussian>(inputs[0].Length, gauss, outputs[0].Length);
// Create the multi-class learning algorithm for the machine.
var teacher = new MultilabelSupportVectorLearning<Gaussian>(this.machine)
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true,
        UseComplexityHeuristic = true,
        CacheSize = 1000
    }
};
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;
// Learn a machine.
machine = teacher.Learn(inputs, outputs);
Serializer.Save(machine, modelPath);
Thanks in advance!
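One thing worth checking, offered as an assumption rather than a confirmed fix: with UseKernelEstimation = true, each binary subproblem re-estimates its own kernel, so the gauss kernel estimated above is effectively ignored during learning. To reproduce the old behavior, the pre-estimated kernel could be passed explicitly and per-subproblem estimation disabled, roughly like this:

// Reuse the single kernel estimated from the data, as the 3.0.2 code did.
var gauss = Gaussian.Estimate(inputs, Convert.ToInt32(inputs.GetLength(0) * 0.8));
var teacher = new MultilabelSupportVectorLearning<Gaussian>()
{
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        Kernel = gauss,              // shared, pre-estimated kernel
        UseKernelEstimation = false, // do not re-estimate per subproblem
        UseComplexityHeuristic = true,
        CacheSize = 1000
    }
};
var machine = teacher.Learn(inputs, outputs);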
I want to use DirectX in C#, and I am using the SharpDX wrapper. I have a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world positions, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is code that I think is ready to render something:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;

namespace CurrencyConverter
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Enable object tracking
            SharpDX.Configuration.EnableObjectTracking = true;
            SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();

            #region Direct3D Initialization
            // Create the window to render to
            Form1 form = new Form1();
            form.Text = "D3DRendering - EmptyProject";
            form.Width = 640;
            form.Height = 480;

            // Declare the device and swapChain vars
            Device device;
            SwapChain swapChain;

            // Create the device and swapchain
            // First create a regular D3D11 device
            using (var device11 = new Device(
                SharpDX.Direct3D.DriverType.Hardware,
                DeviceCreationFlags.None,
                new[] {
                    SharpDX.Direct3D.FeatureLevel.Level_11_1,
                    SharpDX.Direct3D.FeatureLevel.Level_11_0,
                }))
            {
                // Query device for the Device1 interface (ID3D11Device1)
                device = device11.QueryInterfaceOrNull<Device1>();
                if (device == null)
                    throw new NotSupportedException(
                        "SharpDX.Direct3D11.Device1 is not supported");
            }

            // Rather than create a new DXGI Factory we reuse the
            // one that has been used internally to create the device
            using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
            using (var adapter = dxgi.Adapter)
            using (var factory = adapter.GetParent<Factory2>())
            {
                var desc1 = new SwapChainDescription1()
                {
                    Width = form.ClientSize.Width,
                    Height = form.ClientSize.Height,
                    Format = Format.R8G8B8A8_UNorm,
                    Stereo = false,
                    SampleDescription = new SampleDescription(1, 0),
                    Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
                    BufferCount = 1,
                    Scaling = Scaling.Stretch,
                    SwapEffect = SwapEffect.Discard,
                };
                swapChain = new SwapChain1(factory,
                    device,
                    form.Handle,
                    ref desc1,
                    new SwapChainFullScreenDescription()
                    {
                        RefreshRate = new Rational(60, 1),
                        Scaling = DisplayModeScaling.Centered,
                        Windowed = true
                    },
                    // Restrict output to specific Output (monitor)
                    adapter.Outputs[0]);
            }

            // Create references for backBuffer and renderTargetView
            var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
            var renderTargetView = new RenderTargetView(device, backBuffer);
            #endregion

            // Setup object debug names
            device.DebugName = "The Device";
            swapChain.DebugName = "The SwapChain";
            backBuffer.DebugName = "The Backbuffer";
            renderTargetView.DebugName = "The RenderTargetView";

            #region Render loop
            // Create and run the render loop
            RenderLoop.Run(form, () =>
            {
                // Clear the render target with...
                var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
                    SharpDX.Color.DarkBlue,
                    (float)((timer.Time) / 10.0 % 1.0));
                device.ImmediateContext.ClearRenderTargetView(
                    renderTargetView,
                    lerpColor);

                // Execute rendering commands here...
                //...
                //I DO NOT HAVE ANY IDEA
                //...

                // Present the frame
                swapChain.Present(0, PresentFlags.RestrictToOutput);
            });
            #endregion

            #region Direct3D Cleanup
            // Release the device and any other resources created
            renderTargetView.Dispose();
            backBuffer.Dispose();
            device.Dispose();
            swapChain.Dispose();
            #endregion
        }
    }
}
Generally speaking, with Direct3D you need a substantial amount of code before anything appears on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend looking particularly at the following (a short sketch of these calls follows the list):
Depth buffer creation (DepthStencilView)
The fx file, as you need shaders to get anything on the screen (no more fixed function pipeline)
How the vertex buffer is created; you need to split the geometry into triangles (in common cases; there are other possibilities)
Don't forget SetViewport (it's really common to omit it)
The calls referring to the input assembler, which assign the geometry to be drawn
Constant buffer creation: this is how you pass matrices and changing data (like diffuse color) to the shaders
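As a hedged illustration of those items (names such as inputLayout, vertexBuffer, depthStencilView, and the shaders are assumed to have been created beforehand; this mirrors the MiniCube sample rather than being a complete program):

var context = device.ImmediateContext;

// Viewport: without this, nothing is rasterized.
context.Rasterizer.SetViewport(
    new SharpDX.Viewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f));

// Input assembler: bind the geometry to draw, as triangles.
context.InputAssembler.InputLayout = inputLayout; // must match the vertex shader signature
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0,
    new VertexBufferBinding(vertexBuffer, vertexStrideInBytes, 0));

// Bind the depth buffer together with the render target.
context.OutputMerger.SetTargets(depthStencilView, renderTargetView);

// Shaders and the constant buffer carrying world/view/projection matrices.
context.VertexShader.Set(vertexShader);
context.VertexShader.SetConstantBuffer(0, constantBuffer);
context.PixelShader.Set(pixelShader);

context.Draw(vertexCount, 0);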
Also make sure to use DeviceCreationFlags.Debug with the Device.CreateWithSwapChain call, and in the Visual Studio debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason in case any resource creation fails (instead of "Invalid Args", which is quite pointless).
As another recommendation: the Direct3D11 resource creation parameters are incredibly error-prone and tedious (many options are incompatible with each other), so it is quite important to wrap them in easier-to-use helper functions (and write a small number of unit tests to validate them once and for all). The old SharpDX Toolkit has quite a few examples of those.
The SharpDX wrapper is relatively close to its C++ counterpart, so anything in the C++ documentation applies to it too.