I have spent the last few days trying to get YOLO to work on my GPU. I tried https://github.yuuza.net/mentalstack/yolov5-net and followed their guide for GPU support, but it didn't work. All the other C# YOLO wrappers don't support YOLOv5, and that's the version I want to use. So my question is: how can I use YOLOv5 in C# (.NET 5.0) with my GPU? Here is the code I used with yolov5-net:
using var image = Image.FromFile(path);
using var scorer = new YoloScorer<YoloCocoP5Model>("tinyyolov2-8.onnx");
List<YoloPrediction> predictions = scorer.Predict(image);
using var graphics = Graphics.FromImage(image);
foreach (var prediction in predictions)
{
    double score = Math.Round(prediction.Score, 2);
    graphics.DrawRectangles(new Pen(prediction.Label.Color, 8),
        new[] { prediction.Rectangle });
    var (x, y) = (prediction.Rectangle.X - 3, prediction.Rectangle.Y - 23);
    graphics.DrawString($"{prediction.Label.Name} ({score})",
        new Font("Arial", 40, GraphicsUnit.Pixel), new SolidBrush(prediction.Label.Color),
        new PointF(x, y));
}
Console.WriteLine(outputPath);
image.Save(outputPath);
The above code works, but it runs entirely on the CPU and pins it, so it's clearly not usable for processing many images quickly.
You can try this:
using var image = Image.FromFile(path);
var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
using var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx", options);
List<YoloPrediction> predictions = scorer.Predict(image);
Make sure you have installed the CUDA Toolkit and cuDNN, and that your project references the GPU build of ONNX Runtime (the Microsoft.ML.OnnxRuntime.Gpu package) rather than the CPU-only one.
I'm not sure this will work in your setup, but it's worth trying; that's all the help I can offer.
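If it still falls back to the CPU, a quick sanity check is to create a CUDA-backed session directly with ONNX Runtime. This is just a sketch (it reuses the model path from above and assumes the Microsoft.ML.OnnxRuntime.Gpu package is installed): if the CUDA or cuDNN native libraries are missing, session creation throws instead of silently using the CPU, and the exception message names the library that failed to load.
using System;
using Microsoft.ML.OnnxRuntime;

// Try to create an ONNX Runtime session on the GPU (device 0).
try
{
    using var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
    using var session = new InferenceSession("Assets/Weights/yolov5n.onnx", options);
    Console.WriteLine("CUDA execution provider initialized.");
}
catch (OnnxRuntimeException ex)
{
    // Typically reports the missing CUDA/cuDNN DLL.
    Console.WriteLine("CUDA initialization failed: " + ex.Message);
}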
In older versions of EMGU (< 4.5.2) we could find blobs easily using the CvBlobDetector and CvBlobs classes, like this:
Emgu.CV.Cvb.CvBlobs resultingImgBlobs = new Emgu.CV.Cvb.CvBlobs();
Emgu.CV.Cvb.CvBlobDetector bDetect = new Emgu.CV.Cvb.CvBlobDetector();
uint numWebcamBlobsFound = bDetect.Detect(greyThreshImg, resultingImgBlobs);
But in the latest version there are no CvBlobs and CvBlobDetector classes. There is a SimpleBlobDetector class, but it seems useless.
Does anyone know, or can anyone point me to some documentation on, how to find blobs in the new version (4.5.5)?
You can use SimpleBlobDetector; it is what you are looking for.
You found it "useless" because you didn't set it up correctly for your use case.
To set its parameters you need to use SimpleBlobDetectorParams.
Here is an example:
SimpleBlobDetector simpleBlobDetector = new SimpleBlobDetector(new SimpleBlobDetectorParams()
{
    FilterByCircularity = true,
    FilterByArea = true,
    MinCircularity = 0.7f,
    MaxCircularity = 1.0f,
    MinArea = 500,
    MaxArea = 10000
});
These are just some of the parameters; you can find more about them in the Emgu documentation.
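To connect it back to your original snippet, here is a minimal sketch of running the configured detector on the question's greyThreshImg (my assumption for the input image). SimpleBlobDetector inherits Detect from Feature2D, so instead of a CvBlobs collection you get an array of MKeyPoint (from Emgu.CV.Structure):
// Detect blobs; each MKeyPoint carries the blob's center and diameter.
MKeyPoint[] keypoints = simpleBlobDetector.Detect(greyThreshImg);

foreach (MKeyPoint kp in keypoints)
{
    Console.WriteLine($"Blob at {kp.Point} with diameter {kp.Size}");
}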
I've been trying to find a tutorial or something on how to make YOLO in C# use the GPU instead of the CPU. Everything I find says it works on both CPU and GPU, but no one ever explains how to enable the GPU; for me it always uses the CPU. Here's my code with YOLOv5 in C#. It doesn't really matter to me whether it uses YOLOv5, just that it uses the GPU. Tutorial: I found that tutorial, but I can't even find the download for NVIDIA cuDNN v7.6.3 for CUDA 10.1. It all feels very unclear how to use it with the GPU. Please help me :D
var image = pictureBox1.Image;
var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx");
List<YoloPrediction> predictions = scorer.Predict(image);
var graphics = Graphics.FromImage(image);
foreach (var prediction in predictions) // iterate predictions to draw results
{
    using (MemoryStream ms = new MemoryStream())
    {
        pictureBox1.Image.Save(ms, ImageFormat.Png);
        prediction.Label.Color = Color.FromArgb(255, 255, 0, 0);
        double score = Math.Round(prediction.Score, 2);
        graphics.DrawRectangles(new Pen(prediction.Label.Color, 1),
            new[] { prediction.Rectangle });
        var (x, y) = (prediction.Rectangle.X - 3, prediction.Rectangle.Y - 23);
        graphics.DrawString($"{prediction.Label.Name} ({score})",
            new Font("Consolas", 16, GraphicsUnit.Pixel), new SolidBrush(prediction.Label.Color),
            new PointF(x, y));
        pictureBox1.Image = image;
    }
}
Before you use the scorer, you need to set up session options.
//https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
bool initResult = false;

var cudaProviderOptions = new Microsoft.ML.OnnxRuntime.OrtCUDAProviderOptions(); // Dispose this finally

var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["device_id"] = "0";
providerOptionsDict["gpu_mem_limit"] = "2147483648";
providerOptionsDict["arena_extend_strategy"] = "kSameAsRequested";

/*
cudnn_conv_algo_search
The type of search done for cuDNN convolution algorithms.

Value           Description
EXHAUSTIVE (0)  expensive exhaustive benchmarking using cudnnFindConvolutionForwardAlgorithmEx
HEURISTIC (1)   lightweight heuristic based search using cudnnGetConvolutionForwardAlgorithm_v7
DEFAULT (2)     default algorithm using CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM

Default value: EXHAUSTIVE
*/
providerOptionsDict["cudnn_conv_algo_search"] = "DEFAULT";

/*
do_copy_in_default_stream
Whether to do copies in the default stream or use separate streams. The recommended setting is true. If false, there are race conditions and possibly better performance.
Default value: true
*/
providerOptionsDict["do_copy_in_default_stream"] = "1";

/*
cudnn_conv_use_max_workspace
See the ONNX Runtime docs on tuning performance for convolution-heavy models for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when using the C API.
Default value: 0
*/
providerOptionsDict["cudnn_conv_use_max_workspace"] = "1";

/*
cudnn_conv1d_pad_to_nc1d
See the ONNX Runtime docs on convolution input padding in the CUDA EP for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when using the C API.
Default value: 0
*/
providerOptionsDict["cudnn_conv1d_pad_to_nc1d"] = "1";

cudaProviderOptions.UpdateOptions(providerOptionsDict);

options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally

if (options != null)
{
    // check that the YOLO model ONNX file is accessible
    if (File.Exists(yoloModelFile))
    {
        scorer = new YoloScorer<YoloCocoP5Model>(yoloModelFile, options);
        initResult = true;
    }
    else
    {
        DebugMessage("Yolo model ONNX file (" + yoloModelFile + ") is missing!\r\n", 2);
    }
}
else
{
    DebugMessage("Yolo instance initializing error! Session options are empty!\r\n", 2);
}
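Since both objects are flagged "Dispose this finally" above, one way to guarantee the cleanup order is to hold them in using declarations for the lifetime of the scorer. A minimal sketch, assuming the same providerOptionsDict, model path, and an input image as above:
using var cudaProviderOptions = new Microsoft.ML.OnnxRuntime.OrtCUDAProviderOptions();
cudaProviderOptions.UpdateOptions(providerOptionsDict);

using var options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions);
using var scorer = new YoloScorer<YoloCocoP5Model>("Assets/Weights/yolov5n.onnx", options);

// Scorer, session options, and provider options are disposed in
// reverse order when they go out of scope.
List<YoloPrediction> predictions = scorer.Predict(image);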
I need to perform Source In (SrcIn) composition on two images.
For example, this image:
and a mask image (tested with both black-transparent and black-white):
should produce this result:
I am trying to do this with ImageSharp:
img.Mutate(imgMaskIn =>
{
    using (var mask = Image.Load(maskImageFileName))
    {
        imgMaskIn.DrawImage(mask, new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn });
    }
});
but the result is just the mask image. It should work, based on this merge request.
Did I use the library incorrectly, or is there a bug?
Is there any other way to do this in ASP.NET Core?
Unfortunately, the syntax for this in ImageSharp is changing between the current preview version and the development version, which should become the final API.
With 1.0.0-beta0005, you can blend these images like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
    var options = new GraphicsOptions { BlenderMode = PixelBlenderMode.In };
    using (var result = pattern.Clone(x => x.DrawImage(options, texture)))
    {
        result.Save("img_out.png");
    }
}
Note that you have to use a pattern image with alpha transparency for this. You cannot use a keyed transparency (at least not with this solution).
I’ve made the pattern transparent for that purpose (you can get the one I used here) and got this result:
In the final release, it will look like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
    var options = new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn };
    using (var result = pattern.Clone(x => x.DrawImage(texture, options)))
    {
        result.Save("img_out.png");
    }
}
A good way to figure this out, by the way, is to look at the PorterDuffCompositorTests file, which contains the tests for this feature and as such will always reflect the current API.
As of March 2022, with ImageSharp 2, the accepted answer no longer works. The following code doesn't solve the exact problem in the question, but it's how I got something similar working:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing.Processors.Drawing;
...
using (var inImg = Image.Load<RgbaVector>(imagePath)) // has some transparent pixels
using (var background = new Image<RgbaVector>(inImg.Width, inImg.Height, new RgbaVector(1, 0, 0, 1))) // a solid red image of the same size as the loaded image, but it could be any image
{
    var processorCreator = new DrawImageProcessor(
        inImg,
        Point.Empty,
        PixelColorBlendingMode.Normal,
        PixelAlphaCompositionMode.SrcAtop, // the setting to play with to get the behavior from the original question
        1f
    );
    var pxProcessor = processorCreator.CreatePixelSpecificProcessor(
        Configuration.Default,
        background,
        inImg.Bounds());
    pxProcessor.Execute(); // writes to the image passed into CreatePixelSpecificProcessor, in this case background
    background.Save("some_path.png");
}
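For what it's worth, ImageSharp 2 also exposes the same blending options through the regular Mutate/DrawImage API, which avoids constructing the processor by hand. A sketch under the same assumptions (imagePath and the solid red background); I believe this DrawImage overload exists in 2.x, but treat it as unverified:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

using (var inImg = Image.Load<RgbaVector>(imagePath))
using (var background = new Image<RgbaVector>(inImg.Width, inImg.Height, new RgbaVector(1, 0, 0, 1)))
{
    // Draw the loaded image onto the background with the chosen
    // alpha composition mode, then save the composited result.
    background.Mutate(ctx => ctx.DrawImage(
        inImg,
        PixelColorBlendingMode.Normal,
        PixelAlphaCompositionMode.SrcAtop,
        1f));

    background.Save("some_path.png");
}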
I am new to SlimDX and I've heard that there is a way to capture screenshots using this library. The reason I want to use SlimDX is that I want to capture screenshots much faster than
Graphics.CopyFromScreen()
so that I can make a livestreaming app running at higher frame rates.
I have some code I found on the internet which should capture the desktop, but it always crashes at the line where I create an instance of Device.
I tried changing the DeviceType parameter to Software and the CreateFlags to Multithreaded just to see if anything changes, but nothing did and this is what it says every time:
SlimDX.Direct3D9.Direct3D9Exception: 'D3DERR_INVALIDCALL: Invalid call (-2005530516)'
Here's the code I have:
var pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
var d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, pp);
var surface = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
d.GetFrontBufferData(0, surface);
//not sure if this will work
var ds = Surface.ToStream(surface, ImageFileFormat.Jpg);
var img = Image.FromStream(ds);
I've also read that it could be a result of BackBuffer not being supported by the graphics card, but in that case I really don't know how to fix this.
My graphics card is AMD R270X.
Any ideas?
Setting pp.BackBufferCount to 0 worked. The capture time is still pretty long, though...
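For clarity, here is the question's setup with that one change applied (just a sketch; the rest is unchanged from the snippet above):
var pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
pp.BackBufferCount = 0; // the fix: no explicit back buffer

var d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero,
    CreateFlags.SoftwareVertexProcessing, pp);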
I want to use DirectX in C# and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world positions, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is code that should be ready to render something, I think:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;

namespace CurrencyConverter
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Enable object tracking
            SharpDX.Configuration.EnableObjectTracking = true;
            SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();

            #region Direct3D Initialization
            // Create the window to render to
            Form1 form = new Form1();
            form.Text = "D3DRendering - EmptyProject";
            form.Width = 640;
            form.Height = 480;

            // Declare the device and swapChain vars
            Device device;
            SwapChain swapChain;

            // Create the device and swapchain
            // First create a regular D3D11 device
            using (var device11 = new Device(
                SharpDX.Direct3D.DriverType.Hardware,
                DeviceCreationFlags.None,
                new[] {
                    SharpDX.Direct3D.FeatureLevel.Level_11_1,
                    SharpDX.Direct3D.FeatureLevel.Level_11_0,
                }))
            {
                // Query device for the Device1 interface (ID3D11Device1)
                device = device11.QueryInterfaceOrNull<Device1>();
                if (device == null)
                    throw new NotSupportedException(
                        "SharpDX.Direct3D11.Device1 is not supported");
            }

            // Rather than create a new DXGI Factory we reuse the
            // one that has been used internally to create the device
            using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
            using (var adapter = dxgi.Adapter)
            using (var factory = adapter.GetParent<Factory2>())
            {
                var desc1 = new SwapChainDescription1()
                {
                    Width = form.ClientSize.Width,
                    Height = form.ClientSize.Height,
                    Format = Format.R8G8B8A8_UNorm,
                    Stereo = false,
                    SampleDescription = new SampleDescription(1, 0),
                    Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
                    BufferCount = 1,
                    Scaling = Scaling.Stretch,
                    SwapEffect = SwapEffect.Discard,
                };

                swapChain = new SwapChain1(factory,
                    device,
                    form.Handle,
                    ref desc1,
                    new SwapChainFullScreenDescription()
                    {
                        RefreshRate = new Rational(60, 1),
                        Scaling = DisplayModeScaling.Centered,
                        Windowed = true
                    },
                    // Restrict output to specific Output (monitor)
                    adapter.Outputs[0]);
            }

            // Create references for backBuffer and renderTargetView
            var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
            var renderTargetView = new RenderTargetView(device, backBuffer);
            #endregion

            // Setup object debug names
            device.DebugName = "The Device";
            swapChain.DebugName = "The SwapChain";
            backBuffer.DebugName = "The Backbuffer";
            renderTargetView.DebugName = "The RenderTargetView";

            #region Render loop
            // Create and run the render loop
            RenderLoop.Run(form, () =>
            {
                // Clear the render target with...
                var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
                    SharpDX.Color.DarkBlue,
                    (float)((timer.Time) / 10.0 % 1.0));
                device.ImmediateContext.ClearRenderTargetView(
                    renderTargetView,
                    lerpColor);

                // Execute rendering commands here...
                //...
                //I DO NOT HAVE ANY IDEA
                //...

                // Present the frame
                swapChain.Present(0, PresentFlags.RestrictToOutput);
            });
            #endregion

            #region Direct3D Cleanup
            // Release the device and any other resources created
            renderTargetView.Dispose();
            backBuffer.Dispose();
            device.Dispose();
            swapChain.Dispose();
            #endregion
        }
    }
}
Generally speaking, with Direct3D you need a substantial amount of code before anything happens on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend looking particularly at:
Depth buffer creation (DepthStencilView)
The .fx file, since you need shaders to get anything on the screen (there is no more fixed-function pipeline)
How the vertex buffer is created; you need to split geometry into triangles (in common cases; there are other possibilities)
Don't forget SetViewport (it's really common to omit it); see the sketch after this list
The calls referring to the Input Assembler, which assign the geometry to be drawn
Constant buffer creation: this is how you pass matrices and changing data (like a diffuse color) to shaders
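To illustrate the SetViewport point with the question's own code, a minimal sketch (assuming the device and form variables from above); without a viewport, rasterized geometry is clipped to a zero-sized region and nothing is drawn:
// Map normalized device coordinates to the window's client area.
device.ImmediateContext.Rasterizer.SetViewport(
    0, 0,
    form.ClientSize.Width,
    form.ClientSize.Height);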
Also make sure to use DeviceCreationFlags.Debug with the Device.CreateWithSwapChain call, and in the Visual Studio debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set up properly, plus a meaningful reason whenever a resource creation fails (instead of "Invalid Args", which is quite useless).
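As a sketch of that last point (the swapChainDescription here is hypothetical; fill in one like the question's desc1):
// Create the device with the debug layer so D3D11 validation messages
// show up in the native debugger output (requires the SDK debug layer).
Device device;
SwapChain swapChain;
Device.CreateWithSwapChain(
    SharpDX.Direct3D.DriverType.Hardware,
    DeviceCreationFlags.Debug, // instead of DeviceCreationFlags.None for release builds
    swapChainDescription,      // a SwapChainDescription you fill in
    out device,
    out swapChain);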
As another recommendation: the Direct3D11 resource creation parameters are incredibly error-prone and tedious (many options are incompatible with each other), so it is quite important to wrap them in easier-to-use helper functions (and write a small set of unit tests to validate them once and for all). The old Toolkit has quite a few examples of those.
The SharpDX wrapper stays relatively close to the C++ API, so anything in the C++ documentation applies to it too.