I'm trying to grab an image from a webcam using DirectShow.NET and IBasicVideo.GetCurrentImage, but I only get a catastrophic failure on the second call to GetCurrentImage.
What I'm doing, specifically:
IBasicVideo bv = (IBasicVideo)graph;
IntPtr bvp = new IntPtr();
int size = 0;
int hr = bv.GetCurrentImage(ref size, IntPtr.Zero);
DsError.ThrowExceptionForHR(hr);
bvp = Marshal.AllocCoTaskMem(size);
hr = bv.GetCurrentImage(ref size, bvp);
DsError.ThrowExceptionForHR(hr);
Bitmap image = new Bitmap(480, 320, 480 * (24 / 8), System.Drawing.Imaging.PixelFormat.Format24bppRgb, bvp);
image.Save(path);
What am I doing wrong?
Pretty much all I have:
IGraphBuilder graph = null;
IMediaEventEx eventEx = null;
IMediaControl control = null;
ICaptureGraphBuilder2 capture = null;
IBaseFilter srcFilter = null;
public IVideoWindow videoWindow = null;
IntPtr videoWindowHandle = IntPtr.Zero;
public void GetPreviewFromCam()
{
graph = (IGraphBuilder)(new FilterGraph());
capture = (ICaptureGraphBuilder2)(new CaptureGraphBuilder2());
eventEx = (IMediaEventEx)graph;
control = (IMediaControl)graph;
videoWindow = (IVideoWindow)graph;
videoWindowHandle = hVideoWindow;
eventEx.SetNotifyWindow(hVideoWindow, WM_GRAPHNOTIFY, IntPtr.Zero);
int hr;
// Attach the filter graph to the capture graph
hr = capture.SetFiltergraph(graph);
DsError.ThrowExceptionForHR(hr);
// Find capture device and bind it to srcFilter
FindCaptureDevice();
// Add Capture filter to our graph.
hr = graph.AddFilter(srcFilter, "Video Capture");
DsError.ThrowExceptionForHR(hr);
// Render the preview pin on the video capture filter
// Use this instead of graph->RenderFile
hr = capture.RenderStream(PinCategory.Preview, MediaType.Video, srcFilter, null, null);
DsError.ThrowExceptionForHR(hr);
hr = control.Run();
DsError.ThrowExceptionForHR(hr);
}
IBasicVideo::GetCurrentImage does not have to succeed unconditionally. It forwards the call to the video renderer in your graph (and fails if you don't have one, or if you have some odd non-renderer filter that unexpectedly implements the interface); the renderer then attempts to get you the image. The renderer itself might fail if it is operating in an incompatible mode (windowless video renderers don't offer IBasicVideo, so it might fail there), or if it has not yet received any video frame it could copy for you; that is, the call is premature.
Additionally, there might be a handful of other issues related to obvious bugs: you did not put the graph into active mode, you are under a wrong impression about the topology you have, you are using the wrong interface, your code has threading issues, etc.
With a spectrum of possible causes this wide, start with a simple question: at the time of the call, is the video frame already presented to you visually?
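Assuming the preview is already playing, a minimal sketch of a more careful GetCurrentImage call looks like the following. Note this is a hedged illustration, not the asker's exact fix: the names `basicVideo` and `path` are assumed, and the key detail is that the returned buffer is a packed DIB (a BITMAPINFOHEADER followed by pixel data), so the Bitmap must be built from the pixel data after the header, using the dimensions the header reports rather than hard-coded ones.

```csharp
// Sketch: query size, allocate, fetch the packed DIB, then skip the header.
int bufferSize = 0;
int hr = basicVideo.GetCurrentImage(ref bufferSize, IntPtr.Zero); // size query
DsError.ThrowExceptionForHR(hr);

IntPtr dib = Marshal.AllocCoTaskMem(bufferSize);
try
{
    hr = basicVideo.GetCurrentImage(ref bufferSize, dib);
    DsError.ThrowExceptionForHR(hr);

    // The buffer starts with a BITMAPINFOHEADER describing the frame.
    var header = (BitmapInfoHeader)Marshal.PtrToStructure(dib, typeof(BitmapInfoHeader));
    int stride = ((header.Width * header.BitCount + 31) / 32) * 4; // DWORD-aligned rows
    IntPtr pixels = new IntPtr(dib.ToInt64() + header.Size);       // skip the header

    using (var image = new Bitmap(header.Width, header.Height, stride,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb, pixels))
    {
        image.RotateFlip(RotateFlipType.RotateNoneFlipY); // DIBs are bottom-up
        image.Save(path);
    }
}
finally
{
    Marshal.FreeCoTaskMem(dib);
}
```

The buffer is copied into the Bitmap's save before `FreeCoTaskMem` runs, so the lifetime here is safe for a one-shot snapshot.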
Related
I am new to SlimDX and I've heard that there is a way to capture screenshots using this library. The reason I want to use SlimDX is that I want to capture screenshots much faster than
Graphics.CopyFromScreen()
so that I can make a livestream app running at higher framerates.
I have some code I found on the internet which should capture the desktop, but it always crashes at the line where I create an instance of Device.
I tried changing the DeviceType parameter to Software and the CreateFlags to Multithreaded just to see if anything changes, but nothing did and this is what it says every time:
SlimDX.Direct3D9.Direct3D9Exception: 'D3DERR_INVALIDCALL: Invalid call (-2005530516)'
Here's the code I have:
var pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
var d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, pp);
var surface = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
d.GetFrontBufferData(0, surface);
//not sure if this will work
var ds = Surface.ToStream(surface, ImageFileFormat.Jpg);
var img = Image.FromStream(ds);
I've also read that it could be a result of the back buffer not being supported by the graphics card, but in that case I really don't know how to fix it.
My graphics card is AMD R270X.
Any ideas?
Setting pp.BackBufferCount to 0 worked. The capture time is still pretty long, though...
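For reference, a hedged sketch of the working configuration reported above: the same code with `BackBufferCount = 0` applied, plus explicit back buffer size and format, which Direct3D 9 can be picky about for windowed devices. The extra `BackBuffer*` fields and the output filename are assumptions, not part of the original fix.

```csharp
// Sketch: front-buffer screenshot via SlimDX / Direct3D9.
var pp = new PresentParameters
{
    Windowed = true,
    SwapEffect = SwapEffect.Discard,
    BackBufferCount = 0, // the value that resolved D3DERR_INVALIDCALL here
    BackBufferWidth = Screen.PrimaryScreen.Bounds.Width,   // assumption: match desktop
    BackBufferHeight = Screen.PrimaryScreen.Bounds.Height,
    BackBufferFormat = Format.A8R8G8B8,
};

using (var d3d = new Direct3D())
using (var device = new Device(d3d, 0, DeviceType.Hardware, IntPtr.Zero,
                               CreateFlags.SoftwareVertexProcessing, pp))
using (var surface = Surface.CreateOffscreenPlain(device,
    Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height,
    Format.A8R8G8B8, Pool.Scratch))
{
    device.GetFrontBufferData(0, surface); // copies the desktop into the surface
    using (var stream = Surface.ToStream(surface, ImageFileFormat.Jpg))
    using (var img = Image.FromStream(stream))
    {
        img.Save("screenshot.jpg"); // hypothetical output path
    }
}
```

`GetFrontBufferData` is known to be slow by design (it reads back across the bus every call), which matches the "capture time is still pretty long" observation.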
I am evaluating the Accord.NET Framework (https://github.com/accord-net/framework/) for use in an imaging application. At the moment I have some basic requirements - capture video from a USB camera to display on the UI and view/change all camera properties.
Accord.Video.DirectShow.VideoCaptureDevice.DisplayPropertyPage works well for showing the camera properties, such as brightness, contrast, hue etc. but does not show available camera resolutions.
Accord.Video.DirectShow.VideoCaptureDevice.VideoCapabilities is returning only one resolution but I was expecting several more.
I have tried the VideoCapx (http://videocapx.com/) ActiveX control and using its ShowVideoFormatDlg method I can display a dialog which shows all available resolutions, framerates etc. I understand this is a dialog provided by the manufacturer and accessed via OLE\COM. What I am looking for is a way of accessing this via .NET, hopefully through the Accord framework.
I understand the additional resolutions might be properties of a transform filter; however, I am new to DirectShow and COM interfaces in .NET, so I am looking for some pointers.
I used to wrap DirectShow code for .NET.
With DirectShow it is certainly possible to get, set, and enumerate a/v source capabilities.
Have you tried using the IAMStreamConfig video interface to set the output format on certain capture and compression filters?
I use this code to get resolutions and set them on different sources,
where m_pVCap is the source filter:
hr = m_pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE,&MEDIATYPE_Interleaved,
m_pVCap, IID_IAMVideoCompression,(void **)&m_pVC);
if (hr != S_OK)
hr = m_pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
m_pVCap,IID_IAMVideoCompression,(void **)&m_pVC);
// !!! What if this interface isn't supported?
// we use this interface to set the frame rate and get the capture size
hr = m_pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE,&MEDIATYPE_Interleaved,
m_pVCap, IID_IAMStreamConfig, (void **)&m_pVSC);
if (hr != NOERROR)
{
hr = m_pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
m_pVCap, IID_IAMStreamConfig,(void **)&m_pVSC);
if (hr != NOERROR)
{
LogDXError(hr, false, FILELINE);
}
}
To get the current source format:
hr = m_pVSC->GetFormat(&pmt);
// DV capture does not use a VIDEOINFOHEADER
if (hr == NOERROR)
{
if (pmt->formattype == FORMAT_VideoInfo)
{
VIDEOINFOHEADER *pvi = (VIDEOINFOHEADER *)pmt->pbFormat;
pvi->AvgTimePerFrame = (LONGLONG)(10000000 / m_FrameRate);
hr = m_pVSC->SetFormat(pmt);
if (hr != NOERROR)
(NotifyNewError) (FILELINE, "", LOG_ALL, ERR_GRAVE, false,
"Cannot set frame rate for capture");
hr = m_pVSC->GetFormat(&pmt);
pvi = (VIDEOINFOHEADER *)pmt->pbFormat;
pvi->bmiHeader.biWidth = g_SizeOutput.cx;
pvi->bmiHeader.biHeight = g_SizeOutput.cy;
pvi->bmiHeader.biSizeImage = DIBSIZE(pvi->bmiHeader);
hr = m_pVSC->SetFormat(pmt);
if (hr != NOERROR)
{
char ErrTxt[MAX_ERROR_TEXT_LEN];
AMGetErrorText(hr, ErrTxt,MAX_ERROR_TEXT_LEN);
wsprintf(szError, "Error %x: %s\nCannot set frame rate (%d) for prev", hr, ErrTxt, m_FrameRate);
(NotifyNewError)(FILELINE, "", LOG_ALL, ERR_GRAVE, false, szError);
}
DeleteMediaType(pmt);
}
}
To get a source's capabilities you can use:
IAMStreamConfig::GetNumberOfCapabilities and then IAMStreamConfig::GetStreamCaps
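Since the question asks for a .NET route, the same enumeration can be sketched with DirectShow.NET's managed IAMStreamConfig. This is a hedged illustration: the variables `capture` (an ICaptureGraphBuilder2 with the graph already set) and `sourceFilter` (the capture device's IBaseFilter, already added to the graph) are assumed to exist.

```csharp
// Sketch: list the resolutions a capture pin advertises via IAMStreamConfig.
object o;
int hr = capture.FindInterface(PinCategory.Capture, MediaType.Video,
    sourceFilter, typeof(IAMStreamConfig).GUID, out o);
DsError.ThrowExceptionForHR(hr);
var streamConfig = (IAMStreamConfig)o;

int count, size;
hr = streamConfig.GetNumberOfCapabilities(out count, out size);
DsError.ThrowExceptionForHR(hr);

IntPtr caps = Marshal.AllocCoTaskMem(size); // holds a VideoStreamConfigCaps
try
{
    for (int i = 0; i < count; i++)
    {
        AMMediaType mt;
        hr = streamConfig.GetStreamCaps(i, out mt, caps);
        DsError.ThrowExceptionForHR(hr);

        if (mt.formatType == FormatType.VideoInfo)
        {
            var vih = (VideoInfoHeader)Marshal.PtrToStructure(
                mt.formatPtr, typeof(VideoInfoHeader));
            Console.WriteLine("{0} x {1}", vih.BmiHeader.Width, vih.BmiHeader.Height);
        }
        DsUtils.FreeAMMediaType(mt); // each returned media type must be freed
    }
}
finally
{
    Marshal.FreeCoTaskMem(caps);
}
```

To select one of the listed formats, pass the chosen `AMMediaType` back through `IAMStreamConfig.SetFormat` before connecting the pin.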
I want to use DirectX in C# and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world position, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is the code I have that should be ready to render something, I think:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;
namespace CurrencyConverter
{
static class Program
{
[STAThread]
static void Main()
{
// Enable object tracking
SharpDX.Configuration.EnableObjectTracking = true;
SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();
#region Direct3D Initialization
// Create the window to render to
Form1 form = new Form1();
form.Text = "D3DRendering - EmptyProject";
form.Width = 640;
form.Height = 480;
// Declare the device and swapChain vars
Device device;
SwapChain swapChain;
// Create the device and swapchain
// First create a regular D3D11 device
using (var device11 = new Device(
SharpDX.Direct3D.DriverType.Hardware,
DeviceCreationFlags.None,
new[] {
SharpDX.Direct3D.FeatureLevel.Level_11_1,
SharpDX.Direct3D.FeatureLevel.Level_11_0,
}))
{
// Query device for the Device1 interface (ID3D11Device1)
device = device11.QueryInterfaceOrNull<Device1>();
if (device == null)
throw new NotSupportedException(
"SharpDX.Direct3D11.Device1 is not supported");
}
// Rather than create a new DXGI Factory we reuse the
// one that has been used internally to create the device
using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
using (var adapter = dxgi.Adapter)
using (var factory = adapter.GetParent<Factory2>())
{
var desc1 = new SwapChainDescription1()
{
Width = form.ClientSize.Width,
Height = form.ClientSize.Height,
Format = Format.R8G8B8A8_UNorm,
Stereo = false,
SampleDescription = new SampleDescription(1, 0),
Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
BufferCount = 1,
Scaling = Scaling.Stretch,
SwapEffect = SwapEffect.Discard,
};
swapChain = new SwapChain1(factory,
device,
form.Handle,
ref desc1,
new SwapChainFullScreenDescription()
{
RefreshRate = new Rational(60, 1),
Scaling = DisplayModeScaling.Centered,
Windowed = true
},
// Restrict output to specific Output (monitor)
adapter.Outputs[0]);
}
// Create references for backBuffer and renderTargetView
var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain,
0);
var renderTargetView = new RenderTargetView(device,
backBuffer);
#endregion
// Setup object debug names
device.DebugName = "The Device";
swapChain.DebugName = "The SwapChain";
backBuffer.DebugName = "The Backbuffer";
renderTargetView.DebugName = "The RenderTargetView";
#region Render loop
// Create and run the render loop
RenderLoop.Run(form, () =>
{
// Clear the render target with...
var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
SharpDX.Color.DarkBlue,
(float)((timer.Time) / 10.0 % 1.0));
device.ImmediateContext.ClearRenderTargetView(
renderTargetView,
lerpColor);
// Execute rendering commands here...
//...
//I DO NOT HAVE ANY IDEA
//...
// Present the frame
swapChain.Present(0, PresentFlags.RestrictToOutput);
});
#endregion
#region Direct3D Cleanup
// Release the device and any other resources created
renderTargetView.Dispose();
backBuffer.Dispose();
device.Dispose();
swapChain.Dispose();
#endregion
}
}
}
Generally speaking, with Direct3D you need a substantial amount of code before anything happens on the screen.
In the SharpDX repository you have the MiniCube sample which contains enough to really get you started, as it has all the elements required to draw a 3d scene.
I recommend looking particularly at:
Depth buffer creation (DepthStencilView)
The .fx file, since you need shaders to get anything on the screen (there is no more fixed function pipeline)
How the vertex buffer is created; you need to split geometry into triangles (in common cases; there are other possibilities)
Don't forget SetViewport (it's really common to omit it)
The calls referring to the Input Assembler, which assign the geometry to be drawn
Constant buffer creation: this is how you pass matrices and changing data (like a diffuse color)
Also make sure to use DeviceCreationFlags.Debug in the device creation call, and in Visual Studio's debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason in case any resource creation fails (instead of "Invalid Args", which is quite useless).
As another recommendation: the Direct3D11 resource creation parameters are incredibly error prone and tedious (many options are incompatible with each other), so it is quite important to wrap them into easier-to-use helper functions (and write a small number of unit tests to validate them once and for all). The old Toolkit has quite a few examples of this.
The SharpDX wrapper stays relatively close to its C++ counterpart, so anything in the C++ documentation applies to it too.
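To make the list above concrete, here is a hedged sketch of the two steps most often missed inside the render setup: setting the viewport and feeding geometry through the Input Assembler. It assumes `device` and `form` from the question's code, and a `layout` compiled from a vertex shader signature (not shown), so it is an illustration of the shape of the calls rather than a complete drawable scene.

```csharp
var context = device.ImmediateContext;

// Without a viewport nothing is rasterized, even if everything else is correct.
context.Rasterizer.SetViewport(0, 0,
    form.ClientSize.Width, form.ClientSize.Height);

// A single triangle: three position-only vertices (hypothetical layout).
var vertices = SharpDX.Direct3D11.Buffer.Create(device, BindFlags.VertexBuffer, new[]
{
    new SharpDX.Vector3( 0.0f,  0.5f, 0.5f),
    new SharpDX.Vector3( 0.5f, -0.5f, 0.5f),
    new SharpDX.Vector3(-0.5f, -0.5f, 0.5f),
});

// Input Assembler: what the geometry is and how to interpret it.
context.InputAssembler.PrimitiveTopology =
    SharpDX.Direct3D.PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0,
    new VertexBufferBinding(vertices,
        SharpDX.Utilities.SizeOf<SharpDX.Vector3>(), 0));
// context.InputAssembler.InputLayout = layout; // from the shader signature
// context.Draw(3, 0);                          // issue the draw once shaders are bound
```

With a vertex and pixel shader bound (the MiniCube sample shows how), the `Draw` call at the end is what finally puts the triangle on screen.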
I'm stuck with a memory issue using DirectShow and, more specifically, the DirectShow.NET library.
The goal is playing back an AVI file, and everything works fine except that the memory is not released at all after calling Marshal.ReleaseComObject.
Here are the steps I follow when creating the graph:
Create IFilterGraph2 object
Create ICaptureGraphBuilder2
Set Filtergraph
Adding the source filter using IFilterGraph2.AddSourceFilter
Create and configuring ISampleGrabber
Adding ISampleGrabber
Render the stream
At this point, if I remove the filters from the graph and call Marshal.ReleaseComObject on the ISampleGrabber, IFilterGraph2 and ICaptureGraphBuilder2 instances to free them up, the memory is still retained and never unloads.
The problem is that if I play many files without shutting down the whole process, I eventually receive an 0x80004001 (E_NOTIMPL) error from the COM objects, which seems a bit weird.
Thanks in advance for reading and for any suggestions.
EDIT: adding code from a DirectShow.NET library sample, which exhibits the same behaviour:
private void SetupGraph(Control hWin, string FileName)
{
int hr;
// Get the graphbuilder object
m_FilterGraph = new FilterGraph() as IFilterGraph2;
// Get a ICaptureGraphBuilder2 to help build the graph
ICaptureGraphBuilder2 icgb2 = new CaptureGraphBuilder2() as ICaptureGraphBuilder2;
try
{
// Link the ICaptureGraphBuilder2 to the IFilterGraph2
hr = icgb2.SetFiltergraph(m_FilterGraph);
DsError.ThrowExceptionForHR( hr );
// Add the filters necessary to render the file. This function will
// work with a number of different file types.
IBaseFilter sourceFilter = null;
hr = m_FilterGraph.AddSourceFilter(FileName, FileName, out sourceFilter);
DsError.ThrowExceptionForHR( hr );
// Get the SampleGrabber interface
m_sampGrabber = (ISampleGrabber) new SampleGrabber();
IBaseFilter baseGrabFlt = (IBaseFilter) m_sampGrabber;
// Configure the Sample Grabber
ConfigureSampleGrabber(m_sampGrabber);
// Add it to the filter
hr = m_FilterGraph.AddFilter( baseGrabFlt, "Ds.NET Grabber" );
DsError.ThrowExceptionForHR( hr );
// Connect the pieces together, use the default renderer
hr = icgb2.RenderStream(null, null, sourceFilter, baseGrabFlt, null);
DsError.ThrowExceptionForHR( hr );
// Now that the graph is built, read the dimensions of the bitmaps we'll be getting
SaveSizeInfo(m_sampGrabber);
// Configure the Video Window
IVideoWindow videoWindow = m_FilterGraph as IVideoWindow;
ConfigureVideoWindow(videoWindow, hWin);
// Grab some other interfaces
m_mediaEvent = m_FilterGraph as IMediaEvent;
m_mediaCtrl = m_FilterGraph as IMediaControl;
}
finally
{
if (icgb2 != null)
{
Marshal.ReleaseComObject(icgb2);
icgb2 = null;
}
}
}
I am currently trying to add an ISampleGrabber filter to my program. Currently the program captures and displays a preview in my Windows Forms app; however, when I try to add my own ISampleGrabber filter based on others' examples, the webcam section of the program stops working completely.
IVideoWindow videoWindow = null;
IMediaControl mediaControl = null;
IMediaEventEx mediaEventEx = null;
IGraphBuilder graphBuilder = null;
ICaptureGraphBuilder2 captureGraphBuilder = null;
IBaseFilter baseFilterForSampleGrabber;
ISampleGrabber sampleGrabber;
AMMediaType mediaType;
VideoInfoHeader videoInfoHeader;
public void capturePreview()
{
int hr = 0;
IBaseFilter baseFilter = null;
try
{
interfaces();
hr = this.captureGraphBuilder.SetFiltergraph(this.graphBuilder);
DsError.ThrowExceptionForHR(hr);
baseFilter = getListOfDevices();
hr = this.graphBuilder.AddFilter(baseFilter, "Webcam");
DsError.ThrowExceptionForHR(hr);
sampleGrabber = new SampleGrabber() as ISampleGrabber;
baseFilterForSampleGrabber = (IBaseFilter)new SampleGrabber();
if (baseFilterForSampleGrabber == null)
{
Marshal.ReleaseComObject(sampleGrabber);
sampleGrabber = null;
}
mediaType = new AMMediaType();
mediaType.majorType = MediaType.Video;
mediaType.subType = MediaSubType.RGB24;
mediaType.formatType = FormatType.VideoInfo;
//int width = videoInfoHeader.BmiHeader.Width;
//int height = videoInfoHeader.BmiHeader.Height;
//int size = videoInfoHeader.BmiHeader.ImageSize;
//mediaType.formatPtr = Marshal.AllocCoTaskMem(Marshal.SizeOf(videoInfoHeader));
//Marshal.StructureToPtr(videoInfoHeader, mediaType.formatPtr, false);
hr = sampleGrabber.SetMediaType(mediaType);
DsUtils.FreeAMMediaType(mediaType);
hr = graphBuilder.AddFilter(baseFilterForSampleGrabber, "ISampleGrabber Filter");
DsError.ThrowExceptionForHR(hr);
hr = this.captureGraphBuilder.RenderStream(PinCategory.Preview, MediaType.Video, baseFilter, baseFilterForSampleGrabber, null);
DsError.ThrowExceptionForHR(hr);
Marshal.ReleaseComObject(baseFilter);
videoWindowSetup();
hr = sampleGrabber.SetBufferSamples(true);
DsError.ThrowExceptionForHR(hr);
hr = this.mediaControl.Run();
DsError.ThrowExceptionForHR(hr);
}
catch
{
MessageBox.Show("Error...Try restart");
}
}
The above code contains my current graph along with the starting ISampleGrabber code I see repeated in every example; however, when I add the commented code, this is when the program stops. I do not know where the issue is and presume I should at least get the basics sorted before continuing to build up the graph.
If I resolve this problem, any further help on what else I need to complete this graph would be very helpful. I aim to convert the captured frames into bitmaps so I can edit them immediately (such as adding a crosshair) and show them in the Windows form straight away once edited.
Any help is appreciated :)
The commented code does not initialize videoInfoHeader correctly. You need to initialize all members there (well, some might be left as zeros, but you have to supply values for the mandatory ones): biCompression and biBitCount, to say the least. As it stands, your code does not initialize those members at all and instead reads uninitialized values back.
This is, however, the wrong approach in the first place. Most samples suggest that you not initialize format and formatPtr for a reason: with just the major type and subtype set, the Sample Grabber "hints" to Intelligent Connect what format you want the data in (24-bit RGB here, and typically). This is what you can do, and it works out well. However, there is no flexibility to specify a resolution or a frame rate that way, and not even every pixel format works out. That is, whatever you are trying to do here is likely to be incorrect; you are supposed to be happy with a partial media type (major type and subtype only).
DxScan from the DirectShow.NET samples adds the Sample Grabber and shows how to set it up:
private void ConfigureSampleGrabber(ISampleGrabber sampGrabber)
{
AMMediaType media;
int hr;
// Set the media type to Video/RBG24
media = new AMMediaType();
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatType = FormatType.VideoInfo;
hr = sampGrabber.SetMediaType( media );
DsError.ThrowExceptionForHR( hr );
DsUtils.FreeAMMediaType(media);
media = null;
// Choose to call BufferCB instead of SampleCB
hr = sampGrabber.SetCallback( this, 1 );
DsError.ThrowExceptionForHR( hr );
}
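Since `SetCallback(this, 1)` routes frames to BufferCB, the bitmap-editing goal from the question can be sketched in that callback. This is a hedged illustration: the fields `m_videoWidth`, `m_videoHeight` and `m_stride` are assumed to have been saved earlier (DxScan's SaveSizeInfo stores similar values), and the crosshair overlay is a hypothetical example of an edit.

```csharp
// Sketch: the class implements ISampleGrabberCB; frames arrive here as RGB24.
public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
{
    // Wrap the raw buffer; RGB24 frames are bottom-up DIB data.
    using (var frame = new Bitmap(m_videoWidth, m_videoHeight, m_stride,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb, pBuffer))
    {
        frame.RotateFlip(RotateFlipType.RotateNoneFlipY);
        using (var g = Graphics.FromImage(frame))
        {
            // Example edit: draw a crosshair over the frame center.
            g.DrawLine(Pens.Red, 0, m_videoHeight / 2, m_videoWidth, m_videoHeight / 2);
            g.DrawLine(Pens.Red, m_videoWidth / 2, 0, m_videoWidth / 2, m_videoHeight);
        }
        // Hand a *copy* of the frame to the UI thread here; pBuffer is only
        // valid for the duration of this callback.
    }
    return 0;
}

public int SampleCB(double sampleTime, IMediaSample pSample)
{
    return 0; // unused: SetCallback(this, 1) selects BufferCB instead
}
```

Keep the callback fast: it runs on a streaming thread, so heavy processing or direct UI access here will stall the graph.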