I'm creating a webcam control using DirectShow.NET. I want to render the video of the camera into a WPF window. What is happening currently is that the IVMRWindowlessControl9 doesn't seem to be going into windowless mode and is not being parented to the window that I'm specifying, even though I'm calling the appropriate methods.
Why are these calls not taking effect? Is there something else I'm not doing?
Below is a snippet of the relevant code:
this.graphBuilder = (IGraphBuilder)new FilterGraph();
this.captureGraphBuilder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
this.mediaControl = (IMediaControl)this.graphBuilder;
IBaseFilter renderFilter = (IBaseFilter)new VideoMixingRenderer9();
hr = this.captureGraphBuilder.SetFiltergraph(this.graphBuilder);
DsError.ThrowExceptionForHR(hr);
IBaseFilter sourceFilter = FindCaptureDevice();
hr = this.graphBuilder.AddFilter(sourceFilter, "Video Capture");
DsError.ThrowExceptionForHR(hr);
SetCaptureResolution();
IVMRFilterConfig9 filterConfig = (IVMRFilterConfig9)renderFilter;
hr = filterConfig.SetNumberOfStreams(1);
DsError.ThrowExceptionForHR(hr);
hr = filterConfig.SetRenderingMode(VMR9Mode.Windowless);
DsError.ThrowExceptionForHR(hr);
windowlessControl = (IVMRWindowlessControl9)renderFilter;
hr = this.graphBuilder.AddFilter(renderFilter, "Video Capture");
DsError.ThrowExceptionForHR(hr);
Window window = Window.GetWindow(this);
var wih = new WindowInteropHelper(window);
IntPtr hWnd = wih.Handle;
hr = windowlessControl.SetVideoClippingWindow(hWnd);
DsError.ThrowExceptionForHR(hr);
hr = windowlessControl.SetAspectRatioMode(VMR9AspectRatioMode.LetterBox);
DsError.ThrowExceptionForHR(hr);
hr = this.captureGraphBuilder.RenderStream(PinCategory.Capture, MediaType.Video, sourceFilter, null, null);
DsError.ThrowExceptionForHR(hr);
Marshal.ReleaseComObject(sourceFilter);
hr = this.mediaControl.Run();
DsError.ThrowExceptionForHR(hr);
Here is an image of what is happening (I made the background green to make it easier to see):
This is a diagram of the filter graph:
To answer a potential question (because I've had this issue previously): yes, hWnd is set and has a value, so windowlessControl does have a handle to the window.
A popup "ActiveMovie Window" created when you run a filter graph is a symptom of video renderer filter inserted into pipeline and running in default mode, without being configured to be a part of other UI: embedded as a child window etc.
Your reversing your graph sheds light on what is going on:
You insert and set up one video renderer filter, then there is another one added by the API and connected to your input. While embedded into your UI is the first one, it remains idle and the other one renders video into popup.
The line that causes the problem is:
hr = this.captureGraphBuilder.RenderStream(PinCategory.Capture,
MediaType.Video, sourceFilter, null, null);
The problem is quite typical for those who build graphs by inserting a video renderer and expecting it to be picked up and connected, especially since this sometimes works and such code snippets can be found online.
MSDN says:
If the pSink parameter is NULL, the method tries to use a default renderer. For video it uses the Video Renderer, and for audio it uses the DirectSound Renderer.
The call added another renderer instance while you expected your existing one to be connected. RenderStream is a powerful call that does filter magic to get things connected, but it is easy to end up having it do something the wrong way.
Since you already have your video renderer, you could pass it as the sink argument in this call. Alternatively, you could avoid RenderStream altogether and incrementally add and connect the filters you need, so that everything is built exactly as you expect. Another option is to call IFilterGraph2::RenderEx instead, with the AM_RENDEREX_RENDERTOEXISTINGRENDERERS flag (both the sink-argument and RenderEx variants are sketched after the quote below):
...the method attempts to use renderers already in the filter graph. It will not add new renderers to the graph. (It will add intermediate transform filters, if needed.) For the method to succeed, the graph must contain the appropriate renderers, and they must have unconnected input pins.
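Here is a minimal sketch of those two variants, assuming the graphBuilder, captureGraphBuilder, sourceFilter and renderFilter variables from your snippet and using DirectShowLib names; the DsFindPin lookup (with the capture pin at index 0) is an assumption for illustration:

// Variant 1: pass the already-configured VMR9 as the renderer (sink) argument,
// so RenderStream connects to it instead of adding a default Video Renderer.
hr = this.captureGraphBuilder.RenderStream(PinCategory.Capture, MediaType.Video,
    sourceFilter, null, renderFilter);
DsError.ThrowExceptionForHR(hr);

// Variant 2: render the capture pin, but only into renderers already in the graph.
IPin captureOut = DsFindPin.ByCategory(sourceFilter, PinCategory.Capture, 0);
hr = ((IFilterGraph2)this.graphBuilder).RenderEx(captureOut,
    AMRenderExFlags.RenderToExistingRenderers, IntPtr.Zero);
DsError.ThrowExceptionForHR(hr);
Marshal.ReleaseComObject(captureOut);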
I'm currently trying to create a swap chain for a CoreWindow using the latest SharpDX as the DirectX wrapper and UWP as the project's base framework.
The documentation on that is so sparse it's unbelievable. Nonetheless I found a snippet which looked promising. Initially I always got an E_INVALIDCALL error message. Now it's "only" E_ACCESSDENIED.
So far I've done this to set up the chain:
var description = new SwapChainDescription1
{
BufferCount = 2,
Flags = SwapChainFlags.None,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.FlipSequential,
Usage = Usage.RenderTargetOutput,
Width = 0,
Height = 0,
Scaling = Scaling.None,
Format = Format.B8G8R8A8_UNorm,
Stereo = false
};
CoreWindow window = CoreWindow.GetForCurrentThread();
if (window == null)
{
Logging.Error("Could not retrieve core window for swap chain.");
throw new Exception("Invalid core window.");
}
using (var device = _device.QueryInterface<SharpDX.DXGI.Device2>())
{
device.MaximumFrameLatency = 1;
using (Adapter adapter = device.Adapter)
{
using (ComObject coreWindow = new ComObject(window))
{
using (Factory2 factory = adapter.GetParent<Factory2>())
_swapChain = new SwapChain1(factory, _device, coreWindow, ref description);
}
}
}
The constructor of SwapChain1 throws the SharpDX exception:
SharpDX.Result.CheckError()
SharpDX.DXGI.Factory2.CreateSwapChainForCoreWindow(ComObject deviceRef, ComObject windowRef, SwapChainDescription1& descRef, Output restrictToOutputRef, SwapChain1 swapChainOut)
SharpDX.DXGI.SwapChain1..ctor(Factory2 factory, ComObject device, ComObject coreWindow, SwapChainDescription1& description, Output restrictToOutput)
RobInspect.Visualizer.Rendering.RenderingPanel.InitializeSizeDependentResources()
RobInspect.Visualizer.Rendering.RenderingPanel.InitializeDevice()
"HRESULT: [0x80070005], Module: [General], ApiCode: [E_ACCESSDENIED/General access denied error], Message: Access is denied.
"
Can anyone explain to me why? "Access denied" is quite a broad statement and I'm not that experienced with DirectX's internals.
Further information: the code is executing on the main (UI) thread, so I think I can rule out the CoreWindow reference being inaccessible. Since this is first-time initialisation, I also rule out the possibility of DirectX objects not being freed properly before creating the swap chain.
EDIT:
Here is the code for creating the device. The flags are set to DeviceCreationFlags.BgraSupport and DeviceCreationFlags.Debug, and the feature levels range from FeatureLevel.Level_11_1 down to FeatureLevel.Level_9_1.
using (var device = new Device(DriverType.Hardware, flags, levels))
{
_device = device.QueryInterface<Device1>();
_context = _device.ImmediateContext1;
}
The root of this problem is that the terms WinRT Core and WinRT XAML are rather misleading. Since UWP is based on CoreWindow and both models support and use it, it's not clear where to use what.
DXGI exposes two swap chain creation methods for WinRT and one for desktop. The WinRT ones are Factory2.CreateSwapChainForCoreWindow(...) and Factory2.CreateSwapChainForComposition(...). The difference is that one takes the CoreWindow as a parameter and the other does not, and here's the trap I fell into.
Core stands for the design scheme in which one uses only IFrameworkView and IFrameworkViewSource (see here for an example with SharpDX), whereas XAML stands for the traditional scheme built around the Windows.UI.Xaml.Application class.
When using the Core model you have to call the ...ForCoreWindow(...) method to create a swap chain, while with the XAML-based approach you need a composition swap chain. I had actually tried that already, but failed because I had forgotten to enable native debugging (tip: do this if you haven't already), so the DirectX Debug Layer never showed me the essential information that could have saved me hours, if not days, of trial and error.
The issue here is that both composition and CoreWindow swap chains require special settings in the SwapChainDescription1; I'll leave you with the MSDN documentation. Moreover, with native debugging and the debug layer enabled, DirectX will tell you exactly which setting is invalid.
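For reference, here is a minimal sketch of the composition path for a XAML-hosted app, assuming a SwapChainPanel field named swapChainPanel and the _device/_swapChain fields from the question; the exact SharpDX overloads, the ISwapChainPanelNative wrapper, and the Scaling.Stretch/explicit-size requirements are stated from memory, so verify them against the MSDN documentation:

// Composition swap chains require Scaling.Stretch and an explicit size;
// Scaling.None and Width/Height of 0 are among the settings the debug
// layer reports as invalid here.
var description = new SwapChainDescription1
{
    BufferCount = 2,
    Flags = SwapChainFlags.None,
    SampleDescription = new SampleDescription(1, 0),
    SwapEffect = SwapEffect.FlipSequential,
    Usage = Usage.RenderTargetOutput,
    Width = (int)swapChainPanel.ActualWidth,  // panel assumed already laid out
    Height = (int)swapChainPanel.ActualHeight,
    Scaling = Scaling.Stretch,
    Format = Format.B8G8R8A8_UNorm,
    Stereo = false
};

using (var dxgiDevice = _device.QueryInterface<SharpDX.DXGI.Device2>())
using (var adapter = dxgiDevice.Adapter)
using (var factory = adapter.GetParent<Factory2>())
{
    // The SwapChain1 constructor without a window argument maps to
    // IDXGIFactory2::CreateSwapChainForComposition.
    _swapChain = new SwapChain1(factory, _device, ref description);
}

// Hand the swap chain to the XAML panel.
using (var nativePanel = ComObject.As<SharpDX.DXGI.ISwapChainPanelNative>(swapChainPanel))
{
    nativePanel.SwapChain = _swapChain;
}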
I am trying to extract IVMRMixerControl9 from the Video Mixing Renderer 9, but this COM-based DirectShow filter doesn't list the interface as implemented, and I can't QueryInterface for it. I am trying to enable "YUV mixing mode", which I presumably do using the IVMRMixerControl9::SetMixingPrefs method.
Why does the MSDN documentation list the VMR9 as implementing IVMRMixerControl9 when I can't obtain the interface? I have checked on Windows XP and Windows 7, no luck.
var vmr9 = new VideoMixingRenderer9() as IBaseFilter;
// this is always null. this is the C# equivalent of QueryInterface
var mixerControl = vmr9 as IVMRMixerControl9;
Here is an image of the setting I am trying to enable.
Too early: the mixer is loaded on demand. You can force it to load with an explicit SetNumberOfStreams call.
IBaseFilter baseFilter = new VideoMixingRenderer9() as IBaseFilter;
IVMRFilterConfig9 vmrFilterConfig = baseFilter as IVMRFilterConfig9;
vmrFilterConfig.SetNumberOfStreams(1);
IVMRMixerControl9 vmrMixerControl = baseFilter as IVMRMixerControl9;
Debug.Assert(vmrMixerControl != null);
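Once the mixer is loaded, enabling YUV mixing mode would look roughly like the sketch below; the DirectShowLib enum member names (VMR9MixerPrefs.RenderTargetMask, VMR9MixerPrefs.RenderTargetYUV) are my assumption of how the library maps the native flags:

// Read the current preferences, switch the render target to YUV, write back.
VMR9MixerPrefs prefs;
vmrMixerControl.GetMixingPrefs(out prefs);
prefs &= ~VMR9MixerPrefs.RenderTargetMask; // clear the render-target bits
prefs |= VMR9MixerPrefs.RenderTargetYUV;   // select YUV mixing mode
int hr = vmrMixerControl.SetMixingPrefs(prefs);
Debug.Assert(hr == 0);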
Currently I am working on rendering two different videos at the same time using one VMR9 renderer and putting the result on an XNA texture. The code I am using handles one video flawlessly, but it misbehaves with two. On my development machine everything plays fine, but when I switch computers I get a black screen.
I am using a filter graph as suggested in this topic: Can one Video Mixing Renderer 9 (VMR9) render more video streams?
If I attach GraphStudioNext on the currently running program it displays the following graph:
http://s11.postimg.org/z7d3qyyxf/graph.png
At first I thought the problem was some difference in codec settings, but after I set up the same configuration on two different machines, the graphs became identical, yet one machine displays the video correctly and the other just shows a black screen.
I even tried rebuilding the graph by hand to see whether there is any problem with the graph itself, and it runs smoothly.
I use the following code snippet to add the video sources to the VMR9 renderer:
protected override HRESULT OnInitInterfaces()
{
IBaseFilter bsFilter;
m_GraphBuilder.AddSourceFilter(@"C:\Video\Digitales CLP_tic tac Strawberry Mix_HUN_FIN.mp4", "first", out bsFilter);
IEnumPins ePins;
bsFilter.EnumPins(out ePins);
IPin[] pins = new IPin[1];
IntPtr fetched = IntPtr.Zero;
ePins.Next(1, pins, fetched);
int hr = m_GraphBuilder.Render(pins[0]);
m_GraphBuilder.AddSourceFilter(@"C:\Video\UIP_StarTrek.mp4", "second", out bsFilter);
bsFilter.EnumPins(out ePins);
ePins.Next(1, pins, fetched);
hr = m_GraphBuilder.Render(pins[0]);
return (HRESULT)hr;
}
Any help would be appreciated.
The problem was with NVidia drivers. The following code snippet caused the error:
VMR9NormalizedRect r1 = new VMR9NormalizedRect(0, 0, 0.5f, 1);
VMR9NormalizedRect r2 = new VMR9NormalizedRect(0.5f, 0, 1, 1);
hr = (HRESULT)mix.SetOutputRect(0, ref r1);
hr = (HRESULT)mix.SetOutputRect(1, ref r2);
If a VMR9NormalizedRect is initialized with any values other than 0, 0, 1, 1, it only displays a black screen. The code runs perfectly on every ATI card I tried.
It seems NVidia has not fixed this bug since 2006:
https://forums.geforce.com/default/topic/358347/.
I am trying to figure out how I can get bitmap data from a filter.
I am using the DirectShowNet wrapper to get an image from my webcam.
My current code is:
public partial class Form1 : Form
{
public IGraphBuilder gb;
public ICaptureGraphBuilder2 cgb;
public IBaseFilter filter;
public Form1()
{
InitializeComponent();
DsDevice[] videoInputDevices = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
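// Bind the chosen capture device's moniker (index 1 here) to an IBaseFilter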
object obj = null; Guid iid = typeof(IBaseFilter).GUID;
videoInputDevices[1].Mon.BindToObject(null, null, ref iid, out obj);
filter = (IBaseFilter)obj;
((IAMCameraControl)filter).Set(CameraControlProperty.Exposure, 0, CameraControlFlags.Auto);
gb = (IGraphBuilder) new FilterGraph();
cgb = (ICaptureGraphBuilder2) new CaptureGraphBuilder2();
cgb.SetFiltergraph(gb);
gb.AddFilter(filter, "First Filter");
cgb.RenderStream(PinCategory.Preview, MediaType.Video, filter, null, null);
((IVideoWindow)gb).put_Owner(this.panel1.Handle);
((IVideoWindow)gb).put_WindowStyle(WindowStyle.Child | WindowStyle.ClipChildren);
((IVideoWindow)gb).put_Visible(OABool.True);
((IVideoWindow)gb).SetWindowPosition(0, 0, this.panel1.Width, this.panel1.Height);
((IMediaControl)gb).Run();
}
}
This simple code just renders the webcam output to the panel control. I tried using a timer and the SaveToBitmap function to copy the image from the panel to a bitmap, but the bitmap is blank afterwards.
I read something about the Sample Grabber filter, but my attempt did not work; it returned a null pointer for the buffer/sample.
What should I add to be able to read the image data?
Thank you very much.
The standard behavior of a DirectShow pipeline is that filters pass data from one to another without exposing it to the controlling application, so there is no direct way to access the data.
You typically do one of the following:
You add a Sample Grabber filter at a certain position in your pipeline and set it up so that the grabber calls you back every time data passes through (a sketch follows the links below)
You grab a copy of the currently displayed video from the video renderer
Both methods are documented, popular, and discussed multiple times, including on Stack Overflow:
Efficiently grabbing pixels from video
take picture from webcam c#
Here's a detailed example of exactly this:
Working with raw video data from webcam in C# and DirectShowNet
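To illustrate the first option, here is a minimal sketch in DirectShowLib style; it assumes the gb, cgb and filter fields from the question, uses buffering mode (SetBufferSamples) rather than a callback, and forcing RGB24 is my choice so the bytes map onto a Bitmap easily:

// Create and configure a Sample Grabber that keeps a copy of the last frame.
var sampleGrabber = (ISampleGrabber)new SampleGrabber();
var mt = new AMMediaType
{
    majorType = MediaType.Video,
    subType = MediaSubType.RGB24,    // force RGB24 for easy Bitmap conversion
    formatType = FormatType.VideoInfo
};
int hr = sampleGrabber.SetMediaType(mt);
DsError.ThrowExceptionForHR(hr);
DsUtils.FreeAMMediaType(mt);
sampleGrabber.SetBufferSamples(true); // buffer the most recent sample

hr = gb.AddFilter((IBaseFilter)sampleGrabber, "Sample Grabber");
DsError.ThrowExceptionForHR(hr);

// Route the preview stream through the grabber on its way to the renderer.
hr = cgb.RenderStream(PinCategory.Preview, MediaType.Video, filter,
    (IBaseFilter)sampleGrabber, null);
DsError.ThrowExceptionForHR(hr);

// Later, while the graph is running: fetch the buffered frame.
int size = 0;
sampleGrabber.GetCurrentBuffer(ref size, IntPtr.Zero); // query required size
IntPtr buffer = Marshal.AllocCoTaskMem(size);
sampleGrabber.GetCurrentBuffer(ref size, buffer);      // copy the frame bytes
// ... wrap 'buffer' in a Bitmap using the dimensions reported by
// ISampleGrabber.GetConnectedMediaType ...
Marshal.FreeCoTaskMem(buffer);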
Below is my filter graph. I am trying to insert the "ffdshow video encoder" into the filter graph, but I am unable to do so.
Here is my code that tries to connect the compressor after the filter graph has been generated:
public void setFileName(string pFileName)
{
int hr;
IBaseFilter _infinitePinTeeFilter = null;
graph.FindFilterByName("Infinite Pin Tee Filter", out _infinitePinTeeFilter);
mediaControl.Stop();
hr = captureGraphBuilder.SetOutputFileName(MediaSubType.Avi, pFileName, out mux, out sink);
checkHR(hr, "Can't set SetOutputFile");
hr = captureGraphBuilder.RenderStream(null, MediaType.Video, _infinitePinTeeFilter, _videoCompressor, mux);
checkHR(hr, "Can't Render Output File");
mediaControl.Run();
}
Any help would be appreciated... Thanks.
ICaptureGraphBuilder2::SetOutputFileName is not a good choice of API for setting up the graph. It does the job well for simple graphs, but because it forwards errors back without a good description or any indication of the stage at which the error actually happened, you have a hard time understanding what went wrong whenever anything fails.
The problem might be caused by the absence of frame rate information in the media type on the output of the video compressor, but at the building stage shown in your screenshot that media type is not even available yet, so you cannot troubleshoot it and obtain this information.
Use IGraphBuilder::AddFilter, IGraphBuilder::Connect and IFileSinkFilter::SetFileName instead to reliably configure the pipeline step by step, along the lines of the sketch below.
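A minimal sketch of that incremental approach, reusing the graph, _infinitePinTeeFilter and _videoCompressor variables from the question; the AVI Mux and File Writer CLSIDs are the standard DirectShow ones to the best of my knowledge, and the DsFindPin lookups (in particular the tee output at index 1) are assumptions to adjust for your actual graph:

// Create the AVI Mux and File Writer by CLSID and add them to the graph.
var aviMux = (IBaseFilter)Activator.CreateInstance(
    Type.GetTypeFromCLSID(new Guid("E2510970-F137-11CE-8B67-00AA00A3F1A6"))); // AVI Mux
var fileWriter = (IBaseFilter)Activator.CreateInstance(
    Type.GetTypeFromCLSID(new Guid("8596E5F0-0DA5-11D0-BD21-00A0C911CE86"))); // File Writer
int hr = graph.AddFilter(aviMux, "AVI Mux");
checkHR(hr, "Can't add AVI Mux");
hr = graph.AddFilter(fileWriter, "File Writer");
checkHR(hr, "Can't add File Writer");

// The File Writer needs its file name set before it will connect.
hr = ((IFileSinkFilter)fileWriter).SetFileName(pFileName, null);
checkHR(hr, "Can't set output file name");

// Connect tee -> compressor -> mux -> writer one link at a time, so a
// failing connection is reported exactly where it happens.
IPin teeOut = DsFindPin.ByDirection(_infinitePinTeeFilter, PinDirection.Output, 1);
IPin compIn = DsFindPin.ByDirection(_videoCompressor, PinDirection.Input, 0);
hr = graph.Connect(teeOut, compIn);
checkHR(hr, "Can't connect tee to compressor");

IPin compOut = DsFindPin.ByDirection(_videoCompressor, PinDirection.Output, 0);
IPin muxIn = DsFindPin.ByDirection(aviMux, PinDirection.Input, 0);
hr = graph.Connect(compOut, muxIn);
checkHR(hr, "Can't connect compressor to mux");

IPin muxOut = DsFindPin.ByDirection(aviMux, PinDirection.Output, 0);
IPin writerIn = DsFindPin.ByDirection(fileWriter, PinDirection.Input, 0);
hr = graph.Connect(muxOut, writerIn);
checkHR(hr, "Can't connect mux to file writer");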