Currently I am working on rendering two different videos at the same time using one VMR9 renderer and copying the result onto an XNA texture. My code handles a single video correctly, but behaves erratically with two. On my development machine the dual-video playback works flawlessly, but when I switch computers I only get a black screen.
I am using a filter graph as suggested in this topic: Can one Video Mixing Renderer 9 (VMR9) render more video streams?
If I attach GraphStudioNext on the currently running program it displays the following graph:
http://s11.postimg.org/z7d3qyyxf/graph.png
At first I thought the problem was a difference in codec settings, but after applying the same configuration to both machines the graphs became identical, yet one machine displays the video correctly and the other still shows only a black screen.
I even rebuilt the graph by hand to rule out a problem with the graph itself, and it runs smoothly.
I use the following code snippet to add the video sources to the VMR9 renderer:
protected override HRESULT OnInitInterfaces()
{
    IBaseFilter bsFilter;

    // Add the first source and render its output pin
    m_GraphBuilder.AddSourceFilter(@"C:\Video\Digitales CLP_tic tac Strawberry Mix_HUN_FIN.mp4", "first", out bsFilter);
    IEnumPins ePins;
    bsFilter.EnumPins(out ePins);
    IPin[] pins = new IPin[1];
    IntPtr fetched = IntPtr.Zero;
    ePins.Next(1, pins, fetched);
    int hr = m_GraphBuilder.Render(pins[0]);

    // Add the second source and render it the same way
    m_GraphBuilder.AddSourceFilter(@"C:\Video\UIP_StarTrek.mp4", "second", out bsFilter);
    bsFilter.EnumPins(out ePins);
    ePins.Next(1, pins, fetched);
    hr = m_GraphBuilder.Render(pins[0]);

    return (HRESULT)hr;
}
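For reference, the approach from the linked topic relies on the VMR9 being switched into mixing mode before any pins are connected. A minimal sketch, assuming a DirectShowLib-style setup where `vmr9Filter` is a hypothetical variable holding the VMR9 instance already added to the graph:

```csharp
// Mixing mode must be enabled before the source pins are rendered;
// otherwise the VMR9 exposes only a single input pin.
IVMRFilterConfig9 vmrConfig = (IVMRFilterConfig9)vmr9Filter; // vmr9Filter: your VMR9 (hypothetical name)
int hr = vmrConfig.SetNumberOfStreams(2); // two video inputs for side-by-side mixing
DsError.ThrowExceptionForHR(hr);
```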
Any help would be appreciated.
The problem was with NVidia drivers. The following code snippet caused the error:
VMR9NormalizedRect r1 = new VMR9NormalizedRect(0, 0, 0.5f, 1);
VMR9NormalizedRect r2 = new VMR9NormalizedRect(0.5f, 0, 1, 1);
hr = (HRESULT)mix.SetOutputRect(0, ref r1);
hr = (HRESULT)mix.SetOutputRect(1, ref r2);
If a VMR9NormalizedRect is initialized with any values other than (0, 0, 1, 1), the renderer displays only a black screen. The same code runs perfectly on every ATI card I tried.
It seems NVidia has not fixed this bug since 2006:
https://forums.geforce.com/default/topic/358347/.
Related
I'm creating a webcam control using DirectShow.NET. I want to render the video of the camera into a WPF window. What is happening currently is that the IVMRWindowlessControl9 doesn't seem to be going into windowless mode and is not being parented to the window that I'm specifying, even though I'm calling the appropriate methods.
Why are these methods not being invoked? Is there something else that I'm not doing?
Below is a snippet of the relevant code:
int hr;
IGraphBuilder graphBuilder = (IGraphBuilder)new FilterGraph();
ICaptureGraphBuilder2 captureGraphBuilder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
IMediaControl mediaControl = (IMediaControl)graphBuilder;
IBaseFilter renderFilter = (IBaseFilter)new VideoMixingRenderer9();

hr = captureGraphBuilder.SetFiltergraph(graphBuilder);
DsError.ThrowExceptionForHR(hr);

// Add the capture source
IBaseFilter sourceFilter = FindCaptureDevice();
hr = graphBuilder.AddFilter(sourceFilter, "Video Capture");
DsError.ThrowExceptionForHR(hr);
SetCaptureResolution();

// Configure the VMR9 for windowless mode before connecting it
IVMRFilterConfig9 filterConfig = (IVMRFilterConfig9)renderFilter;
hr = filterConfig.SetNumberOfStreams(1);
DsError.ThrowExceptionForHR(hr);
hr = filterConfig.SetRenderingMode(VMR9Mode.Windowless);
DsError.ThrowExceptionForHR(hr);

windowlessControl = (IVMRWindowlessControl9)renderFilter;
hr = graphBuilder.AddFilter(renderFilter, "Video Renderer");
DsError.ThrowExceptionForHR(hr);

// Clip the video to the hosting WPF window
Window window = Window.GetWindow(this);
var wih = new WindowInteropHelper(window);
IntPtr hWnd = wih.Handle;
hr = windowlessControl.SetVideoClippingWindow(hWnd);
DsError.ThrowExceptionForHR(hr);
hr = windowlessControl.SetAspectRatioMode(VMR9AspectRatioMode.LetterBox);
DsError.ThrowExceptionForHR(hr);

hr = captureGraphBuilder.RenderStream(PinCategory.Capture, MediaType.Video, sourceFilter, null, null);
DsError.ThrowExceptionForHR(hr);

Marshal.ReleaseComObject(sourceFilter);

hr = mediaControl.Run();
DsError.ThrowExceptionForHR(hr);
Here is an image of what is happening (I made the background green to make it easier to see):
This is a diagram of the filter graph:
To answer a potential question (because I've had this issue previously), yes, hWnd is getting set/has a value - so the windowlessControl does have a pointer to the window.
A popup "ActiveMovie Window" that appears when you run a filter graph is a symptom of a video renderer filter being inserted into the pipeline and running in its default mode, without having been configured as part of your UI (embedded as a child window etc.).
The diagram of your graph sheds light on what is going on:
You inserted and set up one video renderer filter, and then another one was added by the API and connected to your input. The renderer embedded into your UI is the first one; it remains idle, while the other one renders the video into the popup.
The code line which gives the problem is:
hr = this.captureGraphBuilder.RenderStream(PinCategory.Capture, MediaType.Video, sourceFilter, null, null);
The problem is quite typical for those who build graphs by inserting a video renderer and expecting it to be picked up and connected automatically, especially since this sometimes works, and such code snippets can be found online.
MSDN says:
If the pSink parameter is NULL, the method tries to use a default renderer. For video it uses the Video Renderer, and for audio it uses the DirectSound Renderer.
The call added another renderer instance, while you expected your existing one to be connected. RenderStream is a powerful call that performs filter magic to get things connected, but it is easy to end up with it doing something the wrong way.
If you already have your video renderer, you can pass it as the sink argument in this call. Alternatively, you can avoid RenderStream and add and connect the filters you need one by one, to make sure the graph is built exactly as you expect. Another option is to call IFilterGraph2::RenderEx with the AM_RENDEREX_RENDERTOEXISTINGRENDERERS flag instead:
...the method attempts to use renderers already in the filter graph. It will not add new renderers to the graph. (It will add intermediate transform filters, if needed.) For the method to succeed, the graph must contain the appropriate renderers, and they must have unconnected input pins.
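As a sketch of the first option, the existing VMR9 can be passed as the sink argument, reusing the variable names from the question's snippet (renderFilter is the VMR9 already configured for windowless mode):

```csharp
// Passing the configured VMR9 as pSink makes RenderStream connect to it
// instead of silently adding a second, default renderer.
hr = this.captureGraphBuilder.RenderStream(
    PinCategory.Capture, // capture pin category
    MediaType.Video,     // video stream
    sourceFilter,        // upstream capture source
    null,                // no intermediate filter forced
    renderFilter);       // the windowless VMR9 set up earlier
DsError.ThrowExceptionForHR(hr);
```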
In order to program an AI for a web video game, I'd like to take a screenshot of the game. No problem, I can use GetWindowRect.
But this method saves a screenshot without the filters applied by the GPU. I mean, I'd like to get the real colors of the webpage, not the ones I see after GPU processing.
The form recognition is based on colors, and I can't publish this AI if nobody gets the same colors in their screenshots.
Is there any way to do that?
--
PinkPR
You need to render the same way as you render on screen, but into a bitmap's device context.
Sample code to give the basic idea:
..
..
//Application of matrices and usual program flow
//
..
//Get a Graphics object from the bitmap previously created
Graphics gc = Graphics.FromImage(bmp);
//Get device context
IntPtr hDc = gc.GetHdc();
int iPixelFormat = Gdi.ChoosePixelFormat(hDc, ref pfd);
Gdi.SetPixelFormat(hDc, iPixelFormat, ref pfd);
IntPtr hRc = Wgl.wglCreateContext(hDc);
//Make it current
if (!Wgl.wglMakeCurrent(hDc, hRc))
{
throw new Exception("....");
}
//Render to hDc
I am trying to figure out how I can get bitmap data out of a filter.
I am using the DirectShow.NET wrapper to get an image from my web camera.
My current code is:
public partial class Form1 : Form
{
public IGraphBuilder gb;
public ICaptureGraphBuilder2 cgb;
public IBaseFilter filter;
public Form1()
{
InitializeComponent();
// Bind the chosen video input device (index 1: the second device on this machine)
DsDevice[] videoInputDevices = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
object obj = null; Guid iid = typeof(IBaseFilter).GUID;
videoInputDevices[1].Mon.BindToObject(null, null, ref iid, out obj);
filter = (IBaseFilter)obj;
((IAMCameraControl)filter).Set(CameraControlProperty.Exposure, 0, CameraControlFlags.Auto);

// Build the graph and render the camera preview into panel1
gb = (IGraphBuilder)new FilterGraph();
cgb = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
cgb.SetFiltergraph(gb);
gb.AddFilter(filter, "First Filter");
cgb.RenderStream(PinCategory.Preview, MediaType.Video, filter, null, null);
((IVideoWindow)gb).put_Owner(this.panel1.Handle);
((IVideoWindow)gb).put_WindowStyle(WindowStyle.Child | WindowStyle.ClipChildren);
((IVideoWindow)gb).put_Visible(OABool.True);
((IVideoWindow)gb).SetWindowPosition(0, 0, this.panel1.Width, this.panel1.Height);
((IMediaControl)gb).Run();
}
}
This simple code just renders the webcam output to a panel control. I tried using a timer and the SaveToBitmap function to copy the image from the panel to a bitmap, but the bitmap is blank afterwards.
I read something about the Sample Grabber filter, but my attempt did not work; it returned a null pointer to the buffer/sample.
I would like to ask: what should I add to be able to read the image data?
Thank you very much.
The standard behavior of a DirectShow pipeline is that filters pass data from one to another without exposing it to the controlling application, so there is no direct way to access the data.
You typically do one of the following:
Add a Sample Grabber filter at a certain position in your pipeline and set it up so that the Sample Grabber calls you back every time data passes through it
Grab a copy of the currently displayed video from the video renderer
Both methods are documented, popular and discussed multiple times, including on StackOverflow:
Efficiently grabbing pixels from video
take picture from webcam c#
Here's a detailed example of exactly this:
Working with raw video data from webcam in C# and DirectShowNet
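For illustration, a minimal Sample Grabber insertion along the lines of the first option might look like the sketch below, reusing the gb/cgb/filter variables from the question (details such as an ISampleGrabberCB callback implementation are omitted):

```csharp
// Create the Sample Grabber and request uncompressed RGB24 frames so the
// buffer layout is predictable.
ISampleGrabber grabber = (ISampleGrabber)new SampleGrabber();
AMMediaType mt = new AMMediaType
{
    majorType = MediaType.Video,
    subType = MediaSubType.RGB24,
    formatType = FormatType.VideoInfo
};
grabber.SetMediaType(mt);
DsUtils.FreeAMMediaType(mt);

grabber.SetBufferSamples(true); // keep a copy of the last frame for GetCurrentBuffer
grabber.SetCallback(null, 0);   // or pass an ISampleGrabberCB implementation

gb.AddFilter((IBaseFilter)grabber, "Sample Grabber");

// Let the capture graph builder wire camera -> grabber -> renderer.
cgb.RenderStream(PinCategory.Preview, MediaType.Video,
    filter, (IBaseFilter)grabber, null);
```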
DISCLAIMER
This question is somewhat similar to another on StackOverflow, C# - Capturing the Mouse cursor image - but with a slightly different requirement.
BACKGROUND
I am writing a scriptable automation client that scrapes data from 3 legacy Win32 systems.
Two of these systems may indicate the presence of finished tasks via a change in cursor bitmap when the cursor is hovered over some specific areas. No other hints (color change, status message) are offered.
My own code is derived from the original post mentioned on the disclaimer.
REQUIREMENTS
While I am able to capture the cursor bitmaps by programmatically moving the cursor to a specific coordinate and capturing it via CURSORINFO, the idea was to allow an interactive user to continue using the computer. As it is, the forced positioning disrupts their work.
QUESTION
Is there a way to capture the cursor bitmap by parametrized position (e.g., request the CURSORINFO as if the focus was in window W at coordinates X, Y)?
A solution fulfilling the specifics of this question was implemented using the information provided by Hans Passant, so all credit must go to him.
The current setup is as shown:
It runs on a machine with two displays. Not shown in the picture is a small application that is actually responsible for the event monitoring and data scraping - it runs minimized and unattended.
Solution
Obtain the window handle for the application to be tested (in this case, I cycled through all processes returned by Process.GetProcesses()):
IntPtr _probeHwnd;
var _procs = Process.GetProcesses();
foreach (var item in _procs)
{
if (item.MainWindowTitle == "WinApp#1")
{
_probeHwnd = item.MainWindowHandle;
break;
}
}
With the window handle for the target application, we are now able to craft specific messages and send them to it via SendMessage.
In order to pass coordinates to SendMessage we need to pack the X and Y coordinates into a single 32-bit value:
public int MakeLong(short lowPart, short highPart)
{
    // Pack the two 16-bit halves into one 32-bit value: low word | high word
    return (int)(((ushort)lowPart) | (uint)(highPart << 16));
}
Knowing the specific coordinates we want to probe (_probeX,_probeY), now we can issue a WM_NCHITTEST message:
SendMessage(_probeHwnd, WM_NCHITTEST, IntPtr.Zero, (IntPtr)MakeLong(_probeX, _probeY));
We need GetCursorInfo to obtain the Bitmap:
Win32Stuff.CURSORINFO ci = new Win32Stuff.CURSORINFO();
ci.cbSize = Marshal.SizeOf(ci); // GetCursorInfo fails unless cbSize is filled in first
Win32Stuff.GetCursorInfo(ci);
Check that the flags member filled in by GetCursorInfo indicates the cursor is showing (ci.flags == CURSOR_SHOWING):
Use CopyIcon in order to obtain a valid handle for the cursor bitmap:
IntPtr hicon = default(IntPtr);
hicon = Win32Stuff.CopyIcon(ci.hCursor);
Use GetIconInfo to extract the information from the handle:
Win32Stuff.ICONINFO icInfo = default(Win32Stuff.ICONINFO);
Win32Stuff.GetIconInfo(hicon, icInfo);
Use the System.Drawing.Icon class to obtain a managed copy via Icon.FromHandle, passing the value returned by CopyIcon:
Icon ic = Icon.FromHandle(hicon);
Extract the bitmap via Icon.ToBitmap method.
Bitmap bmp = ic.ToBitmap();
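Put together, the steps above can be sketched as follows (the Win32Stuff wrapper signatures, including a bool return from GetCursorInfo, and the CURSOR_SHOWING constant are assumed from the original post):

```csharp
// Ask the target window which cursor it would show at (_probeX, _probeY).
SendMessage(_probeHwnd, WM_NCHITTEST, IntPtr.Zero, (IntPtr)MakeLong(_probeX, _probeY));

// Read back the cursor the probed application just set.
Win32Stuff.CURSORINFO ci = new Win32Stuff.CURSORINFO();
ci.cbSize = Marshal.SizeOf(ci); // required before GetCursorInfo
if (Win32Stuff.GetCursorInfo(ci) && ci.flags == Win32Stuff.CURSOR_SHOWING)
{
    IntPtr hicon = Win32Stuff.CopyIcon(ci.hCursor); // own a handle to the cursor
    Icon ic = Icon.FromHandle(hicon);
    Bitmap bmp = ic.ToBitmap(); // the cursor bitmap to compare against
}
```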
Limitations
This solution was tested on two different OSes: Windows XP and Windows 8. It only worked on Windows XP. On Windows 8 the cursor would flicker and return to the 'correct' format immediately, and the captured CURSORINFO reflected that.
The test point areas must be visible (i.e., application must not be minimized, and test points can't be under an overlapping window. Tested window may be partially overlapped, though - and it doesn't need to have focus.)
When WM_NCHITTEST is issued, the current physical cursor changes to whatever cursor bitmap is set by the probed application. CURSORINFO then contains the cursor bitmap set by the probed application, but the coordinates always indicate the 'physical' cursor location.
I recently found out about the Surface and Device classes, which may solve my problems with screenshotting a fullscreen Direct3D game.
I've tried following this article: fastest method to capture game screen shots in c#?(more than20 images per second)
The first method I've tried is:
Device device = new Device(0, DeviceType.Default, GetForegroundWindow(), CreateFlags.None, new PresentParameters());
Surface s2 = device.CreateImageSurface(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8);
device.GetFrontBuffer(s2);
SurfaceLoader.Save("c:\\Screenshot.bmp", ImageFileFormat.Bmp, s2);
The second method I've tried is:
Device device = new Device(0, DeviceType.Default, GetForegroundWindow(), CreateFlags.None, new PresentParameters());
Surface s1 = device.GetBackBuffer(0, BackBufferType.Mono);
device.GetFrontBuffer(s1);
With both methods the device reports a DLL it can't find: Unable to load DLL 'netcfd3dm2_0.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
I got confused by that other article. Can anyone with experience in this area sort things out?
It seems you are missing a file that comes with the .NET Compact Framework. Try installing or reinstalling the Compact Framework.