Capturing all Screens with DirectX GetFrontBufferData - c#

I'm trying to create a screenshot of all screens on my PC. In the past I've been using the GDI method, but due to performance issues I'm trying the DirectX way.
I can take a screenshot of a single screen without issues, with code like this:
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;
using System.Windows.Forms;
using System.Drawing;

class Capture : Form
{
    private Device device;
    private Surface surface;

    public Capture()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;
        device = new Device(0, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        // copy the front buffer (the visible desktop) into the offscreen surface
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}
(Let's ignore disposing of the Bitmap for this question.)
With that code I can take a screenshot of my primary screen. Changing the first parameter of the Device constructor to a different number corresponds to a different screen: if I have three screens and pass 2 as the adapter index, I get a screenshot of my third screen.
The issue I have is how to capture all screens. I came up with the following:
class CaptureScreen : Form
{
    private int index;
    private Screen screen;
    private Device device;
    private Surface surface;

    public Rectangle ScreenBounds { get { return screen.Bounds; } }
    public Device Device { get { return device; } }

    public CaptureScreen(int index, Screen screen, PresentParameters p)
    {
        this.screen = screen;
        this.index = index;
        device = new Device(index, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(screen.Bounds.Width, screen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}
class CaptureDirectX : Form
{
    private CaptureScreen[] screens;
    private int width = 0;
    private int height = 0;

    public CaptureDirectX()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;
        screens = new CaptureScreen[Screen.AllScreens.Length];
        for (int i = 0; i < Screen.AllScreens.Length; i++)
        {
            screens[i] = new CaptureScreen(i, Screen.AllScreens[i], p);
            // reset previously created devices
            if (i > 0)
            {
                for (int j = 0; j < i; j++)
                {
                    screens[j].Device.Reset(p);
                }
            }
            width += Screen.AllScreens[i].Bounds.Width;
            if (Screen.AllScreens[i].Bounds.Height > height)
            {
                height = Screen.AllScreens[i].Bounds.Height;
            }
        }
    }

    public Bitmap Frame()
    {
        Bitmap result = new Bitmap(width, height);
        using (var g = Graphics.FromImage(result))
        {
            for (int i = 0; i < screens.Length; i++)
            {
                Bitmap frame = screens[i].Frame();
                g.DrawImage(frame, screens[i].ScreenBounds);
            }
        }
        return result;
    }
}
As you can see, I iterate through the available screens and create multiple devices and surfaces in a separate class. But calling Frame() of the CaptureDirectX class throws the following error:
An unhandled exception of type 'Microsoft.DirectX.Direct3D.InvalidCallException' occurred in Microsoft.DirectX.Direct3D.dll
At the line
device.GetFrontBufferData(0, surface);
I've been researching this a bit but didn't have a whole lot of success. I'm not really sure what the issue is.
I've found a link that suggests resetting the Device objects. But as you can see in my code above, I've been trying to reset all previously created Device objects, sadly without success.
So my questions are:
Is what I'm trying to achieve even possible with this method (i.e. GetFrontBufferData)?
What am I doing wrong? What am I missing?
Do you see any performance issues when capturing the screen at a high rate, say 30 fps? (Capturing a single screen with a target of 30 fps gave me about 25-30 fps, compared with the GDI method, which sometimes sinks to around 15 fps.)
FYI it's a WPF application, i.e. .NET 4.5
Edit: I should mention that I'm aware of IDXGI_DesktopDuplication but sadly it doesn't fit my requirements. As far as I know, that API is only available from Windows 8 onwards, but I'm trying to get a solution that works from Windows 7 onwards because of my clients.

Well, in the end the solution was something completely different. The System.Windows.Forms.Screen class doesn't play nicely with the DirectX classes. Why? Because the indexes don't match up: the first object in AllScreens does not necessarily correspond to adapter index 0 in the Device instantiation.
Now usually this isn't a problem, except when you have a "strange" monitor setup like mine. On the desk I have three screens: one vertical (1200x1920), one horizontal (1920x1200) and another horizontal laptop screen (1920x1080).
What happened in my case: the first object in AllScreens was the vertical monitor on the left, so I tried to create a device for index 0 with width 1200 and height 1920. Adapter index 0 corresponds to my main monitor though, i.e. the horizontal monitor in the middle, so I was essentially going out of the screen bounds with my instantiation. The instantiation doesn't throw an exception, and at some point later I try to read the front buffer data. Bam, exception, because I'm trying to take a 1200x1920 screenshot of a monitor that's 1920x1200.
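If you hit the same mismatch, one way to line the indexes up is to match device names instead of relying on the AllScreens order. A rough sketch, assuming the Managed DirectX Manager.Adapters collection exposes the adapter's DeviceName (both sides typically look like @"\\.\DISPLAY1"; verify the exact member names against your SDK version):

// Rough sketch: pick the Direct3D adapter ordinal whose device name matches
// the Screen's device name. Member names (Manager.Adapters,
// AdapterInformation.Information.DeviceName) are assumptions to verify.
private static int AdapterOrdinalFor(Screen screen)
{
    foreach (AdapterInformation adapter in Manager.Adapters)
    {
        if (string.Equals(adapter.Information.DeviceName, screen.DeviceName,
                          StringComparison.OrdinalIgnoreCase))
        {
            return adapter.Adapter;
        }
    }
    return 0; // fall back to the default adapter
}

You would then pass AdapterOrdinalFor(screen) instead of the loop index when creating each Device.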
Sadly, even after I got this working, the performance was no good. A single frame of all three monitors takes about 300 to 500 ms. Even with a single monitor, the execution time was something like 100 ms. Not good enough for my use case.
I didn't get the back buffer to work either; it just produces black images.
I went back to the GDI method and enhanced it by only updating specific chunks of the bitmap on each Frame() call: a 1920x1200 capture region gets cut into 480x300 rectangles, and only some of those chunks are refreshed per call (a rough sketch below).
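A simplified sketch of that idea; the 480x300 tile size and the round-robin refresh order are illustrative choices, not anything required:

// Sketch of a chunked GDI capture: keep one persistent Bitmap and only
// re-copy a few fixed-size tiles per Frame() call.
class ChunkedGdiCapture
{
    private readonly Bitmap buffer;
    private readonly Rectangle bounds;
    private readonly List<Rectangle> tiles = new List<Rectangle>();
    private int nextTile;

    public ChunkedGdiCapture(Rectangle screenBounds, int tileWidth = 480, int tileHeight = 300)
    {
        bounds = screenBounds;
        buffer = new Bitmap(bounds.Width, bounds.Height);
        for (int y = 0; y < bounds.Height; y += tileHeight)
            for (int x = 0; x < bounds.Width; x += tileWidth)
                tiles.Add(new Rectangle(x, y,
                    Math.Min(tileWidth, bounds.Width - x),
                    Math.Min(tileHeight, bounds.Height - y)));
    }

    // Refreshes only a handful of tiles per call instead of the whole screen.
    public Bitmap Frame(int tilesPerCall = 4)
    {
        using (var g = Graphics.FromImage(buffer))
        {
            for (int i = 0; i < tilesPerCall; i++)
            {
                Rectangle tile = tiles[nextTile];
                g.CopyFromScreen(bounds.X + tile.X, bounds.Y + tile.Y,
                                 tile.X, tile.Y, tile.Size);
                nextTile = (nextTile + 1) % tiles.Count;
            }
        }
        return buffer;
    }
}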

Related

Dynamically display an array of PictureBoxes - Performance issue

I'd like to display the cover art for each album of an MP3 library, a bit like iTunes does (at a later stage, I'd like to click any of these covers to display the list of songs).
I have a form with a panel panel1, and here is the loop I'm using:
int i = 0;
int perCol = 4;
int disBetWeen = 15;
int width = 250;
int height = 250;
// myPicBox is a List<PictureBox> declared elsewhere on the form
foreach (var alb in mp2)
{
    myPicBox.Add(new PictureBox());
    myPicBox[i].SizeMode = System.Windows.Forms.PictureBoxSizeMode.StretchImage;
    myPicBox[i].Location = new System.Drawing.Point(disBetWeen + (disBetWeen * (i % perCol)) + (width * (i % perCol)),
                                                    disBetWeen + (disBetWeen * (i / perCol)) + (height * (i / perCol)));
    myPicBox[i].Name = "pictureBox" + i;
    myPicBox[i].Size = new System.Drawing.Size(width, height);
    myPicBox[i].ImageLocation = @"C:/Users/Utilisateur/Music/label.jpg";
    panel1.Controls.Add(myPicBox[i]);
    i++;
}
I'm using the same picture for every PictureBox for convenience, but I'll use the cover art embedded in each MP3 file eventually.
It works fine with a subset of the library (around 50 albums), but I have several thousand. I tried it and, as expected, it takes a long time to load and I can't really scroll afterward.
Is there any way to load only what's displayed? And how do I work out what is displayed from the scrollbars?
Thanks
Winforms really isn't suited to this sort of thing... Using standard controls, you'd probably need to either provision all the image boxes up front and load images in as they become visible, or manage some overflow placeholder for the appropriate length so the scrollbars work.
Assuming Winforms is your only option, I'd suggest you look into creating a custom control with a scroll bar and manually driving the OnPaint event.
That would allow you to keep a cache of images in memory to draw the current view [and a few either side], while giving you total control over when they're loaded/unloaded [well, as "total" as you can get in a managed language - you may still need to tune garbage collection].
To get into some details....
Create a new control
namespace SO61574511 {
    // Let's inherit from Panel so we can take advantage of scrolling for free
    public class ImageScroller : Panel {
        // Some numbers to allow us to calculate layout
        private const int BitmapWidth = 100;
        private const int BitmapSpacing = 10;

        // imageCache will keep the images in memory. Ideally we should unload images we're not using, but that's a problem for the reader
        private Bitmap[] imageCache;

        public ImageScroller() {
            // How many images to put in the cache? If you don't know up-front, use a list instead of an array
            imageCache = new Bitmap[100];

            // Take advantage of Winforms scrolling
            this.AutoScroll = true;
            this.AutoScrollMinSize = new Size((BitmapWidth + BitmapSpacing) * imageCache.Length, this.Height);
        }

        protected override void OnPaint(PaintEventArgs e) {
            // Let Winforms paint its bits (like the scroll bar)
            base.OnPaint(e);

            // Translate whatever _we_ paint by the position of the scrollbar
            e.Graphics.TranslateTransform(this.AutoScrollPosition.X, this.AutoScrollPosition.Y);

            // Use this to decide which images are out of sight and can be unloaded
            var current_scroll_position = this.HorizontalScroll.Value;

            // Loop through the images you want to show (probably not all of them, just those close to the view area)
            for (int i = 0; i < imageCache.Length; i++) {
                e.Graphics.DrawImage(GetImage(i), new PointF(i * (BitmapSpacing + BitmapWidth), 0));
            }
        }

        // You won't need a random, just for my demo colours below
        private Random rnd = new Random();

        private Bitmap GetImage(int id) {
            // This method is responsible for getting an image.
            // If it's already in the cache, use it, otherwise load it
            if (imageCache[id] == null) {
                // Do something here to load an image into the cache
                imageCache[id] = new Bitmap(100, 100);

                // For demo purposes, I'll flood fill a random colour
                using (var gfx = Graphics.FromImage(imageCache[id])) {
                    gfx.Clear(Color.FromArgb(255, rnd.Next(0, 255), rnd.Next(0, 255), rnd.Next(0, 255)));
                }
            }
            return imageCache[id];
        }
    }
}
And load it into your form, docking to fill the screen:
public Form1() {
    InitializeComponent();
    this.Controls.Add(new ImageScroller {
        Dock = DockStyle.Fill
    });
}
You can see it in action here: https://www.youtube.com/watch?v=ftr3v6pLnqA (excuse the mouse trails; I captured an area outside the window).

Converting Texture2D into a video

I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a FOVE HMD and would like to record a video from the integrated camera. So far I've managed to take a snapshot of the camera every update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them, or can someone point me in a direction in which I could continue my research?
using System.Collections.Generic;
using UnityEngine;

public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    private List<Texture2D> snapshots;

    void Start()
    {
        //-------------just checking if webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); // screen.width and screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }

        //-------------camera found, start with the video
        snapshots = new List<Texture2D>(); // the list must exist before Update() adds to it
        foveCamera.Play();
        camAvailable = true;
    }

    void Update()
    {
        if (!camAvailable)
        {
            return;
        }
        // loading snap from camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start method is just for finding and enabling the camera. In the Update method I take a snapshot of the video every frame.
After I "stop" the Update method, I would like to convert the gathered Texture2D objects into a video.
Thanks in advance
Create MediaEncoder
using UnityEditor;       // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder

var vidAttr = new VideoTrackAttributes
{
    bitRateMode = VideoBitrateMode.Medium,
    frameRate = new MediaRational(25),
    width = 320,
    height = 240,
    includeAlpha = false
};
var audAttr = new AudioTrackAttributes
{
    sampleRate = new MediaRational(48000),
    channelCount = 2
};
var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
Convert each snapshot to a Texture2D (in your case they already are), then call AddFrame for each one to add it to the MediaEncoder:
enc.AddFrame(tex);
Once done, call Dispose to close the file:
enc.Dispose();
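Putting those steps together, a minimal sketch (editor-only API; the audio track from above is omitted, and every snapshot is assumed to match the declared width/height):

// Minimal sketch: write a list of Texture2D snapshots to an .mp4 in the editor.
// Assumes all snapshots share the same size; audio is left out for brevity.
using System.Collections.Generic;
using UnityEditor;        // VideoBitrateMode
using UnityEditor.Media;  // MediaEncoder, VideoTrackAttributes, MediaRational
using UnityEngine;

public static class SnapshotEncoder
{
    public static void Encode(List<Texture2D> snapshots, string path, int fps)
    {
        var vidAttr = new VideoTrackAttributes
        {
            bitRateMode = VideoBitrateMode.Medium,
            frameRate = new MediaRational(fps),
            width = (uint)snapshots[0].width,
            height = (uint)snapshots[0].height,
            includeAlpha = false
        };

        using (var encoder = new MediaEncoder(path, vidAttr))
        {
            foreach (Texture2D frame in snapshots)
                encoder.AddFrame(frame);
        } // Dispose (via using) finalizes and closes the file
    }
}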
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second one is harder but prettier. Both rely on FFmpeg.
1) Save every frame into an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images) - slow due to many disk operations.
2) Use FFmpeg via the wrapper implemented in AForge and supply its VideoFileWriter class with the images you have.
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so in order to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
Bitmap bmp;
Texture2D snap;
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);
Both methods are not the fastest ones though, so if performance is an issue here you might want to operate on lower level data like DirectX or OpenGL textures.
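For the AForge route, the end-to-end shape would be roughly this (a sketch assuming the AForge.Video.FFMPEG package and its VideoFileWriter; check the exact API of the version you install):

// Rough sketch, assuming AForge.Video.FFMPEG's VideoFileWriter.
// Each Texture2D is converted to a System.Drawing.Bitmap via an in-memory PNG.
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using AForge.Video.FFMPEG;
using UnityEngine;

public static class AForgeVideoWriter
{
    public static void Write(List<Texture2D> snapshots, string path, int fps)
    {
        using (var writer = new VideoFileWriter())
        {
            writer.Open(path, snapshots[0].width, snapshots[0].height, fps, VideoCodec.MPEG4);
            foreach (Texture2D snap in snapshots)
            {
                using (var ms = new MemoryStream(snap.EncodeToPNG()))
                using (var bmp = new Bitmap(ms))
                {
                    writer.WriteVideoFrame(bmp);
                }
            }
            writer.Close();
        }
    }
}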

WPF C# multiple monitors restoring position scaling at 125%

I am having trouble restoring floating tools in AvalonDock.
The app I'm developing uses AvalonDock for document management with a few tools.
I usually use the tools on the 2nd monitor.
I use multiple monitors at work with 125% scaling on one and 100% scaling on the other. The main monitor is a 4K monitor, the other is a 2K monitor.
When I remote-desktop into the work PC with a single monitor (3440x1440) and run the app, I notice that the tools on the 2nd monitor are not visible and I have no way to bring them back to the main screen.
AvalonDock's floating LayoutAnchorables are not treated as separate views.
If anyone knows how to make a LayoutAnchorable a separate window view, that would be the best solution, but I could not find how to do it. I tried the following:
if (args.Model.IsFloating)
{
    var left = (int)args.Model.FloatingLeft;
    var top = (int)args.Model.FloatingTop;
    var width = (int)args.Model.FloatingWidth;
    var height = (int)args.Model.FloatingHeight;
    var rect = new System.Drawing.Rectangle(left, top, width, height);
    var intersected = Screen.AllScreens.Any(p => p.WorkingArea.IntersectsWith(rect));
    if (!intersected)
    {
        // need to reposition
        args.Model.FloatingLeft = 0;
        args.Model.FloatingTop = 0;
    }
    //args.Model.FloatingTop;
}
System.Windows.SystemParameters.PrimaryScreenHeight 1440 double
System.Windows.SystemParameters.PrimaryScreenWidth 3440 double
System.Windows.SystemParameters.VirtualScreenHeight 1440 double
System.Windows.SystemParameters.VirtualScreenWidth 3440 double
args.Model.FloatingLeft 4133.6 double
args.Model.FloatingTop 909.6 double
WorkingArea {X = 0 Y = 0 Width = 4300 Height = 1750} System.Drawing.Rectangle
The problem is that the working area is scaled at 125%.
This puts args.Model within the bounds of the main window.
So I guess I can't use the System.Windows.Forms.Screen info, because I don't know which scaling the user will be using.
How do I get the real resolutions, positions, and scaling of the multiple monitors?
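One way to compare the two coordinate spaces is to convert the WPF values into device pixels with the window's CompositionTarget transform before intersecting with Screen.WorkingArea. A rough sketch (mainWindow is a placeholder for a window that has already been shown; note that this uses that window's DPI, which can still differ from the target monitor's DPI in per-monitor-scaling setups):

// Rough sketch: convert WPF device-independent units to physical pixels before
// testing against Screen.WorkingArea (which is in device pixels).
// 'mainWindow' is a placeholder; it must have a PresentationSource (i.e. be shown).
var source = PresentationSource.FromVisual(mainWindow);
if (source != null && source.CompositionTarget != null)
{
    System.Windows.Media.Matrix toDevice = source.CompositionTarget.TransformToDevice;
    var deviceRect = new System.Drawing.Rectangle(
        (int)(args.Model.FloatingLeft * toDevice.M11),
        (int)(args.Model.FloatingTop * toDevice.M22),
        (int)(args.Model.FloatingWidth * toDevice.M11),
        (int)(args.Model.FloatingHeight * toDevice.M22));

    bool visible = Screen.AllScreens.Any(s => s.WorkingArea.IntersectsWith(deviceRect));
}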
I found the answer myself.
I was able to set the floating window's Owner to null.
Apparently the dock's floating windows inherit from System.Windows.Window.
This is what I wanted from the beginning.
private void DockManager_LayoutUpdated(object sender, EventArgs e)
{
    foreach (var floatingWindow in dockManager.FloatingWindows)
    {
        if (floatingWindow.Owner != null)
        {
            floatingWindow.Owner = null;
        }
        floatingWindow.ShowInTaskbar = true;
    }
}

parallel image processing artifacts

I capture images from a webcam, do some heavy processing on them, and then show the result. To keep the framerate high, I want the processing of different frames to run in parallel.
So, I have a 'Producer', which captures the images and adds them to the 'inQueue'; it also takes an image from the 'outQueue' and displays it:
public class Producer
{
    Capture capture;
    Queue<Image<Bgr, Byte>> inQueue;
    Queue<Image<Bgr, Byte>> outQueue;
    Object lockObject;
    Emgu.CV.UI.ImageBox screen;
    public int frameCounter = 0;

    public Producer(Emgu.CV.UI.ImageBox screen, Capture capture, Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue, Object lockObject)
    {
        this.screen = screen;
        this.capture = capture;
        this.inQueue = inQueue;
        this.outQueue = outQueue;
        this.lockObject = lockObject;
    }

    public void produce()
    {
        while (true)
        {
            lock (lockObject)
            {
                inQueue.Enqueue(capture.QueryFrame());
                if (inQueue.Count == 1)
                {
                    Monitor.PulseAll(lockObject);
                }
                if (outQueue.Count > 0)
                {
                    screen.Image = outQueue.Dequeue();
                }
            }
            frameCounter++;
        }
    }
}
There are different 'Consumers' who take an image from the inQueue, do some processing, and add them to the outQueue:
public class Consumer
{
    Queue<Image<Bgr, Byte>> inQueue;
    Queue<Image<Bgr, Byte>> outQueue;
    Object lockObject;
    string name;
    Image<Bgr, Byte> image;

    public Consumer(Queue<Image<Bgr, Byte>> inQueue, Queue<Image<Bgr, Byte>> outQueue, Object lockObject, string name)
    {
        this.inQueue = inQueue;
        this.outQueue = outQueue;
        this.lockObject = lockObject;
        this.name = name;
    }

    public void consume()
    {
        while (true)
        {
            lock (lockObject)
            {
                if (inQueue.Count == 0)
                {
                    Monitor.Wait(lockObject);
                    continue;
                }
                image = inQueue.Dequeue();
            }

            // Do some heavy processing with the image

            lock (lockObject)
            {
                outQueue.Enqueue(image);
            }
        }
    }
}
The rest of the important code is this section:
private void Form1_Load(object sender, EventArgs e)
{
    Consumer[] c = new Consumer[consumerCount];
    Thread[] t = new Thread[consumerCount];
    Object lockObj = new object();
    Queue<Image<Bgr, Byte>> inQueue = new Queue<Image<Bgr, Byte>>();
    Queue<Image<Bgr, Byte>> outQueue = new Queue<Image<Bgr, Byte>>();

    p = new Producer(screen1, capture, inQueue, outQueue, lockObj);

    for (int i = 0; i < consumerCount; i++)
    {
        c[i] = new Consumer(inQueue, outQueue, lockObj, "c_" + Convert.ToString(i));
    }
    for (int i = 0; i < consumerCount; i++)
    {
        t[i] = new Thread(c[i].consume);
        t[i].Start();
    }

    Thread pt = new Thread(p.produce);
    pt.Start();
}
The parallelisation actually works fine, I do get a linear speed increase with each added thread (up to a certain point of course). The problem is that I get artifacts in the output, even if running only one thread. The artifacts look like part of the picture is not in the right place.
Example of the artifact (this is without any processing to keep it clear, but the effect is the same)
Any ideas what causes this?
Thanks
Disclaimer: This post isn't supposed to fully describe an answer, but instead give some hints on why the artifact is being shown.
A quick analysis shows that the artifact is, in fact, a partial, vertically mirrored snippet of a frame. I copied it, mirrored it, placed it back over the image, and added an awful marker to show its placement:
Two things immediately come to attention:
The artifact is roughly positioned on the 'correct' place it would be, only that the position is also vertically mirrored;
The image is slightly different, indicating that it may belong to a different frame.
It's been a while since I played around with raw capture and ran into a similar issue, but I remember that depending on how the driver is implemented (or set up - this particular issue happened when setting a specific imaging device for interlaced capture) it may fill its framebuffer alternating between 'top-down' and 'bottom-up' scans - as soon as the frame is full, the 'cursor' reverts direction.
It seems to me that you're running into a race condition/buffer underrun situation, where the transfer from the framebuffer to your application is happening before the full frame is transferred by the device.
In that case, you'd receive a partial image, and the area still not refreshed would show a bit of the previously transferred frame.
If I'd have to bet, I'd say that the artifact may appear on sequential order, not on the same position but 'fluctuating' on a specific direction (up or down), but always as a mirrored bit.
Well, I think the problem is here. This section of code doesn't guarantee that only one thread at a time works across the two queues, so an image popped from inQueue is not necessarily added to outQueue in the same order:
while (true)
{
    lock (lockObject)
    {
        if (inQueue.Count == 0)
        {
            Monitor.Wait(lockObject);
            continue;
        }
        image = inQueue.Dequeue();
    }

    // Do some heavy processing with the image

    lock (lockObject)
    {
        outQueue.Enqueue(image);
    }
}
Similar to @OnoSendai, I'm not trying to solve the exact problem as stated. I would have to write an app and I just don't have the time. But the two things that I would change right away would be to use the ConcurrentQueue class so that you have thread-safety, and to use the Task library functions in order to create parallel tasks on different processor cores. These are found in the System.Collections.Concurrent and System.Threading.Tasks namespaces.
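A minimal sketch of that shape (names are illustrative, and cancellation/shutdown handling is left out):

// Minimal sketch of the ConcurrentQueue + Task shape suggested above.
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

var inQueue = new ConcurrentQueue<Image<Bgr, Byte>>();
var outQueue = new ConcurrentQueue<Image<Bgr, Byte>>();

for (int i = 0; i < consumerCount; i++)
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            Image<Bgr, Byte> image;
            if (inQueue.TryDequeue(out image))
            {
                // Do some heavy processing with the image
                outQueue.Enqueue(image);
            }
            else
            {
                Thread.Sleep(1); // or use BlockingCollection<T> to avoid polling
            }
        }
    }, TaskCreationOptions.LongRunning);
}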
Also, vertically flipping a chunk like that looks like more than an artifact to me. If it also happens when executing in a single thread as you mentioned, then I would definitely re-focus on the "heavy processing" part of the equation.
Good luck! Take care.
You may have two problems:
1) Parallelism doesn't ensure that images are added to the out queue in the right order. I imagine that displaying image 8 before images 6 and 7 can produce some artifacts. In a consumer thread, you have to wait until the previous consumer has posted its image to the out queue before posting the next one (see the sketch after this list). Tasks can help greatly with that because of their inherent synchronisation mechanism.
2) You may also have problems in the rendering code.
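For the ordering problem in 1), a simple approach is to tag each frame with a sequence number when it is captured and hold completed frames back until it is their turn. A rough sketch (names are illustrative):

// Rough sketch: buffer out-of-order results and release them in sequence.
// The producer assigns an increasing sequence number to each captured frame.
class OrderedOutput
{
    private readonly Dictionary<long, Image<Bgr, Byte>> pending = new Dictionary<long, Image<Bgr, Byte>>();
    private readonly object sync = new object();
    private long nextToShow = 0;

    // Called by a consumer when it has finished processing frame 'sequence'.
    public void Post(long sequence, Image<Bgr, Byte> frame, Action<Image<Bgr, Byte>> display)
    {
        lock (sync)
        {
            pending[sequence] = frame;
            // Flush every frame that is now contiguous with the last one shown.
            Image<Bgr, Byte> ready;
            while (pending.TryGetValue(nextToShow, out ready))
            {
                display(ready);
                pending.Remove(nextToShow);
                nextToShow++;
            }
        }
    }
}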

Get DeviceContext of Entire Screen with Multiple Monitors

I need to draw a line (with the mouse) over everything with C#. I can get a Graphics object of the desktop window by using P/Invoke:
DesktopGraphics = Graphics.FromHdc(GetDC(IntPtr.Zero));
However, anything I draw using this graphics object is only showing on the left monitor, and nothing on the right monitor. It doesn't fail or anything, it just doesn't show.
After I create the Graphics object, it shows the visible clip region to be 1680x1050, which is the resolution of my left monitor. I can only assume that it's only getting a device context for the left monitor. Is there a way to get the device context for both (or any number of) monitors?
EDIT 3/7/2009:
Additional information about the fix I used.
I used the fix provided by colithium to come up with the following code for creating a graphics object for each monitor as well as a way to store the offset so that I can translate global mouse points to valid points on the graphics surface.
private void InitializeGraphics()
{
    // Create graphics for each display using compatibility mode
    CompatibilitySurfaces = Screen.AllScreens.Select(s => new CompatibilitySurface()
    {
        SurfaceGraphics = Graphics.FromHdc(CreateDC(null, s.DeviceName, null, IntPtr.Zero)),
        Offset = new Size(s.Bounds.Location)
    }).ToArray();
}

private class CompatibilitySurface : IDisposable
{
    public Graphics SurfaceGraphics = null;
    public Size Offset = default(Size);

    public PointF[] OffsetPoints(PointF[] Points)
    {
        return Points.Select(p => PointF.Subtract(p, Offset)).ToArray();
    }

    public void Dispose()
    {
        if (SurfaceGraphics != null)
            SurfaceGraphics.Dispose();
    }
}

[DllImport("gdi32.dll")]
static extern IntPtr CreateDC(string lpszDriver, string lpszDevice, string lpszOutput, IntPtr lpInitData);
Here is a link to another person that had the same problem. It was solved with a call to:
CreateDC(TEXT("DISPLAY"),NULL,NULL,NULL)
which will return a DC to all monitors.
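Using the CreateDC P/Invoke declared in the question, the equivalent call from C# would be roughly:

// Rough C# equivalent of CreateDC(TEXT("DISPLAY"), NULL, NULL, NULL);
// the linked answer says the resulting DC spans all monitors.
IntPtr hdc = CreateDC("DISPLAY", null, null, IntPtr.Zero);
using (Graphics g = Graphics.FromHdc(hdc))
{
    // draw across the entire virtual screen here
}
// Release the DC when done, e.g. via a gdi32 DeleteDC P/Invoke:
// [DllImport("gdi32.dll")] static extern bool DeleteDC(IntPtr hdc);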
The following URL about EnumDisplayMonitors may solve your problem:
MSDN
To retrieve information about all of the display monitors, use code like this:
EnumDisplayMonitors(NULL, NULL, MyInfoEnumProc, 0);
One more URL is given at MSJ.
