Get DeviceContext of Entire Screen with Multiple Monitors - C#

I need to draw a line (with the mouse) over everything with C#. I can get a Graphics object of the desktop window by using P/Invoke:
DesktopGraphics = Graphics.FromHdc(GetDC(IntPtr.Zero));
However, anything I draw using this Graphics object only shows on the left monitor; nothing shows on the right monitor. It doesn't fail or anything, it just doesn't show.
After I create the Graphics object, its visible clip region is 1680 x 1050, which is the resolution of my left monitor. I can only assume that it's only getting a device context for the left monitor. Is there a way to get the device context for both (or any number of) monitors?
EDIT 3/7/2009:
Additional information about the fix I used.
I used the fix provided by colithium to come up with the following code for creating a graphics object for each monitor as well as a way to store the offset so that I can translate global mouse points to valid points on the graphics surface.
private void InitializeGraphics()
{
    // Create graphics for each display using compatibility mode
    CompatibilitySurfaces = Screen.AllScreens.Select(s => new CompatibilitySurface()
    {
        SurfaceGraphics = Graphics.FromHdc(CreateDC(null, s.DeviceName, null, IntPtr.Zero)),
        Offset = new Size(s.Bounds.Location)
    }).ToArray();
}
private class CompatibilitySurface : IDisposable
{
    public Graphics SurfaceGraphics = null;
    public Size Offset = default(Size);

    public PointF[] OffsetPoints(PointF[] Points)
    {
        return Points.Select(p => PointF.Subtract(p, Offset)).ToArray();
    }

    public void Dispose()
    {
        if (SurfaceGraphics != null)
            SurfaceGraphics.Dispose();
    }
}
[DllImport("gdi32.dll")]
static extern IntPtr CreateDC(string lpszDriver, string lpszDevice, string lpszOutput, IntPtr lpInitData);
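For completeness, here is a hedged usage sketch based on the fields above (the method name DrawGlobalLine and the pen are illustrative, not part of the original fix): each pair of global points is shifted by the screen's offset and drawn on that screen's surface.

private void DrawGlobalLine(PointF from, PointF to)
{
    using (var pen = new Pen(Color.Red, 3f))
    {
        foreach (var surface in CompatibilitySurfaces)
        {
            // translate global (virtual-desktop) coordinates into this screen's local coordinates
            PointF[] local = surface.OffsetPoints(new[] { from, to });
            surface.SurfaceGraphics.DrawLine(pen, local[0], local[1]);
        }
    }
}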

Here is a link to another person who had the same problem. It was solved with a call to:
CreateDC(TEXT("DISPLAY"),NULL,NULL,NULL)
which will return a DC to all monitors.
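In C# terms, and reusing the CreateDC P/Invoke declared above, a hedged sketch of that fix might look like this (the DeleteDC import is added here because a DC from CreateDC must be released with DeleteDC; whether the resulting Graphics really spans all monitors is exactly what the linked answer claims):

[DllImport("gdi32.dll")]
static extern bool DeleteDC(IntPtr hdc);

IntPtr hdc = CreateDC("DISPLAY", null, null, IntPtr.Zero);
using (Graphics desktop = Graphics.FromHdc(hdc))
{
    // coordinates are relative to the DC; a long line should cross monitor boundaries
    desktop.DrawLine(Pens.Red, 0, 0, 3000, 1000);
}
DeleteDC(hdc);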

The EnumDisplayMonitors function may also solve your problem (see MSDN).
To retrieve information about all of the display monitors, use code like this:
EnumDisplayMonitors(NULL, NULL, MyInfoEnumProc, 0);
There is another example in an MSJ article.
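A hedged C# translation of that call, for illustration (the RECT struct and delegate follow the documented signatures; the inline callback replaces MyInfoEnumProc and just prints each monitor rectangle):

[StructLayout(LayoutKind.Sequential)]
struct RECT { public int Left, Top, Right, Bottom; }

delegate bool MonitorEnumProc(IntPtr hMonitor, IntPtr hdcMonitor, ref RECT lprcMonitor, IntPtr dwData);

[DllImport("user32.dll")]
static extern bool EnumDisplayMonitors(IntPtr hdc, IntPtr lprcClip, MonitorEnumProc lpfnEnum, IntPtr dwData);

static void ListMonitors()
{
    MonitorEnumProc callback = (IntPtr hMon, IntPtr hdcMon, ref RECT r, IntPtr data) =>
    {
        Console.WriteLine("Monitor: ({0},{1}) - ({2},{3})", r.Left, r.Top, r.Right, r.Bottom);
        return true; // continue enumeration
    };
    EnumDisplayMonitors(IntPtr.Zero, IntPtr.Zero, callback, IntPtr.Zero);
}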

Related

Why is the desktop descriptor in GDI+ limited to one monitor?

I am trying to draw to the screen directly in C# via the Graphics object retrieved from Graphics.FromHwnd(IntPtr.Zero) but I am limited to the primary monitor for some reason.
Upon inspecting the Graphics object, I found that the VisibleClipBounds are limited to my first monitors resolution.
I have two 1920x1080 monitors, so I would expect that property to be 3840x1080.
Is there a way to solve my issue?
The code is really simple, I have a wrapper class that looks like this:
public class ScreenCropperDrawer
{
    private static Graphics screenGraphics;

    public static void FillRectangle(Brush brush, Rectangle rect)
    {
        if (screenGraphics == null)
        {
            screenGraphics = Graphics.FromHwnd(IntPtr.Zero);
        }

        screenGraphics.FillRectangle(brush, rect);
    }
}
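As a sanity check, the size the clip bounds ought to cover can be compared against SystemInformation.VirtualScreen, which reports the bounding rectangle of all monitors. A minimal sketch:

Rectangle virtualScreen = SystemInformation.VirtualScreen; // e.g. 3840x1080 for two 1920x1080 monitors side by side
using (Graphics g = Graphics.FromHwnd(IntPtr.Zero))
{
    Console.WriteLine("VisibleClipBounds: {0}", g.VisibleClipBounds);
    Console.WriteLine("VirtualScreen:     {0}", virtualScreen);
}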

Creating Windows 10 Transparency effects in c# form

How do you create the transparency effects that you see in windows 10? Something like this:
I have no clue how to approach this in C#. Thinking about it logically, I would take a snapshot of the desktop every time the form comes into focus, then blur it and place it at 0, 0 (screen-to-client coordinates). That doesn't seem very efficient. Any help? Again, I'm not an experienced C# programmer, so a detailed explanation would be much appreciated.
Edit: I saw some of the answers referring me to a page about alpha blending. This is not what I am looking for. I wanted to know how to create the blur that you see in the image; the rest I can figure out at my own pace.
Though all of the comments and the answers say it is not possible for WinForms, it definitely works for WinForms as well (since SetWindowCompositionAttribute can be called on any Win32 window handle):
internal enum AccentState
{
    ACCENT_DISABLED = 0,
    ACCENT_ENABLE_GRADIENT = 1,
    ACCENT_ENABLE_TRANSPARENTGRADIENT = 2,
    ACCENT_ENABLE_BLURBEHIND = 3,
    ACCENT_INVALID_STATE = 4
}

internal enum WindowCompositionAttribute
{
    WCA_ACCENT_POLICY = 19
}

[StructLayout(LayoutKind.Sequential)]
internal struct AccentPolicy
{
    public AccentState AccentState;
    public int AccentFlags;
    public int GradientColor;
    public int AnimationId;
}

[StructLayout(LayoutKind.Sequential)]
internal struct WindowCompositionAttributeData
{
    public WindowCompositionAttribute Attribute;
    public IntPtr Data;
    public int SizeOfData;
}

internal static class User32
{
    [DllImport("user32.dll")]
    internal static extern int SetWindowCompositionAttribute(IntPtr hwnd, ref WindowCompositionAttributeData data);
}
And then in your Form constructor:
public Form1()
{
    InitializeComponent();
    BackColor = Color.Black; // looks really bad with the default back color

    var accent = new AccentPolicy { AccentState = AccentState.ACCENT_ENABLE_BLURBEHIND };
    var accentStructSize = Marshal.SizeOf(accent);
    var accentPtr = Marshal.AllocHGlobal(accentStructSize);
    Marshal.StructureToPtr(accent, accentPtr, false);

    var data = new WindowCompositionAttributeData
    {
        Attribute = WindowCompositionAttribute.WCA_ACCENT_POLICY,
        SizeOfData = accentStructSize,
        Data = accentPtr
    };

    User32.SetWindowCompositionAttribute(Handle, ref data);
    Marshal.FreeHGlobal(accentPtr);
}
Result:
Windows Forms doesn't support AcrylicBrush so far; only UWP supports it.
But you can use the Win32 API SetWindowCompositionAttribute to simulate this behavior.
The SetWindowCompositionAttribute API
By calling the Windows internal API SetWindowCompositionAttribute, you can get a lightly blurred transparent window, but this transparency is much less than the AcrylicBrush.
How to implement it
Calling the SetWindowCompositionAttribute API is not very easy, so I've written a wrapper class for easier usage, but it's for WPF only.
I've written two posts talking about this:
https://walterlv.github.io/post/win10/2017/10/02/wpf-transparent-blur-in-windows-10.html (not in English)
3 Ways to create a window with blurring background on Windows 10 - walterlv
Other options
It's recommended to use AcrylicBrush with UWP; you can read Microsoft's documentation, Acrylic material - UWP app developer | Microsoft Docs, for more details about it.
You can get that by using a simple algorithm. I don't know whether it will work well or not; it's just a concept that came to mind when I saw your post (a rough sketch follows below):
Get the entire desktop image by using the CopyFromScreen method.
Draw that image into a new bitmap whose width and height match the screen resolution, e.g. Bitmap bm = new Bitmap(1920, 1080).
Learn how to blur a bitmap in C#; there are many blogs teaching how to blur a bitmap programmatically.
Blur the captured desktop image.
Put a PictureBox inside the form and draw the blurred bitmap into the PictureBox.
You can set FormBorderStyle.None to get rid of the old borders.
Keep the PictureBox size mode at Normal in order to show the full blurred image.
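Here is that sketch. It assumes a form with a PictureBox named pictureBox1 that fills the client area; pictureBox1 and the downscale-based "blur" (shrinking the screenshot and enlarging it again) are my own illustrative choices, and a proper box/Gaussian blur could be substituted.

private void ApplyFakeAcrylicBackground()
{
    Rectangle screen = Screen.PrimaryScreen.Bounds;

    // 1) capture the desktop
    var shot = new Bitmap(screen.Width, screen.Height);
    using (var g = Graphics.FromImage(shot))
        g.CopyFromScreen(screen.Location, Point.Empty, screen.Size);

    // 2) cheap blur: shrink, then enlarge with interpolation
    var small = new Bitmap(shot, screen.Width / 16, screen.Height / 16);
    var blurred = new Bitmap(screen.Width, screen.Height);
    using (var g = Graphics.FromImage(blurred))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        g.DrawImage(small, 0, 0, screen.Width, screen.Height);
    }

    // 3) show the part of the blurred desktop that lies behind the form
    FormBorderStyle = FormBorderStyle.None;
    pictureBox1.SizeMode = PictureBoxSizeMode.Normal;
    pictureBox1.Image = blurred.Clone(new Rectangle(PointToScreen(Point.Empty), ClientSize), blurred.PixelFormat);

    small.Dispose();
    shot.Dispose();
}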

How can I manage moving zoomed view in canon sdk c#

I am able to use canon sdk using this library found in codeproject
Canon EDSDK Library
I have done all of my requirements except one: moving the zoomed live view up/down/left/right. I can zoom, but I can't move the view to see the right place while adjusting the manual zoom.
I have searched and come across zoomRect, zoomPosition, zoomCoordinates..., but I don't know what they actually are or how to use them.
Any advice or code block will help a lot, with or without using this library.
You can use the property Evf_ZoomPosition with a Point struct to set the position of the zoom rectangle. Note that you set this property on the camera, but you get/read all live-view-related values from the live view frame.
The position you set is the upper left corner of the zoom rectangle and valid values are between
X:0, Y:0
and
X:CoordinateSystem.Width - ZoomRect.Width
Y:CoordinateSystem.Height - ZoomRect.Height
Reading the ZoomPosition isn't really necessary because ZoomRect X and Y are the same values.
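For example, a small helper that clamps a requested position to that range might look like this (System.Drawing types are used here purely for illustration; the SDK's own Point/Size types would be substituted):

// Clamp a requested zoom position to 0 .. CoordinateSystem - ZoomRect size,
// both taken from the live view frame as described above.
static Point ClampZoomPosition(Point requested, Size coordinateSystem, Size zoomRectSize)
{
    int x = Math.Max(0, Math.Min(requested.X, coordinateSystem.Width - zoomRectSize.Width));
    int y = Math.Max(0, Math.Min(requested.Y, coordinateSystem.Height - zoomRectSize.Height));
    return new Point(x, y);
}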
I have finally found an answer.
I used ZoomPosition to change the zoom rectangle position.
I used ZoomRect to get the zoom rectangle location and size.
Here is how I did that.
Use this method to set the zoom position of the camera. I have defined this method in camera.cs in the library:
public void SetZoomPositionSetting(PropertyID propID, Point value, int inParam = 0)
{
    CheckState();
    int size = Marshal.SizeOf(typeof(Point));
    ErrorCode err = CanonSDK.EdsSetPropertyData(CamRef, propID, inParam, size, value);
}
Send this data to the method from anywhere in your code in order to change the zoom position:
MainCamera.SetZoomPositionSetting(PropertyID.Evf_ZoomPosition, p);
p here is an EOSDigital.SDK.Point instance.
Here are the methods to get the zoom coordinates and zoom rect. I have defined these methods in camera.cs in the library:
private Rectangle GetEvfZoomRect(IntPtr imgRef)
{
    Rectangle rect = new Rectangle();
    ErrorCode err = CanonSDK.GetPropertyData(imgRef, PropertyID.Evf_ZoomRect, 0, out rect);
    if (err == ErrorCode.OK)
        return rect;
    else
        return new Rectangle();
}

private Size GetEvfCoord_Size(IntPtr imgRef)
{
    Size size = new Size();
    ErrorCode err = CanonSDK.GetPropertyData(imgRef, PropertyID.Evf_CoordinateSystem, 0, out size);
    if (err == ErrorCode.OK)
        return size;
    else
        return new Size();
}
You need to call these methods within the DownloadEvf() method in camera.cs, just after getting evfImageRef from
CanonSDK.EdsDownloadEvfImage(CamRef, evfImageRef);
Once you have evfImageRef with the image data, you can call the get methods using evfImageRef as imgRef. You can get the zoom position the same way.
Don't forget to rebuild the library.
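A hedged sketch of where those calls might sit (the actual DownloadEvf() body in the library will differ; this only illustrates the ordering described above):

// inside DownloadEvf(), after the live view frame has been downloaded:
ErrorCode err = CanonSDK.EdsDownloadEvfImage(CamRef, evfImageRef);
if (err == ErrorCode.OK)
{
    Rectangle zoomRect = GetEvfZoomRect(evfImageRef);
    Size coordinateSystem = GetEvfCoord_Size(evfImageRef);
    // use zoomRect and coordinateSystem, e.g. to clamp the next Evf_ZoomPosition
}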

Capturing all Screens with DirectX GetFrontBufferData

I'm trying to create a Screenshot of all Screens on my PC. In the past I've been using the GDI Method, but due to performance issues I'm trying the DirectX way.
I can take a Screenshot of a single Screen without issues, with a code like this:
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;
using System.Windows.Forms;
using System.Drawing;
class Capture : Form
{
    private Device device;
    private Surface surface;

    public Capture()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;

        device = new Device(0, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}
(Let's ignore deleting the Bitmap from memory for this question.)
With that Code I can take a Screenshot of my Primary Screen. Changing the first parameter of the Device constructor to a different number corresponds to a different Screen. If I have 3 Screens and I pass 2 as a parameter, I get a Screenshot of my third Screen.
The issue I have is how to handle capturing all Screens. I came up with the following:
class CaptureScreen : Form
{
    private int index;
    private Screen screen;
    private Device device;
    private Surface surface;

    public Rectangle ScreenBounds { get { return screen.Bounds; } }
    public Device Device { get { return device; } }

    public CaptureScreen(int index, Screen screen, PresentParameters p)
    {
        this.screen = screen; this.index = index;
        device = new Device(index, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(screen.Bounds.Width, screen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}
class CaptureDirectX : Form
{
    private CaptureScreen[] screens;
    private int width = 0;
    private int height = 0;

    public CaptureDirectX()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;

        screens = new CaptureScreen[Screen.AllScreens.Length];
        for (int i = 0; i < Screen.AllScreens.Length; i++)
        {
            screens[i] = new CaptureScreen(i, Screen.AllScreens[i], p);

            // reset previous devices
            if (i > 0)
            {
                for (int j = 0; j < i; j++)
                {
                    screens[j].Device.Reset(p);
                }
            }

            width += Screen.AllScreens[i].Bounds.Width;
            if (Screen.AllScreens[i].Bounds.Height > height)
            {
                height = Screen.AllScreens[i].Bounds.Height;
            }
        }
    }

    public Bitmap Frame()
    {
        Bitmap result = new Bitmap(width, height);
        using (var g = Graphics.FromImage(result))
        {
            for (int i = 0; i < screens.Length; i++)
            {
                Bitmap frame = screens[i].Frame();
                g.DrawImage(frame, screens[i].ScreenBounds);
            }
        }
        return result;
    }
}
As you can see, I iterate through the available Screens and create multiple devices and surfaces in a separate class. But calling Frame() of the CaptureDirectX class throws the following error:
An unhandled exception of type 'Microsoft.DirectX.Direct3D.InvalidCallException' occurred in Microsoft.DirectX.Direct3D.dll
At the line
device.GetFrontBufferData(0, surface);
I've been researching this a bit but didn't have a whole lot of success. I'm not really sure what the issue is.
I've found a link that offers a solution involving resetting the Device objects. But as you can see in my code above, I've been trying to reset all previously created Device objects, sadly without success.
So my questions are:
Is what I'm trying to achieve even possible through this method (i.e. GetFrontBufferData) ?
What am I doing wrong? What am I missing?
Do you see any performance issues when capturing the screen at a high rate, say 30 fps? (Capturing a single screen with a target of 30 fps gave me a rate of about 25 - 30 fps, compared with the GDI method, which sometimes sinks to about 15 fps.)
FYI it's a WPF application, i.e. .NET 4.5
Edit: I should mention that I'm aware of IDXGI_DesktopDuplication but sadly it doesn't fit my requirements. As far as I know, that API is only available from Windows 8 onwards, but I'm trying to get a solution that works from Windows 7 onwards because of my clients.
Well, in the end the solution was something completely different. The System.Windows.Forms.Screen class doesn't play nicely with the DirectX classes. Why? Because the indexes don't match up. The first object in AllScreens does not necessarily have to be index 0 in the Device instantiation.
Now usually this isn't a problem, except when you have a "strange" monitor setup like mine. On the desk I have 3 screens: one vertical (1200x1920), one horizontal (1920x1200) and another horizontal laptop screen (1920x1080).
What happened in my case: the first object in AllScreens was the vertical monitor on the left. I try to create a device for index 0 with width 1200 and height 1920. Index 0 corresponds to my main monitor though, i.e. the horizontal monitor in the middle. So I'm essentially going out of the screen bounds with my instantiation. The instantiation doesn't throw an exception, and at some point later I try to read the front buffer data. Bam, exception, because I'm trying to take a 1200x1920 screenshot of a monitor that's 1920x1200.
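A hedged sketch of how the screens could be matched to adapter ordinals instead of assuming AllScreens[i] is adapter i. The property names (Manager.Adapters, AdapterInformation.Adapter, AdapterInformation.Information.DeviceName) are from Managed DirectX 1.1 and may differ in other wrappers; System.Linq and System.Collections.Generic are assumed.

private CaptureScreen[] CreateCaptureScreens(PresentParameters p)
{
    var list = new List<CaptureScreen>();
    foreach (AdapterInformation adapter in Manager.Adapters)
    {
        // Screen.DeviceName and the adapter's DeviceName both look like @"\\.\DISPLAY1"
        Screen screen = Screen.AllScreens.FirstOrDefault(s => s.DeviceName == adapter.Information.DeviceName);
        if (screen != null)
            list.Add(new CaptureScreen(adapter.Adapter, screen, p));
    }
    return list.ToArray();
}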
Sadly, even after I got this working, the performance was not good. A single frame of all 3 monitors takes about 300 to 500 ms. Even with a single monitor, the execution time was around 100 ms. Not good enough for my use case.
I didn't get the back buffer to work either; it just produces black images.
I went back to the GDI method and enhanced it by only updating specific chunks of the bitmap on each Frame() call. Say you want to capture a 1920x1200 region: it gets cut into 480x300 rectangles, and only specific chunks are updated per call (a rough sketch of one way to do this follows below).
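Here is that sketch. The 480x300 tile size and 1920x1200 region come from the description above; the hash-based change detection is my own assumption about how "only updating specific chunks" could be decided, not the original code. It assumes System.Drawing and System.Drawing.Imaging.

class ChunkedGdiCapture
{
    private readonly Rectangle _region = new Rectangle(0, 0, 1920, 1200);
    private readonly Size _tile = new Size(480, 300);
    private readonly Bitmap _composite;
    private readonly byte[][] _tileHashes;

    public ChunkedGdiCapture()
    {
        _composite = new Bitmap(_region.Width, _region.Height, PixelFormat.Format32bppArgb);
        _tileHashes = new byte[(_region.Width / _tile.Width) * (_region.Height / _tile.Height)][];
    }

    public Bitmap Frame()
    {
        using (var md5 = System.Security.Cryptography.MD5.Create())
        using (var tileBmp = new Bitmap(_tile.Width, _tile.Height, PixelFormat.Format32bppArgb))
        using (var tileG = Graphics.FromImage(tileBmp))
        using (var compositeG = Graphics.FromImage(_composite))
        {
            int index = 0;
            for (int y = 0; y < _region.Height; y += _tile.Height)
            {
                for (int x = 0; x < _region.Width; x += _tile.Width, index++)
                {
                    // grab one tile from the screen
                    tileG.CopyFromScreen(_region.X + x, _region.Y + y, 0, 0, _tile);

                    // hash the raw pixels to detect changes cheaply
                    byte[] hash = HashPixels(tileBmp, md5);
                    if (_tileHashes[index] != null && SameHash(hash, _tileHashes[index]))
                        continue; // tile unchanged, keep the previous pixels

                    _tileHashes[index] = hash;
                    compositeG.DrawImageUnscaled(tileBmp, x, y);
                }
            }
        }
        return _composite;
    }

    private static byte[] HashPixels(Bitmap bmp, System.Security.Cryptography.MD5 md5)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        try
        {
            var bytes = new byte[data.Stride * data.Height];
            System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
            return md5.ComputeHash(bytes);
        }
        finally { bmp.UnlockBits(data); }
    }

    private static bool SameHash(byte[] a, byte[] b)
    {
        for (int i = 0; i < a.Length; i++)
            if (a[i] != b[i]) return false;
        return true;
    }
}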

Experiment on displaying a Bitmap retrieved from a camera on a Picturebox

In my code I retrieve frames from a camera via a pointer to an unmanaged object, perform some calculations on them, and then display the result on a PictureBox control.
Before I go further in this application with all the details, I want to be sure that the base code for this process is good.
In particular I would like to:
- keep execution time minimal and avoid unnecessary operations, such as copying more images than necessary; I want to keep only essential operations
- understand whether a delay in the calculation process on each frame could have detrimental effects on the way images are shown (i.e. something other than what I expect gets displayed) or whether some images are skipped
- prevent more serious errors, such as ones due to memory or thread management, or to image display.
For this purpose, I set up a few experimental lines of code (below), but I'm not able to explain the results of what I found. If you have the OpenCV binaries you can try it yourself.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Threading;
public partial class FormX : Form
{
    private delegate void setImageCallback();

    Bitmap _bmp;
    Bitmap _bmp_draw;
    bool _exit;
    double _x;
    IntPtr _ImgBuffer;
    bool buffercopy;
    bool copyBitmap;
    bool refresh;

    public FormX()
    {
        InitializeComponent();
        _x = 10.1;

        // set experimental parameters
        buffercopy = false;
        copyBitmap = false;
        refresh = true;
    }

    private void buttonStart_Click(object sender, EventArgs e)
    {
        Thread camThread = new Thread(new ThreadStart(Cycle));
        camThread.Start();
    }

    private void buttonStop_Click(object sender, EventArgs e)
    {
        _exit = true;
    }
    private void Cycle()
    {
        _ImgBuffer = IntPtr.Zero;
        _exit = false;

        IntPtr vcap = cvCreateCameraCapture(0);
        while (!_exit)
        {
            IntPtr frame = cvQueryFrame(vcap);
            if (buffercopy)
            {
                UnmanageCopy(frame);
                _bmp = SharedBitmap(_ImgBuffer);
            }
            else
            {
                _bmp = SharedBitmap(frame);
            }

            // make calculations
            int N = 1000000; /*1000000*/
            for (int i = 0; i < N; i++)
                _x = Math.Sin(0.999999 * _x);

            ShowFrame();
        }

        cvReleaseImage(ref _ImgBuffer);
        cvReleaseCapture(ref vcap);
    }
    private void ShowFrame()
    {
        if (pbCam.InvokeRequired)
        {
            this.Invoke(new setImageCallback(ShowFrame));
        }
        else
        {
            Pen RectangleDtPen = new Pen(Color.Azure, 3);

            if (copyBitmap)
            {
                if (_bmp_draw != null) _bmp_draw.Dispose();
                //_bmp_draw = new Bitmap(_bmp); // deep copy
                _bmp_draw = _bmp.Clone(new Rectangle(0, 0, _bmp.Width, _bmp.Height), _bmp.PixelFormat);
            }
            else
            {
                _bmp_draw = _bmp; // add reference to the same object
            }

            Graphics g = Graphics.FromImage(_bmp_draw);
            String drawString = _x.ToString();
            Font drawFont = new Font("Arial", 56);
            SolidBrush drawBrush = new SolidBrush(Color.Red);
            PointF drawPoint = new PointF(10.0F, 10.0F);
            g.DrawString(drawString, drawFont, drawBrush, drawPoint);
            drawPoint = new PointF(10.0F, 300.0F);
            g.DrawString(drawString, drawFont, drawBrush, drawPoint);
            g.DrawRectangle(RectangleDtPen, 12, 12, 200, 400);
            g.Dispose();

            pbCam.Image = _bmp_draw;
            if (refresh) pbCam.Refresh();
        }
    }
    public void UnmanageCopy(IntPtr f)
    {
        if (_ImgBuffer == IntPtr.Zero)
            _ImgBuffer = cvCloneImage(f);
        else
            cvCopy(f, _ImgBuffer, IntPtr.Zero);
    }

    // only works with 3 channel images from camera! (to keep code minimal)
    public Bitmap SharedBitmap(IntPtr ipl)
    {
        // gets unmanaged data from pointer to IplImage:
        IntPtr scan0;
        int step;
        Size size;
        cvGetRawData(ipl, out scan0, out step, out size);
        return new Bitmap(size.Width, size.Height, step, PixelFormat.Format24bppRgb, scan0);
    }
    // based on older version of OpenCv. Change dll name if different
    [DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr cvCreateCameraCapture(int index);

    [DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
    public static extern void cvReleaseCapture(ref IntPtr capture);

    [DllImport("opencv_highgui246", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr cvQueryFrame(IntPtr capture);

    [DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
    public static extern void cvGetRawData(IntPtr arr, out IntPtr data, out int step, out Size roiSize);

    [DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
    public static extern void cvCopy(IntPtr src, IntPtr dst, IntPtr mask);

    [DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr cvCloneImage(IntPtr src);

    [DllImport("opencv_core246", CallingConvention = CallingConvention.Cdecl)]
    public static extern void cvReleaseImage(ref IntPtr image);
}
Results [Core 2 Duo T6600, 2.2 GHz]:
A. buffercopy = false; copyBitmap = false; refresh = false;
This is the simplest configuration. Each frame is retrieved in turn, operations are performed (in reality they would be based on the frame itself; here they are just dummy calculations), then the result of the calculations is printed on top of the image, and finally it is displayed in a PictureBox.
OpenCv documentation says:
OpenCV 1.x functions cvRetrieveFrame and cv.RetrieveFrame return the image stored inside the video capturing structure. It is not allowed to modify or release the image! You can copy the frame using cvCloneImage() and then do whatever you want with the copy.
But this doesn’t prevent us from doing experiments.
If the calculations are not intense (a low number of iterations, N), everything is just fine, and the fact that we manipulate the image buffer owned by the unmanaged frame retriever doesn't pose a problem here.
The reason is probably that they advise leaving the buffer untouched in case people would modify its structure (not just its values) or operate on it asynchronously without realizing it. Here we retrieve frames and modify their content in turn.
If N is increased (N = 1000000 or more), when the number of frames per second is not high, for example with artificial light and low exposure, everything seems OK, but after a while the video lags and the graphics drawn on it blink. With a higher frame rate the blinking appears from the beginning, even while the video is still fluid.
Is this because the mechanism that displays images on the control (or refreshes it, or whatever else) is somehow asynchronous, so that while the PictureBox is fetching its buffer of data it is being modified by the camera, erasing the graphics?
Or is there some other reason?
Why is the image lagged in that way? I would expect the delay due to the calculations either to have the effect of skipping the frames received by the camera while the calculations are still running, de facto only reducing the frame rate; or, alternatively, that all frames are received and the calculation delay makes the system process images captured minutes earlier, because the queue of images to process grows over time.
Instead, the observed behavior seems to be a hybrid of the two: there is a delay of a few seconds, but it does not seem to increase much as the capturing process goes on.
B. buffercopy = true; copyBitmap = false; refresh = false;
Here I make a deep copy of the buffer into a second buffer, following the advice of the OpenCv documentation.
Nothing changes. The second buffer doesn’t change its address in memory during the run.
C. buffercopy = false; copyBitmap = true; refresh = false;
Now the (deep) copy of the bitmap is made, allocating new memory every time.
The blinking effect is gone, but the lag still appears after a certain time.
D. buffercopy = false; copyBitmap = false; refresh = true;
As before.
Please help me explain these results!
If I may be so frank, it is a bit tedious to understand all the details of your questions, but let me make a few points to help you analyse your results.
In case A, you say you perform calculations directly on the buffer. The documentation says you shouldn't do this, so if you do, you can expect undefined results. OpenCV assumes you won't touch it, so it might do stuff like suddenly delete that part of memory, let some other app process it, etc. It might look like it works, but you can never know for sure, so don't do it *slaps your wrist* In particular, if your processing takes a long time, the camera might overwrite the buffer while you're in the middle of processing it.
The way you should do it is to copy the buffer before doing anything. This will give you a piece of memory that is yours to do with whatever you wish. You can create a Bitmap that refers to this memory, and manually free the memory when you no longer need it.
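A minimal sketch of that advice, reusing the question's cvGetRawData import and assuming a 24-bit BGR frame of constant size (the RtlMoveMemory import and the CopyFrameToOwnBuffer name are added here for illustration):

[DllImport("kernel32.dll", EntryPoint = "RtlMoveMemory")]
static extern void CopyMemory(IntPtr dest, IntPtr src, UIntPtr count);

IntPtr _ownBuffer = IntPtr.Zero;

Bitmap CopyFrameToOwnBuffer(IntPtr frame)
{
    IntPtr scan0;
    int step;
    Size size;
    cvGetRawData(frame, out scan0, out step, out size);

    int byteCount = step * size.Height;
    if (_ownBuffer == IntPtr.Zero)
        _ownBuffer = Marshal.AllocHGlobal(byteCount); // free with Marshal.FreeHGlobal when done

    // deep-copy the pixels so OpenCV is free to reuse its internal buffer
    CopyMemory(_ownBuffer, scan0, (UIntPtr)(uint)byteCount);

    // the Bitmap wraps our own memory and stays valid until _ownBuffer is freed
    return new Bitmap(size.Width, size.Height, step, PixelFormat.Format24bppRgb, _ownBuffer);
}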
If your processing rate (frames processed per second) is less than the number of frames captured per second by the camera, you have to expect some frames will be dropped. If you want to show a live view of the processed images, it will lag and there's no simple way around it. If it is vital that your application processes a fluid video (e.g. this might be necessary if you're tracking an object), then consider storing the video to disk so you don't have to process in real-time. You can also consider multithreading to process several frames at once, but the live view would have a latency.
By the way, is there any particular reason why you're not using EmguCV? It has abstractions for the camera and a system that raises an event whenever the camera has captured a new frame. This way, you don't need to continuously call cvQueryFrame on a background thread.
I think that you still have a problem with your UnmanageCopy method in that you only clone the image the first time this is called and you subsequently copy it. I believe that you need to do a cvCloneImage(f) every time as copy performs only a shallow copy, not a deep copy as you seem to think.
