How can I move the zoomed live view in the Canon SDK with C#? - c#

I am able to use the Canon SDK via this library found on CodeProject:
Canon EDSDK Library
I have implemented all of my requirements except one: moving the zoomed live view up/down/left/right. I can zoom, but I can't move the view to see the right place while adjusting the manual zoom.
I have searched and come across ZoomRect, ZoomPosition, ZoomCoordinates... but I don't know what they actually are or how to use them.
Any advice or code block would help a lot, with or without this library.

You can use the property Evf_ZoomPosition with a Point struct to set the position of the zoom rectangle. Note that you set this property on the camera, but you read all live-view-related values from the live view frame.
The position you set is the upper-left corner of the zoom rectangle, and valid values are between
X: 0, Y: 0
and
X: CoordinateSystem.Width - ZoomRect.Width
Y: CoordinateSystem.Height - ZoomRect.Height
Reading the ZoomPosition isn't really necessary because ZoomRect's X and Y are the same values.
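For illustration, a minimal sketch of clamping a requested position to that range before setting it (the variable names are assumptions; the rectangle and coordinate system would be read from the live view frame as described in the answer below):
// Clamp the requested upper-left corner to the valid range.
int x = Math.Max(0, Math.Min(requestedX, coordinateSystem.Width - zoomRect.Width));
int y = Math.Max(0, Math.Min(requestedY, coordinateSystem.Height - zoomRect.Height));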

I have finally found an answer.
I used ZoomPosition to change the zoom rectangle, and ZoomRect to get the zoom rectangle's location and size.
Here is how I did it.
Use the following method to set the camera's zoom position. I defined it in Camera.cs in the library:
public void SetZoomPositionSetting(PropertyID propID, Point value, int inParam = 0)
{
    CheckState();
    int size = Marshal.SizeOf(typeof(Point));
    ErrorCode err = CanonSDK.EdsSetPropertyData(CamRef, propID, inParam, size, value);
    // Surface SDK errors instead of silently ignoring them.
    if (err != ErrorCode.OK)
        throw new InvalidOperationException($"EdsSetPropertyData failed: {err}");
}
Call this method from anywhere in your code to change the zoom position:
MainCamera.SetZoomPositionSetting(PropertyID.Evf_ZoomPosition, p);
p here is an EOSDigital.SDK.Point instance.
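For example (a sketch assuming the SDK's Point struct exposes public X and Y members):
Point p = new Point { X = 100, Y = 100 }; // upper-left corner of the zoom rectangle
MainCamera.SetZoomPositionSetting(PropertyID.Evf_ZoomPosition, p);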
Here are the methods to get the zoom coordinates and zoom rectangle. I defined these methods in Camera.cs in the library:
private Rectangle GetEvfZoomRect(IntPtr imgRef)
{
    Rectangle rect;
    ErrorCode err = CanonSDK.GetPropertyData(imgRef, PropertyID.Evf_ZoomRect, 0, out rect);
    // Fall back to an empty rectangle if the property could not be read.
    return err == ErrorCode.OK ? rect : new Rectangle();
}
private Size GetEvfCoord_Size(IntPtr imgRef)
{
    Size size;
    ErrorCode err = CanonSDK.GetPropertyData(imgRef, PropertyID.Evf_CoordinateSystem, 0, out size);
    return err == ErrorCode.OK ? size : new Size();
}
You need to call these methods within the DownloadEvf() method in Camera.cs, just after the live view frame has been downloaded with
CanonSDK.EdsDownloadEvfImage(CamRef, evfImageRef);
Once evfImageRef holds the image data, you can call the getter methods with evfImageRef as the imgRef argument.
You can read the ZoomPosition the same way.
Don't forget to rebuild the library.
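For orientation, a sketch of how those calls might sit inside DownloadEvf() (the real method body in the library differs; only the relevant lines are shown):
ErrorCode err = CanonSDK.EdsDownloadEvfImage(CamRef, evfImageRef);
if (err == ErrorCode.OK)
{
    // Read the current zoom rectangle and coordinate system from this frame,
    // e.g. to clamp the next Evf_ZoomPosition before sending it to the camera.
    Rectangle zoomRect = GetEvfZoomRect(evfImageRef);
    Size coordSystem = GetEvfCoord_Size(evfImageRef);
}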

Related

SkiaSharp Touch Bitmap Image

In the app I'm trying to develop, a key part is getting the position where the user has touched. At first I thought of using a tap gesture recognizer, but after a quick Google search I learned that it was useless (see here for an example).
Then I discovered SkiaSharp, and after learning how to use it, at least somewhat, I'm still not sure how to get the proper coordinates of a touch. Here are the sections of my project's code that are relevant to the problem.
Canvas Touch Function
private void canvasView_Touch(object sender, SKTouchEventArgs e)
{
    // Only carry on with this function if the image is already on screen.
    if (m_isImageDisplayed)
    {
        // Use switch to get what type of action occurred.
        switch (e.ActionType)
        {
            case SKTouchAction.Pressed:
                TouchImage(e.Location);
                // Update simply tries to draw a small square using double for loops.
                m_editedBm = Update(sender);
                // Refresh screen.
                (sender as SKCanvasView).InvalidateSurface();
                break;
            default:
                break;
        }
    }
}
Touch Image
private void TouchImage(SKPoint point)
{
    // Is the point in range of the canvas?
    if (point.X >= m_x && point.X <= (m_editedCanvasSize.Width + m_x) &&
        point.Y >= m_y && point.Y <= (m_editedCanvasSize.Height + m_y))
    {
        // Save the point for later and set the boolean to true so the algorithm can begin.
        m_clickPoint = point;
        m_updateAlgorithm = true;
    }
}
Here I'm just seeing, or TRYING to see, if the clicked point is in range of the image, and I made a separate SKSize variable to help. Ignore the boolean; it's not that important.
Update function (the function that attempts to draw ON the pressed point, so it's the most important):
public SKBitmap Update(object sender)
{
    // Create the default test color to replace current pixel colors in the bitmap.
    SKColor color = new SKColor(255, 255, 255);
    // Create a new surface with the current bitmap.
    using (var surface = new SKCanvas(m_editedBm))
    {
        /* According to this: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/finger-paint ,
           the points I have to start with are in Xamarin.Forms coordinates, but I need to translate
           them to SkiaSharp coordinates, which are in pixels. */
        Point pt = new Point((double)m_touchPoint.X, (double)m_touchPoint.Y);
        SKPoint newPoint = ConvertToPixel(pt);
        // Loop from the touch point up to a certain value (like x + 200) to get a "block" of altered pixels.
        for (int x = (int)newPoint.X; x < (int)newPoint.X + 200; ++x)
        {
            for (int y = (int)newPoint.Y; y < (int)newPoint.Y + 200; ++y)
            {
                // According to the x and y, change the color.
                m_editedBm.SetPixel(x, y, color);
            }
        }
        return m_editedBm;
    }
}
Here I'm THINKING that it'll start, you know, at the coordinate I pressed (and these coordinates have been confirmed to be within the range of the image thanks to the TouchImage function). When it gets the correct coordinates (or at least it SHOULD have done that), the square is drawn one "line" at a time. I have a game programming background, so this kind of thing sounds simple, but I can't believe I didn't get it right the first time.
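(The Update code calls a ConvertToPixel helper that isn't shown above; a typical implementation, following the conversion pattern from the linked finger-paint article, scales the Forms coordinates by the ratio of the canvas pixel size to the view size. The canvasView field name is an assumption:)
SKPoint ConvertToPixel(Point pt)
{
    // Scale device-independent Xamarin.Forms units to SkiaSharp pixels.
    return new SKPoint((float)(canvasView.CanvasSize.Width * pt.X / canvasView.Width),
                       (float)(canvasView.CanvasSize.Height * pt.Y / canvasView.Height));
}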
Also, I have another function that MIGHT prove worthwhile, because the original image is rotated before being put on screen. Why? Well, by default the image, after the picture is taken and then displayed, is rotated to the left. I had no idea why, but I corrected it with the following function:
// Just rotate the image because for some reason it's tilted 90 degrees to the left.
public static SKBitmap Rotate()
{
    using (var bitmap = m_bm)
    {
        // The new bitmap's width IS the old one's height.
        var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
        using (var surface = new SKCanvas(rotated))
        {
            surface.Translate(rotated.Width, 0.0f);
            surface.RotateDegrees(90);
            surface.DrawBitmap(bitmap, 0, 0);
        }
        return rotated;
    }
}
I'll keep reading and looking up what I might be doing wrong, but I'm grateful for any help.

Cropping rectangle around a face/coordinates

I'm looking for a way to calculate a rectangle (x, y, width & height) which can be used to crop an image around the coordinates of a selected face.
I have a 995x1000 image (https://tourspider.blob.core.windows.net/img/artists/original/947a0903-9b64-42a1-8179-108bab2a9e46.jpg) in which the center of the face is located at 492x325. I can find this information using various services, so even for multiple faces in an image I'm able to find the most prominent one, hence a single coordinate.
Now I need to make cropped images of various sizes from the source image (200x150, 200x200 & 750x250). I can't seem to work out how best to calculate a rectangle around the center coordinates while taking the edges of the image into account. The face should be as central as possible in the crop.
Even after experimenting with various services (https://www.microsoft.com/cognitive-services/en-us/computer-vision-api) the results are pretty poor, as the face, mainly in the 750x250 crop, is sometimes not even present.
I'm also experimenting with the ImageProcessor library (http://imageprocessor.org/), which lets you use anchors for resizing, but I can't get the desired result.
Does anybody have an idea how best to crop around predefined coordinates?

Using ImageProcessor I created the following solution. It is not yet perfect, but it goes a long way ;)
public static void StoreImage(byte[] image, int destinationWidth, int destinationHeight, Point anchor)
{
    using (var inStream = new MemoryStream(image))
    using (var imageFactory = new ImageFactory())
    {
        // Load the image in the image factory.
        imageFactory.Load(inStream);
        var originalSourceWidth = imageFactory.Image.Width;
        var originalSourceHeight = imageFactory.Image.Height;
        if (anchor.X > originalSourceWidth || anchor.Y > originalSourceHeight)
        {
            throw new Exception($"Invalid anchor point. Image: {originalSourceWidth}x{originalSourceHeight}. Anchor: {anchor.X}x{anchor.Y}.");
        }
        // Resize the image until the shortest side reaches the given dimension.
        // This maintains the aspect ratio of the original image.
        imageFactory.Resize(new ResizeLayer(new Size(destinationWidth, destinationHeight), ResizeMode.Min));
        var resizedSourceWidth = imageFactory.Image.Width;
        var resizedSourceHeight = imageFactory.Image.Height;
        // Adjust the anchor position; multiply before dividing so integer
        // division doesn't truncate the scale factor.
        var resizedAnchorX = anchor.X * resizedSourceWidth / originalSourceWidth;
        var resizedAnchorY = anchor.Y * resizedSourceHeight / originalSourceHeight;
        // Center the crop on the anchor, clamped to the image edges.
        var cropX = resizedAnchorX - destinationWidth / 2;
        cropX = Math.Max(0, Math.Min(cropX, resizedSourceWidth - destinationWidth));
        var cropY = resizedAnchorY - destinationHeight / 2;
        cropY = Math.Max(0, Math.Min(cropY, resizedSourceHeight - destinationHeight));
        imageFactory
            .Crop(new Rectangle(cropX, cropY, destinationWidth, destinationHeight))
            .Save($"{Guid.NewGuid()}.jpg");
    }
}
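A quick usage sketch with the face coordinates from the question (the file path is hypothetical):
// 995x1000 source with the face centered at 492x325, cropped to 200x200.
byte[] image = File.ReadAllBytes(@"C:\images\artist.jpg");
StoreImage(image, 200, 200, new Point(492, 325));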

How to apply barrel distortion (lens correction) with SharpDX?

I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything is working fine.
Now I want to implement lens correction (barrel distortion) and I'm looking for the fastest way to do it. I've read many sites about barrel distortion and I think the fastest way is to use a shader, but I'm very new to SharpDX (and have no knowledge of shaders) and I don't know how to apply a shader to my code. Most tutorials apply a shader to an object (like a cube), not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
    isInCapture = true;
    try
    {
        // init
        bool captureDone = false;
        bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
        // the capture needs some time
        for (int i = 0; !captureDone; i++)
        {
            try
            {
                // capture
                duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);
                // only for wait
                if (i > 0)
                {
                    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
                        device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
                    mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
                    mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
                        ImageLockMode.WriteOnly, bitmap.PixelFormat);
                    sourcePtr = mapSource.DataPointer;
                    destPtr = mapDest.Scan0;
                    // set x position offset to rect.x
                    int rowPitch = mapSource.RowPitch - offsetX;
                    // set pointer to y position
                    sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);
                    for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
                    {
                        // set pointer to x position
                        sourcePtr = IntPtr.Add(sourcePtr, offsetX);
                        // copy pixel row to the bitmap
                        Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
                        // increment pointers to the next line
                        sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
                        destPtr = IntPtr.Add(destPtr, mapDest.Stride);
                    }
                    bitmap.UnlockBits(mapDest);
                    device.ImmediateContext.UnmapSubresource(screenTexture, 0);
                    captureDone = true;
                }
                screenResource.Dispose();
                duplicatedOutput.ReleaseFrame();
            }
            catch // catch (SharpDXException e)
            {
                //if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
                //{
                //    throw e;
                //}
                return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
            }
        }
    }
    catch
    {
        return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
    }
    isInCapture = false;
    return bitmap;
}
It would be really great to get a little starting help from someone willing to help.
I've found some shaders on the internet, but they're written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I use those for DirectX (SharpDX) too?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (they should be fairly similar, though). The idea is that you'll have to load your "screenshot" into a texture buffer as an input to your fragment shader (pixel shader). Fragment shaders are deceptively easy to understand: it's just a piece of code (written in GLSL or HLSL) looking very much like a subset of C, to which a few math functions have been added (vector and matrix manipulation mostly), executed for every single pixel to be rendered.
The code should be fairly simple: you take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
vec2 uv; // the centered texture coordinate (roughly in [-1, 1]), computed before this snippet
/// Barrel Distortion ///
float d=length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r*cos(phi)+.5,r*sin(phi)+.5);
Here's a shadertoy link if you want to play with it and figure out how it works.
I have no idea how HLSL handles texture filtering (which pixel you'll get when using floating-point values for coordinates), but I'd put my money on bilinear filtering, which may very well give an unpleasant pixelated look to your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for the fisheye effect. Of course, the two are pretty much identical, barrel distortion being only on one axis. I believe what you need is the fisheye effect though; it's what is commonly used for HMDs, if I'm not mistaken.
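As a starting point, here is a rough, untested sketch of compiling and binding a pixel shader with SharpDX; the file name "distort.hlsl", the entry point "PS", and the gpuScreenTexture variable are assumptions, and the captured frame must be in a GPU-readable (non-staging) texture:
using SharpDX.D3DCompiler;
using SharpDX.Direct3D11;

// Compile the HLSL pixel shader (entry point "PS", shader model 4.0).
var psBytecode = ShaderBytecode.CompileFromFile("distort.hlsl", "PS", "ps_4_0");
var pixelShader = new PixelShader(device, psBytecode);

// Expose the captured frame to the shader and pick a linear sampler.
var srv = new ShaderResourceView(device, gpuScreenTexture);
var sampler = new SamplerState(device, new SamplerStateDescription
{
    Filter = Filter.MinMagMipLinear,
    AddressU = TextureAddressMode.Clamp,
    AddressV = TextureAddressMode.Clamp,
    AddressW = TextureAddressMode.Clamp,
    ComparisonFunction = Comparison.Never,
    MinimumLod = 0,
    MaximumLod = float.MaxValue,
});

var context = device.ImmediateContext;
context.PixelShader.Set(pixelShader);
context.PixelShader.SetShaderResource(0, srv);
context.PixelShader.SetSampler(0, sampler);
// ...then render a full-screen quad so the shader runs for every output pixel.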

Finding an Image Inside Another Image

I'm trying to build an application that solves a puzzle (I'm trying to develop a graph algorithm), and I don't want to enter the sample input by hand all the time.
Edit: I'm not trying to build a game. I'm trying to build an agent that plays the game "SpellSeeker".
Say I have an image (see attachment) on the screen with numbers in it, I know the locations of the boxes, and I have the exact images for these numbers. What I want to do is simply tell which image (number) is in the corresponding box.
So I guess I need to implement
bool isImageInsideImage(Bitmap numberImage, Bitmap Portion_Of_ScreenCap) or something like that.
What I've tried (using the AForge libraries) is:
public static bool Contains(this Bitmap template, Bitmap bmp)
{
    const Int32 divisor = 4;
    const Int32 epsilon = 10;
    ExhaustiveTemplateMatching etm = new ExhaustiveTemplateMatching(0.9f);
    TemplateMatch[] tm = etm.ProcessImage(
        new ResizeNearestNeighbor(template.Width / divisor, template.Height / divisor).Apply(template),
        new ResizeNearestNeighbor(bmp.Width / divisor, bmp.Height / divisor).Apply(bmp)
    );
    if (tm.Length == 1)
    {
        Rectangle tempRect = tm[0].Rectangle;
        if (Math.Abs(bmp.Width / divisor - tempRect.Width) < epsilon
            &&
            Math.Abs(bmp.Height / divisor - tempRect.Height) < epsilon)
        {
            return true;
        }
    }
    return false;
}
But it returns false when searching for a black dot in this image.
How can I implement this?
I'm answering my own question since I've found the solution.
This worked for me:
System.Drawing.Bitmap sourceImage = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\1.jpg");
System.Drawing.Bitmap template = (Bitmap)Bitmap.FromFile(@"C:\SavedBMPs\2.jpg");
// create template matching algorithm's instance
// (set similarity threshold to 92.1%)
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.921f);
// find all matchings with the similarity specified above
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);
// highlight found matchings
BitmapData data = sourceImage.LockBits(
    new Rectangle(0, 0, sourceImage.Width, sourceImage.Height),
    ImageLockMode.ReadWrite, sourceImage.PixelFormat);
foreach (TemplateMatch m in matchings)
{
    Drawing.Rectangle(data, m.Rectangle, Color.White);
    MessageBox.Show(m.Rectangle.Location.ToString());
    // do something else with the matching
}
sourceImage.UnlockBits(data);
The only problem was that it found all 58 boxes in said game. But changing the value 0.921f to 0.98f made it perfect, i.e. it finds only the specified number's image (template).
Edit: I actually have to enter different similarity thresholds for different pictures. I found the optimized values by trial and error; in the end I have a function like
float getSimilarityThreshold(int number)
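A hypothetical shape for such a lookup (the threshold values here are placeholders, not the tuned ones):
float getSimilarityThreshold(int number)
{
    switch (number)
    {
        case 0: return 0.98f;   // placeholder; each digit gets its own tuned value
        default: return 0.921f; // fallback threshold
    }
}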
A better approach is to build a custom class which holds all the information you need, instead of relying on the image itself.
For example:
public class MyTile
{
    public Bitmap TileBitmap;
    public Location CurrentPosition;
    public int Value;
}
This way you can "move around" the tile class and read the value from the Value field instead of analyzing the image. You just draw whatever image the class holds at the position it's currently holding.
Your tiles can be held in a list like:
private List<MyTile> MyTiles = new List<MyTile>();
Extend the class as needed (and remember to Dispose those images when they are no longer needed).
If you really want to see if there is an image inside another image, you can check out this extension I wrote for another post (although it's in VB code):
Vb.Net Check If Image Existing In Another Image

Get DeviceContext of Entire Screen with Multiple Monitors

I need to draw a line (with the mouse) over everything with C#. I can get a Graphics object of the desktop window by using P/Invoke:
DesktopGraphics = Graphics.FromHdc(GetDC(IntPtr.Zero));
However, anything I draw using this Graphics object only shows on the left monitor, and nothing shows on the right monitor. It doesn't fail or anything; it just doesn't show.
After I create the Graphics object, it reports a visible clip region of 1680 x 1050, which is the resolution of my left monitor. I can only assume that it's only getting a device context for the left monitor. Is there a way to get the device context for both (or any number of) monitors?
EDIT 3/7/2009:
Additional information about the fix I used.
I used the fix provided by colithium to come up with the following code, which creates a Graphics object for each monitor, as well as a way to store the offset so that I can translate global mouse points to valid points on the Graphics surface.
private void InitializeGraphics()
{
    // Create graphics for each display using compatibility mode
    CompatibilitySurfaces = Screen.AllScreens.Select(s => new CompatibilitySurface()
    {
        SurfaceGraphics = Graphics.FromHdc(CreateDC(null, s.DeviceName, null, IntPtr.Zero)),
        Offset = new Size(s.Bounds.Location)
    }).ToArray();
}

private class CompatibilitySurface : IDisposable
{
    public Graphics SurfaceGraphics = null;
    public Size Offset = default(Size);

    public PointF[] OffsetPoints(PointF[] Points)
    {
        return Points.Select(p => PointF.Subtract(p, Offset)).ToArray();
    }

    public void Dispose()
    {
        if (SurfaceGraphics != null)
            SurfaceGraphics.Dispose();
    }
}
[DllImport("gdi32.dll")]
static extern IntPtr CreateDC(string lpszDriver, string lpszDevice, string lpszOutput, IntPtr lpInitData);
Here is a link to another person who had the same problem. It was solved with a call to:
CreateDC(TEXT("DISPLAY"), NULL, NULL, NULL)
which will return a DC covering all monitors.
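For reference, a minimal C# sketch of that same call, reusing the CreateDC declaration above (DeleteDC would need its own gdi32 P/Invoke; error handling is omitted):
// Passing "DISPLAY" as the driver name yields a DC spanning all monitors.
IntPtr hdc = CreateDC("DISPLAY", null, null, IntPtr.Zero);
using (Graphics g = Graphics.FromHdc(hdc))
{
    g.DrawLine(Pens.Red, 0, 0, 3000, 1000); // a line crossing both monitors
}
DeleteDC(hdc);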
The following MSDN page on EnumDisplayMonitors may solve your problem:
MSDN
To retrieve information about all of the display monitors, use code like this:
EnumDisplayMonitors(NULL, NULL, MyInfoEnumProc, 0);
One more example is given at MSJ.
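If it helps, here is a hedged C# sketch of calling EnumDisplayMonitors via P/Invoke (the signatures follow the Win32 documentation; the callback just collects each monitor's bounds):
[DllImport("user32.dll")]
static extern bool EnumDisplayMonitors(IntPtr hdc, IntPtr lprcClip,
    MonitorEnumProc lpfnEnum, IntPtr dwData);

delegate bool MonitorEnumProc(IntPtr hMonitor, IntPtr hdcMonitor,
    ref RECT lprcMonitor, IntPtr dwData);

[StructLayout(LayoutKind.Sequential)]
struct RECT { public int Left, Top, Right, Bottom; }

static readonly List<RECT> Monitors = new List<RECT>();

static bool CollectMonitor(IntPtr hMonitor, IntPtr hdcMonitor,
    ref RECT bounds, IntPtr dwData)
{
    Monitors.Add(bounds); // one RECT per monitor, in virtual-screen coordinates
    return true;          // keep enumerating
}

// Usage:
// EnumDisplayMonitors(IntPtr.Zero, IntPtr.Zero, CollectMonitor, IntPtr.Zero);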
