Win2D Keystone Correction - C#

I'm trying to use Win2D/C# to project an image with an overhead projector, and I need to use a Win2D effect to do keystone correction (pre-warp the image) as the final step.
Basically I'm drawing a rectangle, then trying to use a Transform3DEffect to warp it before rendering. I can't figure out what matrix transformation combination to use to get it to work. Doing a full camera projection seems like overkill, since I only need warping in one direction (see image below). What transforms should I use?
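For reference, a Transform3DEffect warp of this kind would look roughly like the sketch below. This is illustrative only: the 0.001f perspective term and the direction of the taper are guesses that need tuning, and sourceImage/sourceSize stand in for whatever is being pre-warped.
var w = (float)sourceSize.Width;
var h = (float)sourceSize.Height;
// center the image on the origin so the taper is symmetric about the
// vertical center line, apply the perspective term, then move it back
var matrix =
    Matrix4x4.CreateTranslation(-w / 2, -h / 2, 0) *
    new Matrix4x4(
        1, 0, 0, 0,
        0, 1, 0, 0.001f, // M24: w grows with y, so lower rows shrink
        0, 0, 1, 0,
        0, 0, 0, 1) *
    Matrix4x4.CreateTranslation(w / 2, h / 2, 0);
var keystone = new Transform3DEffect
{
    Source = sourceImage,
    TransformMatrix = matrix,
};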

Using an image like the following can get you a similar effect.
https://i.stack.imgur.com/5QnEm.png
I am unsure what causes the "bending".
Here is the code for creating the displacement map (with GDI+, because it can set pixels fast).
You can find the LockBitmap class here.
static void DrawDisplacement(int width, int height, LockBitmap lbmp)
{
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // horizontal offset: largest near the top corners, zero along
            // the bottom edge and the vertical center line
            int roff = (int)((((width >> 1) - x) / (float)(width >> 1)) * ((height - y) / (float)height) * 127);
            int goff = 0;
            // 127 is the neutral value (no displacement); red drives X and
            // green drives Y in the DisplacementMapEffect below
            lbmp.SetPixel(x, y, Color.FromArgb(127 - roff, 127 - goff, 0));
        }
    }
}
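A minimal usage sketch (assuming the linked LockBitmap wrapper with its LockBits/UnlockBits methods; the size and file name here are arbitrary):
var bmp = new Bitmap(256, 256, PixelFormat.Format32bppArgb);
var lbmp = new LockBitmap(bmp);
lbmp.LockBits();
DrawDisplacement(bmp.Width, bmp.Height, lbmp);
lbmp.UnlockBits();
// save and later load this file as displacementImage in the Win2D code below
bmp.Save("displacement.png", ImageFormat.Png);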
Drawing in Win2D looks something like this, where displacementImage is the loaded file and offscreen is a CanvasRenderTarget on which I drew the grid.
// Scaling for fitting the displacement map to the content
ICanvasImage scaledDisplacement = new Transform2DEffect
{
    BorderMode = EffectBorderMode.Hard,
    Source = displacementImage,
    TransformMatrix = Matrix3x2.CreateScale((float)(sender.Size.Width / displacementImage.Bounds.Width), (float)(sender.Size.Height / displacementImage.Bounds.Height)),
    Sharpness = 1f,
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
    InterpolationMode = CanvasImageInterpolation.HighQualityCubic,
};
// Blurring, for a better result
ICanvasImage displacement = new GaussianBlurEffect
{
    BorderMode = EffectBorderMode.Hard,
    Source = scaledDisplacement,
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
    BlurAmount = 2,
    Optimization = EffectOptimization.Quality,
};
ICanvasImage graphicsEffect = new DisplacementMapEffect
{
    Source = offscreen,
    Displacement = displacement,
    XChannelSelect = EffectChannelSelect.Red,
    YChannelSelect = EffectChannelSelect.Green,
    Amount = 800, // change for more or less displacement
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
};
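The resulting effect can then be drawn like any other image; a minimal sketch, assuming a CanvasControl Draw handler and the graphicsEffect built above:
private void Canvas_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    // offscreen must already contain the rendered grid at this point
    args.DrawingSession.DrawImage(graphicsEffect);
}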

Related

Cut faraway objects based on depth map

I would like to do a GrabCut that uses a depth map to cut away far objects, for use in a mixed reality application. I would like to show just the front of what I see, with the background replaced by a virtual reality scene.
The problem right now: I tried to adapt some code, and what I get is the front, cut out, but in black (actually the mask).
I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    int width = mask.rows();
    int height = mask.cols();
    byte[] buffer = new byte[width * height];
    mask.get(0, 0, buffer);
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int value = buffer[y * width + x];
            if (value == Imgproc.GC_BGD)
            {
                buffer[y * width + x] = 0; // for sure background
            }
            else if (value == Imgproc.GC_PR_BGD)
            {
                buffer[y * width + x] = 85; // probably background
            }
            else if (value == Imgproc.GC_PR_FGD)
            {
                buffer[y * width + x] = (byte)170; // probably foreground
            }
            else
            {
                buffer[y * width + x] = (byte)255; // for sure foreground
            }
        }
    }
    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));
depth.copyTo(maskFar);
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
// note: the element names are swapped here - this dilates with the
// 4x4 element and erodes with the 7x7 one
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);
Mat bgModel = new Mat();
Mat fgModel = new Mat();
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);
convertToGrayScaleValues(maskFar); // back to grayscale values
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);
Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, graph cut on the depth image might not be the right method to solve your issue.
If you insist that the processing be done on the depth image (to find everything that is not on the table and filter out the table part), you may first apply a disparity-based approach for finding the objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the locally connected components that are grouped together. You may follow this question, how to do this disparity map in OpenCV, which asks a similar way of finding the objects that are not on the ground.
If you are OK with RGB-based approaches, then using any deep-learning-based method to recognize the monitor should be the correct approach. It can directly detect the monitor's bounding box. By applying this bounding box to the depth image, you may get what you want. For deep-learning-based approaches there are many available packages, such as the YOLO series; you may find one that is suitable for you. Reference: https://medium.com/@dvshah13/project-image-recognition-1d316d04cb4c
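As a rough illustration of that last idea, here is a sketch in OpenCV for Unity syntax (detectedBox is a hypothetical Rect from whatever detector you use; depth8U is the normalized 8-bit depth image, where bright pixels are assumed to mean far, matching the question's maskFar naming):
Mat boxMask = Mat.zeros(image.size(), CvType.CV_8UC1);
// fill the detected bounding box
Imgproc.rectangle(boxMask, detectedBox.tl(), detectedBox.br(), new Scalar(255), -1);
// bright = far is assumed, so invert the threshold to keep the near pixels
Mat nearMask = new Mat();
Imgproc.threshold(depth8U, nearMask, 180, 255, Imgproc.THRESH_BINARY_INV);
Core.bitwise_and(boxMask, nearMask, boxMask);
Mat foreground = new Mat(image.size(), image.type(), new Scalar(0, 0, 0, 0));
image.copyTo(foreground, boxMask);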

Excluding small chunks of pixels from Image .Net

I have a black image with white lines. Is it possible to exclude chunks of white pixels that are smaller than a specific number? For example: change the color of chunks made up of fewer than 10 pixels from white to black.
Original Image:
Image on the output(small areas of white pixels are removed):
Right now I work with the AForge library for C#, but C++ ways of solving this are also appreciated (OpenCV, for example). Hints on what this functionality might be called are also appreciated.
Without worrying too much about your details, it does seem fairly simple:
Use the bitmap in 32bpp and use LockBits to get scanlines and direct pointer access to the array.
Scan every pixel with two for loops.
Every time you find one that matches your target color, scan left, right, up and down X amount of pixels to determine if it matches your requirements.
If it does, leave the pixel; if not, change it.
If you wanted more speed you could chuck this all in a parallel workload; there is probably also more you could do with a mask array to save re-searching dead paths (just a thought).
Note: obviously you can smarten this up a bit.
Example:
// lock the bitmap for direct access
var bitmapData = bitmap.LockBits(Bounds, ImageLockMode.ReadWrite, bitmap.PixelFormat);
// get the pointer
var scan0Ptr = (int*)bitmapData.Scan0;
// get the stride in pixels
var stride = bitmapData.Stride / BytesPerPixel;
// local method
void Workload(Rectangle? bounds)
{
    // when running synchronously bounds is null, and Bounds is the full image rectangle
    var rect = bounds ?? Bounds;
    var white = Color.White.ToArgb();
    var black = Color.Black.ToArgb();
    // scan all x
    for (var x = rect.Left; x < rect.Right; x++)
    {
        var pX = scan0Ptr + x;
        // scan all y
        for (var y = rect.Top; y < rect.Bottom; y++)
        {
            if (*(pX + y * stride) != white)
            {
                // this will turn it to monochrome
                // so add your threshold here, i.e. some more for loops
                //*(pX + y * stride) = black;
            }
        }
    }
}
// ... run the workload, then unlock the bitmap
bitmap.UnlockBits(bitmapData);
To parallelize it, you could use something like this to break your image up into smaller regions:
public static List<Rectangle> GetSubRects(this Rectangle source, int size)
{
    var rects = new List<Rectangle>();
    for (var x = 0; x < size; x++)
    {
        var width = Convert.ToInt32(Math.Floor(source.Width / (double)size));
        var xCal = 0;
        if (x == size - 1)
        {
            xCal = source.Width - (width * size);
        }
        for (var y = 0; y < size; y++)
        {
            var height = Convert.ToInt32(Math.Floor(source.Height / (double)size));
            var yCal = 0;
            if (y == size - 1)
            {
                yCal = source.Height - (height * size);
            }
            rects.Add(new Rectangle(width * x, height * y, width + xCal, height + yCal));
        }
    }
    return rects;
}
And this:
private static void DoWorkload(Rectangle bounds, ParallelOptions options, Action<Rectangle?> workload)
{
    if (options == null)
    {
        workload(null);
    }
    else
    {
        var size = 5; // how many rects to work on, i.e. 5 x 5
        Parallel.ForEach(bounds.GetSubRects(size), options, rect => workload(rect));
    }
}
Usage
DoWorkload(Bounds, options, Workload);
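As a side note on naming: the operation asked about is usually called blob filtering or connected-component filtering. If a library routine is acceptable, AForge (which the question already uses) ships one; a minimal sketch, assuming a binarized 8bpp Bitmap named bitmap (note that BlobsFiltering filters by blob bounding-box size rather than exact pixel count):
var filter = new AForge.Imaging.Filters.BlobsFiltering
{
    MinWidth = 4,  // tune: blobs narrower than this are removed
    MinHeight = 4, // tune: blobs shorter than this are removed
};
filter.ApplyInPlace(bitmap);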

Creating features(point) grid on polygon

I am working on a GIS-based desktop application in C#, using the DotSpatial library.
Now I need to create a grid of features on a polygon. Each grid cell (rectangle) should be 20x20 meters.
I have worked on it and am able to create the grid, but I'm facing an issue with the cell size: whenever the polygon size changes, the cell size changes too. My code:
// Polygon Width = 2335
// Polygon Height = 2054
int RowsCount = 111;
int ColumnsCount = 111;
var maxPointX = Polygon.Extent.MaxPointX;
var minPointX = Polygon.Extent.MinPointX;
var maxPointY = Polygon.Extent.MaxPointY;
var minPointY = Polygon.Extent.MinPointY;
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
var gridColumnsPoints = new double[1000000];
var gridRowPoints = new double[1000000];
// Calculate the coordinates of the grid
var nextPointX = minPointX;
for (int i = 1; i <= ColumnsCount; i++)
{
    gridColumnsPoints[i - 1] = nextPointX;
    nextPointX = nextPointX + dXStep;
}
var nextPointY = minPointY;
for (int i = 1; i <= RowsCount; i++)
{
    gridRowPoints[i - 1] = nextPointY;
    nextPointY = nextPointY + dYStep;
}
Output
Now when I tried this code on a small polygon, the grid cell size also decreased.
I know my approach is not correct, so I searched and found some tools, like:
https://gis.stackexchange.com/questions/79681/creating-spatially-projected-polygon-grid-with-arcmap
But I want to create it in C# and was unable to find any algorithm or other helping material.
Please share your knowledge. Thanks.
I am not able to understand: if you want the grid cell size to be 20x20 meters, how does the size change from polygon to polygon? It should always be 20x20 meters.
In your code, where did you get the values for ColumnsCount and RowsCount?
Your dx and dy should always be 20 (if the spatial reference units are in meters), or you need to convert the 20 meters to the appropriate length in the units of the spatial reference.
Pseudo code for creating the grid:
var xMax = Polygon.extent.xmax;
var xMin = Polygon.extent.xmin;
var yMax = Polygon.extent.ymax;
var yMin = Polygon.extent.ymin;
var gridCells = [];
var x = xMin;
while (x <= xMax) {
    var dx = x + 20;
    var y = yMin; // reset y for every column
    while (y <= yMax) {
        var dy = y + 20;
        var cell = new Extent(x, y, dx, dy);
        gridCells.push(cell);
        y = dy;
    }
    x = dx;
}
The problem is here:
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
because it makes the grid size dependent on the polygon, while it should be fixed to the scale of the view.
I'm not familiar with the DotSpatial framework, but you must be operating in a coordinate system of some kind. You should align your grid to that coordinate system: calculate the first x position to the left of the polygon at some distance from the polygon's bounding box (max/min), and then step with the resolution of the coordinate system through to the max X of the polygon.
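A minimal C# sketch of that alignment idea, assuming a projected coordinate system in meters and DotSpatial's Extent type (variable names follow the question's code). Snapping the grid origin to multiples of the cell size anchors the grid to the coordinate system instead of to each polygon:
const double cellSize = 20.0;
// snap the origin so grids built for different polygons line up
double startX = Math.Floor(minPointX / cellSize) * cellSize;
double startY = Math.Floor(minPointY / cellSize) * cellSize;
var cells = new List<Extent>();
for (double x = startX; x < maxPointX; x += cellSize)
{
    for (double y = startY; y < maxPointY; y += cellSize)
    {
        cells.Add(new Extent(x, y, x + cellSize, y + cellSize));
    }
}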

How to count spring coil turns?

In reference to: How to detect and count a spiral's turns
I am not able to get the count, even with the pixel-based calculation.
Given the attached image, how should I start counting the turns?
I tried FindContours(), but it doesn't get the turns segregated, which it can't. Also with MatchShapes() I get a similarity factor, but for the whole coil.
So I tried the following for the turn count:
public static int GetSpringTurnCount()
{
    if (null == m_imageROIed)
        return -1;
    int imageWidth = m_imageROIed.Width;
    int imageHeight = m_imageROIed.Height;
    if ((imageWidth <= 0) || (imageHeight <= 0))
        return 0;
    int turnCount = 0;
    Image<Gray, float> imgGrayF = new Image<Gray, float>(imageWidth, imageHeight);
    CvInvoke.cvConvert(m_imageROIed, imgGrayF);
    imgGrayF = imgGrayF.Laplace(1); // float image, to avoid integer overflow
    Image<Gray, byte> imgGray = new Image<Gray, byte>(imageWidth, imageHeight);
    Image<Gray, byte> cannyEdges = new Image<Gray, byte>(imageWidth, imageHeight);
    CvInvoke.cvConvert(imgGrayF, imgGray);
    cannyEdges = imgGray.Copy();
    //cannyEdges = cannyEdges.ThresholdBinary(new Gray(1), new Gray(255)); // = cannyEdges > 0 ? 1 : 0;
    cannyEdges = cannyEdges.Max(0);
    cannyEdges /= 255;
    // sum the edge pixels of every column
    Double[] sumRow = new Double[cannyEdges.Cols];
    int Rows = cannyEdges.Rows;
    int Cols = cannyEdges.Cols;
    for (int X = 0; X < Cols; X++)
    {
        Double sumB = 0;
        for (int Y = 0; Y < Rows; Y++)
        {
            Double pixels = cannyEdges[Y, X].Intensity;
            sumB += pixels;
        }
        sumRow[X] = sumB;
    }
    // normalize by the average and count the columns that stand out
    Double avg = sumRow.Average();
    List<int> turnCountList = new List<int>();
    for (int cnt = 0; cnt < sumRow.Length; cnt++)
    {
        sumRow[cnt] /= avg;
        if (sumRow[cnt] > 3.0)
            turnCountList.Add((int)sumRow[cnt]);
    }
    turnCount = turnCountList.Count();
    cntSmooth = cntSmooth * 0.9f + turnCount * 0.1f; // cntSmooth is a class field
    return (int)cntSmooth;
}
I am next trying SURF.
==================================================
Edit: adding samples. If you like, give it a try.
==================================================
Edit: Tried another algorithm:
ROI, then rotate (the biggest thin light-blue rectangle).
GetMoments(), then shrink the ROI height and position.Y using the moments.
Set the shrunken ROI and ._And() it with a blank image (the gray region with the green rectangle).
Cut the image into two halves.
Contour and fit ellipses.
Take the maximum number of fitted ellipses.
Later I will work on better algorithms and results.
Assuming the bigger cluster of white colour is the spring:
--EDIT--
Apply an inverse threshold to the picture and fill the corners with a flood fill algorithm.
Find the rotated bounding box of the biggest white cluster using findContours and minAreaRect.
Trace the box's longer axis, doing the following:
for each pixel along the axis, trace the line perpendicular to the axis going through the current pixel.
This line will cross the spring in at least two points.
Find the point with the bigger distance from the axis.
This will create a collection of points similar to a sine function.
Count the peaks or clusters of this collection (a counting sketch follows below); this will give twice the number of loops.
All this assumes you don't have high noise in the picture.
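A minimal sketch of that last counting step (assuming profile holds the per-pixel distances described above; the threshold value is arbitrary and needs tuning to the image):
static int CountPeaks(double[] profile, double threshold)
{
    int peaks = 0;
    bool above = false;
    foreach (var v in profile)
    {
        // each rising crossing of the threshold counts as one peak
        if (!above && v > threshold) { peaks++; above = true; }
        else if (above && v <= threshold) { above = false; }
    }
    return peaks; // roughly twice the number of turns, per the steps above
}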

C# bitmap question

Sorry for the previous post; I now show the full code here.
I need to know what the bitmap.Width - 1 and bitmap.Height - 1 are for, and also what bitmap.Scan0 is.
I searched the internet but did not find any full explanation for it.
I will appreciate anyone who can briefly explain the whole thing. Thank you.
public static double[][] GetRgbProjections(Bitmap bitmap)
{
    var width = bitmap.Width - 1;
    var height = bitmap.Height - 1;
    var horizontalProjection = new double[width];
    var verticalProjection = new double[height];
    var bitmapData1 = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    unsafe
    {
        var imagePointer1 = (byte*)bitmapData1.Scan0;
        for (var y = 0; y < height; y++)
        {
            for (var x = 0; x < width; x++)
            {
                var blu = imagePointer1[0];
                var green = imagePointer1[1];
                var red = imagePointer1[2];
                int luminosity = (byte)(((0.2126 * red) + (0.7152 * green)) + (0.0722 * blu));
                horizontalProjection[x] += luminosity;
                verticalProjection[y] += luminosity;
                imagePointer1 += 4;
            }
            imagePointer1 += bitmapData1.Stride - (bitmapData1.Width * 4);
        }
    }
    MaximizeScale(ref horizontalProjection, height);
    MaximizeScale(ref verticalProjection, width);
    var projections =
        new[]
        {
            horizontalProjection,
            verticalProjection
        };
    bitmap.UnlockBits(bitmapData1);
    return projections;
}
Apparently it runs through every pixel of an RGBA bitmap and calculates the luminosity per pixel, which it tracks inside two arrays: luminosity per horizontal line and luminosity per vertical line.
Unless I am mistaken, the -1 should not even be there. When you have a 100x100 bitmap you want to create an array with 100 elements, not an array with 99 elements (width - 1), since you want to track every horizontal and vertical line.
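In other words, a minimal fix (keeping the rest of the method unchanged) would be to size the arrays from the full dimensions:
var width = bitmap.Width;
var height = bitmap.Height;
var horizontalProjection = new double[width];
var verticalProjection = new double[height];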
