I have managed to make a surface from a set of points in VTK. Now I need to cut a plane through the surface and make a 2D contour that I can output as vtkImageData. The code I have only makes a projection onto a plane. Can anyone tell me what I am doing wrong in getting a cut plane through the polydata?
vtkSphereSource sphereSource = vtkSphereSource.New();
sphereSource.SetPhiResolution(30);
sphereSource.SetThetaResolution(30);
sphereSource.SetCenter(40, 40, 0);
sphereSource.SetRadius(20);
vtkDataSetSurfaceFilter surfaceFilter = vtkDataSetSurfaceFilter.New();
surfaceFilter.SetInputConnection(sphereSource.GetOutputPort());
surfaceFilter.Update();
// generate circle by cutting the sphere with an implicit plane
// (through its center, axis-aligned)
vtkCutter circleCutter = vtkCutter.New();
circleCutter.SetInputConnection(sphereSource.GetOutputPort());
vtkPlane cutPlane = vtkPlane.New();
double[] Origin = sphereSource.GetCenter();
cutPlane.SetOrigin(Origin[0], Origin[1], Origin[2]);
cutPlane.SetNormal(0, 0, 1);
circleCutter.SetCutFunction(cutPlane);
vtkStripper stripper = vtkStripper.New();
stripper.SetInputConnection(circleCutter.GetOutputPort()); // valid circle
stripper.Update();
// that's our circle
vtkPolyData circle = stripper.GetOutput();
// prepare the binary image's voxel grid
vtkImageData whiteImage = vtkImageData.New();
double[] bounds;
bounds = circle.GetBounds();
whiteImage.SetNumberOfScalarComponents(1);
whiteImage.SetScalarType(3); // 3 == VTK_UNSIGNED_CHAR (SetScalarTypeToUnsignedChar)
whiteImage.SetSpacing(.5, .5, .5);
// compute dimensions
int[] dim = new int[3];
for (int i = 0; i < 3; i++)
{
    dim[i] = (int)Math.Ceiling((bounds[i * 2 + 1] - bounds[i * 2]) / .5) + 1;
    if (dim[i] < 1)
        dim[i] = 1;
}
whiteImage.SetDimensions(dim[0], dim[1], dim[2]);
whiteImage.SetExtent(0, dim[0] - 1, 0, dim[1] - 1, 0, dim[2] - 1);
whiteImage.SetOrigin(bounds[0], bounds[2], bounds[4]);
whiteImage.AllocateScalars();
// fill the image with foreground voxels:
byte inval = 255;
byte outval = 0;
int count = whiteImage.GetNumberOfPoints();
for (int i = 0; i < count; ++i)
{
    whiteImage.GetPointData().GetScalars().SetTuple1(i, inval);
}
// sweep polygonal data (this is the important thing with contours!)
vtkLinearExtrusionFilter extruder = vtkLinearExtrusionFilter.New();
extruder.SetInput(circle); // TODO: this should probably be SetInputConnection(stripper.GetOutputPort())
extruder.SetScaleFactor(1.0);
extruder.SetExtrusionTypeToNormalExtrusion();
extruder.SetVector(0, 0, 1);
extruder.Update();
// polygonal data -> image stencil:
vtkPolyDataToImageStencil pol2stenc = vtkPolyDataToImageStencil.New();
pol2stenc.SetTolerance(0); // important if extruder.SetVector(0, 0, 1) !!!
pol2stenc.SetInputConnection(extruder.GetOutputPort());
pol2stenc.SetOutputOrigin(bounds[0], bounds[2], bounds[4]);
pol2stenc.SetOutputSpacing(.5, .5, .5);
int[] Extent = whiteImage.GetExtent();
pol2stenc.SetOutputWholeExtent(Extent[0], Extent[1], Extent[2], Extent[3], Extent[4], Extent[5]);
pol2stenc.Update();
// cut the corresponding white image and set the background:
vtkImageStencil imgstenc = vtkImageStencil.New();
imgstenc.SetInput(whiteImage);
imgstenc.SetStencil(pol2stenc.GetOutput());
imgstenc.ReverseStencilOff();
imgstenc.SetBackgroundValue(outval);
imgstenc.Update();
vtkImageData stencil = imgstenc.GetOutput();
int[] Dims = stencil.GetDimensions();
int[,] DataMap = new int[Dims[0], Dims[1]];
I think you are getting the contour correctly: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Filtering/ContoursFromPolyData - now, how do you expect to output the contour as an image? Do you just want to set pixels on the contour to one value, and pixels not on the contour to a different value? How about this? http://www.vtk.org/Wiki/VTK/Examples/Cxx/PolyData/PolyDataContourToImageData
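The linked PolyDataContourToImageData example is VTK-specific; conceptually, though, turning contour points into "pixels on / pixels off" boils down to stamping the points into a binary grid. A minimal plain-Python sketch of that idea (no VTK; the 0.5 spacing and the radius-20 circle mirror the code above):

```python
# Conceptual sketch (plain Python, no VTK): rasterize a set of 2D contour
# points into a binary grid, the way vtkPolyDataToImageStencil +
# vtkImageStencil ultimately produce inside/outside pixel values.
import math

def rasterize_contour(points, spacing=0.5, inval=255, outval=0):
    """Stamp contour points onto a grid whose origin and extent follow the
    points' bounding box, mirroring the whiteImage setup above."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    ox, oy = min(xs), min(ys)  # grid origin = bounds[0], bounds[2]
    nx = int(math.ceil((max(xs) - ox) / spacing)) + 1
    ny = int(math.ceil((max(ys) - oy) / spacing)) + 1
    grid = [[outval] * nx for _ in range(ny)]
    for x, y in points:
        i = int(round((x - ox) / spacing))
        j = int(round((y - oy) / spacing))
        grid[j][i] = inval
    return grid

# A circle of radius 20 centered at (40, 40), like the sphere cut above.
circle = [(40 + 20 * math.cos(t * math.pi / 180),
           40 + 20 * math.sin(t * math.pi / 180)) for t in range(360)]
grid = rasterize_contour(circle)
```

This gives only the contour outline; to fill the interior (as the stencil pipeline does), you would additionally test each grid cell for being inside the polygon.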
Related
I am working on a GIS-based desktop application using C#. I am using the DotSpatial library in this project.
Now I need to create a grid of features on a polygon. Each grid cell (rectangle) should be 20 x 20 meters.
I have worked on it and am able to create a grid, but I am facing an issue with the cell size: whenever the polygon size changes, the cell size changes too. Here is my code:
// Polygon Width = 2335
// Polygon Height = 2054
int RowsCount = 111;
int ColumnsCount = 111;
var maxPointX = Polygon.Extent.MaxPointX;
var minPointX = Polygon.Extent.MinPointX;
var maxPointY = Polygon.Extent.MaxPointY;
var minPointY = Polygon.Extent.MinPointY;
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
var gridColumnsPoints = new double[1000000];
var gridRowPoints = new double[1000000];
// Calculate the coordinates of grid
var nextPointX = minPointX;
for (int i = 1; i <= ColumnsCount; i++)
{
    gridColumnsPoints[i - 1] = nextPointX;
    nextPointX = nextPointX + dXStep;
}
var nextPointY = minPointY;
for (int i = 1; i <= RowsCount; i++)
{
    gridRowPoints[i - 1] = nextPointY;
    nextPointY = nextPointY + dYStep;
}
Output
Now when I tried this code on a smaller polygon, the grid cell size also decreased.
I know my approach is not correct, so I searched and found some tools, like
https://gis.stackexchange.com/questions/79681/creating-spatially-projected-polygon-grid-with-arcmap
But I want to create it in C# and have been unable to find any algorithm or other helpful material.
Please share your knowledge. Thanks.
I am not able to understand: if you want the grid cell size to be 20 x 20 meters, how does the size change from polygon to polygon? It should always be 20 x 20 meters.
In your code, where did you get the values for ColumnsCount and RowsCount?
Your dx and dy should always be 20 (if the spatial reference units are in meters); otherwise you need to convert the 20 meters into the appropriate length in the units of the spatial reference.
Pseudo code for creating the grid (note that y must be reset to yMin for each column):
var xMax = Polygon.extent.xmax;
var xMin = Polygon.extent.xmin;
var yMax = Polygon.extent.ymax;
var yMin = Polygon.extent.ymin;
var gridCells = [];
var x = xMin;
while(x <= xMax){
    var dx = x + 20;
    var y = yMin; // reset for every column
    while(y <= yMax){
        var dy = y + 20;
        var cell = new Extent(x, y, dx, dy);
        gridCells.push(cell);
        y = dy;
    }
    x = dx;
}
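A runnable version of this pseudocode, sketched in Python for brevity (the 20-unit cell size assumes the spatial reference is in meters):

```python
# Sketch in Python of the fixed-size grid from the pseudocode above.
# The cell size is a constant, so it never depends on the polygon's extent.
def make_grid(x_min, y_min, x_max, y_max, cell=20.0):
    cells = []  # each cell: (west, south, east, north)
    x = x_min
    while x < x_max:
        y = y_min  # reset y for every column, or only one column is built
        while y < y_max:
            cells.append((x, y, x + cell, y + cell))
            y += cell
        x += cell
    return cells

# A 2335 x 2054 extent (the polygon size quoted in the question) yields
# ceil(2335/20) * ceil(2054/20) cells, all exactly 20 x 20.
cells = make_grid(0, 0, 2335, 2054)
```

Changing the extent changes only how many cells are produced, never their size, which is the behavior the question asks for.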
The problem is here:
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
because it makes the grid cell size dependent on the polygon's extent, but it should be fixed regardless of the polygon.
I'm not familiar with the DotSpatial framework, but you must be operating in some coordinate system. You should align your grid to that coordinate system: calculate the first x position to the left of the polygon, at some distance from the polygon's bounding box (max/min), and then step with the resolution of the coordinate system through to the polygon's max X.
I'm trying to use Win2D/C# to project an image using an overhead projector, and I need to use a Win2D effect to do keystone correction (pre-warp the image) as the final step.
Basically I'm drawing a rectangle, then trying to use a Transform3DEffect to warp it before rendering. I can't figure out what Matrix transformation combination to use to get it to work. Doing a full camera projection seems like overkill since I only need warping in one direction (see image below). What transforms should I use?
Using an image like the following can get you a similar effect.
https://i.stack.imgur.com/5QnEm.png
I am unsure what causes the "bending".
Here is the code for creating the displacement map (with GDI+, because it can set pixels fast).
You can find the LockBitmap class here.
static void DrawDisplacement(int width, int height, LockBitmap lbmp)
{
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            // The red offset is strongest near the top corners and fades
            // to zero at the horizontal center and the bottom edge.
            int roff = (int)((((width >> 1) - x) / (float)(width >> 1)) * ((height - y) / (float)height) * 127);
            int goff = 0;
            lbmp.SetPixel(x, y, Color.FromArgb(127 - roff, 127 - goff, 0));
        }
}
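As a sanity check on the offset formula, here it is in a standalone Python sketch (the 256 x 256 size is an arbitrary assumption):

```python
# Standalone check (plain Python) of the red-channel offset formula used
# in DrawDisplacement above: the stored red value is 127 (neutral, no
# displacement) on the center column and at the bottom edge, and moves
# away from 127 toward the top corners.
def red_offset(x, y, width, height):
    half = width >> 1
    return int(((half - x) / float(half)) * ((height - y) / float(height)) * 127)

w, h = 256, 256
top_left = 127 - red_offset(0, 0, w, h)        # strongest pull at a top corner
center_col = 127 - red_offset(w >> 1, 0, w, h) # neutral on the center column
bottom = 127 - red_offset(0, h, w, h)          # neutral at the bottom edge
```

Since DisplacementMapEffect reads the red channel for the X shift, 127 means "no shift" and 0 means the maximum leftward/rightward pull scaled by Amount.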
Drawing in Win2D looks something like this, where displacementImage is the loaded file and offscreen is a CanvasRenderTarget on which I drew the grid.
//Scaling for fitting the image to the content
ICanvasImage scaledDisplacement = new Transform2DEffect
{
BorderMode = EffectBorderMode.Hard,
Source = displacementImage,
TransformMatrix = Matrix3x2.CreateScale((float) (sender.Size.Width / displacementImage.Bounds.Width), (float) (sender.Size.Height / displacementImage.Bounds.Height)),
Sharpness = 1f,
BufferPrecision = CanvasBufferPrecision.Precision32Float,
InterpolationMode = CanvasImageInterpolation.HighQualityCubic,
};
//Blurring, for a better result
ICanvasImage displacement = new GaussianBlurEffect
{
BorderMode = EffectBorderMode.Hard,
Source = scaledDisplacement,
BufferPrecision = CanvasBufferPrecision.Precision32Float,
BlurAmount = 2,
Optimization = EffectOptimization.Quality,
};
ICanvasImage graphicsEffect = new DisplacementMapEffect
{
Source = offscreen,
Displacement = displacement,
XChannelSelect = EffectChannelSelect.Red,
YChannelSelect = EffectChannelSelect.Green,
Amount = 800,//change for more or less displacement
BufferPrecision = CanvasBufferPrecision.Precision32Float,
};
Converting a bitmap to grayscale is pretty easy with AForge:
public static Bitmap ConvertToGrayScale(this Bitmap me)
{
if (me == null)
return null;
// first convert to a grey scale image
var filterGreyScale = new Grayscale(0.2125, 0.7154, 0.0721);
me = filterGreyScale.Apply(me);
return me;
}
But I need something more tricky:
Imagine you want to convert everything to grayscale except for a circle in the middle of the bitmap. In other words: a circle in the middle of the given bitmap should keep its original colours.
Let's assume the radius of the circle is 20px, how should I approach this?
This can be accomplished using MaskedFilter with a mask that defines the circled area you describe. As the documentation states
Mask can be specified as .NET's managed Bitmap, as UnmanagedImage or as byte array. In the case if mask is specified as image, it must be 8 bpp grayscale image. In all case mask size must be the same as size of the image to process.
So the mask image has to be generated based on the source image's width and height.
I haven't compiled the following code but it should get you on your way. If the circle is always in the same spot, you could generate the image mask outside the method so that it doesn't have to be regenerated each time you apply the filter. Actually you could have the whole MaskedFilter generated outside the method that applies it if nothing changes but the source image.
public static Bitmap ConvertToGrayScale(this Bitmap me)
{
    if (me == null)
        return null;
    int radius = 20, x = me.Width / 2, y = me.Height / 2;
    using (Bitmap maskImage = new Bitmap(me.Width, me.Height, PixelFormat.Format8bppIndexed))
    {
        using (Graphics g = Graphics.FromImage(maskImage))
        using (Brush b = new SolidBrush(ColorTranslator.FromHtml("#00000000")))
            // FillEllipse takes a bounding box, so offset by the radius to
            // center the circle and use the diameter as its size.
            g.FillEllipse(b, x - radius, y - radius, radius * 2, radius * 2);
        var maskedFilter = new MaskedFilter(new Grayscale(0.2125, 0.7154, 0.0721), maskImage);
        return maskedFilter.Apply(me);
    }
}
EDIT
The solution for this turned out to be a lot trickier than I expected. The main problem was that the MaskedFilter doesn't allow filters that change the image's format, which the Grayscale filter does (it converts the source to an 8 bpp or 16 bpp image).
The following is the resulting code, which I have tested, with comments added to each part of the ConvertToGrayScale method explaining the logic behind it. The gray-scaled portion of the image has to be converted back to RGB, since the Merge filter doesn't support merging two images with different formats.
static class MaskedImage
{
    public static void DrawCircle(byte[,] img, int x, int y, int radius, byte val)
    {
        int west = Math.Max(0, x - radius),
            east = Math.Min(x + radius, img.GetLength(1)),
            north = Math.Max(0, y - radius),
            south = Math.Min(y + radius, img.GetLength(0));
        for (int i = north; i < south; i++)
            for (int j = west; j < east; j++)
            {
                int dx = i - y;
                int dy = j - x;
                if (Math.Sqrt(dx * dx + dy * dy) < radius)
                    img[i, j] = val;
            }
    }

    public static void Initialize(byte[,] arr, byte val)
    {
        for (int i = 0; i < arr.GetLength(0); i++)
            for (int j = 0; j < arr.GetLength(1); j++)
                arr[i, j] = val;
    }

    public static void Invert(byte[,] arr)
    {
        for (int i = 0; i < arr.GetLength(0); i++)
            for (int j = 0; j < arr.GetLength(1); j++)
                arr[i, j] = (byte)~arr[i, j];
    }

    public static Bitmap ConvertToGrayScale(this Bitmap me)
    {
        if (me == null)
            return null;
        int radius = 20, x = me.Width / 2, y = me.Height / 2;
        // Generate a two-dimensional byte array with the same size as the
        // source image, to be used as the mask.
        byte[,] mask = new byte[me.Height, me.Width];
        // Initialize all its elements to 0xFF (255 in decimal).
        Initialize(mask, 0xFF);
        // "Draw" a circle in the byte array, setting the positions inside
        // the circle to 0.
        DrawCircle(mask, x, y, radius, 0);
        var grayFilter = new Grayscale(0.2125, 0.7154, 0.0721);
        var rgbFilter = new GrayscaleToRGB();
        var maskFilter = new ApplyMask(mask);
        // Apply the Grayscale filter to everything outside the circle and
        // convert the resulting image back to RGB.
        Bitmap img = rgbFilter.Apply(grayFilter.Apply(maskFilter.Apply(me)));
        // Invert the mask.
        Invert(mask);
        // Get only the circle, in color, from the original image.
        Bitmap circleImg = new ApplyMask(mask).Apply(me);
        // Merge the gray-scaled part of the image and the color circle
        // into a single image.
        return new Merge(img).Apply(circleImg);
    }
}
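For reference, the same mask-and-merge idea reduced to a plain-Python sketch (no AForge; pixels are modeled as a flat list of RGB tuples, and the luminance weights match the Grayscale filter above):

```python
# The masking idea from the AForge code above, sketched with plain Python
# lists: gray everything outside a circle of radius 20, keep color inside.
def selective_grayscale(pixels, width, height, cx, cy, radius=20):
    out = []
    for i, (r, g, b) in enumerate(pixels):
        x, y = i % width, i // width
        if (x - cx) ** 2 + (y - cy) ** 2 < radius ** 2:
            out.append((r, g, b))  # inside the circle: keep original color
        else:
            # Same luminance weights as Grayscale(0.2125, 0.7154, 0.0721).
            v = int(0.2125 * r + 0.7154 * g + 0.0721 * b)
            out.append((v, v, v))
    return out

w = h = 50
img = [(200, 50, 50)] * (w * h)  # a uniformly red image
res = selective_grayscale(img, w, h, w // 2, h // 2)
```

The AForge version does the same thing with two masked passes plus a Merge; doing it in one pass per pixel is only practical here because this sketch skips all the bitmap format concerns that made the real solution tricky.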
I've worked on this for some time now with no luck, only coming close a time or two. I've also checked Google with no luck, so I'm asking for help.
What I'm attempting is to scan each line of an image and advance an index as long as the next pixel's color equals the current one, then write out a list containing (1) the length of each run of the same color per row, (2) the x position, and (3) the y position.
Any help would be much appreciated.
Get the RGB values of each pixel and compare them with the next pixel's values as you scan. Use an if/else to build the runs... something like this:
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int b = image.at<cv::Vec3b>(i, j)[0];
        int g = image.at<cv::Vec3b>(i, j)[1];
        int r = image.at<cv::Vec3b>(i, j)[2];
        // add your comparison with the previous pixel here
    }
}
You can also convert the image to grayscale for faster processing, but there will be some information loss.
I eventually used a structure to store the data until after the loops, to ensure all data was collected correctly. Here is the code I used, in case someone else is looking for the same solution.
First we will make a structure to hold the data we need.
internal struct VectorRectangle
{
    public int X, Y, Size;
    public string HexColor;
}
Now that we have our structure, we can obtain the values from the image. (Note: it currently tracks run length along the width only; height is not yet implemented.) If you see a way to optimize the code, please feel free to do so and message me about what you changed :)
internal static unsafe VectorRectangle[] GetRectangles(Bitmap @this)
{
    const int PixelSize = 4;
    List<VectorRectangle> vectorRectangles = new List<VectorRectangle>();
    Rectangle rectangle = new Rectangle(Point.Empty, @this.Size);
    BitmapData bitmapData = @this.LockBits(rectangle, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    for (int height = 0; height < @this.Height; height++)
    {
        byte* row = (byte*)(bitmapData.Scan0 + (height * bitmapData.Stride));
        for (int width = 0; width < @this.Width; width++)
        {
            VectorRectangle rect = new VectorRectangle();
            rect.Size = 1; // reset for every new run
            rect.X = width;
            rect.Y = height;
            // Format32bppArgb is laid out B, G, R, A in memory.
            rect.HexColor = ColorTranslator.ToHtml(Color.FromArgb(
                row[(width * PixelSize) + 3], // alpha
                row[(width * PixelSize) + 2], // red
                row[(width * PixelSize) + 1], // green
                row[width * PixelSize]        // blue
            ));
            while (width < @this.Width - 1 && ColorTranslator.ToHtml(Color.FromArgb(
                row[((width + 1) * PixelSize) + 3], // alpha
                row[((width + 1) * PixelSize) + 2], // red
                row[((width + 1) * PixelSize) + 1], // green
                row[(width + 1) * PixelSize]        // blue
            )) == rect.HexColor)
            {
                rect.Size++;
                width++;
            }
            vectorRectangles.Add(rect);
        }
    }
    @this.UnlockBits(bitmapData);
    return vectorRectangles.ToArray();
}
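The row-scanning logic above, stripped of the bitmap plumbing, is just run-length encoding. A minimal Python sketch (colors as hex strings, rows as plain lists):

```python
# The row scan from GetRectangles above, as a plain run-length encoding
# sketch: for each row, emit (x, y, run_length, color) for every run of
# identical pixels.
def encode_rows(rows):
    runs = []
    for y, row in enumerate(rows):
        x = 0
        while x < len(row):
            start = x
            # Advance while the next pixel matches the current one.
            while x + 1 < len(row) and row[x + 1] == row[x]:
                x += 1
            runs.append((start, y, x - start + 1, row[start]))
            x += 1
    return runs

runs = encode_rows([
    ["#FF0000", "#FF0000", "#0000FF"],
    ["#0000FF", "#0000FF", "#0000FF"],
])
```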
In reference to: How to detect and count a spiral's turns
I am not able to get the count even with the pixel-based calculation.
Given the attached image, how do I start counting the turns?
I tried FindContours(), but it doesn't segregate the turns, which it can't really do. MatchShapes() gives me a similarity factor, but only for the whole coil.
So I tried the following for the turn count:
public static int GetSpringTurnCount()
{
if (null == m_imageROIed)
return -1;
int imageWidth = m_imageROIed.Width;
int imageHeight = m_imageROIed.Height;
if ((imageWidth <= 0) || (imageHeight <= 0))
return 0;
int turnCount = 0;
Image<Gray, float> imgGrayF = new Image<Gray, float>(imageWidth, imageHeight);
CvInvoke.cvConvert(m_imageROIed, imgGrayF);
imgGrayF = imgGrayF.Laplace(1); // For saving integer overflow.
Image<Gray, byte> imgGray = new Image<Gray, byte>(imageWidth, imageHeight);
Image<Gray, byte> cannyEdges = new Image<Gray, byte>(imageWidth, imageHeight);
CvInvoke.cvConvert(imgGrayF, imgGray);
cannyEdges = imgGray.Copy();
//cannyEdges = cannyEdges.ThresholdBinary(new Gray(1), new Gray(255));// = cannyEdges > 0 ? 1 : 0;
cannyEdges = cannyEdges.Max(0);
cannyEdges /= 255;
Double[] sumRow = new Double[cannyEdges.Cols];
//int sumRowIndex = 0;
int Rows = cannyEdges.Rows;
int Cols = cannyEdges.Cols;
for (int X = 0; X < cannyEdges.Cols; X++)
{
Double sumB = 0;
for (int Y = 0; Y < cannyEdges.Rows; Y ++)
{
//LineSegment2D lines1 = new LineSegment2D(new System.Drawing.Point(X, 0), new System.Drawing.Point(X, Y));
Double pixels = cannyEdges[Y, X].Intensity;
sumB += pixels;
}
sumRow[X] = sumB;
}
Double avg = sumRow.Average();
List<int> turnCountList = new List<int>();
for (int cnt = 0; cnt < sumRow.Length; cnt++)
{
    sumRow[cnt] /= avg;
    if (sumRow[cnt] > 3.0)
        turnCountList.Add((int)sumRow[cnt]);
}
turnCount = turnCountList.Count;
cntSmooth = cntSmooth * 0.9f + turnCount * 0.1f; // cntSmooth is a class-level field
return (int)cntSmooth;
}
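The column-sum heuristic in this code can be sketched standalone in Python (the 10 x 10 edge map and the 3.0 threshold are toy assumptions, not the real image):

```python
# Sketch of the column-sum idea above: sum each column of a binary edge
# map, normalize by the average column sum, and flag columns well above
# it, as the sumRow[cnt] > 3.0 test does.
def tall_columns(edges, factor=3.0):
    """edges: 2D list of 0/1 values. Return indices of columns whose sum
    exceeds `factor` times the average column sum."""
    cols = len(edges[0])
    sums = [sum(row[x] for row in edges) for x in range(cols)]
    avg = sum(sums) / len(sums)
    return [x for x, s in enumerate(sums) if s / avg > factor]

# Synthetic 10x10 edge map: two "dense" columns among mostly empty ones,
# standing in for columns where the perpendicular line crosses many coils.
edges = [[0] * 10 for _ in range(10)]
for y in range(10):
    edges[y][2] = 1
    edges[y][7] = 1
flagged = tall_columns(edges)
```

Note this counts tall columns rather than turns directly, which is one reason the count comes out unstable on real images; adjacent tall columns from the same coil edge need to be clustered first.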
I am next trying surf.
==================================================
Edit: Adding samples, in case you want to try them.
==================================================
Edit: Tried another algorithm:
ROI, then rotate (the biggest thin light-blue rectangle).
GetMoments(), then shrink the ROI height and position.Y using the moment.
Set the shrunken ROI and ._And() it with a blank image (the gray region with the green rectangle).
Cut the image into two halves.
Contour and fit ellipses.
Take the maximum number of fitted ellipses.
Later I will work on better algorithms and results.
Assuming the bigger cluster of white is the spring:
--EDIT--
Apply an inverse threshold to the picture and fill the corners with a flood-fill algorithm.
Find the rotated bounding box of the biggest white cluster using findContours and minAreaRect.
Trace the box's longer axis, doing the following:
For each pixel along the axis, trace the perpendicular line going through the current pixel.
This line will cross the spring in at least two points.
Find the point with the greater distance from the axis.
This creates a collection of points similar to a sine function.
Count the peaks or clusters of this collection; this gives twice the number of loops.
All this assumes you don't have high noise in the picture.
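The last two steps (peak counting of the distance profile) can be sketched in Python; the synthetic |sin| profile below is a stand-in for the real distance-from-axis collection:

```python
import math

# A minimal peak counter (plain Python) for the final step above: given
# the per-position distance-from-axis profile, count clusters of values
# above a threshold; each spring loop crosses the perpendicular line
# twice, so turns = peaks / 2.
def count_peaks(profile, threshold):
    peaks, inside = 0, False
    for v in profile:
        if v > threshold and not inside:
            peaks += 1  # entered a new above-threshold cluster
            inside = True
        elif v <= threshold:
            inside = False
    return peaks

# Synthetic profile standing in for the real data: 5 loops -> 10 humps.
profile = [abs(math.sin(i * math.pi / 20)) for i in range(200)]
turns = count_peaks(profile, 0.5) / 2
```

The threshold absorbs small noise near the axis; with a noisy real profile, you would smooth the collection first or require a minimum cluster width.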