Creating a features (point) grid on a polygon - C#

I am working on a GIS-based desktop application in C#, using the DotSpatial library.
Now I need to create a grid of features on a polygon. Each grid cell (rectangle) should be 20 × 20 meters.
I have worked on it and am able to create a grid, but I'm facing an issue with the cell size: whenever the polygon size changes, the cell size changes too. My code:
// Polygon Width  = 2335
// Polygon Height = 2054
int RowsCount = 111;
int ColumnsCount = 111;
var maxPointX = Polygon.Extent.MaxPointX;
var minPointX = Polygon.Extent.MinPointX;
var maxPointY = Polygon.Extent.MaxPointY;
var minPointY = Polygon.Extent.MinPointY;
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
var gridColumnsPoints = new double[ColumnsCount];
var gridRowPoints = new double[RowsCount];
// Calculate the coordinates of the grid
var nextPointX = minPointX;
for (int i = 0; i < ColumnsCount; i++)
{
    gridColumnsPoints[i] = nextPointX;
    nextPointX += dXStep;
}
var nextPointY = minPointY;
for (int i = 0; i < RowsCount; i++)
{
    gridRowPoints[i] = nextPointY;
    nextPointY += dYStep;
}
Output
Now when I tried this code on a smaller polygon, the grid cell size decreased as well.
I know my approach is not correct, so I searched and found some tools, like
https://gis.stackexchange.com/questions/79681/creating-spatially-projected-polygon-grid-with-arcmap
But I want to create it in C# and have been unable to find an algorithm or any other helpful material.
Please share your knowledge. Thanks.

I am not able to understand: if you want the grid cell size to be 20 × 20 meters, how does the size change from polygon to polygon? It should always be 20 × 20 meters.
In your code, where did you get the values for ColumnsCount and RowsCount?
Your dx and dy should always be 20 (if the spatial reference units are meters); otherwise you need to convert 20 meters to the corresponding length in the units of the spatial reference.
Pseudo code for creating the grid:
var xMax = Polygon.extent.xmax;
var xMin = Polygon.extent.xmin;
var yMax = Polygon.extent.ymax;
var yMin = Polygon.extent.ymin;
var gridCells = [];
var x = xMin;
while (x <= xMax) {
    var dx = x + 20;
    var y = yMin; // reset y for each column
    while (y <= yMax) {
        var dy = y + 20;
        var cell = new Extent(x, y, dx, dy);
        gridCells.push(cell);
        y = dy;
    }
    x = dx;
}
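As compilable C#, the same idea might look like this (a sketch: the Cell record here is a stand-in for DotSpatial's Extent, which could be substituted directly):

```csharp
using System;
using System.Collections.Generic;

// Stand-in cell type for illustration; DotSpatial's Extent could be used instead.
public record Cell(double MinX, double MinY, double MaxX, double MaxY);

public static class GridBuilder
{
    // Builds fixed-size cells covering the bounding box [minX, maxX] x [minY, maxY].
    // cellSize is in the units of the spatial reference (e.g. 20 for 20 m).
    public static List<Cell> Build(double minX, double minY,
                                   double maxX, double maxY, double cellSize)
    {
        var cells = new List<Cell>();
        for (double x = minX; x < maxX; x += cellSize)
        {
            for (double y = minY; y < maxY; y += cellSize)
            {
                cells.Add(new Cell(x, y, x + cellSize, y + cellSize));
            }
        }
        return cells;
    }
}
```

Because the cell size is a constant here, the number of cells changes with the polygon's extent, but the size of each cell does not.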

The problem is here:
var dXStep = (maxPointX - minPointX) / (ColumnsCount - 1);
var dYStep = (maxPointY - minPointY) / (RowsCount - 1);
because it makes the cell size dependent on the polygon's extent, when it should be fixed.
I'm not familiar with the DotSpatial framework, but you must be operating in a coordinate system of some kind. You should align your grid to that coordinate system by calculating the first x position to the left of the polygon, at some distance from the polygon's bounding box (max/min), and then stepping at the grid resolution through to the max X of the polygon.
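One way to sketch that alignment step (my own illustration, not a DotSpatial API): snap the starting coordinates down to the nearest multiple of the cell size, so that grids computed for different polygons fall on the same lines:

```csharp
using System;

public static class GridAlign
{
    // Snaps a coordinate down to the nearest multiple of cellSize so that
    // every grid shares the same origin lines, regardless of the polygon.
    public static double SnapDown(double value, double cellSize)
        => Math.Floor(value / cellSize) * cellSize;
}
```

For example, SnapDown(2347.0, 20.0) gives 2340.0; stepping by 20 from there covers the polygon with cells that are aligned to the coordinate system rather than to the polygon's bounding box.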

Related

How are the positions of elements derived from the CSS attribute justify-content: space-between calculated?

This is somewhat a point of curiosity and somewhat something I actually need to use.
How are the positions of elements derived from justify-content: space-between calculated?
As in this example: https://jsfiddle.net/qrjot0bh/1/
I know how to divide a line segment up between two vectors by lerping, similar to this:
// C#
using System.Numerics;
...
int amount = 3;          // number of elements
float startSpace = 100;  // starting X
float endSpace = 900;    // ending X
Vector2 vec2Start = new Vector2(startSpace); // start vector
Vector2 vec2End = new Vector2(endSpace);     // end vector
Vector2[] arrPossibles = new Vector2[amount];
float divider = 1f / amount;
float linear = 0f;
for (int i = 0; i < amount; i++)
{
    if (i == 0)
        linear = divider / 2;
    else
        linear += divider;
    arrPossibles[i] = Vector2.Lerp(vec2Start, vec2End, linear);
}
// ...go through the possibilities, treating them as center points for a prospective rectangle.
This is a JavaScript equivalent: https://jsfiddle.net/8c9rdejx/6/
But as you can see it's not 'between': the first and last elements are not at the start and end of the respective parent holder.
After much fiddling and searching, I've managed to come up with a simple algorithm to distribute 'blocks' at equal distances along a horizontal axis.
Variables

Name        Value
----        -----
amount      3
width       800
margin      50
blockWidth  32
Code (JavaScript)
var amount = 3;
var width = 800;
var margin = 50;
var blockWidth = 32;
var startSpace = 0 + margin;
var endSpace = width - margin;
var distance = endSpace - startSpace;
var minrect = blockWidth;                   // min size for each block
var sumWidths = blockWidth * amount;        // size if we add all the blocks together
var remWidth = distance - sumWidths;        // size of the space minus the size of all blocks
var spaceBetween = remWidth / (amount - 1); // remaining white space between blocks
var pos2 = new Array(amount).fill(margin);
var last = pos2[0]; // the last position starts at the first element
for (var i = 1; i < amount; i++) {
    // the sum of the last position plus the space and the block width is the new position
    pos2[i] = last + spaceBetween + blockWidth;
    // store the current position as the last
    last = pos2[i];
}
a JS Fiddle is available here: https://jsfiddle.net/joesblog/605dzcgh/53/
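Since the question's own snippet is C#, here is the same layout logic as a C# function (my own port of the JavaScript above); with amount = 3, width = 800, margin = 50 and blockWidth = 32 it yields the left edges 50, 384 and 718, so the last block ends exactly at width - margin = 750:

```csharp
using System;

public static class SpaceBetween
{
    // Returns the left edge of each block, distributed with equal gaps
    // between margin and (width - margin), like justify-content: space-between.
    public static double[] Positions(int amount, double width,
                                     double margin, double blockWidth)
    {
        double distance = (width - margin) - margin;
        double spaceBetween = (distance - blockWidth * amount) / (amount - 1);
        var positions = new double[amount];
        positions[0] = margin;
        for (int i = 1; i < amount; i++)
            positions[i] = positions[i - 1] + spaceBetween + blockWidth;
        return positions;
    }
}
```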

C# Windows Forms Chart Always Display Zero in YAxis

I draw some charts using DataVisualization.Charting.Chart. All of my charts have a symmetric Y range,
for example -20 to 20 or -150 to 150, which means zero is always in the middle of the Y range. There is no problem with drawing, but the chart never makes a label for zero.
For example, I have the labels -20, -15, -10, -5, 5, 10, 15, 20. I always want to see zero among the Y-axis labels. See the image:
I did it using CustomLabels:
int maxRange = 20;
int yInterval = 5;
var minRange = maxRange * -1;
area.AxisY.Minimum = minRange;
area.AxisY.Maximum = maxRange;
area.AxisY.LabelStyle.Format = "#";
area.AxisY.Interval = yInterval;
int yVal = minRange;
while (yVal <= maxRange)
{
    area.AxisY.CustomLabels.Add(yVal - 0.5, yVal + 0.5, yVal.ToString());
    yVal += yInterval;
}
For this, the min and max must be multiples of 10:
chart.ChartAreas[0].AxisX.Interval = 10;
chart.ChartAreas[0].AxisY.Interval = 10;
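The CustomLabels loop above can also be expressed as a small pure helper (a hypothetical function of my own, not part of the Charting API), which makes it easy to see when zero appears: it is included exactly when the range limit is a multiple of the interval.

```csharp
using System;
using System.Collections.Generic;

public static class AxisLabels
{
    // Label values from -max to +max in steps of interval.
    // Zero is included whenever max is a multiple of interval.
    public static List<int> Symmetric(int max, int interval)
    {
        var labels = new List<int>();
        for (int v = -max; v <= max; v += interval)
            labels.Add(v);
        return labels;
    }
}
```

Each returned value v would then be added via area.AxisY.CustomLabels.Add(v - 0.5, v + 0.5, v.ToString()), as in the question's own loop.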

Win2D Keystone Correction

I'm trying to use Win2D/C# to project an image with an overhead projector, and I need to use a Win2D effect to do keystone correction (pre-warp the image) as the final step.
Basically I'm drawing a rectangle, then trying to use a Transform3DEffect to warp it before rendering. I can't figure out what Matrix transformation combination to use to get it to work. Doing a full camera projection seems like overkill since I only need warping in one direction (see image below). What transforms should I use?
Using an image like the following can get you a similar effect:
https://i.stack.imgur.com/5QnEm.png
I am unsure what causes the "bending".
Code for creating the displacement map (with GDI+, because it can set pixels fast).
The LockBitmap class you can find here:
static void DrawDisplacement(int width, int height, LockBitmap lbmp)
{
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int roff = (int)((((width >> 1) - x) / (float)(width >> 1)) * ((height - y) / (float)height) * 127);
            int goff = 0;
            lbmp.SetPixel(x, y, Color.FromArgb(127 - roff, 127 - goff, 0));
        }
    }
}
Drawing in Win2D looks something like this, where displacementImage is the loaded file and offscreen is a CanvasRenderTarget on which I drew the grid.
// Scaling to fit the image to the content
ICanvasImage scaledDisplacement = new Transform2DEffect
{
    BorderMode = EffectBorderMode.Hard,
    Source = displacementImage,
    TransformMatrix = Matrix3x2.CreateScale(
        (float)(sender.Size.Width / displacementImage.Bounds.Width),
        (float)(sender.Size.Height / displacementImage.Bounds.Height)),
    Sharpness = 1f,
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
    InterpolationMode = CanvasImageInterpolation.HighQualityCubic,
};

// Blurring, for a better result
ICanvasImage displacement = new GaussianBlurEffect
{
    BorderMode = EffectBorderMode.Hard,
    Source = scaledDisplacement,
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
    BlurAmount = 2,
    Optimization = EffectOptimization.Quality,
};

ICanvasImage graphicsEffect = new DisplacementMapEffect
{
    Source = offscreen,
    Displacement = displacement,
    XChannelSelect = EffectChannelSelect.Red,
    YChannelSelect = EffectChannelSelect.Green,
    Amount = 800, // change for more or less displacement
    BufferPrecision = CanvasBufferPrecision.Precision32Float,
};

Asp.net chart, how can I set the X axis label position to left aligned instead of centered?

I've spent hours trying to solve this silly problem. I create a histogram with the ASP.NET Chart control. All I want to do is have the X-axis label to the left of the column instead of centered on it. The X-axis label doesn't seem to have a position property like series do, so I can't figure it out and it's frustrating.
Here's sample code for the type of graphic I'm talking about, to show you approximately what I get:
private void Graphique()
{
    // Creating the series
    Series series2 = new Series("Series2");
    // Setting the chart type
    series2.ChartType = SeriesChartType.Column;
    // Adding some points
    series2.Points.AddXY(1492, 12);
    series2.Points.AddXY(2984, 0);
    series2.Points.AddXY(4476, 1);
    series2.Points.AddXY(5968, 2);
    series2.Points.AddXY(7460, 2);
    series2.Points.AddXY(8952, 12);
    series2.Points.AddXY(10444, 4);
    series2.Points.AddXY(11936, 3);
    series2.Points.AddXY(13428, 3);
    series2.Points.AddXY(14920, 5);
    series2.Points.AddXY(16412, 1);
    Chart3.Series.Add(series2);
    Chart3.Width = 600;
    Chart3.Height = 600;
    // Series visuals
    series2.YValueMembers = "Frequency";
    series2.XValueMember = "RoundedValue";
    series2.BorderWidth = 1;
    series2.ShadowOffset = 0;
    series2.IsXValueIndexed = true;
    // Setting the X axis
    Chart3.ChartAreas["ChartArea1"].AxisX.IsMarginVisible = true;
    Chart3.ChartAreas["ChartArea1"].AxisX.Interval = 1;
    Chart3.ChartAreas["ChartArea1"].AxisX.Maximum = Double.NaN;
    Chart3.ChartAreas["ChartArea1"].AxisX.Title = "kbps";
    // Setting the Y axis
    Chart3.ChartAreas["ChartArea1"].AxisY.Interval = 2;
    Chart3.ChartAreas["ChartArea1"].AxisY.Maximum = Double.NaN;
    Chart3.ChartAreas["ChartArea1"].AxisY.Title = "Frequency";
}
Now my real chart looks like this: Actual result
I would like something similar to this website:
Desired layout chart
You see, the X label is on the left, which makes much more sense considering that each column of a histogram represents the frequency of a range of values.
Any help would be appreciated.
Did you try adding CustomLabels to replace the default ones? For example:
for (int i = 0; i <= 10; i++)
{
    area.AxisX.CustomLabels.Add(i + 0.5, i + 1.5, i.ToString(), 0, LabelMarkStyle.None);
}
The first two arguments are for positioning and the third is the text value of the label.
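Applied to the histogram in the question (a sketch under my assumption that IsXValueIndexed = true makes the axis positions 1, 2, 3, …), the spans can be chosen so that each label's midpoint i - 0.5 sits on the left boundary of column i, showing the bin's lower edge there:

```csharp
using System;
using System.Collections.Generic;

public static class HistogramLabels
{
    // For an indexed X axis (positions 1..count), returns (From, To, Text)
    // triples whose midpoint (i - 0.5) sits on the left boundary of column i,
    // labelled with that bin's lower edge.
    public static List<(double From, double To, string Text)> LeftEdge(int count, int binWidth)
    {
        var labels = new List<(double, double, string)>();
        for (int i = 1; i <= count; i++)
            labels.Add((i - 1, i, ((i - 1) * binWidth).ToString()));
        return labels;
    }
}
```

Each triple would then be passed to Chart3.ChartAreas["ChartArea1"].AxisX.CustomLabels.Add(From, To, Text) in place of the default labels.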

Manually Feed Point Cloud Data into Kinect Fusion Toolkit

I'm trying to manually feed data into Kinect Fusion's AlignPointClouds functionality. In the Kinect SDK Toolkit 1.8, there's an example of how to use Kinect Fusion. I'm using the toolkit provided in the samples to try to align two point clouds with Fusion; however, I can't seem to get the AlignPointClouds method to ever converge successfully.
I'm sure that there is something I'm misunderstanding about how to copy data into the FusionPointCloudImageFrame.
What I'm currently trying to do (a trivial case, simply matching two planes together) fails:
float[] arr1 = new float[80 * 60 * 6];
float[] arr2 = new float[80 * 60 * 6];
for (int y = 0; y < 60; y++) {
    for (int x = 0; x < 80; x++) {
        int ind = y * 80 + x;
        arr1[ind] = x / .1f;     // X coordinate
        arr1[ind + 1] = y / .1f; // Y coordinate
        arr1[ind + 2] = 1;       // Z coordinate
        // Normals
        arr1[ind + 3] = 0;
        arr1[ind + 4] = 0;
        arr1[ind + 5] = 1;
        arr2[ind] = x / .1f;     // X coordinate
        arr2[ind + 1] = y / .1f; // Y coordinate
        arr2[ind + 2] = 2;       // Z coordinate
        // Normals
        arr1[ind + 3] = 0;
        arr1[ind + 4] = 0;
        arr1[ind + 5] = 1;
    }
}
FusionPointCloudImageFrame pcl1 = new FusionPointCloudImageFrame(80, 60);
FusionPointCloudImageFrame pcl2 = new FusionPointCloudImageFrame(80, 60);
pcl1.CopyPixelDataFrom(arr1);
pcl2.CopyPixelDataFrom(arr2);
Matrix4 m = Matrix4.Identity;
bool success = FusionDepthProcessor.AlignPointClouds(pcl1, pcl2, 7, null, ref m);
// Does not converge; m stays identity regardless of what it was before
What am I doing incorrectly, or what do I need to change to manually feed in data to match two point clouds? Also, can someone please explain the significance of a point cloud having a width and height? Each point has x, y, and z values and doesn't need to be ordered in any particular way as far as I know, so why do we need to provide a width or height? If I'm reading the data from a .obj (Wavefront) file, how can I determine the width and height?
Thanks!
At a quick glance, this example probably fails to converge because there is no texture to "fix" these planes: they can just slide around and match equally well anywhere. Try using some less trivial test data.
The "point cloud" is essentially just a depth image, so it can have a width and height, just like a frame that originated from the Kinect camera.
If you are rendering from a mesh, you can choose an appropriate width and height, e.g. 640x480.
I don't know if this is still helpful, but there is a problem with your code: the index only advances by one per point instead of six, so you are effectively only storing the X coordinate, and the normals for the second array are written to arr1 instead of arr2. For my part, I use coordinates saved from a previous Kinect Fusion mesh, which I store in a file and then load to compare with the actual point cloud frame. However, it seems to return true every time, so I don't know whether AlignPointClouds is meant to limit the transformation between two point clouds or can be used for object recognition.
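A corrected fill for the interleaved X, Y, Z, Nx, Ny, Nz layout might look like this (a sketch under my assumption that FusionPointCloudImageFrame expects six floats per pixel, which the width * height * 6 buffer size in the question implies):

```csharp
using System;

public static class PointCloudBuffer
{
    // Fills a width x height frame buffer with a flat plane at depth z.
    // Each point occupies six floats: X, Y, Z, Nx, Ny, Nz.
    public static float[] Plane(int width, int height, float z)
    {
        var buf = new float[width * height * 6];
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int ind = (y * width + x) * 6; // advance six floats per point
                buf[ind + 0] = x / .1f; // X
                buf[ind + 1] = y / .1f; // Y
                buf[ind + 2] = z;       // Z
                buf[ind + 3] = 0;       // Nx
                buf[ind + 4] = 0;       // Ny
                buf[ind + 5] = 1;       // Nz (plane faces the camera)
            }
        }
        return buf;
    }
}
```

The question's two frames would then be filled with Plane(80, 60, 1) and Plane(80, 60, 2) before calling CopyPixelDataFrom.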
