I have a list of region borders in a SQL database and I am using SharpMap to render thumbnail images for every country I need. It works really well.
But I would like to go one step further and add a small globe around the country, positioning the country in its correct place on that globe. I do not know where to start.
Here's the code I'm using so far to render the country thumbnails. Any ideas?
var map = new Map(new Size(command.Width, command.Height));
map.BackColor = Color.Transparent;
var countryGeometry = GeometryFromWKT.Parse(command.CountryLevelWkt);
IProvider countryProvider = new GeometryFeatureProvider(countryGeometry);
var countryLayer = new VectorLayer("country", countryProvider);
var borderColor = System.Drawing.ColorTranslator.FromHtml(command.BorderColor);
countryLayer.Style.EnableOutline = true;
countryLayer.Style.Outline = new Pen(borderColor);
countryLayer.Style.Outline.Width = command.BorderWidth;
countryLayer.Style.Fill = Brushes.Transparent;
var transformationFactory = new CoordinateTransformationFactory();
countryLayer.CoordinateTransformation = transformationFactory.CreateFromCoordinateSystems(
GeographicCoordinateSystem.WGS84,
ProjectedCoordinateSystem.WebMercator);
map.Layers.Add(countryLayer);
var bottomLeft = new Coordinate(command.Extents.BottomLeft.Longitude, command.Extents.BottomLeft.Latitude);
var topRight = new Coordinate(command.Extents.TopRight.Longitude, command.Extents.TopRight.Latitude);
// project the lon/lat extents into the map's coordinate system (Web Mercator)
var bottomLeftMercator = countryLayer.CoordinateTransformation.MathTransform.Transform(bottomLeft);
var topRightMercator = countryLayer.CoordinateTransformation.MathTransform.Transform(topRight);
map.ZoomToBox(new Envelope(bottomLeftMercator, topRightMercator));
var img = map.GetMap();
return img;
1. Start by drawing all countries on a new map, each on its own layer.
2. Draw the country you're interested in on its own layer.
3. Set the map center to the Envelope.Center of the layer from step 2. E.g., if drawing Australia, the map will move to the left.
4. Render the map as an image. Draw the image on a drawing surface (System.Drawing.Graphics).
5. Re-center the map to cover the empty space. E.g., if drawing Australia, move the map almost all the way to the right. You will need to work out these offsets programmatically.
6. Render the map from step 5 as an image. Add that image to the same drawing surface (see step 4).
7. Repeat steps 5-6 to cover the empty space below/above the render made at step 4.
Here is an example:
Note that:
Australia is in the center
There is a gap between the map layers near the mouse pointer (the gap in the screenshot is intentional, to demonstrate the logic)
Some countries are very large (e.g. Russia), and Envelope.Center will not work very well for them; consider centering on the largest polygon only
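The two-pass compositing in steps 4-6 can be sketched roughly as follows. This is only a sketch, not a tested implementation: it assumes a fully configured SharpMap `Map` named `map` with a transparent `BackColor`, projected to Web Mercator, and the direction of the offset will depend on which side of the image is empty.

```csharp
using System;
using System.Drawing;
using GeoAPI.Geometries;

// Sketch of steps 4-6: render the same map twice, one world-width apart,
// and composite both renders onto one Graphics surface so the space left
// by centering on the target country is filled in.
static Bitmap RenderWrapped(SharpMap.Map map)
{
    // Width of the world in Web Mercator meters (2 * pi * equatorial radius).
    const double worldWidth = 2 * Math.PI * 6378137.0;

    var result = new Bitmap(map.Size.Width, map.Size.Height);
    using (var g = Graphics.FromImage(result))
    {
        // Step 4: render centered on the country of interest.
        using (var first = map.GetMap())
            g.DrawImage(first, 0, 0);

        // Step 5: re-center one world-width away (the sign depends on
        // which side of the image is empty).
        map.Center = new Coordinate(map.Center.X + worldWidth, map.Center.Y);

        // Step 6: render again onto the same surface; the transparent
        // background lets the first render show through.
        using (var second = map.GetMap())
            g.DrawImage(second, 0, 0);
    }
    return result;
}
```

Repeating the same trick with a vertical offset covers the space above/below, per step 7.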
Here's a sample Windows Forms project. In the sample, I used maps from http://thematicmapping.org/downloads/world_borders.php.
Hi, I'm struggling with an algorithm that can reliably extract image bounding boxes and arrows from a rasterized document. The arrows and images can vary in size, shape, and color. The arrows may not always be true arrows but rather plain lines. The images may be just an outline or a full-color picture. This is the code I wrote so far; it kind of works, but not always. I wrote this algorithm based on the excellent paper "Image to CAD: Feature Extraction and Translation of Raster Image of CAD Drawing to DXF CAD Format" by Aditya Intwala.
This is the original image:
For detecting the arrowheads:
using var kernel1 = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(2, 2));
using var binary = grayImage.Threshold(0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);
using var invertedBinary = grayImage.Threshold(0, 255, ThresholdTypes.BinaryInv | ThresholdTypes.Otsu);
using var blackHatResult = binary.MorphologyEx(MorphTypes.BlackHat, kernel1);
using var solidArrowHeads = invertedBinary - blackHatResult;
using var foundArrowHeads = solidArrowHeads.ToMat();
// open (erode, then dilate) with a small kernel to remove noise before contour detection
using var openKernel = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3));
using var eroded = foundArrowHeads.Erode(openKernel);
using var dilated = eroded.Dilate(openKernel);
dilated.FindContours(out var arrowHeadContours, out var hierarchy, RetrievalModes.External,
    ContourApproximationModes.ApproxSimple);
After finding the arrowheads, I do the following to find the lines that intersect the arrowheads' bounding boxes. These intersecting lines are what I classify as arrows. It does work, but I get a lot of false positives, and I would like to know how to improve my algorithm, or whether there is a better way. Next, I erase the arrows by masking them out and use the resulting image for the next step.
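The line-classification step described above can be sketched like this. This is a hedged illustration, not the exact code in question: it assumes line segments come from `Cv2.HoughLinesP` and that `arrowHeadBoxes` holds the `Cv2.BoundingRect` of each arrowhead contour found earlier; the `IsArrow` helper and the `slack` padding are illustrative names.

```csharp
using System;
using System.Linq;
using OpenCvSharp;

// Hypothetical helper: classify a detected line segment as an arrow if
// either of its endpoints falls inside a (slightly inflated) arrowhead
// bounding box.
static bool IsArrow(LineSegmentPoint line, Rect[] arrowHeadBoxes, int slack = 3)
{
    foreach (var box in arrowHeadBoxes)
    {
        // Inflate the box a little so lines stopping just short still match.
        var inflated = new Rect(box.X - slack, box.Y - slack,
                                box.Width + 2 * slack, box.Height + 2 * slack);
        if (inflated.Contains(line.P1) || inflated.Contains(line.P2))
            return true;
    }
    return false;
}

// Usage sketch: detect candidate segments with the probabilistic Hough
// transform, then keep only those touching an arrowhead box.
// var lines = Cv2.HoughLinesP(invertedBinary, 1, Math.PI / 180, 50, 30, 5);
// var arrows = lines.Where(l => IsArrow(l, arrowHeadBoxes)).ToArray();
```

Tightening the Hough `minLineLength`/`maxLineGap` parameters, or requiring the line's direction to roughly point into the box, are typical ways to cut false positives.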
To find the boundaries of the images, I've written the following code:
using var se1 = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(70, 1));
using var closedImg = grayImage.MorphologyEx(MorphTypes.Close, se1);
Cv2.BitwiseAnd(grayImage, closedImg, grayImage);
using var structuringElement = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(15, 15));
using var gradient = grayImage.MorphologyEx(MorphTypes.Gradient, structuringElement);
using var thresholded = gradient.Threshold(197, 255, ThresholdTypes.Binary);
// Let's get bounding boxes for all large contours
Cv2.FindContours(thresholded, out var contours, out var hierarchyIndices, RetrievalModes.External,
    ContourApproximationModes.ApproxSimple);
for (var i = 0; (i >= 0) && (i < hierarchyIndices.Length); i = hierarchyIndices[i].Next)
{
CT_Assert.True(hierarchyIndices[i].Parent == -1, "Must be a top-level contour");
var rect = Cv2.BoundingRect(contours[i]);
// save this rect as possible bounding box
}
This again kind of works, but not always.
Final output image: the black bounding boxes are detected arrowheads, the red lines are detected arrows, and the blue boxes are detected image bounding boxes.
When I add a slew of Pushpins to a Bing Map, I want the zoom level to automagically update to the "best fit" - the "closest in" view that allows all of the Pushpins to be displayed.
I would imagine it would be done by computing the required zoom level: find the latitudes that are furthest north and furthest south, and the longitudes that are furthest east and west, then center on the average of those values.
My idea is passing a List of GeoCoordinates to a method that would return the appropriate ZoomLevel, and possibly a separate method that would calculate the center point of the map.
These two values could then be used for the map's SetView(location, ZoomLevel) method.
Has this been done already, or is there a Bing Maps call that will handle this?
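The bounding-box arithmetic described in the question can be sketched as a plain helper, independent of Bing Maps (the `CenterOf` name is illustrative; note this naive version breaks for point sets spanning the antimeridian at longitude ±180°):

```csharp
using System;
using System.Collections.Generic;

// Sketch: the bounding box of a set of (lat, lon) points and its center.
static (double Lat, double Lon) CenterOf(IReadOnlyList<(double Lat, double Lon)> points)
{
    double minLat = double.MaxValue, maxLat = double.MinValue;
    double minLon = double.MaxValue, maxLon = double.MinValue;
    foreach (var (lat, lon) in points)
    {
        if (lat < minLat) minLat = lat;
        if (lat > maxLat) maxLat = lat;
        if (lon < minLon) minLon = lon;
        if (lon > maxLon) maxLon = lon;
    }
    // Center of the bounding box is the midpoint of each extreme pair.
    return ((minLat + maxLat) / 2.0, (minLon + maxLon) / 2.0);
}
```

Deriving the matching zoom level additionally requires the viewport's pixel size and Mercator math, which is why letting the map control fit the view is usually easier.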
There is an overload of SetView which accepts a list of locations and sets the view to the bounding rectangle of those locations. It ensures all the locations are visible; you also need to add a bit of extra margin to the view rectangle to make sure the pushpins themselves are fully visible.
Since you have pushpins on the map, you already have a list of locations; if you don't, you can easily get the locations from the pushpins themselves.
Example
Assuming you have followed the steps on How can I add a Bing Maps Component to my C# Winforms app?, add a button to the form and handle its click event like this:
private void button1_Click(object sender, EventArgs e)
{
var map = this.userControl11.myMap;
//Locations
var locations = new[] {
new Location(47.6424, -122.3219),
new Location(47.8424, -122.1747),
new Location(47.67856, -122.130994)};
//Add Pushpins
locations.Select(x => new Pushpin() { Location = x })
.ToList().ForEach(x => { map.Children.Add(x); });
//Margin: pushpins are anchored at their bottom-center point, so pad
//half a pin's width on each side and a full pin's height at the top
var w = new Pushpin().Width;
var h = new Pushpin().Height;
var margin = new Thickness(w / 2, h, w / 2, 0);
//Set view
map.SetView(locations, margin, 0);
}
And this is the result:
Note: If you don't have the list of locations but you do have a few pushpins on the map, you can easily get the locations like this:
var map = this.userControl11.myMap;
var locations = map.Children.OfType<Pushpin>().Select(x => x.Location);
I'm looking into how to draw a looping shape in WPF from code-behind.
I know I can draw ellipses and lines, but I can't find anywhere how to modify them to get this looping shape.
Perhaps I'm not using the correct word for this type of drawing... I also found terms such as "clothoids", but I'm not sure that is exactly what I'm looking for.
Any tips are welcome. Thank you!
It's a little clumsy, but you can create Geometry objects, and then Path objects, from SVG path data.
The Path can then be hosted in any container.
//Define SVG data
string pathData = "M 4.4285714e-6,196.64791 C 71.557031,196.64791 202.13304,-0.49493571 99.047621,-0.49493571 c -103.0854176,0 -4.93638,197.14284571 99.047619,197.14284571 103.984,0 197.9832,-197.14284571 99.04762,-197.14284571 -98.93558,0 3.94706,197.14284571 99.04762,197.14284571 95.10056,0 199.7498,-197.14284571 99.04761,-197.14284571 -100.70219,0 4.52998,197.14284571 99.04762,197.14284571";
//Create converter
var converter = TypeDescriptor.GetConverter(typeof(Geometry));
//Create Path
var p = new Path() {
Data = (Geometry)converter.ConvertFrom(pathData),
Stroke = new SolidColorBrush(Colors.Red),
StrokeThickness = 4
};
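To actually display the shape, the Path then has to be added to a container; for example (assuming a Canvas named `canvas1` exists in your XAML, and noting that `Geometry.Parse(pathData)` is an equivalent shortcut to the TypeDescriptor converter):

```csharp
// Host the Path in any Panel; `canvas1` is an assumed Canvas in the XAML.
canvas1.Children.Add(p);
```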
I'm working on a simple 3D model viewer. I need to support very large models (100,000+ triangles) and keep movement smooth while rotating the camera.
To optimize the drawing, instead of creating a GeometryModel3D for each segment in a polymesh, I want to use the full list of vertices and triangle indices. The speed-up was amazing, but now the lighting is messed up: each triangle has its own shade.
I think the issue is related to normals. If I manually set all normals to Vector3(0,0,1) I get even lighting, but when I view the side or the reverse of the model it is dark. I also attempted to use a formula to calculate the normals of each triangle, but the result was the same: one large model would be messed up while separate models looked good. So maybe it isn't a normals issue?
I'm curious why everything works correctly when the models are separate, but combining them causes issues.
Image Showing the Issue
The polyMesh object contains a bunch of faces; each face records which vertex indices it uses (either a triangle or a quad). The polyMesh holds all the vertex information.
var models = new Model3DCollection();
var brush = new SolidColorBrush(GetColorFromEntity(polyMesh));
var material = new DiffuseMaterial(brush);
foreach (var face in polyMesh.FaceRecord)
{
var indexes = new Int32Collection();
if (face.VertexIndexes.Count == 4)
{
indexes.Add(face.VertexIndexes[0]);
indexes.Add(face.VertexIndexes[1]);
indexes.Add(face.VertexIndexes[2]);
indexes.Add(face.VertexIndexes[2]);
indexes.Add(face.VertexIndexes[3]);
indexes.Add(face.VertexIndexes[0]);
}
else
{
indexes.Add(face.VertexIndexes[0]);
indexes.Add(face.VertexIndexes[1]);
indexes.Add(face.VertexIndexes[2]);
}
MeshGeometry3D mesh = new MeshGeometry3D()
{
Positions = GetPoints(polyMesh.Vertices),
TriangleIndices = indexes,
};
GeometryModel3D model = new GeometryModel3D()
{
Geometry = mesh,
Material = material,
};
models.Add(model);
}
return models;
If I take the mesh and the geometry out of the loop and create one large mesh, things go wrong.
var models = new Model3DCollection();
var brush = new SolidColorBrush(GetColorFromEntity(polyMesh));
var material = new DiffuseMaterial(brush);
var indexes = new Int32Collection();
foreach (var face in polyMesh.FaceRecord)
{
//Add indices as above (code trimmed to save space.)
indexes.Add(face.VertexIndexes[0]);
}
MeshGeometry3D mesh = new MeshGeometry3D()
{
Positions = GetPoints(polyMesh.Vertices),
TriangleIndices = indexes,
};
GeometryModel3D model = new GeometryModel3D()
{
Geometry = mesh,
Material = material,
};
models.Add(model);
return models;
Other important details: the model isn't just a flat surface; it is a complex model of a roadway. I can't show the whole model, nor provide code for how it was imported. I am using HelixToolkit for camera controls and diagnostics.
Viewport code:
<h:HelixViewport3D ZoomExtentsWhenLoaded="True" IsPanEnabled="True" x:Name="ViewPort" IsHeadLightEnabled="True">
<h:HelixViewport3D.DefaultCamera>
<!-- Flicker avoidance: this code fixes a flicker bug! Found at http://stackoverflow.com/a/38243386 -->
<PerspectiveCamera NearPlaneDistance="25"/>
</h:HelixViewport3D.DefaultCamera>
<h:DefaultLights/>
</h:HelixViewport3D>
Setting a back material doesn't change anything. I'm hoping I'm just being a moron and missing something obvious as I am new to 3D.
Finally fixed this myself.
It came down to not enough knowledge about WPF 3D (or 3D in general?).
If a vertex is reused, as in the combined model above, then smooth shading is applied. The reason smooth shading looks so terrible on the flat sides is that it is applied across the full 3D model. Each polyface I had was a 3D shape: each segment of the wall was a different polymesh with ~6 faces (front, up, down, left, right, back).
When each vertex position was unique, the problem went away, even when combining the models.
When the models were separate, each model had its own copy of the positions, so the vertices were already unique.
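The fix described above, giving every face its own copy of each vertex so WPF cannot average normals across faces, amounts to "unwelding" the index buffer. A sketch with types simplified to tuples (in WPF these would be a Point3DCollection and Int32Collection; the `Unweld` name is illustrative):

```csharp
using System;

// Duplicate each referenced position so no vertex is shared between
// triangles. WPF then computes per-face normals, giving flat shading
// instead of smoothing across the whole model.
static ((double X, double Y, double Z)[] Positions, int[] Indices)
    Unweld((double X, double Y, double Z)[] positions, int[] indices)
{
    var newPositions = new (double X, double Y, double Z)[indices.Length];
    var newIndices = new int[indices.Length];
    for (int i = 0; i < indices.Length; i++)
    {
        newPositions[i] = positions[indices[i]]; // each corner gets its own copy
        newIndices[i] = i;                       // indices become trivial 0..n-1
    }
    return (newPositions, newIndices);
}
```

The cost is more vertex data (three positions per triangle), which is the usual trade-off for flat shading with indexed meshes.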
Found the answer from:
http://xoax.net/blog/automatic-3d-normal-vector-calculation-in-c-wpf-applications/
I'm looking to connect or glue together two shapes or objects with a Line. These shapes will be generated dynamically, meaning I'll be calling a Web service on the backend to determine how many objects/shapes need to be created. Once this is determined, I'll need to have the objects/shapes connected together.
The method signature may look like this (similar to Visio's drawing capabilities):
GlueTogether(objButton1, objButton2);
I may need to get the position of each Rectangle shape or Button to determine where the starting point of the Line is, and then determine the second shape's position to draw the line to.
Any help or suggestions would be great!
1. Put a Path or a Line below the shapes in stacking order (z-index).
2. Use instance.TransformToVisual() to get the transform of each shape.
3. Use the transform to transform the center point of each shape.
4. Draw a line between the two center points.
var transform1 = shape1.TransformToVisual(shape1.Parent as UIElement);
var transform2 = shape2.TransformToVisual(shape2.Parent as UIElement);
var lineGeometry = new LineGeometry()
{
    StartPoint = transform1.Transform(new Point(shape1.ActualWidth / 2.0, shape1.ActualHeight / 2.0)),
    EndPoint = transform2.Transform(new Point(shape2.ActualWidth / 2.0, shape2.ActualHeight / 2.0))
};
var path = new Path()
{
    Data = lineGeometry,
    // a Stroke is required, otherwise the line renders invisibly
    Stroke = new SolidColorBrush(Colors.Black),
    StrokeThickness = 1
};
I am trying much the same, but instead of the line going from one centre to the other I want the lines to stop at the edge of the two shapes.
In particular I have arrows at the end of the lines, and the arrows need to stop at the bounds of the shapes instead of going inside/behind the shape to its centre.
My shape is a usercontrol with a grid and rectangle, and some labels and other stuff.
I can't find any methods that provide me with a geometry for the edge of the shape (which is a rounded rectangle).
I figured out a solution that uses the bounding box and intersection points to connect my elements with lines at their approximate edges, and it works well for me with arrow-ended lines.
See Connecting two WPF canvas elements by a line, without using anchors?
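The bounding-box intersection idea can be sketched as pure geometry: walk from one shape's center toward the other and find where the segment crosses the first shape's axis-aligned bounds. This is an approximation that ignores rounded corners; the `ExitPoint` helper here is illustrative, not from the linked answer.

```csharp
using System;

// Find where the segment from `inside` (a shape's center) to `outside`
// (the other shape's center) exits the rectangle (left, top, width,
// height) containing `inside`, using a parametric clip against the two
// edges facing the direction of travel.
static (double X, double Y) ExitPoint(
    (double X, double Y) inside, (double X, double Y) outside,
    double left, double top, double width, double height)
{
    double dx = outside.X - inside.X, dy = outside.Y - inside.Y;
    double t = 1.0; // parametric distance along the segment, in [0, 1]

    if (dx != 0)
    {
        double edgeX = dx > 0 ? left + width : left; // facing vertical edge
        t = Math.Min(t, (edgeX - inside.X) / dx);
    }
    if (dy != 0)
    {
        double edgeY = dy > 0 ? top + height : top;  // facing horizontal edge
        t = Math.Min(t, (edgeY - inside.Y) / dy);
    }
    return (inside.X + t * dx, inside.Y + t * dy);
}
```

Using the two exit points (one per shape) as the line's start and end keeps arrowheads on the shapes' borders rather than underneath them.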
Check this out: http://www.graphspe.com/Main.aspx#/Solution/graphviz-xaml-renderer
All you have to do is printf to a string and you get your Silverlight[2|3] diagram.
In addition: instead of connecting to the center point of your objects, I've modified the same code from Michael S. to:
var lineGeometry = new LineGeometry()
{
StartPoint = transform1.Transform(new Point(1, b1.ActualHeight / 2.0)),
EndPoint = transform2.Transform(new Point(b2.ActualWidth , b2.ActualHeight / 2.0))
};
This will connect at the outer portions of each object.
I am using the above code to draw two buttons and I want a line between those two buttons, but all I get are two buttons that look like tiny circles, and no line.
code:
Button b1 = new Button();
Button b2 = new Button();
canvas1.Children.Add(b1);
canvas1.Children.Add(b2);
Canvas.SetLeft(b1, 300);
var transform1 = b1.TransformToVisual(b1.Parent as UIElement);
var transform2 = b2.TransformToVisual(b2.Parent as UIElement);
var lineGeometry = new LineGeometry()
{
StartPoint = transform1.Transform(new Point(1, b1.ActualHeight / 2.0)),
EndPoint = transform2.Transform(new Point(b2.ActualWidth, b2.ActualHeight / 2.0))
};
var path = new Path()
{
Data = lineGeometry
};
canvas1.Children.Add(path);