I am creating a visualizer in WPF to display flowfield information for a game I am writing and have come across a problem with some labels being very close to each other.
In the above screenshot, sector (0,0) is the top left. In sector (1,1) I have highlighted two labels with arrows that are very close to each other. In sector (2,1) I have circled two labels that overlap completely. I need to be able to place labels so that they do not overlap and keep a margin of distance between them. Preferably, I am after a simple algorithm that allows me to place labels on a contended spot.
The blue/black cells are virtualized items on an ItemsControl with a Canvas as the ItemsPanel. The red sector squares are on one adorner, while the green lines, boxes, Bézier curves and red cost labels are on a second adorner. Both adorners use the drawing context, with everything dynamically created upon render.
var typeface = new Typeface(new FontFamily("Segoe UI"), FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);
var formattedText = new FormattedText(curve.Cost.ToString(), CultureInfo.CurrentUICulture, FlowDirection.LeftToRight, typeface, 12, Brushes.Red, null, TextFormattingMode.Display);
var textLocation = new Point(midPoint2.X - (formattedText.WidthIncludingTrailingWhitespace / 2), midPoint2.Y - formattedText.Height);
drawingContext.DrawText(formattedText, textLocation);
A suggestion:
The Voronoi diagram of a set of geometric entities is the partition of the plane into regions where points are closer to a given entity than to all others.
If you construct the Voronoi diagram of your curves, and if you place the labels wholly in the corresponding regions, this solves your problem.
Assuming that all labels have the same extent (same bounding box), you can find suitable empty spaces by applying an erosion operation, i.e. removing layers of pixels on the region outlines for the desired width/height. The remaining pixels are possible centers for the labels.
In the general case, computing a Voronoi diagram by geometric means is extremely difficult. But if you work with a digital image, it suffices to draw the geometric entities and compute the distance map from them.
This requires that you be somewhat familiar with the techniques of digital image processing.
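To make the erosion step concrete, here is a minimal sketch assuming an image-processing library in the style of Emgu CV (the library choice, all names, and the binary "occupied" image are assumptions; any raster library offering erosion would do):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Sketch: 'occupied' is a binary image where the drawn curves (and anything else a
// label must not cover) are white. White pixels in the result are valid label centers.
static Image<Gray, byte> FindLabelCenters(Image<Gray, byte> occupied, Size labelExtent)
{
    // Free space = everything that is not occupied.
    Image<Gray, byte> free = occupied.Not();

    // Eroding the free space with a kernel the size of the label removes every pixel
    // that is too close to an outline to fit the label around it.
    Mat kernel = CvInvoke.GetStructuringElement(
        ElementShape.Rectangle, labelExtent, new Point(-1, -1));
    var centers = new Image<Gray, byte>(free.Size);
    CvInvoke.Erode(free, centers, kernel, new Point(-1, -1), 1,
                   BorderType.Constant, new MCvScalar(0));
    return centers;
}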
After considering several ways to place labels, including word clouds, physics-based approaches and Voronoi diagrams, I decided to base my approach on "An Empirical Study of Algorithms for Point-Feature Label Placement", as I could see a simple and quick way to position labels. Page 2 gave me the idea of having four possible locations for a desired point, and I built my own implementation with very simple rules.
I created a class called PointLabelPlacer with two methods:
AddLabel
ComputeNewPositions
I would send all my labels, along with their points, to the AddLabel method.
Once I was done and ready, I would call ComputeNewPositions. For each of the four possible locations, this would count the number of locations from other labels that overlapped it.
I would also flag a location if it overlapped the original point of another label.
If two labels overlapped exactly, I would again choose the first location without overlaps, but I would mark all of the other label's locations as used.
Then I would just choose the first location I found that had the lowest number of overlaps, did not overlap another point, and was not marked as used.
If after all that no alternate location could be found, I default to the top left and allow overlaps.
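The original implementation is not reproduced here, but a minimal sketch of a placer following these rules might look like the following (the signatures, the use of WPF Rect/Point, and the exact tie-breaking are assumptions and simplified):

using System.Collections.Generic;
using System.Linq;
using System.Windows;

public class PointLabelPlacer
{
    readonly List<(Point Anchor, Size Extent)> _labels = new List<(Point Anchor, Size Extent)>();

    public void AddLabel(Point anchor, Size extent) => _labels.Add((anchor, extent));

    public IList<Rect> ComputeNewPositions()
    {
        var placed = new List<Rect>();
        var used = new HashSet<Rect>();

        foreach (var (anchor, extent) in _labels)
        {
            // Four candidate locations around the point: top-left, top-right, bottom-left, bottom-right.
            Rect[] candidates =
            {
                new Rect(anchor.X - extent.Width, anchor.Y - extent.Height, extent.Width, extent.Height),
                new Rect(anchor.X, anchor.Y - extent.Height, extent.Width, extent.Height),
                new Rect(anchor.X - extent.Width, anchor.Y, extent.Width, extent.Height),
                new Rect(anchor.X, anchor.Y, extent.Width, extent.Height),
            };

            // Prefer the first candidate with the fewest overlaps that neither covers another
            // label's point nor has been marked as used; fall back to top-left and allow overlaps.
            Rect best = candidates
                .Where(c => !used.Contains(c))
                .Where(c => !_labels.Any(l => l.Anchor != anchor && c.Contains(l.Anchor)))
                .OrderBy(c => placed.Count(p => p.IntersectsWith(c)))
                .DefaultIfEmpty(candidates[0])
                .First();

            placed.Add(best);
            used.Add(best);
        }
        return placed;
    }
}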
This is with the alternate locations displayed in yellow
This is the final result
Unity Version: 5.6.5f1 Personal
TextMesh Pro Version: 1.0.56.0b3
I am attempting to create dynamic TextMesh Pro text blocks that are stored in a Vertical Layout Group UI Element. Each block of text is stored in its own gameobject, and all gameobjects are children of the Vertical Layout Group. All TextMesh Pro objects use the same font and have the "Auto-Sizing" flag enabled, so that they scale within the bounds of the Vertical Layout Group. Ideally, all text blocks should have the same font-size when scaling. See the current Vertical Layout Group Inspector with hierarchy for the Group and Child TextMesh Pro Text Blocks.
Vertical Layout Group Inspector
The problem is that if one block of text consists of two lines, and another block consists of three lines, both blocks will take up roughly half of the Vertical Layout Group. However, the first block's font-size will be around 2/3 of the second block's font-size. In practice, I will also occasionally see the two-line text block span three lines with a much larger font-size. See the image link below for details.
Output In Practice vs. Desired Outcome
The goal here is not to modify the Vertical Layout Group in any way. The contents must fit within the group's fixed-position and fixed-size. The blocks of text must be separate objects for the purposes of defining clickable regions. Each region spans over the entire text block, and will resize as the text changes.
Clickable Region Overlay Demonstration
The code behind the MonoBehaviour that manages the Vertical Layout Group maintains an array of strings containing the aforementioned text of all text blocks. Changes to this array, such as additions, edits and removals, appear as changes to the Vertical Layout Group by extension. I'm fairly certain at this point that I'll need to implement functionality to manipulate the text boxes whenever a change occurs, rather than rely on auto-sizing from TextMesh Pro, but it is at this point that I'm stuck.
How can I achieve the desired outcome, programmatically or otherwise: a font-size that is the same across all text boxes added to the Vertical Layout Group, while distributing the group's space amongst text boxes of varying content so that as much of the Vertical Layout Group as possible is used?
EDIT: Added Vertical Layout Group Inspector and Object Hierarchy as an image to this question.
As Ian H. stated, Auto-Size scales the font-size to fit the content of the text into the object's rectangle. While the individual TextMesh Pro blocks must be distinct objects, the Vertical Layout Group was deemed part of an object with "fixed-position and fixed-size." The solution is to use Auto-Sizing on a TextMesh Pro text block that is the same size as the Vertical Layout Group to determine the font-size for all child objects, then apply that font-size to each child, knowing that the fit is assured. First, let's begin with a proposed hierarchy:
Hierarchy with Raw Text and Vertical Layout Group
A new gameobject with a single TextMesh Pro text block, using the same RectTransform values as the Vertical Layout Group, is used to simulate how the text should be scaled relative to the area it's contained in. As outlined in the question, several paragraph entries were stored in an array that maintains them. Each string element was output to the Raw Text block, separated by System.Environment.NewLine characters.
Raw Text Inspector
Raw Text Viewed In Scene
Note the use of Auto-Size. The issue mentioned in the question is that the text scales improperly when broken into several distinct objects. However, as a single object, Auto-Size will work as intended, providing the target font-size needed for each individual object. Now, a method can Instantiate TextMesh Pro text block objects and store them into the Vertical Layout Group. The difference in this case is that each object is deliberately set to the font-size determined beforehand.
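A minimal sketch of such a method, assuming a hidden Raw Text block sized like the layout group and a prefab for the child blocks (all field names and the prefab are assumptions):

using TMPro;
using UnityEngine;

public class TextBlockBuilder : MonoBehaviour
{
    [SerializeField] TMP_Text rawText;       // single Auto-Sized block matching the layout group
    [SerializeField] TMP_Text blockPrefab;   // prefab for one clickable text block
    [SerializeField] Transform layoutGroup;  // the Vertical Layout Group transform

    public void Rebuild(string[] entries)
    {
        // Feed all entries to the single Auto-Sized block to discover the common font size.
        rawText.text = string.Join(System.Environment.NewLine, entries);
        rawText.ForceMeshUpdate();
        float commonSize = rawText.fontSize;

        // Clear any previously instantiated blocks.
        foreach (Transform child in layoutGroup)
            Destroy(child.gameObject);

        // Re-create one block per entry, each pinned to the pre-computed font size.
        foreach (string entry in entries)
        {
            TMP_Text block = Instantiate(blockPrefab, layoutGroup);
            block.enableAutoSizing = false;
            block.fontSize = commonSize;
            block.text = entry;
        }
    }
}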
Instantiated Text Inspector
For visualization purposes, each Instantiated Text object consists of the TextMesh Pro text block, as well as a color box added as a child object that encompasses the bounds of the text block. This will still function as intended without the color box. A comparison between the results of the Raw Text overlaying the Instantiated Text is shown in the image below, where the Raw Text is red and the Instantiated Text is black.
Comparison: Raw vs. Instantiated Text
The text itself will not overlap perfectly at every point. This is due to the fact that the centering of the Raw Text block produces a different effect from the centering of multiple Instantiated Text blocks, each having their own center. However, for the purposes of a list of clickable regions defined by text, this behavior is closer to the end goal than what the Raw Text output provides. Removing the Raw Text from visibility, this is the final result.
Result: Instantiated Text Viewed in Scene
Further testing shows that this behavior can be duplicated with arbitrary amounts of text. Furthermore, the user can impose font-size restrictions on the Instantiated Text Blocks by defining a minimum and maximum font size in the Raw Text Auto-Size properties. Below, you can see the changes in font-size as some elements are removed, as well as an imposed maximum font-size when the amount of text is reduced to two entries.
Four Entries - Increased Font Size / Maximum Font Size Not Reached
Two Entries - Maximum Font Size Reached / Padding Increased
I have different images which all have some kind of border around the "real" image. What I would like to achieve is to find the "real" image (its size and location in pixels).
For me the challenge is that the border is not always black (it can be any shade of black or grey with a lot of noise) and the "real" image (water with a shark in this example) can have any combination of color, saturation, and so on.
Now, in general I'm aware of algorithms like Canny, blob detection, Hough lines, and so on, but I have just started using them. So far I managed to find the border for a specific image, but as soon as I apply the same algorithms and parameters to the next image it doesn't work. My current approach looks like this (pseudo code):
Convert to gray: CvInvoke.CvtColor(_processedImage, tempMat, CvEnum.ColorConversion.Rgb2Gray)
Downsample/upsample: CvInvoke.PyrDown(srcImage, targetImage) and CvInvoke.PyrUp(srcImage, targetImage)
Blur: CvInvoke.GaussianBlur(_processedImage, bluredImage, New Drawing.Size(5, 5), 0)
Binarize: CvInvoke.Threshold(_processedImage, blackWhiteImage, _parameters.BinarizeThreshold, 255, CvEnum.ThresholdType.Binary)
Detect edges: CvInvoke.Canny(_processedImage, imgEdges, 60, 100)
Find contours: CvInvoke.FindContours(_processedImage, contours, Nothing, CvEnum.RetrType.External, CvEnum.ChainApproxMethod.ChainApproxSimple)
Assume that the largest contour is the real image
I already tried different approaches based on for example:
Thresholding saturation channel and bounding box
Thresholding, canny edge and finding contours
Any hint especially on how to find proper parameters (that apply for all images) for algorithms like (adaptive) threshold and canny as well as ideas for improving the processing pipeline would be highly appreciated.
You can try to subtract a black image from this image, and you will get the inside image. One way to do this:
Use image subtraction to compare images in C#
If the border was uniform, this would be easy. Use cv::reduce to find the MIN and MAX of each row and column; then count the top, left, bottom and right rows/columns whose MIN and MAX are equal (or very close) to the pixel value in a nearby corner. For sanity, maybe check that the border colour is the same on all sides.
In your example the border contains faint red stuff, but a row/column approach might still be a useful way to simplify the problem. Maybe, as Nofar suggests, take an absolute difference with what you think is the background colour; square it, convert to grey, then reduce to sums of rows and columns. You still need to find edges, but you have reduced the data from two dimensions to one.
If there's a large border and lots of noise, maybe iterate: in the second pass, exclude the rows you think comprise the border from the statistics on columns (and vice versa).
EDIT: The above only works for an upright rectangle! If it could be rotated then the row/column projection method won't work. In that case I might go for sum-of-squared differences as above (don't start by converting to grey as it could throw away information), followed by blurring or some morphology, edge detection then some kind of Hough transform to find straight edges.
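For the axis-aligned case, a minimal Emgu CV sketch of the row projection idea might look like this (the method name, tolerance, and grayscale input are assumptions; columns work the same way with ReduceDimension.SingleRow):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Counts how many rows at the top look like border, i.e. their MIN and MAX stay
// within 'tolerance' of the value of the top-left corner pixel.
static int CountTopBorderRows(Image<Gray, byte> gray, int tolerance)
{
    byte corner = gray.Data[0, 0, 0];

    using (var rowMin = new Mat())
    using (var rowMax = new Mat())
    {
        // ReduceDimension.SingleCol collapses every row to a single value.
        CvInvoke.Reduce(gray, rowMin, ReduceDimension.SingleCol, ReduceType.ReduceMin, DepthType.Cv8U);
        CvInvoke.Reduce(gray, rowMax, ReduceDimension.SingleCol, ReduceType.ReduceMax, DepthType.Cv8U);

        var mins = rowMin.ToImage<Gray, byte>();
        var maxs = rowMax.ToImage<Gray, byte>();

        int border = 0;
        while (border < gray.Height &&
               System.Math.Abs(mins.Data[border, 0, 0] - corner) <= tolerance &&
               System.Math.Abs(maxs.Data[border, 0, 0] - corner) <= tolerance)
            border++;
        return border;
    }
}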
I have a map of a camping site; this is it:
Now, on this map there are a lot of camping places, and all of the places (yellow, pink and the striped yellow) need to be clickable.
So my question is, how would I achieve this? I was thinking about using SVG or something. Is this a good solution?
Basic idea: Create a color map to look up which spot the user has clicked.
To create that color map, start with the original map, overlay it with an empty bitmap and write a small tool application to help you:
it should let you paint a filled circle in a special color for each site spot
ideally those colors should allow you to re-construct the number and type of the places
upon each click the next color should be prepared
you don't need to match the places too exactly, but it is up to you to improve the color map with a paint program; put the original map in a layer under it and use an eyedropper tool to get the right color, and then draw the places a little better
as many of the places have consecutive numbers you can
count them up with each click
use an input box to set new starting numbers
For the actual application you should
hold the color map in memory
use the MouseClick of a PictureBox to get the coordinates of the place
multiply (or rather divide) those with the zoom factor
use GetPixel on the color map to get the color and then
extract the place number.
An ARGB color has 3 color bytes; two will suffice for the place numbers and you will still have one byte for the color-coded types of places.
The zoom factor is 1f * PictureBox.ClientSize.Width / PictureBox.Image.Width.
For the best user experience I would use the PictureBox.MouseMove to look up the place in the color map and give feedback whenever the color changes, including switching the mouse cursor between Hand and Default whenever the location is clickable, i.e. has a non-transparent color on the color map.
To avoid artifacts, the color map must be stored as PNG, not as JPG!
If you want more info with the places you could (and should) create a Place class and hold a Dictionary<Color, Place> to look up a Place by Color.
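A minimal WinForms sketch of the click lookup (field names, the bit layout of the place number, and SizeMode = Zoom with a matching aspect ratio are assumptions):

using System.Drawing;
using System.Windows.Forms;

public partial class MapForm : Form
{
    Bitmap colorMap;   // the color map, same pixel size as the original map image (a PNG)

    void picMap_MouseClick(object sender, MouseEventArgs e)
    {
        var pic = (PictureBox)sender;

        // Undo the PictureBox zoom to get coordinates in color-map space.
        float zoom = 1f * pic.ClientSize.Width / pic.Image.Width;
        int x = (int)(e.X / zoom);
        int y = (int)(e.Y / zoom);
        if (x < 0 || y < 0 || x >= colorMap.Width || y >= colorMap.Height) return;

        Color c = colorMap.GetPixel(x, y);
        if (c.A == 0) return;                 // transparent: not a clickable place

        int placeNumber = (c.R << 8) | c.G;   // two bytes carry the place number
        int placeType = c.B;                  // the remaining byte encodes the place type
        MessageBox.Show($"Place {placeNumber}, type {placeType}");
    }
}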
If you put the image in a PictureBox, say, you could use the MouseClick event.
I'm working on an experimental project in which the challenge is to identify and extract an image of the icon or control that the user has clicked on/touched. The method I'm trying is as follows (I need some help with step 3):
1) Take a screen shot when the user clicks/touches the screen:
2) Apply edge detection:
3) Extract the possible icon images around the Point associated with the user's cursor (I don't know how to do this)
There are easier cases in which the mouse-over event will highlight the icon/control, which allows me to identify the control with a simple screen shot comparison (before and after mouse-over). The above method is specifically for cases in which the icon is not highlighted. I'm new to emgu, so if anyone has any pointers on how to better achieve this, I'm all ears.
Cheers! Matt
Instead of doing edge detection, consider taking the following steps (a rough sketch follows the list):
Only grab pixels which are within a certain radius of the point of the user's cursor. Create a new image with just these pixels.
Use thresholding to classify into foreground and background.
Calculate the centroid (use the mean x coordinate and mean y coordinate). Calculate the deviation from the mean, and discard foreground pixels which are beyond a certain deviation from the mean, e.g. discard pixels that are more than 1.6 deviations from the mean.
(You may need to experiment with this step.)
Use a convex hull to find the area of the image with the icon in it.
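A rough Emgu CV sketch of these steps; the window radius, the Otsu threshold, and the 1.6-deviation cutoff are all assumptions to experiment with:

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static Rectangle FindIconRegion(Image<Gray, byte> screen, Point cursor, int radius = 40)
{
    // 1. Only grab pixels within a window around the cursor.
    Rectangle window = Rectangle.Intersect(
        new Rectangle(cursor.X - radius, cursor.Y - radius, 2 * radius, 2 * radius),
        new Rectangle(0, 0, screen.Width, screen.Height));
    Image<Gray, byte> roi = screen.Copy(window);

    // 2. Threshold (Otsu) to split foreground from background.
    var mask = new Image<Gray, byte>(roi.Size);
    CvInvoke.Threshold(roi, mask, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

    // 3. Centroid and deviation of the foreground pixel coordinates; drop outliers.
    var xs = new List<int>(); var ys = new List<int>();
    for (int y = 0; y < mask.Height; y++)
        for (int x = 0; x < mask.Width; x++)
            if (mask.Data[y, x, 0] > 0) { xs.Add(x); ys.Add(y); }
    if (xs.Count == 0) return window;

    double meanX = xs.Average(), meanY = ys.Average();
    double devX = Math.Sqrt(xs.Average(x => (x - meanX) * (x - meanX)));
    double devY = Math.Sqrt(ys.Average(y => (y - meanY) * (y - meanY)));

    var kept = new List<Point>();
    for (int i = 0; i < xs.Count; i++)
        if (Math.Abs(xs[i] - meanX) <= 1.6 * devX && Math.Abs(ys[i] - meanY) <= 1.6 * devY)
            kept.Add(new Point(xs[i], ys[i]));

    // 4. Convex hull of the surviving pixels; its bounding box is the icon area.
    using (var pts = new VectorOfPoint(kept.ToArray()))
    using (var hull = new VectorOfPoint())
    {
        CvInvoke.ConvexHull(pts, hull);
        Rectangle box = CvInvoke.BoundingRectangle(hull);
        box.Offset(window.X, window.Y);    // back to screen coordinates
        return box;
    }
}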
I want to create some heat-map style tiles to overlay over our base maps using OpenLayers. Basically, I want to divide some bounding box into a grid, and display each square of the grid using a different color based on how many points of a sample fall within that grid square.
The technologies involved are C#, OpenLayers, SQL Server 2008 and GeoServer.
My question is basically one of general approach, I'm not really sure where to put the tip of the chisel on this one.
My ultimate goal is to be able to take any arbitrary bounding box, calculate an x-mile by x-mile grid that fits within that bounding box, then iterate over a collection of individual points and assign each one to a grid square so I can calculate point density per grid square, then color the grid according to the densities, and finally overlay that on a CloudMade base map using OpenLayers.
Any help at all would be greatly appreciated, on the whole thing or any piece of it.
If your bounding box is axis aligned, this is fairly simple. Just make your image, and create a world file for it by hand. The world file is just 6 lines of text, and you already know everything needed (x & y pixel size, coordinate of your upper left corner).
Just make sure that you use the CENTER of the upper left corner pixel, not the corner of the box.
------ Here's how you'd make the world file -------
Say your bounding box's upper left corner is at 203732x598374, and you want an image that has rectangles that are 200m wide east<->west and 300m tall north<->south.
You'd make an image that was the appropriate number of pixels, then a world file that had the following 6 lines:
200
0
0
-300
203632
598524
This corresponds to:
200 == size of one pixel in X
0 == shear1
0 == shear2
-300 == size of one pixel in Y (from top down)
203632 == left edge - 1/2 pixel size (to center on pixel instead of edge of box)
598524 == top edge - 1/2 pixel size (to center on pixel instead of edge of box)
If you use a .png image, you'll want to save this with the same name, but as .pgw. If you use a .jpg, it'd be .jgw, etc.
For complete details, see:
Wiki on World Files
"Dividing some some bounding box into a grid, and displaying each square of the grid using a different color based on how many points of a sample fall within that grid square." This is a raster and there are features in GeoServer for displaying these with colour shading, legends and so on. I think it will be more flexible to use these features than to create image tiles in C#.
From the GeoServer documentation:
Raster data is not merely a picture, rather it can be thought of as a grid of georeferenced information, much like a graphic is a grid of visual information (with combination of reds, greens, and blues). Unlike graphics, which only contain visual data, each point/pixel in a raster grid can have lots of different attributes, with possibly none of them having an inherently visual component.
This is also called thematic mapping or contour plots or heatmaps or 2.5D plots in other GIS packages.
You could use a free GIS like GRASS to create the raster grids, but from your description you don't need to interpolate (because every cell contains at least one point), so it might be just as easy to roll your own code.
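If you do roll your own, a minimal sketch of the binning step could look like this (the coordinate system, the point representation, and all names are assumptions; cellSize is your x-mile spacing in map units):

using System;
using System.Collections.Generic;

// Counts sample points per grid cell over an axis-aligned bounding box.
static int[,] BuildDensityGrid(IEnumerable<(double X, double Y)> points,
                               double minX, double minY, double maxX, double maxY,
                               double cellSize)
{
    int cols = (int)Math.Ceiling((maxX - minX) / cellSize);
    int rows = (int)Math.Ceiling((maxY - minY) / cellSize);
    var counts = new int[rows, cols];

    foreach (var p in points)
    {
        if (p.X < minX || p.X >= maxX || p.Y < minY || p.Y >= maxY) continue;
        int col = (int)((p.X - minX) / cellSize);
        int row = (int)((p.Y - minY) / cellSize);
        counts[row, col]++;   // map each count to a colour when rendering the overlay
    }
    return counts;
}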
EDIT: there is an open source library GDAL which you can use to write raster files in various formats. There are C# bindings.
I think the formulas for computing the center of the upper left pixel are wrong. In the example, the center of the upper left pixel would be down and to the right of (203732,598374). So shouldn't it be the following?
203832 == left edge + 1/2 pixel size (to center on pixel instead of edge of box)
598224 == top edge - 1/2 pixel size (to center on pixel instead of edge of box)
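Using these corrected formulas, a minimal sketch of writing the world file in C# (the method, file naming, and parameter names are illustrative):

using System.Globalization;
using System.IO;

// Writes the six-line world file next to the image; pixelSizeY is passed as a positive value.
static void WriteWorldFile(string imagePath, double pixelSizeX, double pixelSizeY,
                           double upperLeftX, double upperLeftY)
{
    string worldPath = Path.ChangeExtension(imagePath, ".pgw");   // .jgw for .jpg, and so on
    var lines = new[]
    {
        pixelSizeX.ToString(CultureInfo.InvariantCulture),                     // pixel size in X
        "0",                                                                   // shear terms
        "0",
        (-pixelSizeY).ToString(CultureInfo.InvariantCulture),                  // pixel size in Y (top-down)
        (upperLeftX + pixelSizeX / 2).ToString(CultureInfo.InvariantCulture),  // center of the upper-left pixel
        (upperLeftY - pixelSizeY / 2).ToString(CultureInfo.InvariantCulture)
    };
    File.WriteAllLines(worldPath, lines);
}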