var random = new Random();
Canvas.SetLeft(rectangle, random.Next((int)(ImageCanvas.Width - 100)));
Canvas.SetTop(rectangle, random.Next((int)(ImageCanvas.Height - 100)));
return rectangle;
So the above code just randomly sets the Top and Left positions of a rectangle that will appear on the canvas. I can easily reuse this code if I want multiple rectangles to appear on the screen; what I was having trouble with is tweaking the code so that the rectangles never overlap each other.
I thought of maybe running random.Next((int)(ImageCanvas.Height - 100)) in a while loop until it is not equal to the previous random value, but that isn't enough. The shapes are quite big, so slightly different X or Y coordinates don't prevent an overlap; the rectangles would need to be at least 50 pixels or so apart from each other to avoid overlapping.
Assuming your Canvas is reasonably large, i.e. the rectangles will not occupy a large amount of the area, it most likely suffices to simply generate rectangles at random (as in your example code), and then check to make sure they don't overlap with any of the previously selected rectangles.
Note that "overlaps with another rectangle" is really the same as "has a non-empty intersection with another rectangle". And .NET provides that functionality; for WPF, you should use the System.Windows.Rect struct. It even has an IntersectsWith() method, giving the information you need in a single call (otherwise you'd have to get the intersection as one step, and then check to see if the result is empty in a second step).
The whole thing might look something like this:
List<Rectangle> GenerateRectangles(Canvas canvas, int count, Size size)
{
    Random random = new Random();
    List<Rect> rectangles = new List<Rect>(count);

    while (count-- > 0)
    {
        Rect rect;

        // Keep generating candidate positions until the new rect
        // doesn't intersect any previously accepted one.
        do
        {
            rect = new Rect(random.Next((int)(canvas.Width - size.Width)),
                            random.Next((int)(canvas.Height - size.Height)),
                            size.Width, size.Height);
        } while (rectangles.Any(r => r.IntersectsWith(rect)));

        rectangles.Add(rect);
    }

    return rectangles.Select(r =>
    {
        Rectangle rectangle = new Rectangle();

        rectangle.Width = r.Width;
        rectangle.Height = r.Height;
        Canvas.SetLeft(rectangle, r.Left);
        Canvas.SetTop(rectangle, r.Top);

        return rectangle;
    }).ToList();
}
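A hypothetical call site (using the ImageCanvas from the question and the 100x100 size from the original snippet) might then look like this; note the generated rectangles still need a Fill or Stroke to be visible:
foreach (Rectangle rectangle in GenerateRectangles(ImageCanvas, 5, new Size(100, 100)))
{
    rectangle.Fill = Brushes.SteelBlue; // give each rectangle a visible brush
    ImageCanvas.Children.Add(rectangle);
}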
You would want something more sophisticated if you were dealing with a more constrained area and/or a larger number of rectangles. The above won't scale well for large numbers of rectangles, especially if the probability of a collision is high. But for your stated goals, it should work fine.
Well, I'm continuing from this unanswered question (Smoothing random noises with different amplitudes) with another question.
I have opted to use the contour/shadow of a shape (translating/transforming its list of points outward from its center by an offset/distance).
This contour/shadow is bigger than the current path. I used this repository (https://github.com/n-yoda/unity-vertex-effects) to recreate the shadow, and this works pretty well, except for one thing.
To get the height of each point (obtained by the shadow algorithm, Line 13 of ModifiedShadow.cs & Line 69 of CircleOutline.cs), I take the distance of the current point to the center and divide it by the maximum distance to the center:
float dist = orig.Max(v => (v - Center).magnitude);
foreach point p in poly --> float d = 1f - (Center - p).magnitude / dist;
Where orig is the entire list of points obtained by the shadow algorithm, and d is the height of the shadow.
But the problem is obvious: I get a perfect circle:
In red and black to see the contrast:
And this is not what I want:
As you can see, this is not a perfect gradient. Let me explain what's happening.
I use this library to generate noise: https://github.com/Auburns/FastNoise_CSharp
Note: if you want to know what I use to get noises with different amplitudes, see Smoothing random noises with different amplitudes (first block of code); to see this in action, see this repo.
Green background color represents noises with a mean height of -0.25 and an amplitude of 0.3
White background color represents noises with a mean height of 0 and an amplitude of 0.1
Red means 1 (total interpolation for noises corresponding to white pixels)
Black means 0 (total interpolation for noises corresponding to green pixels)
That's why we have this output:
Actually, I have tried comparing the distances of each individual point to the center, but this outputs a weird and unexpected result.
At this point, I don't know what else to try...
The problem is that the lerp percentage (e.g., from high/low or "red" to "black" in your visualization) is only a function of the point's distance from the center, which is divided by a constant (which happens to be the maximum distance of any point from the center). That's why it appears circular.
For instance, the centermost point on the left side of the polygon might be 300 pixels away from the center, while the centermost point on the right might be 5 pixels. Both need to be red, but basing it off of 0 distance from center = red won't have either be red, and basing it off the min distance from center = red will only have red on the right side.
The relevant minimum and maximum distances will change depending on where the point is.
One alternative method is for each point: find the closest white pixel, and find the closest green pixel, (or, the closest shadow pixel that is adjacent to green/white, such as here). Then, choose your redness depending on how the distances compare between those two points and the current point.
Therefore, you could do this (pseudo-C#):
foreach pixel p in shadow_region {
    // technically: the distance to the closest shadow pixel which is adjacent to a green/white pixel
    float closestGreen_distance = +inf;
    float closestWhite_distance = +inf;

    // Possibly: find all shadow-adjacent pixels prior to the outer loop
    // and cache them. Then, you only have to loop through those pixels.
    foreach pixel p2 in shadow {
        float p2Dist = (p - p2).magnitude;

        if (p2 is adjacent to green) {
            if (p2Dist < closestGreen_distance) {
                closestGreen_distance = p2Dist;
            }
        }

        if (p2 is adjacent to white) {
            if (p2Dist < closestWhite_distance) {
                closestWhite_distance = p2Dist;
            }
        }
    }

    float d = 1f - closestWhite_distance / (closestWhite_distance + closestGreen_distance);
}
Using the code you've posted in the comments, this might look like:
foreach (Point p in value)
{
    float minOuterDistance = outerPoints.Min(p2 => (p - p2).magnitude);
    float minInnerDistance = innerPoints.Min(p2 => (p - p2).magnitude);
    float d = 1f - minInnerDistance / (minInnerDistance + minOuterDistance);

    Color32? colorValue = func?.Invoke(p.x, p.y, d);
    if (colorValue.HasValue)
        target[F.P(p.x, p.y, width, height)] = colorValue.Value;
}
The above part was chosen for the solution. The below part, mentioned as another option, turned out to be unnecessary.
If you can't determine if a shadow pixel is adjacent to white/green, here's an alternative that only requires the calculation of the normals of each vertex in your pink (original) outline.
Create outer "yellow" vertices by going to each pink vertex and following its normal outward. Create inner "blue" vertices by going to each pink vertex and following its normal inward.
Then, when looping through each pixel in the shadow, loop through the yellow vertices to get your "closest to green" and through the blue to get "closest to white".
The problem is that since your shapes aren't fully convex, these projected blue and yellow outlines might be inside-out in some places, so you would need to deal with that somehow. I'm having trouble determining an exact method of dealing with that but here's what I have so far:
One step is to ignore any blues/yellows that have outward-normals that point towards the current shadow pixel.
However, if the current pixel is inside of a point where the yellow/blue shape is inside out, I'm not sure how to proceed. There might be something to ignoring blue/yellow vertexes that are closer to the closest pink vertex than they should be.
extremely rough pseudocode:
list yellow_vertex_list = new list
list blue_vertex_list = new list

foreach pink vertex p:
    given float dist;
    vertex yellowvertex = new vertex(p + normal * dist)
    vertex bluevertex = new vertex(p - normal * dist)
    yellow_vertex_list.add(yellowvertex)
    blue_vertex_list.add(bluevertex)

create shadow

for each pixel p in shadow:
    foreach vertex v in blue_vertex_list:
        if v.normal points towards p: continue
        if v is on the wrong side of an inside-out region: continue
        if v is closest so far:
            closest_blue = v
            closest_blue_dist = (v - p).magnitude
    foreach vertex v in yellow_vertex_list:
        if v.normal points towards p: continue
        if v is on the wrong side of an inside-out region: continue
        if v is closest so far:
            closest_yellow = v
            closest_yellow_dist = (v - p).magnitude
    float d = 1f - closest_blue_dist / (closest_blue_dist + closest_yellow_dist)
I need to graph rectangles of different heights and widths in a C# application. The rectangles may or may not overlap.
I thought the System.Windows.Forms.DataVisualization.Charting would have what I need, but every chart type I've explored wants data points composed of a single value in one dimension and multiple values in the other.
I've considered: Box, Bubble, and Range Bar.
It turns out that Richard Eriksson has the closest answer in that the Charting package doesn't contain what I needed. The solution I'm moving forward with is to use a Point chart to manage axes and whatnot, but handle the PostPaint event to effectively draw the rectangles I need on top. The Chart provides value-to-pixel (and vice versa) conversions.
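As a rough illustration of that approach (not the exact code from the project; the chart name chart1, the dataRects list, and the brush are assumptions), a PostPaint handler can convert data values to pixel coordinates and draw on top of the chart:
// Assumes: a Chart named chart1 with one ChartArea, and a List<RectangleF>
// named dataRects holding rectangles in data (axis) coordinates.
chart1.PostPaint += (sender, e) =>
{
    // PostPaint fires for several chart elements; draw once, when the chart area paints.
    if (!(e.ChartElement is ChartArea)) return;

    ChartArea area = chart1.ChartAreas[0];
    Graphics g = e.ChartGraphics.Graphics;

    foreach (RectangleF r in dataRects)
    {
        // ValueToPixelPosition is only valid while the chart is rendering,
        // which is exactly the situation inside this event handler.
        float x1 = (float)area.AxisX.ValueToPixelPosition(r.Left);
        float x2 = (float)area.AxisX.ValueToPixelPosition(r.Right);
        float y1 = (float)area.AxisY.ValueToPixelPosition(r.Top);
        float y2 = (float)area.AxisY.ValueToPixelPosition(r.Bottom);

        g.FillRectangle(Brushes.LightSteelBlue,
            Math.Min(x1, x2), Math.Min(y1, y2),
            Math.Abs(x2 - x1), Math.Abs(y2 - y1));
    }
};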
Here is a minimal example that throws 100 squares of different colors and sizes randomly onto one Chart of ChartType Point with custom Marker Images.
You can modify it to decouple the data points from the colors, allow for any sizes or shapes, etc.:
int count = 100;
int mSize = 60;                          // marker size
List<Color> colors = new List<Color>();  // a color list
for (int i = 0; i < count; i++)
    colors.Add(Color.FromArgb(255, 255 - i * 2, (i * i) % 256, i * 2));

Random R = new Random(99);

for (int i = 0; i < count; i++)  // create and store the marker images
{
    int w = 10 + R.Next(50);     // inner width of visible marker
    int off = (mSize - w) / 2;
    Bitmap bmp = new Bitmap(mSize, mSize);
    using (Graphics G = Graphics.FromImage(bmp))
    {
        G.Clear(Color.Transparent);
        G.FillRectangle(new SolidBrush(colors[i]), off, off, w, w);
        chart5.Images.Add(new NamedImage("NI" + i, bmp));
    }
}

for (int i = 0; i < count; i++)  // now add a few points to random locations
{
    int p = chart5.Series["S1"].Points.AddXY(R.Next(100), R.Next(100));
    chart5.Series["S1"].Points[p].MarkerImage = "NI" + p;
}
Note that this is really just a quick example; in the link to the original answer about a heat map I show how to resize the Markers along with the Chart. Here they will always stay the same size:
I have lowered the Alpha of the colors for this image from 255 to 155, btw.
The sizes also stay fixed when zooming in on the Chart; see how nicely they drift apart, so you can see the space between them:
This may or may not be what you want, of course..
Note that I had disabled both Axes in the first images for nicer looks. For zooming I have turned them back on so I get the simple reset button..
Also note that posting the screenshots here introduces some level of resizing, which doesn't come from the chart!
I have a small project in WPF, in which I am required to interchange UIElements. Something resembling iGoogle's functionality.
Due to the fact that I can't post pictures (not enough reputation) I will explain in text. I have a 3x3 grid defined like this:
0 1 2
0 C e C
1 e e e
2 L e C
Where C = canvas, L = label, e = empty cell (column+row).
In the MouseMove event, I'm keeping track of my currently selected canvas and I go through a list of all the other canvases available in the grid to check if they are overlapping. And here comes the problem; even though I'm moving the canvas from (0,0) to the right by 1 pixel, it detects that it is intersecting with the canvas from (2,2).
I am using Rect.Intersect(r1, r2) to determine the intersected area and it should return an empty Rect, because r1 is not overlapping r2, but instead it always returns a non-empty Rect.
// Create the rectangle with the moving element width and height
Size draggedElementSize = new Size(this.DraggedElement.ActualWidth, this.DraggedElement.ActualHeight);
Rect draggedElementRect = new Rect(draggedElementSize);

foreach (Canvas c in canvases)
{
    // Create a rectangle for each canvas
    Size s = new Size(c.ActualWidth, c.ActualHeight);
    Rect r = new Rect(s);

    // Get the intersected area
    Rect currentIntersection = Rect.Intersect(r, draggedElementRect);

    if (currentIntersection == Rect.Empty) // this is never true
        return;
} // end-foreach
I am doing various other things inside the loop, but they don't interact in any way with this, since this isn't working properly.
I'd appreciate any help whatsoever.
Thanks.
Nowhere in your code example are you offsetting the rects by location; you're only setting their size.
So of course all your rects start at Point(0,0), and therefore all intersect.
You'll need to transform the rects from the element you're checking to their parent.
The quickest way to accomplish this is VisualTreeHelper.GetOffset:
// Create the rectangle with the moving element width and height
Size draggedElementSize = new Size(this.DraggedElement.ActualWidth, this.DraggedElement.ActualHeight);
Rect draggedElementRect = new Rect(draggedElementSize);
draggedElementRect.Offset(VisualTreeHelper.GetOffset(this.DraggedElement));

foreach (Canvas c in canvases)
{
    if (this.DraggedElement == c) continue; // skip the dragged element

    // Create a rectangle for each canvas
    Size s = new Size(c.ActualWidth, c.ActualHeight);
    Rect r = new Rect(s);
    r.Offset(VisualTreeHelper.GetOffset(c));

    // Get the intersected area
    Rect currentIntersection = Rect.Intersect(r, draggedElementRect);

    if (currentIntersection == Rect.Empty) // now this can actually be true
        return;
} // end-foreach
You might want to make sure you skip the currently dragged element, as indicated.
I don't see any references to positions in your code, only width and height. Do you really want to start all your rectangles at 0/0? Most likely, they will all overlap. You need to include the x/y coordinates.
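As a minimal sketch of what including the position could look like (parentGrid here is a placeholder for whatever common ancestor contains the canvases):
// Translate the canvas's top-left corner into the shared parent's coordinate space,
// then build the Rect from that position plus the element's size.
Point topLeft = c.TranslatePoint(new Point(0, 0), parentGrid);
Rect r = new Rect(topLeft.X, topLeft.Y, c.ActualWidth, c.ActualHeight);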
I've been making a top-down shooter game in XNA that requires rectangular collision for the map.
The collision walls for a map are stored in a text file in the format: rect[0,0,1024,8]
The values correspond to defining a rectangle (x, y, width, height).
I've been thinking that I could write a separate application that can iterate through the data of the map image, find the pixels that are black (or whatever the wall color is), and make rectangles there. Basically, this program will generate the rectangles required for the collision. Ideally, it would be pixel perfect, which would require something like a thousand rectangles, each 1 pixel wide, covering all the walls.
Is there a possible way to detect which of these rectangles (or squares, I should say) are adjacent to one another, then connect them into a bigger (but still covering the same area) rectangle?
E.g. let's say I have a wall that is 10 by 2. The program would generate 20 different rectangles, each a single pixel in size. How would I efficiently detect that these rectangles are adjacent and automatically make a 10 by 2 rectangle covering the whole wall, instead of having 20 different little pixel rectangles?
EDIT: I've worked out a solution that fits my purposes; for future reference, my code is below:
// map is a Bitmap; horizontalCollisions and collisions are List<Rectangle>s
for (int y = 0; y < map.Height; y++) // loop through pixels
{
    for (int x = 0; x < map.Width; x++)
    {
        if (map.GetPixel(x, y).Name == "ff000000") // wall color
        {
            int i = 1;
            // extend the strip to the right until a non-wall pixel or the image edge
            while (x + i < map.Width && map.GetPixel(x + i, y).Name == "ff000000")
            {
                i++;
            }
            Rectangle r = new Rectangle(x, y, i, 1); // create and add the horizontal strip
            x += i - 1;
            horizontalCollisions.Add(r);
        }
    }
}

for (int j = 0; j < horizontalCollisions.Count; j++)
{
    int i = 1;
    Rectangle current = horizontalCollisions[j];
    Rectangle r = new Rectangle(current.X, current.Y + 1, current.Width, 1);
    // merge strips with the same X and Width that sit directly below
    while (horizontalCollisions.Contains(r))
    {
        i++;
        horizontalCollisions.Remove(r);
        r = new Rectangle(current.X, current.Y + i, current.Width, 1);
    }
    Rectangle add = new Rectangle(current.X, current.Y, current.Width, i);
    collisions.Add(add);
}
// collisions now has all the rectangles
Basically, it loops through the pixel data horizontally. When it encounters a wall pixel, it starts a counter and (using a while loop) moves it to the right, one pixel at a time, until it hits a non-wall pixel. Then it creates a rectangle of that width and continues on. After this process, there is a big list of rectangles, each 1px tall; basically, a bunch of horizontal lines. The next loop runs through the horizontal lines and, using the same process as above, finds out if there are any rectangles with the same X value and the same Width value directly under it (y+1). This keeps incrementing until there are none, at which point one big rectangle is created and the used rectangles are deleted from the List. The final resulting list contains all the rectangles that make up all the black pixels on the image (pretty efficiently, I think).
Etiquette may suggest that I should comment this instead of add it as an answer, but I do not yet have that capability, so bear with me.
I'm afraid I am not able to translate this into code for you, but I can point you towards some academic papers that discuss algorithms that can do some of the things you're asking.
Other times this question has appeared:
Find the set of largest contiguous rectangles to cover multiple areas
Puzzle: Find largest rectangle (maximal rectangle problem)
Papers linked in those questions:
Fast Algorithms To Partition Simple Rectilinear Polygons
Polygon Decomposition
The Maximal Rectangle Problem
Hopefully these questions and papers can help lead you to the answer you're looking for, or at least scare you off towards finding another solution.
This is my first time here (with an account). I'm looking to make a height-map editor with XNA 4.0 (somewhat similar to Earth2150's, if you've played it).
I've written a custom Effect File here: http://pastebin.com/CUFtB8Z9
It blends textures just fine, except it blends over the entire map.
What I really want is to be able to have multiple textures on my heightmap (which I'll then blend with the nearest other texture), and I am looking for ways to do this.
I thought about assigning a float in my Vertex Declaration, then using an array of textures to "assign" a texture to a specific vertex. But how would I go about getting my effect file to take in a different value for a texture on each vertex?
Sorry about not being very clear; here are my Draw code and my Vertex Declaration:
(Excuse the random number changing; it was my attempt to get each vertex to pick a random texture.)
public void Draw(Texture2D[] TextureArray)
{
    RasterizerState rs = new RasterizerState();
    rs.CullMode = CullMode.None;
    //rs.FillMode = FillMode.WireFrame;
    EditGame.Instance.GraphicsDevice.RasterizerState = rs;
    Random rnd = new Random();

    foreach (EffectPass pass in EditGame.Instance.baseEffect.CurrentTechnique.Passes)
    {
        if (SlowCounter == 60)
        {
            EditGame.Instance.baseEffect.Parameters["xTexture"].SetValue(TextureArray[rnd.Next(0, 2)]);
            EditGame.Instance.baseEffect.Parameters["bTexture"].SetValue(TextureArray[rnd.Next(0, 2)]);
            SlowCounter = 0;
        }
        pass.Apply();
        EditGame.Instance.GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3, VP2TC.VertexDeclaration);
    }

    SlowCounter++;
}

public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
    new VertexElement(20, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1),
    new VertexElement(28, VertexElementFormat.Single,  VertexElementUsage.BlendWeight, 0),
    new VertexElement(32, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
    new VertexElement(44, VertexElementFormat.Color,   VertexElementUsage.Color, 0)
);
As I said in my comment, I'm not certain this is what you're looking for but I'll go ahead anyway.
I think what you probably want is described here.
Essentially you have a Vector4 which stores the weight of each texture, and then you take a weighted average of all 4 textures, weighted by the individual elements of the vector (acting as 4 blend weights).
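On the CPU side, that could look something like the sketch below (this is not your existing declaration; it assumes you swap the Single blend weight for a Vector4, and that your effect file samples four textures and combines them as w.x*tex0 + w.y*tex1 + w.z*tex2 + w.w*tex3):
// Hypothetical vertex layout: position, UV, and four per-texture blend weights.
public readonly static VertexDeclaration BlendedVertexDeclaration = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
    new VertexElement(20, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0)
);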
If you want to blend textures without having a blend element for every single texture, things get more fun.
You could have a single blend weight, which essentially picks the blending of 2 adjacent textures in order. So if you have:
Snow
Grass
Rock
Sand
Blend Weight = 0.5
Would pick a blend of Grass and Rock in equal amounts, since 0.5 falls halfway between their positions in the ordered list.
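A small sketch of how such a single weight could be mapped to the two adjacent textures and a mix factor (the helper name and the even spacing of the textures are assumptions for illustration):
// Hypothetical helper: maps one blend weight in [0,1] onto an ordered texture list.
// Returns the indices of the two adjacent textures and how much of the second to use.
static (int first, int second, float mix) MapBlendWeight(float weight, int textureCount)
{
    float scaled = weight * (textureCount - 1);  // e.g. 0.5 * 3 = 1.5
    int first = (int)Math.Floor(scaled);
    int second = Math.Min(first + 1, textureCount - 1);
    float mix = scaled - first;                  // 0 = all first texture, 1 = all second
    return (first, second, mix);
}

// With { Snow, Grass, Rock, Sand } and a blend weight of 0.5:
// MapBlendWeight(0.5f, 4) -> (1, 2, 0.5f), i.e. an equal blend of Grass and Rock.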
If you want lots of textures, your shader is going to become very cumbersome with ~50 texture samplers. If you really want this many textures you should consider a texture atlas or just procedurally generated virtual textures with the blending already done at generation time.