I have two areas, which are both given by their bounds.size. The z-axis doesn't matter for me, since I'm working in 2D. I want to add these vectors so that I have a vector which represents the joined area. Simply adding these vectors the normal way does not work. The way the area looks in the end is not important; it's just important that the size is the same as both areas combined.
Edit: I have the bounds.size of two polygon colliders and I want to get a value that represents the bounds.size of the two polygon colliders combined.
area 1 and area 2 combined
The way the area looks in the end is not important; it's just important that the size is the same as both areas combined.
As there are nigh infinite possibilities otherwise, I'm going to limit myself to results where x = y, for the simple reason that you don't end up with silly vectors like (0.5,80000) but rather a more balanced (200,200).
This isn't all that hard when you look at it algebraically:
float result_area = first_area + second_area;
Calculating the area is easy:
float area = myVector.X * myVector.Y;
Thus rendering the sum of the areas also easy:
float result_area = myFirstVector.X * myFirstVector.Y + mySecondVector.X * mySecondVector.Y;
For the sake of example, let's say first_area = 50 and second_area = 350, thus resulting in result_area = 400;
Since we are limited to results where x = y, the result is the square root of the area:
float theSquareRoot = (float)Math.Sqrt(result_area);
myResultVector.X = theSquareRoot;
myResultVector.Y = theSquareRoot;
As I said, there are many other possible result vectors. For other cases, you're going to have to define a given ratio (e.g. a ratio of 1 : 4 would give you (10,40) for the same example), but the calculation is a bit harder and you mentioned that you don't care about the exact shape anyway.
You could also just make a vector where X = result_area and Y = 1 (or vice versa), without having to calculate a square root.
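Putting this together, a minimal sketch in C# (assuming Unity-style Vector2 values coming from bounds.size; the class and method names are mine):

using UnityEngine;

public static class AreaUtil
{
    // Returns a square vector whose x * y equals the combined area of the
    // two input sizes (the z-axis is ignored, as in the question).
    public static Vector2 CombineSizes(Vector2 firstSize, Vector2 secondSize)
    {
        float combinedArea = firstSize.x * firstSize.y + secondSize.x * secondSize.y;
        float side = Mathf.Sqrt(combinedArea);
        return new Vector2(side, side);
    }
}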
Note that you've overengineered it. The area of an object is a one-dimensional value (a single number), yet you're expressing it using a two-dimensional value (a number pair).
Since you don't care about particular X/Y values, only what their product is, I would suggest you avoid vectors where possible, so you don't make it unnecessarily complicated.
I need to convert meters to decimal degrees in C#. I read on Wikipedia that 1 decimal degree equals 111.32 km. But that is at the equator, so if I'm located above or below it, the conversion will be wrong?
I assume this is wrong:
long sRad = (long.Parse(sRadTBx.Text)) / (111.32*1000);
EDIT: I need this search radius to find nearby users
long myLatitude = 100;
long myLongitude = 100;
long sRad = /* right formula to convert meters to decimal degrees*/
long begLat = myLatitude - sRad;
long endLat = myLatitude + sRad;
long begLong = myLongitude - sRad;
long endLong = myLongitude + sRad;
List<User> FoundUsers = new List<User>();
foreach (User user in db.Users)
{
// Check if the user in the database is within range
if (user.usrLat >= begLat && user.usrLat <= endLat && user.usrLong >= begLong && user.usrLong <= endLong)
{
// Add the user to the FoundUsers list
FoundUsers.Add(user);
}
}
Also from that very same Wikipedia article:
As one moves away from the equator towards a pole, however, one degree of longitude is multiplied by the cosine of the latitude, decreasing the distance, approaching zero at the pole.
So this would be a function of latitude:
double GetSRad(double latitude)
{
return 111.32 * Math.Cos(latitude * (Math.PI / 180));
}
or similar.
edit: So for going the other way around, converting meters to decimal degrees, you need to do this:
double MetersToDecimalDegrees(double meters, double latitude)
{
return meters / (111.32 * 1000 * Math.Cos(latitude * (Math.PI / 180)));
}
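For example, applied to the search-radius code from the question, it might be used like this (a rough sketch; the switch from long to double and the sample coordinates are my assumptions):

double myLatitude = 48.2;   // example values, not from the question
double myLongitude = 16.3;
double radiusMeters = double.Parse(sRadTBx.Text);

// Degrees of latitude per meter are (nearly) constant; degrees of
// longitude per meter depend on the latitude.
double latDelta = radiusMeters / (111.32 * 1000);
double lonDelta = MetersToDecimalDegrees(radiusMeters, myLatitude);

double begLat = myLatitude - latDelta;
double endLat = myLatitude + latDelta;
double begLong = myLongitude - lonDelta;
double endLong = myLongitude + lonDelta;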
Christopher Olsson already has a good answer, but I thought I'd fill in some of the theory too.
I've always found this webpage useful for these formulas.
A quick note on the concept
Think about the actual geometry going on.
As it stands, you are currently doing nothing more than scaling the input. Imagine the classic example of a balloon. Draw two lines on the balloon that meet at the bottom and the top. These represent lines of longitude, since they go "up and down." Quotes, of course, since there aren't really such concepts, but we can imagine. Now, if you look at each line, you'll see that they vary in distance as you go up and down their lengths. Per the original specification, they meet at the top of the balloon and the bottom, but they don't meet anywhere else. The same is true of lines of longitude. Non-Euclidean geometry tells us that lines intersect exactly twice if they intersect at all, which can be hard to conceptualize. But because of that, the distance between our lines is effectively reflected across the equator.
As you can see, the latitude greatly affects the distance between your longitudinal lines. They vary from the closest at the north and south poles, to the farthest away at the equator.
Latitudinal lines are a bit easier. They do not converge. If you're holding our theoretical balloon straight up and down, with the poles pointed straight up and straight down that is, lines of latitude will be parallel to the floor. In a more generalized sense, they will be perpendicular to the axis (a Euclidean concept) made by the poles of the longitudinal lines. Thus, the distance is constant between latitudes, regardless of your longitude.
Your implementation
Now, your implementation relies on the idea that these lines are always at a constant distance. If that were the case, you'd be able to take a simple scaling approach, as you have. If they were, in fact, parallel in the Euclidean sense, it would be not too dissimilar to the concept of converting from miles per hour to kilometers per hour. However, the variance in distance makes this much more complicated.
The distance between longitudes at the north pole is zero, and at the equator, as your cited Wikipedia page states, it's 111.32 kilometers. Consequently, to get a truly accurate result, you must account for the latitude you're looking for. That's why this gets a little more complicated.
Getting Realistic Results
Now, for the formula you want: given your recent edit, it seems that you're looking to incorporate both latitude and longitude in your assessment. Given your code example, it seems that you want to find the distance between two coordinates, and that you want it to work well at short distances. Thus, I will suggest, as the website I pointed you to at the beginning of this post suggests, a Haversine formula. That website gives lots of good information on it, but this is the formula itself. I'm copying it directly from the site, symbols and all, to make sure I don't make any stupid typos. Thus, this is, of course, JavaScript, but you can basically just change some cases and it will run in C#.
In this, φ is latitude, λ is longitude, θ is the bearing (in radians, clockwise from north), δ is the angular distance (in radians) d/R; d being the distance travelled, R the earth’s radius
var R = 6371; // km
var φ1 = lat1.toRadians();
var φ2 = lat2.toRadians();
var Δφ = (lat2-lat1).toRadians();
var Δλ = (lon2-lon1).toRadians();
var a = Math.sin(Δφ/2) * Math.sin(Δφ/2) +
Math.cos(φ1) * Math.cos(φ2) *
Math.sin(Δλ/2) * Math.sin(Δλ/2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c;
I think the only thing that must be noted here is that R, as declared in the first line, is the radius of the earth. As the comment suggests, we're already working in kilometers so you may or may not have to change that for your implementation. It's easy enough, fortunately, to find the (average) radius of the earth in your favorite units by doing a search online.
Of course, you'll also want to note that toRadians is simply the input multiplied by Math.PI, then divided by 180. Simple enough.
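For reference, here is a rough C# translation of the snippet above (the method name and signature are mine):

static double HaversineDistanceKm(double lat1, double lon1, double lat2, double lon2)
{
    const double R = 6371; // mean earth radius in km

    double ToRadians(double degrees) => degrees * Math.PI / 180.0;

    double phi1 = ToRadians(lat1);
    double phi2 = ToRadians(lat2);
    double deltaPhi = ToRadians(lat2 - lat1);
    double deltaLambda = ToRadians(lon2 - lon1);

    double a = Math.Sin(deltaPhi / 2) * Math.Sin(deltaPhi / 2) +
               Math.Cos(phi1) * Math.Cos(phi2) *
               Math.Sin(deltaLambda / 2) * Math.Sin(deltaLambda / 2);
    double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

    return R * c; // great-circle distance in km
}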
Alternative
This doesn't really look relevant to your case, but I will include it. The aforementioned formula will give accurate results, but it will be at the cost of speed. Obviously, it's a pretty small deal on any individual record, but as you build up to handle more and more, this might become an issue. If it does, and if you're dealing in a fairly centralized locale, you could work off the immense size of our planet and find numbers suitable for the distance between one degree of latitude and longitude, then treat the planet as "more or less Euclidean" (flat, that is), and use the Pythagorean Theorem to figure the values. Of course, that will become less and less accurate the further away you get from your original test site (I'd just find these numbers, personally, by asking Google Earth or a similar product). But if you're dealing with a dense cluster of users, that will be way, way, way faster than running a flurry of trigonometric formulas through the Math class.
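A sketch of that flat-earth shortcut (the reference latitude and the meters-per-degree constants here are my own rough assumptions; you would tune them for your locale):

// Assumed reference latitude near the center of your user cluster.
const double ReferenceLatitude = 48.2;

// Roughly 111,320 m per degree of latitude; a degree of longitude
// shrinks with the cosine of the latitude.
const double MetersPerDegreeLat = 111320.0;
static readonly double MetersPerDegreeLon =
    MetersPerDegreeLat * Math.Cos(ReferenceLatitude * Math.PI / 180.0);

static double ApproximateDistanceMeters(double lat1, double lon1,
                                         double lat2, double lon2)
{
    double dy = (lat2 - lat1) * MetersPerDegreeLat;
    double dx = (lon2 - lon1) * MetersPerDegreeLon;
    return Math.Sqrt(dx * dx + dy * dy); // plain Pythagoras
}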
Another, more abstract alternative
You might also want to think about where you're doing this logic. Here I begin to overstep my reach a bit, but if you happen to be storing your data in SQL Server, it already has some really cool geography functionality built right in that will handle distance calculations for you. Just check out the GEOGRAPHY type.
Edit
This is a response to a comment, suggesting that the desired result is really a rectangle denoting boundaries. Now, I would advise against this, because it isn't really a search "radius" as your code may suggest.
But if you do want to stick to that method, you'll be looking at two separate distances: one for latitude and one for longitude. This is also from that webpage. φ1 is myLatitude, and λ1 is myLongitude. This formula accepts a bearing and starting coordinates, then gives the resulting position.
var φ2 = Math.asin( Math.sin(φ1)*Math.cos(d/R) + Math.cos(φ1)*Math.sin(d/R)*Math.cos(brng) );
var λ2 = λ1 + Math.atan2(Math.sin(brng)*Math.sin(d/R)*Math.cos(φ1), Math.cos(d/R)-Math.sin(φ1)*Math.sin(φ2));
You could use that to determine the boundaries of your search rectangle.
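Ported to C# and pointed north, south, east and west, that might look something like this (a sketch; the method and variable names are mine, and it ignores edge cases near the poles and the date line):

// Computes the latitude/longitude bounds of a search rectangle around a
// center point, using the destination-point formula above.
static void SearchBounds(double latDeg, double lonDeg, double distanceKm,
                         out double begLat, out double endLat,
                         out double begLong, out double endLong)
{
    const double R = 6371.0;            // mean earth radius, km
    double delta = distanceKm / R;      // angular distance d/R
    double phi1 = latDeg * Math.PI / 180.0;
    double lambda1 = lonDeg * Math.PI / 180.0;

    // Bearings 0 and 180 (north/south) give the latitude bounds.
    double phiNorth = Math.Asin(Math.Sin(phi1) * Math.Cos(delta) +
                                Math.Cos(phi1) * Math.Sin(delta));
    double phiSouth = Math.Asin(Math.Sin(phi1) * Math.Cos(delta) -
                                Math.Cos(phi1) * Math.Sin(delta));

    // Bearing 90 (east): latitude of that destination, then its longitude
    // offset; bearing 270 (west) is symmetric.
    double phiEast = Math.Asin(Math.Sin(phi1) * Math.Cos(delta));
    double dLambda = Math.Atan2(Math.Sin(delta) * Math.Cos(phi1),
                                Math.Cos(delta) - Math.Sin(phi1) * Math.Sin(phiEast));

    begLat  = phiSouth * 180.0 / Math.PI;
    endLat  = phiNorth * 180.0 / Math.PI;
    begLong = (lambda1 - dLambda) * 180.0 / Math.PI;
    endLong = (lambda1 + dLambda) * 180.0 / Math.PI;
}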
I am working on a method which should decide whether or not a curve has a nearly constant slope.
There are of course x,y points involved. What I did so far is dividing y of each data point by its x to get the slope at that data point. I store these slopes in a List<double>.
I think so far I am on the right track (tell me, please, if I am not!). Now it's time to decide whether I am dealing with a constant slope or not, so I ended up with the method below:
private bool IsConstantSlope(List<double> slopes)
{
var max = slopes.Max();
var min = slopes.Min();
var diff = max - min;
return diff <= 0.01;
}
So what I do here is check the maximum and minimum values of the slopes and compare their difference to a custom threshold, which I believe is not good at all.
This method works well for perfectly constant sloped lines, but I want to give it some flexibility. I don't think comparing the difference of the max and min values to a constant number is good practice.
I will appreciate more ideas!
There are of course x,y points involved. What I did so far is dividing y of each data point by its x to get the slope at that data point. I store these slopes in a List<double>.
Strictly speaking, a point does not have a slope; what you are measuring here is the slope of the line that connects your point (x,y) to the point (0,0). So if you are doing this for an ordered set of points, then the notion of having a single line is not quite correct. You don't even have the set of slopes of lines that connect adjacent points. Also, in your function,
return (max > 0.01) || (min < -0.01);
is better if your threshold is 0.01.
If what you really want is a line that fits or approximates the set of points then you first need to perform some kind of straight line regression to your data and test the gradient of this approximating line to see if it is within your threshold limits.
This might be a useful read http://en.wikipedia.org/wiki/Simple_linear_regression
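A minimal least-squares sketch of that regression idea (assuming the points are available as parallel lists of x and y values; the method name is mine):

// Fits y = a + b*x by ordinary least squares and returns the gradient b.
private double RegressionSlope(List<double> xs, List<double> ys)
{
    double meanX = 0, meanY = 0;
    for (int i = 0; i < xs.Count; i++)
    {
        meanX += xs[i];
        meanY += ys[i];
    }
    meanX /= xs.Count;
    meanY /= ys.Count;

    double covXY = 0, varX = 0;
    for (int i = 0; i < xs.Count; i++)
    {
        covXY += (xs[i] - meanX) * (ys[i] - meanY);
        varX  += (xs[i] - meanX) * (xs[i] - meanX);
    }
    return covXY / varX;
}

You would then test this gradient (and, ideally, how well the line actually fits, e.g. by looking at the residuals) against your threshold limits.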
Alternatively, you can order your points by their x value, then work out the slope between each consecutive pair (effectively generating a polyline), store these in your list, and then use your slope comparison function.
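That alternative might look something like this (a sketch; it assumes the points are already sorted by x and that no two points share the same x):

private List<double> ConsecutiveSlopes(List<double> xs, List<double> ys)
{
    var slopes = new List<double>();
    for (int i = 1; i < xs.Count; i++)
    {
        // Slope of the segment between point i-1 and point i.
        slopes.Add((ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1]));
    }
    return slopes;
}

The resulting list can then be fed into your existing comparison function.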
I would design a recursive algorithm, working on the whole set of slopes. Considering only the min/max slopes doesn't tell anything about the whole curve.
First of all, I would establish the requirement that two slopes A and B must fulfill in order to count as a "constant slope". Then I would consider the first (A) and last (B) values in your list: do the two values satisfy the requirement? No: no constant slope. Yes: subdivide the range (A,B) into two subranges, (A,M) and (M,B), where M is the value equidistant, in the list, from A and B. Then apply the same algorithm to the two subranges. The number of subranges depends on the accuracy you want to achieve.
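A rough sketch of that recursion (the requirement used here, a simple absolute-difference check, and the depth parameter are placeholders of my own):

// Returns true if the slopes between index first and index last look
// "constant", refining recursively until depth runs out.
private bool IsConstantSlope(List<double> slopes, int first, int last, int depth)
{
    if (!SatisfiesRequirement(slopes[first], slopes[last]))
        return false;
    if (depth == 0 || last - first < 2)
        return true;

    int mid = (first + last) / 2;
    return IsConstantSlope(slopes, first, mid, depth - 1) &&
           IsConstantSlope(slopes, mid, last, depth - 1);
}

// Placeholder requirement: the two slopes differ by less than a threshold.
private bool SatisfiesRequirement(double a, double b)
{
    return Math.Abs(a - b) < 0.01;
}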
I would like to find a fast algorithm in order to find the x closest points to a given point on a plane.
We are actually dealing with not too many points (between 1,000 and 100,000), but I need the x closest points for every one of these points (where x will usually be between 5 and 20).
I need to write it in C#.
A bit more context about the use case: These points are coordinates on a map. (I know, this means we are not exactly talking about a plane, but I hope to avoid dealing with projection issues.) In the end, points that have many other points close to them should be displayed in red, and points that do not have many points close to them should be displayed in green. Between these two extremes the points are on a color gradient.
What you need is a data structure appropriate for organizing points in a plane. The K-D-Tree is often used in such situations. See k-d tree on Wikipedia.
Here, I found a general description of Geometric Algorithms
UPDATE
I ported a Java implementation of a KD-tree to C#. Please see User:Ojd/KD-Tree on RoboWiki. You can download the code there or you can download CySoft.Collections.zip directly from my homepage (only download, no docu).
For a given point (not all of them), and since the number of points is not extreme, you could simply calculate the distance to every other point:
var points = new List<Point>();
Point source = ...
....
var closestPoints = points.Where(point => point != source)
                          .OrderBy(point => NotReallyDistanceButShouldDo(source, point))
                          .Take(20);
private double NotReallyDistanceButShouldDo(Point source, Point target)
{
return Math.Pow(target.X - source.X, 2) + Math.Pow(target.Y - source.Y, 2);
}
(I've used x = 20)
The calculations are based on doubles, so the FPU should be able to do a decent job here.
Note that you might get better performance if Point is a class rather than a struct.
You need to create a distance function, then calculate distance for every point and sort the results, and take the first x.
If the results must be 100% accurate then you can use the standard distance function:
d = SQRT((x2 - x1)^2 + (y2 - y1)^2)
To make this more efficient, let's say the maximum distance of interest is k. Take all points with x coordinates between x-k and x+k, and similarly y coordinates between y-k and y+k, so you have removed all the excess points. Now compute the squared distance (x-x1)^2 + (y-y1)^2 for the remaining points. Keep a max-heap of the x closest elements found so far: add a new point to the heap if it is closer than the heap's current maximum (dropping that maximum). You then have the x nearest elements in the heap.
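A hedged sketch of that approach (it leans on .NET 6's PriorityQueue, using negated squared distances to get max-heap behaviour; the tuple-based point type and all names are my own):

// Returns the k points nearest to (x, y), pre-filtering with a bounding
// box of half-width maxDist.
static List<(double X, double Y)> KNearest(
    List<(double X, double Y)> points, double x, double y,
    double maxDist, int k)
{
    // Max-heap of the k closest so far: priority is the negated squared
    // distance, so Peek/Dequeue give the farthest of the kept points.
    var heap = new PriorityQueue<(double X, double Y), double>();

    foreach (var p in points)
    {
        // Cheap rectangle test first: discard anything clearly too far away.
        if (p.X < x - maxDist || p.X > x + maxDist ||
            p.Y < y - maxDist || p.Y > y + maxDist)
            continue;

        double d2 = (p.X - x) * (p.X - x) + (p.Y - y) * (p.Y - y);
        if (heap.Count < k)
            heap.Enqueue(p, -d2);
        else if (heap.TryPeek(out _, out double worstNeg) && -d2 > worstNeg)
        {
            heap.Dequeue();        // drop the current farthest point
            heap.Enqueue(p, -d2);
        }
    }

    var result = new List<(double X, double Y)>();
    while (heap.Count > 0)
        result.Add(heap.Dequeue());
    result.Reverse();              // nearest first
    return result;
}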
In my office at work, we are not allowed to paint the walls, so I have decided to frame out squares and rectangles, attach some nice fabric to them, and arrange them on the wall.
I am trying to write a method which will take my input dimensions (9' x 8' 8") and min/max size (1' x 3', 2', 4', etc..) and generate a random pattern of squares and rectangles to fill the wall. I tried doing this by hand, but I'm just not happy with the layout that I got, and it takes about 35 minutes each time I want to 'randomize' the layout.
One solution is to start with x*y squares and randomly merge squares together to form rectangles. You'll want to give differing weights to different size squares to keep the algorithm from just ending up with loads of tiny rectangles (i.e. large rectangles should probably have a higher chance of being picked for merging until they get too big).
Sounds like a Treemap
Another idea:
1. Randomly generate points on the wall
Use as many points as the number of rectangles you want
Introduce sampling bias to get cooler patterns
2. Build the kd-tree of these points
The kd-tree will split the space into a number of rectangles. There might be too much structure for what you want, but it's still a neat geeky algorithm.
(see: http://en.wikipedia.org/wiki/Kd-tree)
Edit: Just looked at JTreeMap; it looks a bit like this is what it's doing.
If you're talking about a pure programming problem ;) There is a technique called Bin Packing that tries to pack a number of items into as few bins (or as small an area) as possible. There's loads of material out there:
http://en.wikipedia.org/wiki/Bin_packing_problem
http://mathworld.wolfram.com/Bin-PackingProblem.html
http://www.cs.sunysb.edu/~algorith/files/bin-packing.shtml
So you 'could' create a load of random squares and run them through a bin packer to generate your pattern.
I've not implemented a bin packing algorithm myself but I've seen it done by a colleague for a Nike website. Best of luck
Since you can pick the size of the rectangles, this is not a hard problem.
I'd say you can do something as simple as:
1. Pick an (x,y) coordinate that is not currently inside a rectangle.
2. Pick a second (x,y) coordinate so that when you draw a rectangle between the two coordinates, it won't overlap anything. The bounding box of valid points is just bounded by the nearest rectangles' walls.
3. Draw that rectangle.
4. Repeat until, say, you have 90% of the area covered. At that point you can either stop, or fill in the remaining holes with as big rectangles as possible.
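A very rough sketch of that loop (simplified: instead of computing the exact bounding box of valid second corners, it just rejects candidate rectangles that overlap something already placed; the names, size limits and failure cap are mine):

// Fills a wall of width x height with non-overlapping random rectangles
// whose sides lie between minSide and maxSide.
static List<(double X, double Y, double W, double H)> FillWall(
    double width, double height, double minSide, double maxSide,
    double targetCoverage = 0.9)
{
    var rng = new Random();
    var placed = new List<(double X, double Y, double W, double H)>();
    double covered = 0;
    int failures = 0;

    // Note: pure rejection sampling usually stops at the failure cap well
    // before 90% coverage; bounding the second corner by the nearest walls,
    // as described above, fills the gaps much better.
    while (covered < targetCoverage * width * height && failures < 10000)
    {
        double w = minSide + rng.NextDouble() * (maxSide - minSide);
        double h = minSide + rng.NextDouble() * (maxSide - minSide);
        double x = rng.NextDouble() * (width - w);
        double y = rng.NextDouble() * (height - h);

        bool overlaps = placed.Any(r =>
            x < r.X + r.W && x + w > r.X &&
            y < r.Y + r.H && y + h > r.Y);

        if (overlaps) { failures++; continue; }

        placed.Add((x, y, w, h));
        covered += w * h;
        failures = 0;
    }
    return placed;
}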
It might be interesting to parametrize the generation of points, and then make a genetic algorithm. The fitness function will be how much you like the arrangement - it would draw hundreds of arrangements for you, and you would rate them on a scale of 1-10. It would then take the best ones and tweak those, and repeat until you get an arrangement you really like.
Bin packing or square packing?
Bin packing:
http://www.cs.sunysb.edu/~algorith/files/bin-packing.shtml
Square packing:
http://www.maa.org/editorial/mathgames/mathgames_12_01_03.html
This actually sounds more like an old school random square painting demo, circa 8-bit computing days, especially if you don't mind overlaps. But if you want to be especially geeky, create random squares and solve for the packing problem.
Building off Philippe Beaudoin's answer.
There are treemap implementations in other languages that you can also use. In Ruby with RubyTreeMap you could do
require 'Treemap'
require 'Treemap/image_output.rb'
root = Treemap::Node.new
0.upto(100){|i| root.new_child(:size => rand) }
output = Treemap::ImageOutput.new do |o|
o.width = 800
o.height = 600
end
output.to_png(root, "C:/output/test.png")
However it sorts the rectangles, so it doesn't look very random, but it could be a start. See rubytreemap.rubyforge.org/docs/index.html for more info
I would generate everything in a spiral, slowly going inward. If at any point you reach a state where your solution is proven to be 'unsolvable' (i.e., you can't put any squares in the remaining middle that satisfy the constraints), go back to an earlier draft and change some square until you find a happy solution.
Pseudocode would look something like:
public Board GenerateSquares(direction, board, prevSquare)
{
    Rectangle[] rs = generateAllPossibleNextRectangles(direction, prevSquare, board);
    for (/* all possible next rectangles in some random order */) {
        if (board.add(rs[x])) {
            // see if you need to change direction
            Board nBoard = GenerateSquares(direction, board, rs[x]);
            if (nBoard != null) return nBoard; // done
            else board.remove(rs[x]);
        }
    }
    // all possibilities tried, none worked
    return null;
}
I suggest:
Start by setting up a polygon with four vertices to be eaten in varying size (up to maxside) rectangle lumps:
public double[] fillBoard(double width, double height, double maxside) {
    double[] dest = new double[0];
    double[] poly = new double[10];
    poly[0] = 0; poly[1] = 0; poly[2] = width; poly[3] = 0;
    poly[4] = width; poly[5] = height; poly[6] = 0; poly[7] = height;
    poly[8] = 0; poly[9] = 0;
    ...
    return dest; /* x,y pairs */
}
Then choose a random vertex and find the polygon lines within (inclusive) 2 × maxside of that vertex.
Find x values of all vertical lines and y values of all horizontal lines. Create ratings for the "goodness" of choosing each x and y value, and equations to generate ratings for values in between the values. Goodness is measured as reducing number of lines in remaining polygon. Generate three options for each range of values between two x coordinates or two y coordinates, using pseudo-random generator. Rate and choose pairs of x and pair of y values on weighted average basis leaning towards good options. Apply new rectangle to list by cutting its shape from the poly array and adding rectangle coordinates to the dest array.
The question does not state a minimum side parameter. But if one is needed, the algorithm should (upon hitting a snag with a gap that is too small) exclude candidates that are too small from the selection lists (which will occasionally make them empty), deselect a number of the surrounding rectangles within a certain radius of the problem spot, and perform new regeneration attempts of that area, and hopefully of the problem spot, until the criteria are met. Recursion can remove progressively larger areas if re-laying a smaller set of tiles fails.
EDIT
Do some hit testing to eliminate potential overlaps. And eat some spinach before starting the typing. ;)
Define input area;
Draw vertical lines at several random horizontal locations through the entire height;
Draw horizontal lines at several vertical positions through the entire width;
Shift some "columns" up or down by arbitrary amounts;
Shift some "rows" left or right by arbitrary amounts (it may be required to subdivide some cells to obtain full horizontal seams;
Remove seams as aesthetically required.
This graphical method has similarities to Brian's answer.
I am working on a C# 2d soft body physics engine and I need to assign masses to an object's vertices given: a list of vertices (x,y positions), the total mass for the object, and the center of mass.
The center of mass is given as:
R = (1/M) * sum(mj * rj)
where,
R = center of mass
M = total mass
mj = mass of vertex j
rj = position of vertex j
I need an algorithm that can approximate each mj given R, M, and rj.
edit: I just want to clarify that I am aware that there are an infinite set of solutions. I am looking for a quick algorithm that finds a set of mj's (such that they are each sufficiently close to mj = M/[number of vertices] and where "sufficiently" is defined as some small floating point threshold).
Also, each object will consist of about 5 to 35 points.
You can compute the CM of a uniformly dense polygon as follows: number the N vertices from 0..N-1, and treat them cyclically, so that vertex N wraps to vertex 0:
total_area = sum[i=0..N-1]( X(p[i],p[i+1])/2 )
CM = sum[i=0..N-1]( (p[i]+p[i+1])*X(p[i],p[i+1])/6 ) / total_area
where X(p,q)= p.x*q.y - q.x*p.y [basically, a 2D cross product]
If the polygon is convex, the CM will be inside the polygon, so you can reasonably start out by slicing up the area in triangles like a pie, with the CM at the hub. You should be able to weight each vertex of a triangle with a third of its mass, without changing the CM -- however, this would still leave a third of the total mass at the CM of the entire polygon. Nonetheless, scaling the mass transfer by 3/2 should let you split the mass of each triangle between the two "external" vertices. As a result,
area[i] = X( (p[i]-CM), (p[i+1]-CM) ) / 2
(this is the area of the triangle between the CM and vertices i and i+1)
mass[i] = (total_mass/total_area) * (area[i-1] + area[i])/2
Note that this kind of mass transfer is profoundly "unphysical" -- if nothing else, if treated literally, it would screw up the moment of inertia something fierce. However, if you need to distribute the mass among the vertices (like for some kind of cheesy explosion), and you don't want to disrupt the CM in doing so, this should do the trick.
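Here is a sketch of those formulas in C# (a tuple-based point type and my own names; convex polygon assumed, as the warnings below note):

// Distributes totalMass over the polygon's vertices so that the centre of
// mass of the point masses matches the centre of mass of the uniform
// polygon. p is the vertex list, ordered around the polygon.
static double[] VertexMasses(IList<(double X, double Y)> p, double totalMass)
{
    int n = p.Count;
    double Cross((double X, double Y) a, (double X, double Y) b)
        => a.X * b.Y - b.X * a.Y;                 // the X(p,q) above

    // Total area and centre of mass of the uniform polygon.
    double totalArea = 0, cmX = 0, cmY = 0;
    for (int i = 0; i < n; i++)
    {
        var a = p[i];
        var b = p[(i + 1) % n];
        double cross = Cross(a, b);
        totalArea += cross / 2;
        cmX += (a.X + b.X) * cross / 6;
        cmY += (a.Y + b.Y) * cross / 6;
    }
    cmX /= totalArea;
    cmY /= totalArea;

    // area[i]: triangle between the CM and vertices i, i+1.
    var area = new double[n];
    for (int i = 0; i < n; i++)
    {
        var a = (X: p[i].X - cmX, Y: p[i].Y - cmY);
        var b = (X: p[(i + 1) % n].X - cmX, Y: p[(i + 1) % n].Y - cmY);
        area[i] = Cross(a, b) / 2;
    }

    // Each vertex takes half the mass of the two triangles it touches.
    var mass = new double[n];
    for (int i = 0; i < n; i++)
    {
        double prev = area[(i - 1 + n) % n];
        mass[i] = (totalMass / totalArea) * (prev + area[i]) / 2;
    }
    return mass;
}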
Finally, a couple of warnings:
if you don't use the actual CM for this, it won't work right
it is hazardous to use this on concave objects; you risk ending up with negative masses
The center of mass R will constantly be changing as the vertices move. So, if you have 10 vertices, store the values from 10 consecutive "frames" - this will give you 10 equations for your 10 unknowns (assuming that the masses don't change over time).
Count the degrees of freedom: for points in D dimensional space you have D+1 equations[+] and n unknowns for n separate particles. If n>D+1 you are sunk (unless you have more information than you have told us about: symmetry constraints, higher order moments, etc...).
edit: My earlier version assumed you had the m_is and were looking for the r_is. It is slightly better when you have the r_is and want the m_is.
[+] The one you list above (which is actually D separate equations) and M = \sum m_j
Arriu said:
Oh sorry I misunderstood your question. I thought you were asking if I was modeling objects such as a torus, doughnut, or ring (objects with cutouts...). I am modeling bodies with just outer shells (like balloons or bubbles). I don't require anything more complex than that.
Now we are getting somewhere. You do know something more.
You can approximate the surface area of the object by breaking it into triangles between adjacent points. This total area gives you mean mass density. Now find the DoF deficit, and assign that many r_is (drawn at random, I guess) an initial mass based on the mean density and 1/3 of the area of each triangle it is a party to. Then solve the remaining system analytically. If the problem is ill-conditioned you can either draw a new set of assigned points, or attempt a random walk on the masses that you have already guessed at.
I would flip the problem around. That is, given a density and the position of the object (which is, of course, still the center of mass of the object, plus three vectors corresponding to the orientation of the object; see Euler's angles), associate a volume with each vertex (which would change with resolution and could be fractional for positions at the edge of the object) and multiply the density (d_j) by the associated volume (v_j): m_j = v_j * d_j. This approach should naturally reproduce the center of mass of the object.
Perhaps I didn't understand your problem, but consider that this would ultimately yield the correct mass (Mass = sum(m_j) = sum(v_j * d_j)), and at worst this approach should yield a verification of your result.