I am implementing a radial tree layout drawing algorithm, following the publication by Andy Pavlo link [page 18].
The problem is that my result contains crossed edges, which is unacceptable. I found solutions to a similar problem link, but I was not able to adapt them to this algorithm (I would have to change my whole approach). Besides, Andy Pavlo's algorithm should already avoid this problem; when you look at the result of his algorithm, there are no crossed edges. What am I doing wrong? Am I missing something? Thank you in advance.
Andy Pavlo's pseudocode of the algorithm
My implementation of the algorithm
public void RadialPositions(Tree<string> rootedTree, Node<string> vertex, double alfa, double beta,
    List<RadialPoint<string>> outputGraph)
{
    //check if vertex is root of rootedTree
    if (vertex.IsRoot)
    {
        vertex.Point.X = 0;
        vertex.Point.Y = 0;
        outputGraph.Add(new RadialPoint<string>
        {
            Node = vertex,
            Point = new Point
            {
                X = 0,
                Y = 0
            },
            ParentPoint = null
        });
    }
    //Depth of vertex starting from 0
    int depthOfVertex = vertex.Depth;
    double theta = alfa;
    double radius = Constants.CircleRadius + (Constants.Delta * depthOfVertex);
    //Leaves number in the subtree rooted at v
    int leavesNumber = BFS.BreatFirstSearch(vertex);
    foreach (var child in vertex.Children)
    {
        //Leaves number in the subtree rooted at child
        int lambda = BFS.BreatFirstSearch(child);
        double mi = theta + ((double)lambda / leavesNumber * (beta - alfa));
        double x = radius * Math.Cos((theta + mi) / 2.0);
        double y = radius * Math.Sin((theta + mi) / 2.0);
        //setting x and y
        child.Point.X = x;
        child.Point.Y = y;
        outputGraph.Add(new RadialPoint<string>
        {
            Node = child,
            Point = new Point
            {
                X = x,
                Y = y,
                Radius = radius
            },
            ParentPoint = vertex.Point
        });
        if (child.Children.Count > 0)
        {
            child.Point.Y = y;
            child.Point.X = x;
            RadialPositions(rootedTree, child, theta, mi, outputGraph);
        }
        theta = mi;
    }
}
BFS algorithm for counting the leaves
public static int BreatFirstSearch<T>(Node<T> root)
{
    var visited = new List<Node<T>>();
    var queue = new Queue<Node<T>>();
    int leaves = 0;
    visited.Add(root);
    queue.Enqueue(root);
    while (queue.Count != 0)
    {
        var current = queue.Dequeue();
        if (current.Children.Count == 0)
            leaves++;
        foreach (var node in current.Children)
        {
            if (!visited.Contains(node))
            {
                visited.Add(node);
                queue.Enqueue(node);
            }
        }
    }
    return leaves;
}
Initial call
var outputPoints = new List<RadialPoint<string>>();
alg.RadialPositions(tree, tree.Root, 0, 360, outputPoints);
Andy Pavlo's result
My result on a simple sample
Math.Cos and Math.Sin expect the input angle to be in radians, not degrees. In your initial method call, your upper angle limit (beta) should be 2 * Math.PI, not 360. This ensures that all the angles you calculate are in radians rather than degrees.
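For illustration, here is what the corrected initial call might look like, passing the full circle as 2π radians (reusing the alg, tree, and RadialPoint<string> names from the question):
var outputPoints = new List<RadialPoint<string>>();
// 2 * Math.PI keeps every angle fed to Math.Cos/Math.Sin in radians
alg.RadialPositions(tree, tree.Root, 0, 2 * Math.PI, outputPoints);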
Related
I have a navigation system for a game bot that uses X and Y co-ordinates for waypoints. To understand how my character is moving, I have a simple 1000 × 1000 drawing surface on which I paint the path. Then I can hover my mouse over any node in the path and get its details.
The code for converting co-ordinates like 60.75, 73.13 simply converts them to ints. That makes a ridiculously small image, so I multiply by 10, but it still doesn't use the space available.
foreach (Node spot in path)
{
    Point point = new Point();
    point.X = Convert.ToInt32(spot.X * 10);
    point.Y = Convert.ToInt32(spot.Y * 10);
    NodeSpot dot = new NodeSpot()
    {
        Name = spot.Name,
        Location = point
    };
    drawingSurface1.Nodes.Add(dot);
}
How can I make the path I draw centred on the drawing surface and use the full height or width so that I can get a clearer view?
List<double> OldMinAndMax(List<Node> path)
{
    List<double> retValues = new();
    path.Sort((x, y) => y.X.CompareTo(x.X));
    retValues.Add(path.LastOrDefault().X);
    retValues.Add(path.FirstOrDefault().X);
    path.Sort((x, y) => y.Y.CompareTo(x.Y));
    retValues.Add(path.LastOrDefault().Y);
    retValues.Add(path.FirstOrDefault().Y);
    return retValues;
}

int Rescale(double oldV, double oldMin, double oldMax, int newMin, int newMax)
{
    return Convert.ToInt32(((oldV - oldMin) * (newMax - newMin) / (oldMax - oldMin)) + newMin);
}
Then I can call it just by:
List<Node> path = await PathMaker();
List<double> oldMinMax = OldMinAndMax(path);
double oldMinX = oldMinMax[0];
double oldMaxX = oldMinMax[1];
double oldMinY = oldMinMax[2];
double oldMaxY = oldMinMax[3];
path.FirstOrDefault().Name = "First";
path.LastOrDefault().Name = "Last";
foreach (Node spot in path)
{
    Point point = new Point();
    point.X = Rescale(spot.X, oldMinX, oldMaxX, 50, 950);
    point.Y = Rescale(spot.Y, oldMinY, oldMaxY, 50, 950);
It works perfectly. Thanks for the rescaling suggestion.
So, I know that there are similar questions, and I searched a lot before writing my code and asking this question.
In my case, the user clicks on the screen to add a point. When the user finishes adding points, a right click indicates that the points are OK and the polygon should be drawn.
As the points are placed irregularly, I must calculate the center point and the angle of each point in order to sort the point list.
Then, when I move a point, I recalculate the angles with the new positions and redraw the polygon.
It works, but when I move a point past two others, sometimes the polygon isn't drawn. I couldn't find what is wrong.
Here is my code and two images to explain the problem:
public class CustomPoint3D
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Z { get; set; }
    public int Angle { get; set; }

    public CustomPoint3D()
    {
    }

    public CustomPoint3D(double x, double y, double z)
    {
        this.X = x;
        this.Y = y;
        this.Z = z;
    }
}
private void AddZoneSurface(List<CustomPoint3D> customPoints, string guid)
{
    //Calculates angles and orders / sorts the list of points
    List<Point2D> points = From3DTo2D(customPoints);
    //Draws a polygon in Eyeshot but it can be any tool to create a polygon.
    var polygon = devDept.Eyeshot.Entities.Region.CreatePolygon(points.ToArray());
    polygon.ColorMethod = colorMethodType.byEntity;
    polygon.EntityData = "tool-surface-" + guid;
    polygon.Color = System.Drawing.Color.FromArgb(80, 0, 0, 0);
    sceneLeft.Entities.Add(polygon);
    sceneLeft.Invalidate();
}
private List<Point2D> From3DTo2D(List<CustomPoint3D> points)
{
    List<Point2D> retVal = new List<Point2D>();
    var minX = points.Min(ro => ro.X);
    var maxX = points.Max(ro => ro.X);
    var minY = points.Min(ro => ro.Y);
    var maxY = points.Max(ro => ro.Y);
    var center = new CustomPoint3D()
    {
        X = minX + (maxX - minX) / 2,
        Y = minY + (maxY - minY) / 2
    };
    // precalculate the angles of each point to avoid multiple calculations on sort
    for (var i = 0; i < points.Count; i++)
    {
        points[i].Angle = (int)(Math.Acos((points[i].X - center.X) / lineDistance(center, points[i])));
        if (points[i].Y > center.Y)
        {
            points[i].Angle = (int)(Math.PI + Math.PI - points[i].Angle);
        }
    }
    //points.Sort((a, b) => a.Angle - b.Angle);
    points = points.OrderBy(ro => ro.Angle).ToList();
    foreach (var item in points)
    {
        retVal.Add(new Point2D() { X = item.X, Y = item.Y });
    }
    return retVal;
}

double lineDistance(CustomPoint3D point1, CustomPoint3D point2)
{
    double xs = 0;
    double ys = 0;
    xs = point2.X - point1.X;
    xs = xs * xs;
    ys = point2.Y - point1.Y;
    ys = ys * ys;
    return Math.Sqrt(xs + ys);
}
On the first image, I move the point from its initial position to the indicated position, and the polygon is not drawn.
You should read the Wikipedia page on convex hull algorithms and pick an algorithm that you feel comfortable implementing that also meets your O(n) complexity requirements.
If convex hull isn't what you're after then you'll need to be a bit more specific as to how you want the points to define the shape. One (probably sub-optimal) solution would be to calculate the convex hull, find the center, pick a point as your "start" point and then order the remaining points by angle from the start point.
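If ordering by angle is the route taken, a minimal sketch of that idea is below, assuming System.Linq and any point type exposing double X and Y (such as the Point2D used in the question). Math.Atan2 returns a signed angle and avoids the Acos/quadrant bookkeeping:
static List<Point2D> OrderByAngleAroundCentroid(List<Point2D> points)
{
    // Use the centroid as the reference point, then sort counter-clockwise by signed angle.
    double cx = points.Average(p => p.X);
    double cy = points.Average(p => p.Y);
    return points.OrderBy(p => Math.Atan2(p.Y - cy, p.X - cx)).ToList();
}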
So, in case someone needs a sample that works: I found the problem.
I should have declared the Angle property of the CustomPoint3D object as shown below.
Because the property was an integer, an angle of 0.3 or 0.99 was truncated to 0.
public class CustomPoint3D
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Z { get; set; }
    public double Angle { get; set; }

    public CustomPoint3D()
    {
    }

    public CustomPoint3D(double x, double y, double z)
    {
        this.X = x;
        this.Y = y;
        this.Z = z;
    }
}
and calculate these values as doubles:
private List<Point2D> From3DTo2D(List<CustomPoint3D> points)
{
    List<Point2D> retVal = new List<Point2D>();
    var minX = points.Min(ro => ro.X);
    var maxX = points.Max(ro => ro.X);
    var minY = points.Min(ro => ro.Y);
    var maxY = points.Max(ro => ro.Y);
    var center = new CustomPoint3D()
    {
        X = minX + (maxX - minX) / 2,
        Y = minY + (maxY - minY) / 2
    };
    // precalculate the angles of each point to avoid multiple calculations on sort
    for (var i = 0; i < points.Count; i++)
    {
        points[i].Angle = Math.Acos((points[i].X - center.X) / lineDistance(center, points[i]));
        if (points[i].Y > center.Y)
        {
            points[i].Angle = Math.PI + Math.PI - points[i].Angle;
        }
    }
    //points.Sort((a, b) => a.Angle - b.Angle);
    points = points.OrderBy(ro => ro.Angle).ToList();
    foreach (var item in points)
    {
        retVal.Add(new Point2D() { X = item.X, Y = item.Y });
    }
    return retVal;
}
So I have an acceleration sensor that gives me acceleration data. The device is currently resting at a certain position, so the data looks like this (with some noise per axis):
ACCELX = 264
ACCELY = -43
ACCELZ = 964
Then there's a 3D model representing the device, and "all I want" is for this 3D model to reflect the real device's orientation. In my attempts to understand the usage of quaternions in .NET, here's the code I've cobbled together:
/* globals */
Vector3D PrevVector = new Vector3D(0, 0, 0);
ModelVisual3D model; // initialized and model file loaded

private async void TimerEvent()
{
    RotateTransform3D rot = new RotateTransform3D();
    QuaternionRotation3D q = new QuaternionRotation3D();
    double x = 0, y = 0, z = 0;
    List<Reading> results = await Device.ReadSensor();
    foreach (Reading r in results)
    {
        switch (r.Type)
        {
            case "RPF_SEN_ACCELX":
                x = r.Value;
                break;
            case "RPF_SEN_ACCELY":
                y = r.Value;
                break;
            case "RPF_SEN_ACCELZ":
                z = r.Value;
                break;
        }
    }
    double angle = Vector3D.AngleBetween(new Vector3D(x, y, z), PrevVector);
    q.Quaternion = new Quaternion(new Vector3D(x, y, z), angle);
    rot.Rotation = q;
    model.Transform = rot;
    PrevVector = new Vector3D(x, y, z);
}
Moving my real device does yield changes in the reported values, but the model on the screen just twitches in what seem to be random directions, barely more than the noise and seemingly unrelated to how I rotate the real device. I'm fairly sure I'm constructing and using quaternions incorrectly. How would I do it right?
This is .NET with WPF. There's also HelixToolkit.WPF available, but I haven't seen any function to create quaternions from acceleration data in there. Higher level frameworks such as Unreal Engine or Unity are NOT available for this project.
Is your sensor's output rotation value cumulative, or is it a difference? Sometimes the output rotation is a difference, and you need the previous rotation value plus the difference to calculate the current one.
You can try saving the previous quaternion and combining the current quaternion with the previous one to get the new accumulated rotation.
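For illustration, here is a rough sketch of that accumulation with WPF's Media3D types (the names are placeholders; the delta quaternion would come from whatever rotation each sensor update represents). Note that rotations compose by quaternion multiplication:
Quaternion accumulated = Quaternion.Identity; // running orientation kept between updates

void ApplyDelta(Quaternion delta)
{
    // Compose the new increment with the previous orientation instead of replacing it.
    accumulated = delta * accumulated;
    var rot = new RotateTransform3D(new QuaternionRotation3D(accumulated));
    model.Transform = rot; // 'model' as in the question's globals
}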
Turns out my issue was of a completely different nature: I had to utilize a class called Transform3DGroup. This is how the code must be altered to enable a rotation around the Z axis:
/* globals */
ModelVisual3D model; // initialized and model file loaded
Transform3DGroup tg = new Transform3DGroup();

private async void TimerEvent()
{
    RotateTransform3D rot = new RotateTransform3D();
    QuaternionRotation3D q = new QuaternionRotation3D();
    double x = 0, y = 0, z = 0, angle = 0;
    List<Reading> results = await Device.ReadSensor();
    foreach (Reading r in results)
    {
        switch (r.Type)
        {
            case "RPF_SEN_ACCELX":
                x = r.Value;
                break;
            case "RPF_SEN_ACCELY":
                y = r.Value;
                break;
            case "RPF_SEN_ACCELZ":
                z = r.Value;
                angle = GetAngle(x, y).ToDegrees();
                q.Quaternion = new Quaternion(new Vector3D(0, 0, 1), angle);
                break;
        }
        rot.Rotation = q;
        tg.Children.Clear();
        tg.Children.Add(rot);
        model.Transform = tg; // Use Transform3DGroup!
    }
}
I haven't found any documentation on the obligatory use of Transform3DGroup.
For the sake of completion, here are the internals of GetAngle(), as derived by Bill Wallis on Math Overflow:
private double GetAngle(double x, double y)
{
    if (x > 0)
    {
        return 2 * Math.PI - Math.Atan(y / x);
    }
    else if (x < 0)
    {
        return Math.PI - Math.Atan(y / x);
    }
    else // x == 0
    {
        return 2 * Math.PI - Math.Sign(y) * Math.PI / 2;
    }
}
And the extensions to the double type to transform doubles between radians and degrees (outside of class, within namespace):
public static class NumericExtensions
{
    public static double ToRadians(this double val)
    {
        return (Math.PI / 180) * val;
    }

    public static double ToDegrees(this double val)
    {
        return (180 / Math.PI) * val;
    }
}
Let's use C# in our example.
public class Sphere
{
    public Point Center { get; set; }
    public float Radius { get; set; }

    public Sphere(IEnumerable<Point> points)
    {
        Point first = points.First();
        Point vecMaxZ = first;
        Point vecMinZ = first;
        Point vecMaxY = first;
        Point vecMinY = first;
        Point vecMinX = first;
        Point vecMaxX = first;
        foreach (Point current in points)
        {
            if (current.X < vecMinX.X)
            {
                vecMinX = current;
            }
            if (current.X > vecMaxX.X)
            {
                vecMaxX = current;
            }
            if (current.Y < vecMinY.Y)
            {
                vecMinY = current;
            }
            if (current.Y > vecMaxY.Y)
            {
                vecMaxY = current;
            }
            if (current.Z < vecMinZ.Z)
            {
                vecMinZ = current;
            }
            if (current.Z > vecMaxZ.Z)
            {
                vecMaxZ = current;
            }
        }
        //the lines below ensure at least 2 points sit on the surface of the sphere.
        //I'm pretty sure the algorithm is solid so far, unless I messed up the if/elses.
        //I've been over this, looking at the variables and the if/elses and they all
        //seem correct, but our own errors are the hardest to spot,
        //so maybe there's something wrong here.
        float diameterCandidateX = vecMinX.Distance(vecMaxX);
        float diameterCandidateY = vecMinY.Distance(vecMaxY);
        float diameterCandidateZ = vecMinZ.Distance(vecMaxZ);
        Point c;
        float r;
        if (diameterCandidateX > diameterCandidateY)
        {
            if (diameterCandidateX > diameterCandidateZ)
            {
                c = vecMinX.Midpoint(vecMaxX);
                r = diameterCandidateX / 2f;
            }
            else
            {
                c = vecMinZ.Midpoint(vecMaxZ);
                r = diameterCandidateZ / 2f;
            }
        }
        else if (diameterCandidateY > diameterCandidateZ)
        {
            c = vecMinY.Midpoint(vecMaxY);
            r = diameterCandidateY / 2f;
        }
        else
        {
            c = vecMinZ.Midpoint(vecMaxZ);
            r = diameterCandidateZ / 2f;
        }
        //the lines below look for points outside the sphere, and if one is found, then:
        //1. let dist be the distance from the stray point to the current center
        //2. let diff be equal to dist - radius
        //3. the radius will then be increased by half of diff.
        //4. a vector with the same direction as the stray point but with magnitude equal to diff is found
        //5. the current center is moved by half the vector found in the step above.
        //
        //the stray point will now be included
        //and, I would expect, the relationship between the center and other points will be maintained:
        //if distance from p to center = r / k,
        //then new distance from p to center' = r' / k,
        //where k doesn't change from one equation to the other.
        //this is where I'm wrong. I cannot figure out how to maintain this relationship.
        //clearly, I'm moving the center by the wrong amount, and increasing the radius wrongly too.
        //I've been over this problem for so much time, I cannot think outside the box.
        //my whole world is the box. The box and I are one.
        //maybe someone from outside my world (the box) could tell me where my math is wrong, please.
        foreach (Point current in points)
        {
            float dist = current.Distance(c);
            if (dist > r)
            {
                float diff = dist - r;
                r += diff / 2f;
                float scaleFactor = diff / current.Length();
                Point adjust = current * scaleFactor;
                c += adjust / 2f;
            }
        }
        Center = c;
        Radius = r;
    }

    public bool Contains(Point point) => Center.Distance(point) <= Radius;

    public override string ToString() => $"Center: {Center}; Radius: {Radius}";
}
public class Point
{
    public float X { get; set; }
    public float Y { get; set; }
    public float Z { get; set; }

    public Point(float x, float y, float z)
    {
        X = x;
        Y = y;
        Z = z;
    }

    public float LengthSquared() => X * X + Y * Y + Z * Z;

    public float Length() => (float) Math.Sqrt(X * X + Y * Y + Z * Z);

    public float Distance(Point another)
    {
        return (float) Math.Sqrt(
            (X - another.X) * (X - another.X)
            + (Y - another.Y) * (Y - another.Y)
            + (Z - another.Z) * (Z - another.Z));
    }

    public float DistanceSquared(Point another)
    {
        return (X - another.X) * (X - another.X)
            + (Y - another.Y) * (Y - another.Y)
            + (Z - another.Z) * (Z - another.Z);
    }

    public Point Perpendicular()
    {
        return new Point(-Y, X, Z);
    }

    public Point Midpoint(Point another)
    {
        return new Point(
            (X + another.X) / 2f,
            (Y + another.Y) / 2f,
            (Z + another.Z) / 2f);
    }

    public override string ToString() => $"({X}, {Y}, {Z})";

    public static Point operator +(Point p1, Point p2)
    {
        return new Point(p1.X + p2.X, p1.Y + p2.Y, p1.Z + p2.Z);
    }

    public static Point operator *(Point p1, float v)
    {
        return new Point(p1.X * v, p1.Y * v, p1.Z * v);
    }

    public static Point operator /(Point p1, float v)
    {
        return new Point(p1.X / v, p1.Y / v, p1.Z / v);
    }
}
//Note: this class is here so I can be able to solve the problems suggested by
//Eric Lippert.
public class Line
{
    private float coefficient;
    private float constant;

    public Line(Point p1, Point p2)
    {
        float deltaY = p2.Y - p1.Y;
        float deltaX = p2.X - p1.X;
        coefficient = deltaY / deltaX;
        constant = coefficient * -p1.X + p1.Y;
    }

    public Point FromX(float x)
    {
        return new Point(x, x * coefficient + constant, 0);
    }

    public Point FromY(float y)
    {
        return new Point((y - constant) / coefficient, y, 0);
    }

    public Point Intersection(Line another)
    {
        float x = (another.constant - constant) / (coefficient - another.coefficient);
        float y = FromX(x).Y;
        return new Point(x, y, 0);
    }
}
Can I safely assume this will run at least as fast as the fancy algorithms out there, which, for robustness's sake, usually allow the points to have any number of dimensions, from 2 up to 1,000 or 10,000?
I only need it for 3 dimensions, never more and never less than that. Since I have no academic degree in computer science (or any degree, for that matter; I'm a high-school sophomore), I have difficulty analyzing algorithms for performance and resource consumption. So my question basically is: is my "smallest enclosing sphere for dummies" algorithm comparable in performance and resource consumption to the fancy ones? Is there a point where my algorithm breaks while the professional ones don't, meaning it performs so badly it causes a noticeable loss (for example, if I have too many points)?
EDIT 1: I edited the code because it made no sense at all (I was hungry, it was 4 pm, and I hadn't eaten all day). This one makes more sense, I think; I'm not sure it's correct, though. The original question stands: if this one solves the problem, does it do it well enough to compete with the standard professional algorithms when we know in advance that all points have 3 dimensions?
EDIT 2: Now I'm pretty sure the performance is bad, and I've lost all hope of implementing a naive algorithm to find the smallest enclosing sphere. I just want to make something that works. Please check the latest update.
EDIT 3: Doesn't work either. I quit.
EDIT 4: Finally, after... I don't know, some 5 hours, I figured it out. Jesus Christ. This one works. Could someone tell me about the performance issue? Is it really bad compared to the professional algorithms? What lines can I change to make it better? Is there a point where it breaks? Remember, I will always use it for 3D points.
EDIT 5: I learned from Bychenko that the previous algorithm still didn't work. I slept on the issue, and this is my new version of the algorithm. I know it doesn't work, and I have a good clue where it is wrong; could anyone please tell me why those particular calculations are wrong and how to fix them? I'm inclined to think this has something to do with trigonometry. My assumptions don't hold true for Euclidean space, because I can't stop seeing vectors as single real numbers instead of sets of real numbers that, in my case, pinpoint a location in Euclidean space. I'm pretty sure I'm missing some sine or cosine somewhere in the last loop (of course, not literally a sine or cosine, but the equivalent in Cartesian coordinates, since we don't know any angles).
Addendum to EDIT 5: About the problems proposed by Eric Lippert:
(1) argh, too trivial :p
(2) I will do it for the circle first; I will add a Line class for that.
Point a, b, c; //they are not collinear
Point midOfAB = a.Midpoint(b);
Point midOfBC = b.Midpoint(c);
//multiplying the vector by a scalar as I do below doesn't matter, right?
Point perpendicularToAB = midOfAB.Perpendicular() * 3;
Point perpendicularToBC = midOfBC.Perpendicular() * 3;
Line bisectorAB = new Line(perpendicularToAB, midOfAB);
Line bisectorBC = new Line(perpendicularToBC, midOfBC);
Point center = bisectorAB.Intersection(bisectorBC);
float distA = center.Distance(a);
float distB = center.Distance(b);
float distC = center.Distance(c);
if (distA == distB && distB == distC)
{
    //it works (spoiler alert: it doesn't)
}
else
{
    //you're a failure, programmer, pick up your skate and practice some ollies
}
Sorry, but your algorithm is wrong. It doesn't solve the problem.
Counter example (3 points):
A = (0, 0, 0) - closest to origin (0)
B = (3, 3, 0) - farthest from origin (3 * sqrt(2) == 4.2426...)
C = (4, 0, 0)
your naive algorithm declares that the sphere has center at
P = (3 / sqrt(2), 3 / sqrt(2), 0)
and radius
R = 3 / sqrt(2)
and you can see that the point C = (4, 0, 0) is beyond the sphere
Edit: the updated (but still naive) algorithm is wrong as well.
Counter example (3 points):
A = (0, 0, 0)
B = (1, 2, 0)
C = (4, 1, 0)
according to the algorithm, the sphere has its center at
P = (2, 1, 0)
with radius
R = sqrt(5)
and you can see that the sphere is not a minimal (smallest) one.
Nth Edit: you still have an incorrect algorithm. When exploring a gray zone (you know the problem, but only partially, with holes), it's good practice to invest in test automation. As you should know, in the case of a triangle all the vertices should lie on the sphere; let's validate your solution against this fact:
public static class SphereValidator {
    private static Random m_Random = new Random();

    private static String Validate() {
        var triangle = Enumerable
            .Range(0, 3)
            .Select(i => new Point(m_Random.Next(100), m_Random.Next(100), m_Random.Next(100)))
            .ToArray();
        Sphere solution = new Sphere(triangle);
        double tolerance = 1.0e-5;
        for (int i = 0; i < triangle.Length; ++i) {
            double r = triangle[i].Distance(solution.Center);
            if (Math.Abs(r - solution.Radius) > tolerance) {
                return String.Format("Counter example\r\n  A: {0}\r\n  B: {1}\r\n  C: {2}\r\n  expected distance to \"{3}\": {4}; actual R {5}",
                    triangle[0], triangle[1], triangle[2], (char) ('A' + i), r, solution.Radius);
            }
        }
        return null;
    }

    public static void FindCounterExample(int attempts = 10000) {
        for (int i = 0; i < attempts; ++i) {
            String result = Validate();
            if (!String.IsNullOrEmpty(result)) {
                Console.WriteLine(result);
                return;
            }
        }
        Console.WriteLine(String.Format("Yes! All {0} tests passed!", attempts));
    }
}
I've just run the code above and got:
Counter example
A: (3, 30, 9)
B: (1, 63, 40)
C: (69, 1, 16)
expected distance to "A": 35.120849609375; actual R 53.62698
For a crude approximation, compute the Axis-Aligned Bounding Box, then the bounding sphere of that box (same center, diameter = √(W² + H² + D²) ).
You can refine by computing the largest distance from that center.
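As a rough sketch of that crude approximation, reusing the Point type from the question (the helper name and the tuple return are mine, not part of any library):
static (Point Center, float Radius) ApproximateBoundingSphere(IList<Point> points)
{
    // Axis-aligned bounding box of the input.
    float minX = points.Min(p => p.X), maxX = points.Max(p => p.X);
    float minY = points.Min(p => p.Y), maxY = points.Max(p => p.Y);
    float minZ = points.Min(p => p.Z), maxZ = points.Max(p => p.Z);
    var center = new Point((minX + maxX) / 2f, (minY + maxY) / 2f, (minZ + maxZ) / 2f);
    // Refinement: take the largest actual distance from the box center as the radius,
    // which is never larger than half the box diagonal and still encloses every point.
    float radius = points.Max(p => p.Distance(center));
    return (center, radius);
}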
I have tried many things without finding a good solution, so here I am.
In my game (2D) I have to check collision between a ray going from point A to point B and all my objects (house, garage, ...), which are images inside rotated rectangles.
I'm using XNA, and here is some code:
public void Update(List<Obstacle> Lob, DragObj Ldo)
{
    bool inter = false;
    Point A;
    Point B;
    A = new Point((int)pos.X, (int)pos.Y);
    B = new Point((int)Ldo.Position.X, (int)Ldo.Position.Y);
    for (int j = 0; j < Lob.Count(); j++)
    {
        if (inter = interclass.LineIntersectsRect(A, B, Lob[j].Shape)) // I have this for the moment; Shape is the rectangle, but not rotated
        {
            inter = true;
            islight = false;
        }
        else
        {
            inter = false;
        }
    }
}
So, to solve my problem, either I find a way to get a rotated-rectangle object with a method to check collision with a line, or I do something else entirely, maybe just check collision between my ray and each side of the rotated rectangle?
Thanks for your advice.
I have solved this problem by checking the intersection between my line and each side of the rotated rectangle (I have to rotate each line-side of the rectangle first).
I will post the little algorithm soon.
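Until then, here is a rough sketch of that idea in XNA terms (the names are mine; corners is assumed to hold the rectangle's four corners after its rotation has been applied):
static bool SegmentsIntersect(Vector2 p1, Vector2 p2, Vector2 q1, Vector2 q2)
{
    // Solve the two parametric segment equations; d == 0 means the segments are parallel.
    float d = (p2.X - p1.X) * (q2.Y - q1.Y) - (p2.Y - p1.Y) * (q2.X - q1.X);
    if (d == 0f) return false;
    float t = ((q1.X - p1.X) * (q2.Y - q1.Y) - (q1.Y - p1.Y) * (q2.X - q1.X)) / d;
    float u = ((q1.X - p1.X) * (p2.Y - p1.Y) - (q1.Y - p1.Y) * (p2.X - p1.X)) / d;
    return t >= 0f && t <= 1f && u >= 0f && u <= 1f;
}

static bool LineIntersectsRotatedRect(Vector2 a, Vector2 b, Vector2[] corners)
{
    // Test the ray A-B against each of the four sides of the rotated rectangle.
    for (int i = 0; i < 4; i++)
    {
        if (SegmentsIntersect(a, b, corners[i], corners[(i + 1) % 4]))
            return true;
    }
    return false;
}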
I don't know C# but...
There is this algorithm that can get the closest point on a line from a point.
(Note that the closestPointOnLine function is not my code)
var closestPointOnLine = function(line1, point1)
{
    var A1 = line1.y2 - line1.y1;
    var B1 = line1.x1 - line1.x2;
    var C1 = A1 * line1.x1 + B1 * line1.y1;
    var C2 = -B1 * point1.x + A1 * point1.y;
    var det = A1 * A1 + B1 * B1;
    var cx = 0;
    var cy = 0;
    if (det !== 0)
    {
        cx = ((A1 * C1 - B1 * C2) / det);
        cy = ((A1 * C2 + B1 * C1) / det);
    } else {
        cx = point1.x;
        cy = point1.y;
    }
    return {
        x: constrain(cx, Math.min(line1.x1, line1.x2), Math.max(line1.x1, line1.x2)),
        y: constrain(cy, Math.min(line1.y1, line1.y2), Math.max(line1.y1, line1.y2)),
    };
};
Before we go any further, let's make it clear that our line is:
var lineToTest = {
    x1: someNumber,
    y1: someNumber,
    x2: someNumber,
    y2: someNumber
};
And that our rotated rectangle contains:
var rectToTest = {
    points: [
        { x: someNumber, y: someNumber },
        { x: someNumber, y: someNumber },
        { x: someNumber, y: someNumber },
        { x: someNumber, y: someNumber },
    ],
};
We then take the first point of lineToTest and use closestPointOnLine to get a point on one of the sides of rectToTest.
We then check whether that point is touching lineToTest; if it is not, we repeat for the other sides of the rectangle.
Now, I don't actually know the code to check if a point is touching a line, but it might go something like this:
function isLineTouchingPoint(line1, point)
{
    //Other code here
    //You'll have to use trigonometry for this one though
    return boolean;
}
Now you could convert this code into C# to get it to work.
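For that missing check, a rough C# sketch is below, assuming XNA's Vector2 and treating "touching" as lying within a small tolerance of the segment (the tolerance value is arbitrary):
static bool IsPointTouchingSegment(Vector2 a, Vector2 b, Vector2 p, float tolerance = 0.5f)
{
    Vector2 ab = b - a;
    float lengthSquared = ab.LengthSquared();
    // Degenerate segment: fall back to a plain point-to-point distance check.
    if (lengthSquared == 0f)
        return Vector2.Distance(a, p) <= tolerance;
    // Project p onto the segment and clamp the parameter to [0, 1].
    float t = MathHelper.Clamp(Vector2.Dot(p - a, ab) / lengthSquared, 0f, 1f);
    Vector2 closest = a + t * ab;
    return Vector2.Distance(closest, p) <= tolerance;
}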