The bouncing ball example shown in the MATLAB ODE solver documentation has a way to terminate integration when an event triggers (https://www.mathworks.com/help/matlab/math/ode-event-location.html). In that example it terminates when the height is 0. You could also terminate when the slope changes from positive to negative (the apex of the ball's flight) or from negative to positive (when the ball hits the floor).
Is there a way to implement this kind of triggering in Math.NET's RungeKutta.FourthOrder()?
Also, is there better documentation anywhere besides the tests and the class reference? The information there is pretty thin: https://numerics.mathdotnet.com/api/MathNet.Numerics.OdeSolvers/RungeKutta.htm#FourthOrder.
Any help is appreciated!
The principal idea is to check, at every integration step, whether the event function has a root in the current segment. To facilitate that, an interpolating polynomial for the solution is used, the so-called dense output. This is inserted into the event function, and an ordinary root-finding procedure is applied to the composite function: usually the interval is first sampled to look for sign changes, and the root is then refined by some method with a bracketing interval.
For RK4 there exists a cubic interpolation polynomial constructed from the stage values. It gives results of middling quality: better than just the secant between the step endpoints, but not with the full 4th or 5th order accuracy of the step error.
Using dy=(k1+2*k2+2*k3+k4)/6 and k1,k2,k3,k4 from the step computation,
the interpolated value y at t between t0 and t1=t0+h is obtained as
s = (t-t0)/h
y = y0+s*(3*(1-s)**2+s**2)*dy + s*(1-s)/2*(k1-2*(1-2*s)*(k2+k3)-k4)
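As far as I can tell, Math.NET's RungeKutta.FourthOrder only returns an array of solution values and does not expose the stage values or offer event callbacks, so you would have to roll your own fixed-step RK4 loop to apply the idea above. Below is a minimal sketch, using the bouncing ball (height and velocity under gravity) as an assumed event function; the step size, the number of bisection iterations and the termination behaviour are all arbitrary choices, not anything prescribed by the library.

using System;

class Rk4EventDemo
{
    // Right-hand side of y' = f(t, y); here (height, velocity) of a falling ball.
    static double[] F(double t, double[] y) => new[] { y[1], -9.81 };

    static void Main()
    {
        double t = 0, h = 0.01;
        double[] y = { 1.0, 0.0 };                               // 1 m above the floor, at rest
        Func<double, double[], double> ev = (tt, yy) => yy[0];   // event: height == 0

        while (t < 10)
        {
            // classic RK4 stages, scaled by h so that dy = (k1 + 2k2 + 2k3 + k4)/6
            double[] k1 = Scale(h, F(t, y));
            double[] k2 = Scale(h, F(t + h / 2, Add(y, Scale(0.5, k1))));
            double[] k3 = Scale(h, F(t + h / 2, Add(y, Scale(0.5, k2))));
            double[] k4 = Scale(h, F(t + h, Add(y, k3)));

            // cubic dense output from the formula above, s in [0, 1]
            Func<double, double[]> dense = s =>
            {
                var u = new double[y.Length];
                for (int i = 0; i < u.Length; i++)
                {
                    double dy = (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6;
                    u[i] = y[i] + s * (3 * (1 - s) * (1 - s) + s * s) * dy
                         + s * (1 - s) / 2 * (k1[i] - 2 * (1 - 2 * s) * (k2[i] + k3[i]) - k4[i]);
                }
                return u;
            };

            // sample the step endpoints and look for a sign change of the event function
            double e0 = ev(t, dense(0)), e1 = ev(t + h, dense(1));
            if (e0 > 0 && e1 <= 0)
            {
                // bisection on s in [0, 1] to refine the event location
                double lo = 0, hi = 1;
                for (int i = 0; i < 50; i++)
                {
                    double mid = 0.5 * (lo + hi);
                    if (ev(t + mid * h, dense(mid)) > 0) lo = mid; else hi = mid;
                }
                Console.WriteLine($"event at t = {t + hi * h:F4}");
                return;                                          // terminate integration
            }

            y = dense(1);                                        // dense(1) equals the RK4 update
            t += h;
        }
    }

    static double[] Scale(double a, double[] v)
    {
        var r = new double[v.Length];
        for (int i = 0; i < v.Length; i++) r[i] = a * v[i];
        return r;
    }

    static double[] Add(double[] a, double[] b)
    {
        var r = new double[a.Length];
        for (int i = 0; i < a.Length; i++) r[i] = a[i] + b[i];
        return r;
    }
}

The dense closure reproduces y0 at s = 0 and the full RK4 update at s = 1, so the bisection stays consistent with the step endpoints.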
I have a WCF service from which I can get the distance in meters between two points (latitude and longitude) via the contract method:
public double GetDistance(double originLat, double originLng, double destLat, double destLng)
One of the points is a constant point, and the other point is one of several locations I need to extract from a database according to some other information I receive. The end goal is to get the 5 closest locations to that constant point.
Imagine that using the WCF service costs money per request. With the most direct approach I would need to get all the locations from the database and then make a request to the service for each one. Is there a way to do better, for example by filtering the locations in the database so that fewer requests go to the service?
This method is just a mathematical function, so there's no need to host it in a WCF service. Whatever is calling this service should just have its own local version of this method. That will minimize the service requests by eliminating them, and it will be insanely faster.
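For example, if the remote GetDistance really is just a great-circle formula on the coordinates (an assumption on my part; a road-routing service obviously cannot be replaced this way), a local stand-in could be as small as this sketch:

using System;

static class GeoDistance
{
    // Great-circle (haversine) distance in metres; only a stand-in for the
    // remote GetDistance call if that call is a pure coordinate formula.
    public static double GetDistance(double originLat, double originLng,
                                     double destLat, double destLng)
    {
        const double earthRadiusM = 6371000.0;
        double dLat = ToRad(destLat - originLat);
        double dLng = ToRad(destLng - originLng);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRad(originLat)) * Math.Cos(ToRad(destLat))
                 * Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
        return earthRadiusM * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    static double ToRad(double degrees) => degrees * Math.PI / 180.0;
}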
From the additional details, it sounds like you're also executing a query that returns a number of points, and out of those points you want to find the five that are closest to a given location.
Caching only helps if you're making the same requests with some frequency. It's possible that the first query, which returns a collection of points, might get repeated, so it might make some sense to cache the collection of points for a given query.
But unless the location that you're comparing to those points is also frequently repeated, adding it would mess up your caching.
For example, this might benefit from caching...
Points[] GetPointsUsingSomeQuery(queryInput)
...if queryInput repeats over and over.
But if you change it to this...
Points[] GetPointsClosestToSomeLocation(queryInput, Point location)
...then any benefit of caching goes out the window if location isn't frequently repeated. You'd just be caching a bunch of data and never using it because you never make the exact same request twice.
That's also why caching probably won't help with your original function. Unless you're going to repeat exact combinations over and over, you'd never find the result you're looking for in the cache. Even if it repeats occasionally it probably isn't worth it. You'd still make a lot of requests and you'd also store lots of data you're not using in the cache.
Your best bet is to overcome whatever constraint says that you can't execute this mathematical function locally.
If you are trying to find the point-to-point (straight-line/flight) distance between two long/lat points, then you can look at the answer below:
SO Answer
If you are checking distance by road, then your only option is to cache the results between those points if they are requested often. Be careful with caching: your provider might forbid it, so it's best to check their T&Cs.
In the end, the answer is to treat the (Longitude, Latitude) as (x, y) coordinates and calculate the length of the line from the starting point to the current (x, y) with the formula:
d = sqrt((x1-x2)^2 + (y1-y2)^2)
We first read 5 points, calculate their distances, and keep track of the maximum distance and the point it belongs to (using a stack or a list so that several distances and points can be kept). For each further point we read, we simply calculate its distance and replace the current maximum distance and point if the new distance is lower.
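A minimal sketch of that selection in C#, assuming a hypothetical Location type with Latitude/Longitude and that the planar approximation is good enough for ranking nearby points. A sort is the simplest way to write it; the running "replace the farthest of the current five" described above avoids sorting the whole set.

using System.Collections.Generic;
using System.Linq;

class Location
{
    public string Name { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

static class NearestLocations
{
    // Rank by squared planar distance; the square root is monotonic, so it can
    // be skipped when only the ordering matters, not the actual distance.
    public static List<Location> FiveClosest(
        IEnumerable<Location> candidates, double originLat, double originLng)
    {
        return candidates
            .OrderBy(p => (p.Latitude - originLat) * (p.Latitude - originLat)
                        + (p.Longitude - originLng) * (p.Longitude - originLng))
            .Take(5)
            .ToList();
    }
}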
I've implemented a probabilistic roadmap method (PRM) function, which works and executes correctly. The only problem is that the output of the PRM is not smooth. For example, if a hand needs to rotate from 30 to 100 degrees, the steps might be 30, 55, 42, 66, 99, 100; I want to be able to smooth the transition between 30 and 100 degrees. I know the problem is related to smoothing of a signal, yet I don't know what type of smoothing would do the job. No sophisticated method is needed. My implementation is in C#, and if possible I would like to let a library do this job. Is there such a library, to which I can give an array of integers and get back an array of smoothed values?
I think what you need is a simple curve fitting algorithm. A quick google search will give you lots of example code. And if you want to have a strictly increasing curve, you need to sort the values before you do the curve fitting.
If you are just interested in reaching the target, you can drop the values in between and do a linear interpolation from start to end or something similar.
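If a library feels like overkill, a plain moving average already removes most of the jitter from a short sequence like 30, 55, 42, 66, 99, 100. A minimal sketch, where the window size is an arbitrary assumption and the endpoints are pinned so the motion still starts at 30 and ends at 100:

using System;

static class PathSmoothing
{
    // Simple centred moving average; the endpoints are kept unchanged so the
    // start and goal configurations are not altered.
    public static double[] MovingAverage(int[] values, int window = 3)
    {
        int half = window / 2;
        var result = new double[values.Length];
        for (int i = 0; i < values.Length; i++)
        {
            if (i == 0 || i == values.Length - 1)
            {
                result[i] = values[i];          // pin the endpoints
                continue;
            }
            int from = Math.Max(0, i - half);
            int to = Math.Min(values.Length - 1, i + half);
            double sum = 0;
            for (int j = from; j <= to; j++) sum += values[j];
            result[i] = sum / (to - from + 1);
        }
        return result;
    }
}

For the example above, PathSmoothing.MovingAverage(new[] { 30, 55, 42, 66, 99, 100 }) yields roughly 30, 42, 54, 69, 88, 100, which rises monotonically instead of dipping back to 42.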
I am working on a graphing calculator application, and of course, the main feature of the application is to display graphs.
Right now, this is how my graph-plotting algorithm works: I divide the drawing canvas into N intervals (where N is defined in the application's settings; the default value is about 700). For each interval, I evaluate the function at the two ends and draw a segment between the two points.
Here are the disadvantages I found to this method:
The precision of the graph isn't great (for example, for the function sin(tan(x))).
Rendering gets slow for a higher number of intervals (e.g. N is above 1000). Also, zoom and navigation controls suffer.
So is there a better approach to drawing graphs?
I am programming in C# (WPF), but I think this is irrelevant, because I am looking for an algorithm.
A better approach would be to use adaptive interval sizes. That is, start with relatively coarse intervals, say 20. For each interval, compute the function for the interval ends and the middle. If the middle point is close to the line connecting the two end points, draw a line and you're done with that interval. If not, split the interval in two and repeat with the two smaller intervals.
If the interval gets too small without converging to a line, you've probably found a discontinuity and should not connect the interval endpoints.
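A hedged sketch of that recursion; the flatness tolerance and the minimum interval are arbitrary assumptions, and the output is simply the list of points to join with segments (a real plotter would also insert a break instead of connecting across a suspected discontinuity):

using System;
using System.Collections.Generic;

static class AdaptivePlot
{
    // Recursively subdivide [x0, x1] until the midpoint of each sub-interval is
    // close enough to the chord between its endpoints, then emit the points.
    public static List<(double X, double Y)> Sample(
        Func<double, double> f, double x0, double x1,
        double tolerance = 1e-3, double minInterval = 1e-4)
    {
        var points = new List<(double X, double Y)> { (x0, f(x0)) };
        Subdivide(f, x0, f(x0), x1, f(x1), tolerance, minInterval, points);
        points.Add((x1, f(x1)));
        return points;
    }

    static void Subdivide(Func<double, double> f,
        double x0, double y0, double x1, double y1,
        double tol, double minInterval, List<(double X, double Y)> points)
    {
        double xm = 0.5 * (x0 + x1);
        double ym = f(xm);
        double chord = 0.5 * (y0 + y1);   // value of the straight chord at xm

        if (x1 - x0 < minInterval)
            return;   // did not converge: likely a discontinuity; a real plotter would break the line here

        if (Math.Abs(ym - chord) <= tol)
            return;   // midpoint is close to the chord: one segment is good enough

        Subdivide(f, x0, y0, xm, ym, tol, minInterval, points);
        points.Add((xm, ym));
        Subdivide(f, xm, ym, x1, y1, tol, minInterval, points);
    }
}

For example, AdaptivePlot.Sample(x => Math.Sin(Math.Tan(x)), -2, 2) concentrates points near the steep spikes of sin(tan(x)) while flat regions stay cheap; the resulting list can be drawn as one polyline.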
You don't need to write your own algorithm if you are just plotting arbitrary functions. Use a graph control from a relevant library (see here) and provide the necessary data (x, y coordinates).
I hope this snippet of a C++ program helps; I made it a few years back using the primitive graphics.h library ported for the MinGW compiler. The variable names are fairly self-explanatory.
void func_gen(char expr[100], float precision, int color)
{
    // xres/yres, zoom_factor, xby2/yby2 and xshift/yshift are globals holding
    // the screen resolution, the zoom level and the pan offsets of the plot.
    float x = -(xres / 2) / (float)zoom_factor;
    float max_range = -x;
    while (x <= max_range)
    {
        float y = evalu(expr, x);                      // user-defined function that evaluates an expression
        float xcord = xby2 + zoom_factor * x + xshift; // world -> screen transform
        float ycord = yby2 - zoom_factor * y + yshift;
        if (xcord <= xres && xcord >= 0 && ycord >= 0 && ycord <= yres)
            putpixel(xcord, ycord, color);             // only plot points inside the window
        x = x + precision;
    }
}
This method gets pretty slow when I reduce the precision value (which actually increases the precision of the plot :p, sorry for the confusing naming).
I think you should draw with DrawPath. That method uses an auxiliary structure (a GraphicsPath) optimized for exactly the kind of task you are coding. Edit: a small optimization would be to evaluate the function only at the left endpoint of each segment, and evaluate the right endpoint only for the last segment.
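For what it is worth, the GDI+ version of that idea looks roughly like the sketch below; in WPF the equivalent container would be a StreamGeometry or PathGeometry rather than a GraphicsPath. The uniform sampling and the world-to-viewport mapping here are just illustrative assumptions.

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

static class PathPlotter
{
    // Build a single GraphicsPath from uniformly sampled points and draw it in
    // one DrawPath call instead of drawing hundreds of individual segments.
    public static void Plot(Graphics g, Func<double, double> f,
                            double xMin, double xMax, double yMin, double yMax,
                            Rectangle viewport, int samples = 700)
    {
        var points = new PointF[samples + 1];
        for (int i = 0; i <= samples; i++)
        {
            double x = xMin + i * (xMax - xMin) / samples;
            double y = f(x);
            float px = viewport.Left + (float)((x - xMin) / (xMax - xMin)) * viewport.Width;
            float py = viewport.Bottom - (float)((y - yMin) / (yMax - yMin)) * viewport.Height;
            points[i] = new PointF(px, py);
        }

        using (var path = new GraphicsPath())
        using (var pen = new Pen(Color.Blue))
        {
            path.AddLines(points);    // one path holding the whole curve
            g.DrawPath(pen, path);
        }
    }
}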
I've got two polygons, each defined as a list of vectors, and I've managed to write routines to transform and intersect them (see Frame 1 below). Using line intersection I can figure out whether they collide, and I have written a working Collide() function.
This is to be used in a variable-timestep game, so (as shown below) in Frame 1 the right polygon is not colliding, yet it is perfectly normal for the polygons to be right inside each other by Frame 2, with the right polygon having moved to the left.
My question is: what is the best way to figure out the moment of intersection? In the example, let's assume that in Frame 1 the right polygon is at X = 300, and that in Frame 2 it has moved by -100 and is now at 200; that's all I know by the time Frame 2 comes about. What I want to know is when it actually collided, and at what X value; here it was probably about 250.
I'm preferably looking for a C# source code solution to this problem.
Maybe there's a better way of approaching this for games?
I would use the separating axis theorem, as outlined here:
Metanet tutorial
Wikipedia
Then I would sweep test or use multisampling if needed.
GMan here on StackOverflow wrote a sample implementation over at gpwiki.org.
This may all be overkill for your use-case, but it handles polygons of any order. Of course, for simple bounding boxes it can be done much more efficiently through other means.
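For reference, the static SAT test for two convex polygons is fairly short. Here is a sketch using System.Numerics.Vector2 in place of your own vector type; the sweep test / multisampling part is not shown.

using System.Numerics;

static class Sat
{
    // Separating Axis Theorem for convex polygons: they intersect exactly when
    // no edge normal of either polygon yields separated projection intervals.
    public static bool Intersect(Vector2[] a, Vector2[] b)
    {
        return !HasSeparatingAxis(a, b) && !HasSeparatingAxis(b, a);
    }

    static bool HasSeparatingAxis(Vector2[] poly, Vector2[] other)
    {
        for (int i = 0; i < poly.Length; i++)
        {
            Vector2 edge = poly[(i + 1) % poly.Length] - poly[i];
            Vector2 axis = new Vector2(-edge.Y, edge.X);   // perpendicular to the edge

            Project(poly, axis, out float minA, out float maxA);
            Project(other, axis, out float minB, out float maxB);

            if (maxA < minB || maxB < minA)
                return true;   // a gap exists on this axis, so the polygons are separated
        }
        return false;
    }

    static void Project(Vector2[] poly, Vector2 axis, out float min, out float max)
    {
        min = max = Vector2.Dot(poly[0], axis);
        foreach (var p in poly)
        {
            float d = Vector2.Dot(p, axis);
            if (d < min) min = d;
            if (d > max) max = d;
        }
    }
}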
I'm no mathematician either, but one possible though crude solution would be to run a mini simulation.
Let us call the moving polygon M and the stationary polygon S (though there is no requirement for S to actually be stationary, the approach should work just the same regardless). Let us also call the two frames you have F1 for the earlier and F2 for the later, as per your diagram.
If you were to translate polygon M back towards its position in F1 in very small increments until such time that they are no longer intersecting, then you would have a location for M at which it 'just' intersects, i.e. the previous location before they stop intersecting in this simulation. The intersection in this 'just' intersecting location should be very small — small enough that you could treat it as a point. Let us call this polygon of intersection I.
To treat I as a point you could choose the vertex of it that is nearest the centre point of M in F1: that vertex has the best chance of being outside of S at time of collision. (There are lots of other possibilities for interpreting I as a point that you could experiment with too that may have better results.)
Obviously this approach has some drawbacks:
The simulation will be slower for greater speeds of M, since the distance between its locations in F1 and F2 will be greater and more simulation steps will need to be run. (You could address this by using a fixed number of simulation cycles irrespective of the speed of M, but then the accuracy of the result would differ for faster and slower moving bodies.)
The 'step' size in the simulation will have to be sufficiently small to get the accuracy you require but smaller step sizes will obviously have a larger calculation cost.
Personally, without the necessary mathematical intuition, I would go with this simple approach first and try to find a mathematical solution as an optimization later.
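A rough sketch of that back-stepping loop; collidesAtX is a hypothetical wrapper around your existing Collide() with the moving polygon translated to the given X position, and the step count is an arbitrary choice.

using System;

static class CollisionTime
{
    // M collided somewhere between xStart (Frame 1) and xEnd (Frame 2): walk it
    // back from xEnd in small increments until the polygons no longer collide,
    // then report the previous (still colliding) X position.
    public static double FindApproximateCollisionX(
        Func<double, bool> collidesAtX, double xStart, double xEnd, int steps = 100)
    {
        double dx = (xStart - xEnd) / steps;   // step back towards the Frame 1 position
        double x = xEnd;

        for (int i = 0; i < steps; i++)
        {
            double next = x + dx;
            if (!collidesAtX(next))
                return x;                      // last X at which the polygons still touched
            x = next;
        }
        return xStart;                          // never separated: already overlapping in Frame 1
    }
}

In your example, calling it with xStart = 300 and xEnd = 200 (and a lambda that moves the right polygon to the given X and runs your Collide()) would step back from 200 towards 300 and return roughly 250, the last position at which the polygons still touch.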
If you have the ability to determine whether the two polygons overlap, one idea might be to use a modified binary search to detect where the two hit. Start by subdividing the time interval in half and seeing if the two polygons intersected at the midpoint. If so, recursively search the first half of the range; if not, search the second half. If you specify some tolerance level at which you no longer care about small distances (for example, at the level of a pixel), then the runtime of this approach is O(log(D / K)), where D is the distance between the polygons and K is the cutoff threshold. If you know which point is going to ultimately enter the second polygon, you should be able to detect the collision very quickly this way.
Hope this helps!
For a rather generic solution, and assuming ...
no polygons are intersecting at time = 0
at least one polygon is intersecting another polygon at time = t
and you're happy to use a C# clipping library (eg Clipper)
then use a binary approach to derive the time of intersection, along these lines...
double tInterval = t;
double tCurrent = 0;
int direction = +1;

while (tInterval > MinInterval)
{
    tInterval = tInterval / 2;
    tCurrent += (tInterval * direction);
    MovePolygons(tCurrent);
    if (PolygonsIntersect)
        direction = -1;   // intersecting: the first contact is earlier, so step back in time
    else
        direction = +1;   // not intersecting yet: the first contact is later, so step forward
}
Well, you may see that it's always a vertex of one of the polygons that hits a side of the other first (or another vertex, but that's almost the same after all). A possible solution would be to calculate the distance of the vertices from the other polygon's edges along the direction of movement. But I think this would end up being rather slow.
I guess normally the distances between frames are so small that it's not important to know exactly where it hit first; some small intersections will not be visible, and after all the things will rebound or explode anyway, won't they? :)
I was wondering if anyone has experience with implementing pathfinding using scent: the 'enemy' moves towards whichever surrounding node has the strongest scent.
Thanks
Yes, I did my university final project on the subject.
One of the applications of this idea is for finding the shortest path.
The idea is that the 'scent', as you put it, will decay over time. But the shortest path between two points will have the strongest scent.
Have a look at this paper.
What did you want to know exactly?
It's not quite clear what the question is in particular, but this just seems like another way of describing ant colony optimization:
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
Well, think about it for a minute.
My idea would be to divide the game field into sections of 32x32 (or whatever size your character is). Then run some checks every x seconds (so if the player stays still, the tiles around them will accumulate more 'scent') to figure out how strong the scent is on any given tile. Some examples might be: 1) if the player crosses over a tile, add 3; 2) if they crossed over an adjacent tile, add 1.
Then add degradation over time: reduce every tile by 1 every x seconds until it hits zero.
The last thing you will need to worry about is using the AI to track this path. I would recommend just putting the AI somewhere and telling it to find a node with a scent, then go to an adjacent node with a higher or equal scent value. Also worry about crossing off paths already taken: if the player goes up a path and then comes back down another way, make sure the AI doesn't always just take the looped-back path.
The last thing to look at with the AI would be to add a bit of error. Make the AI take the wrong path every once in a while. Or lose the trail a little more easily.
Those are the key points; I'm sure you can come up with more with some brainstorming.
Every game update (or some other, less frequent time frame), increase the scent value of nodes near to where the target objects (red blobs) are.
Decrease all node scent values by some fall-off amount to zero.
In the yellow blob's think/move function get available nodes to move to. Move towards the node with the highest scent value.
Depending on the number of nodes, the 'decrease all node scent values' step could do with optimisation, e.g. maintaining a list of non-zero nodes to be decreased.
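A minimal sketch of that update loop; the grid size, deposit amount, decay rate and four-neighbour movement are all arbitrary assumptions.

using System;

class ScentGrid
{
    readonly double[,] scent;
    readonly int width, height;

    public ScentGrid(int width, int height)
    {
        this.width = width;
        this.height = height;
        scent = new double[width, height];
    }

    // Called every update for each target (red blob) position.
    public void Deposit(int x, int y, double amount = 3.0) => scent[x, y] += amount;

    // Called every update (or less often): fall-off towards zero.
    public void Decay(double rate = 0.1)
    {
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                scent[x, y] = Math.Max(0, scent[x, y] - rate);
    }

    // The hunter (yellow blob) moves to the neighbouring node with the
    // strongest scent, staying put if nothing nearby smells of anything.
    public (int X, int Y) NextMove(int x, int y)
    {
        var best = (X: x, Y: y);
        double bestScent = scent[x, y];
        int[] dx = { -1, 1, 0, 0 };
        int[] dy = { 0, 0, -1, 1 };
        for (int i = 0; i < 4; i++)
        {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            if (scent[nx, ny] > bestScent)
            {
                bestScent = scent[nx, ny];
                best = (nx, ny);
            }
        }
        return best;
    }
}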
I see a big contradiction between the scent model and pathfinding. For a hunter in nature, finding the path by scent means finding exactly the path used by the followed subject. In games, pathfinding means finding the fastest path between two points. They are not the same thing.
1. When modelling scent, you compute the scent concentration at a point as the SUM of the surrounding concentrations multiplied by various factors. Searching for the fastest path from a point, by contrast, means taking the MINIMUM of the times computed for the surrounding points, multiplied by various parameters.
2. To compute scent you need a recursive model, since scent spreads in all directions, including backwards. In the case of pathfinding, once you have found the shortest paths for the points surrounding the target, they won't change.
3. The level of scent can rise and fall. In pathfinding, while searching for the minimum, the result can never rise.
So the scent model is really much more complicated than what you need. Of course, what I have said is true only for the standard situation; you may have something very special in mind...