For those of you who don't remember exactly what the old Windows Starfield screensaver looked like, here's a YouTube video: http://www.youtube.com/watch?v=r5AoFiVs2ME
Right now, I can generate random particles ("stars") within a certain radius. What I'm having trouble with is figuring out the best way to achieve the effect seen in the afore-linked video.
Question: Given that I have the coordinates (vectors) for my randomly generated particles, what is the best way and/or equation to give them a direction (vector) so that they move across the screen in a way that closely resembles what is seen in the old screensaver?
Thanks!
They seem to move away from the center. You could calculate the vector from the center point of the screen to the generated particle's position, then use that same direction to move the particle, accelerating it until it is outside the screen.
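A minimal sketch of that idea in C#/XNA (the star object, screen dimensions, growth factor, and elapsedSeconds are assumptions, not part of the answer):

    // Direction is the normalized vector from the screen center to the star;
    // speed grows each frame so the star accelerates outward.
    Vector2 center = new Vector2(screenWidth / 2f, screenHeight / 2f);
    Vector2 direction = Vector2.Normalize(star.Position - center);
    star.Speed *= 1.05f;                              // accelerate
    star.Position += direction * star.Speed * elapsedSeconds;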
A basic algorithm for you to work with (a code sketch follows these steps):
Generate stars at a random location with a 3-D Gaussian distribution (middle of the screen most likely, less likely as you go farther from the center). Note that the motion vector of the star is determined by this starting point: the motion will effectively travel along the line formed by the origin point and the starting location, outward.
Assign each newly generated star a distance. Note that distance is irrespective of starting location.
Move the star in a straight line at an exponentially increasing speed while simultaneously decreasing its distance. You'll have to tweak these parameters yourself.
The star should disappear when it passes the boundary of the screen, regardless of speed.
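A hedged sketch of the movement and culling steps above in C#/XNA; the Star type, screenBounds, and the growth constant are assumptions, not part of the original answer:

    // Per-frame update for one star; dt is the elapsed time in seconds.
    void UpdateStar(Star s, float dt)
    {
        s.Speed *= (float)Math.Pow(1.5, dt);   // exponential speed-up
        s.Position += s.Direction * s.Speed * dt;
        s.Distance -= s.Speed * dt;            // star gets "closer"

        // Cull the star once it leaves the screen, regardless of speed.
        if (!screenBounds.Contains((int)s.Position.X, (int)s.Position.Y))
            s.Alive = false;                   // remove dead stars afterwards
    }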
When I was testing my game written in Unity 3D, I saw that the camera passes through the wall in the upper-left part, but in the upper-right part it doesn't.
Can anybody help me please?
Here is a picture; the error is at the upper left:
The wall is closer than the shortest distance at which the camera can render. That distance is called the near clipping plane. You can change it in the Inspector after selecting the camera. Set it to a very small number like 0.01 to render objects at a very close distance and avoid this issue.
Keep in mind that as your far/near ratio increases, your depth buffer precision will decrease, which can cause artifacts. So only reduce the near plane or increase the far plane as much as you really need.
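If you prefer to set it from code instead of the Inspector, a minimal sketch (the component name is a hypothetical example; Camera.nearClipPlane is the Unity property being changed):

    using UnityEngine;

    public class NearPlaneFix : MonoBehaviour
    {
        void Start()
        {
            // Render geometry as close as 0.01 units to the camera.
            GetComponent<Camera>().nearClipPlane = 0.01f;
        }
    }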
I have searched far and wide for an answer to this problem, and I cannot find one so I am asking here.
The Problem:
I have a laser projecting down on a surface from overhead and I want to project some specific size shapes on this surface. In order to do this I need to 'calibrate' the laser to ground it in the real world.
The laser projects in its own coordinate system, ranging from -32000 to 32000 in the x and y directions. I have targets set up on my surface in a rough rectangle (see the image below for more details). The targets are measured in millimeters and form their own coordinate system.
I need to be able to take points in millimeters and get them into this range of -32000 to 32000 accurately in an array of scenarios.
Example:
What is the most accurate way of determining the laser space coordinates of the desired point?
Problem 2:
The projection space is not guaranteed to be flat. It could be tilted in any direction. For example, if the bottom (in relation to the example picture) is raised, the real-world coordinates stay the same in 2-D, but the measured laser coordinates become more of a trapezoid. See the image below.
If anyone has encountered/solved a similar problem or can even begin to point me in the right direction for a solution, it would be greatly appreciated.
Thank you!
I had the same issue on my post right here: https://stackoverflow.com/a/52480400/9130280
As an example, I asked my question about pictures because it was easier to explain, but I applied the solution to device positioning on a surface. It is close to what you are trying to do.
Basically, you have to use the OpenCvSharp 3 library (from NuGet).
First you have to get a homography matrix. The only coordinates you have to know are the corner points. So you fill up two arrays with the corresponding corners and then you use:
homographyMatrix = OpenCvSharp.Cv2.FindHomography(originalPointsList, targetPointsList);
And then, to convert any point in "millimeters" to its equivalent in laser coordinates:
targetPoint = OpenCvSharp.Cv2.PerspectiveTransform(originalPoint, homographyMatrix);
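Put together, a hedged end-to-end sketch (the corner values here are made-up placeholders; FindHomography and PerspectiveTransform are the actual OpenCvSharp calls):

    using OpenCvSharp;

    // Corner correspondences: millimeter space -> laser space (example values).
    var mmCorners = new[]
    {
        new Point2f(0, 0), new Point2f(500, 0),
        new Point2f(500, 300), new Point2f(0, 300)
    };
    var laserCorners = new[]
    {
        new Point2f(-32000, -32000), new Point2f(32000, -32000),
        new Point2f(32000, 32000), new Point2f(-32000, 32000)
    };

    using (var src = InputArray.Create(mmCorners))
    using (var dst = InputArray.Create(laserCorners))
    using (Mat homography = Cv2.FindHomography(src, dst))
    {
        // Map any millimeter point into laser coordinates.
        Point2f[] laserPoint = Cv2.PerspectiveTransform(
            new[] { new Point2f(250, 150) }, homography);
    }

Because a homography models a full planar perspective transform, this should also cover the tilted-surface case in Problem 2, as long as the surface stays planar.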
Let me know if you need more details.
So I am working on a Risk-type game in XNA/C#. I have a map, similar to this one, and I need to be able to detect mouseovers on each territory (number). If these areas were squares, it would be easy, as each could be represented by a rectangle. However, they are polygons of different sizes. Is there a polygon shape that behaves similarly to a square? If there isn't, how would I go about doing this?
I suggest this: attach a color to each number and recreate your picture in those colors, so every shape is drawn in its own particular color. Don't draw it on screen; use it only as a reference map. When the user clicks or moves the mouse over your original map, you simply project the mouse coordinates into the color map, check the color of the pixel lying under the mouse, and, because you have each color associated with a territory number...
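A hedged sketch of that lookup in XNA (colorMap and territoryByColor are hypothetical names for the offscreen texture and the color-to-territory dictionary):

    // Read the color-coded reference map into memory once, at load time.
    Color[] pixels = new Color[colorMap.Width * colorMap.Height];
    colorMap.GetData(pixels);

    int TerritoryAt(int mouseX, int mouseY)
    {
        Color c = pixels[mouseY * colorMap.Width + mouseX];
        int id;
        return territoryByColor.TryGetValue(c, out id) ? id : -1;
    }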
This is not C#-specific (I've never written anything in the language, so I have no idea what APIs there are), but there are two algorithms that come to mind for detecting whether a point is inside a polygon (which can be used to detect whether the mouse point is over a polygon/map shape).
One is based on ray casting: you cast a ray in one direction from the (mouse) point to "infinity" (the edge of the board in this case) and count the number of times it crosses the polygon's edges. If the count is odd, the point is inside the polygon; if it is even, the point is outside.
A wiki link to it: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
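A minimal C# sketch of the ray-casting test (the Vector2 polygon array is an assumption; this is the classic even-odd crossing loop):

    // Returns true if p is inside the polygon 'poly' (vertices in order).
    bool Contains(Vector2[] poly, Vector2 p)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            // Does the edge (poly[j] -> poly[i]) straddle the horizontal ray,
            // and does the crossing lie to the right of p?
            if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
                p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                      (poly[j].Y - poly[i].Y) + poly[i].X)
            {
                inside = !inside;
            }
        }
        return inside;
    }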
The other algorithm that comes to mind works only for triangles, I think, but it can be simpler to implement (taking a quick glance at your shapes, they can easily be broken down into triangles, and some already are triangles). It checks whether the point is on the same (internal) "side" of all the edges of the triangle. To find out which "side" a point is on relative to an edge, you create two vectors: the first is the edge itself (made up of its two points), and the second goes from the first point of that edge to the input point; then you calculate the cross product of those two vectors. The result will be negative or positive, which can be used to determine the "direction".
A link to it: http://www.blackpawn.com/texts/pointinpoly/default.html
(On that page is another algorithm that can also work for triangles)
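A sketch of the same-side test (helper names are mine; the sign of the 2-D cross product tells which side of each edge the point falls on):

    // Z-component of the cross product of (a - o) and (b - o).
    float Cross(Vector2 o, Vector2 a, Vector2 b)
    {
        return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
    }

    // p is inside triangle (a, b, c) if it is on the same side of all edges.
    bool InTriangle(Vector2 a, Vector2 b, Vector2 c, Vector2 p)
    {
        float d1 = Cross(a, b, p);
        float d2 = Cross(b, c, p);
        float d3 = Cross(c, a, p);
        bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos);   // mixed signs => outside
    }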
Hit testing on a polygon is not so difficult to do in real time. You could use a k-d tree for optimisation if the map is huge. Otherwise, find a simple Contains method for a polygon and use that. I have one on another computer; let me know if you'd like it.
So basically I am creating an XNA game at college and I need some help with something, as I cannot seem to figure it out myself and I'm pretty new to this.
Basically I have a spaceship with a scrolling background of stars. I have falling asteroids, and the point of my game is to travel as far as possible without being hit by said asteroids.
I'm really looking for some guidance as to how I could measure a theoretical distance travelled by the ship and then draw it on screen.
Any help would be greatly appreciated.
Many thanks.
Solution A
Somewhere in your code you are defining the offset of the backdrop for each frame. You could just invert* this value and add it to the total amount every frame:
totalDistance += -backdropOffset;
If the offset is defined in pixels you have to convert it to your game world unit (kilometers, lightyears, ...) before displaying the distance.
* If the ship moves forward, the backdrop "slides" in the other direction.
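Drawing the value is then a one-liner; a sketch assuming a SpriteFont has been loaded and the batch has been begun:

    // Convert pixels to game-world units before (or while) displaying.
    spriteBatch.DrawString(font, "Distance: " + (int)totalDistance + " km",
                           new Vector2(10, 10), Color.White);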
Solution B (more work but less headaches)
It is actually not the backdrop that is moving; it's the ship. So why not move the ship and follow it with the camera?
You will be able to do all kinds of motion. Right now you have to invert every movement of the ship and then apply it to the backdrop, which is kind of counter-intuitive, don't you think? With this solution your code will be much closer to reality => fewer headaches during debugging, easier maintenance of your application, and you will be quicker when adding new features.
And of course, getting the total distance would be as trivial as
var totalDistance = myShip.Position.Y;
It might be that my math is rusty or that I'm just stuck inside my own box after trying to solve this for so long; either way, I need your help.
Background: I'm making a 2d-based game in C# using XNA. In that game I want a camera to be able to zoom in/out so that a certain part of objects always are in view. Needless to say, the objects move in two dimensions while the camera moves in three.
Situation: I'm currently using basic trigonometry to calculate which height the camera should be at for all objects to show. I also position the camera between those objects.
It looks something like this:
1. Loop through all objects to find the outer edges of our objects: farRight, farLeft, farUp, farDown.
2. Once we know the edges of what has to be shown, calculate the center, also known as the camera position:
CenterX = farLeft + (farRight - farLeft) * 0.5f;
CenterY = farUp + (farDown - farUp) * 0.5f;
3. Loop through our edges to find the largest value relative to our camera position, i.e. the furthest distance from the center of the screen.
4. Using the largest distance value, we can easily calculate the height needed to show all of those objects (points):
float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
float height = (float)Math.Tan(MathHelper.ToRadians(T)) * (length);
So far so good, the camera positions itself perfectly based on the calculations.
Problem:
a) My rendering target is 1280*720 with a field of view of 45 degrees, so one always sees a bit more on the X-axis, 560 pixels more actually. This is not a problem per se, but it feeds into b)...
b) I want the camera to be a bit further out than it is, so that one sees a bit more of what is happening beyond the furthest point. Sure, this already happens on the X-axis, but that is technically a result of my flawed logic. I want to be able to see more on both the X- and Y-axes and to control this behavior.
Question
Uhm, so to clarify: I would like some input on a way to make the camera position itself so as to create this state:
Objects won't get closer than, say, 150 pixels to the screen edge on the X-axis and 100 pixels on the Y-axis. To achieve this, the camera shall position itself along the Z-axis so that the field of view covers it all.
I don't need help with the coding, just the math and logic of calculating the height of my camera. As you can probably tell, I have a hard time wrapping my head around this, and an even harder time trying to explain it to you.
If anyone out there has been dealing with this or is just better than me at math, I'd appreciate whatever you have to say! :)
Don't you just need to add or subtract 150 or 100 pixels (depending on which edge you are looking at) to each distance measurement in your loop at step 3, and carry this larger value into length at step 4? Or am I missing something?
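A hedged sketch of that padding; the margins are given in screen pixels, so they need converting into world units first (worldUnitsPerPixel is a hypothetical conversion factor, not something from the question):

    // Pad the largest step-3 distances before computing the height (step 4).
    float padX = 150f * worldUnitsPerPixel;
    float padY = 100f * worldUnitsPerPixel;
    float length = Math.Max(maxDistanceX + padX, maxDistanceY + padY);

    float T = 90f - Constants.CAMERA_FIELDOFVIEW * 0.5f;
    float height = (float)Math.Tan(MathHelper.ToRadians(T)) * length;

Note that worldUnitsPerPixel itself depends on the final camera height, so this calculation may need to be iterated once or twice before it settles.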
I can't explore this area further at the moment, but if anyone has the same issue and is not satisfied by the provided answer, there is another possibility in XNA.
Viewport.Unproject()
This nifty feature converts a screen space coordinate to a world space one.
Viewport.Project()
Does the opposite thing, namely converting world space to screen space. Just thought that someone might want to go further than me. As much as my OCD hates to leave things imperfect, I can't keep perfecting this... yet.
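For anyone picking this up, minimal sketches of the two calls, assuming you already have view and projection matrices (these are the XNA Viewport signatures):

    // World space -> screen space.
    Vector3 screenPos = GraphicsDevice.Viewport.Project(
        worldPoint, projection, view, Matrix.Identity);

    // Screen space -> world space (e.g. for picking).
    Vector3 worldPos = GraphicsDevice.Viewport.Unproject(
        new Vector3(mouseX, mouseY, 0f), projection, view, Matrix.Identity);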