I'm creating a Windows desktop app using the UWP Map control (MapControl) and inking, based on this blog post.
I have the basics working: I can draw ink on the canvas and convert the strokes to map locations, but I can't get the points to project correctly onto the map's 3D surface.
I'm using mapControl.GetLocationFromOffset to transform the control-space offset to a map position, but no matter which AltitudeReferenceSystem I use, the points end up offset above or below the 3D mesh. I'm essentially trying to emulate how inking works in the built-in Windows Maps app; since there's a pause after drawing each stroke, I suspect it's doing some ray casting onto the 3D mesh. The latitude/longitude values are correct, but the altitude has an offset.
Geopoint GeopointFromPoint(Point point)
{
    Geopoint geoPoint = null;
    this.map.GetLocationFromOffset(point, AltitudeReferenceSystem.Geoid, out geoPoint); // Tried all alt ref systems here
    return geoPoint;
}
...
BasicGeoposition curP;
foreach (var segment in firstStroke.GetRenderingSegments())
{
    curP = GeopointFromPoint(segment.Position).Position;
    geoPoints.Add(curP);
}
...
This is my inking around a patch of grass in the Maps app, which determines the correct altitude:
And here is the same area using the Map control + GetLocationFromOffset; the blue ink is an annotation showing the altitude offset:
How can I project screen/control space coordinates onto the 3D mesh in the UWP Map control and get the correct altitude?
Edit: And the answer is I'm an idiot; I've been thinking in meters from the center of the earth for too long. I assumed altitude was the altitude above sea level of the map location, but it's not: it's the point's altitude above the map. So just setting the altitude to zero works.
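In other words, in the conversion loop above, something like:

    curP = GeopointFromPoint(segment.Position).Position;
    curP.Altitude = 0; // altitude is interpreted relative to the map, not sea level
    geoPoints.Add(curP);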
Just setting the altitude to zero may still produce unexpected results. The call to GetLocationFromOffset is where your issue is. (You should really use TryGetLocationFromOffset, by the way, and handle the failure cases; it isn't always possible to map a screen offset to a location, for example when you click above the horizon.)
The AltitudeReferenceSystem you pass in is telling the control what to intersect with. You can think of it as a ray you're shooting into the 3D scene at that screen pixel. The ray will intersect the surface first, then the terrain, then the Geoid or Ellipsoid, depending on where you are. You were asking for the intersection with the Geoid, which probably wasn't what you really wanted; for an inking case you probably want the intersection with the surface. The docs aren't terribly clear on what this parameter does and should probably be updated.
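For example, a sketch of your helper adjusted along those lines (untested here, just to show the shape of it):

    Geopoint GeopointFromPoint(Point point)
    {
        Geopoint geoPoint = null;

        // Ask for the intersection with the surface rather than the Geoid;
        // TryGetLocationFromOffset returns false when there is nothing to hit,
        // e.g. when the offset is above the horizon.
        if (this.map.TryGetLocationFromOffset(point, AltitudeReferenceSystem.Surface, out geoPoint))
        {
            return geoPoint;
        }

        return null;
    }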
If you reset all altitudes to 0 with an altitude reference of Surface for the line you pass in, it will always put the line on the surface, but that might not match your ink.
There may actually be a bug here: if your code snippet above was passing the Geopoint returned from GetLocationFromOffset to the polyline, it should appear at the same pixel (you should be able to round-trip values). What was the AltitudeReferenceSystem and value returned for these Geopoints?
The first problem was that I wasn't passing an AltitudeReferenceSystem to the Geopath constructor while translating ink strokes to map polygons. Once you do this, you can pass the result of TryGetLocationFromOffset directly to the polygons.
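For reference, a rough sketch of what that looks like (shown with a MapPolyline; geoPoints is the list built from TryGetLocationFromOffset, and the styling is just an example):

    // Tell the Geopath which reference system the altitudes are in.
    var path = new Geopath(geoPoints, AltitudeReferenceSystem.Surface);
    var polyline = new MapPolyline
    {
        Path = path,
        StrokeThickness = 3
    };
    this.map.MapElements.Add(polyline);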
The second is a possible bug with the map control: if the map control does not take up the entire height of the main window, you see a pixel offset between what TryGetLocationFromOffset returns and where the ink stroke was drawn.
Related
I'm working on a project that has a layer system represented by several planes placed in front of each other. These planes receive different textures, which are projected into a render texture with an orthographic camera to generate composite textures.
This project is being built on top of another system (a game), so I have some restrictions and requirements to make my project work as expected and fit properly into this game. One of the requirements concerns the decals, which have their position and scale represented by a single Vector4 coordinate. I believe this Vector4 represents 4 vertex positions along the X and Y axes (2 for X and 2 for Y). For a better understanding, see the image below:
These Vector4 coordinates seem to be related to the UV of the texture they belong to, because they only have positive values between 0 and 1. I'm having a hard time fitting this coordinate into my project, because Unity's position system uses the traditional Cartesian plane with positive and negative values rather than normalized UV coordinates. So if I use the game's original Vector4 coordinates, the decals get positioned incorrectly, and vice versa (I'm using the original coordinates as a base, but my system is meant to generate content to be used within the game, so these decal coordinates must match the game's standards).
Considering all this, how could I convert the local/global position used by Unity to the UV-style position used by the game?
Anyway, I tried my best to explain my question; I'm not sure whether it has an easy solution or not. I figured out this Vector4 stuff only from observation, so feel free to suggest other ideas if you think I'm wrong about it.
EDIT #1 - Despite the paragraphs above, I'm afraid my intentions could be clearer, so to complement them: the whole point is to find a way to position the decals using the Vector4 coordinates so that they end up in the right positions. The layer system contains bigger planes, which have the full size of the texture, plus smaller ones representing the decals, varying in size. I believe the easiest solution would be to use one of these bigger planes as the "UV area", which would have the normalized positions mentioned. But I don't know how I would do that...
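For what it's worth, if one of those bigger planes is used as the "UV area" and the decal Vector4 really is (xMin, yMin, xMax, yMax) in normalized 0..1 coordinates (that packing is my guess from the description, not something the game documents), the conversion is just a linear remap between the plane's local extents and 0..1. A minimal sketch:

    using UnityEngine;

    public static class DecalCoordinates
    {
        // Local position on the reference plane -> normalized 0..1 "UV style" position.
        public static Vector2 LocalToUV(Vector2 localPos, Rect planeRect)
        {
            return new Vector2(
                Mathf.InverseLerp(planeRect.xMin, planeRect.xMax, localPos.x),
                Mathf.InverseLerp(planeRect.yMin, planeRect.yMax, localPos.y));
        }

        // Normalized 0..1 position -> local position on the reference plane.
        public static Vector2 UVToLocal(Vector2 uv, Rect planeRect)
        {
            return new Vector2(
                Mathf.Lerp(planeRect.xMin, planeRect.xMax, uv.x),
                Mathf.Lerp(planeRect.yMin, planeRect.yMax, uv.y));
        }
    }

Under that assumption, a decal Vector4 of (0.25, 0.25, 0.75, 0.75) would describe a quad covering the middle half of the reference plane on both axes.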
I have a very simple scene with an image and a canvas (its parent).
The image has a shader on it; all it does is multiply vertex.x by 2 (while still in object space) before transforming it into clip space.
The result is the following:
It seems like the image used the canvas's object space instead of its own for the multiplication.
The whole shader looks like this:
I tried using the tag "DisableBatching" = "True" to preserve the image's object space in the shader, but with no success. I even tried different Unity versions. (Yes, I'm getting desperate here :D)
Thanks for any ideas in advance.
The UI system's vertex data is already provided relative to the screen, with (0, 0) in the center and, in my experience, the upper-right corner at (screen.width / 2, screen.height / 2). This might, however, change depending on the platform or your canvas setup.
This is the behaviour you are seeing here: the x coordinate is scaled by 2 relative to the center of your canvas (which will most likely be the extent of your screen in the game view).
There is no "object space" per se; you would need to pass additional data (e.g. in a texture coordinate) depending on your needs.
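One possible way to pass that data from C# (a sketch, not from the original answer): a BaseMeshEffect runs while the vertices are still in the Graphic's own rect space, so you can stash a normalized per-element coordinate into a spare UV channel and read that in the shader instead of vertex.x. The component name and the choice of UV1 are assumptions:

    using UnityEngine;
    using UnityEngine.UI;

    public class LocalRectToUV1 : BaseMeshEffect
    {
        public override void ModifyMesh(VertexHelper vh)
        {
            if (!IsActive())
                return;

            Rect rect = graphic.rectTransform.rect;
            UIVertex vert = new UIVertex();

            for (int i = 0; i < vh.currentVertCount; i++)
            {
                vh.PopulateUIVertex(ref vert, i);

                // Normalize the vertex position to this element's own rect (0..1)
                // and store it in UV1 for the shader to use as a local coordinate.
                float u = Mathf.InverseLerp(rect.xMin, rect.xMax, vert.position.x);
                float v = Mathf.InverseLerp(rect.yMin, rect.yMax, vert.position.y);
                vert.uv1 = new Vector2(u, v);

                vh.SetUIVertex(vert, i);
            }
        }
    }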
I am writing a virtual globe using DirectX, similar to Google Earth. So far, I have completed tessellation and have tested a single texture wrapped over the entire sphere, which was successful. I have written the texture coordinates to correspond with latitude and longitude (90 lat, -180 lon = 0,0 and -90 lat, 180 lon = 1,1).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to set the texture to only span a specific texture coordinate space? I.e. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones).
1. Create geometry that matches the part of the sphere covered by each tile and draw those pieces one after another, setting the correct texture before each draw call (if the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call).
2. Write a pixel shader that evaluates the texture coordinates and chooses the appropriate texture using transformed texture coordinates (the remap involved is sketched after this list).
3. Render all the tile textures into one big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it.
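To make the remap in option 2 concrete (shown in C# purely to illustrate the math; a pixel shader would apply the same scale/offset per fragment, and the tile extents here are just example values):

    // Convert the globe's global texture coordinate (u, v) into a tile's local
    // 0..1 coordinates. Results outside 0..1 mean the tile doesn't cover that point.
    static void GlobalToTileUV(
        float u, float v,             // global coords derived from lat/lon
        float tileU0, float tileV0,   // tile's min corner, e.g. (0.0f, 0.0f)
        float tileU1, float tileV1,   // tile's max corner, e.g. (0.25f, 0.5f)
        out float localU, out float localV)
    {
        localU = (u - tileU0) / (tileU1 - tileU0);
        localV = (v - tileV0) / (tileV1 - tileV0);
    }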
I have a map containing many objects in an area sized 5000*5000.
My screen size is 800*600.
How can I scroll my map? I don't want to move all my objects left and right; I want the "camera" to move. But unfortunately I couldn't find any way to move it.
Thanks
I think you are looking for the transformMatrix parameter to SpriteBatch.Begin (this overload).
You say you don't want the objects to move, but you want the camera to move. But, at the lowest level, in both 2D and 3D rendering, there is no concept of a "camera". Rendering always happens in the same region - and you must use transformations to place your vertices/sprites into that region.
If you want the effect of a camera, you have to implement it by moving the entire world in the opposite direction.
Of course, you don't actually store the moved data. You just apply an offset when you render the data. Emartel's answer has you do that for each sprite. However, using a matrix is cleaner, because you don't have to duplicate the code for every single Draw call; you just let the GPU do it.
To finish with an example: Say you want your camera placed at (100, 200). To achieve this, pass Matrix.CreateTranslation(-100, -200, 0) to SpriteBatch.Begin.
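Put together, it might look roughly like this (assuming XNA 4.0's seven-parameter Begin overload; cameraX, cameraY, someTexture, worldX and worldY are hypothetical names standing in for your own fields):

    // Build the view matrix once per frame from the camera position.
    Matrix cameraTransform = Matrix.CreateTranslation(-cameraX, -cameraY, 0f);

    spriteBatch.Begin(
        SpriteSortMode.Deferred,
        BlendState.AlphaBlend,
        null, null, null, null,
        cameraTransform);

    // Draw sprites at their world positions; the matrix moves them on screen.
    spriteBatch.Draw(someTexture, new Vector2(worldX, worldY), Color.White);

    spriteBatch.End();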
(Performing a frustum cull yourself, as per emartel's answer, is probably a waste of time, unless your world is really huge. See this answer for an explanation of the performance considerations.)
Viewport
You start by creating your camera viewport. In the case of a 2D game it can be as easy as defining the top-left position where you want to start rendering (XNA's screen space has Y growing downward) and expanding it using your screen resolution, in your case 800x600.
Rectangle viewportRect = new Rectangle(viewportX, viewportY, screenWidth, screenHeight);
Here's an example of what your camera would look like if it were offset by (300, 700) (the drawing is very approximate; it's just to give you a better idea):
Visibility Check
Now, you want to find every sprite that intersects the red square, which can be understood as your viewport. This could be done with something similar to the following (this is untested code, just a sample of what it could look like):
List<GameObject> objectsToBeRendered = new List<GameObject>();

foreach (GameObject obj in allGameObjects)
{
    Rectangle objectBounds = new Rectangle(obj.X, obj.Y, obj.Width, obj.Height);
    if (viewportRect.Intersects(objectBounds)) // XNA's Rectangle uses Intersects, not IntersectsWith
    {
        objectsToBeRendered.Add(obj);
    }
}
Here's what it would look like graphically; the green sprites are the ones added to objectsToBeRendered. Adding the objects to a separate list makes it easy to sort them from back to front before rendering them!
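For example, assuming "depth" is simply the object's Y coordinate (a guess about your world; adapt as needed):

    objectsToBeRendered.Sort((a, b) => a.Y.CompareTo(b.Y)); // back to front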
Rendering
Now that we have found which objects are intersecting, we need to figure out where on the screen they will end up.
spriteBatch.Begin();

foreach (GameObject obj in objectsToBeRendered)
{
    // Convert from world coordinates to screen coordinates within the viewport
    Vector2 pos = new Vector2(obj.X - viewportX, obj.Y - viewportY);
    spriteBatch.Draw(obj.GetTexture(), pos, Color.White);
}

spriteBatch.End();
As you can see, we subtract the viewport's X and Y position to bring the object's world position into screen coordinates within the viewport. This means that the small square that could be at (400, 800) in world coordinates would be rendered at (100, 100) on the screen, given the viewport we have here.
Edit:
While I agree with the change of "correct answer", keep in mind that what I posted here is still very useful when deciding which animations to process, which AIs to update, and so on. Letting the camera and the GPU do the work alone prevents you from knowing which objects are actually on screen!
I am using the following code to set up my camera. I can see the elements within a range of about 100 units, and I want the camera to see farther.
projection = Matrix.CreatePerspectiveFieldOfView((3.14159265f/10f), device.Viewport.AspectRatio, 0.2f, 40.0f);
How can I do that?
Look at the documentation for Matrix.CreatePerspectiveFieldOfView.
The last two parameters are the near and far plane distances. They determine the size of the view frustum associated with the camera. The view frustum looks like this:
Everything in the frustum is in the volume that the rasteriser uses for drawing - this includes a depth component. Everything outside this region is not drawn.
Increase the distance of the far plane from the camera.
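For example, keeping the same field of view and near plane but pushing the far plane out (1000 here is just an illustrative value; use whatever your scene actually needs):

    projection = Matrix.CreatePerspectiveFieldOfView(
        (3.14159265f / 10f),          // field of view, unchanged
        device.Viewport.AspectRatio,
        0.2f,                         // near plane, unchanged
        1000.0f);                     // far plane, previously 40.0f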
But don't increase it further than you need to: the larger the distance between the near and far planes, the less resolution the Z-buffer has, and the more likely you are to see artefacts like Z-fighting.