I am writing a virtual globe in DirectX, similar to Google Earth. So far I have completed tessellation and have tested with a single texture wrapped over the entire sphere, which was successful. I have set up the texture coordinates to correspond with latitude and longitude ((90 lat, -180 lon) maps to (0, 0) and (-90 lat, 180 lon) maps to (1, 1)).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to make a texture span only a specific region of texture coordinate space, e.g. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones):
1. Create geometry that matches the part of the sphere covered by each tile and draw the tiles one after another, setting the correct texture before each draw call (if the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call).
2. Write a pixel shader that evaluates the texture coordinates and chooses the appropriate texture, sampling it with transformed texture coordinates.
3. Render all tiles into one big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it.
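As a rough illustration of the lookup logic in option 2 (written here in C# rather than HLSL, and assuming a hypothetical 4 x 2 grid of 90-degree tiles), global texture coordinates can be split into a tile index plus local coordinates within that tile:

// Sketch: map a global (u, v) in [0,1]^2 to a tile index and the local
// coordinates inside that tile. In option 2 this math would live in the
// pixel shader, which would then sample the texture bound for that tile.
static (int TileX, int TileY, float LocalU, float LocalV)
    GlobalToTile(float u, float v, int tilesX = 4, int tilesY = 2)
{
    int tx = Math.Min((int)(u * tilesX), tilesX - 1); // clamp u == 1.0
    int ty = Math.Min((int)(v * tilesY), tilesY - 1);
    float lu = u * tilesX - tx;  // fractional part within the tile
    float lv = v * tilesY - ty;
    return (tx, ty, lu, lv);
}

The same mapping, run on the CPU, also tells you which tiles are visible and therefore which images need to be loaded or evicted as you pan.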
I'm creating a Windows desktop app using the UWP Map control (MapControl) and inking, based on this blog post.
I have the basics working: I can draw ink on the canvas and convert the strokes to map locations, but I can't get the points to project correctly onto the map's 3D surface.
I'm using mapControl.GetLocationFromOffset to transform the control-space offset to a map position, but no matter which AltitudeReferenceSystem I use, the points end up offset above or below the 3D mesh. I'm basically trying to emulate how inking works in the built-in Windows Maps app; since there's a pause after drawing each stroke there, I suspect it does some ray casting onto the 3D mesh. Lat/long are correct, but the altitude has an offset.
Geopoint GeopointFromPoint(Point point)
{
    Geopoint geoPoint = null;
    // Tried all altitude reference systems here
    this.map.GetLocationFromOffset(point, AltitudeReferenceSystem.Geoid, out geoPoint);
    return geoPoint;
}
...
BasicGeoposition curP;
foreach (var segment in firstStroke.GetRenderingSegments())
{
    curP = GeopointFromPoint(segment.Position).Position;
    geoPoints.Add(curP);
}
...
This is my inking around a patch of grass in the Maps app, which determines the correct altitude:
And the same area using the Map control + GetLocationFromOffset; the blue ink is an annotation showing the altitude offset:
How can I project screen/control space coordinates onto the 3D mesh in the UWP Map control and get the correct altitude?
Edit: And the answer is I'm an idiot; I've been thinking in meters from the center of the earth for too long. I thought the altitude was the map location's altitude above sea level, but it's not; it's the point's altitude above the map. So just setting the altitude to zero works.
Just setting the altitude to zero may still produce unexpected results. The call to GetLocationFromOffset is where your issue is. (You should really use TryGetLocationFromOffset, by the way, and handle the failure cases; it isn't always possible to map a screen offset to a location, for example when you click above the horizon.)
The AltitudeReferenceSystem you pass tells the control what to intersect with. You can think of it as a ray you're shooting into the 3D scene at that screen pixel. The ray will intersect the Surface first, then the terrain, then the Geoid or Ellipsoid, depending on where you are. You were asking for the intersection with the Geoid, which probably wasn't what you really wanted; for an inking case you probably want the intersection with the Surface. The docs aren't terribly clear on what this parameter does and should probably be updated.
If you reset all altitudes to 0 with an altitude reference of Surface for the line you pass in, it will always put the line on the surface, but that might not match your ink.
There may actually be a bug here: if your code snippet above was passing the Geopoint returned from GetLocationFromOffset to the polyline, it should appear at the same pixel (you should be able to round-trip values). What was the AltitudeReferenceSystem and value returned for these Geopoints?
The first problem was that I wasn't passing an AltitudeReferenceSystem to the Geopath constructor when translating ink strokes to map polygons. Once you do that, you can pass the result of TryGetLocationFromOffset directly to the polygons.
Second is a possible bug in the map control: if the map control does not take up the entire height of the main window, you see a pixel offset between what TryGetLocationFromOffset returns and where the ink stroke was drawn.
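Putting the first fix together, here is a minimal sketch of the corrected conversion (assuming AltitudeReferenceSystem.Surface is the intersection you want, as described in the answer above; map, firstStroke, and the polyline handling are placeholders):

// Sketch: convert ink segments to a surface-referenced polyline.
var positions = new List<BasicGeoposition>();
foreach (var segment in firstStroke.GetRenderingSegments())
{
    // TryGetLocationFromOffset can fail, e.g. above the horizon.
    if (map.TryGetLocationFromOffset(
            segment.Position, AltitudeReferenceSystem.Surface, out Geopoint p))
    {
        positions.Add(p.Position);
    }
}
// Pass the same reference system to the Geopath constructor; otherwise
// the altitudes are reinterpreted and the line appears offset.
var polyline = new MapPolyline
{
    Path = new Geopath(positions, AltitudeReferenceSystem.Surface)
};
map.MapElements.Add(polyline);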
I am working on a game (in XNA/MonoGame) in which I have a large world consisting of many individual tiles with different textures. I store the world as a 2D array, and to render it I simply loop through every tile and draw it at the correct position.
Recently I implemented a zoom feature in my camera. My game's camera is made in the 'usual' XNA way: you create a matrix based on position and scale and pass it into your SpriteBatch.Begin() calls. My transform matrix is calculated like so:
Transform = Matrix.Identity *
            Matrix.CreateTranslation(-(int)Position.X, -(int)Position.Y, 0) *
            Matrix.CreateTranslation(Origin.X, Origin.Y, 0) *
            Matrix.CreateScale(Scale);
The problem I am now facing is that when I zoom in (by changing the camera's scale variable), some tile textures look odd at certain zoom levels. Here are some pictures showing what I mean:
Here is a map at a perfectly fine zoom level:
Here is a zoomed-out (and cropped) view of the same map; notice how the sand texture is weirdly "upscaled":
I do not have much experience with graphics programming and have no idea what causes this, but it makes the map look very janky.
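For what it's worth, one common cause of artifacts like this is the default linear sampler blending in neighboring texels (or a lower mip level) when tiles are drawn at non-integer scales. A quick experiment is to draw the map with point sampling instead; a minimal sketch, assuming a camera object exposing the Transform built above:

// Sketch: draw the tile layer with point sampling so scaled tiles are
// not blended with neighboring texels. This trades linear filtering's
// smoothing for crisp, bleed-free tiles.
spriteBatch.Begin(
    SpriteSortMode.Deferred,
    BlendState.AlphaBlend,
    SamplerState.PointClamp,             // default is LinearClamp
    DepthStencilState.None,
    RasterizerState.CullCounterClockwise,
    null,                                // no custom effect
    camera.Transform);
// ... draw the tiles ...
spriteBatch.End();

If the tiles live in a shared atlas, padding each tile with a border of duplicated edge pixels is another common mitigation.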
I'm writing a very simple 3D engine in C# and GDI+, just to render some models (I think DirectX or OpenGL would be like using a shovel to eat soup). So far I have successfully implemented drawing the wireframe of my model, but the next step is, of course, faces. And there is my problem: for now I just project my 3D points to 2D points and then, for each face, draw it with a simple g.DrawPolygon(Pens.Red, projected_points); and for the wireframe that's OK.
Is it possible to calculate the overlapping parts of the polygons and then draw them as filled polygons? Or is it a better idea to draw pixel by pixel, and only set a pixel if it is nearer than the depth already in my z-buffer?
If the first option is possible, which one is faster (to implement and to compute)?
Yes, it is possible. You can test every polygon against every other polygon in your list. The complexity depends on the type of polygon (it's easiest with triangles, of course), but performance may drop drastically with a high polygon count. And even if you find the overlapping areas, you will need to interpolate colors, or texture coordinates if you plan to use those. Also, I'm not sure about the API you use for drawing, but GDI doesn't support filling a polygon with interpolated colors.
I have heard that this was the approach used in 3D graphics before the Z-buffer was invented. :)
I once attempted a similar project and used a Z-buffer plus my own routine for filling triangles with interpolated colors (which consults the Z-buffer). I drew directly into a GDI bitmap's pixel data buffer, and after all polygons had been rendered, I BitBlt'ed the result to the screen.
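To make the pixel-by-pixel option concrete, here is a minimal, unoptimized sketch of such a z-buffered triangle fill using barycentric coordinates (flat-shaded for brevity; the names and flat buffer layout are my assumptions, and zBuffer must be initialized to float.MaxValue):

struct Vertex { public float X, Y, Z; public int Color; }

// Rasterize one screen-space triangle into colorBuffer, writing a pixel
// only when its interpolated depth is nearer than the stored z value.
static void FillTriangle(Vertex a, Vertex b, Vertex c,
                         int[] colorBuffer, float[] zBuffer,
                         int width, int height)
{
    // Bounding box of the triangle, clamped to the screen.
    int minX = Math.Max(0, (int)Math.Min(a.X, Math.Min(b.X, c.X)));
    int maxX = Math.Min(width - 1, (int)Math.Max(a.X, Math.Max(b.X, c.X)));
    int minY = Math.Max(0, (int)Math.Min(a.Y, Math.Min(b.Y, c.Y)));
    int maxY = Math.Min(height - 1, (int)Math.Max(a.Y, Math.Max(b.Y, c.Y)));

    float area = (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
    if (area == 0) return; // degenerate triangle

    for (int y = minY; y <= maxY; y++)
    for (int x = minX; x <= maxX; x++)
    {
        // Barycentric weights of the pixel relative to the triangle.
        float w0 = ((b.X - x) * (c.Y - y) - (b.Y - y) * (c.X - x)) / area;
        float w1 = ((c.X - x) * (a.Y - y) - (c.Y - y) * (a.X - x)) / area;
        float w2 = 1f - w0 - w1;
        if (w0 < 0 || w1 < 0 || w2 < 0) continue; // pixel is outside

        float z = w0 * a.Z + w1 * b.Z + w2 * c.Z;  // interpolated depth
        int i = y * width + x;
        if (z < zBuffer[i])
        {
            zBuffer[i] = z;
            colorBuffer[i] = a.Color; // flat shading; interpolate the
                                      // same way as z for smooth colors
        }
    }
}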
I want to know how to remove part of a Texture from a Texture2D.
I have a simple game in which I want to blow up a planet piece by piece; when a bullet hits, it "digs" into the planet.
The physics are already working but I am stuck on how to cut the texture properly.
I need to create a function that takes a Texture2D, a position, and a radius as input and returns the new Texture2D.
Here is an example of the Texture2D before and after what I want to accomplish.
http://img513.imageshack.us/img513/6749/redplanet512examplesmal.png
Also note that I drew a thin brown border around the crater hole. If this is possible as well, it would be a great bonus.
After doing a lot of googling on the subject, it seems the best and fastest way to achieve the effect I want is to use pixel shaders.
More specifically, a shader technique called 'alpha mapping'. Alpha mapping uses the original texture together with a greyscale texture that defines which parts are visible and which are not.
The idea of the shader is to go through each pixel in the original texture and check how black the pixel at the same coordinate in the greyscale image is. The blacker the pixel in the greyscale picture, the higher the alpha value (the more visible) the corresponding pixel in the original texture becomes. Since all of this is done on the GPU, it is lightning fast and leaves the CPU free to do the actual game logic.
For my example I will create a black image to use as my greyscale map and then draw white circles on it corresponding to the parts I want to remove.
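A rough sketch of maintaining that greyscale map in XNA 4 (circleTexture and the sizes are my assumptions; the shader side is covered by the sample linked below). RenderTargetUsage.PreserveContents matters here, since the map must accumulate circles across frames:

// Inside your Game class. The map starts black (nothing removed) and
// accumulates a white circle for every bullet hit.
RenderTarget2D alphaMap;

void CreateAlphaMap(Texture2D planetTexture)
{
    alphaMap = new RenderTarget2D(GraphicsDevice,
        planetTexture.Width, planetTexture.Height, false,
        SurfaceFormat.Color, DepthFormat.None, 0,
        RenderTargetUsage.PreserveContents); // keep earlier circles
    GraphicsDevice.SetRenderTarget(alphaMap);
    GraphicsDevice.Clear(Color.Black);       // everything still visible
    GraphicsDevice.SetRenderTarget(null);
}

// circleTexture is assumed to be a white circle on transparency.
void PunchHole(SpriteBatch spriteBatch, Texture2D circleTexture,
               Vector2 hitCenter, float radius)
{
    GraphicsDevice.SetRenderTarget(alphaMap);
    spriteBatch.Begin();
    float scale = (radius * 2f) / circleTexture.Width;
    spriteBatch.Draw(circleTexture, hitCenter, null, Color.White, 0f,
        new Vector2(circleTexture.Width / 2f, circleTexture.Height / 2f),
        scale, SpriteEffects.None, 0f);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);
}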
I found an MSDN example with working source code for XNA 4 that does this (the cat example):
http://create.msdn.com/en-US/education/catalog/sample/sprite_effects
EDIT:
I got this to work quite nicely. I created a small tutorial with source code here: http://syntaxwarriors.com/2012/xna-alpha-mapping-with-pixel-shaders/
A good way of doing this is to render a "hole texture" with alpha blending on top of your planet texture. Think of it as drawing an invisibility circle over your original texture.
Take a look at this thread for a few nice links: worms-style-destructible-terrain.
To achieve your brown edges, I'd guess you need a similar approach. First render the hole into your terrain with, say, a 10 px radius. Then render another circle from the same origin point but with a slightly larger radius, say 12 px, with a blend mode that results in a brown color.
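For comparison, the hole-plus-rim idea can also be done on the CPU with Texture2D.GetData/SetData, which matches the function shape asked for in the question. A minimal sketch (easy to verify, but slow for large textures or frequent hits; the brown value is a placeholder):

// Sketch: punch a transparent hole of 'radius' texels into 'texture'
// at 'center' (texel coordinates), with a brown rim out to borderRadius.
static void CutCrater(Texture2D texture, Vector2 center,
                      float radius, float borderRadius)
{
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    int minX = Math.Max(0, (int)(center.X - borderRadius));
    int maxX = Math.Min(texture.Width - 1, (int)(center.X + borderRadius));
    int minY = Math.Max(0, (int)(center.Y - borderRadius));
    int maxY = Math.Min(texture.Height - 1, (int)(center.Y + borderRadius));

    for (int y = minY; y <= maxY; y++)
    for (int x = minX; x <= maxX; x++)
    {
        float dist = Vector2.Distance(center, new Vector2(x, y));
        int i = y * texture.Width + x;
        if (dist <= radius)
            pixels[i] = Color.Transparent;        // the hole
        else if (dist <= borderRadius && pixels[i].A > 0)
            pixels[i] = new Color(92, 60, 28);    // the brown rim
    }
    texture.SetData(pixels);
}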
Look at my class here:
http://www.codeproject.com/Articles/328894/XNA-Sprite-Class-with-useful-methods
1. Simply create a Sprite object for your planet:
   Sprite PlanetSprite = new Sprite(PlanetTexture2D, new Vector2(yourPlanet.X, yourPlanet.Y));
2. When the bullet hits the planet, make a circle Texture2D centered on the collision point using the "GetCollisionPoint(Sprite b)" method:
   - you can use a Circle.png with transparent corners,
   - or you can create a circle using math (which is better if you want bullet power to matter).
3. Then create a Sprite object for your circle.
4. Now use "GetCollisionArea(Sprite b)" to get the overlapped area.
5. Now use "ChangeBatchPixelColor(List pixels, Color color)", where pixels is the overlapped area and color is Color.FromNonPremultiplied(0, 0, 0, 0).
   Note: you don't need to draw your circle at all; after using it you can destroy it, or keep it for further use.
We've got a sphere which we want to display in 3D and color according to a function of the spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but how would that affect the coloring? Would it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
I've converted the algorithm to use a linear texture that is really just a lookup into the colormap. This works great and is a much better solution than the previous one: instead of creating a texture of size ThetaSamples x PhiSamples, I now create a fixed 256 x 1 texture.
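For anyone doing the same, here is a minimal sketch of building such a 256 x 1 lookup strip in WPF (ColorForValue, value, minValue, maxValue, and mesh are placeholders for your own colormap function and data):

// Build a 256x1 colormap bitmap and wrap it in a material.
WriteableBitmap colormap = new WriteableBitmap(
    256, 1, 96, 96, PixelFormats.Bgra32, null);
byte[] row = new byte[256 * 4];
for (int i = 0; i < 256; i++)
{
    Color c = ColorForValue(i / 255.0);  // your colormap function
    row[i * 4 + 0] = c.B;                // Bgra32 byte order
    row[i * 4 + 1] = c.G;
    row[i * 4 + 2] = c.R;
    row[i * 4 + 3] = c.A;
}
colormap.WritePixels(new Int32Rect(0, 0, 256, 1), row, 256 * 4, 0);
var material = new DiffuseMaterial(new ImageBrush(colormap));

// Per vertex: u selects the color along the strip, v is arbitrary.
double t = (value - minValue) / (maxValue - minValue);
mesh.TextureCoordinates.Add(new Point(t, 0.5));

Because u varies linearly across each triangle, the GPU's texture interpolation gives the same smooth gradient you would get from per-vertex colors, as long as the colormap itself is smooth.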