I want to know how to remove part of a Texture from a Texture2D.
I have a simple game in which I want to blow up a planet piece by piece; when a bullet hits, it "digs" into the planet.
The physics are already working but I am stuck on how to cut the texture properly.
I need to create a function that takes a Texture2D, a position, and a radius as input and returns the new Texture2D.
Here is an example of the Texture2D before and after what I want to accomplish.
http://img513.imageshack.us/img513/6749/redplanet512examplesmal.png
Also note that I drew a thin brown border around the crater hole. If this is possible too, it would be a great bonus.
After doing a lot of googling on the subject, it seems the best and fastest way to achieve the effect I want is to use pixel shaders.
More specifically, a technique called 'alpha mapping'. Alpha mapping uses the original texture together with a second, greyscale texture that defines which parts are visible.
The idea of the shader is to go through each pixel of the original texture and check how black the pixel at the same coordinate in the greyscale image is: the blacker the greyscale pixel, the higher the alpha value (the more visible) the corresponding pixel in the original texture becomes. Since all of this is done on the GPU it is lightning fast and leaves the CPU free for the actual game logic.
For my example I will create a black image to use as my greyscale map and then draw white circles on it corresponding to the parts I want to remove.
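The mask itself can be built on the GPU as well. Here is a minimal sketch of that part, assuming XNA 4, a small white-circle texture (circleTex) and a list of crater positions; all names are illustrative:

    // Keep the alpha map in a render target and splat one white circle per crater.
    RenderTarget2D alphaMap = new RenderTarget2D(GraphicsDevice,
        planetTexture.Width, planetTexture.Height);

    GraphicsDevice.SetRenderTarget(alphaMap);
    GraphicsDevice.Clear(Color.Black);              // black = planet pixel stays visible

    spriteBatch.Begin();
    foreach (Vector2 hit in craterPositions)        // white = planet pixel removed
        spriteBatch.Draw(circleTex,
            hit - new Vector2(circleTex.Width / 2f, circleTex.Height / 2f),
            Color.White);
    spriteBatch.End();

    GraphicsDevice.SetRenderTarget(null);           // back to the back buffer

The pixel shader then samples the planet texture and the alpha map at the same coordinate and outputs the planet color with its alpha scaled by one minus the mask value.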
I've found an MSDN sample with working source code for XNA 4 that does this (the cat example):
http://create.msdn.com/en-US/education/catalog/sample/sprite_effects
EDIT:
I got this to work quite nicely. Created a small tutorial with source code here: http://syntaxwarriors.com/2012/xna-alpha-mapping-with-pixel-shaders/
A good way of doing this is to render a "hole texture" using alpha blending on top of your planet texture. Think of it as drawing an invisibility circle over your original texture.
Take a look at this thread for a few nice links: worms-style-destructible-terrain.
To achieve your brown edges I'd guess you'd need a similar approach. First render the hole into your terrain with, say, a 10px radius. Then render another circle from the same origin point but with a slightly larger radius, say 12px, set to a blend mode that results in a brown color.
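For the hole itself, here is a minimal sketch of an "eraser" blend state in XNA 4, assuming the planet has already been drawn into a render target (planetTarget) so erased pixels stay erased; names are illustrative:

    // Scales the destination by (1 - source alpha): wherever the hole texture is
    // opaque, both color and alpha of the render target drop to zero.
    BlendState eraseBlend = new BlendState
    {
        ColorSourceBlend      = Blend.Zero,               // the hole adds no color
        AlphaSourceBlend      = Blend.Zero,
        ColorDestinationBlend = Blend.InverseSourceAlpha, // keep only where hole is transparent
        AlphaDestinationBlend = Blend.InverseSourceAlpha
    };

    GraphicsDevice.SetRenderTarget(planetTarget);
    spriteBatch.Begin(SpriteSortMode.Deferred, eraseBlend);
    spriteBatch.Draw(holeTexture, hitPosition - holeOrigin, Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);

One way to get the brown edge without inventing a special blend mode: draw the larger brown circle first with normal alpha blending, then erase the smaller hole on top of it, leaving a brown ring.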
Look at my class here:
http://www.codeproject.com/Articles/328894/XNA-Sprite-Class-with-useful-methods
1. Simply create a Sprite object for your planet:
Sprite PlanetSprite = new Sprite(PlanetTexture2D, new Vector2(yourPlanet.X, yourPlanet.Y));
2. When the bullet hits the planet, make a circle Texture2D centered on the collision point obtained from the GetCollisionPoint(Sprite b) method.
- You can use a Circle.png with transparent corners,
- or you can create the circle using math (which is better if you want bullet power to affect the size).
3. Then create a Sprite object for your circle.
4. Now use GetCollisionArea(Sprite b) to get the overlapped area.
5. Now use ChangeBatchPixelColor(List<Point> pixels, Color color), where pixels is the overlapped area and color is Color.FromNonPremultiplied(0, 0, 0, 0); see the sketch after this list.
- Note that you don't need to draw your circle at all; after using it you can destroy it, or keep it for further use.
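Putting the steps together, a sketch assuming the method signatures from the linked article (treat the return types here as assumptions to check against the class):

    // 1: the planet sprite
    Sprite planet = new Sprite(PlanetTexture2D, new Vector2(yourPlanet.X, yourPlanet.Y));

    // 2-3: on impact, wrap the circle texture in a sprite at the collision point
    Vector2 hit = planet.GetCollisionPoint(bulletSprite);
    Sprite crater = new Sprite(circleTexture, hit - circleOrigin);

    // 4-5: recolor the overlapped pixels to fully transparent
    List<Point> overlap = planet.GetCollisionArea(crater);
    planet.ChangeBatchPixelColor(overlap, Color.FromNonPremultiplied(0, 0, 0, 0));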
I've got a problem. I am taking pictures of a common solar module with a camera flash, and I need to detect the frame of the module in order to cut it out and undistort it (I only need the cell area, the dark area inside the frame).
sample image - direct flash --> problems with a big reflection (I think I can reduce it with a good diffusor)
sample image - flash from angle
Does anybody have a recommendation for a robust method to detect the frame? I need something that works with various image angles and lighting conditions.
processed sample image 2
The last picture is processed: I blurred the image, grayscaled and inverted it. After that I thresholded the image and tried to detect contours (I had some problems with the shadow at the bottom of the image).
Thanks for your time.
Chris
As mentioned in:
Rectangle recognition with perspective projection
The Hough transform should work well for rectangle detection iff you can assume that the sides of the rectangle are the most prominent lines in your image. Then you can simply detect the 4 biggest peaks in Hough space and you have your rectangle.
This works for example with a photo of a white sheet of paper in front of a dark background.
Ideally you would preprocess the image with blur, threshold, and morphological operators to remove any small-scale structures before the Hough transform.
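A minimal sketch of that pipeline, assuming the OpenCvSharp bindings; every threshold here is a guess you would tune, and the vote ordering of the returned lines should be verified for your OpenCV version:

    using System;
    using System.Linq;
    using OpenCvSharp;

    Mat gray = Cv2.ImRead("module.jpg", ImreadModes.Grayscale);
    Cv2.GaussianBlur(gray, gray, new Size(5, 5), 0);   // suppress small-scale structure
    Mat edges = new Mat();
    Cv2.Canny(gray, edges, 50, 150);

    // Each line is (rho, theta); the strongest accumulator peaks should come
    // first, so the first four are the candidate frame sides.
    LineSegmentPolar[] lines = Cv2.HoughLines(edges, 1, Math.PI / 180, 120);
    LineSegmentPolar[] frame = lines.Take(4).ToArray();

In practice you may need to merge near-duplicate peaks (two lines with almost the same rho and theta) before taking the top four.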
If there are multiple smaller rectangles or other sorts of prominent lines in your images, contour detection might be the better choice.
Some general advantages of the Hough transform, off the top of my head:
Hough transform can still work if part of the rectangle is obstructed or out of the frame.
Hough transform should be faster than contour detection, I guess?
Hough transform will ignore anything that is not a straight line, so you may have greater success with cluttered images. (if the rectangle sides are the most prominent lines)
I am writing a virtual globe using DirectX similar to Google Earth. So far, I have completed tessellation, and have tested with a wrapped texture over the entire sphere, which was successful. I have written the texture coordinates to correspond with the latitude and longitude (90lat,-180lon = 0,0 and -90lat,180lon = 1,1).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to set the texture to only span a specific texture coordinate space? I.e. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones).
You can create geometry that matches the part of the sphere covered by a tile and draw those subsequently, setting the correct texture before each draw call (if the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call).
You can write a pixel shader that evaluates the texture coordinates and samples the appropriate texture with transformed texture coordinates (see the sketch below).
You can render all tiles into one big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it.
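For the shader option, the coordinate transform is just arithmetic. A sketch, where the 4 x 2 grid of 90-degree tiles matches your example and all names are illustrative:

    // Map a global (u, v) over the whole sphere to a tile index plus
    // tile-local coordinates in [0, 1].
    const int TilesX = 4, TilesY = 2;   // 4 * 90 = 360 lon, 2 * 90 = 180 lat

    static void GlobalToTile(float u, float v,
                             out int tileX, out int tileY,
                             out float localU, out float localV)
    {
        tileX = Math.Min((int)(u * TilesX), TilesX - 1);  // tile column
        tileY = Math.Min((int)(v * TilesY), TilesY - 1);  // tile row
        localU = u * TilesX - tileX;                      // position inside the tile
        localV = v * TilesY - tileY;
    }

A pixel shader would do the same arithmetic to pick a slice of a texture array; on the CPU, the same function can generate per-tile geometry for the first option.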
I'm writing a very simple 3D engine in C# and GDI+, just to render some models (I think DirectX or OpenGL is like using a shovel to eat soup here). So far I have successfully implemented drawing the wireframe of my model, but the next step is of course faces. And there is my problem: for now I just project my 3D points to 2D points and then draw each face with a simple
g.DrawPolygon(Pens.Red, projected_points); which is fine for wireframe.
Is it possible to calculate the overlapping parts of the polygons and then draw them with FillPolygon? Or is it a better idea to draw pixel by pixel, and if the z-buffer value at that pixel is further away, write the new pixel?
If the first option is possible, which one is faster (to implement and to compute)?
"Is it possible to calculate the overlapping parts of the polygons and then draw them with FillPolygon? Or is it a better idea to draw pixel by pixel, and if the z-buffer value at that pixel is further away, write the new pixel? If the first option is possible, which one is faster (to implement and to compute)?"
Yes, it is possible. You can test every polygon against every other polygon in your list. The complexity depends on the type of the polygons (of course, it's easiest with triangles), but performance may drop drastically with a high polygon count. And even if you find the overlapping areas, you will still need to interpolate colors, or texture coordinates if you plan to use those. Also, I'm not sure about the API you use for drawing, but GDI doesn't support filling a polygon with interpolated colors.
I have heard that this was the approach used in 3D graphics before the Z-buffer was invented. :)
I once tried to realize a similar project and used a Z-buffer plus my own routine to fill triangles with interpolated colors (which uses the Z-buffer). I drew directly into a GDI bitmap's pixel data buffer, and after all polygons had been rendered, I BitBlt'ed the result to the screen.
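A minimal sketch of such a fill routine, using barycentric coordinates over the triangle's bounding box. Vector3 here is System.Numerics.Vector3 with screen-space X/Y and depth in Z; the color and depth arrays (depth initialized to float.MaxValue) are later copied into a Bitmap via LockBits; nothing is optimized:

    // Assumes consistent counter-clockwise screen-space winding; flip the
    // inside test for clockwise triangles.
    static void FillTriangle(int[] color, float[] depth, int width, int height,
                             Vector3 a, Vector3 b, Vector3 c, int argb)
    {
        // clamp the bounding box to the buffer
        int minX = Math.Max(0, (int)Math.Min(a.X, Math.Min(b.X, c.X)));
        int maxX = Math.Min(width - 1, (int)Math.Max(a.X, Math.Max(b.X, c.X)));
        int minY = Math.Max(0, (int)Math.Min(a.Y, Math.Min(b.Y, c.Y)));
        int maxY = Math.Min(height - 1, (int)Math.Max(a.Y, Math.Max(b.Y, c.Y)));

        float area = (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
        if (area == 0) return;                        // degenerate triangle

        for (int y = minY; y <= maxY; y++)
        for (int x = minX; x <= maxX; x++)
        {
            // barycentric weights of (x, y) with respect to a, b, c
            float w0 = ((b.X - x) * (c.Y - y) - (b.Y - y) * (c.X - x)) / area;
            float w1 = ((c.X - x) * (a.Y - y) - (c.Y - y) * (a.X - x)) / area;
            float w2 = 1f - w0 - w1;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue; // outside the triangle

            float z = w0 * a.Z + w1 * b.Z + w2 * c.Z; // interpolated depth
            int i = y * width + x;
            if (z < depth[i])                         // closer than what's stored?
            {
                depth[i] = z;
                color[i] = argb;    // interpolate colors here the same way as z
            }
        }
    }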
We've got a sphere which we want to display in 3D and color according to a function that depends on the spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how would that affect the coloring? Would it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
EDIT: I've converted the algorithm to use a linear texture, which is really just a lookup into the colormap. This works great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
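For reference, a sketch of that lookup texture in WPF; ColormapBgra (the palette function) and ScalarAt (the per-vertex scalar) are hypothetical stand-ins for whatever you already have:

    // Build a 256 x 1 colormap texture.
    var bmp = new WriteableBitmap(256, 1, 96, 96, PixelFormats.Bgra32, null);
    var row = new uint[256];
    for (int i = 0; i < 256; i++)
        row[i] = ColormapBgra(i / 255.0);           // palette entry as packed BGRA
    bmp.WritePixels(new Int32Rect(0, 0, 256, 1), row, 256 * 4, 0);

    var material = new DiffuseMaterial(new ImageBrush(bmp));

    // Each vertex indexes the colormap through its u coordinate.
    foreach (var vertex in vertices)
        mesh.TextureCoordinates.Add(new Point(ScalarAt(vertex), 0.5));

Because texture coordinates are interpolated across each triangle, this also answers the per-vertex question: the lookup position varies smoothly over the surface, so the colors blend just like per-vertex coloring would (as long as neighboring values are close in the colormap).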
Given a coordinate, how can I color a single pixel in XNA? I.e.
Coordinate(10,11).Color = Color.Red
If you're planning on doing a lot of pixels, for something like a particle system, it would be better to use a shader; you'll probably run into performance issues eventually using just a SpriteBatch.
There are two ways, depending on what coordinates you mean:
For screen coordinates the easiest way is to have a Texture2D that holds nothing but a single white pixel, then drawing it with SpriteBatch and passing whatever color you want to the Draw method.
For 3D space coordinates you want to use a PointList.
There are a couple of more complicated things you could do as well: use Texture2D.SetData to make your own single white pixel texture at run time, or use a PointList and project to screen space.
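The run-time variant looks like this in XNA 4 (create the texture once, then reuse it every frame):

    // One 1 x 1 white texture, tinted per draw call.
    Texture2D pixel = new Texture2D(GraphicsDevice, 1, 1);
    pixel.SetData(new[] { Color.White });

    spriteBatch.Begin();
    spriteBatch.Draw(pixel, new Vector2(10, 11), Color.Red);  // Coordinate(10,11) = red
    spriteBatch.End();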