XNA clears textures mid-render? - c#

We're prerendering large sets of textures to RenderTarget2D and this is the issue we're having:
It seems that randomly during the render of a chunk, the textures for each cell (the top and sides) will corrupt and disappear. The weird thing is that they come back when the next chunk is rendered, so it seems to be something that occurs on a per-frame basis.
Does anyone know why this happens (and seemingly at random; note that the white rectangle is where a side texture corrupts, and from there on out the texture is just transparent)?
EDIT: The sides of the cubes are being saved to Texture2D, but they still disappear in the middle of a chunk render and then come back on the next one. So I don't understand why graphics stored in a Texture2D are disappearing and coming back without reinitialization (and that's the weird part).

A RenderTarget2D is only a temporary memory construct and gets flushed quite quickly and regularly, because it is reused in an effort to save memory and, to a lesser extent, to speed things up. As such, you should treat it only as a very temporary place to store your texture. You will want to shift its contents into a proper Texture2D, which will be kept around for longer. Just doing a simple:
Texture2D YourPic = SomeRenderedPic; // RenderTarget2D derives from Texture2D
will not do it. This just copies the reference to the memory space of the rendered image, so when the graphics card discards that memory, your copy will still just vanish. What you want to do is something more like:
// Pull the rendered pixels out of the render target into a CPU-side buffer.
Color[] MyColorArray = new Color[SomeRenderedPic.Width * SomeRenderedPic.Height];
SomeRenderedPic.GetData<Color>(MyColorArray);

// Create a stand-alone texture and copy the pixel data into it.
Texture2D YourPic = new Texture2D(
    GraphicsDevice,
    SomeRenderedPic.Width,
    SomeRenderedPic.Height);
YourPic.SetData<Color>(MyColorArray);
Now if I have whipped up that code right, it should store the data, not the pointer, in the new texture. This makes the new texture its own unique memory space, one that won't get flushed the way a render target's would.
There is a downside to this method: it cannot be done at the full refresh rate of XNA (something like 60 frames a second... I think... maybe 30... I forget). At any rate, it may not be fast enough if you need very frequent refreshing. However, if you are creating a static texture that rarely, if ever, changes, then this may do the trick for you.
Hopefully this made sense as I am writing this on the fly and late at night. If this doesn't work I apologize. Feel free to write me at jareth_gk#hotmail.com if need be. If I am able to answer your questions I will be happy to.
Otherwise good luck, and be inventive. I am sure there is a solution.
x Jeremy M.

I can't say that we ever solved this issue for sure, but it appears to have been caused by either threading or splitting the task across multiple cycles. It wasn't an issue with the RenderTarget2D itself, since we were already copying its contents out at the time.

Related

Unity 3D - Hide vertices inside chunks

I am making Minecraft-like terrain, therefore I need to combine thousands of blocks into chunks. The chunks I generate look like this:
This all looks fine, except for the fact that the mesh still has the vertices in the middle of the chunk, like this:
I once saw a YouTube video explaining how to hide these vertices, since you can't actually see them while playing the game and they take up a lot of memory. Plus, I want to make the chunks bigger, preferably 16*16*64. If I do that right now, I get this error:
count <= std::numeric_limits<UInt16>::max()
UnityEngine.Mesh:CombineMeshes(CombineInstance[])
However, I can't seem to find the video I talked about anymore, which is why I am here.
How can I update the chunk so it only shows the vertices which are actually visible?
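The usual fix (and very likely what that video showed) is not to hide vertices after the fact, but to never generate the interior faces at all: when building the chunk mesh, emit a face only when its neighboring cell is empty. A minimal sketch, assuming a bool[x,y,z] occupancy array; ChunkMesher and AddQuad are illustrative names, not Unity API:
using System.Collections.Generic;
using UnityEngine;

public static class ChunkMesher
{
    static readonly Vector3Int[] Directions =
    {
        Vector3Int.left, Vector3Int.right, Vector3Int.down,
        Vector3Int.up, new Vector3Int(0, 0, -1), new Vector3Int(0, 0, 1)
    };

    // blocks[x, y, z] == true means the cell is solid.
    public static Mesh Build(bool[,,] blocks)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();
        int sx = blocks.GetLength(0), sy = blocks.GetLength(1), sz = blocks.GetLength(2);

        for (int x = 0; x < sx; x++)
        for (int y = 0; y < sy; y++)
        for (int z = 0; z < sz; z++)
        {
            if (!blocks[x, y, z]) continue;
            foreach (var d in Directions)
            {
                int nx = x + d.x, ny = y + d.y, nz = z + d.z;
                bool neighborSolid = nx >= 0 && nx < sx && ny >= 0 && ny < sy
                                  && nz >= 0 && nz < sz && blocks[nx, ny, nz];
                if (!neighborSolid) // this face borders air, so it is visible
                    AddQuad(vertices, triangles, new Vector3(x, y, z), d);
            }
        }

        var mesh = new Mesh();
        // 16*16*64 chunks can exceed 65535 vertices, which is what the
        // UInt16 error above is complaining about; 32-bit indices avoid it.
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }

    static void AddQuad(List<Vector3> v, List<int> t, Vector3 cell, Vector3Int normal)
    {
        // Append the 4 corner vertices and 2 triangles of the face of
        // `cell` facing `normal` (omitted here for brevity).
    }
}
Since interior faces are never created, the vertex count drops dramatically, which is also what makes the larger chunk size feasible.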

How to apply pixel perfect collision to rotated sprites?

I'm having difficulty knowing how to approach or tackle this problem. I've looked at some tutorials, but they are meant for programmers who already know what they're doing. I followed a video on how to perform a form of pixel collision that applies to regular bounding boxes: if the bounding boxes collide, it checks whether any non-transparent pixels in the intersecting region overlap, and if they do, a boolean returns true. Where and how could I start implementing the changing of the bounding box's axes for a rotating object, to match the texture's appearance? I would prefer not being pointed to an external tutorial, because most of the ones I've read assume the programmer knows everything the writer is talking about.
I've also looked at some source code that perfectly demonstrates what I'm looking for, but it seems I would need a very in-depth explanation to make any use of reading code as well.
First off, I don't really recommend doing this, as it's gonna be either computing- or resource-intensive (or both).
That said, one idea is to still do your aforementioned AABB method of straight-up pixel on pixel. This requires you to maintain your own pixel data in memory to only be used for collision, as opposed to relying strictly on the texture's data.
To be more specific, using this method you will have to generate what is essentially an "image", a two-dimensional matrix of some kind, that represents/follows your rotated image's pixels. But you will not be storing color information in it, as you would with a normal image. Instead, each "pixel" or entry in the structure holds collision data: "block" or "not block". You could easily use a bitmask to represent this, with 1 meaning "block" and 0 meaning "not block", and you'd need one bit per pixel. (Note: usually you don't need more than a boolean "on"/"off" for this, but you may want different types of collision per pixel; if so, single bits won't suffice, so encode whatever you need per entry. Regardless, the overall idea remains the same.)
Generating a bitmask (or other such structure) for your sprite will enable you to just use the AABB method; all you'd have to do is use the generated bitmask instead of the texture data directly, and everything else stays the same as before. But how do we generate this? That's the true difficulty of this method, because generating your own image is basically replicating the work your graphics card does when you tell it to rotate things.
You would essentially "draw" out the rotated image yourself. This could be done by stepping through your base texture's image data pixel by pixel and applying a rotational transformation matrix to each pixel to find its correct destination in your bitmask/buffer. Once you have the correct destination, you test the image data for "block" or "not block" (using transparency, as you mentioned) and write a 1 or 0 there accordingly.
While you're generating, you should also keep track of the local minima and maxima: how far left, right, up, and down the rotated image extends. That gives it a true AABB to live inside for quick rejection checks (i.e. "Do I even need to check per-pixel collision?").
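To make that concrete, here is a rough sketch using XNA types (the names are mine, not from any library). One deviation from the description above: it maps each destination pixel back into the source instead of forward-mapping each source pixel, since forward mapping with rounding can leave gaps in the mask; the bounds tracking works the same either way.
using System;
using System.Linq;
using Microsoft.Xna.Framework;

static class RotatedMask // illustrative helper, not an XNA class
{
    // sourceData is the Color[] fetched once via texture.GetData<Color>().
    public static bool[,] Build(Color[] sourceData, int width, int height, float angle)
    {
        float cos = (float)Math.Cos(angle);
        float sin = (float)Math.Sin(angle);
        Vector2 center = new Vector2(width / 2f, height / 2f);

        // Rotate the four corners to find the rotated sprite's AABB
        // (the "local minima and maxima" mentioned above).
        Vector2[] corners =
        {
            Rotate(new Vector2(0, 0), center, cos, sin),
            Rotate(new Vector2(width, 0), center, cos, sin),
            Rotate(new Vector2(0, height), center, cos, sin),
            Rotate(new Vector2(width, height), center, cos, sin)
        };
        float minX = corners.Min(c => c.X), minY = corners.Min(c => c.Y);
        int w = (int)Math.Ceiling(corners.Max(c => c.X) - minX);
        int h = (int)Math.Ceiling(corners.Max(c => c.Y) - minY);

        bool[,] mask = new bool[w, h];
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // Rotate this destination pixel back (by -angle) into the source.
                Vector2 d = new Vector2(x + minX, y + minY) - center;
                int sx = (int)Math.Round(d.X * cos + d.Y * sin + center.X);
                int sy = (int)Math.Round(-d.X * sin + d.Y * cos + center.Y);
                if (sx >= 0 && sx < width && sy >= 0 && sy < height)
                    mask[x, y] = sourceData[sy * width + sx].A != 0; // non-transparent => "block"
            }
        }
        return mask;
    }

    static Vector2 Rotate(Vector2 p, Vector2 center, float cos, float sin)
    {
        Vector2 d = p - center;
        return new Vector2(d.X * cos - d.Y * sin, d.X * sin + d.Y * cos) + center;
    }
}
The nearest-neighbor rounding here is exactly the kind of interpolation choice the next paragraph warns about; the mask will be close to, but not necessarily identical with, what the renderer actually draws.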
To be fully accurate, you will probably need to know which interpolation/rounding algorithm you're using (bilinear, nearest neighbor, etc.), which can get ugly. Graphics systems often do very complicated things, so taking ALL of this into account just for collision is pretty extreme. At the end of the day, even applying this method, it may not truly be "pixel perfect" as far as "perfectly synchronized with the rendered image output", unless you really go far in replicating exactly what XNA / DirectX is doing.
Finally, when does this generation occur? The answer is: every time anything rotates! Otherwise you'll be checking stale data. Obviously you could keep just one buffer per sprite and keep overwriting it, to not hog so much memory, but that means regenerating potentially once per frame if something rotates continuously, and multiple times per frame if several sprites are all rotating a lot. It might not be the most computationally friendly approach.

How do I properly achieve subtractive blending in C#, XNA?

I'm working on a mod for Terraria (written in C# and using XNA), in which I need to use some blend modes. I didn't have any trouble getting additive blending to work, but the subtractive one is causing me problems.
I managed to display things with subtractive blending, but the batch doesn't really want to return to the standard mode afterwards. SpriteBatch.End and Begin don't help at all.
This is my custom BlendState:
public readonly static BlendState bsSubtract = new BlendState {
    ColorSourceBlend = Blend.SourceAlpha,
    ColorDestinationBlend = Blend.One,
    ColorBlendFunction = BlendFunction.ReverseSubtract,
    AlphaSourceBlend = Blend.SourceAlpha,
    AlphaDestinationBlend = Blend.One,
    AlphaBlendFunction = BlendFunction.ReverseSubtract
},
Drawing code:
sb.End();
sb.Begin(SpriteSortMode.Immediate,bsSubtract);
(...drawing drawing blah...)
sb.End();
sb.Begin(SpriteSortMode.Immediate,BlendState.Additive);
The problem is, everything that is drawn after this code seems to still use some old options (half-transparent, bland). What am I doing wrong?
I even tried calling just sb.End() and sb.Begin() before setting the blend state back, or using another custom blend state that matched the standard additive one except with its BlendFunctions set to Add, to no avail.
EDIT: Seems like setting ANY custom BlendState makes it do that...
EDIT2: It turns out the problem was that I had split the drawing across 3 separate places: one for item slots, one for tiles, and one for the world in general. In one of these (items) I forgot to set the SpriteBatch state before using it and reset it afterwards. I should have spent more time looking at my code. Still, thanks for trying to help!
(can't close the question just yet, gonna close it after StackOverflow lets me do it)
The default blending mode is BlendState.AlphaBlend.
Try replacing BlendState.Additive with BlendState.AlphaBlend in your code. Or possibly NonPremultiplied, depending on what Terraria is actually using.
Better yet, you could read out exactly the blend state that Terraria was using, as SpriteBatch sets it on the graphics card and simply leaves it there. Here is some untested code that should do exactly that:
sb.End(); // Sets blend state
BlendState previousState = GraphicsDevice.BlendState; // Retrieve it
sb.Begin(SpriteSortMode.Immediate, bsSubtract);
// (...drawing drawing blah...)
sb.End();
sb.Begin(SpriteSortMode.Immediate, previousState); // Re-use it

Verify image sequence

Problem
Problem statement
The image sequence's position and size are fixed and known beforehand (it is not scaled). It will be quite short, a maximum of 20 frames, playing in a closed loop. I want to verify (event-driven, by button click) that I have seen it before.
Let's say I have some image sequence, like:
http://img514.imageshack.us/img514/5440/60372aeba8595eda.gif
If it has been seen, I want to get the ID associated with it; if not, it will be analyzed and added as a new instance of a seen image sequence. I have thought about this for quite a while and, I admit, this might be a hard problem. I seem to be having a hard time putting this all together; can someone assist (in C#)?
Limitations and uses
I am not trying to recreate a copyright detection system, like the Content ID system YouTube has implemented (Margaret Gould Stewart at TED (link)). The image sequence can be thought of like a (.gif) file, but it is not one, and there is no direct way to get its binary data. A similar method could be used to avoid duplicates in an "image sharing database", but that is not what I am trying to do.
My effort
Gaussian blur
Mathematica function to generate Gaussian blur kernels:
getKernel[L_] := Transpose[{L}].{L}/(Total[Total[Transpose[{L}].{L}]])
getVKernel[L_] := L/Total[L]
It turns out that it is much more efficient to use two passes of a vector (1-D) kernel than a single pass of a matrix kernel. The kernels are based on the odd-length rows of Pascal's triangle:
{1d/4, 1d/2, 1d/4}
{1d/16, 1d/4, 3d/8, 1d/4, 1d/16}
{1d/64, 3d/32, 15d/64, 5d/16, 15d/64, 3d/32, 1d/64}
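For illustration, the two-pass version in C# might look like the sketch below (the row-major float[] buffer layout and the Blur name are my own assumptions, not from the code above). Each pass convolves with the 1-D kernel; together they are equivalent to the full matrix kernel but cost O(k) instead of O(k²) per pixel:
using System;

// Sketch: separable Gaussian blur over a grayscale buffer laid out row-major.
static float[] Blur(float[] src, int width, int height, float[] kernel)
{
    int r = kernel.Length / 2;
    float[] tmp = new float[src.Length];
    float[] dst = new float[src.Length];

    // Horizontal pass.
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            float sum = 0;
            for (int k = -r; k <= r; k++)
            {
                int sx = Math.Min(Math.Max(x + k, 0), width - 1); // clamp at the edges
                sum += src[y * width + sx] * kernel[k + r];
            }
            tmp[y * width + x] = sum;
        }

    // Vertical pass over the horizontal result.
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            float sum = 0;
            for (int k = -r; k <= r; k++)
            {
                int sy = Math.Min(Math.Max(y + k, 0), height - 1);
                sum += tmp[sy * width + x] * kernel[k + r];
            }
            dst[y * width + x] = sum;
        }
    return dst;
}

// Usage with the first kernel above:
// float[] blurred = Blur(gray, width, height, new[] { 0.25f, 0.5f, 0.25f });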
Data input, hashing, grayscaling and lightboxing
Example of source bits that might be useful:
Lightbox around the known rectangle: FrameX
Using MD5CryptoServiceProvider to get the MD5 hash of the content inside the known rectangle at the moment.
Using ColorMatrix to grayscale the image
Source example
Source example (GUI; code):
Get current content inside defined rectangle.
private Bitmap getContentBitmap() {
    Rectangle r = f.r;
    Bitmap hc = new Bitmap(r.Width, r.Height);
    using (Graphics gf = Graphics.FromImage(hc)) {
        gf.CopyFromScreen(r.Left, r.Top, 0, 0,
            new Size(r.Width, r.Height), CopyPixelOperation.SourceCopy);
    }
    return hc;
}
Get md5 hash of bitmap.
private byte[] getBitmapHash(Bitmap hc) {
    // c and md5 are defined elsewhere; from the call shape, c is
    // presumably an ImageConverter and md5 an MD5CryptoServiceProvider.
    return md5.ComputeHash(c.ConvertTo(hc, typeof(byte[])) as byte[]);
}
Get grayscale of the image.
public static Bitmap getGrayscale(Bitmap hc) {
    Bitmap result = new Bitmap(hc.Width, hc.Height);
    // ColorMatrix must be 5x5: each output RGB channel becomes the same
    // weighted sum of the input channels, which grayscales the image.
    ColorMatrix colorMatrix = new ColorMatrix(new float[][] {
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0, 0, 0, 1, 0 },
        new float[] { 0, 0, 0, 0, 1 }
    });
    using (Graphics g = Graphics.FromImage(result)) {
        ImageAttributes attributes = new ImageAttributes();
        attributes.SetColorMatrix(colorMatrix);
        g.DrawImage(hc, new Rectangle(0, 0, hc.Width, hc.Height),
            0, 0, hc.Width, hc.Height, GraphicsUnit.Pixel, attributes);
    }
    return result;
}
I think you have a few issues with this:
Not all image sequences [videos] are equal [but many are similar]
Where is your data coming from?
How will you represent the data related to your viewings?
Size of the data
Issue #1:
Many images can differ slightly via compression, watermarking, missing frames, and added clips. I would suggest sampling the video; for example, you may want to sub-sample small sections of the images in it. Additionally, to avoid noise and issues with lossy compression algorithms, you may want to grayscale the sampled frames and apply a Gaussian blur (Gaussian because it's "more natural", to give the short answer). Once you have enough sub-samples to have good confidence of similarity to the video, store them in a database. With the samples you can hash them, or store them to do a % similarity comparison later.
Issue #2
Your datasource is going to influence the tool kits, and libraries that you use.
I would suggest keeping this simple [stick with gifs and create a custom viewer; don't try to write a browser plugin while developing your logic]
Issue #3
Using something like Postgres [if there are a lot of large objects] or SQLite is highly suggested for indexing, storing, and recalling past metadata.
Issue #4
The size of the data will have a huge determination on recall, sampling, querying the database, etc.
Overall advice: Don't bite off more than you can handle at this stage. Start small and then grow.
Also take a look at Computer Vision algorithms for more help on the object representation/recall.
The question itself is certainly very interesting and challenging; however, there are many practical issues, as stated by @monksy.
The opportunistic pragmatist in me would take a step back, look at the big picture and see if there is another way to solve the problem. For example, if you are building some kind of "image sharing community" and want to avoid duplicates in the database, you could do a simple md5 on the file (animated gifs on the web are usually identical copies; it's rare that people modify them).
Another example: if you are analyzing scientific samples (like meteorological sequences) it may be easier to directly embed some kind of hash in every file when generating them.
This depends on whether you only want to know if you've seen an absolutely identical movie again, or whether you also want to identify movies that are very similar but have been changed a bit (made lighter, watermarked, recompressed, etc.).
In the first case, just take any type of hash of the file and use that, because the file will be identical at the binary level.
In the second case (which I think is what you want) you have an interesting image processing problem on your hands. You could find yourself at the front-lines of image processing science with this if you'd want. If that is the case I suggest you start reading about SURF and OpenCV, and continue on from that.
If you want to match very similar, but not identical, videos, and don't want to go the ultra-robust scientific route, then I'd suggest the following process:
Do the Gaussian blur you already do.
Divide each image into a few equally sized rectangles (you'd have to test for the best number, but I'd suggest you start with 9).
For each rectangle in each frame, compute the full-colour histogram, then find the most frequently occurring colour in that rectangle. This gives you 9*20 = 180 numbers: the "fingerprint" of this movie.
Find the most similar fingerprint in your database; if it is similar enough, you already know about the movie, otherwise you don't.
Step 4 is a bit vague, because I'm not really into this field. You are currently using an MD5 hash as a sort of fingerprint, but that is unsuitable here, because slight differences in the input of a good cryptographic hashing function produce very large differences in the hash. Two very similar frames will therefore have totally different MD5 hashes, so from the hashes you'd never know they were similar.
As long as speed of database lookups is not an issue I'd just go for the sum of square differences as a measure of fingerprint similarity, and set a threshold on that to identify equal movies. However, this is not very fast for huge datasets, and in those cases you'd probably need to transform your fingerprint to something that will allow you to find similar fingerprints faster. One thing you could do here is start by selecting all known movies with very similar average colour for the entire video, then from that select the movies that have very similar average colour in each frame, and in the ones that remain at that point do the full rectangle-by-rectangle fingerprint match. But I'm sure there are even faster options for matching 180 numbers.
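To make steps 2 to 4 concrete, here is a rough sketch (all names and the 4-bit quantization are my own choices, not from the answer above). Quantizing the colors before building the histogram makes "very similar" colors fall into the same bucket, and fingerprints are then compared with a sum of squared differences:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

// One fingerprint entry per rectangle per frame: the dominant quantized color.
static int[] Fingerprint(List<Color[]> frames, int width, int height)
{
    var print = new List<int>();
    foreach (Color[] frame in frames)
    {
        for (int ry = 0; ry < 3; ry++)
        for (int rx = 0; rx < 3; rx++)
        {
            var histogram = new Dictionary<int, int>();
            for (int y = ry * height / 3; y < (ry + 1) * height / 3; y++)
            for (int x = rx * width / 3; x < (rx + 1) * width / 3; x++)
            {
                Color c = frame[y * width + x];
                // Quantize to 4 bits per channel and pack into one key.
                int key = (c.R >> 4 << 8) | (c.G >> 4 << 4) | (c.B >> 4);
                histogram[key] = histogram.TryGetValue(key, out int n) ? n + 1 : 1;
            }
            print.Add(histogram.OrderByDescending(kv => kv.Value).First().Key);
        }
    }
    return print.ToArray(); // 9 numbers per frame, e.g. 180 for 20 frames
}

// Step 4, naively: sum of squared differences between two fingerprints.
static long Distance(int[] a, int[] b)
{
    long sum = 0;
    for (int i = 0; i < Math.Min(a.Length, b.Length); i++)
    {
        long d = a[i] - b[i];
        sum += d * d;
    }
    return sum; // below some tuned threshold => "seen before"
}
Note that differencing the packed keys is crude, since numerically close keys are not always visually close colors; differencing the three channels separately would be a more principled variant.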
Perhaps you can find a way to get a binary copy of the image data of each frame in a variable. Hash that data (md5?) and store each of the hashes. Then you can see if you've ever seen that hash before. If you haven't, it's a new frame.

Representing a Gameworld that is Irregularly shaped

I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it, and the game world exists only on the inside of the shape (once again, think lake).
How can I efficiently represent the game world? I know that many worlds are basically square and work well in a 2- or 3-dimensional array. I feel that if I use a square array, I am wasting space and increasing the time needed to iterate through it. However, I am not sure how a jagged array would work here either.
Example shape of gameworld
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes that easy to do.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
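As a toy illustration of the idea (assuming a square world whose side is a power of two; all names are mine): a node either records that its whole square is uniformly in or out of the world, or splits into four children.
// Minimal region quadtree over a bool[,] occupancy grid.
class QuadNode
{
    public bool? Uniform;        // non-null: the whole square has this value
    public QuadNode[] Children;  // null unless subdivided into 4 quadrants

    public static QuadNode Build(bool[,] cells, int x0, int y0, int size)
    {
        // If the whole square is one value, store it in a single node.
        bool first = cells[x0, y0];
        bool uniform = true;
        for (int y = y0; y < y0 + size && uniform; y++)
            for (int x = x0; x < x0 + size && uniform; x++)
                uniform = cells[x, y] == first;

        if (uniform || size == 1)
            return new QuadNode { Uniform = first };

        int h = size / 2;
        return new QuadNode
        {
            Children = new[]
            {
                Build(cells, x0, y0, h),     Build(cells, x0 + h, y0, h),
                Build(cells, x0, y0 + h, h), Build(cells, x0 + h, y0 + h, h)
            }
        };
    }

    public bool Get(int x, int y, int size)
    {
        if (Uniform.HasValue) return Uniform.Value;
        int h = size / 2;
        int index = (x < h ? 0 : 1) + (y < h ? 0 : 2);
        return Children[index].Get(x < h ? x : x - h, y < h ? y : y - h, h);
    }
}
A 20x20 world would be padded up to 32x32 for this; whether the saved space is worth the extra pointer-chasing is exactly the trade-off the previous answer describes.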
Use whatever structure you've come up with---you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's waste of space or slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only thing you are saving are valid locations, and you don't waste memory saving the non-game world locations since you can deduce those from lack of presence in your data structure.
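A minimal sketch of that approach (Cell and the coordinate-tuple key are illustrative):
using System.Collections.Generic;

class Cell { /* whatever one game square holds */ }

class SparseWorld
{
    // Only valid (inside-the-lake) coordinates ever get an entry.
    readonly Dictionary<(int x, int y), Cell> cells = new Dictionary<(int x, int y), Cell>();

    public void Add(int x, int y, Cell c) => cells[(x, y)] = c;

    // Lack of presence means "not part of the world".
    public bool IsInWorld(int x, int y) => cells.ContainsKey((x, y));

    // Stepping through every valid location (per the question's edit)
    // is just iterating the map; no wasted visits to empty squares.
    public IEnumerable<KeyValuePair<(int x, int y), Cell>> All() => cells;
}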
The easiest thing is to just use the array, and just mark the non-gamespace positions with some special marker. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (but it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map but not for one that fits well into a rectangle (or other regular shape) with few deviations.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
Another way would be to store an edge list: a line vector along each straight edge. It is easy to check for inclusion this way, and a quadtree, or even a simple location hash on each vertex, can speed up lookups. We did this with a height component per edge to model the walls of a baseball stadium, and it worked beautifully.
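For reference, the usual inclusion test against such an edge list is the even-odd (ray-crossing) rule; a sketch, with Edge as an assumed two-endpoint segment type:
using System.Collections.Generic;

struct Edge { public float X1, Y1, X2, Y2; }

// Cast a ray from the point to the right and count how many edges it
// crosses: an odd count means the point is inside the boundary.
static bool Contains(List<Edge> edges, float px, float py)
{
    bool inside = false;
    foreach (var e in edges)
    {
        // Does this edge straddle the horizontal line through the point?
        if ((e.Y1 > py) != (e.Y2 > py))
        {
            // X coordinate where the edge crosses that horizontal line.
            float crossX = e.X1 + (py - e.Y1) / (e.Y2 - e.Y1) * (e.X2 - e.X1);
            if (px < crossX)
                inside = !inside;
        }
    }
    return inside;
}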
There is a big issue that nobody here addressed: the huge difference between storing it on disk and storing it in memory.
Assuming you are talking about a game world as you said, it is going to be very large. You're not going to store the whole thing in memory at once; instead you will keep the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the given answers provide good suggestions: you can serialize a data structure such as a hash table, or a list of only the filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure, there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around for at least several more years, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential, as you stream in more vicinity terrain while the player moves, and you'll probably find it makes a noticeable difference. Don't take my word for it though; I haven't actually tested this sort of thing, it just makes sense, right?
