Could someone give me advice on how to have the player go through one side of the screen and come out the opposite side, in XNA with C#?
If you need an example, just look at Super Mario Bros. 2: if you go off the right side of the screen, you come out on the left.
I am going to respond at a high level; there is a lot more to this than I am letting on.
Normally you stop the character from moving past the edge of the screen. If you are using such a system, you instead need to keep tracking the character outside of the viewport and decide when to wrap them around: either when the character has completely disappeared off the screen, or the moment any part of them leaves it.
Then it becomes a matter of handling the special case of drawing while they are partially off the screen, and moving them over to the other side once the transition has completed (or at any point in between, if that makes sense for your implementation; for instance, the character could be considered on the other side once 50% moved off the screen).
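As a rough illustration of both pieces, here is a minimal sketch assuming an XNA SpriteBatch setup and a player object with a Position vector, a Texture, and a known sprite Width (all names are illustrative, not from the question):

    // Wrap only once the sprite has completely left the viewport.
    void UpdateWrap(Player player, int screenWidth)
    {
        if (player.Position.X > screenWidth)                // fully off the right
            player.Position -= new Vector2(screenWidth + player.Width, 0);
        else if (player.Position.X + player.Width < 0)      // fully off the left
            player.Position += new Vector2(screenWidth + player.Width, 0);
    }

    // While partially off one edge, draw a second copy at the opposite edge
    // so both halves of the character stay visible during the transition.
    void DrawWrapped(SpriteBatch spriteBatch, Player player, int screenWidth)
    {
        spriteBatch.Draw(player.Texture, player.Position, Color.White);

        if (player.Position.X < 0)
            spriteBatch.Draw(player.Texture,
                player.Position + new Vector2(screenWidth, 0), Color.White);
        else if (player.Position.X + player.Width > screenWidth)
            spriteBatch.Draw(player.Texture,
                player.Position - new Vector2(screenWidth, 0), Color.White);
    }

Note the wrap offset is screenWidth + Width, so the moment the sprite finishes leaving one edge it is exactly about to enter the other.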
Also note that I focus on characters, but any visible element will have to know about these situations, even if it can't travel the same way. For instance, if you are half on both sides, you may be vulnerable on both sides. All of these are design decisions you need to make.
With regard to the full technical implementation, that is beyond what we can do for you here; refinement is something we can help with, but the creation should come from you. (It is also so heavily dependent on your other code that anything we wrote would just be an example and not directly usable.)
I can't seem to wrap my head around this topic and I need a bit of a push forward, maybe an example.
I'm currently developing a project that involves simulating a large number of automated characters trying to fulfill their "needs".
As of now, the characters store their need levels in their scripts as simple floating-point values in the range [0, 1], check every frame whether a level exceeds a given value, and if so try to move to a defined point to satisfy that need.
The problem is that one character can have many needs, which leads to a situation where a character moves to satisfy one need, and the moment that need drops just below the threshold, it moves off to satisfy the next one. Given that I would later like the needs to rise over time, this will become a big problem.
I think I should implement a state machine to transition from idle -> move to satisfy -> wait to satisfy -> idle, but as I said, I can't quite understand the whole state machine thing. The closest I have come to understanding it was this: https://github.com/nblumhardt/stateless but I still can't wrap my mind around it.
I would be grateful for any help: tutorials, examples, anything.
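For what it's worth, here is a minimal sketch of the idle -> move -> satisfy -> idle loop described above, written as a plain enum-plus-switch state machine rather than a library. All the names, rates, and thresholds are illustrative assumptions, not from the question; the key idea is that the need is chosen only in the Idle state, so the agent commits to it instead of flip-flopping every frame:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Numerics;

    public class Need
    {
        public float Level;             // 0..1, rises over time
        public float Threshold = 0.7f;  // act once exceeded
        public Vector2 Location;        // where this need can be satisfied
    }

    public enum AgentState { Idle, MovingToSatisfy, Satisfying }

    public class Agent
    {
        public Vector2 Position;
        public List<Need> Needs = new List<Need>();

        AgentState state = AgentState.Idle;
        Need current;

        public void Update(float dt)
        {
            foreach (var n in Needs)
                n.Level = Math.Min(1f, n.Level + 0.01f * dt); // needs rise over time

            switch (state)
            {
                case AgentState.Idle:
                    // Commit to the single most urgent need. The choice is
                    // made only here, so it cannot change mid-journey.
                    current = Needs.Where(n => n.Level > n.Threshold)
                                   .OrderByDescending(n => n.Level)
                                   .FirstOrDefault();
                    if (current != null)
                        state = AgentState.MovingToSatisfy;
                    break;

                case AgentState.MovingToSatisfy:
                    var delta = current.Location - Position;
                    if (delta.Length() < 0.1f)
                    {
                        state = AgentState.Satisfying;
                        break;
                    }
                    Position += Vector2.Normalize(delta) * 2f * dt;
                    break;

                case AgentState.Satisfying:
                    current.Level -= 0.5f * dt;
                    // Leave only when the need is fully satisfied, not
                    // merely just under the threshold, to avoid thrashing.
                    if (current.Level <= 0f)
                    {
                        current.Level = 0f;
                        state = AgentState.Idle;
                    }
                    break;
            }
        }
    }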
I've implemented a custom control in C#/Winforms which does things like syntax highlighting & autocomplete. I'm using AutoScroll to manage scrolling and it works nicely.
Currently I have not optimized at all (sure, optimization is important, but I'm doing that last; functionality is what I'm after first). I am rendering huge documents, and each keypress re-parses the affected line to make sure syntax highlighting is consistent.
Right now, in my big meaty paint method, I am painting every string, keyword, etc., even if it is outside of the clip region. But regardless of how big the document is and how many combinations of keywords/highlighted bits and pieces I have, it still runs bloody fast with not much memory or CPU overhead.
So my question: do the Graphics.Draw* methods do any kind of culling? E.g. if the AutoScrollPosition is way down the document and I call Graphics.DrawString with coordinates outside the draw region, is any actual work being done? Also note I'm running VS on Windows 7 inside a VM, and it is still running fast. Not that it's an issue now, but it would be nice to keep in mind later when it comes to the optimization phase. :D
Cheers,
Aaron
From personal experience writing games that use Graphics.Draw* methods, you will notice a speed increase if you perform your own bounds checking before calling the drawing methods.
Attempting to draw things offscreen is faster than drawing things onscreen, but it's still noticeably slower than not issuing the draw call at all.
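A minimal sketch of what that bounds checking can look like in an OnPaint override, assuming a fixed line height and the document stored as a List<string> named lines (both assumptions for illustration, not from the question):

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        int lineHeight = Font.Height;

        // AutoScrollPosition.Y is negative once scrolled down, so this maps
        // the clip rectangle back to document line indices. Only lines that
        // intersect the clip region ever reach Graphics.DrawString.
        int first = Math.Max(0, (e.ClipRectangle.Top - AutoScrollPosition.Y) / lineHeight);
        int last  = Math.Min(lines.Count - 1,
                             (e.ClipRectangle.Bottom - AutoScrollPosition.Y) / lineHeight);

        for (int i = first; i <= last; i++)
        {
            float y = i * lineHeight + AutoScrollPosition.Y;
            e.Graphics.DrawString(lines[i], Font, Brushes.Black, 0f, y);
        }
    }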
I am looking for a way to approximate a volume of fluid moving over a heightmap. The easiest solution I can think of is to approximate the fluid as a large number of non-drawn spheres of small diameter (<0.1 m). I would then place a visible plane representing the water surface on "top" of the spheres, at the locations where they come to rest. To my knowledge, no managed physics engine contains a built-in fluid simulator, hence the question.
Implementation would consist of using a physics engine such as JigLibX, which is capable of simulating the motion of the spheres. To determine the height of the plane, I was thinking of averaging the maximum heights of the spheres in the top layer of each grouping.
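To make that concrete, here is a rough sketch of a simplified variant (Vector3 is XNA's; it assumes the heightmap starts at the origin, and all names are illustrative): each grid cell keeps the height of the topmost sphere it contains, and those heights drive the vertices of the visible water plane.

    // One float per grid cell: the height of the topmost sphere in that cell.
    // Cells containing no spheres keep float.MinValue ("no water here").
    float[,] SurfaceHeights(List<Vector3> spheres, float cellSize, int cellsX, int cellsZ)
    {
        var height = new float[cellsX, cellsZ];
        for (int x = 0; x < cellsX; x++)
            for (int z = 0; z < cellsZ; z++)
                height[x, z] = float.MinValue;

        foreach (var s in spheres)
        {
            int cx = (int)(s.X / cellSize);
            int cz = (int)(s.Z / cellSize);
            if (cx < 0 || cx >= cellsX || cz < 0 || cz >= cellsZ)
                continue;
            height[cx, cz] = Math.Max(height[cx, cz], s.Y);
        }

        // Averaging each cell with its neighbours here would smooth the surface.
        return height;
    }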
I don't expect performance to be great, but would it be workable in real time? If not, could I use this simulation to pre-bake lines of flow?
I hope this makes sense. I really want opinions/suggestions as to whether this is feasible, or whether there is a better way of approaching it.
Thanks for any help, Venatu
(If it's relevant, my target platform is XNA 4.0, using C#. Windows-only at this point in time, so PhysX/Havok are possibilities for the simulation, but I would prefer a managed solution.)
I haven't yet seen realistic fluid dynamics in real time without using something like PhysX, probably because the calculations needed are so complicated! The problem with your approach, as I see it, is the resting contact of all those spheres once they settle down. Lots of resting contact points are notorious for eating into performance very quickly, even on the most powerful of desktops.
If you are going down this route, then I'd recommend modelling the fluid as an elastic but solid body using spring-based physics, where the force applied to one part of the water propagates out through springs to the rest. This gives you the option of setting a breaking point for the springs and separating the body into two or more bodies when that happens (and the reverse for coming back together). This can give you the foundation for things like spray. It's also a more versatile approach in terms of performance, because you can choose the number of particles and springs used to approximate your model.
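A minimal sketch of one such spring with a breaking point (Vector3 is XNA's; the Particle type and all constants are illustrative assumptions):

    public class Particle
    {
        public Vector3 Position;
        public Vector3 Force; // accumulated each step, integrated elsewhere
    }

    public class Spring
    {
        public Particle A, B;
        public float RestLength = 0.5f;
        public float Stiffness = 50f;
        public float BreakLength = 2f; // stretch past this and the body splits
        public bool Broken;

        public void Apply()
        {
            if (Broken) return;

            Vector3 delta = B.Position - A.Position;
            float length = delta.Length();
            if (length < 1e-5f) return;

            if (length > BreakLength)
            {
                // The spring snaps: it stops transmitting force, which
                // effectively separates the body into independent pieces.
                Broken = true;
                return;
            }

            // Hooke's law, applied equally and oppositely to both ends.
            Vector3 dir = delta / length;
            float magnitude = Stiffness * (length - RestLength);
            A.Force += dir * magnitude;
            B.Force -= dir * magnitude;
        }
    }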
It's a big and complicated topic, but I hope that provided at least some insight!
The most popular method to simulate fluids in real-time is Smoothed-particle hydrodynamics.
Several useful links:
http://en.wikipedia.org/wiki/Smoothed-particle_hydrodynamics
http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html
http://www.plunk.org/~trina/thesis/html/thesis_toc.html
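To give a flavour of what SPH actually computes, here is a sketch of the core density estimate using the standard poly6 kernel from Müller et al.'s SPH papers (Vector3 is XNA's; equal particle masses are assumed for simplicity):

    // Density at particle i: the mass-weighted sum of the poly6 kernel over
    // all neighbours within the smoothing radius h.
    float Density(int i, Vector3[] pos, float mass, float h)
    {
        float h2 = h * h;
        float poly6 = 315f / (64f * (float)Math.PI * (float)Math.Pow(h, 9));

        float density = 0f;
        for (int j = 0; j < pos.Length; j++)
        {
            float r2 = (pos[i] - pos[j]).LengthSquared();
            if (r2 < h2)
            {
                float d = h2 - r2;
                density += mass * poly6 * d * d * d;
            }
        }
        return density; // pressure then follows from an equation of state
    }

The naive loop above visits every other particle, which is exactly why the broad-phase structures mentioned next matter.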
In addition to the simulation itself, you will also need specialized broad-phase collision detection, such as sweep-and-prune or hashing cells.
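For the hashing-cells option, a rough sketch (the cell size should match the interaction range; all names are illustrative):

    // Bucket particle indices by the grid cell containing them. Neighbour
    // queries then only inspect a cell and its 26 adjacent cells instead
    // of every other particle.
    Dictionary<long, List<int>> BuildGrid(Vector3[] pos, float cellSize)
    {
        var grid = new Dictionary<long, List<int>>();
        for (int i = 0; i < pos.Length; i++)
        {
            long key = CellKey((int)Math.Floor(pos[i].X / cellSize),
                               (int)Math.Floor(pos[i].Y / cellSize),
                               (int)Math.Floor(pos[i].Z / cellSize));
            List<int> cell;
            if (!grid.TryGetValue(key, out cell))
            {
                cell = new List<int>();
                grid[key] = cell;
            }
            cell.Add(i);
        }
        return grid;
    }

    // Pack three cell coordinates into a single dictionary key (21 bits each).
    static long CellKey(int x, int y, int z)
    {
        return ((long)(x & 0x1FFFFF) << 42)
             | ((long)(y & 0x1FFFFF) << 21)
             |  (long)(z & 0x1FFFFF);
    }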
And you're right: there are no complete 2D solutions for fluid dynamics.
I'm deciding whether or not to use VSync for a new game that I've been developing using OpenGL. The goal is to provide the user with the best gaming experience and to strike a good balance between performance and quality. This game is designed to run on both older (Intel/netbook) computers and newer (NVIDIA/i7 desktop) computers. I am aware that the purpose of VSync is to prevent tearing; however, I have never been able to reproduce this tearing issue even with VSync turned off. To provide the best experience, should VSync be turned on or off?
There are many things that can be said about this issue. Let me count the ways:
If you don't see tearing, you're likely VSync'ed even if you think you're not. Reasons may vary, but ultimately, swapping buffers in the middle of a frame is very noticeable if any movement is happening. (One reason might be that you're not configured to flip buffers, so something has to copy your framebuffer.)
VSync on has noticeable artifacts too. Typically it creates frames that display for a variable amount of time, tied more to the display refresh rate than to your rendering rate. This can create micro-stuttering, and it is very hard to control: you don't know, when you generate a frame, which tick it will display at, so you can't compensate for motion artifacts. This is why some people lock their rendering speed to the refresh rate: always render at 60 fps (or 30 fps) and advance time with time += 16.7 ms (see the sketch after this list). John Carmack has been asking for a mode that does "VSync at 60 Hz, but don't sync if I missed the deadline", for what I assume is this reason.
VSync on saves power (important on netbooks).
VSync off reduces input latency (when your input is 3 frames before display, it can start to matter). You can try to compensate for some of that, but ultimately it is hard to inject input updates at the very last minute.
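A minimal sketch of that "lock to the refresh rate" idea, using a fixed-step accumulator (UpdateSimulation and Render are hypothetical placeholders for your own functions):

    // Simulation always advances in fixed 1/60 s steps, regardless of how
    // fast frames are actually presented; leftover time carries over.
    const double Step = 1.0 / 60.0;
    double accumulator = 0.0;

    void Frame(double frameSeconds)
    {
        accumulator += frameSeconds;
        while (accumulator >= Step)
        {
            UpdateSimulation(Step); // the "time += 16.7 ms" from the list above
            accumulator -= Step;
        }
        Render();
    }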
So the bottom line is that there is no perfect answer; it really depends on what matters more for your game. Things you need to look at: what framerate you want to achieve, how much input latency matters, how much movement there is in the game, and whether power consumption is a concern.
For the best user experience, place an option in your menu to turn VSync on/off. This way the user can decide. You might not be able to reproduce tearing, but it's definitely a real issue on some systems.
As for the default setting, I'm not sure; I think it's your choice. I prefer to have vertical sync off by default, because it reduces performance a bit when enabled and most people either don't notice the tearing or don't care about it, but there are good reasons to enable it by default, too.
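If you happen to host your OpenGL window through OpenTK (an assumption here; one common route from C#), wiring that menu option up can be as small as:

    // Apply the user's menu choice. VSyncMode.On waits for the vertical
    // blank before swapping buffers; VSyncMode.Off presents immediately.
    void ApplyVSyncSetting(OpenTK.GameWindow window, bool vsyncOn)
    {
        window.VSync = vsyncOn ? OpenTK.VSyncMode.On : OpenTK.VSyncMode.Off;
    }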
I believe the default for VSync should be on (based on all my gaming, not programming, experience :) ). Just because you don't see it doesn't mean you shouldn't follow good practice:
Maybe you can't stare as closely at the screen as some other gamers.
Maybe when you see a little tear, you are not as annoyed as some gamers would be.
Maybe your specific screen hides tearing better than other screens would.
And as schanaader mentioned, games typically leave VSync as a configuration option somewhere in the video settings menu. The default should still be on for those who don't know what VSync means; if users are knowledgeable enough, they have the option of tweaking it to see the difference.
I'm getting images from a C328R camera attached to a small arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx.
Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions?
I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
Vision can be surprisingly difficult, especially when you try to tolerate varying conditions. A few good things to research include blob finding (searching for contiguous pixels matching certain criteria, usually a brightness threshold), image segmentation (can you have multiple balls in an image?) and general theory on hue (most vision algorithms work with grayscale or binary images, so you'll first need to transform the image in a way that highlights "orangeness" as the criterion for selection).
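A rough single-pass sketch of those ideas using System.Drawing (the hue/saturation cut-offs and the minimum blob size are guesses you would tune under your lighting; GetPixel is slow, so LockBits would be the optimised route later):

    using System.Drawing;

    static Point? FindOrangeBlob(Bitmap frame)
    {
        long sumX = 0, sumY = 0, count = 0;

        for (int y = 0; y < frame.Height; y++)
        {
            for (int x = 0; x < frame.Width; x++)
            {
                Color c = frame.GetPixel(x, y);
                float hue = c.GetHue();        // 0..360; orange sits roughly at 10..40
                float sat = c.GetSaturation(); // low saturation = washed-out/grey

                if (hue > 10f && hue < 40f && sat > 0.4f && c.GetBrightness() > 0.2f)
                {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }

        if (count < 50)  // too few matching pixels: treat as noise, no ball
            return null;

        return new Point((int)(sumX / count), (int)(sumY / count));
    }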
Since you are presumably tracking these objects in real time as you move toward them, you might also be interested in learning about tracking models, such as the Kalman filter. It's overkill for what you're doing, but it's interesting and the basic ideas are helpful. Since you presumably know that the object should not be moving very quickly, you can use that fact to filter out false positives that could otherwise lead the robot away from the object. You can put together a simpler version of this kind of filtering by simply ignoring frames whose detection has moved too far from the previously accepted one (with a few boundary conditions to avoid getting stuck ignoring the object).
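A sketch of that simple gating filter (the 40-pixel gate and the 10-frame give-up count are illustrative numbers to tune):

    using System;
    using System.Drawing;

    class DetectionFilter
    {
        Point? lastAccepted;
        int rejectedInARow;

        public Point? Filter(Point? detection)
        {
            if (detection == null)
                return lastAccepted;

            bool nearPrevious = lastAccepted == null
                || Distance(detection.Value, lastAccepted.Value) < 40;

            // The boundary condition: after enough consecutive rejections,
            // accept anyway so we can't get stuck ignoring the real object.
            if (nearPrevious || rejectedInARow > 10)
            {
                lastAccepted = detection;
                rejectedInARow = 0;
            }
            else
            {
                rejectedInARow++; // likely a false positive; ignore this frame
            }
            return lastAccepted;
        }

        static double Distance(Point a, Point b)
        {
            int dx = a.X - b.X, dy = a.Y - b.Y;
            return Math.Sqrt(dx * dx + dy * dy);
        }
    }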