Temporal point clustering in 3D polar coordinates / screen space - C#

I am making an air combat game, with thousands of entities flying around in all directions.
All entities can have a HUD overlay associated with them. If they are in the frustum, it's a simple projection to the screen plane. Otherwise, it's projected to the screen border.
There is a lot of overlap of HUD elements.
I want to group the entities' overlay indicators to avoid overlap.
When entities are off screen, grouping them is trivial. A simple sorted dictionary does the trick.
However for frustum grouping, it's a bit more tricky.
I could just do 2D point clustering, but it would end up grouping points that have very different distances from the player.
Simple 3D point clustering would fail too, because points that are close to the player should not be grouped as easily as points far away.
So the ideal solution seems to cluster points by angular distance, as well as the logarithm of the distance from the player.
But here's the last issue: the algorithm needs to either be stable enough to avoid constantly shifting group populations OR take into account the previous frame's groups.
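For what it's worth, here is a minimal sketch (C#, System.Numerics, hypothetical names) of the feature transform I have in mind: each visible entity becomes an (azimuth, elevation, log-distance) triple relative to the camera, and a clustering pass would then measure distances in that space.

    using System;
    using System.Numerics;

    // Hypothetical helper: map an entity position into the space in which
    // clustering distances are compared (two angles plus log of range).
    static class HudClusterFeatures
    {
        public static Vector3 ToFeature(Vector3 entityPos, Vector3 cameraPos,
                                        Vector3 cameraForward, Vector3 cameraUp)
        {
            Vector3 toEntity = entityPos - cameraPos;
            float distance = toEntity.Length();
            Vector3 dir = toEntity / distance;

            // Camera basis vectors.
            Vector3 right = Vector3.Normalize(Vector3.Cross(cameraForward, cameraUp));
            Vector3 up = Vector3.Cross(right, cameraForward);

            float azimuth   = MathF.Atan2(Vector3.Dot(dir, right), Vector3.Dot(dir, cameraForward));
            float elevation = MathF.Asin(Math.Clamp(Vector3.Dot(dir, up), -1f, 1f));

            // Log of range: distant entities compress together and therefore
            // group more readily than nearby ones.
            float logDistance = MathF.Log(distance + 1f);

            return new Vector3(azimuth, elevation, logDistance);
        }
    }

A weight on the third component would control how strongly range separates clusters compared to angle.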
Thanks for reading

Related

Isometric tile engine

I am making an RPG game using an isometric tile engine that I found here:
http://xnaresources.com/default.asp?page=TUTORIALS
However after completing the tutorial I found myself wanting to do some things with the camera that I am not sure how to do.
Firstly I would like to zoom the camera in more so that it is displaying a 1 to 1 pixel ratio.
Secondly, would it be possible to make this game 2.5D in the sense that when the camera moves, the sprite trees and the like move properly? By this I mean that the bottom of the sprite stays planted while the top moves against the background, giving a very 3D-like experience. This effect can best be seen in games like Diablo 2.
Here is the source code off their website:
http://www.xnaresources.com/downloads/tileengineseries9.zip
Any help would be great, Thanks
Games like Diablo, The Sims 1 and 2, SimCity 1-3, X-Com 1 and 2, etc. were actually just 2D games. The 2.5D effect requires that tiles further away are exactly the same size as tiles nearby, and rotation in these games is restricted to 90-degree steps.
They draw using what is basically the painter's algorithm: draw what is furthest away first and overdraw things that are nearer. Diablo is actually pretty simple; as far as I remember it didn't introduce layers or height differences, just a flat map. So you draw the floor tiles first (back-to-front ordering isn't strictly necessary here since they are all at the same elevation), then draw the walls, characters, effects, etc. back to front.
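To illustrate that draw order, here is a minimal XNA-style sketch; the Tile and Entity types and the depth key are made up and not taken from the linked tutorial.

    // Minimal painter's-algorithm sketch for an XNA SpriteBatch.
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    class Tile   { public Texture2D Texture; public Vector2 ScreenPosition; }
    class Entity { public Texture2D Texture; public Vector2 ScreenPosition; public int MapX, MapY; }

    static class PainterDraw
    {
        public static void DrawScene(SpriteBatch spriteBatch,
                                     List<Tile> floorTiles, List<Entity> entities)
        {
            spriteBatch.Begin();

            // Floor first; order among floor tiles hardly matters, they share one elevation.
            foreach (var tile in floorTiles)
                spriteBatch.Draw(tile.Texture, tile.ScreenPosition, Color.White);

            // Then walls, characters and effects, back to front, so nearer
            // sprites overdraw farther ones.
            foreach (var e in entities.OrderBy(x => x.MapX + x.MapY))
                spriteBatch.Draw(e.Texture, e.ScreenPosition, Color.White);

            spriteBatch.End();
        }
    }

For a standard diamond isometric layout, sorting by the sum of the map coordinates is one common choice of depth key; any key that increases toward the viewer will do.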
Everything in these games was rendered to bitmaps and drawn as bitmaps, even though the source may have been a 3D textured model.
If you want to add perspective or free rotation then you need everything to be a 3D model. Your rendering will be simpler because depth and render order aren't as critical: you would use z-buffering to solve those issues. The main remaining issue is rendering transparent parts in the right order, or else you may end up with some odd results. However, even if your rendering is simpler, your animation and in-memory storage become a bit more difficult: you need to animate 3D models instead of just stepping through an array of bitmaps. Selecting items on the screen also requires a little more work, since the position and size of the elements are no longer consistent or easily predictable.
So the features you want will dictate which sort of solution you can use. Either way has its pluses and minuses.

Starfield Screensaver Equations

For those of you who don't remember exactly what the old Windows Starfield screensaver looked like, here's a YouTube video: http://www.youtube.com/watch?v=r5AoFiVs2ME
Right now, I can generate random particles ("stars") within a certain radius. What I'm having trouble with is figuring out the best way to achieve the effect seen in the video linked above.
Question: Given that I have the coordinates (vectors) of my randomly generated particles, what is the best way and/or equation to give them a direction (vector) so that they move across the screen in a way that closely resembles the old screensaver?
Thanks!
They seem to move away from the center. You could calculate the vector from the center point of the screen to the generated particle's position, then move the particle along that direction, accelerating it until it is outside the screen.
A basic algorithm for you to work with:
Generate stars at random locations with a 3-D Gaussian distribution (most likely in the middle of the screen, less likely as you go farther out). Note that the motion vector of the star is determined by this starting point: the star will effectively travel outward along the line formed by the origin point and the starting location.
Assign each newly generated star a distance. Note that distance is irrespective of starting location.
Move the star in a straight line at an exponentially increasing speed while simultaneously decreasing its distance. You'll have to tweak these parameters yourself.
The star should disappear when it passes the boundary of the screen, regardless of speed.
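A rough sketch of that loop, in C# with made-up types and arbitrary constants you would need to tweak:

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Hypothetical Star/Starfield types illustrating the algorithm above.
    class Star
    {
        public Vector2 Position;   // current screen position
        public Vector2 Direction;  // unit vector pointing away from the screen center
        public float Speed;        // grows exponentially each update
        public float Distance;     // assigned at spawn, shrinks over time; could drive size/brightness
    }

    class Starfield
    {
        readonly Random rng = new Random();
        readonly List<Star> stars = new List<Star>();
        readonly Vector2 center;
        readonly float width, height;

        public Starfield(float width, float height)
        {
            this.width = width;
            this.height = height;
            center = new Vector2(width / 2f, height / 2f);
        }

        public void Spawn()
        {
            // Spawn near the center; the offset also fixes the direction of travel.
            var offset = new Vector2((float)(rng.NextDouble() - 0.5) * 40f,
                                     (float)(rng.NextDouble() - 0.5) * 40f);
            if (offset == Vector2.Zero) offset = new Vector2(1f, 0f);

            stars.Add(new Star
            {
                Position = center + offset,
                Direction = Vector2.Normalize(offset),
                Speed = 10f,       // starting speed, tweak to taste
                Distance = 100f    // arbitrary starting "depth"
            });
        }

        public void Update(float dt)
        {
            for (int i = stars.Count - 1; i >= 0; i--)
            {
                Star s = stars[i];
                s.Speed *= 1f + 2f * dt;                  // exponential acceleration
                s.Position += s.Direction * s.Speed * dt;
                s.Distance -= 30f * dt;                   // closing in on the viewer

                // Remove the star once it crosses the screen border.
                if (s.Position.X < 0 || s.Position.X > width ||
                    s.Position.Y < 0 || s.Position.Y > height)
                    stars.RemoveAt(i);
            }
        }
    }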

How to create polarized 3D image using Matlab?

I want to create a polarized 3D image using Matlab or C#.
Is there any way to create a 3D image from any 2D image using Matlab or C#?
Polarized 3D is an effect created in the physical world with physical projectors shining onto the same spot of a physical screen. It's not a digital effect that you can create in an image on a computer screen. You cannot write code to render an image onto a normal computer screen and then see 3D with polarized glasses.
Stereoscopic images for use with polarised glasses are created by projecting the left and right eye images so that they overlap through separate projectors which have a polarising filter fitted.
The same is true for the red and green tinted glasses (which are not the same as the old style anaglyph images).
If you only have one 2D image you cannot create a 3D image from it without getting involved in manual image processing.
Build your own Polarized Stereoscopic Projection System
Principles of Polarization Optics
Polarized Light
Since the late 19th century we have known that light can be described in terms of electromagnetic waves. The theory behind this is the well understood Maxwell equations. Since this is not an article about electrodynamics, just the essentials:
Light is electromagnetic radiation with wavelengths between 800 nm (red) and 400 nm (violet).
Electromagnetic radiation has an electric and a magnetic field component.
The electric and magnetic fields are transversal, which means perpendicular to the direction of propagation of the wave.
The electric and magnetic fields are perpendicular to each other.
http://en.wikipedia.org/wiki/Electromagnetic_radiation
The electric field vector (one could also use the magnetic field, but convention is to use the electric field) determines the polarization. There are two kinds of polarization:
Linear polarization: The electric component remains in one single plane, the polarization plane
Circular polarization: With each cycle the electric component "swings" into a different direction.
If you look along the direction of propagation, the field vector may cycle through:
↑→↓← -- this is called right turning polarization
↑←↓→ -- this is called left turning polarization
The effect of circular polarization is created by retarding one component
of linear polarized light by a quarter of a wavelength.
See also this Wikipedia article
http://en.wikipedia.org/wiki/Polarization_(waves)
Creating Polarized Light
Wikipedia has an excellent article on the details
http://en.wikipedia.org/wiki/Polarizer
Here's the essentials.
Linear Polarization
Linear polarized light can be obtained in various ways:
By filtering out all unwanted polarization components
from light with a broad polarization distribution.
All light emitted in a statistical manner (thermal radiation, high pressure gas discharge, lighting arcs) has this property.
One can filter the desired polarization plane using a filter.
The following filters are known:
Brewster beam splitters use Brewster reflection to split
a beam of light into two polarization components, polarized
perpendicular to each other.
Birefringence employs the phenomenon that some crystals
have different indices of refraction for different polarization
planes. Again the light paths are split.
Absorption in stretched polymers. Stretching a polymer gives it an anisotropic structure. Some anisotropic polymers will absorb only incoming light polarized parallel (or perpendicular, it depends on the material) to the stretching direction.
The light emitted from a laser is linear polarized.
Depending on how the laser is built, the polarization plane will
gradually change over time.
http://en.wikipedia.org/wiki/Linear_polarization
Circular Polarization
In optics, circular polarization is created by passing linear polarized
light through some anisotropic material that will retard one of the
components (electric or magnetic) by a quarter of the wavelength. This
is called a λ/4 retarder.
The angle between the linear polarization and the anisotropic material's major axis determines the ratio between the left and right turning polarized light that results:
Incoming linear polarized light tilted by +45° will be fully left turning.
Incoming linear polarized light tilted by -45° will be fully right turning.
Incoming linear polarized light tilted by 0° will consist of 50% left and 50% right turning.
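For those who like to see the math, here is the standard Jones-calculus form of those three cases (textbook convention, fast axis of the retarder taken as horizontal):

    \[
      Q = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \qquad
      Q \cdot \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
        = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad
      Q \cdot \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}
        = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}
    \]

The two outputs are circular polarizations of opposite handedness, while light polarized along the fast axis itself passes through unchanged and stays linear, i.e. an equal mix of left and right turning components.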
It should be noted that, due to the reversibility of the light's path, passing circular polarized light through a λ/4 retarder will turn it back into linear polarized light with the corresponding polarization plane. This linear polarized light can then be filtered again by linear polarizers. This is how circular polarization 3D glasses work.
http://en.wikipedia.org/wiki/Circular_polarization
Polarized Light and Interaction with the Screen
Scattering and Diffraction
The typical projection screen uses very small particles, usually TiO2, to scatter and diffract the light in all directions. In the scattering process the light bounces multiple times between the particles. While each single bounce leaves the light wave polarized, in the grand statistical scheme any notable polarization is lost.
Thus a normal white projection screen is unsuitable for polarized stereoscopic projection.
Metallic Reflection
The key to building a polarizing stereoscopic projection system is the use of a screen material that retains the polarization of the incoming light.
This is achieved by employing metallic reflection on particles much larger than
the light's wavelengths.
A DIY Stereoscopic Projection System
Making a DIY Silver Screen
You'll need:
aluminum powder pigment
clear acrylic base
deep black textile dye
canvas
This is how you do it:
Dye the canvas deep black. This will absorb any light not reflected,
instead of scattering it. Let it dry thoroughly. You may repeat step 1
multiple times.
Paint one layer of clear acrylic base on the now deep black dyed canvas.
Doing one side suffices. All further steps are now done on this clear acrylic base.
Make a very thick aluminum acrylic paint. Here are a few hints:
Mix the aluminum powder with the acrylic base in very small batches.
Don't make an aluminum powder paste by mixing it with water!
After putting each small batch of aluminum powder into the acrylic, stir thoroughly so that it becomes a homogeneous mass.
You should end up with a paint of 1 part aluminum powder to 1 part acrylic base.
Once you've got that thick paint, thin it with 1 part of water.
Apply layers of aluminum acrylic paint on the prepared canvas. Let each layer dry.
Repeat step 4 until you've got an even aluminum metallic painted surface with no
black parts shining through.
Video Projection
Single Projector Setup
Most cinemas use a single projector and the RealD Z-filter system to alternately show left and right images at a swap rate of 144 Hz, with the Z-filter dynamically modulating the polarization.
Technically the Z-filter is just a large liquid crystal panel. LCs have the property of rotating the passing light's polarization plane depending on a voltage applied to the LC. The Z-filter thus rotates the light by +/-45°, controlled by an AC voltage in sync with the left-right image swap. In front of the Z-filter is a linear polarizer, and behind it a λ/4 retarder aligned parallel to the linear polarizer. When stereoscopic material is shown, the Z-filter rotates the polarization plane so that either only left or only right turning polarized light leaves the system.
If the Z-filter is turned off, the light will be turned into 50% left and 50% right
turning polarization.
It is perfectly possible to recreate this system DIY. This however shall be described
in a separate article still to be written.
Dual Projector Setup
Using two projectors is the easiest way to project the two distinct polarized images. The idea is simple: each projector is equipped with a polarizing filter matching the corresponding eye's filter in the viewers' glasses, so that light projected from the "left" projector reaches only the left eyes, and the "right" projector's light reaches only the viewers' right eyes.
Selecting the Projectors
It boils down to the following: you need two identical projectors which emit either unpolarized light - that is, DLP projectors using classical arc lamps - or evenly linear polarized light for all base colours.
The latter case is more appealing since you won't "throw away" light, but the safer choice is some DLP type. Note that those new nifty LED projectors usually exhibit some uneven polarization, which makes them tricky to next to impossible to use for polarized stereoscopy.
Making the Filter Slides
The projector's filter slides can be made from the very same kind of 3D glasses
which are worn by the viewers. The 3D glasses of RealD are meant for single use.
Although cinemas set up boxes for recycling, there's no harm to the venues if you
put those glasses you got in the cinema to your own use. In fact most cinemas will
have no problem with giving you some of the glasses returned to the recycling boxes.
You may be tempted to just put those filters right behind the projector's lens. This is however crude and will quickly destroy those filters. Remember that 50% of the light's power may end up in the filters, heating them up. So you want to distribute the light's power over a significantly large area.
You'll need:
a number of used RealD glasses
4 panes of identically sized picture frame glass (something like 50mm × 50mm)
sharp and exact scissors or a paper guillotine
a fine tip water solvent marker pen (or similar) - whiteboard markers do fine!
some adhesive tape. Duct tape works very well (what, did you expect something else?)
This is how it goes:
In all the 3D glasses mark the back side (i.e. the side towards the eyes)
with a small letter 'L' or 'R' (left eye or right eye), right in the middle.
By applying some twist/torque to the glasses' frames you can separate RealD glasses, releasing the filters.
Sort the filters into left and right filters.
Cut the filters into equally sized rectangular pieces and sort them into left and right. Don't make them square; it's important that you still know the orientation they had within the glasses' frame.
Clear the marking, making sure you still know what's front and what's back.
Arrange the filter pieces on the glass panes so that they nearly fill them, all facing the same way (i.e. all front or all back).
Keep the gaps as small as possible.
Apply the second glass pane and apply duct tape along the borders.
You now have left and right polarizing filter slides. Put on 3D glasses of the same make and determine the orientation in which each pane blocks the light most efficiently by looking through the filter slide. Important: the filter slide that blocks the light for an eye looking through it directly will be the slide used for projecting that particular eye's image later. The reason is that reflection changes chirality, i.e. left and right turning are swapped by reflection.
Setting up the Projection
Align the projectors so that their images match. Vertical alignment must be perfect.
Horizontal alignment may be slightly shifted, but it should be done as well as possible, too.
Place the filters in the light's path. The whole filter area should be used.
Show the stereoscopic material, so that each projector displays its eye's picture.

Subdividing 3D mesh into arbitrarily sized pieces

I have a mesh defined by 4 points in 3D space. I need an algorithm which will subdivide that mesh into subdivisions of an arbitrary horizontal and vertical size. If the subdivision size isn't an exact divisor of the mesh size, the edge pieces will be smaller.
All of the subdivision algorithms I've found only subdivide meshes into exact powers of 2. Does anyone know of one that can do what I want?
Failing that, my thought about a possible implementation is to rotate the mesh so that it is flat on the Z axis, subdivide in 2D and then translate back into 3D. That's because my mind finds 3D hard ;) Any better suggestions?
Using C# if that makes any difference.
If you only have to work with a rectangle in 3D, then you simply need to obtain the two edge vectors and then you can generate all the interior points of the subdivided rectangle. For example, say your quad is defined by (x0,y0),...,(x3,y3), in order going around the quad. The edge vectors relative to point (x0,y0) are u = (x1-x0,y1-y0) and v = (x3-x0,y3-y0).
Now, you can generate all the interior points. Suppose you want M points along the first edge and N along the second; then the interior points are just
(x0,y0) + i/(M-1) * u + j/(N-1) * v
where i and j go from 0 .. M-1 and 0 .. N-1, respectively. You can figure out which vertices need to be connected together by just working it out on paper.
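A small C# sketch of that formula, assuming the corners are available as 3D vectors (the arithmetic is the same componentwise in 2D or 3D):

    using System.Numerics;

    static class QuadSubdivision
    {
        // Generate an M x N grid of points across the quad p0-p1-p2-p3
        // (corners in order around the quad; only p0, p1 and p3 are needed).
        // Mirrors the formula above: p0 + i/(M-1) * u + j/(N-1) * v. Requires M, N >= 2.
        public static Vector3[,] Subdivide(Vector3 p0, Vector3 p1, Vector3 p3, int M, int N)
        {
            Vector3 u = p1 - p0;   // first edge vector
            Vector3 v = p3 - p0;   // second edge vector

            var points = new Vector3[M, N];
            for (int i = 0; i < M; i++)
                for (int j = 0; j < N; j++)
                    points[i, j] = p0
                                 + u * (i / (float)(M - 1))
                                 + v * (j / (float)(N - 1));
            return points;
        }
    }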
This kind of uniform subdivision works fine for triangular meshes as well, but each edge must have the same number of subdivided edges.
If you want to subdivide a general mesh, you can just do this to each individual triangle/quad. This kind of uniform subdivision results in poor quality meshes since all the original flat facets remain flat. If you want something more sophisticated, you can look at Loop subdivision, Catmull-Clark, etc. Those are typically constrained to power-of-two levels, but if you research the original formulations, I think you can derive subdivision stencils for non-power-of-two divisions. The theory behind that is a bit more involved than I can reasonably describe here.
Now that you've explained things a bit more clearly, I don't see your problem: you have a rectangle and you want to divide it up into rectangular tiles. So the mesh points you want are regularly spaced in both orthogonal directions. In 2D this is trivial, surely? In 3D it's also trivial, though the maths is a little trickier.
Off the top of my head I would guess that transforming from 3D to 2D (and aligning the rectangle with the coordinate axes at the same time) then calculating the mesh points, then transforming back to 3D is probably about as simple (and CPU-time consuming) as working it all out in 3D in the first place.
Yes, using C# means that I'm not able to propose code to help you.
Comment on or edit your question if I've missed the point.

2D Bone system in XNA

I am trying to write a 2D Bone system in XNA.
My initial thought was using matrices to keep track of the rotations and positioning through out the bone tree so items could easily displayed.
Cool, I thought, and then dismay hit me in the face when I saw that matrices could only be applied to a single SpriteBatch.Begin call and not per draw call!
I ran some performance tests to check whether my dismay was deserved, and it was: calling SpriteBatch.Begin and End a bunch of times drops my frame rate by a huge (and unacceptable) amount.
So, before drawing a single bone's image I am going to have to construct its final position and rotation (and maybe scale in the future) manually. In this case, would you still use matrices and somehow extract the information at the end, just before drawing the bone? If so, any ideas on how to get the final information I need? Or would it be easier to try to construct it all from the raw positions and rotations of its parent nodes?
Honestly I would ditch the Sprite rendering object and switch to screen space quads. There are no artificial limitations to screen space quads and you can use the standard implementation of bone systems: traverse down the tree applying transforms as you go, then pop them as you move back up the tree.
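If you do stay with SpriteBatch, one way to get the per-bone data is to accumulate position and rotation up the parent chain and feed them straight into Draw. A sketch with made-up Bone fields:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    class Bone
    {
        public Bone Parent;
        public Vector2 LocalPosition;  // offset from the parent joint
        public float LocalRotation;    // radians, relative to the parent
        public Texture2D Texture;
        public Vector2 Origin;         // pivot point inside the texture
    }

    static class BoneDrawing
    {
        // Walk up the parent chain, rotating the local offset into each parent's
        // frame and accumulating the rotation angles.
        public static void GetWorldTransform(Bone bone, out Vector2 position, out float rotation)
        {
            position = bone.LocalPosition;
            rotation = bone.LocalRotation;
            for (Bone p = bone.Parent; p != null; p = p.Parent)
            {
                position = Vector2.Transform(position, Matrix.CreateRotationZ(p.LocalRotation))
                         + p.LocalPosition;
                rotation += p.LocalRotation;
            }
        }

        public static void Draw(SpriteBatch spriteBatch, Bone bone)
        {
            Vector2 position;
            float rotation;
            GetWorldTransform(bone, out position, out rotation);

            spriteBatch.Draw(bone.Texture, position, null, Color.White,
                             rotation, bone.Origin, 1f, SpriteEffects.None, 0f);
        }
    }

Walking up per bone is O(depth) per draw; caching the accumulated transform while traversing the tree top-down gives the same result in a single pass.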
Aren't matrices overkill when you are working in 2D anyway? I mean, 16 scalar multiplications for each matrix-vector product when you can do it with 4 multiplications for rotation and 2 additions for translation?
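For reference, the direct form being described there, as a tiny self-contained sketch:

    using System;

    static class Transform2D
    {
        // 4 multiplications for the rotation, 2 additions for the translation.
        public static void Apply(float localX, float localY,
                                 float rotation, float translateX, float translateY,
                                 out float worldX, out float worldY)
        {
            float c = (float)Math.Cos(rotation);
            float s = (float)Math.Sin(rotation);
            worldX = c * localX - s * localY + translateX;
            worldY = s * localX + c * localY + translateY;
        }
    }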
