I want to create a polarized 3D image using Matlab or C#.
Is there any way to create a 3D image from a 2D image using Matlab or C#?
Polarized 3D is an effect created in the physical world with physical projectors shining onto the same spot of a physical screen. It's not a digital effect that you can create in an image on a computer screen. You cannot write code to render an image onto a normal computer screen then see 3D with the polarized glasses.
Stereoscopic images for use with polarised glasses are created by projecting the left and right eye images so that they overlap through separate projectors which have a polarising filter fitted.
The same is true for the red and green tinted glasses (which are not the same as the old style anaglyph images).
If you only have one 2D image you cannot create a 3D image from it without getting involved in manual image processing.
Build your own Polarized Stereoscopic Projection System
Principles of Polarization Optics
Polarized Light
Since the late 19th century we have known that light can be described in terms
of electromagnetic waves. The theory behind this is the well-understood
Maxwell equations. Since this is not an article about electrodynamics, just
the essentials:
Light is electromagnetic radiation with wavelengths between roughly 700 nm (red) and 400 nm (violet).
Electromagnetic radiation has an electric and a magnetic field component.
The electric and magnetic fields are transverse, which means perpendicular to the direction of propagation of the wave.
The electric and magnetic fields are perpendicular to each other.
http://en.wikipedia.org/wiki/Electromagnetic_radiation
The electric field vector (one could also use the magnetic field, but convention
is to use the electric field) determines the polarization. There are two kinds of
polarization:
Linear polarization: The electric component remains in one single plane, the polarization plane
Circular polarization: With each cycle the electric component "swings" into a different direction
If you look along the propagation the field vector may cycle through:
↑→↓← -- this is called right turning polarization
↑←↓→ -- this is called left turning polarization
The effect of circular polarization is created by retarding one component
of linear polarized light by a quarter of a wavelength.
See also this Wikipedia article
http://en.wikipedia.org/wiki/Polarization_(waves)
Creating Polarized Light
Wikipedia has an excellent article on the details
http://en.wikipedia.org/wiki/Polarizer
Here are the essentials.
Linear Polarization
Linearly polarized light can be obtained in various ways:
By filtering out all unwanted polarization components
from light with a broad polarization distribution.
All light emitted in a statistical manner (thermal radiation,
high-pressure gas discharges, lightning arcs) has this property.
One can filter the desired polarization plane using a filter.
The following filters are known:
Brewster beam splitters use Brewster-angle reflection to split
a beam of light into two polarization components, polarized
perpendicular to each other.
Birefringence employs the phenomenon that some crystals
have different indices of refraction for different polarization
planes. Again the light paths are split.
Absorption in stretched polymers. Stretching a polymer gives
it an anisotropic structure. Some anisotropic polymers will absorb
only incoming light polarized parallel (or perpendicular, it
depends on the material) to the stretching direction.
The light emitted from a laser is linearly polarized.
Depending on how the laser is built, the polarization plane will
gradually change over time.
http://en.wikipedia.org/wiki/Linear_polarization
Circular Polarization
In optics, circular polarization is created by passing linearly polarized
light through an anisotropic material that retards one of the
components (electric or magnetic) by a quarter of the wavelength. This
is called a λ/4 retarder.
The angle between the linear polarization plane and the anisotropic material's major
axis determines the ratio between left- and right-turning polarized light in the result:
Incoming linear polarized light tilted by +45° will be fully left turning.
Incoming linear polarized light tilted by -45° will be fully right turning.
Incoming linear polarized light tilted by 0° will consist of 50% left and 50% right turning.
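A short worked example, assuming the retarder's fast and slow axes are x and y and the incoming linearly polarized light is tilted by +45°, so both components are equal. The retarder delays the y component by a quarter wavelength, i.e. by a phase of π/2:

    E_in  = E0 · (cos ωt, cos ωt)
    E_out = E0 · (cos ωt, cos(ωt − π/2)) = E0 · (cos ωt, sin ωt)

The field vector of E_out rotates at constant magnitude, i.e. the light is fully circularly polarized. Tilting the incoming plane to −45° instead flips the sign of the y component and reverses the sense of rotation, matching the list above.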
It should be noted that, due to the reversibility of the light path,
passing circularly polarized light through a λ/4 retarder will
turn it into linearly polarized light in the corresponding polarization
plane. This linearly polarized light can then be filtered again by linear
polarizers. This is how circular polarization 3D glasses work.
http://en.wikipedia.org/wiki/Circular_polarization
Polarized Light and Interaction with the Screen
Scattering and Diffraction
The typical projection screen uses very small particles, usually TiO2,
to scatter and diffract the light in all directions. In the scattering process
the light bounces multiple times between the particles. While each individual bounce
leaves the light wave polarized, in the grand statistical scheme any net
polarization is lost.
Thus a normal white projection screen is unsuitable for polarized stereoscopic projection.
Metallic Reflection
The key for building a polarizing stereoscopic projection system is the use
of a screen material that retains the polarization of the incoming light.
This is achieved by employing metallic reflection on particles much larger than
the light's wavelengths.
A DIY Stereoscopic Projection System
Making a DIY Silver Screen
You'll need:
aluminum powder pigment
clear acrylic base
deep black textile dye
canvas
This is how you do it:
Dye the canvas deep black. This will absorb any light not reflected,
instead of scattering it. Let it dry thoroughly. You may repeat step 1
multiple times.
Paint one layer of clear acrylic base on the now deep black dyed canvas.
Doing one side suffices. All further steps are now done on this clear acrylic base.
Make a very thick aluminum acrylic paint. Here are a few hints:
Mix the aluminum powder with the acrylic base in very small batches.
Don't make an aluminum powder paste by mixing it with water!
After putting each small batch of aluminum powder into the acrylic
stir thoroughly so that it becomes a homogeneous mass.
You should end up with a paint of 1 part aluminum powder to 1 part acrylic base.
Once you've got that thick paint, thin it with 1 part of water.
Apply layers of aluminum acrylic paint on the prepared canvas. Let each layer dry.
Repeat step 4 until you've got an even aluminum metallic painted surface with no
black parts shining through.
Video Projection
Single Projector Setup
Most cinemas use a single projector and the RealD Z-filter system to
alternately show left and right images at a swap rate of 144 Hz, with the
Z-filter dynamically modulating the polarization.
Technically the Z-filter is just a large liquid crystal panel. LCs have the
property of rotating the passing light's polarization plane, depending on the voltage
applied to the LC. The Z-filter thus rotates the light by +/-45°, controlled
by an AC voltage in sync with the left-right image swap. In front of the Z-filter
sits a linear polarizer, behind it a λ/4 retarder aligned parallel to the linear
polarizer. When stereoscopic material is shown, the Z-filter rotates the polarization
plane so that only left-turning or only right-turning polarized light leaves
the system.
If the Z-filter is turned off, the light will be turned into 50% left and 50% right
turning polarization.
It is perfectly possible to recreate this system DIY. This however shall be described
in a separate article still to be written.
Dual Projector Setup
Using two projectors is the easiest way to project the two distinctly polarized images.
The idea is simple: each projector is equipped with a polarizing filter matching the
corresponding eye's filter in the viewers' glasses, so that light projected from the "left"
projector will reach only the left eyes, and the "right" projector's light reaches
only the viewers' right eyes.
Selecting the Projectors
It boils down to the following: you need two identical projectors which emit either
unpolarized light - that is, DLP projectors using classical arc lamps -
or evenly linearly polarized light for all base colours.
The latter case is more appealing since you don't "throw away" light. But the safer
choice is a DLP type. Note that those new nifty LED projectors usually exhibit
some uneven polarization, which makes them tricky to nearly impossible to use for
polarized stereoscopy.
Making the Filter Slides
The projectors' filter slides can be made from the very same kind of 3D glasses
which are worn by the viewers. RealD 3D glasses are meant for single use.
Although cinemas set up boxes for recycling, there's no harm to the venues if you
put those glasses you got in the cinema to your own use. In fact most cinemas will
have no problem giving you some of the glasses returned to the recycling boxes.
You may be tempted to just put those filters right behind the projector's lens.
This is however crude and will quickly destroy those filters. Remember that 50% of
the light's power may end up in the filters, heating them up.
So you want to distribute the light's power over a significantly larger area.
You'll need:
a number of used RealD glasses
4 panes of identically sized picture frame glass (something like 50mm × 50mm)
sharp and exact scissors or a paper guillotine
a fine-tip water-soluble marker pen (or similar) - whiteboard markers do fine!
some adhesive tape. Duct tape works very well (what, did you expect something else?)
This is how it goes:
In all the 3D glasses mark the back side (i.e. the side towards the eyes)
with a small letter 'L' or 'R' (left eye or right eye), right in the middle.
By applying some twist/torque to the glasses' frames you can separate RealD glasses
and release the filters.
Sort the filters into left and right filters.
Cut the filters into equally sized rectangular pieces and sort them into left and right.
Don't make them square. It's important that you still know the orientation they had within
the glasses' frame.
Remove the marking, making sure you still know what's front and what's back.
Arrange the filter pieces on the glass panes - left pieces on one pane, right pieces on another -
so that they nearly fill each pane. Of course all facing the same way (i.e. all front or all back).
Keep the gaps as small as possible.
Apply the second glass pane on top and apply the duct tape along the borders.
You now have left and right polarizing filter slides. Put on 3D glasses of the same make
and determine the orientation in which each pane blocks the light most efficiently by looking
through the filter slide. Important: the filter pane that blocks the light for an eye
looking through it directly will be the slide for projecting that particular eye's image later.
The reason for this is that reflection changes chirality, i.e. left- and right-turning polarization
are swapped by reflection.
Setting up the Projection
Align the projectors so that their images match. Vertical alignment must be perfect.
Horizontal alignment may be slightly shifted, but it should be done as well as possible, too.
Place the filters in the light path. The whole filter area should be used.
Show the stereoscopic material, so that each projector displays its eye's picture.
Related
I am making an air combat game, with thousands of entities flying around in all directions.
All entities can have a HUD overlay associated with them. If they are in the frustum, it's a simple projection to the screen plane. Otherwise, it's projected to the screen border.
There is a lot of overlap of HUD elements.
I want to group entities' overlay indicators to avoid overlap.
When entities are off screen, grouping them is trivial. A simple sorted dictionary does the trick.
However for frustum grouping, it's a bit more tricky.
I could just do 2d point clustering, but it would end up grouping points that have very different distances from the player.
Simple 3d point clustering would fail too, because points that are close to the player should not be grouped as easily as points far away.
So the ideal solution seems to be to cluster points by angular distance, as well as by the logarithm of their distance from the player.
But here's the last issue: the algorithm needs to either be stable enough to avoid constantly shifting group populations OR take into account the previous frame's groups.
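To make that concrete, here is a rough C# sketch of the combined metric I have in mind (the class name and the logWeight factor are just placeholders); the resulting value could be fed into any standard pairwise clustering algorithm:

    using System;
    using System.Numerics;

    static class HudClustering
    {
        // Pairwise "distance" between two entities as seen from the player:
        // angular separation plus a weighted difference of log-distances,
        // so nearby entities are harder to merge than far-away ones.
        public static float ClusterDistance(Vector3 player, Vector3 a, Vector3 b, float logWeight)
        {
            Vector3 da = Vector3.Normalize(a - player);
            Vector3 db = Vector3.Normalize(b - player);

            // Angle between the two view directions, in radians.
            float dot = Math.Clamp(Vector3.Dot(da, db), -1f, 1f);
            float angle = MathF.Acos(dot);

            // Difference of log-distances from the player.
            float logDiff = MathF.Abs(MathF.Log((a - player).Length()) - MathF.Log((b - player).Length()));

            return angle + logWeight * logDiff;
        }
    }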
Thanks for reading
How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting Antialiasing values in QualitySettings and Filter Mode in ImportSettings in the Unity Editor but that doesn't change anything.
Ideally, I would like to keep the Filter Mode set to Point (no filter) and anti-aliasing turned on at 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
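As a concrete illustration, here is a minimal Unity sketch of one common way to enforce this: size the orthographic camera from the screen height, the sprite's pixels-per-unit import setting, and a whole-number zoom factor. The class name, pixelsPerUnit, and zoom values are placeholders you would match to your own project:

    using UnityEngine;

    public class PixelPerfectCamera : MonoBehaviour   // attach to the camera GameObject
    {
        public float pixelsPerUnit = 16f;   // must match the sprite import setting
        public int zoom = 2;                // each texel is drawn as 2x2 screen pixels

        void Start()
        {
            // Orthographic size is half the vertical view height in world units.
            // Screen.height / (pixelsPerUnit * zoom) world units fit on screen vertically.
            GetComponent<Camera>().orthographicSize = Screen.height / (pixelsPerUnit * zoom * 2f);
        }
    }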
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels to units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprite/Default shader and attaching it to the SpriteRenderer.
Also, I'd just like to point out that unless you are applying physics, you should never use FixedUpdate for movement. Also, if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached even if you're never going to use physics, to tell the engine that the Collider is going to move.
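A minimal sketch of what that looks like (assuming a Rigidbody2D and Collider2D are already on the sprite's GameObject; the class name and speed value are just placeholders):

    using UnityEngine;

    public class SpriteMover : MonoBehaviour
    {
        public float speed = 1.5f;          // world units per second

        void Awake()
        {
            // If the sprite has a collider and moves, mark its body kinematic
            // so the physics engine knows the collider will be moving.
            var body = GetComponent<Rigidbody2D>();
            if (body != null)
                body.bodyType = RigidbodyType2D.Kinematic;
        }

        void Update()
        {
            // Regular, non-physics movement belongs in Update, not FixedUpdate.
            transform.Translate(Vector3.right * speed * Time.deltaTime);
        }
    }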
Same problem here. I noticed that the camera settings and scale are also rather important to fix the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality
Under Quality, set the default quality level to High for all platforms.
Set Anisotropic Textures to "Disabled"
Done, and the issue was resolved for me.
I am working with C#, Monogame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cube with another shader than the filling. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within the cube-object. Here is a small painting to make clear what I want to do (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader and I am also able to render the wireframe, but I was not able to connect both methods. Besides that, the outer lines cannot be rendered differently with this approach.
I tried it with post-effects like the edge finding of comic shaders, but with this approach I am not able to render only specific edges. Besides that, if two cubes are next to each other the shader does not recognize the edges.
I am not searching for a ready-to-use solution from you but I would be glad to get some tips/approaches/tutorials/similar projects/etc on how to achieve my goal. Are there some shader experts out there? I am at my wit's end.
(If you however would like to post a ready-to-use solution I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges) and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, then draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using the depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
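To illustrate the idea, here is a rough C# sketch of the per-pixel test the post-process pass would perform, written CPU-side for clarity. The GBufferSample struct, array layout, and threshold are assumptions; a real implementation would do this in the pixel shader on the MRT textures and would clamp or skip the border pixels:

    using Microsoft.Xna.Framework;

    struct GBufferSample
    {
        public Vector3 Normal;     // world- or view-space normal from the MRT pass
        public int MaterialId;     // which cube material this pixel belongs to
    }

    static class OutlinePass
    {
        // True if the pixel at (x, y) should be drawn as part of the black outline.
        public static bool IsOutlinePixel(GBufferSample[,] gbuffer, int x, int y, float normalThreshold)
        {
            GBufferSample center = gbuffer[x, y];

            // Examine the 3x3 neighborhood around the pixel (assumes x, y are interior).
            for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                if (dx == 0 && dy == 0) continue;
                GBufferSample n = gbuffer[x + dx, y + dy];

                // Large normal discontinuity -> silhouette edge.
                if (Vector3.Dot(center.Normal, n.Normal) < normalThreshold)
                    return true;

                // Material ID change -> edge between differently coloured cubes.
                if (n.MaterialId != center.MaterialId)
                    return true;
            }
            return false;
        }
    }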
I am making an RPG game using an isometric tile engine that I found here:
http://xnaresources.com/default.asp?page=TUTORIALS
However after completing the tutorial I found myself wanting to do some things with the camera that I am not sure how to do.
Firstly I would like to zoom the camera in more so that it is displaying a 1 to 1 pixel ratio.
Secondly, would it be possible to make this game 2.5D in the way that when the camera moves, the sprite trees and the like move properly. By this I mean that the bottom of the sprite stays planted while the top moves against the background, making a very 3D-like experience. This effect can best be seen in games like Diablo 2.
Here is the source code off their website:
http://www.xnaresources.com/downloads/tileengineseries9.zip
Any help would be great, Thanks
Games like Diablo or The Sims 1 and 2, SimCity 1-3, X-Com 1 and 2, etc. were actually just 2D games. The 2.5D effect requires that tiles further away are exactly the same size as tiles nearby. Rotation in these games is restricted to 90-degree steps.
How they draw is basically the painter's algorithm: drawing what is furthest away first and overdrawing it with things that are nearer. Diablo is actually pretty simple; it didn't introduce layers or height differences as far as I remember, just a flat map. So you draw the floor tiles first (in this case back-to-front isn't really necessary since they are all at the same elevation), then draw, back to front, the walls, characters, effects, etc.
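A rough XNA/MonoGame sketch of that drawing order (the IsoDrawable type and the way Depth is derived are assumptions; SpriteBatch's built-in BackToFront sort mode with layerDepth would work just as well):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    class IsoDrawable
    {
        public Texture2D Texture;
        public Vector2 ScreenPosition;
        public float Depth;            // e.g. derived from the tile's map row
    }

    static class IsoRenderer
    {
        public static void DrawScene(SpriteBatch batch, List<IsoDrawable> floor, List<IsoDrawable> objects)
        {
            // Floor first; all tiles share the same elevation, so their order barely matters.
            batch.Begin();
            foreach (var tile in floor)
                batch.Draw(tile.Texture, tile.ScreenPosition, Color.White);
            batch.End();

            // Walls, characters, effects: sort far-to-near and draw back to front.
            objects.Sort((a, b) => a.Depth.CompareTo(b.Depth));
            batch.Begin();
            foreach (var obj in objects)
                batch.Draw(obj.Texture, obj.ScreenPosition, Color.White);
            batch.End();
        }
    }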
Everything in these games was rendered to bitmaps and drawn as bitmaps, even though the source may have been a 3D textured model.
If you want to add perspective or free rotation then you need everything to be a 3D model. Your rendering will be simpler because depth or render order isn't as critical, since you would use z-buffering to solve those issues. The only main issue is to properly render transparent bits in the right order or else you may end up with some odd results. However, even if your rendering is simpler, your animation or in-memory storage is a bit more difficult. You need to animate 3D models instead of just having an array of bitmaps to do the animation. Selection of items on the screen requires a little more work since position and size of the elements are no longer consistent or easily predictable.
So the features you want will dictate which sort of solution you can use. Either way has its pluses and minuses.
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file, and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer amount of vertices is simply too much. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've tried to make cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and do RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is pasted into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs called TechCraft:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces are occluding each other. The biggest thing you can do is to hide faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices.
_ _ _ _
|_|_| == |_ _|
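A rough C# sketch of both ideas - baking a region into one list of vertices and skipping any face shared with a neighbouring solid cube - might look like this (the world array, chunk layout, and AddFace helper are hypothetical; AddFace would append the face's two triangles):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class ChunkMesher
    {
        // world[x, y, z] is assumed true for solid cubes, false for air.
        public static List<VertexPositionTexture> BuildChunkVertices(bool[,,] world)
        {
            var vertices = new List<VertexPositionTexture>();
            int sx = world.GetLength(0), sy = world.GetLength(1), sz = world.GetLength(2);

            for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
            for (int z = 0; z < sz; z++)
            {
                if (!world[x, y, z]) continue;                 // skip air

                // Emit a face only if the neighbour in that direction is air
                // (or outside the chunk). Hidden shared faces are never generated.
                if (x == 0      || !world[x - 1, y, z]) AddFace(vertices, x, y, z, new Vector3(-1, 0, 0));
                if (x == sx - 1 || !world[x + 1, y, z]) AddFace(vertices, x, y, z, new Vector3( 1, 0, 0));
                if (y == 0      || !world[x, y - 1, z]) AddFace(vertices, x, y, z, new Vector3(0, -1, 0));
                if (y == sy - 1 || !world[x, y + 1, z]) AddFace(vertices, x, y, z, new Vector3(0,  1, 0));
                if (z == 0      || !world[x, y, z - 1]) AddFace(vertices, x, y, z, new Vector3(0, 0, -1));
                if (z == sz - 1 || !world[x, y, z + 1]) AddFace(vertices, x, y, z, new Vector3(0, 0,  1));
            }
            return vertices;   // upload once to a VertexBuffer; rebuild only when a cube changes
        }

        // Appends the 6 vertices (2 triangles) of one cube face; details omitted in this sketch.
        static void AddFace(List<VertexPositionTexture> vertices, int x, int y, int z, Vector3 normal)
        {
            /* build the quad for this face and add its vertices here */
        }
    }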
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open-source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all - your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes - anything behind them is just not visible. A natural approach for this would be to use an octree data structure, which divides 3D space into voxels (cubes). Using an octree you could quickly determine which cubes are visible, and just draw those cubes - so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes will not change much, so you may be able to use this "temporal coherence" to update your data structures to minimise the work that needs to be done to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing; it allows you to pass one model into the shader along with a stream of world positions to "instance" that model hundreds (even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
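For orientation, here is a minimal XNA 4 sketch of the drawing side of hardware instancing, loosely following the MSDN sample linked above. The effect is assumed to be an instancing shader that reads the per-instance world matrix from texture coordinate channels 1-4, and in real code you would create the instance buffer once and only refill it each frame:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class CubeInstancing
    {
        // One entry per instance: its world transform packed into four Vector4 texcoord channels.
        static readonly VertexDeclaration InstanceVertexDeclaration = new VertexDeclaration(
            new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
            new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
            new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
            new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));

        public static void DrawInstances(GraphicsDevice device, Effect instancingEffect,
                                         VertexBuffer cubeVertices, IndexBuffer cubeIndices,
                                         Matrix[] instanceTransforms)
        {
            // Copy the per-instance world matrices into a dynamic vertex buffer.
            var instanceBuffer = new DynamicVertexBuffer(device, InstanceVertexDeclaration,
                                                         instanceTransforms.Length, BufferUsage.WriteOnly);
            instanceBuffer.SetData(instanceTransforms, 0, instanceTransforms.Length, SetDataOptions.Discard);

            // Bind the cube geometry (stream 0) and the instance data (stream 1, stepped once per instance).
            device.SetVertexBuffers(
                new VertexBufferBinding(cubeVertices, 0, 0),
                new VertexBufferBinding(instanceBuffer, 0, 1));
            device.Indices = cubeIndices;

            foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                               cubeVertices.VertexCount,
                                               0, cubeIndices.IndexCount / 3,
                                               instanceTransforms.Length);
            }
        }
    }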
Once you have a base model (could be one face of the block), its mesh will have an associated texture that you can then replace with whatever you want to allow you to dynamically change block texturing for each side and differing types of blocks.