I'm working on a project that has a layer system represented by several planes placed in front of each other. These planes receive different textures, which are projected into a render texture with an orthographic camera to generate composite textures.
This project is being built on top of another system (a game), so I have some restrictions and requirements to make my project work as expected and fit properly into this game. One of the requirements concerns the decals, which have their position and scale represented by a single Vector4 coordinate. I believe this Vector4 represents 4 vertex positions on the X and Y axes (2 for X and 2 for Y). For a better understanding, see the image below:
It happens that these Vector4 coordinates seem to be related to the UV space of the texture they belong to, because they only have positive values between 0 and 1. I'm having a hard time trying to fit these coordinates into my project, because Unity's position system uses the traditional Cartesian plane with positive and negative values rather than normalized UV coordinates. So if I use the game's original Vector4 coordinates, the decals get wrongly positioned, and vice versa (I'm using the original coordinates as a base, but my system is meant to generate content to be used within the game, so these decal coordinates must match the game's standards).
Considering all this, how could I convert the local/global position used by Unity to the UV position used by the game?
Anyway, I tried my best to explain my question; I'm not sure whether it has an easy solution or not. I figured out this Vector4 behavior only from observation, so feel free to suggest other ideas if you think I'm wrong about it.
EDIT #1 - Despite the paragraphs above, I'm afraid my intentions may not be clear, so to complement them: the whole point is to find a way to position the decals using the Vector4 coordinates so that they end up in the right places. The layer system contains bigger planes, which have the full size of the texture, plus smaller ones representing the decals, varying in size. I believe the easiest solution would be to use one of these bigger planes as the "UV area", which would have the normalized positions mentioned. But I don't know how I would do that...
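Complementing the edit above, here is a minimal sketch of the kind of mapping I have in mind, assuming the big plane is axis-aligned from the orthographic camera's point of view, its world-space corners are known, and the Vector4 packs the values as (minU, maxU, minV, maxV); that ordering is only my guess from observation:

using UnityEngine;

public static class DecalCoords
{
    // Maps a decal's world-space rectangle into the 0..1 UV space of the big
    // plane. planeMin/planeMax are the plane's world-space corners.
    public static Vector4 WorldRectToUV(Vector3 decalMin, Vector3 decalMax,
                                        Vector3 planeMin, Vector3 planeMax)
    {
        float u0 = Mathf.InverseLerp(planeMin.x, planeMax.x, decalMin.x);
        float u1 = Mathf.InverseLerp(planeMin.x, planeMax.x, decalMax.x);
        float v0 = Mathf.InverseLerp(planeMin.y, planeMax.y, decalMin.y);
        float v1 = Mathf.InverseLerp(planeMin.y, planeMax.y, decalMax.y);
        return new Vector4(u0, u1, v0, v1);
    }
}

Going the other way (UV back to a world position) would just be Mathf.Lerp with the same endpoints.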
I don't really like to post questions about problems without doing the research, but I'm close to giving up, so I thought I'd give it a shot and ask you about my problem.
I want to create custom collision detection in Unity (so please don't advise "use rigidbodies and/or colliders"; I'm avoiding them on purpose).
The main idea: I want to detect basic sphere vs. basic box collisions. For the AABB vs. sphere case, I already found the following solution:
bool Intersect(Vector3 sphereCenter, float sphereRadius, Bounds box)
{
    // Clamp the sphere's center to the box: this gives the closest point on the box.
    float x = Mathf.Max(box.min.x, Mathf.Min(sphereCenter.x, box.max.x));
    float y = Mathf.Max(box.min.y, Mathf.Min(sphereCenter.y, box.max.y));
    float z = Mathf.Max(box.min.z, Mathf.Min(sphereCenter.z, box.max.z));

    // The sphere and box intersect if that point lies within the radius.
    float distance = Mathf.Sqrt((x - sphereCenter.x) * (x - sphereCenter.x) +
                                (y - sphereCenter.y) * (y - sphereCenter.y) +
                                (z - sphereCenter.z) * (z - sphereCenter.z));
    return distance < sphereRadius;
}
And this code does the job: the box bounds and the sphere's center point plus radius work fine, and I can detect the sphere's collision with the box.
The problem is that I want to rotate the cube at runtime, and that screws up everything: the bounding box no longer matches the cube, and the collision is gone (or registers in random places). I've read comments saying that bounding boxes don't work with rotation, but I'm not sure what else I can use to solve this problem.
Can you help me with this topic, please? I'll take any advice I can get (except colliders & rigidbodies, of course).
Thank you very much.
You might try using the separating axis theorem. Essentially, for a polyhedron, you use the normal of each face to create an axis. Project the two shapes you are comparing onto each axis and look for overlap: if the projections fail to overlap on even one axis, the shapes do not intersect. For a sphere, you will just need to project onto the polyhedron's axes. There is a great 2D intro to this from metanet.
Edit: hey, check it out-- a Unity implementation.
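To illustrate the per-axis test, here is a rough sketch with Unity types (the helper name and the box-corner representation are my own, not from the linked implementation): project the box's corners onto the axis, project the sphere as its center plus/minus its radius, and check the two intervals for overlap.

using UnityEngine;

static class Sat
{
    // True if the projections of a box (given by its 8 corners) and a sphere
    // overlap on 'axis' (assumed normalized).
    public static bool OverlapsOnAxis(Vector3 axis, Vector3[] boxCorners,
                                      Vector3 sphereCenter, float sphereRadius)
    {
        float min = float.MaxValue, max = float.MinValue;
        foreach (Vector3 corner in boxCorners)
        {
            float p = Vector3.Dot(corner, axis); // scalar projection of this corner
            if (p < min) min = p;
            if (p > max) max = p;
        }
        float c = Vector3.Dot(sphereCenter, axis); // the sphere projects to [c - r, c + r]
        return c + sphereRadius >= min && c - sphereRadius <= max;
    }
}

If this returns false for any tested axis, you have found a separating axis and can stop early: the shapes do not intersect.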
A good method to find if an AABB (axis aligned bounding box) and sphere are intersecting is to find the closest point on the box to the sphere's center and determine if that point is within the sphere's radius. If so, then they are intersecting, if not then not.
I believe you can do the same thing with this more complicated scenario. You can represent a rotated AABB with a geometrical shape called a parallelepiped. You would then find the closest point on the parallelepiped to the center of the sphere and again check if that point exists within the sphere's radius. If so, then they intersect. If not, then not.
The difficult part is finding the closest point on the parallelepiped. You can represent a parallelepiped in code with 4 3d vectors: center, extentRight, extentUp, and extentForward. This is similar to how you can represent an AABB with a 3d vector for center along with 3 floats: extentRight, extentUp, and extentForward. The difference is that for the parallelepiped those 3 extents are not 1 dimensional scalars, but are full vectors.
When finding the closest point on an AABB surface to a given point, you are basically taking that given point and clamping it to the AABB's volume. You would, for example, call Math.Clamp(point.x, AABB.Min.x, AABB.Max.x) and so on for Y and Z.
The resulting X,Y,Z would be the closest point on the AABB surface to the given point.
To do this for a parallelepiped you need to solve the "linear combination" (math keyword) of extentRight (ER), extentUp (EU), and extentForward (EF) that produces the given point. In other words, what scalars do you have to multiply ER, EU, and EF by to get to the given point? When you find those scalars, you need to clamp them between 0 and 1 and then multiply them again by the full edge vectors (twice ER, EU, and EF, since those extents run from the center to the faces) to get the closest point on the surface of the parallelepiped. Be sure to offset the given point by the parallelepiped's min position so that the whole calculation is done in its local space.
I didn't want to spend any extra time learning how to solve for a linear combination (it seems it involves things like using an "augmented matrix" and "Gaussian elimination"), otherwise I'd include that here too. This should hopefully get you, or anyone else reading this, off to the right track.
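For what it's worth, here is a rough sketch of that closest-point idea using a matrix inverse in place of explicit Gaussian elimination (the parallelepiped layout follows the description above; treat this as an untested outline, not a finished implementation):

using UnityEngine;

static class ParallelepipedMath
{
    // Closest point on a parallelepiped (center plus three extent vectors,
    // each running from the center to a face) to a given point.
    public static Vector3 ClosestPoint(Vector3 point, Vector3 center,
                                       Vector3 er, Vector3 eu, Vector3 ef)
    {
        Vector3 min = center - er - eu - ef; // the "min" corner

        // Columns are the full edge vectors; solving m * s = point - min
        // yields the linear-combination scalars described above.
        Matrix4x4 m = new Matrix4x4(2f * er, 2f * eu, 2f * ef,
                                    new Vector4(0f, 0f, 0f, 1f));
        Vector3 s = m.inverse.MultiplyVector(point - min);

        // Clamp the scalars to the volume, then map back out of local space.
        s = new Vector3(Mathf.Clamp01(s.x), Mathf.Clamp01(s.y), Mathf.Clamp01(s.z));
        return min + m.MultiplyVector(s);
    }
}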
Edit:
Actually, I think it's a lot simpler and you don't need a parallelepiped. If you have access to the rotation (Vector3 or Quaternion) that rotated the cube, you can take its inverse and use that inverse rotation to orbit the sphere around the cube, so that the new scenario is just the normal axis-aligned cube and the orbited sphere. Then you can do a normal AABB - sphere collision detection.
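A quick sketch of that idea with Unity types (assuming you track the cube's center, rotation, and half-extents yourself, since colliders are off the table):

using UnityEngine;

static class RotatedBoxSphere
{
    // Rotate the sphere's center into the cube's local frame, where the cube
    // is axis-aligned again, then run the ordinary AABB vs. sphere test.
    public static bool Intersects(Vector3 cubeCenter, Quaternion cubeRotation,
                                  Vector3 halfExtents, Vector3 sphereCenter, float radius)
    {
        Vector3 local = Quaternion.Inverse(cubeRotation) * (sphereCenter - cubeCenter);

        Vector3 closest = new Vector3(
            Mathf.Clamp(local.x, -halfExtents.x, halfExtents.x),
            Mathf.Clamp(local.y, -halfExtents.y, halfExtents.y),
            Mathf.Clamp(local.z, -halfExtents.z, halfExtents.z));

        return (closest - local).sqrMagnitude < radius * radius;
    }
}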
I'm asking a question that has been asked a million times before, but I still haven't found a good answer after going through these and also resorting to other sites:
How to rotate a graphic over global axes, and not to local axes?
rotating objects in opengl
How to rotate vertices exactly like with glRotatef() in OpenGL?
Rotate object about 3 axes in OpenGL
Rotate an object on its own axes in OpenGL
Is it possible to rotate an object around its own axis and not around the base coordinate's axis?
OpenGL Rotation - Local vs Global Axes
The task is relatively simple: I am making a sensor module with an accelerometer and gyro onboard, and I need to display its rotation.
I have a 3D model of the object in STL format, I've made an STL parser and I can visualise the object perfectly well.
When I try to rotate it I get all kinds of strange results.
I am using SharpGL (the Scene component).
I tried to write an "effect" that (for now) is controlled with 3 separate trackbars, but eventually this will be driven by the sensor data.
I wanted to do it as an effect because (in theory) I could have more objects in the scene and I only need to rotate a specific one (and also for educational purposes as I am relatively new to OpenGL).
I tried to rotate the object using quaternions, multiplying them and generating a rotation matrix, then applying gl.MultMatrix(transformMatrix.AsColumnMajorArray).
I also tried gl.Rotate(angleX, 1, 0, 0) (and similar for Y and Z axes).
Depending on the order of quaternion multiplication or gl.Rotate(...) statements, I would get the object to rotate about the local axis (first), something weird (second), and the "world" axis (third). For example, if I did it with quaternions qX*qY*qZ and got the rotation matrix from that, I would get (I think) rotations about a local X, an "arbitrary" Y, and the world Z axis.
The part I cannot understand is that when I apply gl.Rotate(dx, 1, 0, 0) (and others for Y and Z rotations) inside the trackbar's Scroll event, the entire scene rotates about the X, Y, and Z axes and in that order, and exactly as I expected the rotation to happen.
My only issue with this approach is that the entire scene rotates. So if I (hypothetically) wanted to draw a room around the object, the whole room would rotate too, which is not what I want.
I will post any code requested, but I feel there is too much and the aforementioned links have similar code to what I have.
At the end of the day, I want my object (Polygon in SharpGL terms) to rotate about its own axes (or about the "world" axes, but be consistent).
EDIT:
I've placed the project in my dropbox: https://goo.gl/1LzNhn
If someone wants to look at it, any help will be greatly appreciated.
In the MainForm, the TbRotXYZScroll function has trackbar code and the Rotation class is where the problem resides (or so I think).
Rotate X, Y, and Z by, say, 45 degrees, in that order. All seems to work great. Now rotate X, Y, and Z further by another 30 or so degrees, and try to visualise which axis each rotation happens about as you rotate...
X seems to be always world and Z seems to be always local, while Y is something in between (changing the order of rotation changes this).
Going into the TbRotXYZScroll function and changing method to 2 does what I want (except that it rotates the entire scene). I tried quaternions and they produce the exact same result, so I must be doing something wrong...
Changing it to 1 produces a similar result, but that applies the rotation to the polygons alone (not to everything being rendered, which includes the local coordinate axes too).
"At the end of the day, I want my object (Polygon in SharpGL terms) to rotate about its own axes (or about the 'world' axes, but be consistent)."
I think this quote from your question already explains the situation. In order to perform a rotation around the object's own axis:
1. Translate/rotate your object so that its axis overlaps one of the base axes (x, y, or z).
2. Perform your rotation.
3. Revert your object back to its original position (simply invert the translations/rotations from step 1).
Note: while doing this, one thing you should consider is that OpenGL applies transformations in reverse order.
Example :
glRotatef(90, 1, 0, 0);   // applied second: rotate 90 degrees about the X axis
glTranslatef(-5, -10, 5); // applied first: move the object
drawMyObject();
In this case, OpenGL will first translate your object and then rotate it, so keep that in mind while writing your transformations. Here is my answer that may give you an idea.
OK, in my XNA project I've added simple shader + model loading code, and everything works. I created a very simple, low-detail model in 3ds Max, then exported it and imported it into XNA in FBX format.
The problem is:
if I move my simple camera some distance away from this model, one of its components starts to flicker. I tried another model and the situation is the same: some of its components start to flicker, and only once I get to a certain distance from the model.
This flickering (or blinking) appears only with textured objects (probably), and looks like this: in each frame, random (or not so random) parts/pixels of the model are replaced by whatever object is behind the model or its component... :(
UPDATE: Now I know the problem is in my model (I checked some other models). I don't understand why, but a Plane object gives that flickering. Then again, maybe the problem is not in the Plane object.
This is only an educated guess but: Your far-plane is too far away, or your near-plane is too close, or both.
A perspective camera gives you a frustum-shaped viewable area between the near and far planes.
Your Z-buffer (depth buffer) covers the range between the near and the far planes. A typical Z-buffer might have 24 bits of precision, giving you 2^24 possible values. The further apart your near and far planes are, the greater the world-space distance each possible value must cover. In other words: your Z-buffer is less accurate.
What you are seeing is known as "Z-fighting". This is where the Z-buffer is not accurate enough to differentiate between the depths of two given pixels, so pixels that should have been rejected as being "behind" what was already rendered get drawn instead.
(Alternately, your model has some coplanar or nearly coplanar triangles, that is, triangles whose surfaces are too close together. Same issue: not enough precision in the Z-buffer to differentiate between the two surfaces.)
You may also wish to enable backface-culling (RasterizerState.CullCounterClockwise), if it is not already enabled. This culls triangles facing away from the camera, removing one possible source of Z-fighting.
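For example, something along these lines when building the projection in XNA (the distances are placeholders; tune them to your scene):

// Push the near plane out and pull the far plane in so the depth buffer's
// precision is spent where the scene actually is.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,                  // field of view
    GraphicsDevice.Viewport.AspectRatio, // aspect ratio
    1.0f,                                // near plane: as far out as you can tolerate
    1000.0f);                            // far plane: as close in as you can tolerate

// Cull triangles facing away from the camera.
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;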
I have seen this happen before on models where two or more surfaces overlap in the same plane (one surface inside the other, but in the same plane), so the system cannot tell which surface is in front of the other and usually ends up with a mash-up of both surfaces.
It looks like you have a smaller rectangular surface intersecting a larger rectangular surface that makes up the lower base shape of your model. Probably from another object inside the box? Or from an object-subtraction error that left two rectangles inside each other on that surface, maybe?
Either way, modify the model so that no two surfaces lie within each other.
I have a mesh defined by 4 points in 3D space. I need an algorithm which will subdivide that mesh into subdivisions of an arbitrary horizontal and vertical size. If the subdivision size isn't an exact divisor of the mesh size, the edge pieces will be smaller.
All of the subdivision algorithms I've found only subdivide meshes into exact powers of 2. Does anyone know of one that can do what I want?
Failing that, my thought about a possible implementation is to rotate the mesh so that it lies flat on the Z axis, subdivide it in 2D, and then transform it back into 3D. That's because my mind finds 3D hard ;) Any better suggestions?
Using C# if that makes any difference.
If you only have to work with a rectangle in 3D, then you simply need to obtain the two edge vectors and then you can generate all the interior points of the subdivided rectangle. For example, say your quad is defined by (x0,y0),...,(x3,y3), in order going around the quad. The edge vectors relative to point (x0,y0) are u = (x1-x0,y1-y0) and v = (x3-x0,y3-y0).
Now you can generate all the interior points. Suppose you want M points along the first edge and N along the second; the interior points are then just
(x0,y0) + i/(M-1) * u + j/(N-1) * v
where i and j go from 0 .. M-1 and 0 .. N-1, respectively. You can figure out which vertices need to be connected together by just working it out on paper.
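A short sketch of that formula in C# (using System.Numerics vectors; the corners are assumed to be ordered around the quad so that p1 and p3 are the neighbours of p0, and M, N >= 2):

using System.Numerics;

static class QuadSubdivider
{
    // Generates the M x N grid of points p0 + i/(M-1)*u + j/(N-1)*v.
    public static Vector3[] Subdivide(Vector3 p0, Vector3 p1, Vector3 p3, int m, int n)
    {
        Vector3 u = p1 - p0; // first edge vector
        Vector3 v = p3 - p0; // second edge vector
        var points = new Vector3[m * n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                points[i * n + j] = p0 + u * (i / (float)(m - 1))
                                       + v * (j / (float)(n - 1));
        return points;
    }
}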
This kind of uniform subdivision works fine for triangular meshes as well, but each edge must have the same number of subdivided edges.
If you want to subdivide a general mesh, you can just do this to each individual triangle/quad. This kind of uniform subdivision results in poor-quality meshes, since all the original flat facets remain flat. If you want something more sophisticated, you can look at Loop subdivision, Catmull-Clark, etc. Those are typically constrained to power-of-two levels, but if you research the original formulations, I think you can derive subdivision stencils for non-power-of-two divisions. The theory behind that is a bit more involved than I can reasonably describe here.
Now that you've explained things a bit more clearly, I don't see your problem: you have a rectangle and you want to divide it up into rectangular tiles. So the mesh points you want are regularly spaced in both orthogonal directions. In 2D this is trivial, surely? In 3D it's also trivial, though the maths is a little trickier.
Off the top of my head I would guess that transforming from 3D to 2D (and aligning the rectangle with the coordinate axes at the same time) then calculating the mesh points, then transforming back to 3D is probably about as simple (and CPU-time consuming) as working it all out in 3D in the first place.
Yes, using C# means that I'm not able to propose code to help you. Comment on or edit your question if I've missed the point.
I would like to write a C# program that generates a 2D image from rendered 3D object(s) by "slicing" the 3D object with a cut-plane. The desired output is 2D data that can be displayed by a CAD program. For example:
A 3D image is defined by its vertices; these vertices are contained within Point3DList(). A method is then called taking Point3DList as its parameter, e.g. Cut2D(Point3DList). The method then generates the 2D vertices and saves them inside Point2DList(), and these vertices can be read by a CAD program, which displays them in 2D form.
My question therefore is whether there is a previous implementation of this in C#(.NET compatible) or is there any suggestion on third-party components/algorithms to solve this problem.
Thanks in advance.
You pose an interesting question, in part by not including a full definition of a 3D shape. You need to specify either the vertices and edges, or an algorithm to obtain the edges from the vertex list. Since an algorithm to obtain the edges from the vertex list devolves into specifying the vertices and edges, I will only cover that case here. My description also works best when the vertices and edges are transformed into a list of flat polygons. To break a vertex list down into polygons, you have to find cycles in the undirected graph that is created by the vertices and edges. For a triangular polygon with vertices A, B, and C, you will end up with edges AB, BC, and AC.
The easiest algorithm that I can think of is:
1. Transform all points so that your cut plane becomes the plane Z = 0 (rotate, twist, and move as required to line the desired plane up with the XY plane where Z = 0).
2. For each flat polygon:
a. For each edge, check whether the vertices have opposite signs on the Z axis (or one of them is 0). If Z0 * Z1 <= 0, this is the case.
b. Use the definition of a line and solve for the point where Z = 0. This will give you the X,Y of the intersection (see the sketch after this list).
c. You now have a dot, line, or polygon that represents the intersection of the original flat polygon (as transformed in step 1) with the 2D plane.
d. Fill in the polygon formed by the shapes (if desired). If your 2D rendering package will not create a polygon from the list of vertices, you need to start rendering pixels using scanlines.
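A small sketch of steps (a) and (b), assuming System.Numerics vectors and that the transform from step 1 has already been applied:

using System.Numerics;

static class PlaneSlice
{
    // Does the edge (a, b) cross the plane Z = 0, and if so, where?
    public static bool TryIntersectZ0(Vector3 a, Vector3 b, out Vector2 hit)
    {
        if (a.Z == 0f && b.Z == 0f) { hit = new Vector2(a.X, a.Y); return true; } // degenerate: whole edge lies in the plane
        if (a.Z * b.Z > 0f) { hit = default; return false; } // both endpoints on one side
        float t = a.Z / (a.Z - b.Z); // line parameter where Z = 0
        hit = new Vector2(a.X + t * (b.X - a.X), a.Y + t * (b.Y - a.Y));
        return true;
    }
}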
Each of the individual algorithms should be in "Algorithms in C" or similar.
Graphics programs can be quite rewarding when they start to work.
Have Fun,
Jacob
This is more OpenGL-specific than C#-specific, but here's what I'd do:
Rotate and transform by a 3D matrix so that the "slice" you want is 1 metre in "front" of the camera.
Then set the near and far horizon limits to 1m and 1.001m, respectively.
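Roughly, with SharpGL-style bindings (gl.Perspective wraps gluPerspective; the exact values are illustrative):

gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.LoadIdentity();
// Keep only the 1 mm thick slab between 1 m and 1.001 m in front of the camera.
gl.Perspective(60.0, aspect, 1.0, 1.001);
gl.MatrixMode(OpenGL.GL_MODELVIEW);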
Update: are you even using OpenGL? If not, you could perform the matrix arithmetic yourself somehow.
It sounds like you want to get the 2D representation of the points of intersection of a plane with a three-dimensional surface or object. While I don't know an algorithm to produce such a thing offhand (I have done very little with 3D modeling applications), I think that is what you are asking about.
I encountered such an algorithm a number of years ago in either a Graphics Gems or GPU Gems or similar book. I could not find anything through a few Bing searches, but hopefully this will give you some ideas.
If it's a 3D texture, can't you just specify 3D texture coordinates (into the texture) for each vertex of a quad? Wouldn't that auto-interpolate the texels?
If you are looking for a 3rd-party implementation, maybe you should explore Coin3D. It is capable of such things as you require, though I am not sure of its exact database format or input requirements. I find your description lacking in that you do not specify the direction from which you want to project the 3D image onto a 2D plane.