Subdividing 3D mesh into arbitrarily sized pieces - c#

I have a mesh defined by 4 points in 3D space. I need an algorithm which will subdivide that mesh into subdivisions of an arbitrary horizontal and vertical size. If the subdivision size isn't an exact divisor of the mesh size, the edge pieces will be smaller.
All of the subdivision algorithms I've found only subdivide meshes into exact powers of 2. Does anyone know of one that can do what I want?
Failing that, my thought for a possible implementation is to rotate the mesh so that it is flat on the Z axis, subdivide in 2D, and then transform back into 3D. That's because my mind finds 3D hard ;) Any better suggestions?
Using C# if that makes any difference.

If you only have to work with a rectangle in 3D, then you simply need to obtain the two edge vectors and then you can generate all the interior points of the subdivided rectangle. For example, say your quad is defined by (x0,y0),...,(x3,y3), in order going around the quad. The edge vectors relative to point (x0,y0) are u = (x1-x0,y1-y0) and v = (x3-x0,y3-y0).
Now you can generate all the points. Suppose you want M points along the first edge and N along the second; then the points are just
(x0,y0) + i/(M-1) * u + j/(N-1) * v
where i and j go from 0 .. M-1 and 0 .. N-1, respectively. You can figure out which vertices need to be connected together by just working it out on paper.
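A rough C# sketch of that point generation, using System.Numerics.Vector3 so the same arithmetic works directly on the 3D corners (the method and parameter names are just illustrative; p0, p1 and p3 are the quad corners playing the roles of points 0, 1 and 3 above):

    using System.Numerics;

    static Vector3[,] SubdivideQuad(Vector3 p0, Vector3 p1, Vector3 p3, int m, int n)
    {
        // m and n are the number of points along each direction (both must be >= 2).
        Vector3 u = p1 - p0;   // first edge vector
        Vector3 v = p3 - p0;   // second edge vector

        var points = new Vector3[m, n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                points[i, j] = p0 + (i / (float)(m - 1)) * u
                                  + (j / (float)(n - 1)) * v;
        return points;
    }

Each sub-quad of the result is then (i, j), (i+1, j), (i+1, j+1), (i, j+1), which also settles the "which vertices to connect" question for the rectangular case.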
This kind of uniform subdivision works fine for triangular meshes as well, but each edge must have the same number of subdivided edges.
If you want to subdivide a general mesh, you can just do this to each individual triangle/quad. This kind of uniform subdivision results in poor quality meshes since all the original flat facets remain flat. If you want something more sophisticated, you can look at Loop subdivision, Catmull-Clark, etc. Those are typically constrained to power-of-two levels, but if you research the original formulations, I think you can derive subdivision stencils for non-power-of-two divisions. The theory behind that is a bit more involved than I can reasonably describe here.

Now that you've explained things a bit more clearly, I don't see your problem: you have a rectangle and you want to divide it up into rectangular tiles. So the mesh points you want are regularly spaced in both orthogonal directions. In 2D this is trivial, surely? In 3D it's also trivial, though the maths is a little trickier.
Off the top of my head I would guess that transforming from 3D to 2D (and aligning the rectangle with the coordinate axes at the same time) then calculating the mesh points, then transforming back to 3D is probably about as simple (and CPU-time consuming) as working it all out in 3D in the first place.
Yes, using C# means that I'm not able to propose any code to help you.
Comment or edit your question if I've missed the point.

Related

How to convert UV position to local position and vice-versa?

I'm working on a project that has a layer system represented by several planes in front of each other. These planes receive different textures, which are projected into a render texture with an orthographic camera to generate composite textures.
This project is being built on top of another system (a game), so I have some restrictions and requirements to make my project work as expected and fit properly in that game. One of the requirements concerns the decals, which have their position and scale represented by a single Vector4 coordinate. I believe this Vector4 represents 4 vertex positions on the X and Y axes (2 for X and 2 for Y). For a better understanding, see the image below:
These Vector4 coordinates seem to be related to the UV of the texture they belong to, because they only have positive values between 0 and 1. I'm having a hard time fitting this coordinate system together with my project, because Unity's position system uses the traditional Cartesian plane with positive and negative values rather than normalized UV coordinates. So if I use the game's original Vector4 coordinates, the decals get wrongly positioned, and vice versa (I'm using the original coordinates as a base, but my system is meant to generate content to be used within the game, so these decals' coordinates must match the game's standards).
Considering all this, how could I convert the local/global position used by Unity to the UV position used by the game, and vice versa?
Anyway, I've tried my best to explain my question; I'm not sure whether it has an easy solution or not. I figured out this Vector4 behaviour purely from observation, so feel free to suggest other ideas if you think I'm wrong about it.
EDIT #1 - Despite the paragraphs above, I'm afraid my intentions could be clearer, so to complement them: the whole point is to find a way to position the decals using the Vector4 coordinates so that they end up in the right places. The layer system contains bigger planes, which have the full size of the texture, plus smaller ones representing the decals, varying in size. I believe the easiest solution would be to use one of these bigger planes as the "UV area", which would have the normalized positions mentioned. But I don't know how I would do that...
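To make that concrete, this is the kind of linear remap I have in mind, assuming the UV values run from 0 to 1 across one of the bigger planes and that the plane is centred on its local origin (the class name and the width/height parameters below are placeholders, not part of the game's API):

    using UnityEngine;

    public static class UvLocalConversion
    {
        // UV (0..1) -> local position on a plane of the given size, centred on its origin.
        public static Vector2 UvToLocal(Vector2 uv, float width, float height)
        {
            return new Vector2((uv.x - 0.5f) * width,
                               (uv.y - 0.5f) * height);
        }

        // Local position on that plane -> UV (0..1).
        public static Vector2 LocalToUv(Vector2 local, float width, float height)
        {
            return new Vector2(local.x / width + 0.5f,
                               local.y / height + 0.5f);
        }
    }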

Algorithm to compute the remaining polygon after subtraction

I have a big polygon (Pa). Inside the polygon there are a lot of small "holes", as shown:
Here are a few conditions for the holes:
The holes cannot overlap one another
The holes cannot go outside the outer polygon
However, the holes can touch the outer polygon edge
How do I obtain the remaining polygon (or polygon list) in an efficient manner? The easiest (brute force) way is to take Pa and gradually compute the remaining polygon by subtracting out the holes. This idea is feasible, but I suspect that there is a more efficient algorithm.
Edit: I'm not asking how to perform a polygon clipping (or subtraction) algorithm! In fact, that's what I would do by brute force. I'm asking: in addition to the polygon clipping method (take the main polygon and then gradually clip the holes out), is there another, more efficient way?
This is very hard to do in a general manner. You can find source code for a solution here:
General Polygon Clipper (GPC)
Well, if you use the right representation for your polygon you would not need to do anything. Just append the list of edges of the holes to the list of edges of Pa.
The only consideration is that if a hole vertex or edge touches an edge of Pa, you will have to perform some simplification there.
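For what it's worth, a minimal sketch of that representation (the type and property names are just illustrative; PointF is from System.Drawing, but any 2D point type works):

    using System.Collections.Generic;
    using System.Drawing;

    class PolygonWithHoles
    {
        public List<PointF> Outer { get; } = new List<PointF>();             // ring of Pa
        public List<List<PointF>> Holes { get; } = new List<List<PointF>>(); // one ring per hole

        // "Subtracting" a hole is then just bookkeeping: record its boundary ring.
        public void AddHole(IEnumerable<PointF> hole)
        {
            Holes.Add(new List<PointF>(hole));
        }
    }

Conventionally the hole rings are wound in the opposite direction to the outer ring, so fill rules (even-odd or non-zero winding) treat them as holes automatically.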
A different problem is rendering that polygon into a bitmap!
You can do it like this:
Draw the main polygon in one color in a bitmap.
Draw the holes in another color in the same bitmap.
Then extract the polygon by running the marching squares algorithm with the main polygon's color as the threshold.
The output will contain all the points that belong to that polygon.
You can sort the points if you want them as a continuous closed polygon.
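A rough sketch of the first two steps using System.Drawing (the colors, sizes and method name are arbitrary choices; the marching-squares extraction itself is not shown):

    using System.Drawing;

    static Bitmap RasterizePolygonWithHoles(Point[] outer, Point[][] holes, Size size)
    {
        var bmp = new Bitmap(size.Width, size.Height);
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.Black);                    // background
            g.FillPolygon(Brushes.White, outer);     // main polygon in one color
            foreach (var hole in holes)
                g.FillPolygon(Brushes.Black, hole);  // holes in another color
        }
        return bmp;  // run marching squares on this, thresholding on the white color
    }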
I agree with salva, but my post is going to address the drawing part. Basically, you can combine all the edges of the main polygon and the hole polygons together and thereby get a single complex polygon.
The algorithm itself is not very complicated, and it is nicely explained in the Polygon Fill Teaching Tool.

Draw 2D Curve in XNA

Is there any way to generate a Curve class and then draw that curve in 2D on the screen in XNA?
I basically want to randomly generate some terrain using the Curve and then draw it, hoping that I can then use that curve to detect collision with the ground.
It sounds like what you want is the 2D equivalent of a height-map. I'd avoid making a true "curve" and simply approximate one with line segments.
So basically you'll have an array or list of numbers that represent the height of your terrain at a series of evenly spaced (horizontal) points. When you need a height between two points, you simply linearly interpolate between the two.
To generate it - you could set a few points randomly, and then do some form of smooth interpolation to set the rest. (It really depends on what kind of curve you want.)
To render it you could then just use a triangle strip. Each point in your height-map will have two vertices associated with it - one at the bottom of the screen, the other at the height of that point in the height-map.
To do collision detection - the easiest way is to have your objects be a single point (it sounds like you're making an artillery game like Scorched Earth) - simply take the X position of your object, get the Y position of your terrain at that X position, and if the Y position of your object is below the terrain, set it so that it is on the terrain's surface.
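A bare-bones sketch of that height-map idea (the class and field names are just illustrative, and it assumes screen-space coordinates where Y grows downward - flip the comparison if yours don't):

    using Microsoft.Xna.Framework; // Vector2

    class Terrain
    {
        private readonly float[] heights;  // one height per evenly spaced sample
        private readonly float spacing;    // horizontal distance between samples

        public Terrain(float[] heights, float spacing)
        {
            this.heights = heights;
            this.spacing = spacing;
        }

        // Terrain height at an arbitrary x, by linear interpolation between samples.
        public float GetHeight(float x)
        {
            float fi = x / spacing;
            if (fi <= 0f) return heights[0];
            if (fi >= heights.Length - 1) return heights[heights.Length - 1];
            int i = (int)fi;
            float t = fi - i;
            return heights[i] + t * (heights[i + 1] - heights[i]);
        }

        // Point-vs-terrain collision: snap the object back onto the surface if it sank below it.
        public void Collide(ref Vector2 position)
        {
            float ground = GetHeight(position.X);
            if (position.Y > ground)
                position.Y = ground;
        }
    }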
That's the rough guide, anyway :)

How to color a mesh with values at the vertices in WPF 3D?

We've got a sphere which we want to display in 3D and color given a function that depends on spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would go about in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from applying per-vertex coloring?
I've converted the algorithm to use a linear texture which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
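Roughly, the approach looks like this (a sketch rather than my exact code; the colormap ramp and names are illustrative):

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;

    static class VertexColoring
    {
        // Build a 256 x 1 bitmap whose pixels run through the desired colormap.
        public static BitmapSource CreateColorMap()
        {
            const int width = 256;
            var pixels = new byte[width * 4]; // BGRA
            for (int i = 0; i < width; i++)
            {
                pixels[i * 4 + 0] = (byte)(255 - i); // B - simple blue-to-red ramp
                pixels[i * 4 + 1] = 0;               // G
                pixels[i * 4 + 2] = (byte)i;         // R
                pixels[i * 4 + 3] = 255;             // A
            }
            return BitmapSource.Create(width, 1, 96, 96,
                PixelFormats.Bgra32, null, pixels, width * 4);
        }

        // Map each vertex's scalar value (one per mesh position) to a horizontal texture coordinate.
        public static void ApplyValues(MeshGeometry3D mesh, double[] values, double min, double max)
        {
            mesh.TextureCoordinates.Clear();
            foreach (double v in values)
            {
                double t = (v - min) / (max - min);  // normalize to [0, 1]
                mesh.TextureCoordinates.Add(new Point(t, 0.5));
            }
        }
    }

Wrap the bitmap in an ImageBrush inside a DiffuseMaterial; WPF interpolates the texture coordinates across each triangle, which gives the smooth per-vertex-style shading.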

How To Produce A 2D Plane Cut from a 3D Image

I would like to write a C# program that generates a 2D image from a rendered 3D object (or objects) by "slicing" the 3D object with a cut-plane. The desired output should be 2D data that can be displayed in a CAD program. For example:
A 3D image is defined by its vertices; these vertices are contained within Point3DList(). A method is then called taking Point3DList as its parameter, e.g. Cut2D(Point3DList). The method then generates the 2D vertices and saves them inside Point2DList(), and these vertices can be read by a CAD program which displays them in 2D form.
My question, therefore, is whether there is a previous implementation of this in C# (.NET-compatible), or whether there are any suggestions for third-party components/algorithms to solve this problem.
Thanks in advance.
You pose an interesting question, in part, by not including a full definition of a 3D shape. You need to specify either the vertices and edges, or an algorithm to obtain the edges from the vertex list. Since an algorithm to obtain the edges from the vertex list devolves into specifying the vertices and edges, I will only cover that case here. My description also works best when the vertices and edges are transformed into a list of flat polygons. To break a vertex list down into polygons, you have to find cycles in the undirected graph that is created by the vertices and edges. For a triangular polygon with vertices A, B, and C you will end up with edges AB, BC, and AC.
The easiest algorithm that I can think of is:
Transform all points so that your desired cut plane lies where Z = 0 (rotate, twist, and move as required to line the 2D plane up with the XY plane where Z = 0).
For each flat polygon:
a. For each edge, check to see if the vertices have opposite sign on the Z axis (or if one is 0). If Z0 * Z1 <= 0 then this is the case
b. Use the definition of a line and solve for the point where Z=0. This will give you the X,Y of the intersection.
c. You now have a dot, line, or polygon that represents the intersection of the original flat polygon with the 2D plane.
d. Fill in the polygon formed by the shapes (if desired). If your 2D rendering package will not create a polygon from the list of vertices, you need to start rendering pixels using scanlines.
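A rough sketch of sub-steps (a) and (b) in C# (Point3D is from System.Windows.Media.Media3D, but any 3D point type works; the method name is illustrative, and edges lying entirely in the plane are simply skipped here):

    using System.Collections.Generic;
    using System.Windows;                 // Point
    using System.Windows.Media.Media3D;   // Point3D

    static class PlaneCut
    {
        // 2D intersection points of one flat polygon with the plane Z = 0
        // (call this after the transform in step 1).
        public static List<Point> CutPolygon(IList<Point3D> polygon)
        {
            var result = new List<Point>();
            for (int i = 0; i < polygon.Count; i++)
            {
                Point3D a = polygon[i];
                Point3D b = polygon[(i + 1) % polygon.Count]; // wrap around to close the polygon

                // The edge crosses (or touches) the plane when the Z signs differ.
                if (a.Z * b.Z <= 0 && a.Z != b.Z)
                {
                    double t = a.Z / (a.Z - b.Z);             // parameter where Z = 0
                    result.Add(new Point(a.X + t * (b.X - a.X),
                                         a.Y + t * (b.Y - a.Y)));
                }
            }
            return result;
        }
    }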
Each of the individual algorithms should be in "Algorithms in C" or similar.
Graphics programs can be quite rewarding when they start to work.
Have Fun,
Jacob
This is more OpenGL-specific than C#-specific, but here's what I'd do:
Rotate and transform by a 3D matrix so that the 'slice' you want is 1 metre in 'front' of the camera.
Then set the near and far clipping planes to 1 m and 1.001 m, respectively.
Update: are you even using OpenGL? If not, you could perform the matrix arithmetic yourself.
It sounds like you want to get the 2D representation of the points of intersection of a plane with a three-dimensional surface or object. While I don't know the algorithm to produce such a thing offhand (I have done very little with 3D modeling applications), I think that is what you are asking about.
I encountered such an algorithm a number of years ago in either a Graphics Gems or GPU Gems or similar book. I could not find anything through a few Bing searches, but hopefully this will give you some ideas.
If it's a 3D texture, can't you just specify 3D texture coordinates (into the texture) for each vertex of a quad? Wouldn't that auto-interpolate the texels?
If you are looking for a third-party implementation, maybe you should explore Coin3D. It is capable of the kind of thing you require, though I am not sure of its exact database format or input requirements. I find your description lacking in that you do not specify the direction from which you want to project the 3D image onto a 2D plane.
