How can I extract a Region to a mesh or solid using Eyeshot? - C#

I'm trying to create a navigation mesh for Autodesk Navisworks models using Eyeshot.
I convert the vertices and IndexTriangle data into vertex triangles, then create a solid using Solid.FromTriangles():
var solidList = new List<Solid>();
var solid = Solid.FromTriangles(item.vertices, item.triangles);
However, the boolean operators don't work on these solids the way I expected,
so I want to extract a Region in order to use the boolean operators.
How can I extract a Region to a mesh or solid (or to vertex triangles)?

It is very easy to do. You have to make sure your region vertices are ordered, otherwise you might have issues with it down the line, but it's a simple parameter. If the shape isn't hollow, here is an example:
// the vertices have to be in order; direction doesn't matter here
// I simply assume the shape is drawn on X/Y for the purpose of the example
public static Region CreateRegion(List<Point3D> vertices)
{
    // create a curve list representing the boundary
    var curves = new List<ICurve>();
    // for each pair of consecutive vertices, add a line segment to the list
    for (int i = 1; i < vertices.Count; i++)
    {
        curves.Add(new Line(vertices[i - 1], vertices[i]));
    }
    // close the region
    curves.Add(new Line(vertices.Last(), vertices[0]));
    return new Region(new CompositeCurve(curves, true), Plane.XY, true);
}
// this extrudes the region in Z
public static Solid CreateSolidFromRegion(Region region, double extrudedHeight)
{
    // extrude toward Z by the given amount
    return region.ExtrudeAsSolid(new Vector3D(0, 0, 1), extrudedHeight);
}
Here is a simple example of creating a 10 x 10 x 10 cube from vertices (there are much easier ways to make a cube, but for the sake of simplicity I'll make one this way):
// create the 4 vertices of the base square
var vertices = new List<Point3D>()
{
    new Point3D(0, 0, 0),
    new Point3D(10, 0, 0),
    new Point3D(10, 10, 0),
    new Point3D(0, 10, 0)
};
// create the region on the XY plane using the static method
var region = CreateRegion(vertices);
// extrude the region in Z by 10 units
var solid = CreateSolidFromRegion(region, 10d);

Related

Default texturecoordinates on MeshGeometry3D

Is it possible (without looping through all TextureCoordinates) to set them to 0 by default?
I am creating a linear gradient ImageBrush:
var colorBitmap = GetColorsBitmap(gradient.ToList()); // Create Colors from gray to my selected color
ImageBrush ib = new ImageBrush(colorBitmap)
{
    ViewportUnits = BrushMappingMode.Absolute,
    Viewport = new Rect(0, 0, 1, 1) // Matches the pixels in the bitmap.
};
myModel.Material = new DiffuseMaterial(ib);
Then, depending on a condition, I customize some of the TextureCoordinates like this, where colorValue is derived from the distance to an object:
var mesh = (MeshGeometry3D) myModel.Geometry;
//In a loop, depending on the distance I set a colorValue
mesh.TextureCoordinates[count] = new Point(colorValue, 0);
I want every TextureCoordinate to default to Point(0,0), but looping through all 1.2 million of them takes too much time. So is there a way to set up my ImageBrush so that they default to (0,0), or something similar?

Don't set pixels into Texture2D if they are transparent

I'm trying to build a full sprite from 2 parts: one sprite with the head and another one with the body.
I set the 2 textures in the Inspector and create a third one through code, which is the one I actually want.
I get the pixels that I want for the body and set them. No problem there; the problem comes when I add the head, because it's a 128x128 texture and I don't use all of it, so the transparent pixels of the head overwrite the body's pixels.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MSSGameHandler : MonoBehaviour
{
    [SerializeField] private Texture2D baseTexture;
    [SerializeField] private Texture2D headTexture;
    [SerializeField] private Texture2D bodyTexture;
    [SerializeField] private Material guestMaterial;

    private Sprite mysprite;

    private void Awake()
    {
        Texture2D texture = new Texture2D(512, 512, TextureFormat.RGBA32, true);

        Color[] spriteSheetBasePixels = baseTexture.GetPixels(0, 0, 512, 512);
        texture.SetPixels(0, 0, 512, 512, spriteSheetBasePixels);

        Color[] bodyPixels = bodyTexture.GetPixels(0, 0, 128, 128);
        texture.SetPixels(0, 256, 128, 128, bodyPixels);

        Color[] headPixels = headTexture.GetPixels(0, 0, 128, 128);
        texture.SetPixels(0, 294, 128, 128, headPixels);

        texture.Apply();
        guestMaterial.mainTexture = texture;
        //mysprite = Sprite.Create(texture)
    }
}
Well, with SetPixels you overwrite all existing pixels. You should rather loop through them "manually" and check the alpha value:
var finalPixels = new Color[512 * 512];

var spriteSheetBasePixels = baseTexture.GetPixels(0, 0, 512, 512);
for (var i = 0; i < finalPixels.Length; i++)
{
    finalPixels[i] = spriteSheetBasePixels[i];
}

var bodyPixels = bodyTexture.GetPixels(0, 0, 128, 128);
for (var x = 0; x < 128; x++)
{
    for (var y = 256; y < 256 + 128; y++)
    {
        finalPixels[x + y * 512] = bodyPixels[x + (y - 256) * 128];
    }
}

var headPixels = headTexture.GetPixels(0, 0, 128, 128);
for (var x = 0; x < 128; x++)
{
    for (var y = 294; y < 294 + 128; y++)
    {
        var pixel = headPixels[x + (y - 294) * 128];
        // skip fully transparent head pixels so they don't erase the body underneath
        if (Mathf.Approximately(pixel.a, 0)) continue;
        finalPixels[x + y * 512] = pixel;
    }
}

// texture is the Texture2D created in Awake(); remember to call texture.Apply() afterwards
texture.SetPixels(finalPixels);
Body image
Head image
Result (in a RawImage)
Sorry for my painting skills :D
Merging these two textures together adds unnecessary loading time at the start of your application for very little per-frame benefit. And, if you decide to move the relative positions of the head and body (e.g., as part of an idle animation), you have to take a performance hit to re-create the new texture showing the new relative positions.
So, instead, put the head and the body into separate objects then use a Sorting Group component to keep them sorted together.
Example character:
Before sorting group
Apply sorting group to root object (ChombiBoy, in this example):
After sorting group (the character rendered on top has a higher Order in Layer in its Sorting Group):
(images taken from Unity documentation)
Sorting a Sorting Group
Unity uses the concept of sorting layers to allow you to divide sprites into groups for overlay priority. Sorting Groups with a Sorting Layer lower in the order are overlaid by those in a higher Sorting Layer.
Sometimes, two or more objects in the same Sorting Layer can overlap (for example, two player characters in a side scrolling game, as shown in the example below). The Order in Layer property can be used to apply consistent priorities to Sorting Groups in the same layer. As with Sorting Layer, lower numbers are rendered first, and are obscured by Sorting Groups with higher layer numbers, which are rendered later. See the documentation on Tags and Layers for details on editing Sorting Layers.
The descendants of the Sorting Group are sorted against other descendants of closest or next Sorting Group (depending on whether sorting is by distance or Order in Layer). In other words, the Sorting Group creates a local sorting space for its descendants only. This allows each of the Renderers inside the group to be sorted using the Sorting Layer and Order in Layer, but locally to the containing Sorting Group.
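If you prefer to set this up from a script instead of the Inspector, here is a minimal sketch (the class, field names and sorting values are assumptions for this example; SortingGroup lives in the UnityEngine.Rendering namespace):
using UnityEngine;
using UnityEngine.Rendering;

public class CharacterSortingSetup : MonoBehaviour
{
    [SerializeField] private SpriteRenderer bodyRenderer;
    [SerializeField] private SpriteRenderer headRenderer;

    private void Awake()
    {
        // the Sorting Group on the root keeps head and body sorted as one unit
        var group = gameObject.AddComponent<SortingGroup>();
        group.sortingOrder = 0; // raise this to draw the whole character above other characters

        // inside the group, the renderers' Order in Layer decides head vs. body overlap locally
        bodyRenderer.sortingOrder = 0;
        headRenderer.sortingOrder = 1; // head rendered on top of the body
    }
}
This keeps the two sprites as separate renderers, so moving the head relative to the body (e.g. for an idle animation) costs nothing compared to rebuilding a merged texture.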

Clone and resize SeismicCube

I'm quite new to the Ocean Framework. I have an issue with copying a SeismicCube object with a different size: I need to resize the K index of the cube for time/depth resampling. All I know is how to clone a cube with exactly the same properties, something like this:
Template template = source.Template;
clone = collection.CreateSeismicCube(source, template);
where source is the original cube and clone is the result. Is there a way to give clone a different size, particularly in the K index (trace length)? I've explored the overloads of CreateSeismicCube but still can't understand how to fill in the correct parameters. Do you have a solution for this issue? Thanks in advance.
When you create a seismic cube using the overload that clones from another seismic cube you do not have the ability to resize it in any direction (I, J, or K). If you desire a different K dimension for your new cube, then you have to create it providing the long list of arguments that includes the vectors describing its rotation and spacing. You can generate the vectors from the original cube using the samples nearest the origin sample (0,0,0) of the original seismic cube.
Consider that you have the following locations in the cube expressed by their I,J,K indexes. Since the K vector is easy to generate, only needing sample rate, I'll focus on I and J here.
First, get the positions at the origin and at two neighboring traces.
Point3 I0J0 = inputCube.PositionAtIndex( new IndexDouble3( 0, 0, 0 ) );
Point3 I1J0 = inputCube.PositionAtIndex( new IndexDouble3( 1, 0, 0 ) );
Point3 I0J1 = inputCube.PositionAtIndex( new IndexDouble3( 0, 1, 0 ) );
Now build segments in the I and J directions and use them to create the vectors.
Vector3 iVector = new Vector3( new Segment3( I0J0, I1J0 ) );
Vector3 jVector = new Vector3( new Segment3( I0J0, I0J1 ) );
Now create the K vector from the input cube sampling. Note that you have to negate the value.
Vector3 kVector = new Vector3( 0, 0, -inputCube.SampleSpacingIJK.Z );
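To finish the new geometry you still need the origin and the new number of K samples. Here is a small sketch of that arithmetic (newSampleInterval and originalNumSamplesK are assumed values for this example, and the final CreateSeismicCube overload call is omitted because its exact parameter list depends on your Ocean version):
// origin of the new cube: the position of the original cube's first sample
Point3 origin = inputCube.PositionAtIndex( new IndexDouble3( 0, 0, 0 ) );

// keep the same time/depth range, sampled at the new interval
double traceLength = inputCube.SampleSpacingIJK.Z * ( originalNumSamplesK - 1 );
int newNumSamplesK = (int)( traceLength / newSampleInterval ) + 1;

// K vector for the new cube, negated as above
Vector3 newKVector = new Vector3( 0, 0, -newSampleInterval );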

Polygon Collision Testing / Polygon Overlap Test in C# - Not Point in Polygon

I am testing to determine whether two polygons overlap. I have developed a first version which does a simple point-in-polygon test (Fig 1). However, I am looking to revamp that method to deal with situations where no vertices of polygon A are inside polygon B but their line segments overlap (Fig 2).
Any help getting started would be greatly appreciated.
Here is an example using Region:
GraphicsPath grp = new GraphicsPath();
// Create an open figure
grp.AddLine(10, 10, 10, 50); // a of polygon
grp.AddLine(10, 50, 50, 50); // b of polygon
grp.CloseFigure(); // close polygon
// Create a Region regarding to grp
Region reg = new Region(grp);
Now you can use the Region.IsVisible method to determine whether a Rectangle or Point lies within the region.
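For the overlap test itself, a minimal sketch along these lines (the helper name PolygonsOverlap and the PointF[] inputs are assumptions for this example) intersects one polygon's Region with the other polygon's GraphicsPath and checks whether anything remains:
using System.Drawing;
using System.Drawing.Drawing2D;

public static bool PolygonsOverlap(PointF[] polygonA, PointF[] polygonB)
{
    using (var pathA = new GraphicsPath())
    using (var pathB = new GraphicsPath())
    using (var bmp = new Bitmap(1, 1))
    using (var g = Graphics.FromImage(bmp)) // IsEmpty needs a Graphics for its coordinate transform
    {
        pathA.AddPolygon(polygonA);
        pathB.AddPolygon(polygonB);
        using (var region = new Region(pathA))
        {
            region.Intersect(pathB);   // keep only the area common to both polygons
            return !region.IsEmpty(g); // a non-empty intersection means they overlap
        }
    }
}
Note that Region works on rasterized areas, so polygons that merely touch along an edge may still report no overlap.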
The solution:
I modified some code found here.
private Region FindIntersections(List<PolyRegion> regions)
{
    if (regions.Count < 1) return null;
    Region region = new Region();
    for (int i = 0; i < regions.Count; i++)
    {
        using (GraphicsPath path = new GraphicsPath())
        {
            path.AddPath(regions[i].Path, false);
            region.Intersect(path);
        }
    }
    return region;
}
The result:

Rotate model group according to mouse drag direction and location in the model

I added several cubes to a Viewport3D in WPF and now I want to manipulate groups of them with the mouse.
When I click and drag over one and a half of those cubes, I want the whole plane rotated in the direction of the drag; the rotation itself will be handled by RotateTransform3D, so that won't be a problem.
The problem is that I don't know how I should handle the drag, more exactly:
How can I know which faces of the cubes were dragged over in order to determine what plane to rotate?
For example, in the case below, I'd like to know that I need to rotate the right plane of cubes by 90 degrees clockwise, so the row of blue faces ends up at the top instead of the white ones, which move to the back.
And in this example the top layer should be rotated 90 degrees counterclockwise:
Currently my idea is to place some sort of invisible areas over the cube, check in which one the drag is happening using VisualTreeHelper.HitTest, and then determine which plane I should rotate. This area would match the first drag example:
But when I add all four regions then I'm back to square one because I still need to determine the direction and which face to rotate according to which areas were "touched".
I'm open to ideas.
Please note that this cube can be freely moved, so it may not be in the initial position when the user clicks and drags; this is what bothers me the most.
PS:
The drag will be implemented with a combination of MouseLeftButtonDown, MouseMove and MouseLeftButtonUp.
MouseEvents
You'll need to use VisualTreeHelper.HitTest() to pick Visual3D objects (process may be simpler if each face is a separate ModelVisual3D). Here is some help on the HitTesting in general, and here is a very useful tidbit that simplifies the picking process tremendously.
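As a minimal sketch of that picking step (the myViewport3D field and the OnViewportMouseDown handler name are assumptions for this example), a hit test from the mouse position could look like:
private MeshGeometry3D pickedMesh; // mesh under the cursor, if any

private void OnViewportMouseDown(object sender, MouseButtonEventArgs e)
{
    pickedMesh = null;
    Point mousePos = e.GetPosition(myViewport3D);

    VisualTreeHelper.HitTest(
        myViewport3D,
        null, // no filter callback
        result =>
        {
            // only 3D mesh hits are interesting here
            if (result is RayMeshGeometry3DHitTestResult meshHit)
            {
                pickedMesh = meshHit.MeshHit;
                return HitTestResultBehavior.Stop; // the closest hit is enough
            }
            return HitTestResultBehavior.Continue;
        },
        new PointHitTestParameters(mousePos));
}
The same pattern in MouseLeftButtonUp gives you the second mesh for the drag.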
Event Culling
Let's say that you now have two ModelVisual3D objects from your picking tests (one from the MouseDown event, one from the MouseUp event). First, we should detect if they are coplanar (to avoid picks going from one face to another). One way to do this is to compare the face Normals to see if they are pointing the same direction. If you have defined the Normals in your MeshGeometry3D, that's great. If not, then we can still find it. I'd suggest adding a static class for extensions. An example of calculating a normal:
public static class GeometricExtensions3D
{
    public static Vector3D FaceNormal(this MeshGeometry3D geo)
    {
        // get first triangle's positions
        var ptA = geo.Positions[geo.TriangleIndices[0]];
        var ptB = geo.Positions[geo.TriangleIndices[1]];
        var ptC = geo.Positions[geo.TriangleIndices[2]];
        // get specific vectors for right-hand normalization
        var vecAB = ptB - ptA;
        var vecBC = ptC - ptB;
        // normal is cross product
        var normal = Vector3D.CrossProduct(vecAB, vecBC);
        // unit vector for cleanliness
        normal.Normalize();
        return normal;
    }
}
Using this, you can compare the normals of the MeshGeometry3D from your Visual3D hits (lots of casting involved here) and see if they are pointing in the same direction. I would use a tolerance test on the X,Y,Z of the vectors as opposed to a straight equivalence, just for safety's sake. Another extension might be helpful:
public static double SSDifference(this Vector3D vectorA, Vector3D vectorB)
{
    // set vectors to length = 1
    vectorA.Normalize();
    vectorB.Normalize();
    // subtract to get difference vector
    var diff = Vector3D.Subtract(vectorA, vectorB);
    // sum of the squares of the difference (also happens to be the squared length of the difference vector)
    return diff.LengthSquared;
}
If they are not coplanar (SSDifference > some arbitrary test value), you can return here (or give some kind of feedback).
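Putting the two extensions together, the coplanarity gate could look something like this (mouseDownMesh, mouseUpMesh and the tolerance value are assumptions for this sketch):
// normals of the two picked faces
var downNormal = mouseDownMesh.FaceNormal();
var upNormal = mouseUpMesh.FaceNormal();

// treat the picks as coplanar only if the normals point (almost) the same way
const double tolerance = 0.01; // arbitrary; tune to taste
if (downNormal.SSDifference(upNormal) > tolerance)
{
    return; // the drag crossed onto a differently-oriented face; ignore it
}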
Object Selection
Now that we have determined our two faces and that they are, indeed, ripe for our desired event-handling, we must deduce a way to bang out the information from what we have. You should still have the Normals you calculated before. We're going to be using them again to pick the rest of the faces to be rotated. Another extension method can be helpful for the comparison to determine if a face should be included in the rotation:
public static bool SharedColumn(this MeshGeometry3D basis, MeshGeometry3D compareTo, Vector3D normal)
{
    foreach (Point3D basePt in basis.Positions)
    {
        foreach (Point3D compPt in compareTo.Positions)
        {
            var compToBasis = basePt - compPt; // vector from compare point to basis point
            // at least one will point in the same direction as the normal if the faces share a column
            if (normal.SSDifference(compToBasis) < float.Epsilon)
            {
                return true;
            }
        }
    }
    return false;
}
You'll need to cull faces for both of your meshes (MouseDown and MouseUp), iterating over all of the faces. Save the list of Geometries that need to be rotated.
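For example, a sketch of that culling pass (assuming all of the cube's face meshes are available in a collection called allFaceMeshes, which is not part of the original answer):
// collect every face that shares a column with one of the picked faces
var taggedGeometries = new List<MeshGeometry3D>();
foreach (MeshGeometry3D face in allFaceMeshes)
{
    if (face.SharedColumn(mouseDownMesh, downNormal) ||
        face.SharedColumn(mouseUpMesh, upNormal))
    {
        taggedGeometries.Add(face);
    }
}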
RotateTransform
Now the tricky part. An Axis-Angle rotation takes two parameters: a Vector3D representing the axis normal to the rotation (using right-hand rule) and the angle of rotation. But the midpoint of our cube may not be at (0, 0, 0), so rotations can be tricky. Ergo, first we must find the midpoint of the cube! The simplest way I can think of is to add the X, Y, and Z components of every point in the cube and then divide them by the number of points. The trick, of course, will be not to add the same point more than once! How you do that will depend on how your data is organized, but I'll assume it to be a (relatively) trivial exercise. Instead of applying transforms, you'll want to move the points themselves, so instead of creating and adding to a TransformGroup, we're going to build Matrices! A translate matrix looks like:
1, 0, 0, dx
0, 1, 0, dy
0, 0, 1, dz
0, 0, 0, 1
So, given the midpoint of your cube, your translation matrices will be:
var cp = GetCubeCenterPoint(); // user-defined method of retrieving the cube's center point
// WPF's Matrix3D uses row vectors, so the translation components go in the bottom row
var matToCenter = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -cp.X, -cp.Y, -cp.Z, 1);
var matBackToPosition = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    cp.X, cp.Y, cp.Z, 1);
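GetCubeCenterPoint is user-defined; here is a minimal sketch, assuming the same allFaceMeshes collection as before and exactly matching coordinates for shared corners (both assumptions of this example):
private Point3D GetCubeCenterPoint()
{
    // collect each distinct corner only once, to avoid counting shared points twice
    var distinctPoints = new HashSet<Point3D>();
    foreach (MeshGeometry3D face in allFaceMeshes)
    {
        foreach (Point3D pt in face.Positions)
        {
            distinctPoints.Add(pt);
        }
    }

    // average the components of the distinct points
    double x = 0, y = 0, z = 0;
    foreach (Point3D pt in distinctPoints)
    {
        x += pt.X;
        y += pt.Y;
        z += pt.Z;
    }
    int n = distinctPoints.Count;
    return new Point3D(x / n, y / n, z / n);
}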
Which just leaves our rotation. Do you still have reference to the two meshes we picked from the MouseEvents? Good! Let's define another extension:
public static Point3D CenterPoint(this MeshGeometry3D geo)
{
    var midPt = new Point3D(0, 0, 0);
    var n = geo.Positions.Count;
    foreach (Point3D pt in geo.Positions)
    {
        midPt.Offset(pt.X, pt.Y, pt.Z);
    }
    midPt.X /= n; midPt.Y /= n; midPt.Z /= n;
    return midPt;
}
Get the vector from the MouseDown's mesh to the MouseUp's mesh (the order is important).
var swipeVector = MouseUpMesh.CenterPoint() - MouseDownMesh.CenterPoint();
And you still have the normal for our hit faces, right? We can (basically magically) get the rotation axis by:
var rotationAxis = Vector3D.CrossProduct(swipeVector, faceNormal);
Which will make your rotation angle always +90°. Make the RotationMatrix (source):
// normalize the rotation axis and use its components to build the matrix
rotationAxis.Normalize();
var cosT = Math.Cos(Math.PI / 2);
var sinT = Math.Sin(Math.PI / 2);
var x = rotationAxis.X;
var y = rotationAxis.Y;
var z = rotationAxis.Z;
// build matrix, remember the row-vector (column-major) layout
var matRotate = new Matrix3D(
    cosT + x*x*(1 - cosT),    y*x*(1 - cosT) + z*sinT,  z*x*(1 - cosT) - y*sinT,  0,
    x*y*(1 - cosT) - z*sinT,  cosT + y*y*(1 - cosT),    z*y*(1 - cosT) + x*sinT,  0,
    x*z*(1 - cosT) + y*sinT,  y*z*(1 - cosT) - x*sinT,  cosT + z*z*(1 - cosT),    0,
    0, 0, 0, 1);
Combine them to get the Transformation matrix, note that the order is important. We want to take the point, transform it to coordinates relative to the origin, rotate it, then transform it back to original coordinates, in that order. So:
var matTrans = Matrix3D.Multiply(Matrix3D.Multiply(matToCenter, matRotate), matBackToPosition);
Then, you're ready to move the points. Iterate through each Point3D in each MeshGeometry3D that you previously tagged for rotation, and do:
foreach (MeshGeometry3D geo in taggedGeometries)
{
for (int i = 0; i < geo.Positions.Count; i++)
{
geo.Positions[i] *= matTrans;
}
}
And then... oh wait, we're done!
