create custom polygon wall using xbim library - c#

I'm trying to create an IFC wall from a set of polygon points and save it to an IFC file.
I found an approach and tried it, but it does not work:
https://github.com/xBimTeam/XbimGeometry/issues/117
IFC - Representation of triangle mesh
Here is my code:
private static void CreateCustomPolygonWall(IfcStore model)
{
    using (var txn = model.BeginTransaction("Create Custom Polygon"))
    {
        var points = new List<double[]>
        {
            new double[] { 0, 0, 0 },
            new double[] { 100, 0, 0 },
            new double[] { 100, 100, 0 }
        };

        var list = new List<IfcCartesianPoint>();
        foreach (var coordinates in points.Select(p => p.Select(x => new IfcLengthMeasure(x))))
        {
            var point = model.Instances.New<IfcCartesianPoint>();
            point.Coordinates.AddRange(coordinates);
            list.Add(point);
        }

        var faceSet = model.Instances.New<Xbim.Ifc4.TopologyResource.IfcConnectedFaceSet>();
        var indexes = new List<int[]> { new int[] { 0, 1, 2 } };
        foreach (var t in indexes)
        {
            var polyLoop = model.Instances.New<Xbim.Ifc4.TopologyResource.IfcPolyLoop>();
            polyLoop.Polygon.AddRange(t.Select(k => list[k]));

            var bound = model.Instances.New<Xbim.Ifc4.TopologyResource.IfcFaceBound>();
            bound.Bound = polyLoop;

            var face = model.Instances.New<Xbim.Ifc4.TopologyResource.IfcFace>();
            face.Bounds.Add(bound);
            faceSet.CfsFaces.Add(face);
        }

        var surface = model.Instances.New<IfcFaceBasedSurfaceModel>();
        surface.FbsmFaces.Add(faceSet);

        txn.Commit();
    }
}
If I save the model to an IFC file with this code, the file contains the polygon points I described, but no IFC viewer shows any geometry:
#23=IFCCARTESIANPOINT((0.,0.,0.));
#24=IFCCARTESIANPOINT((100.,0.,0.));
#25=IFCCARTESIANPOINT((100.,100.,0.));
So how can I create a polygon wall and save it to an IFC file using the xbim library?
Any hint?
Best regards.

You need to create more than just the geometry to produce an IFC file that other viewers will process and display. Here is a working example of 3D wall creation. If you want to define the wall with an arbitrary profile, replace the IfcRectangleProfileDef in that example with another profile definition, most likely an IfcArbitraryClosedProfileDef with its OuterCurve set to an IfcPolyline.
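In xbim terms, that replacement could look roughly like the following. This is an untested sketch, not a verified recipe: it assumes the Xbim.Ifc4 entity types already used in the question, and the footprint and height values are invented for illustration.

```csharp
// Hedged sketch: an arbitrary closed profile swept into a solid, as suggested
// above. The wall footprint and extrusion depth are made-up example values.
var profile = model.Instances.New<IfcArbitraryClosedProfileDef>(p =>
{
    p.ProfileType = IfcProfileTypeEnum.AREA;
    p.OuterCurve = model.Instances.New<IfcPolyline>(pl =>
    {
        // footprint of the wall in the XY plane; first point repeated to close the loop
        foreach (var (x, y) in new[] { (0.0, 0.0), (4000.0, 0.0), (4000.0, 300.0), (0.0, 300.0), (0.0, 0.0) })
        {
            var pt = model.Instances.New<IfcCartesianPoint>();
            pt.SetXY(x, y);
            pl.Points.Add(pt);
        }
    });
});

// extrude the profile upwards to give the wall its height
var body = model.Instances.New<IfcExtrudedAreaSolid>(s =>
{
    s.SweptArea = profile;
    s.Depth = 2400;
    s.ExtrudedDirection = model.Instances.New<IfcDirection>(d => d.SetXYZ(0, 0, 1));
});
```

The resulting solid would still need to be wrapped in a shape representation and attached to the IfcWall, exactly as the linked wall-creation example does for its rectangle profile.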

Related

Emgu.CV Fisheye.Calibrate method

I'm currently trying to use the Fisheye.Calibrate and Fisheye.UndistorImage methods from the Emgu.CV library. As far as I've understood, the Calibrate method is used to calculate a camera matrix (K) and a distortion vector (D), which are then used to undistort fisheye images with the UndistorImage method. However, when I use these two methods the results are not convincing. This is the input image I'm testing on: fisheye input image, and this is the result: fisheye output image.
When I tried to inspect K and D via the data variable of the objects, it said 'null' for both. Therefore I'm unsure whether I'm using the Calibrate() method correctly. My code is as follows:
private void EmguCVUndistortFisheye()
{
    string[] fileNames = Directory.GetFiles(@"C:\Users\Test\Desktop\Jakob\ImageAnalysis\Images\Calibration", "*.png");
    Size patternSize = new Size(6, 8);
    VectorOfVectorOfPoint3D32F objPoints = new VectorOfVectorOfPoint3D32F();
    VectorOfVectorOfPointF imagePoints = new VectorOfVectorOfPointF();

    foreach (string file in fileNames)
    {
        Mat img = CvInvoke.Imread(file, ImreadModes.Grayscale);
        CvInvoke.Imshow("input", img);

        VectorOfPointF corners = new VectorOfPointF(patternSize.Width * patternSize.Height);
        bool find = CvInvoke.FindChessboardCorners(img, patternSize, corners);
        if (find)
        {
            MCvPoint3D32f[] points = new MCvPoint3D32f[patternSize.Width * patternSize.Height];
            int loopIndex = 0;
            for (int i = 0; i < patternSize.Height; i++)
            {
                for (int j = 0; j < patternSize.Width; j++)
                    points[loopIndex++] = new MCvPoint3D32f(j, i, 0);
            }
            objPoints.Push(new VectorOfPoint3D32F(points));
            imagePoints.Push(corners);
        }
    }

    Size imageSize = new Size(1280, 1024);
    Mat K = new Mat();
    Mat D = new Mat();
    Mat rotation = new Mat();
    Mat translation = new Mat();
    Fisheye.Calibrate(
        objPoints,
        imagePoints,
        imageSize,
        K,
        D,
        rotation,
        translation,
        Fisheye.CalibrationFlag.CheckCond,
        new MCvTermCriteria(30, 0.1)
    );

    foreach (string file in fileNames)
    {
        Mat img = CvInvoke.Imread(file, ImreadModes.Grayscale);
        Mat output = img.Clone();
        Fisheye.UndistorImage(img, output, K, D);
        CvInvoke.Imshow("output", output);
    }
}
Are my strange results a consequence of passing wrong parameters to the Calibrate method, or is it simply a case of not using enough input images?
This looks like a similar problem to one I had recently: passing a Mat into the calibration function when it needed a Matrix. As you've found, it just doesn't work, and no errors are reported. I think you'll need the following:
var K = new Matrix<double>(3, 3);
var D = new Matrix<double>(4, 1);
Also note that if you want to retrieve the rotation and translation vectors, passing a Mat in is fine, but you'll probably want to convert back to a Matrix to perform calculations on them. I was using a normal camera calibration rather than fisheye, but the following working code fragment might be useful to get the idea:
var cameraMatrix = new Matrix<double>(3, 3);
var distortionCoeffs = new Matrix<double>(4, 1);
var termCriteria = new MCvTermCriteria(30, 0.1);
System.Drawing.PointF[][] imagePoints = imagePointsList.Select(p => p.ToArray()).ToArray();
MCvPoint3D32f[][] worldPoints = worldPointsList.Select(p => p.ToArray()).ToArray();
double error = CvInvoke.CalibrateCamera(worldPoints, imagePoints, imageSize, cameraMatrix, distortionCoeffs, CalibType.RationalModel, termCriteria, out Mat[] rotationVectors, out Mat[] translationVectors);
var rotation = new Matrix<double>(rotationVectors[0].Rows, rotationVectors[0].Cols, rotationVectors[0].DataPointer);
var translation = new Matrix<double>(translationVectors[0].Rows, translationVectors[0].Cols, translationVectors[0].DataPointer);

Extracting points coordinates(x,y) from a curve c#

I have a curve that I draw on a PictureBox in C# using the method graphics.DrawCurve(pen, points, tension).
Is there any way I can extract all the points (x, y coordinates) covered by the curve and save them into an array, a list, or anything else, so I can use them for different things?
My code:
void Curved()
{
    Graphics gg = pictureBox1.CreateGraphics();
    Pen pp = new Pen(Color.Green, 1);
    Point[] pointss = new Point[counter];
    for (int i = 0; i < counter; i++)
    {
        pointss[i].X = Convert.ToInt32(arrayx[i]);
        pointss[i].Y = Convert.ToInt32(arrayy[i]);
    }
    gg.DrawCurve(pp, pointss, 1.0F);
}
Many thanks in advance.
If you really want a list of pixel co-ordinates, you can still let GDI+ do the heavy lifting:
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Drawing2D;

namespace so_pointsfromcurve
{
    class Program
    {
        static void Main(string[] args)
        {
            /* some test data */
            var pointss = new Point[]
            {
                new Point(5, 20),
                new Point(17, 63),
                new Point(2, 9)
            };

            /* instead of to the picture box, draw to a path */
            using (var path = new GraphicsPath())
            {
                path.AddCurve(pointss, 1.0F);

                /* use a unit matrix to get points per pixel */
                using (var mx = new Matrix(1, 0, 0, 1, 0, 0))
                {
                    path.Flatten(mx, 0.1f);
                }

                /* store the points in a list */
                var list_of_points = new List<PointF>(path.PathPoints);

                /* show them */
                int i = 0;
                foreach (var point in list_of_points)
                {
                    Debug.WriteLine($"Point #{++i}: X={point.X}, Y={point.Y}");
                }
            }
        }
    }
}
This approach draws the spline to a path, uses the path's built-in capability of flattening itself into a sufficiently dense set of line segments (the way most vector drawing programs do, too), and then extracts the path points from that line mesh into a list of PointFs.
The artefacts of GDI+ device rendering (smoothing, anti-aliasing) are lost in this process.

Dynamic levels brick breaker

I'm trying to create a brick breaker game with dynamically loading levels. I'd like to stay in one scene the entire game, and then dynamically load different levels by changing the brick positions (e.g. one level in where the bricks are in the shape of a circle, another level where the bricks are in the shape of a square, etc.).
I've imagined the screen as a grid in which each cell either has a brick or doesn't, and I place the bricks using for loops. My trouble is dynamically loading the data. Right now I have the grid data hardcoded as arrays. I've half-attempted to load a single JSON file, but didn't succeed.
I'm not sure how to go about this problem. Do I make individual JSON files for each level? Can JSON files even hold jagged arrays? How would I extract the data as an array? Is there a way of doing this with PlayerPrefs?
Any help would be appreciated.
public class BrickGrid : MonoBehaviour {
    string filename = "data.json";
    string jsonString;
    string path;
    public Transform brickPrefab;

    [System.Serializable]
    public class Bricks {
        public string[] rows;
    }

    void Start() {
        LoadGridData();
        InitGrid();
    }

    void LoadGridData() {
        path = Application.streamingAssetsPath + "/" + filename;
        if (File.Exists(path)) {
            jsonString = File.ReadAllText(path);
            BrickPattern rows = JsonUtility.FromJson<BrickPattern>(jsonString);
        }
    }

    void InitGrid() {
        int[] row1 = { 0, 0, 1, 1 };
        int[] row2 = { 1, 1, 0, 1 };
        int[] row3 = { 0, 0, 0, 1 };
        int[][] rows = new int[][] { row1, row2, row3 };
        Vector2 brickPosition = new Vector3(-2.25f, 4f, 0);
        for (int i = 0; i < rows.Length; i++) {
            int[] individualRow = rows[i];
            for (int j = 0; j < individualRow.Length; j++) {
                if (individualRow[j] == 1) {
                    // instantiate
                    Instantiate(brickPrefab, brickPosition, Quaternion.identity);
                }
                // increase x position (also for empty cells, so columns stay aligned)
                brickPosition.x = brickPosition.x + 1.5f;
            }
            // increase y position and reset x position
            brickPosition.x = -2.25f;
            brickPosition.y = brickPosition.y - 1.5f;
        }
    }
}
Yes, you can create a prefab for every level: design each level under one GameObject and create a prefab from that GameObject. When you change the level, instantiate the correct prefab.
I think that would be the easiest way.
You could also try other approaches:
Hardcode the locations of the bricks for each level and use that information when you "dynamically" load a new level.
Store the position values somewhere else (file, database, etc.).
Hope this helps.
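As for the JSON question: JsonUtility cannot deserialize a jagged array like int[][] directly, but it can handle a serializable wrapper class per row. A rough, untested sketch of what the classes and data.json might look like (the class and field names here are made up for illustration, not part of the question's code):

```csharp
// data.json (one file per level is a reasonable layout):
// { "rows": [ { "cells": [0, 0, 1, 1] },
//             { "cells": [1, 1, 0, 1] },
//             { "cells": [0, 0, 0, 1] } ] }

[System.Serializable]
public class BrickRow
{
    public int[] cells;      // one entry per column: 1 = brick, 0 = empty
}

[System.Serializable]
public class BrickPattern
{
    public BrickRow[] rows;  // one entry per row of the grid
}

// In LoadGridData() you would then deserialize with:
// BrickPattern pattern = JsonUtility.FromJson<BrickPattern>(jsonString);
// and loop over pattern.rows[i].cells[j] the same way InitGrid() loops today.
```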

Using Imagecomparer, How can I exclude a middle section of a ToleranceRectangle from being compared?

I am using ImageComparer in my mobile test project and I am able to compare a baseline image to a current screenshot. The problem is that there is a section of the screenshot that is always changing, and I would like to exclude that part from the comparison. Here is my code:
private bool RunVisualCheck(string screen, string resultsPath, string baseline = "baseline.jpeg", string screenshot = "screenshot.jpeg")
{
    GetScreenshot(resultsPath + screenshot);

    var baselineImage = Image.FromFile(resultsPath + baseline);
    var actualImage = Image.FromFile(resultsPath + screenshot);
    Image diffImage;
    int ignoreTop = 64;
    var compareArea = new List<ToleranceRectangle>
    {
        new ToleranceRectangle()
        {
            Rectangle = new Rectangle(0, ignoreTop, baselineImage.Width, baselineImage.Height - ignoreTop),
            Difference = new ColorDifference()
        }
    };

    bool goodCompare = ImageComparer.Compare(actualImage, baselineImage, compareArea, out diffImage);
    if (!goodCompare)
    {
        diffImage.Save(resultsPath + "diffImage.jpeg");
    }
    return goodCompare;
}

private void GetScreenshot(string pathFile)
{
    System.Threading.Thread.Sleep(2000); // Temp fix to wait until the page loads
    var srcFiler = ((ITakesScreenshot)mobileDriver).GetScreenshot();
    srcFiler.SaveAsFile(pathFile, ImageFormat.Jpeg);
}
Here is an example (not the app being tested) of the area, inside the red rectangle, that I would like to exclude from the comparison.
Mobile Screenshot Example
Is there an easy way to do this?
Found a better approach than trying to exclude a section from the comparison. Thanks to a coworker's suggestion, I am blacking out the sections that do not need comparing and then saving the image. Doing this on both the baseline image and the screenshot has the same effect as excluding the section altogether. Here is the code:
Image image;
using (var loaded = Image.FromFile(@"C:\Screenshots\Screenshot.jpeg"))
{
    // copy into a new bitmap so the file handle is released before saving back
    image = new Bitmap(loaded);
}
using (Graphics g = Graphics.FromImage(image))
using (SolidBrush brush = new SolidBrush(Color.Black))
{
    Size size = new Size(image.Width, 64);
    Point point = new Point(0, 0);
    Rectangle rectangle = new Rectangle(point, size);
    g.FillRectangle(brush, rectangle);
}
image.Save(@"C:\Screenshots\Screenshot.jpeg");

Clipperlib is not behaving as intended when doing a union on a multipolygon

I am trying to union a multipolygon, but it is not working well, as shown in the picture below:
The code I am using is:
using System.Collections.Generic;
using System.Linq;
using ClipperLib;
using Polygon = System.Collections.Generic.List<ClipperLib.IntPoint>;
using Polygons = System.Collections.Generic.List<System.Collections.Generic.List<ClipperLib.IntPoint>>;

namespace Ylp.ComputationalGeometry
{
    public static class Merge
    {
        public static IList<IList<double>> Multipolygon(IList<IList<IList<IList<double>>>> multiPolygon)
        {
            const double precisionFactor = 1000000000000000.0;

            // precondition: all your polygons have the same orientation
            // (i.e. either clockwise or counter-clockwise)
            Polygons polys = new Polygons();
            foreach (var x in multiPolygon) // IList<T> has no ForEach, so use a foreach loop
            {
                Polygon polygon = x.First().Select(y => new IntPoint
                {
                    X = (long)(y[0] * precisionFactor),
                    Y = (long)(y[1] * precisionFactor)
                }).ToList();
                polys.Add(polygon);
            }

            Polygons solution = new Polygons();
            Clipper c = new Clipper();
            c.AddPaths(polys, PolyType.ptSubject, true);
            c.Execute(ClipType.ctUnion, solution,
                PolyFillType.pftNonZero, PolyFillType.pftNonZero);

            var coordinates = solution.SelectMany(x => x.Select(y => (IList<double>)new List<double>
            {
                y.X / precisionFactor,
                y.Y / precisionFactor
            }).ToList()).ToList();
            return coordinates;
        }
    }
}
What I really want the outcome of the union to look like is something like the following; any intersections in the middle should be ignored and a single area covering both shapes should be created:
and the original shape is:
