Unity: detect click event on UIVertex - c#

I am drawing lines on a canvas using the 'UIVertex' struct and I would like to be able to detect click events on the lines I have drawn.
Here is how I draw lines (largely inspired by this tutorial => https://www.youtube.com/watch?v=--LB7URk60A):
void DrawVerticesForPoint(Vector2 point, float angle, VertexHelper vh)
{
    UIVertex vertex = UIVertex.simpleVert;
    //vertex.color = Color.red;

    // Left edge of the segment: offset by half the thickness, rotated to the segment's angle.
    vertex.position = Quaternion.Euler(0, 0, angle) * new Vector3(-thickness / 2, 0);
    vertex.position += new Vector3(unitWidth * point.x, unitHeight * point.y);
    vh.AddVert(vertex);

    // Right edge of the segment.
    vertex.position = Quaternion.Euler(0, 0, angle) * new Vector3(thickness / 2, 0);
    vertex.position += new Vector3(unitWidth * point.x, unitHeight * point.y);
    vh.AddVert(vertex);
}
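For context, here is a sketch of how DrawVerticesForPoint is typically driven from Graphic.OnPopulateMesh in this kind of line renderer (untested; the points array, the GetAngle helper, and the quad indexing are my assumptions, not the tutorial's exact code):

protected override void OnPopulateMesh(VertexHelper vh)
{
    vh.Clear();
    // Two vertices per call, four per segment.
    for (int i = 0; i < points.Length - 1; i++)
    {
        float angle = GetAngle(points[i], points[i + 1]); // assumed helper
        DrawVerticesForPoint(points[i], angle, vh);
        DrawVerticesForPoint(points[i + 1], angle, vh);
    }
    // Stitch each quad out of two triangles.
    for (int i = 0; i < points.Length - 1; i++)
    {
        int index = i * 4;
        vh.AddTriangle(index + 0, index + 1, index + 3);
        vh.AddTriangle(index + 3, index + 2, index + 0);
    }
}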
Any idea?

Here is the solution I have found thanks to this post:
public bool PointIsOnLine(Vector3 point, UILineRenderer line)
{
    Vector3 point1 = line.points[0];
    Vector3 point2 = line.points[1];

    // Project the point onto the segment and clamp to its endpoints.
    var dirNorm = (point2 - point1).normalized;
    var t = Vector2.Dot(point - point1, dirNorm);
    var tClamped = Mathf.Clamp(t, 0, (point2 - point1).magnitude);
    var closestPoint = point1 + dirNorm * tClamped;

    // The click counts as a hit if it lands within half the line's thickness.
    var dist = Vector2.Distance(point, closestPoint);
    return dist < line.thickness / 2;
}
The UILineRenderer class is the class I have which represents my lines.
line.points[0] and line.points[1] contain the coordinates of the two points which determine the line length and position. line.thickness is the... thickness of the line :O
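To route an actual click to this test, one option is an IPointerClickHandler on the line's GameObject (a sketch, untested; it assumes the line's Graphic is a raycast target and that line.points are in the same local space as the converted point; if the points are stored in grid units, divide by unitWidth/unitHeight first):

using UnityEngine;
using UnityEngine.EventSystems;

public class LineClickDetector : MonoBehaviour, IPointerClickHandler
{
    public UILineRenderer line; // the line to hit-test

    public void OnPointerClick(PointerEventData eventData)
    {
        // Convert the screen-space click into this RectTransform's local space.
        RectTransformUtility.ScreenPointToLocalPointInRectangle(
            (RectTransform)transform, eventData.position,
            eventData.pressEventCamera, out Vector2 localPoint);

        if (PointIsOnLine(localPoint, line))
            Debug.Log("Line clicked!");
    }

    // PointIsOnLine as defined above.
}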

Related

Can you help me draw my 3D model of a chain in a specific direction?

With the code below I created a function to draw a 3D chain model in C# using the Helix Toolkit. This works exactly how I wanted, but now I'm breaking my head over a good approach for drawing the chain links in a specific direction, from a start point to an end point, and I haven't come much further in the last week. I know I need to work with vector multiplication or scalars, but I need some guidance to the right topic to solve my problem.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using CommunityToolkit.Mvvm.ComponentModel; // assumed source of [ObservableProperty]
using HelixToolkit.SharpDX.Core;
using HelixToolkit.Wpf.SharpDX;
using SharpDX;

namespace RapiD.Geometry.Models
{
    public partial class ChainLink3D : GeometryBase3D
    {
        [ObservableProperty]
        float radius;

        [ObservableProperty]
        float width;

        [ObservableProperty]
        float diameter;

        [ObservableProperty]
        float length;

        [ObservableProperty]
        int copies;

        [ObservableProperty]
        ObservableCollection<Element3D> elements;

        public ChainLink3D(float diameter, float width, float length, int copies)
        {
            this.width = width;
            this.length = length;
            this.diameter = diameter;
            this.copies = copies;
            this.elements = new ObservableCollection<Element3D>();
            OriginalMaterial = PhongMaterials.Chrome;
            DrawChainLink();
        }

        public void DrawChainLink()
        {
            MeshBuilder meshBuilder = new MeshBuilder();
            float radius = (width - diameter) / 2;
            float trans = 0f;
            float translate = length + (radius * 2) - diameter;
            float yoffset = 0;
            int segments = 10;
            float interval = 180 / segments;
            int numOfCopies = copies;
            float startPoint = radius - (diameter / 2);
            float endPoint = -length - radius + (diameter / 2);
            Vector3 startVector = new Vector3(-300, 200f, 0);
            Vector3 endVector = new Vector3(300, 500, 0);
            Vector3 direction = Vector3.Normalize(endVector - startVector);

            // The for loop draws the chain links
            for (int j = 0; j < numOfCopies; j++)
            {
                List<Vector3> single_chain_link = new List<Vector3>();
                for (float i = 0; i <= 360; i += interval)
                {
                    if (i > 180)
                        yoffset = -length;
                    else
                        yoffset = 0;

                    float a = i * MathF.PI / 180;
                    float x = radius * MathF.Cos(a);
                    float y = radius * MathF.Sin(a) + yoffset + trans;
                    Vector3 vec = new Vector3(x, y, 0);

                    // Rotates every second chain link
                    if (j % 2 == 1)
                        vec = new Vector3(0, y, x);

                    vec += startVector;
                    //vec *= direction;
                    single_chain_link.Add(vec);
                }

                // These three spheres are a reference for a new example direction
                // in which I want to draw the chain link
                meshBuilder.AddSphere(Vector3.Zero, 5, 10, 10);
                meshBuilder.AddSphere(startVector, 5, 10, 10);
                meshBuilder.AddSphere(endVector, 5, 10, 10);

                meshBuilder.AddTube(single_chain_link, diameter, 10, true);
                meshBuilder.AddArrow(new Vector3(0, startPoint + trans, 0), new Vector3(0, endPoint + trans, 0), 2, 10);
                elements.Add(new Element3D(new Vector3(0, startPoint + trans, 0), new Vector3(0, endPoint + trans, 0)));
                //single_chain_link.OrderByDescending(x => x.X);
                MeshGeometry = meshBuilder.ToMeshGeometry3D();
                trans -= translate;
            }
        }
    }
}
I did successfully draw the chain from a specific start point, but I want to draw the elements from the given start point to an end position.
You should be using a transformation to rotate and/or move your model to the correct orientation.
To create a rotation matrix from a direction it is useful to know some linear algebra. Notably, the cross product of two vectors results in a vector orthogonal to both, and a rotation matrix is not really anything more than three orthogonal axes. So you can do something like the following pseudo code:
var x = myDirection;
Vector3 y;
if (x.AlmostEqual(Vector3.UnitY)) {
    // Direction is (nearly) parallel to UnitY, so cross with a different axis.
    y = x.CrossProduct(Vector3.UnitZ);
}
else {
    y = x.CrossProduct(Vector3.UnitY);
}
var z = y.CrossProduct(x);
// Create a matrix from the x, y, z axes
If you are using System.Numerics, there is Matrix4x4.CreateLookAt, which does more or less this.
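As a rough illustration of the pseudo code with System.Numerics (my own sketch; the helper-axis threshold and the row layout are assumptions, and handedness conventions vary between libraries):

using System;
using System.Numerics;

static Matrix4x4 FromDirection(Vector3 direction)
{
    Vector3 x = Vector3.Normalize(direction);
    // Pick a helper axis that is not (nearly) parallel to the direction.
    Vector3 helper = MathF.Abs(Vector3.Dot(x, Vector3.UnitY)) > 0.99f
        ? Vector3.UnitZ
        : Vector3.UnitY;
    Vector3 y = Vector3.Normalize(Vector3.Cross(x, helper));
    Vector3 z = Vector3.Cross(x, y);
    // The three orthogonal axes become the rows of the rotation matrix.
    return new Matrix4x4(
        x.X, x.Y, x.Z, 0,
        y.X, y.Y, y.Z, 0,
        z.X, z.Y, z.Z, 0,
        0,   0,   0,   1);
}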
Once you have a matrix you can just transform your model to rotate it in whatever direction you want. Note that it is common, at least for me, to mix up directions and end up with something that is off by 90 degrees, or some other error. It does not help that different libraries can use different conventions. My best solution is to do things in small steps and verify that the result is as you expect it to be.

Why isn't my perspective transform working

I am building a test 3D renderer in WinForms using the objects in System.Numerics such as Vector3 and Matrix4x4.
The object drawn is a point cloud, centered around (0,0,0) and rotated about the origin. Each node renders as a dot on the screen. Here is what the 3D shape should look like:

[image: fake perspective, 3D view]

More specifically, when viewed from the front, the perspective should be obvious: the blue dots that are further away from the eye should sit at a smaller distance from the center.

[image: fake perspective, front view]
The pipeline is roughly as follows:

1. Rotation transformation
   Matrix4x4 RY = Matrix4x4.CreateRotationY(ry);

2. Perspective transformation (fov=90, aspect=1.0f, near=1f, far=100f)
   Matrix4x4 P = Matrix4x4.CreatePerspectiveFieldOfView(fov.Radians(), 1.0f, 1f, 100f);

3. Camera transformation
   Matrix4x4 C = RY * P;
   var node = Vector3.Transform(face.Nodes[i], C);

4. Project to 2D
   Vector2 point = new Vector2(node.X, node.Y);

5. View transformation
   Matrix3x2 S = Matrix3x2.CreateScale(height / scale, -height / scale);
   Matrix3x2 T = Matrix3x2.CreateTranslation(width / 2f, height / 2f);
   Matrix3x2 V = S * T;
   point = Vector2.Transform(point, V);

6. Pixel coordinates & render
   PointF pixel = new PointF(point.X, point.Y);
   e.Graphics.FillEllipse(brush, pixel.X - 2, pixel.Y - 2, 4, 4);
So what I am seeing is an orthographic projection.

[image: program output]

The blue nodes further away are not smaller as expected. Somehow the perspective transformation is being ignored.
So my question is: is my usage of Matrix4x4.CreatePerspectiveFieldOfView() correct in step #2? And is the projection from 3D to 2D in step #4 correct?
Steps #1, #5 and #6 seem to be working exactly as intended; my issue is somewhere in steps #2-#4.
Example code to reproduce the issue
Form1.cs
public partial class Form1 : Form
{
    // Rotation angles in radians (fields assumed; not shown in the original excerpt).
    float ry = 0.5f, rx = 0.3f;

    public Form1()
    {
        InitializeComponent();
    }

    public Shape Object { get; set; }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        this.Object = Shape.DemoShape1();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        float width = ClientSize.Width, height = ClientSize.Height;
        float scale = 40f, fov = 90f;
        Matrix4x4 RY = Matrix4x4.CreateRotationY(ry);
        Matrix4x4 RX = Matrix4x4.CreateRotationX(rx);
        Matrix4x4 P = Matrix4x4.CreatePerspectiveFieldOfView(fov.Radians(), 1.0f, 1f, 100f);
        Matrix4x4 C = RY * RX * P;
        Matrix3x2 S = Matrix3x2.CreateScale(
            height / scale, -height / scale);
        Matrix3x2 T = Matrix3x2.CreateTranslation(
            width / 2f, height / 2f);
        Matrix3x2 V = S * T;
        using (var pen = new Pen(Color.Black, 0))
        {
            var arrow = new AdjustableArrowCap(4f, 9.0f);
            pen.CustomEndCap = arrow;
            using (var brush = new SolidBrush(Color.Black))
            {
                // Draw coordinate triad (omitted)
                // Each face has multiple nodes with the same color
                foreach (var face in Object.Faces)
                {
                    brush.Color = face.Color;
                    PointF[] points = new PointF[face.Nodes.Count];
                    for (int i = 0; i < points.Length; i++)
                    {
                        // transform nodes into draw points
                        var item = Vector4.Transform(face.Nodes[i], C);
                        var point = Vector2.Transform(item.Project(), V);
                        points[i] = point.ToPoint();
                    }
                    // Draw points as dots
                    e.Graphics.SmoothingMode = SmoothingMode.HighQuality;
                    for (int i = 0; i < points.Length; i++)
                    {
                        e.Graphics.FillEllipse(brush,
                            points[i].X - 2, points[i].Y - 2,
                            4, 4);
                    }
                }
            }
        }
    }
}
GraphicsExtensions.cs
public static class GraphicsExtensions
{
    public static PointF ToPoint(this Vector2 vector)
        => new PointF(vector.X, vector.Y);
    public static Vector2 Project(this Vector3 vector)
        => new Vector2(vector.X, vector.Y);
    public static Vector2 Project(this Vector4 vector)
        => new Vector2(vector.X, vector.Y);
    public static float Radians(this float degrees) => (float)(Math.PI / 180) * degrees;
    public static float Degrees(this float radians) => (float)(180 / Math.PI) * radians;
}

C# - Use of compute shaders

I'm trying to implement, using SharpDX11, a ray/mesh intersection method on the GPU. I've seen from an older post (Older post) that this can be done with a compute shader, but I need help creating and defining the buffers outside the .hlsl code.
My HLSL code is the following:
struct rayHit
{
    float3 intersection;
};

cbuffer cbRaySettings : register(b0)
{
    float3 rayFrom;
    float3 rayDir;
    uint TriangleCount;
};

StructuredBuffer<float3> positionBuffer : register(t0);
StructuredBuffer<uint3> indexBuffer : register(t1);
AppendStructuredBuffer<rayHit> appendRayHitBuffer : register(u0);

void TestTriangle(float3 p1, float3 p2, float3 p3, out bool hit, out float3 intersection)
{
    // Perform ray/triangle intersection.
    // Compute vectors along two edges of the triangle.
    float3 edge1, edge2;
    float distance;
    // Edge 1
    edge1.x = p2.x - p1.x;
    edge1.y = p2.y - p1.y;
    edge1.z = p2.z - p1.z;
    // Edge 2
    edge2.x = p3.x - p1.x;
    edge2.y = p3.y - p1.y;
    edge2.z = p3.z - p1.z;
    // Cross product of ray direction and edge2 - first part of determinant.
    float3 directioncrossedge2;
    directioncrossedge2.x = (rayDir.y * edge2.z) - (rayDir.z * edge2.y);
    directioncrossedge2.y = (rayDir.z * edge2.x) - (rayDir.x * edge2.z);
    directioncrossedge2.z = (rayDir.x * edge2.y) - (rayDir.y * edge2.x);
    // Compute the determinant: dot product of edge1 and the first part of the determinant.
    float determinant;
    determinant = (edge1.x * directioncrossedge2.x) + (edge1.y * directioncrossedge2.y) + (edge1.z * directioncrossedge2.z);
    // If the ray is parallel to the triangle plane, there is no collision.
    // This also means that we are not culling; the ray may hit both the
    // back and the front of the triangle.
    if (determinant == 0)
    {
        distance = 0.0f;
        intersection = float3(0, 0, 0);
        hit = false;
        return; // early-out so 'hit' is not overwritten below
    }
    float inversedeterminant = 1.0f / determinant;
    // Calculate the U parameter of the intersection point.
    float3 distanceVector;
    distanceVector.x = rayFrom.x - p1.x;
    distanceVector.y = rayFrom.y - p1.y;
    distanceVector.z = rayFrom.z - p1.z;
    float triangleU;
    triangleU = (distanceVector.x * directioncrossedge2.x) + (distanceVector.y * directioncrossedge2.y) + (distanceVector.z * directioncrossedge2.z);
    triangleU = triangleU * inversedeterminant;
    // Make sure it is inside the triangle.
    if (triangleU < 0.0f || triangleU > 1.0f)
    {
        distance = 0.0f;
        intersection = float3(0, 0, 0);
        hit = false;
        return;
    }
    // Calculate the V parameter of the intersection point.
    float3 distancecrossedge1;
    distancecrossedge1.x = (distanceVector.y * edge1.z) - (distanceVector.z * edge1.y);
    distancecrossedge1.y = (distanceVector.z * edge1.x) - (distanceVector.x * edge1.z);
    distancecrossedge1.z = (distanceVector.x * edge1.y) - (distanceVector.y * edge1.x);
    float triangleV;
    triangleV = ((rayDir.x * distancecrossedge1.x) + (rayDir.y * distancecrossedge1.y)) + (rayDir.z * distancecrossedge1.z);
    triangleV = triangleV * inversedeterminant;
    // Make sure it is inside the triangle.
    if (triangleV < 0.0f || triangleU + triangleV > 1.0f)
    {
        distance = 0.0f;
        intersection = float3(0, 0, 0);
        hit = false;
        return;
    }
    // Compute the distance along the ray to the triangle.
    float raydistance;
    raydistance = (edge2.x * distancecrossedge1.x) + (edge2.y * distancecrossedge1.y) + (edge2.z * distancecrossedge1.z);
    raydistance = raydistance * inversedeterminant;
    // Is the triangle behind the ray origin?
    if (raydistance < 0.0f)
    {
        distance = 0.0f;
        intersection = float3(0, 0, 0);
        hit = false;
        return;
    }
    distance = raydistance; // the original never assigned this before using it below
    intersection = rayFrom + (rayDir * distance);
    hit = true;
}
[numthreads(64, 1, 1)]
void CS_RayAppend(uint3 tid : SV_DispatchThreadID)
{
    if (tid.x >= TriangleCount)
        return;

    uint3 indices = indexBuffer[tid.x];
    float3 p1 = positionBuffer[indices.x];
    float3 p2 = positionBuffer[indices.y];
    float3 p3 = positionBuffer[indices.z];

    bool hit;
    float3 p;
    TestTriangle(p1, p2, p3, hit, p);
    if (hit)
    {
        rayHit hitData;
        hitData.intersection = p;
        appendRayHitBuffer.Append(hitData);
    }
}
The following is part of my C# implementation, but I'm not able to understand how to load the buffers for the compute shader.
int count = obj.Mesh.Triangles.Count;
int size = 12; // stride of rayHit: one float3 (3 * 4 bytes) per hit
BufferDescription bufferDesc = new BufferDescription()
{
    BindFlags = BindFlags.UnorderedAccess | BindFlags.ShaderResource,
    Usage = ResourceUsage.Default,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.BufferStructured,
    StructureByteStride = size,
    SizeInBytes = size * count
};
Buffer buffer = new Buffer(device, bufferDesc);
UnorderedAccessViewDescription uavDescription = new UnorderedAccessViewDescription()
{
    Buffer = new UnorderedAccessViewDescription.BufferResource()
    {
        FirstElement = 0,
        Flags = UnorderedAccessViewBufferFlags.Append, // required for an AppendStructuredBuffer
        ElementCount = count
    },
    Format = SharpDX.DXGI.Format.Unknown,
    Dimension = UnorderedAccessViewDimension.Buffer
};
UnorderedAccessView uav = new UnorderedAccessView(device, buffer, uavDescription);
context.ComputeShader.SetUnorderedAccessView(0, uav);
var code = HLSLCompiler.CompileFromFile(@"Shaders\TestTriangle.hlsl", "CS_RayAppend", "cs_5_0");
ComputeShader _shader = new ComputeShader(device, code);
Buffer positionsBuffer = new Buffer(device, Utilities.SizeOf<Vector3>(), ResourceUsage.Default, BindFlags.None, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
context.UpdateSubresource(ref data, positionsBuffer);
context.ComputeShader.Set(_shader);
Inside my C# implementation I'm considering only one ray (with its origin and direction), and I would like to use the shader to check the intersection with all the triangles of the mesh. I'm already able to do that using the CPU, but for 20k+ triangles the computation takes too long even though I'm already using parallel code.
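For what it's worth, here is a hedged sketch of how the remaining pieces might be wired up in SharpDX: a constant buffer for cbRaySettings (mind the 16-byte HLSL packing), a structured buffer plus shader resource view for t0, and the dispatch. The names RaySettings, positions, rayFrom, and rayDir are mine, and this is a starting point rather than a verified setup:

using System.Runtime.InteropServices;
using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Matches cbRaySettings: HLSL places each float3 in its own 16-byte slot.
[StructLayout(LayoutKind.Sequential)]
struct RaySettings
{
    public Vector3 RayFrom; public float Pad0;
    public Vector3 RayDir;  public uint TriangleCount;
}

// Assuming positions (Vector3[]) has been filled from the mesh:
var settings = new RaySettings { RayFrom = rayFrom, RayDir = rayDir, TriangleCount = (uint)count };
var cbuffer = Buffer.Create(device, BindFlags.ConstantBuffer, ref settings);
context.ComputeShader.SetConstantBuffer(0, cbuffer);

// Structured buffer + SRV for the vertex positions (t0).
var posBuffer = Buffer.Create(device, positions, new BufferDescription
{
    BindFlags = BindFlags.ShaderResource,
    Usage = ResourceUsage.Default,
    OptionFlags = ResourceOptionFlags.BufferStructured,
    StructureByteStride = Utilities.SizeOf<Vector3>(),
    SizeInBytes = Utilities.SizeOf<Vector3>() * positions.Length
});
var posView = new ShaderResourceView(device, posBuffer);
context.ComputeShader.SetShaderResource(0, posView);

// The uint3 index buffer (t1) follows the same pattern with a 12-byte stride.

// One thread per triangle, 64 threads per group, matching numthreads(64, 1, 1).
context.Dispatch((count + 63) / 64, 1, 1);

Reading the hits back additionally involves copying the UAV's hidden append counter with context.CopyStructureCount and staging the append buffer into a CPU-readable copy.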

Point-Zoom on Mandelbrot Set in C# - It works, except when the mouse has moved

I'm able to point zoom on the Mandelbrot set, as long as the mouse doesn't move after zooming has begun. I've tried calculating a normalized delta (new coordinate - old coordinate) * (old zoom), but what happens is that the image appears to jump around to a new location. I've seen this issue before. I'm struggling more here because I have to somehow convert this mouse position delta back to the [-2, 2] coordinate space of the Mandelbrot set.
Here's my code. What's important is the GetZoomPoint method, and then the lines of code that define x0 and y0. Also, I use the Range class to scale values from one range to another. I WAS using deltaTrans (that's the thing I was talking about earlier, where I normalize the mouse delta with the old scale).
using OpenTK.Graphics.OpenGL;
using SpriteSheetMaker;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Fractal.Fractal
{
    public class Mandelbrot : BaseTexture
    {
        private static Transform GlobalTransform = SpriteSheetMaker.Global.Transform;
        private static Vector3 GlobalScale = GlobalTransform.Scale;
        private static Vector3 GlobalTrans = GlobalTransform.Translation;
        private static Vector3 LastWindowPoint = null;
        private static Vector3 ZoomFactor = Vector3.ONE * 1.2f;
        private static Vector3 Displacement = Vector3.ZERO;
        private static int WindowSize = 100;

        public static Vector3 GetZoomPoint()
        {
            var zP = OpenGLHelpers.LastZoomPoint.Clone();
            if (LastWindowPoint == null)
            {
                LastWindowPoint = zP.Clone();
            }
            var delta = zP - LastWindowPoint;
            var oldZoom = GlobalScale / ZoomFactor;
            var deltaTrans = delta.XY * oldZoom.XY;
            var factor = ZoomFactor.Clone();
            Range xR = new Range(0, WindowSize);
            Range yR = new Range(0, WindowSize);
            Range complexRange = new Range(-2, 2);

            // Calculate displacement of zooming position.
            var dx = (zP.X - Displacement.X) * (factor.X - 1f);
            var dy = (zP.Y - Displacement.Y) * (factor.Y - 1f);

            // Compensate for displacement.
            Displacement.X -= dx;
            Displacement.Y -= dy;
            zP -= Displacement;
            var x = complexRange.ScaleValue(zP.X, xR);
            var y = complexRange.ScaleValue(zP.Y, yR);
            var rtn = new Vector3(x, y);
            LastWindowPoint = zP.Clone();
            return rtn;
        }

        public static Mandelbrot Generate()
        {
            var size = new Size(WindowSize, WindowSize);
            var radius = new Size(size.Width / 2, size.Height / 2);
            Bitmap bmp = new Bitmap(size.Width, size.Height);
            LockBitmap.LockBitmapUnsafe lbm = new LockBitmap.LockBitmapUnsafe(bmp);
            lbm.LockBits();
            var pt = Mandelbrot.GetZoomPoint();
            Parallel.For(0, size.Width, i =>
            {
                // float x0 = complexRangeX.ScaleValue(i, xRange);
                float x0 = ((i - radius.Width) / GlobalScale.X) + pt.X;
                Parallel.For(0, size.Height, j =>
                {
                    // float y0 = complexRangeY.ScaleValue(j, yRange);
                    float y0 = ((j - radius.Height) / GlobalScale.Y) + pt.Y;
                    float value = 0f;
                    float x = 0.0f;
                    float y = 0.0f;
                    int iteration = 0;
                    int max_iteration = 100;
                    while (x * x + y * y <= 4.0 && iteration < max_iteration)
                    {
                        float xtemp = x * x - y * y + x0;
                        y = 2.0f * x * y + y0;
                        x = xtemp;
                        iteration += 1;
                        if (iteration == max_iteration)
                        {
                            value = 255;
                            break;
                        }
                        else
                        {
                            value = iteration * 50f % 255f;
                        }
                    }
                    int v = (int)value;
                    lbm.SetPixel(i, j, new ColorLibrary.HSL(v / 255f, 1.0, 0.5).ToDotNetColor());
                });
            });
            lbm.UnlockBits();
            var tex = new BaseTextureImage(bmp);
            var rtn = new Mandelbrot(tex);
            return rtn;
        }

        public override void Draw()
        {
            base._draw();
        }

        private Mandelbrot(BaseTextureImage graphic)
        {
            var topLeft = new Vector3(0, 1);
            var bottomLeft = new Vector3(0, 0);
            var bottomRight = new Vector3(1, 0);
            var topRight = new Vector3(1, 1);
            this.Vertices = new List<Vector3>()
            {
                topLeft, bottomLeft, bottomRight, topRight
            };
            this.Size.X = WindowSize;
            this.Size.Y = WindowSize;
            this.Texture2D = graphic;
        }
    }
}
I refactored my code and also figured out a solution to this problem: two big wins in one. I found a solution on CodeProject written in C# which I was readily able to adapt to my project. I'm not sure why I didn't realize this when I posted the question, but what I needed to solve this issue was to create a 'window' of zoom and not think in terms of a 'point zoom'. Even if I am trying to zoom directly into a point, that point is just the center of some sort of window.
Here is the method I have, which expects start and end mousedown coordinates (in screen space) and converts the Mandelbrot set window size accordingly.
public void ApplyZoom(double x0, double y0, double x1, double y1)
{
    if (x1 == x0 && y0 == y1)
    {
        // This was just a click, no movement occurred
        return;
    }

    /*
     * XMin, YMin and XMax, YMax are the current extent of the set.
     * x0, y0 and x1, y1 are the part we selected.
     * Do the math to draw the selected rectangle.
     */
    double scaleX = (XMax - XMin) / (float)BitmapSize;
    double scaleY = (YMax - YMin) / (float)BitmapSize;
    XMax = (float)x1 * scaleX + XMin;
    YMax = (float)y1 * scaleY + YMin;
    XMin = (float)x0 * scaleX + XMin;
    YMin = (float)y0 * scaleY + YMin;
    this.Refresh(); // force the Mandelbrot to redraw
}
Basically, what's happening is we calculate the ratio between the Mandelbrot window size and the screen size we are drawing to. Then, using that scale, we convert our mousedown coordinates to Mandelbrot set coordinates (x1 * scaleX, etc.) and manipulate the current Min and Max coordinates with them, using the Min values as the pivot point.
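With that in place, a point zoom is just a window centered on the cursor. A minimal sketch (my own helper, assuming the ApplyZoom above and a square bitmap):

// Hypothetical helper: zoom in by 'factor' around a clicked screen point (px, py)
// by selecting a window centered on that point.
public void ZoomAtPoint(double px, double py, double factor)
{
    double half = BitmapSize / (2.0 * factor); // half the new window, in pixels
    ApplyZoom(px - half, py - half, px + half, py + half);
}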
Here's the link to the CodeProject I used as a reference: CodeProject link

Transcribing a polygon on a circle

I am currently trying to inscribe the diagonals of a decagon inside a circle, like this:

[image: decagon with its diagonals inscribed in a circle]

In C#, my approach would be to create a circle:

e.Graphics.DrawEllipse(myPen, 0, 0, 100, 100);

and draw lines inside it using:

e.Graphics.DrawLine(myPen, 20, 5, 50, 50);

After that I would draw the decagon polygon.
Currently I'm stuck on how to divide the circle into 10 parts, that is, finding the correct coordinates of the points on the circumference of the circle, because I'm not good at math. I want to know how to find the next point on the circumference of the circle; the size of my circle is indicated above.
I would also like to ask for a better approach to my problem.
Thank you :)
Just for grits and shins, here's a generic implementation that will inscribe an X-sided polygon into the Rectangle you pass it. Note that in this approach I'm not actually calculating any absolute points. Instead, I translate the origin, rotate the surface, and draw the lines only with respect to the origin, using a fixed length and an angle. This is repeated in a loop to achieve the end result below, and is very similar to commanding the Turtle in Logo:
public partial class Form1 : Form
{
    PictureBox pb = new PictureBox();
    NumericUpDown nud = new NumericUpDown();

    public Form1()
    {
        InitializeComponent();
        this.Text = "Inscribed Polygon Demo";

        TableLayoutPanel tlp = new TableLayoutPanel();
        tlp.RowCount = 2;
        tlp.RowStyles.Clear();
        tlp.RowStyles.Add(new RowStyle(SizeType.AutoSize));
        tlp.RowStyles.Add(new RowStyle(SizeType.Percent, 100));
        tlp.ColumnCount = 2;
        tlp.ColumnStyles.Clear();
        tlp.ColumnStyles.Add(new ColumnStyle(SizeType.AutoSize));
        tlp.ColumnStyles.Add(new ColumnStyle(SizeType.AutoSize));
        tlp.Dock = DockStyle.Fill;
        this.Controls.Add(tlp);

        Label lbl = new Label();
        lbl.Text = "Number of Sides:";
        lbl.TextAlign = ContentAlignment.MiddleRight;
        tlp.Controls.Add(lbl, 0, 0);

        nud.Minimum = 3;
        nud.Maximum = 20;
        nud.AutoSize = true;
        nud.ValueChanged += new EventHandler(nud_ValueChanged);
        tlp.Controls.Add(nud, 1, 0);

        pb.Dock = DockStyle.Fill;
        pb.Paint += new PaintEventHandler(pb_Paint);
        pb.SizeChanged += new EventHandler(pb_SizeChanged);
        tlp.SetColumnSpan(pb, 2);
        tlp.Controls.Add(pb, 0, 1);
    }

    void nud_ValueChanged(object sender, EventArgs e)
    {
        pb.Refresh();
    }

    void pb_SizeChanged(object sender, EventArgs e)
    {
        pb.Refresh();
    }

    void pb_Paint(object sender, PaintEventArgs e)
    {
        // make circle centered and 90% of PictureBox size:
        int Radius = (int)((double)Math.Min(pb.ClientRectangle.Width, pb.ClientRectangle.Height) / (double)2.0 * (double).9);
        Point Center = new Point((int)((double)pb.ClientRectangle.Width / (double)2.0), (int)((double)pb.ClientRectangle.Height / (double)2.0));
        Rectangle rc = new Rectangle(Center, new Size(1, 1));
        rc.Inflate(Radius, Radius);
        InscribePolygon(e.Graphics, rc, (int)nud.Value);
    }

    private void InscribePolygon(Graphics G, Rectangle rc, int numSides)
    {
        if (numSides < 3)
            throw new Exception("Number of sides must be greater than or equal to 3!");

        float Radius = (float)((double)Math.Min(rc.Width, rc.Height) / 2.0);
        PointF Center = new PointF((float)(rc.Location.X + rc.Width / 2.0), (float)(rc.Location.Y + rc.Height / 2.0));
        RectangleF rcF = new RectangleF(Center, new SizeF(1, 1));
        rcF.Inflate(Radius, Radius);
        G.DrawEllipse(Pens.Black, rcF);

        float Sides = (float)numSides;
        float ExteriorAngle = (float)360 / Sides;
        float InteriorAngle = (Sides - (float)2) / Sides * (float)180;
        float SideLength = (float)2 * Radius * (float)Math.Sin(Math.PI / (double)Sides);
        for (int i = 1; i <= Sides; i++)
        {
            G.ResetTransform();
            G.TranslateTransform(Center.X, Center.Y);
            G.RotateTransform((i - 1) * ExteriorAngle);
            G.DrawLine(Pens.Black, new PointF(0, 0), new PointF(0, -Radius));
            G.TranslateTransform(0, -Radius);
            G.RotateTransform(180 - InteriorAngle / 2);
            G.DrawLine(Pens.Black, new PointF(0, 0), new PointF(0, -SideLength));
        }
    }
}
I got the formula for the length of the side here at Regular Polygon Calculator.
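(That formula is s = 2R * sin(pi / n) for an n-sided regular polygon inscribed in a circle of radius R, which is exactly what the SideLength line above computes.)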
One way of dealing with this is using the trigonometric functions sin and cos. Pass them the desired angle, in radians, in a loop (you need multiples of 2π/10, i.e. a = i*π/5 for i between 0 and 9, inclusive). R*sin(a) will give you the vertical offset from the origin; R*cos(a) will give you the horizontal offset.
Note that sin and cos are in the range from -1 to 1, so you will see both positive and negative results. You will need to add an offset for the center of your circle to make the points appear at the right spots.
Once you've generated the list of points, connect point i to point i+1. When you reach the last point (i = 9), connect it back to the initial point to complete the polygon; a sketch of this follows below.
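A minimal sketch of that approach in C# (assuming it runs inside a Paint handler; the 100x100 circle from the question gives R = 50 and center (50, 50)):

void panel_Paint(object sender, PaintEventArgs e) // hypothetical handler
{
    int n = 10;                        // decagon
    float R = 50f, cx = 50f, cy = 50f; // radius and center
    var pts = new PointF[n];
    for (int i = 0; i < n; i++)
    {
        double a = i * Math.PI / 5;    // multiples of 2*pi/10 radians
        pts[i] = new PointF(cx + R * (float)Math.Cos(a),
                            cy + R * (float)Math.Sin(a));
    }
    e.Graphics.DrawEllipse(Pens.Black, cx - R, cy - R, 2 * R, 2 * R);
    e.Graphics.DrawPolygon(Pens.Black, pts); // closes the last point back to the first
    // For the diagonals, connect every pair of vertices.
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            e.Graphics.DrawLine(Pens.Gray, pts[i], pts[j]);
}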
I haven't tested it, but I think it is OK.
#define DegreeToRadian(d) ((d) * (Pi / 180)) // parenthesized so the macro expands safely

float r = 1;  // radius
float cX = 0; // centerX
float cY = 0; // centerY
int numSegment = 10;
float angleOffset = 360.0f / numSegment;
float currentAngle = 0;
for (int i = 0; i < numSegment; i++)
{
    float startAngle = DegreeToRadian(currentAngle);
    float endAngle = DegreeToRadian(fmod(currentAngle + angleOffset, 360));
    float x1 = r * cos(startAngle) + cX;
    float y1 = r * sin(startAngle) + cY;
    float x2 = r * cos(endAngle) + cX;
    float y2 = r * sin(endAngle) + cY;
    currentAngle += angleOffset;
    // [cX, cY][x1, y1][x2, y2]
}

(fmod is the C++ function equivalent to floatNumber % floatNumber.)
