Bevel or sunken effect algorithm in C#

I am looking for ways to generate a bevel/emboss effect for a set of random closed Bézier shapes. I came across the following post, which seems to match my requirement:
https://dsp.stackexchange.com/questions/530/bitmap-alpha-bevel-algorithm
How do I port this to C#? Are there any algorithms available that I can use? Alternatively, are there any .NET imaging libraries or code snippets to get me started?
I would need to run this code on a server to generate dynamic shapes with transparency around them.

1. Create a 'mesh' from your closed polygon/polyline/path.
The base is the basic shape enlarged by the bevel/sunken width; the top, on or below it, is your shape. For basic symmetric shapes the enlargement is done by scaling around the center; otherwise it is done by a perpendicular shift plus line/curve extension and intersection cutting to join the pieces. The second option is complex to code but always produces a correct shape.
2. Create normals.
You need the normal toward the light source (red in the original illustration; usually the light is in the upper-left corner) and the surface normals on the 'mesh' (green in the original illustration), for every edge, area, or pixel. The light normal can be constant over the whole area for a directional light (a far light source like sunlight), or computed for every point for a point light (a close light source). All normals must be unit 3D vectors!
3. Render the 'mesh' with light (simple normal lighting will be enough):
lighted color = base color * dot_product(light normal, surface normal)
The dot product is the scalar product of two vectors:
(A·B) = dot_product(A(x1,y1,z1), B(x2,y2,z2)) = (x1*x2) + (y1*y2) + (z1*z2)
When A and B are unit vectors the result lies in <-1,+1>: 0 means A and B are perpendicular, +1 means they are parallel with the same direction, and -1 means they point in opposite directions. A code sketch of this step follows below.
PS: the 'mesh' can still be 2D; only the normals must be 3D.
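To make the render step concrete, here is a minimal C# sketch of the shading formula above (my own illustration, not code from the answer; the light vector in the usage comment is an assumed upper-left directional light):

using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

    // All normals must be unit vectors for the dot product to land in <-1,+1>.
    public Vec3 Normalized()
    {
        double len = Math.Sqrt(X * X + Y * Y + Z * Z);
        return new Vec3(X / len, Y / len, Z / len);
    }

    public static double Dot(Vec3 a, Vec3 b)
        => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
}

static class BevelShading
{
    // lighted color = base color * dot(light normal, surface normal),
    // clamped at 0 so surfaces facing away from the light go dark.
    public static byte Shade(byte baseChannel, Vec3 light, Vec3 surfaceNormal)
        => (byte)(baseChannel * Math.Max(0.0,
               Vec3.Dot(light.Normalized(), surfaceNormal.Normalized())));
}

// Usage: a directional light from the upper-left, pointing toward the viewer:
//   var light = new Vec3(-1, -1, 1);
//   byte shaded = BevelShading.Shade(200, light, pixelNormal);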

Related

How to convert UV position to local position and vice-versa?

I'm working on a project that has a layer system represented by several planes in front of each other. These planes receive different textures, which are projected onto a render texture with an orthographic camera to generate composite textures.
This project is being built on top of another system (a game), so I have some restrictions and requirements to make my project work as expected and fit properly into this game. One of the requirements concerns the decals, which have their position and scale represented by a single Vector4 coordinate. I believe this Vector4 represents 4 vertex positions across the X and Y axes (2 for X and 2 for Y). For a better understanding, see the image below:
These Vector4 coordinates seem to be related to the UV of the texture they belong to, because they only hold positive values between 0 and 1. I'm having a hard time trying to fit this coordinate system into my project, because Unity's position system uses the traditional Cartesian plane, with positive and negative values, rather than normalized UV coordinates. So if I use the game's original Vector4 coordinates, the decals get positioned incorrectly, and vice versa (I'm using the original coordinates as a base, but my system is meant to generate content to be used within the game, so these decal coordinates must match the game's standards).
Considering all this, how could I convert the local/global position used by Unity to the UV position used by the game?
Anyway, I tried my best to explain my problem; I'm not sure whether it has an easy solution or not. I figured out this Vector4 stuff only from observation, so feel free to suggest other ideas if you think I'm wrong about it.
EDIT #1: Despite the paragraphs above, I'm afraid my intentions could be clearer, so to complement them: the whole point is to find a way to position the decals using the Vector4 coordinates so that they end up in the right places. The layer system contains bigger planes, which have the full size of the texture, plus smaller ones representing the decals, varying in size. I believe the easiest solution would be to use one of these bigger planes as the "UV area", which would hold the normalized positions mentioned. But I don't know how I would do that...
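The underlying math is a linear remap between the plane's local bounds and the normalized 0..1 range. Here is a minimal Unity C# sketch; the names and the assumed (xMin, yMin, xMax, yMax) layout of the Vector4 are guesses based on the question, not confirmed:

using UnityEngine;

public static class UvPositionMapper
{
    // planeMin/planeMax: opposite corners of the "UV area" plane in local
    // space (hypothetical parameters -- take them from the plane's bounds).
    public static Vector2 LocalToUv(Vector2 localPos, Vector2 planeMin, Vector2 planeMax)
    {
        return new Vector2(
            Mathf.InverseLerp(planeMin.x, planeMax.x, localPos.x),
            Mathf.InverseLerp(planeMin.y, planeMax.y, localPos.y));
    }

    public static Vector2 UvToLocal(Vector2 uv, Vector2 planeMin, Vector2 planeMax)
    {
        return new Vector2(
            Mathf.Lerp(planeMin.x, planeMax.x, uv.x),
            Mathf.Lerp(planeMin.y, planeMax.y, uv.y));
    }

    // If the decal's Vector4 packs (xMin, yMin, xMax, yMax) in UV space --
    // an assumption based on the question -- its local center and size are:
    public static void DecalRect(Vector4 decal, Vector2 planeMin, Vector2 planeMax,
                                 out Vector2 center, out Vector2 size)
    {
        Vector2 min = UvToLocal(new Vector2(decal.x, decal.y), planeMin, planeMax);
        Vector2 max = UvToLocal(new Vector2(decal.z, decal.w), planeMin, planeMax);
        center = (min + max) * 0.5f;
        size = max - min;
    }
}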

Map Projection Points to Real world Points

I have searched far and wide for an answer to this problem, and I cannot find one so I am asking here.
The Problem:
I have a laser projecting down on a surface from overhead, and I want to project some specific-size shapes onto this surface. In order to do this I need to 'calibrate' the laser to ground it in the real world.
The laser projects in its own coordinate system, ranging from -32000 to 32000 in the x and y directions. I have targets set up on my surface in a rough rectangle (see image below for more details). The targets are measured in millimeters and form their own coordinate system.
I need to be able to take points in millimeters and get them into this range of -32000 to 32000 accurately in an array of scenarios.
Example:
What is the most accurate way of determining the laser space coordinates of the desired point?
Problem 2:
The projection space is not guaranteed to be flat. It could be tilted in any direction. For example, if the bottom (in relation to the example picture) is raised, the real-world coordinates stay the same in 2D, but the measured laser coordinates become more of a trapezoid. See image below.
If anyone has encountered/solved a similar problem or can even begin to point me in the right direction for a solution, it would be greatly appreciated.
Thank you!
I had the same issue in my post right here: https://stackoverflow.com/a/52480400/9130280
In that post I asked my question in terms of pictures, because it was easier to explain, but I applied the solution to device positioning on a surface. It is close to what you are trying to do.
Basically, you have to use the OpenCvSharp 3 library (from NuGet).
First you have to compute a homography matrix. The only coordinates you need to know are those of the corners, so you fill two arrays with the corner points and then use:
homographyMatrix = OpenCvSharp.Cv2.FindHomography(originalPointsList, targetPointsList);
And then to get any point in "millimeters" to its equivalent in laser coordinates:
targetPoint = OpenCvSharp.Cv2.PerspectiveTransform(originalPoint, homographyMatrix);
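Putting the two calls together, here is a minimal end-to-end sketch; the corner coordinates are made-up placeholders, and it assumes OpenCvSharp's Point2d overloads:

using OpenCvSharp;

class LaserCalibration
{
    static void Main()
    {
        // Corner targets measured on the surface, in millimeters (placeholders).
        Point2d[] surfaceCorners =
        {
            new Point2d(0, 0), new Point2d(500, 0),
            new Point2d(500, 300), new Point2d(0, 300),
        };

        // The same four corners as seen by the laser (-32000..32000, placeholders).
        Point2d[] laserCorners =
        {
            new Point2d(-30000, -28000), new Point2d(29500, -28500),
            new Point2d(30000, 27000), new Point2d(-29000, 28000),
        };

        // 3x3 homography mapping millimeter space to laser space. Four point
        // pairs are enough, and it also absorbs the tilt/trapezoid distortion
        // described in Problem 2.
        Mat homography = Cv2.FindHomography(surfaceCorners, laserCorners);

        // Map any millimeter point into laser coordinates.
        Point2d[] laserPoints = Cv2.PerspectiveTransform(
            new[] { new Point2d(250, 150) }, homography);

        System.Console.WriteLine(laserPoints[0]); // center of the rectangle
    }
}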
Let me know if you need more details.

How to animate a simple 3D shape using Helix Toolkit?

I'm trying to animate a simple rectangular shape so that it scales in size in a certain direction. As it is, I am making a rectangle that extends from point A to B. The end goal is to animate it so that it starts at A and is transformed to be the length required to get to B.
I'm pretty new to animation in general, so this process seems finicky to me.
Right now I am:
Creating a vector between the start and end point
Finding the 8 corners of the rectangle along that vector
Creating 2 triangles for each face of the rectangle
Rendering the shape
This is all being done by using a MeshBuilder object and adding the triangles and points individually.
So, the way I'm creating the prism doesn't really help with what I need to do. Ideally, I suppose, I would just create a short prism aligned between the points and then extend it to the right length in the animation.
Any thoughts?
I solved this, in a sense, by scaling the 3D object from a size of 0 in X/Y/Z up to 1.0. So instead of the prism "extending" from A to B, it more or less "grows" to B.
Note that the ScaleTransform3D needed to have its CenterX/CenterY/CenterZ properties set to the coordinates of point A in order for it to be anchored at the correct position.
If I find a better solution, I'll update this answer later.
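Here is a minimal WPF sketch of that approach, assuming you have the GeometryModel3D built from the MeshBuilder and the anchor point A (names are illustrative):

using System;
using System.Windows.Media.Animation;
using System.Windows.Media.Media3D;

static class GrowAnimation
{
    // model: the GeometryModel3D built from the MeshBuilder's mesh.
    // pointA: the anchor point the prism should "grow" from.
    public static void Run(GeometryModel3D model, Point3D pointA)
    {
        // Start at scale 0, anchored at A so the prism stays pinned
        // to the correct position while it grows toward B.
        var scale = new ScaleTransform3D(0, 0, 0, pointA.X, pointA.Y, pointA.Z);
        model.Transform = scale;

        var grow = new DoubleAnimation(0.0, 1.0, TimeSpan.FromSeconds(1));
        scale.BeginAnimation(ScaleTransform3D.ScaleXProperty, grow);
        scale.BeginAnimation(ScaleTransform3D.ScaleYProperty, grow);
        scale.BeginAnimation(ScaleTransform3D.ScaleZProperty, grow);
    }
}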

Detect Rotation of a scanned image in C#

We want a C# solution to correct a scanned image, because it comes in rotated. To solve this problem we must first detect the rotation angle and then rotate the image. That was our first thought. But then we decided image warping would be more accurate, as it would make the scanned image match our template; we could then process it, since we know all the coordinates of our template... I searched for a free SDK or a free solution in C#. Help with this would be great, as it is the last task in our work. Really, thanks to all.
We used the PrimeOCR product to do this. It's not free, but we couldn't find a free program that was comparable.
So, the hard part is to detect the angle of the page.
If you have full control over the template, the simplest way to do this is probably to come up with an easily-detectable symbol (e.g. a solid black circle) and stick 3 of them on the template. Then, detect them (just look for big blocks of pixels with high saturation, in the case of a solid black circle).
So, you'll then have 3 sets of coordinates. If you have a top circle, a left circle, and a right circle, with all 3 circles at different distances from one another, detecting which circle is the top circle should be pretty easy.
Then just call a rotation function. This part is easy and has been done before (e.g. http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-rotate ).
Edit:
I suggested a circle because it's easier to find the center, but a rectangle should work, too.
To be more explicit about how to actually locate the rectangles/circles: take the average brightness value of every a × a group of pixels. If that value is lower than b (for dark markers on a light page), then that a × a group of pixels is part of a marker. a and b are variables you'll want to tune yourself.
Then use flood fill (or, more precisely, connected component labeling) to group the resulting pixels together. The end result should give you your rectangles.
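Here is a rough C# sketch of both halves of this approach; block size, threshold, and names are illustrative, not from the answer:

using System;
using System.Collections.Generic;
using System.Drawing;

static class DeskewSketch
{
    // Scan the page in a-by-a blocks and collect centers of blocks whose
    // average brightness is below b (dark markers on a light page).
    public static List<PointF> FindMarkerBlocks(Bitmap bmp, int a, float b)
    {
        var hits = new List<PointF>();
        for (int y = 0; y + a <= bmp.Height; y += a)
        for (int x = 0; x + a <= bmp.Width; x += a)
        {
            float sum = 0;
            for (int dy = 0; dy < a; dy++)
            for (int dx = 0; dx < a; dx++)
                sum += bmp.GetPixel(x + dx, y + dy).GetBrightness();
            if (sum / (a * a) < b)
                hits.Add(new PointF(x + a / 2f, y + a / 2f));
        }
        return hits; // a real version would connected-component-label these blocks
    }

    // Given two marker centers that are horizontally aligned on the template,
    // the skew angle is the angle of the line between them.
    public static Bitmap Deskew(Bitmap src, PointF left, PointF right)
    {
        double angle = Math.Atan2(right.Y - left.Y, right.X - left.X) * 180.0 / Math.PI;
        var dst = new Bitmap(src.Width, src.Height);
        using (var g = Graphics.FromImage(dst))
        {
            g.TranslateTransform(src.Width / 2f, src.Height / 2f);
            g.RotateTransform((float)-angle); // rotate back by the detected skew
            g.TranslateTransform(-src.Width / 2f, -src.Height / 2f);
            g.DrawImage(src, 0, 0);
        }
        return dst;
    }
}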

How To Produce A 2D Plane Cut from a 3D Image

I would like to write a C# program that generates a 2D image from a rendered 3D object (or objects) by "slicing" the 3D object through a cut-plane. The desired output of the 2D image should be data that can be displayed in a CAD program. For example:
A 3D image is defined by its vertices; these vertices are contained within Point3DList(). A method is then called taking Point3DList as its parameter, e.g. Cut2D(Point3DList). The method then generates the 2D vertices, saves them inside Point2DList(), and these vertices can be read by a CAD program which displays them in 2D form.
My question therefore is whether there is a previous implementation of this in C# (.NET compatible), or whether there are any suggestions on third-party components/algorithms to solve this problem.
Thanks in advance.
You pose an interesting question, in part, by not including a full definition of a 3D shape. You need to specify either the vertices and edges, or an algorithm to obtain the edges from the vertex list. Since an algorithm to obtain the edges from the vertex list devolves into specifying the vertices and edges, I will only cover that case here. My description also works best when the vertices and edges are transformed into a list of flat polygons. To break a vertex list down into polygons, you have to find cycles in the undirected graph that is created by the vertices and edges. For a triangular polygon with vertices A, B, and C you will end up with edges AB, BC, and AC.
The easiest algorithm that I can think of is:
Transform all points so that your 2D cutting plane lies at Z = 0 (rotate, twist, and move as required to line the desired plane up with the XY plane where Z = 0).
For each flat polygon:
a. For each edge, check whether the two vertices have opposite signs on the Z axis (or whether one is 0). If Z0 * Z1 <= 0, this is the case.
b. Use the definition of a line and solve for the point where Z = 0. This gives you the X,Y of the intersection (see the sketch after these steps).
c. You now have a point, line, or polygon that represents the intersection of the original flat polygon with the 2D plane.
d. Fill in the polygon formed by the shapes (if desired). If your 2D rendering package will not create a polygon from the list of vertices, you need to start rendering pixels using scanlines.
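A minimal C# sketch of steps a-b for a single flat polygon, after the transform has put the cutting plane at Z = 0 (types and names are my own illustration):

using System.Collections.Generic;
using System.Numerics;

static class PlaneCut
{
    // Returns the points where the edges of one flat polygon cross Z = 0.
    public static List<Vector2> CrossSection(IList<Vector3> polygon)
    {
        var hits = new List<Vector2>();
        for (int i = 0; i < polygon.Count; i++)
        {
            Vector3 p0 = polygon[i];
            Vector3 p1 = polygon[(i + 1) % polygon.Count]; // wrap to close the polygon

            // Step a: the edge straddles (or touches) the plane when Z0 * Z1 <= 0.
            if (p0.Z * p1.Z > 0) continue;
            if (p0.Z == p1.Z) continue; // edge lies in the plane; handle separately

            // Step b: solve p0 + t * (p1 - p0) = 0 in Z for the crossing point.
            float t = p0.Z / (p0.Z - p1.Z);
            hits.Add(new Vector2(p0.X + t * (p1.X - p0.X),
                                 p0.Y + t * (p1.Y - p0.Y)));
        }
        return hits;
    }
}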
Each of the individual algorithms should be in "Algorithms in C" or similar.
Graphics programs can be quite rewarding when they start to work.
Have Fun,
Jacob
This is more OpenGL-specific than C#-specific, but what I'd do is:
Rotate and transform by a 3D matrix so that the 'slice' you want is 1 metre in 'front' of the camera.
Then set the near and far clipping limits to 1 m and 1.001 m, respectively.
-update- Are you even using OpenGL? If not, you could perform the matrix arithmetic yourself.
It sounds like you want to get the 2D representation of the points of intersection of a plane with a three-dimensional surface or object. While I don't know the algorithm to produce such a thing off hand (I have done very little with 3D modeling applications), I think that is what you are asking about.
I encountered such an algorithm a number of years ago in either a Graphics Gems or GPU Gems or similar book. I could not find anything through a few Bing searches, but hopefully this will give you some ideas.
If it's a 3D texture, can't you just specify 3D texture coordinates (into the texture) for each vertex of a quad? Wouldn't that auto-interpolate the texels?
If you are looking for a third-party implementation, maybe you should explore Coin3d. It is capable of such things as you require, though I am not sure of its exact database format or input requirements. I find your description lacking in that you do not specify the direction from which you want to project the 3D image onto a 2D plane.
