How to match physical slab dimensions to image dimensions [closed] - c#

I need to know how to approach dimension conversion. I have some physical slabs; I'm taking pictures of them and extracting the coordinates of a few special pixels.
Up to here everything seems fine.
The point is that I want to convert actual dimensions to pixels.
So the best approach seems to be calculating a known physical dimension per pixel, with a formula such as:
var referenceWidth = (double)physicalWidth / image.Width;
What would be the best approach?

The easiest way is to use a reference object of known size, like a coin or a ruler. Measuring how many pixels wide the reference object is allows you to compute the size of a pixel:
pixelSize = actual size of reference / size of reference in pixels
If you then mark the corners of the slabs, you can compute their actual size:
actual size of slab = pixelSize * size of slab in pixels
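In C#, that might look like the following minimal sketch (all the measured values below are hypothetical examples, not from the question):

double referenceActualMm = 25.75;   // known diameter of the reference coin, in mm
double referencePixels = 103.0;     // measured width of the coin in the photo, in pixels
double pixelSizeMm = referenceActualMm / referencePixels;   // mm per pixel

double slabWidthPixels = 1240.0;    // pixel distance between two marked slab corners
double slabWidthMm = pixelSizeMm * slabWidthPixels;         // ≈ 310.0 mm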
This assumes the photo is fairly "flat": the objects are more or less two-dimensional, the photo is taken perpendicular to them, the lens is fairly narrow, the lens distortion is fairly small, and so on.
Another alternative: if you know the focal length and the camera-to-object distance, you can compute the pixel size without a reference.
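A back-of-the-envelope sketch of that approach (the pinhole-camera model; the sensor pixel pitch, focal length and distance below are made-up values to substitute with your own):

double pixelPitchMm = 0.0014;   // physical size of one sensor pixel (1.4 µm), in mm
double focalLengthMm = 4.2;     // lens focal length, in mm
double distanceMm = 1500.0;     // camera-to-object distance, in mm

// Footprint of one image pixel on the object plane, by similar triangles.
double mmPerPixel = pixelPitchMm * distanceMm / focalLengthMm;  // 0.5 mm per pixel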
The dpi value is more or less useless for computing actual object sizes: it refers to the print size of the photo, not to the size of any objects in the photo. Unless you used a flatbed scanner, in which case the dpi might actually correspond to real-life sizes.
A possible alternative would be to use existing photogrammetry software to convert a set of photos into a 3D model.

Related

How to calculate a logical curve along a set of values? [closed]

I am trying to take a float value, with arbitrary minimum and maximum possible values, and convert it to a linear scale for representation on a bar-shaped indicator. The problem is that I can't just lerp between the minimum and maximum, because the maximum will always be dramatically higher than the minimum. I have an array of arbitrary values that I want to act as intermediate points between the minimum and maximum. Now I just need to calculate a logical best-fit curve through those points. Each value is always larger than the last, and the rate of increase accelerates the further up you go, but there is no simple formula for this rate of acceleration.
Here's an example of the values that may be used:
6.0, 13.5, 30.0, 75.0, 375.0
where 6 is the minimum, and 375 is the maximum.
If x is exactly one of these values, I would want a simple value depending on how many total values there are, i.e. 0, 0.25, 0.5, 0.75, 1. The issue is calculating the in-between values.
How would I go about achieving this? I apologize if a question like this has already been asked; it feels like a common problem, but I didn't know what to search for. If it has been answered before, please just point me in the right direction.
Reposting my comment as an answer, as requested.
If the curve is of the form y(x) = k^(ax+b), take logs of both sides and you have a linear relation. As pointed out, though, this is maths, not programming.
I'd pick k = 2, e or 10 for easier implementation; a and b you work out from the data.
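A minimal C# sketch of that idea, interpolating linearly in log space between the anchor values (the ToLinear name and the clamping at the ends are my own choices, not from the question):

using System;

static class CurveScale
{
    // Anchor values from the question; they map to 0, 0.25, 0.5, 0.75, 1.
    static readonly double[] Values = { 6.0, 13.5, 30.0, 75.0, 375.0 };

    public static double ToLinear(double x)
    {
        int n = Values.Length;
        if (x <= Values[0]) return 0.0;
        if (x >= Values[n - 1]) return 1.0;

        for (int i = 0; i < n - 1; i++)
        {
            if (x <= Values[i + 1])
            {
                // Position of x within segment i, measured in log space.
                double t = (Math.Log(x) - Math.Log(Values[i])) /
                           (Math.Log(Values[i + 1]) - Math.Log(Values[i]));
                return (i + t) / (n - 1);
            }
        }
        return 1.0; // unreachable
    }
}

// CurveScale.ToLinear(30.0) -> 0.5   (exact anchor)
// CurveScale.ToLinear(50.0) -> ~0.64 (between the 30 and 75 anchors)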

My application isn't using much memory (C#) [closed]

I'm currently designing a 2D game using a game engine I created, and I've decided to check how much memory I'm using every second.
I currently have a game screen that contains a total of 4097 game objects; each object contains, at the very least, a sprite (bitmap) that is rendered to the screen each frame. Each sprite is a 32x32 pixel image.
The result is that I'm apparently using roughly 1.10 MB. Is this too much, or am I doing okay? What other things should I take into consideration?
Also, just to show, this is how I'm checking the amount of memory I'm using:
double mb = MathHelper.ConvertBytesToMegabytes(GC.GetTotalMemory(true));
Console.WriteLine("Memory: " + mb);
and the "ConvertBytesToMegabytes" method:
public static double ConvertBytesToMegabytes(long bytes)
{
    return (bytes / 1024f) / 1024f;
}
GC.GetTotalMemory:
A number that is the best available approximation of the number of bytes currently allocated in managed memory.
If you are using GDI+, I assume you are using the Image class. However, its data is not located in managed memory and is therefore not reflected in GC.GetTotalMemory. Your managed data costs 1.1 MB, and that's completely okay for modern machines.
The memory cost of your bitmaps can be calculated quite easily. Assuming all the sprites are present in memory, are 32x32 pixels, and use 32-bit pixels, that gives 16,781,312 bytes of pixel data, or 16 MB. You should rely on this calculation more than on memory reports from the Process class.
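The arithmetic as code (the constants come straight from the question):

const int spriteCount = 4097;
const int bytesPerPixel = 4;                     // 32-bit ARGB
long pixelBytes = (long)spriteCount * 32 * 32 * bytesPerPixel;
Console.WriteLine(pixelBytes);                   // 16781312
Console.WriteLine(pixelBytes / 1024.0 / 1024.0); // ~16.0 MB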
I suppose your initial concern was that the reported amount of memory seemed too low to store all the bitmap data. As you can see, you simply used the wrong method to obtain it. For other (more or less unreliable and confusing) ways to measure memory usage, refer to this question.
Unless the engine is culling some of the 4097 game objects for camera reasons, you should be using 12-17 MB, assuming a typical 32x32 bitmap size of ~3 KB.

Measuring waist of a person using Microsoft Kinect [closed]

I am trying to create an application which is able to accurately measure the body parameters of a person like height, shoulder width and waist.
Currently I have been able to determine the height and the shoulder width of a person using skeletal tracking.
Can anybody help me out with how to measure the waist of a person using a Kinect?
I am coding in C# in Visual Studio.
Thanks a lot in advance!
It is hard to give you exact code right now, but here is the recipe:
First you need to understand what this entails. Every person has different proportions: someone has a wide waist but is fit (athletic), someone has a wide waist and a big belly (a fat figure), another has a wasp waist. There are many such variations...
So you have to capture the waist over time while the person rotates around their own axis. Then convert the measured width values into a model. After that you can read off the circumference of the waist cross-section (as from a blueprint).
EDIT:
In detail:
A person turning around (you know it, because the waist width values change: front-left-back-right-front, with many samples between each part of the rotation) gives you the measurements over time for the pattern.
Split the whole rotation time into a number of samples. Each sample then corresponds to a proportional angle of the turn (8 samples per rotation means one sample covers 45° [360°/8 = 45°]). Now imagine a circle and split it into 8 chords, each chord having the length of the width value measured during the rotation.
If the sample count is high enough, you can now reckon the circumference of the polygon. If the count of samples is too low, you can interpolate (or use another method to fill in) the "missing" samples. The more samples you have, the more accurate the result.
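A minimal C# sketch of that polygon idea, with one simplification of my own: half of each measured width is treated as the cross-section radius at that sample angle, and the resulting boundary points are joined into a polygon whose perimeter approximates the circumference. The sample widths are invented:

using System;

static double EstimateCircumference(double[] widths)
{
    // widths: waist widths sampled at uniformly spaced rotation angles.
    int n = widths.Length;
    double perimeter = 0.0;
    for (int i = 0; i < n; i++)
    {
        double a0 = 2 * Math.PI * i / n;
        double a1 = 2 * Math.PI * (i + 1) / n;
        double r0 = widths[i] / 2.0;              // half-width as radius
        double r1 = widths[(i + 1) % n] / 2.0;

        // Length of the polygon edge between consecutive boundary points.
        double dx = r1 * Math.Cos(a1) - r0 * Math.Cos(a0);
        double dy = r1 * Math.Sin(a1) - r0 * Math.Sin(a0);
        perimeter += Math.Sqrt(dx * dx + dy * dy);
    }
    return perimeter;
}

// 8 samples per rotation (one every 45°), widths in meters:
// EstimateCircumference(new[] { 0.34, 0.30, 0.26, 0.30, 0.34, 0.30, 0.26, 0.30 })
//   -> roughly 0.93 m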

C# How to get location of smaller image inside larger image [closed]

I am new to image processing, so please forgive my ignorance. I am trying to come up with a way to get the coordinates of a sub-image inside its containing larger image. For example, I have a large image of the New York skyline and one of just the Empire State Building. The large picture is always a high-quality image; the small picture is supplied by a user's camera scanning a printed version of the larger image, so the quality, scale and colors of the smaller image will not perfectly match those of the larger one. What I am looking for are the X, Y coordinates from the top-left corner of the larger image to the top-left corner of the smaller image, as if the smaller image were a puzzle piece placed in the larger image. It would be much appreciated if someone could point me in the right direction. Thanks
EDIT
Thank you for the feedback. I have come to realize that this might be a very difficult task. I ended up taking a different approach: I will be embedding recognizable shapes in the aforementioned print media and use OpenCvSharp (a free C# wrapper around OpenCV) to detect them.
To give you just one possible direction:
What you might be facing here is a flavor of pattern detection and/or recognition (a.k.a. machine learning). I suggest looking for ready-made implementations, as this is a complicated task.
The basic idea is that you train or teach an algorithm about the features of objects of interest, and the algorithm then searches images for anything that matches your pattern.
There are many algorithms out there, each with its own approach. As a starting point, you could look at what a well-known image processing framework, OpenCV, can offer:
http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html
EDIT:
An OpenCV wrapper for .NET C# (since OpenCV itself is a C++ project):
http://www.emgu.com/wiki/index.php/Main_Page
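For completeness, here is a minimal template-matching sketch using OpenCvSharp (the wrapper the asker ended up with). Plain template matching is not scale- or rotation-invariant, so this only works when the sub-image is close to the size it has in the large image; the feature/homography approach linked above handles scale changes better. The file paths are placeholders:

using System;
using OpenCvSharp;

using var scene = Cv2.ImRead("skyline.png", ImreadModes.Grayscale);       // large image
using var templ = Cv2.ImRead("empire_state.png", ImreadModes.Grayscale);  // sub-image
using var result = new Mat();

// Slide the template over the scene and score every position.
Cv2.MatchTemplate(scene, templ, result, TemplateMatchModes.CCoeffNormed);
Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);

// maxLoc is the top-left corner of the best match, in large-image coordinates.
Console.WriteLine($"Best match at ({maxLoc.X}, {maxLoc.Y}), score {maxVal:F3}");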
This is a very hard and big project to do.
BTW, you can get the color of a pixel with the GetPixel() method.
The following code creates a 200x200 image and gets the color of the pixel at coordinates (100,100):
Bitmap bmp = new Bitmap(200,200);
Color c = bmp.GetPixel(100,100);
To scan an image efficiently you must use pointers (unsafe code) rather than GetPixel(); otherwise performance will be far too slow.
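A sketch of that pointer approach using Bitmap.LockBits (requires compiling with unsafe code enabled); the per-pixel processing is left as a placeholder:

using System.Drawing;
using System.Drawing.Imaging;

static unsafe void ScanPixels(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            for (int x = 0; x < data.Width; x++)
            {
                byte b = row[x * 4];      // pixel layout is BGRA
                byte g = row[x * 4 + 1];
                byte r = row[x * 4 + 2];
                byte a = row[x * 4 + 3];
                // ...process the pixel here...
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}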

Recognizing rectangles from varying lines in c# [closed]

Given an image with an unknown number of rectangles that are separated by predefined lines at unknown coordinates (the lines in the image only represent the coordinates where the predefined lines should be).
Every rectangle should become a separate System.Drawing.Bitmap and put into an array of Bitmaps.
The rectangles will always be rectangular and will all have the same dimensions (so if you can find one proper rectangle, you may assume the rest are the same).
You may assume that all the lines in all images are a predefined fixed width (e.g. 5 pixels)
The grid will always be parallel & perpendicular to the sides of the image.
All lines will go from top to bottom, or side to side, even if it doesn't look like it in the image.
The number of rectangles is not fixed (not always 4x4 as in the images).
These images are meant to find the rectangles, which will then be cut from the original image. But if I can cut these images in the proper rectangles, I should be able to do the same for the original image.
I can imagine that this question is rather hard to understand; I've had a hard time trying to explain it. All questions are more than welcome.
I'm not quite sure what your question really is, so I'm assuming you are looking for an algorithm to detect your rectangles.
From the images it looks like you can separate the rectangles' border lines from the background texture with some kind of binarization filter.
I would try a Hough transform on your images to detect the rectangles, and look for similarly sized rectangles in Hough space to narrow down the results. The Hough transform is not very complicated to implement, and a bit of googling will get you sample code as well.
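Since the question guarantees the lines are axis-aligned, a full Hough transform may even be overkill: a simple projection profile over a binarized image can find the line positions. A rough sketch (the 0.5 brightness threshold and the 90% coverage ratio are arbitrary choices of mine, and GetPixel is slow, so swap in a LockBits scan for real use):

using System.Collections.Generic;
using System.Drawing;

static List<int> FindVerticalLineColumns(Bitmap bmp)
{
    // A column that is almost entirely dark is part of a vertical grid line.
    var columns = new List<int>();
    for (int x = 0; x < bmp.Width; x++)
    {
        int dark = 0;
        for (int y = 0; y < bmp.Height; y++)
            if (bmp.GetPixel(x, y).GetBrightness() < 0.5f)
                dark++;

        if (dark >= 0.9 * bmp.Height)
            columns.Add(x);   // adjacent columns belong to the same ~5px line
    }
    return columns;
}

// Run the same logic over rows for the horizontal lines, then cut each
// rectangle out with Bitmap.Clone(rect, bmp.PixelFormat).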
