Smooth an existing "image" in C#

How can I smooth a Graphics object in C#? To be more precise, I need to run a smoothing pass at a very precise moment in the generation of my Graphics object, over the whole object. The image is coloured.
I am flexible in terms of input classes (Graphics, etc.). I only suggested Graphics as it is a central class for image manipulation in C#.
Graphics.SmoothingMode is out of context for what I need to do, and I imagine Wu's algorithm only applies to drawing lines in greyscale.

Have a look at the image processing features of AForge.NET. It is an open source framework that includes a lot of useful image processing capabilities, and you will find many smoothing filters among them.
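For example, a minimal sketch using AForge.NET's Gaussian blur (the file names, sigma and kernel size are placeholders, and the filter expects a 24/32 bpp colour or 8 bpp greyscale bitmap):

    using System.Drawing;
    using AForge.Imaging.Filters;

    // Sketch: smooth an existing bitmap with AForge.NET's GaussianBlur filter.
    class SmoothingExample
    {
        static void Main()
        {
            Bitmap source = (Bitmap)Image.FromFile("input.png");

            // sigma = 1.5, kernel size = 5 -- tune both for the amount of smoothing
            GaussianBlur blur = new GaussianBlur(1.5, 5);
            Bitmap smoothed = blur.Apply(source);

            smoothed.Save("input-smoothed.png");
        }
    }

The same namespace also contains Mean and Median filters, among others, if a Gaussian is not what you want.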

I think you used the wrong words to describe your problem. Anti-aliasing refers to (as Hand mentioned) the point in time when individual objects are drawn for the first time, for instance when drawing a diagonal line on an empty surface.
You already have an image, and you want that image to be smoothed. I suggest you detect edges in the image using a standard algorithm, then smooth those edges. I am not familiar with the exact process to do this myself, sadly.

Related

Drawing scatter plot is too slow with GDI

I am drawing a scatter plot with the help of GDI.
But when there are many lines, it takes about two seconds to draw.
I looked into SlimDX and SharpDX and tried their 2D APIs.
That only cut the time in half. Are there any better tools I can use to speed up the drawing? I am simply using g.DrawLine. I also heard that there is a technique that lets me draw the scatter in memory and then put it on the screen. Any ideas would be greatly appreciated. Thank you.
[Are] there any better tools that I can use to speed up the drawing?
First, use g.DrawLines instead of g.DrawLine. This means there is one transition from user code to graphics code instead of one per line.
Second, draw into an off-screen bitmap (create a Graphics from a Bitmap), then write the bitmap to the display. See "how to draw a line on an image?" for an example of drawing a line on a bitmap. This means the lines are drawn once and refreshing the display is a single fast bit blit.
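A rough sketch of both suggestions combined (plotPanel and points are placeholders for your own control and data; the method would live in your form's setup code):

    using System.Drawing;
    using System.Windows.Forms;

    // Sketch: draw all segments once into an off-screen bitmap,
    // then just blit that bitmap whenever the control repaints.
    static void BuildPlot(Control plotPanel, PointF[] points)
    {
        Bitmap backBuffer = new Bitmap(plotPanel.Width, plotPanel.Height);
        using (Graphics g = Graphics.FromImage(backBuffer))
        {
            g.Clear(Color.White);
            g.DrawLines(Pens.Black, points);   // one call for the whole polyline
        }

        plotPanel.Paint += (s, e) => e.Graphics.DrawImageUnscaled(backBuffer, 0, 0);
        plotPanel.Invalidate();
    }

DrawLines draws one connected polyline; if your segments are not connected, you can still batch them by drawing them all into the same off-screen bitmap.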
Using the DirectX ports (e.g. SharpDX and SlimDX) is possible but probably overkill unless you are dealing with extremely complex scatter plots. They are (generally) geared more toward 3D vector or 2D bit-blit based graphics.

Pixelated Drawing and 3D representations

I am writing a program to imitate natural physics. I want to know whether there is a better way to draw an object than overriding the OnDraw method and calling FillRectangle(x, y, 1, 1) for each pixel.
Is there a way to do something similar using DirectX or OpenGL? To my knowledge the Graphics class does not utilize the video card of one's computer (please correct me if I am wrong).
I would also like some thoughts on creating a 3D environment, using mathematical calculations to work out the relative quadrant sizes so that objects appear to be farther away than they really are (as a monitor is only 2D), or closer.
Yes. Drawing pixel by pixel with FillRectangle will be very inefficient and slow things down a huge amount. As you say, you should use a graphics rendering system such as DirectX or OpenGL. The choice of which is up to you. A simple web search will turn up many tutorials on how to get started with 3D graphics.
OpenGL focuses on "draw me this object in space" and takes care of rendering it, taking advantage of your graphics card if possible. You do not worry about the pixels; you specify dimensions, camera angles, shaders, etc.
You can draw pixels with OpenGL, but that is not the 'correct' way to draw 3d graphics with it.
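As an aside, if you do stay with GDI+ for a while, a common stopgap (not the DirectX/OpenGL route recommended above) is to write pixels directly into a Bitmap with LockBits and draw that bitmap once per frame; a rough sketch, with a placeholder colour formula:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    // Sketch: set pixels in memory, then hand the whole bitmap to Graphics once.
    // Far cheaper than one FillRectangle call per pixel, though still CPU-bound.
    static Bitmap RenderFrame(int width, int height)
    {
        var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);
        var rect = new Rectangle(0, 0, width, height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.WriteOnly, bmp.PixelFormat);

        var row = new int[width];
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
                row[x] = unchecked((int)0xFF000000) | ((x ^ y) & 0xFF); // placeholder colour
            Marshal.Copy(row, 0, data.Scan0 + y * data.Stride, width);
        }

        bmp.UnlockBits(data);
        return bmp;   // e.Graphics.DrawImageUnscaled(bmp, 0, 0) in OnPaint
    }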
EDIT in response to Vasa's questions:
I believe OpenGL does what's best based on your graphics card capabilities and drivers. In general OpenGL isn't going to be your best option for drawing direct pixels. BUT remember that
Pixels are different sizes on different machines. Are you expecting to just live with this? Or live with a big display on low-res screens and a tiny one on high-res screens? There may be multiplications involved, and if you use literal pixels then once you start multiplying for different screens you are going to get artefacts and inaccuracies.
You want a direct mapping of X to pixels. OpenGL uses float values. They aren't integer 1-to-1 mappings, but they do use a direct proportion; if you choose a scale, OpenGL is not going to suddenly start distorting ratios.
The important thing is proportions, not absolute pixels. Although I accept that it's possible for your case to be different.
See this for 2d drawing:
http://basic4gl.wikispaces.com/2D+Drawing+in+OpenGL

Separation and analysis of an image

Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5, C#, and MySQL for the database. I have a PictureBox on a form and 10-12 buttons (each with some image manipulation function). On clicking one of the buttons, an OpenFileDialog is shown where the user can select the file to provide to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program.
What I intend is for the program to hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me the approach to take? Any help or suggestions please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe you can use Emgu CV (a .NET wrapper for the OpenCV library) to solve the first part of the problem. To do that, you have to consider the white pixels as the background and the black pixels as objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by white pixels and the background by black pixels.
At least for your example case, a very simple algorithm would work.
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it and separate them. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run into the edge of the image, it's not a hole. If you don't, it might be a hole, or a loop in the trace. Use a threshold on the hole size to determine whether it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump to 1.
When there are no more black pixels in consideration in step 1, you're done.
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a given color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things, for instance colouring everything "not in consideration" red. It also makes for an interesting visualization if you watch it in real time!
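As an illustration of the flood-fill step used above, here is a minimal 4-connected, queue-based version in C# (GetPixel/SetPixel are used for clarity and are slow; the colour arguments are up to you):

    using System.Collections.Generic;
    using System.Drawing;

    // Minimal 4-connected flood fill for the steps above.
    // Switch to LockBits if the images are large.
    static List<Point> FloodFill(Bitmap bmp, Point start, Color target, Color replacement)
    {
        var filled = new List<Point>();
        if (target.ToArgb() == replacement.ToArgb()) return filled;

        var queue = new Queue<Point>();
        queue.Enqueue(start);

        while (queue.Count > 0)
        {
            Point p = queue.Dequeue();
            if (p.X < 0 || p.Y < 0 || p.X >= bmp.Width || p.Y >= bmp.Height) continue;
            if (bmp.GetPixel(p.X, p.Y).ToArgb() != target.ToArgb()) continue;

            bmp.SetPixel(p.X, p.Y, replacement);   // mark as processed
            filled.Add(p);

            queue.Enqueue(new Point(p.X + 1, p.Y));
            queue.Enqueue(new Point(p.X - 1, p.Y));
            queue.Enqueue(new Point(p.X, p.Y + 1));
            queue.Enqueue(new Point(p.X, p.Y - 1));
        }
        return filled;
    }

The returned list is one connected region: a trace when you start from a black pixel, a candidate hole or loop when you start from a white one, and its size gives you the threshold test from step 3.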

How to detect blobs and crop them into PNG files?

I've been working on a web app, and I'm stuck on a problematic issue.
I'll try to explain what I'm trying to do.
Here you see the first big image, which has green shapes in it.
What I want to do is crop those shapes into different PNG files and make their backgrounds transparent, like the example cropped images below the big one.
The first image will be uploaded by the user, and I want to crop it into pieces like the example cropped images above. It could be done with the GD library in PHP or by server-side software written in Python or C#, but I don't know what this operation is called, so I don't know what to Google to find information. It has something to do with computer vision: detecting blobs and cropping them into pieces, etc.
Any keywords or links would be helpful.
Thanks for the help.
A really easy way to do this is to use Flood Fill/Connected Component Labeling. Basically, this is just a greedy algorithm that groups any pixels that are the same or similar in color.
This is definitely not the ideal way to detect blobs and is only going to be effective in limited situations. However, it is much easier to understand and code and might be sufficient for your purposes.
OpenCV provides a function named cv::findContours to find connected components in an image. If it's always green on white, you want to cv::split the image into channels and apply cv::threshold to the blue or the red channel (those are white in the white regions and near black in the green regions) with THRESH_BINARY_INV (because you want to extract the dark regions), then use cv::findContours to detect the blobs. You can then compute the bounding rectangle with cv::boundingRect, create a new image of that size, and use the contour you found as a mask to fill the new image.
Note: these are links to the C++ documentation, but those functions should be exposed in the Python and C# wrappers - see http://www.emgu.com for the latter.
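For instance, a rough Emgu CV sketch of that pipeline (class and method names are from Emgu CV 3.x; the file names, the choice of the red channel and the threshold of 128 are assumptions to tune for your images):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;

    // Sketch: threshold one channel, find the blobs as external contours,
    // and save each blob's bounding box as its own image.
    class BlobCropper
    {
        static void Main()
        {
            var src = new Image<Bgr, byte>("shapes.png");
            Image<Gray, byte>[] channels = src.Split();            // [0]=B, [1]=G, [2]=R
            Image<Gray, byte> mask = channels[2].ThresholdBinaryInv(new Gray(128), new Gray(255));

            using (var contours = new VectorOfVectorOfPoint())
            {
                CvInvoke.FindContours(mask, contours, null,
                                      RetrType.External, ChainApproxMethod.ChainApproxSimple);

                for (int i = 0; i < contours.Size; i++)
                {
                    Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
                    Image<Bgr, byte> piece = src.Copy(box);        // crop the blob's bounding box
                    // For a transparent background, use the corresponding region
                    // of 'mask' as the alpha channel before saving (omitted here).
                    piece.Save("blob_" + i + ".png");
                }
            }
        }
    }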
I believe this Wikipedia article covers the problem really well: http://en.wikipedia.org/wiki/Blob_detection
Can't remember any ready-to-use solutions though (-:
It really depends on what kinds of images you will be processing.
As Brian mentioned, you could use Connected Component Labeling, which is usually applied to binary images, where the foreground is denoted by white pixels and the background by black pixels (or the opposite). The problem then is how to transform the original image into a binary one. If all images are like the example you provided, this is straightforward and can be accomplished with thresholding. OpenCV provides useful methods:
Threshold
FindContours for finding contours of connected components
DrawContours for extracting each component individually into a separate image
For more complex images, however, all bets are off.

C# Create "wireframe"/3D "map"

image http://prod.triplesign.com/map.jpg
How can I produce a similar output in C# Windows Forms in the easiest way?
Is there a good library for this purpose?
I just need to be pointed in the direction of which graphics library is best for this.
You should just roll your own using a 3D graphics library. You could use DirectX. If you are using WPF it is built in; look up Viewport3D. http://msdn.microsoft.com/en-us/magazine/cc163449.aspx
In graphics programming, what you are building is a very simple version of a heightmap. I think building your own would give you greater flexibility in the long run.
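A bare-bones Viewport3D heightmap sketch, just to show the moving parts (needs a WPF project; the grid size, the height function and the camera placement are placeholder values):

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;

    // Sketch: build a grid of 3D points, triangulate it into a mesh,
    // and let a directional light shade it inside a Viewport3D.
    public static class HeightmapDemo
    {
        [STAThread]
        public static void Main()
        {
            const int n = 40;                                     // grid resolution (placeholder)
            Func<double, double, double> height =                 // placeholder surface
                (px, pz) => Math.Sin(px * 0.4) * Math.Cos(pz * 0.4) * 3;

            var mesh = new MeshGeometry3D();
            for (int z = 0; z < n; z++)
                for (int x = 0; x < n; x++)
                    mesh.Positions.Add(new Point3D(x, height(x, z), z));

            for (int z = 0; z < n - 1; z++)
                for (int x = 0; x < n - 1; x++)
                {
                    int i = z * n + x;                            // two triangles per grid cell
                    mesh.TriangleIndices.Add(i); mesh.TriangleIndices.Add(i + n); mesh.TriangleIndices.Add(i + 1);
                    mesh.TriangleIndices.Add(i + 1); mesh.TriangleIndices.Add(i + n); mesh.TriangleIndices.Add(i + n + 1);
                }

            var material = new DiffuseMaterial(Brushes.SteelBlue);
            var group = new Model3DGroup();
            group.Children.Add(new GeometryModel3D(mesh, material) { BackMaterial = material });
            group.Children.Add(new DirectionalLight(Colors.White, new Vector3D(-1, -2, -1)));

            var viewport = new Viewport3D
            {
                Camera = new PerspectiveCamera(new Point3D(n * 1.5, n, n * 1.5),
                                               new Vector3D(-1, -0.7, -1), new Vector3D(0, 1, 0), 45)
            };
            viewport.Children.Add(new ModelVisual3D { Content = group });

            new Application().Run(new Window { Title = "Heightmap", Content = viewport });
        }
    }

To get closer to the linked picture you would add line geometry for the wireframe and overlay the axis labels yourself.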
So a single best library doesn't exist. There are plenty of them, and some serve different purposes. Here is a small list of possibilities:
Tao: Make anything yourself with OpenGL
OpenTK: The successor of the Tao framework
Dundas: One of the best, but quite expensive (lacks real-time performance)
Nevron: Quite good, and much cheaper (also has problems with real-time data)
National Instruments: Expensive, not the best looking, but very good with real-time data.
... Others have probably had different experiences.
Check out the Microsoft Chart Controls library.
Here's how I'd implement this using OpenGL.
First up, you will need a wrapper to import the OpenGL API into C#. A bit of Googling led me to this:
CsGL - OpenGL .NET
There are a few example programs available that demonstrate how the OpenGL interface works. Play around with them to get an idea of how the system works.
To implement the 3D map:
1. Create an array of vectors (not the std::vector/List type, but x, y, z triplets) where x and y lie in the horizontal plane and z is the height.
2. Set the Z compare to less-than-or-equal (so the overlaid line segments are visible).
3. Create a list of quads whose vertices are taken from the array in (1).
4. Calculate the colour of each quad: use the dot product of the quad's normal and a light source direction to get a shade value, i.e. a normal·light of 1 is black and -1 is white (a small sketch of this step follows below).
5. Create a list of line segments, again from the array in (1).
6. Calculate the screen positions of the various projected axis points.
7. Set up your camera and world->view transform (use the example programs to get an idea of how to do this).
8. Render the quads and lines; OpenGL will do the transformation from world co-ordinates (the list in (1)) to screen space. Draw the labels; you might not want to do this using OpenGL, as the labels shouldn't scale with distance from the camera, otherwise they could get too small to read.
Since the above is quite a lot of stuff, there isn't really the space (or time on my part) to post complete working code (but someone else might add something if you're lucky). You could break the task down and ask questions about the parts you don't quite understand.
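The shading from step 4, at least, is compact enough to sketch on its own (plain C#, no OpenGL calls; the Vec3 type and the light direction are stand-ins):

    using System;

    // Step 4 above: shade a quad from the angle between its face normal
    // and the light direction. Vec3 is a stand-in vector type.
    struct Vec3
    {
        public double X, Y, Z;
        public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

        public static Vec3 operator -(Vec3 a, Vec3 b)
        {
            return new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
        }

        public static Vec3 Cross(Vec3 a, Vec3 b)
        {
            return new Vec3(a.Y * b.Z - a.Z * b.Y,
                            a.Z * b.X - a.X * b.Z,
                            a.X * b.Y - a.Y * b.X);
        }

        public static double Dot(Vec3 a, Vec3 b)
        {
            return a.X * b.X + a.Y * b.Y + a.Z * b.Z;
        }

        public Vec3 Normalized()
        {
            double len = Math.Sqrt(Dot(this, this));
            return new Vec3(X / len, Y / len, Z / len);
        }
    }

    static class QuadShading
    {
        // Brightness in 0..1: a normal.light dot of -1 gives 1 (white),
        // a dot of +1 gives 0 (black), matching the convention in step 4.
        public static double Shade(Vec3 a, Vec3 b, Vec3 c, Vec3 lightDirection)
        {
            Vec3 normal = Vec3.Cross(b - a, c - a).Normalized();
            double facing = Vec3.Dot(normal, lightDirection.Normalized());
            return Math.Max(0.0, -facing);   // light direction points toward the surface
        }
    }

You would call Shade with three corners of each quad and multiply the result into the quad's base colour before passing it as the quad's vertex colour.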
Have you tried the Gigasoft data visualization tools? (They are not free.)
And you can check out their online wireframe demo here.
