I am going to create an application where users can edit their pictures: color balance, grayscale effect, invert effect, red-eye fix, etc.
My application will be quite similar to the ACDSee software.
So I want to know: can my application be called image processing software?
In my point of view, image processing means playing with images and enhancing them.
Thanks in advance
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.[1]
So I think you can call your software image processing software, since it fits what this definition proposes.
[1] http://en.wikipedia.org/wiki/Image_processing
Yes, an application that does color balance, grayscale, invert and red-eye fix can reasonably be called image processing software. In fact, I would already call even a single one of those an image-processing algorithm.
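For what it's worth, even the invert effect on its own is a textbook image-processing operation. Here is a minimal C#/GDI+ sketch (assuming a .NET desktop app; the Effects class and method name are purely illustrative):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class Effects
{
    // Inverts every pixel of the source image using a GDI+ ColorMatrix.
    public static Bitmap Invert(Image source)
    {
        var invert = new ColorMatrix(new float[][]
        {
            new float[] { -1,  0,  0, 0, 0 },
            new float[] {  0, -1,  0, 0, 0 },
            new float[] {  0,  0, -1, 0, 0 },
            new float[] {  0,  0,  0, 1, 0 },
            new float[] {  1,  1,  1, 0, 1 },   // shift R, G, B back into the 0..1 range
        });

        var result = new Bitmap(source.Width, source.Height);
        using (var g = Graphics.FromImage(result))
        using (var attributes = new ImageAttributes())
        {
            attributes.SetColorMatrix(invert);
            g.DrawImage(source,
                        new Rectangle(0, 0, source.Width, source.Height),
                        0, 0, source.Width, source.Height,
                        GraphicsUnit.Pixel, attributes);
        }
        return result;
    }
}
```

Swapping in a different ColorMatrix gives you grayscale or simple color-balance adjustments with the same drawing code.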
Related
My Scenario:
I have a camera focused at a white screen, which is taking a live feed and displaying that feed in a picture box by virtue of a FrameReceived event.
I need to kick off a process to crop the image if something is inserted between the camera and the screen.
This process needs to start when the image first changes so I need to compare one frame with another to see if anything has changed.
My Efforts
I have tried hashing the images and comparing them, which doesn't work, as the frames are never exactly the same.
I have tried looping through each pixel, comparing different values such as brightness, hue, etc., but this is too slow.
I have tried looping through with a sub-sample, but it is either too slow or too unreliable.
I even tried what I like to call the "Twisted Pair Solution", where I inverted one image, added the two together and checked the result, but this was far too complex and slow.
My Environment
Visual Studio 2012 (2010 is available if necessary)
Ueye camera
C#
The images are of type System.Drawing.Bitmap
Notes
The biggest problem seems to be that getting a reliable result takes longer than the time we have between frames at a reasonable frame rate, so the calculation is not finished before a new frame comes in. That means the variable I use to store the previous image is overwritten before I have finished using it, threads pile up one after another, and it causes a whole lotta shakin'.
I would recommend using an image processing library, because the default .NET image processing tools are limited. You can use a library such as AForge.NET: http://www.aforgenet.com/framework/.
Then you can, for example, subtract image 1 from image 2 and sum the differences. If the sum is below a threshold (you choose the one that fits your needs), the frames are considered identical.
Or you can dig deeper and try this: http://thecsharper.com/?p=94
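To illustrate the subtract-and-threshold idea without depending on any particular library, here is a rough sketch using plain System.Drawing and LockBits; the sampling step and threshold value are arbitrary and would need tuning against your actual feed:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FrameComparer
{
    // Returns true if the mean absolute per-byte difference between the two
    // frames exceeds the threshold. Both bitmaps must be the same size.
    public static bool HasChanged(Bitmap previous, Bitmap current, double threshold = 10.0)
    {
        Rectangle rect = new Rectangle(0, 0, previous.Width, previous.Height);
        BitmapData prevData = previous.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        BitmapData currData = current.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        try
        {
            int length = prevData.Stride * prevData.Height;
            byte[] prevBytes = new byte[length];
            byte[] currBytes = new byte[length];
            Marshal.Copy(prevData.Scan0, prevBytes, 0, length);
            Marshal.Copy(currData.Scan0, currBytes, 0, length);

            long sum = 0;
            long count = 0;
            for (int i = 0; i < length; i += 16)   // sub-sample: compare only every 16th byte to keep this fast
            {
                sum += Math.Abs(prevBytes[i] - currBytes[i]);
                count++;
            }
            return (double)sum / count > threshold;
        }
        finally
        {
            previous.UnlockBits(prevData);
            current.UnlockBits(currData);
        }
    }
}
```

You would also want to skip incoming frames while a comparison is still running, rather than queuing them, so the threads described in the question don't pile up.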
This may seem like an odd question, and I don't know the internals of image formats, so I'll just go ahead and ask...
I'm making a minesweeper game (this is relevant to other things too) which uses a grid of buttons, and I'm adding a sprite to each button via BackgroundImage. If the grid is 9x9 it's fine; at 15x15 it slows down, and at 30x30 you can visibly see each button being loaded.
That brings me to my question: which image format would load fastest? Obviously file size plays a part in loading speed, but I am asking whether, say, a JPEG with the same file size as a GIF will load faster, or a BMP, PNG, etc.
I'm asking this as to see if I can get my grid to load faster using a different format.
Thanks!
You want an image format that paints faster. There's only one, the one whose pixel format directly matches the video adapter's setting so that the pixel data can be directly copied without needing format adjustments.
On most any modern machine, that's PixelFormat.Format32bppPArgb. It draws ten times faster than all the other ones.
You won't get that format when loading an image from a resource or a file, you have to create it.
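A minimal sketch of that conversion (the helper class below is not a framework API, just the usual way to do it with GDI+):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class SpriteLoader
{
    // Copies a loaded image into a pre-multiplied-alpha 32bpp bitmap,
    // which GDI+ can blit to the screen without per-pixel format conversion.
    public static Bitmap ToPArgb(Image source)
    {
        Bitmap result = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppPArgb);
        using (Graphics g = Graphics.FromImage(result))
        {
            g.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height));
        }
        return result;
    }
}

// Usage: load and convert the sprite once, then reuse the same bitmap for every cell.
// Bitmap sprite = SpriteLoader.ToPArgb(Image.FromFile("mine.png"));
```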
Do note that this still won't give you stellar paint speed if every cell in the grid is a control. Particularly if it is a button, they are very expensive to draw since they support transparency effects. You'll only get ahead here if you draw the entire grid in one Paint event handler. Like the form's.
A completely different approach is to cover up the visible delays with this hack.
Use the 8-bit PNG or GIF format and reduce the number of colors in the palette. Some image programs, such as Photoshop, allow you to save the image for the web and fine-tune the image settings. By reducing the color palette from 256 colors to something like 32, you greatly reduce the size of the file. The fewer colors the image has, the smaller the file size is going to be.
PNG also has an "interlaced" option, similar to GIF's. You may want to turn this feature off so that the full image loads quicker.
Because the 8-bit PNG and GIF formats have the potential to produce much smaller files, keep this in mind when creating graphics and illustrations for your application. Keep the number of colors to a minimum and use flat graphics instead of photographs. This way you can create images with 16-color palettes, keeping the file size extremely small and fast to load.
Best Regards
Are you reloading the image every time you paint a button? If so, there's your problem - solve it with a cache.
Are you painting the image at its native size? Runtime resampling can create a performance hit.
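A minimal sketch of such a cache, with the class and key format made up purely for illustration: each sprite is loaded and resampled to the button size exactly once, and every button shares the same Image instance.

```csharp
using System.Collections.Generic;
using System.Drawing;

// Hypothetical sprite cache: each (file, size) combination is loaded and resized only once.
class SpriteCache
{
    private readonly Dictionary<string, Image> _cache = new Dictionary<string, Image>();

    public Image Get(string path, Size buttonSize)
    {
        string key = path + "|" + buttonSize.Width + "x" + buttonSize.Height;
        Image cached;
        if (!_cache.TryGetValue(key, out cached))
        {
            using (Image original = Image.FromFile(path))
            {
                // Resample to the native paint size once, so painting never has to rescale.
                cached = new Bitmap(original, buttonSize);
            }
            _cache[key] = cached;
        }
        return cached;
    }
}
```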
Use PNG or GIF; they are faster image types to load.
Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5, and I am using C#. For the database I am using MySQL. I have a PictureBox on a form and 10-12 buttons (each with some image manipulation function). On clicking one of the buttons, an OpenFileDialog is shown where the user can select the specific file to provide to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program. e.g.
What I intend to do is that - the program should hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me the approach to take? Any help or suggestions, please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe that you can use EmguCV (a wrapper to the OpenCV library) to solve the first part of the problem. To do that, you have to consider that the white pixels are the background and that the black pixels are objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by white pixels and the background by the black pixels.
At least for your example case, a very simple algorithm would work:
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it and separate them. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run into the edge of the image, it's not a hole. If you don't, it might be a hole or a loop in the trace; use a threshold on the hole size to decide whether it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump to step 1.
6. When there are no more black pixels left in step 1, you're done.
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things, for instance, color everything "not in consideration" red. It also makes for an interesting visualization if you look at it in real time!
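For reference, here is a bare-bones C# sketch of the flood-fill step described above, using plain System.Drawing; the method name and the exact-color match are illustrative assumptions, and GetPixel is slow, so a real implementation would read the pixels once with LockBits:

```csharp
using System.Collections.Generic;
using System.Drawing;

static class FloodFill
{
    // Collects all pixels connected to (startX, startY) that have the same colour
    // as the starting pixel, using an explicit stack instead of recursion.
    public static List<Point> CollectRegion(Bitmap image, int startX, int startY)
    {
        Color target = image.GetPixel(startX, startY);
        var region = new List<Point>();
        var visited = new bool[image.Width, image.Height];
        var stack = new Stack<Point>();
        stack.Push(new Point(startX, startY));

        while (stack.Count > 0)
        {
            Point p = stack.Pop();
            if (p.X < 0 || p.Y < 0 || p.X >= image.Width || p.Y >= image.Height) continue;
            if (visited[p.X, p.Y]) continue;
            visited[p.X, p.Y] = true;
            if (image.GetPixel(p.X, p.Y).ToArgb() != target.ToArgb()) continue;

            region.Add(p);
            stack.Push(new Point(p.X + 1, p.Y));
            stack.Push(new Point(p.X - 1, p.Y));
            stack.Push(new Point(p.X, p.Y + 1));
            stack.Push(new Point(p.X, p.Y - 1));
        }
        return region;
    }
}
```

Running this from a black pixel gives you one trace; running it from a white pixel inside a separated trace gives you one hole, which you can then label and remove from consideration.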
I've been working on a web app, and I got stuck on a problematic issue.
I'll try to explain what I'm trying to do.
Here you see the first big image, which has green shapes in it.
What I want to do is crop those shapes into separate PNG files and make their backgrounds transparent, like the example cropped images below the big one.
The first image will be uploaded by the user, and I want to crop it into pieces like the example cropped images above. It can be done with the GD library in PHP or by server-side software written in Python or C#, but I don't know what this operation is called, so I don't know what to google to find information. It has something to do with computer vision, detecting blobs and cropping them into pieces, etc.
Any keywords or links would be helpful.
Thanks for any help
A really easy way to do this is to use flood fill / connected component labeling. Basically, this is just a greedy algorithm that groups any pixels that are the same or similar in color.
This is definitely not the ideal way to detect blobs and is only going to be effective in limited situations. However, it is much easier to understand and code and might be sufficient for your purposes.
OpenCV provides a function named cv::findContours to find connected components in an image. If it's always green vs. white, you want to cv::split the image into channels, use cv::threshold on the blue or the red channel (those will be white in the white regions and near black in the green regions) with THRESH_BINARY_INV (because you want to extract the dark regions), then use cv::findContours to detect the blobs. You can then compute the bounding rectangle with cv::boundingRect, create a new image of that size, and use the contour you got as a mask to fill the new image.
Note: These are links to the C++ documentation, but those functions should be exposed in the python and C# wrappers - see http://www.emgu.com for the latter.
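As a rough C# sketch of that pipeline, assuming an Emgu CV 3.x-style API (method names can differ between versions, and the contour-mask step that makes the background transparent is left out for brevity, so each blob is just cropped to its bounding box):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

class BlobCropper
{
    public static void CropGreenShapes(string inputPath)
    {
        // Load the uploaded image and split it into B, G, R channels.
        Image<Bgr, byte> source = new Image<Bgr, byte>(inputPath);
        Image<Gray, byte>[] channels = source.Split();   // [0]=blue, [1]=green, [2]=red

        // The red channel is near black inside the green shapes and white elsewhere,
        // so an inverted binary threshold makes the shapes white on a black background.
        Image<Gray, byte> mask = new Image<Gray, byte>(source.Size);
        CvInvoke.Threshold(channels[2], mask, 128, 255, ThresholdType.BinaryInv);

        // Find the outer contour of each blob and crop its bounding rectangle.
        using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.FindContours(mask, contours, null,
                                  RetrType.External, ChainApproxMethod.ChainApproxSimple);

            for (int i = 0; i < contours.Size; i++)
            {
                Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
                source.ROI = box;
                source.Copy().Save("shape_" + i + ".png");
                source.ROI = Rectangle.Empty;   // reset the region of interest
            }
        }
    }
}
```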
I believe this Wikipedia article covers the problem really well: http://en.wikipedia.org/wiki/Blob_detection
Can't remember any ready-to-use solutions though (-:
It really depends on what kinds of images you will be processing.
As Brian mentioned, you could use Connected Component Labeling, which usually is applied to binary images, where foreground is denoted by white pixels and background by black pixels (or the opposite). The problem is then how to transform the original image to a binary one. If all images are like the example you provided, this is straightforward and can be accomplished with thresholding. OpenCV provides useful methods:
Threshold
FindContours for finding contours of connected components
DrawContours for extracting each component individually into a separate image
For more complex images, however, all bets are off.
I'm working on a wallpaper application. Wallpapers are changed every few minutes as specified by the user.
The feature I want is to fade in a new image while fading out the old image. Anyone who has a mac may see the behavior I want if they change their wallpaper every X minutes.
My current thoughts on how I would approach this is to take both images and lay one over the other and vary the opacity. Start the old image at 90% and the new image at 10%. I would then decrease the old image by 10% until it is 0%, while increasing the new image by 10% until 90%. I would then set the wallpaper to the new image.
To make it look like a smooth transition I would create the transition wallpapers before starting the process instead of doing it in real-time.
My question is, is there a more effective way to do this?
I can think of some optimizations such as saving the transition images with a lower quality.
Any ideas on approaches that would make this more efficient than I described?
Sounds like an issue of trade-off.
It depends on the emphasis:
Speed of rendering
Use of resources
Speed of rendering is going to be an issue of how long the process of blending the images takes to render to a screen-drawable image. If the blending process takes too long (transparency effects may take a long time compared to regular opaque drawing operations), then pre-rendering the transition may be a good approach.
Of course, pre-rendering means that there will be multiple images either in memory or disk storage which will have to be held onto. This will mean that more resources will be required for temporary storage of the transition effect. If resources are scarce, then doing the transition on-the-fly may be more desirable. Additionally, if the images are on the disk, there is going to be a performance hit due to the slower I/O speed of data outside of the main memory.
On the issue of "saving the transition images with a lower quality" -- what do you mean by "lower quality"? Do you mean compressing the image? Or do you mean using a smaller image? I can see some pros and cons for each method.
Compress the image
Pros: Per image, the amount of memory consumed will be lower. This would require less disk space, or less space in memory.
Cons: Decompression of the image is going to take processing. The decompressed image is going to take additional space in the memory before being drawn to the screen. If lossy compression like JPEG is used, compression artifacts may be visible.
Use a smaller image
Pros: Again, per image, the amount of memory used will be lower.
Cons: The process of stretching the image to the screen size will take some processing power. Again, additional memory will be needed to produce the stretched image.
Finally, there's one point to consider -- Is rendering the transition in real-time really not going to be fast enough?
It may actually turn out that rendering doesn't take too long, and this may all turn out to be premature optimization.
It might be worth a shot to make a prototype without any optimizations, and see if it would really be necessary to pre-render the transition. Profile each step of the process to see what is taking time.
If the performance of on-the-fly rendering is unsatisfactory, weigh the positives and negatives of each approach of pre-rendering, and pick the one that seems to work best.
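For such a prototype, a minimal C#/GDI+ sketch of blending the new wallpaper over the old one at a given opacity could look like the following; the ColorMatrix alpha trick is standard GDI+, but the surrounding method is only illustrative:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class WallpaperFade
{
    // Draws oldImage fully, then newImage on top at the given opacity (0.0 to 1.0).
    public static Bitmap BlendFrame(Image oldImage, Image newImage, float opacity)
    {
        var frame = new Bitmap(oldImage.Width, oldImage.Height);
        using (Graphics g = Graphics.FromImage(frame))
        {
            g.DrawImage(oldImage, new Rectangle(0, 0, frame.Width, frame.Height));

            var matrix = new ColorMatrix { Matrix33 = opacity };   // scale the alpha channel
            using (var attributes = new ImageAttributes())
            {
                attributes.SetColorMatrix(matrix, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
                g.DrawImage(newImage,
                            new Rectangle(0, 0, frame.Width, frame.Height),
                            0, 0, newImage.Width, newImage.Height,
                            GraphicsUnit.Pixel, attributes);
            }
        }
        return frame;
    }
}
```

Calling this in a timer with opacity stepping from 0.0 to 1.0 gives the cross-fade without storing any pre-rendered frames; profile it before deciding you need pre-rendering.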
Pre-rendering each blended frame of the transition will take up a lot of disk space (and potentially bandwidth). It would be better to simply load the two images and use the graphics card to do the blending in real time. If you have to use something like openGL directly, you will probably be able to just create two rectangles, set the images as the textures, and vary the alpha values. Most systems have simpler 2d apis that would let you do this very easily. (eg. CoreAnimation on OS X, which will automatically vary the transparency over time and make full use of hardware acceleration.)
On-the-fly rendering should be quick enough if it's handled by the graphics card, especially if it's the same texture with a different opacity (a lot of graphics rendering time is often spent loading textures to the card; ever wondered what game loading screens were actually doing?).