DrawImage is slow, lockbits to the rescue? - c#

I've read many C# tutorials on using lockbits to manipulate images, but I just don't know how to apply this info into PowerShell.
This is the problem:
$image1 is 2950 pixels tall. $image2 is 50 px taller: 3000 pixels. I need to fit $image2 into $image1, and I can skip $image2's first 50 pixel rows (lines 0-49). So in pseudocode:
For(y=0... For(x=0.... { image1(x,y) = image2(x,y+50) } ....))
The PowerShell script below works, but it is not very fast:
# Copy $image2, minus its top 50 rows, into the full area of $image1
$rect = new-object Drawing.Rectangle 0, 0, $image1.width, $image1.height
$image1drawing.DrawImage($image2,
    $rect,
    0, 50, $image2.width, ($image2.height - 50),
    $graphicalUnit)
The pages I've found, such as this one (Not able to successfully use lockbits) or this one (https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm), are in "plain English", but how do I convert this concept into PowerShell?

You are using the correct approach. Using DrawImage will be faster than copying the pixels one by one.
Some suggestions to make it faster:
Try using Image.Clone to copy a rectangle from the original image; this results in the smallest number of objects you need to create (see the C# sketch after these suggestions).
Make sure you use the same PixelFormat as the original image (faster copying). There is a PixelFormat property on Image.
Most important: accessing Width and Height takes a long time, so save them to local variables for reuse. If you know them beforehand, that's a good way to speed things up.
Don't expect miracles: at a width-to-height ratio of 4:3, each image is 3932 * 2950 * 3 bytes (assuming 24-bit RGB), roughly 33 MB per image. That's a lot of data; you may easily be trying to copy a few gigabytes, depending on how many images you have.
You are better off writing a simple cmdlet and using it in your PowerShell script.
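A minimal C# sketch of the first two suggestions (the names are mine; it could be compiled into a small cmdlet or loaded from the script with Add-Type):

using System.Drawing;

static class CropHelper
{
    public static Bitmap CropTop(Bitmap source, int skipRows)
    {
        // Clone everything below the first skipRows scanlines,
        // keeping the source's own pixel format so no conversion happens.
        var region = new Rectangle(0, skipRows, source.Width, source.Height - skipRows);
        return source.Clone(region, source.PixelFormat);
    }
}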
BTW in case you are still interested:
Using LockBits in C# (as in your examples) relies on an unsafe context and pointers. I don't believe PowerShell has access to an unsafe context.
You can manipulate unmanaged data without an unsafe context by using the Marshal class, specifically the Read and Write methods (and you might be able to speed things up with the Copy method at the expense of memory), as sketched below.
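A rough illustration of that Marshal-based idea (written in C#, since that is what you would put into a cmdlet; it assumes both bitmaps share the same pixel format, and the names are mine):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class RowCopier
{
    // Copies src, minus its top skipRows scanlines, into dest row by row.
    public static void CopySkippingTopRows(Bitmap dest, Bitmap src, int skipRows)
    {
        BitmapData d = dest.LockBits(new Rectangle(0, 0, dest.Width, dest.Height),
            ImageLockMode.WriteOnly, dest.PixelFormat);
        BitmapData s = src.LockBits(new Rectangle(0, 0, src.Width, src.Height),
            ImageLockMode.ReadOnly, src.PixelFormat);
        try
        {
            int rowBytes = d.Width * (Image.GetPixelFormatSize(d.PixelFormat) / 8);
            var buffer = new byte[rowBytes];
            for (int y = 0; y < d.Height; y++)
            {
                // Read row (y + skipRows) of the source, write it as row y of the destination.
                Marshal.Copy(s.Scan0 + (y + skipRows) * s.Stride, buffer, 0, rowBytes);
                Marshal.Copy(buffer, 0, d.Scan0 + y * d.Stride, rowBytes);
            }
        }
        finally
        {
            dest.UnlockBits(d);
            src.UnlockBits(s);
        }
    }
}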
PowerShell was not meant as a replacement for general-purpose .NET languages; that's what cmdlets are for.

Related

Unsafe Image Processing in Python like LockBits in C#

Is it possible to do Unsafe Image Processing in Python?
As with C#, I have hit a hard wall with my pixel processing in Python: the getPixel method from Image is simply running too slow.
Is it possible to get direct access to the image in memory, like LockBits does in C#? It would make my program run much faster.
Thanks,
Mark
There is nothing "unsafe" about this.
Once you understand how Python works, it becomes apparent that calling a method to retrieve information about each pixel is going to be slow.
First of all, although you give no information about it, I assume you are using "Pillow", the Python Imaging Library (PIL) fork, which is the best-known library for image manipulation in Python. As it is a third-party package, nothing obliges you to use it. (PIL does have a getpixel method on images, but not a getPixel one.)
One straightforward way to get all the data in a manipulable form is to create a bytearray object from the image data; given an image in an img variable you can do:
data = bytearray(img.tobytes())
And that is it: you have linear access to all the data in your image. To get to a specific pixel, you need the image width, height, and bytes per pixel. The latter is not a direct Image attribute in PIL, so you have to compute it from the image's mode. The most common image modes are RGB, RGBA and L.
So, if you want to write out a rectangle at "x, y, width, height" in an image, you can do this:
from PIL import Image

def rectangle(img, x, y, width, height):
    data = bytearray(img.tobytes())
    # bytes per pixel, derived from the image mode
    bpp = 3 if img.mode == 'RGB' else 4 if img.mode == 'RGBA' else 1
    stride = img.width * bpp
    # one white (255) row segment of the rectangle's width
    blank_data = (255,) * (width * bpp)
    for i in range(y, y + height):
        data[i * stride + x * bpp: i * stride + (x + width) * bpp] = blank_data
    return Image.frombytes(img.mode, (img.width, img.height), bytes(data))
That is not of much use beyond simple manipulation. People needing to apply filters and other more complicated algorithms to images in Python usually access the image through NumPy, Python's high-performance data manipulation package, which is tightly coupled with a lot of other packages that have image-specific features, usually installed as scipy.
So, to get the image as an ndarray, which already does all of the above coordinate-to-bytes conversion for you, you can use:
import scipy.misc
data = scipy.misc.imread(<filename>)
(Check the docs at https://docs.scipy.org/doc/scipy/reference/)

Generate huge image in C#

I need to create a huge image (approx. 24000 x 22000) with PixelFormat.Format24bppRgb encoding. I know it will be nearly impossible to open it...
What I'm trying to do is this:
Bitmap final = new Bitmap(width, height, PixelFormat.Format24bppRgb);
As expected, an exception is thrown, as I can't easily handle an 11 GB file in memory that way.
But I had an idea: could I write the file as I'm generating it? So, instead of working in RAM, I would be working on the HD.
Just to explain better: I have about 13K tiles and I plan to stitch them together into this stupidly humongous file. As I can iterate over them in a given order, I think I could write them out directly using unsafe code.
Any suggestions?
ImageMagick's Large Image Support (tera-pixel) can help you put the image together once you have the tiles that compose it. You can either use the command line and issue commands to it through this wrapper, or use this ImageMagick.NET as an API.
You could write it in a non-compressed format like BMP. BMP saves raw color bytes in rows, so you would load the first row of tiles, read their separate pixel rows, and write them as a single composite row in the output image. This way you only need a few tiles open at a time while immediately writing out the output image (a sketch of this row-streaming idea follows below).
I don't know how to write it as a compressed image like JPG or PNG, but I'm sure some specialised software exists for that.
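To make the row-streaming idea concrete, here is a rough sketch of a 24-bit uncompressed BMP writer; the loadScanline callback is a hypothetical stand-in for "build one composite BGR row from the tiles that cover it", and a 24000 x 22000 x 3 image still fits within BMP's 32-bit size fields:

using System;
using System.IO;

static class HugeBmpWriter
{
    public static void Write(string path, int width, int height, Func<int, byte[]> loadScanline)
    {
        int rowBytes = width * 3;
        int padding = (4 - rowBytes % 4) % 4;      // BMP rows are padded to 4-byte boundaries
        int imageSize = (rowBytes + padding) * height;

        using var bw = new BinaryWriter(File.Create(path));

        // BITMAPFILEHEADER (14 bytes)
        bw.Write((ushort)0x4D42);                  // 'BM'
        bw.Write(54 + imageSize);                  // total file size
        bw.Write(0);                               // reserved
        bw.Write(54);                              // offset to pixel data

        // BITMAPINFOHEADER (40 bytes)
        bw.Write(40);                              // header size
        bw.Write(width);
        bw.Write(height);                          // positive height => rows stored bottom-up
        bw.Write((ushort)1);                       // planes
        bw.Write((ushort)24);                      // bits per pixel
        bw.Write(0);                               // BI_RGB, no compression
        bw.Write(imageSize);
        bw.Write(0); bw.Write(0);                  // pixels per meter (unused here)
        bw.Write(0); bw.Write(0);                  // colors used / important

        var pad = new byte[padding];
        for (int y = height - 1; y >= 0; y--)      // bottom-up row order
        {
            bw.Write(loadScanline(y));             // one composite BGR scanline (width * 3 bytes)
            bw.Write(pad);
        }
    }
}

Only one scanline plus the tiles that intersect it ever need to be in memory at once.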
Depending on what you intend to do with this image upon completion, I would suggest dividing it into four parts and working with it that way. I have worked with 10,000 x 10,000 pixel images without an OOM exception being thrown.

C# .NET Dynamic Image Library?

Hi there, I am looking to create an image dynamically from an array[500][500] (500x500 pixels).
Each array item holds the pixel color data.
Does anyone know which .NET library/interface would be best for this? Could you point me in the right direction? I need to create/save the file.
Also, the image is a composite of the data from many images; I am wondering if it is possible to use various formats, or if I need to first convert the small images into one format.
Also, which image format is best to use (the most compatible) for the web: JPG or PNG-24?
Thanks for your input!
If you can use unsafe code on your website (in other words, your code runs under full trust), just use the Bitmap class and its LockBits method; then you can use pointers just like in C++ to access the pixels (tip: create a Pixel struct to hold the RGB values). You will also see GetPixel and SetPixel methods: do not ever use them. Their performance is terrible, more than 100 times slower than using pointers. Just go with BitmapData.Scan0.ToPointer() and then iterate with a for loop; a rough sketch follows.
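A sketch of that pointer-based approach (not production code; the project must allow unsafe code, and the Pixel struct and Fill names are mine; the source is assumed to be a 500x500 Color array):

using System.Drawing;
using System.Drawing.Imaging;

struct Pixel { public byte B, G, R; }  // 24bpp GDI+ stores channels as BGR

static class FastFill
{
    public static unsafe void Fill(Bitmap bmp, Color[,] colors)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        try
        {
            for (int y = 0; y < bmp.Height; y++)
            {
                // Stride may include padding, so recompute the row pointer for each line.
                Pixel* row = (Pixel*)((byte*)data.Scan0.ToPointer() + y * data.Stride);
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color c = colors[x, y];
                    row[x].R = c.R; row[x].G = c.G; row[x].B = c.B;
                }
            }
        }
        finally { bmp.UnlockBits(data); }
    }
}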
You could use the System.Drawing.Bitmap class.
If you are able to use unsafe code, construct the bitmap using pointers and BitmapData rather than SetPixel
var bitmap = new Bitmap(500, 500);
// Update the pixels with the values in your array...
bitmap.Save("myfilename.jpg", ImageFormat.Jpeg);
The format depends on what you need (for example, PNG supports transparency; JPG does not).
You could start with System.Drawing.Bitmap.

Color Image Quantization in .NET

I want to reduce the number of unique colors of a bitmap in c#.
The reason I want to do this is that an image that was initially created with three colors now has more than three colors due to many factors (including compression), i.e. neighbouring pixels have affected each other.
Any idea of how to do that?
The solution may be something that converts the whole bitmap from RGB to an indexed color system, or some function that can be applied to a single pixel.
Any GDI+ or Emgu (OpenCV) solution works for me.
Check out nQuant at http://nquant.codeplex.com. It yields much higher quality than the code in the MSDN article that Magnus references, and it also takes the alpha layer into consideration, while the MSDN article only evaluates RGB. Source code is available, and there is an accompanying blog post that discusses the code and algorithm in detail.
There is an article on MSDN called Optimizing Color Quantization for ASP.NET Images that might help you; it has good example code.
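Whichever library you end up with, the core idea behind this kind of quantization is just mapping every pixel to the nearest entry of a small palette. A purely illustrative sketch (not the article's code; GetPixel/SetPixel are used only to keep it short, LockBits would be the fast route):

using System.Drawing;

static class SimpleQuantizer
{
    public static void SnapToPalette(Bitmap bmp, Color[] palette)
    {
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
            {
                Color c = bmp.GetPixel(x, y);
                Color best = palette[0];
                int bestDist = int.MaxValue;
                foreach (Color p in palette)
                {
                    // squared distance in RGB space
                    int dr = c.R - p.R, dg = c.G - p.G, db = c.B - p.B;
                    int dist = dr * dr + dg * dg + db * db;
                    if (dist < bestDist) { bestDist = dist; best = p; }
                }
                bmp.SetPixel(x, y, best);
            }
    }
}

For the three-color case in the question, palette would simply be the three colors the image was originally drawn with.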
I've just stumbled upon this question, and though it is quite an old one, maybe it is still useful to mention that last year I made my Drawing Libraries public (on NuGet), and they happen to support quantization, too.
Note: As the question has the GDI+ tag, the examples below use the Bitmap type, but the library also supports completely managed bitmap data manipulation, handling all pixel formats on every platform (see the BitmapDataFactory and BitmapDataExtensions classes).
If you have a Bitmap instance, quantization is as simple as follows:
using System.Drawing;
using System.Drawing.Imaging;
using KGySoft.Drawing;
using KGySoft.Drawing.Imaging;
// [...]
IQuantizer quantizer = PredefinedColorsQuantizer.FromCustomPalette(myColors, backColor);
// getting a quantized clone of a Bitmap with arbitrary PixelFormat:
Bitmap quantizedBitmap = originalBitmap.ConvertPixelFormat(PixelFormat.Format8bppIndexed,
quantizer);
// or, you can quantize a Bitmap in-place (which does not change PixelFormat):
originalBitmap.Quantize(quantizer);
Original bitmap:
Quantized bitmap using a custom 8-color palette and a silver background (which appears white with this palette):
In the example above I used the FromCustomPalette method but there are many other predefined quantizers available in the PredefinedColorsQuantizer and OptimizedPaletteQuantizer classes (see the members for image and code examples).
And since reducing colors may severely affect the quality of the result, you might want to use dithering with the quantization:
IQuantizer quantizer = PredefinedColorsQuantizer.FromCustomPalette(myColors, backColor);
IDitherer ditherer = OrderedDitherer.Bayer8x8;
// ConvertPixelFormat can be used also with a ditherer
Bitmap quantizedBitmap = originalBitmap.ConvertPixelFormat(PixelFormat.Format8bppIndexed,
    quantizer, ditherer);
// Or use the Dither extension method to change the Bitmap in-place
originalBitmap.Dither(quantizer, ditherer);
The difference is quite significant, even though the same colors are used:
You will find a lot of image examples in the description of the OrderedDitherer, ErrorDiffusionDitherer, RandomNoiseDitherer and InterleavedGradientNoiseDitherer classes.
To try the possible built-in quantizers and ditherers in an application, you can use my Imaging Tools app. At the link you can also find its source, which provides a bit more advanced examples with cancellable async conversions, progress tracking, etc.

Verify image sequence

Problem
Problem shaping
The image sequence's position and size are fixed and known beforehand (it is not scaled). It will be quite short, a maximum of 20 frames, and in a closed loop. I want to verify (event-driven, by button click) that I have seen it before.
Lets say I have some image sequence, like:
http://img514.imageshack.us/img514/5440/60372aeba8595eda.gif
If it has been seen, I want to see the ID associated with it; if not, it will be analyzed and added as a new instance of an image sequence that has been seen. I have thought about this for quite a while, and I admit this might be a hard problem. I seem to be having a hard time putting this all together; can someone assist (in C#)?
Limitations and uses
I am not trying to recreate a copyright detection system, like the Content ID system YouTube has implemented (Margaret Gould Stewart at TED ( link )). The image sequence can be thought of as a (.gif) file, but it is not one, and there is no direct way to get its binary data. A similar method could be used to avoid duplicates in an "image sharing database", but that is not what I am trying to do.
My effort
Gaussian blur
Mathematica function to generate Gaussian blur kernels:
getKernel[L_] := Transpose[{L}].{L}/(Total[Total[Transpose[{L}].{L}]])
getVKernel[L_] := L/Total[L]
It turns out that it is much more efficient to use two passes of a vector kernel than a matrix kernel (a sketch of the two-pass idea follows the kernel list). They are based on the odd-length rows of Pascal's triangle:
{1d/4, 1d/2, 1d/4}
{1d/16, 1d/4, 3d/8, 1d/4, 1d/16}
{1d/64, 3d/32, 15d/64, 5d/16, 15d/64, 3d/32, 1d/64}
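As a sketch of why the two-pass (separable) form is cheaper, here is the blur in C# on a flat grayscale byte array (the names are mine, and edges are clamped): applying the 1-D kernel horizontally and then vertically gives the same result as the full matrix kernel with far fewer multiplications per pixel.

using System;

static class SeparableBlur
{
    public static byte[] Apply(byte[] gray, int width, int height, double[] kernel)
    {
        int half = kernel.Length / 2;
        var temp = new double[gray.Length];
        var result = new byte[gray.Length];

        // Horizontal pass
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                double sum = 0;
                for (int k = -half; k <= half; k++)
                {
                    int xx = Math.Min(Math.Max(x + k, 0), width - 1);
                    sum += gray[y * width + xx] * kernel[k + half];
                }
                temp[y * width + x] = sum;
            }

        // Vertical pass over the horizontally blurred data
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                double sum = 0;
                for (int k = -half; k <= half; k++)
                {
                    int yy = Math.Min(Math.Max(y + k, 0), height - 1);
                    sum += temp[yy * width + x] * kernel[k + half];
                }
                result[y * width + x] = (byte)Math.Round(Math.Min(Math.Max(sum, 0.0), 255.0));
            }
        return result;
    }
}

For the 5-tap kernel above you would pass new[] { 1/16d, 1/4d, 3/8d, 1/4d, 1/16d }.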
Data input, hashing, grayscaling and lightboxing
Examples of source bits that might be useful:
Lightbox around the known rectangle: FrameX
Using MD5CryptoServiceProvider to get the MD5 hash of the content inside the known rectangle at the moment.
Using a ColorMatrix to grayscale the image
Source example
Source example (GUI; code):
Get current content inside defined rectangle.
private Bitmap getContentBitmap() {
    Rectangle r = f.r;
    Bitmap hc = new Bitmap(r.Width, r.Height);
    using (Graphics gf = Graphics.FromImage(hc)) {
        gf.CopyFromScreen(r.Left, r.Top, 0, 0, //
            new Size(r.Width, r.Height), CopyPixelOperation.SourceCopy);
    }
    return hc;
}
Get md5 hash of bitmap.
private byte[] getBitmapHash(Bitmap hc) {
    return md5.ComputeHash(c.ConvertTo(hc, typeof(byte[])) as byte[]);
}
Get grayscale of the image.
public static Bitmap getGrayscale(Bitmap hc){
    Bitmap result = new Bitmap(hc.Width, hc.Height);
    // 5x5 ColorMatrix: equal RGB weights; alpha and translation rows left as identity
    ColorMatrix colorMatrix = new ColorMatrix(new float[][]{
        new float[]{0.5f, 0.5f, 0.5f, 0, 0},
        new float[]{0.5f, 0.5f, 0.5f, 0, 0},
        new float[]{0.5f, 0.5f, 0.5f, 0, 0},
        new float[]{0, 0, 0, 1, 0},
        new float[]{0, 0, 0, 0, 1}});
    using (Graphics g = Graphics.FromImage(result)) {
        ImageAttributes attributes = new ImageAttributes();
        attributes.SetColorMatrix(colorMatrix);
        g.DrawImage(hc, new Rectangle(0, 0, hc.Width, hc.Height),
            0, 0, hc.Width, hc.Height, GraphicsUnit.Pixel, attributes);
    }
    return result;
}
I think you have a few issues with this:
Not all image sequences [videos] are equal [but many are similar]
Where is your data coming from?
How will you represent the data related to your viewings?
Size of the data
Issue #1:
Many videos can differ slightly due to compression, watermarking, missing frames, or added clips. I would suggest sampling the video; for example, you may want to consider sub-sampling small sections of the images in the video. Additionally, to avoid noisy images and issues with lossy compression algorithms, you may want to consider grayscaling the sampled frames and applying a Gaussian blur [Gaussian because it is "more natural" (short answer)]. Once you have enough sub-samples to have good confidence of similarity to the video, store them in a database. You can hash the samples, or store them to do a % similarity comparison later.
Issue #2
Your data source is going to influence the toolkits and libraries that you use.
I would suggest keeping this simple [keep it with gifs and create a custom viewer; don't try to write a browser plugin while developing your logic].
Issue #3
Using something like Postgres [if there are a lot of large objects] or SQLite is highly suggested for indexing, storing, and recalling past metadata.
Issue #4
The size of the data will have a huge impact on recall, sampling, querying the database, etc.
Overall advice: Don't bite off more than you can handle at this stage. Start small and then grow.
Also take a look at Computer Vision algorithms for more help on the object representation/recall.
The question itself is certainly very interesting and challenging; however, there are many practical issues, as stated by #monksy.
The opportunistic pragmatist in me would take a step back, look at the big picture and see if there is another way to solve the problem. For example, if you are building some kind of "image sharing community" and want to avoid duplicates in the database, you could do a simple MD5 on the file (animated GIFs on the web are usually identical; it's rare that people modify them); a sketch follows below.
Another example: if you are analyzing scientific samples (like meteorological sequences), it may be easier to directly embed some kind of hash in every file when generating them.
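A minimal sketch of the MD5-on-the-file idea (the helper name is mine): identical files, and only identical files, produce the same hash, which is enough for exact-duplicate detection in a sharing database.

using System;
using System.IO;
using System.Security.Cryptography;

static class FileFingerprint
{
    public static string Md5Of(string path)
    {
        using var md5 = MD5.Create();
        using var stream = File.OpenRead(path);
        byte[] hash = md5.ComputeHash(stream);
        // Hex string suitable for a database key
        return BitConverter.ToString(hash).Replace("-", "");
    }
}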
This depends on whether you only want to know whether you've seen an absolutely identical movie again, or whether you also want to identify movies that are very similar but have been changed a bit (made lighter, had a watermark added, compression changed, etc.).
In the first case, just take any type of hash of the file and use that (because the file will be identical at the binary level).
In the second case (which I think is what you want) you have an interesting image processing problem on your hands. You could find yourself at the front lines of image processing science with this if you wanted to. If that is the case, I suggest you start reading about SURF and OpenCV, and continue from there.
If you want to match very similar, but not identical, videos, and don't want to go the ultra-robust scientific route, then I'd suggest the following process (a rough sketch of steps 2-3 follows the list):
Do the Gaussian blur you already do.
Divide each image into a few equally sized rectangles (you'd have to test for the best number, but I'd suggest you start with 9).
For each rectangle in each frame compute the full-colour histogram, then find the most frequently occurring colour in that rectangle. This gives you 9*20 = 180 numbers. This is the "fingerprint" of this movie.
Find the most similar fingerprint in your database, if it is similar enough you already know about it, otherwise you don't.
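A rough sketch of steps 2-3 (the class and method names are mine, and GetPixel is used only to keep the sketch short): split every frame into a 3x3 grid and record the most frequent colour of each cell, which for 20 frames gives the 180-number fingerprint.

using System.Collections.Generic;
using System.Drawing;
using System.Linq;

static class SequenceFingerprint
{
    public static int[] Compute(IList<Bitmap> frames, int grid = 3)
    {
        var fingerprint = new List<int>();
        foreach (Bitmap frame in frames)
        {
            int cellW = frame.Width / grid, cellH = frame.Height / grid;
            for (int gy = 0; gy < grid; gy++)
                for (int gx = 0; gx < grid; gx++)
                {
                    // Histogram of the cell's colours
                    var counts = new Dictionary<int, int>();
                    for (int y = gy * cellH; y < (gy + 1) * cellH; y++)
                        for (int x = gx * cellW; x < (gx + 1) * cellW; x++)
                        {
                            int argb = frame.GetPixel(x, y).ToArgb();
                            counts[argb] = counts.TryGetValue(argb, out int n) ? n + 1 : 1;
                        }
                    // Most frequently occurring colour in this cell
                    fingerprint.Add(counts.OrderByDescending(kv => kv.Value).First().Key);
                }
        }
        return fingerprint.ToArray();
    }
}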
Step 4 is a bit vague because I'm not really in this field. You are currently using an MD5 hash as a sort of fingerprint, but that is unsuitable here because slight differences in the input of a good cryptographic hash function produce very large differences in the hash. That means two very similar frames will have totally different MD5 hashes, so from the hash you'd never know they were similar.
As long as the speed of database lookups is not an issue, I'd just go for the sum of squared differences as a measure of fingerprint similarity, and set a threshold on that to identify equal movies. However, this is not very fast for huge datasets, and in those cases you'd probably need to transform your fingerprint into something that lets you find similar fingerprints faster. One thing you could do here is start by selecting all known movies with a very similar average colour for the entire video, then from those select the movies that have a very similar average colour in each frame, and for the ones that remain at that point do the full rectangle-by-rectangle fingerprint match. But I'm sure there are even faster options for matching 180 numbers.
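A sketch of the sum-of-squared-differences lookup described above (the names are mine; the fingerprints are the per-cell ARGB values from the previous sketch, compared channel by channel, and the threshold is something you would have to tune on real data):

using System.Collections.Generic;
using System.Drawing;

static class FingerprintMatcher
{
    public static long Distance(int[] a, int[] b)
    {
        long sum = 0;
        for (int i = 0; i < a.Length; i++)
        {
            Color ca = Color.FromArgb(a[i]), cb = Color.FromArgb(b[i]);
            int dr = ca.R - cb.R, dg = ca.G - cb.G, db = ca.B - cb.B;
            sum += dr * dr + dg * dg + db * db;   // squared difference per colour channel
        }
        return sum;
    }

    // Index of the closest known fingerprint, or -1 if nothing is close enough.
    public static int FindMatch(int[] candidate, IReadOnlyList<int[]> known, long threshold)
    {
        int best = -1;
        long bestDist = long.MaxValue;
        for (int i = 0; i < known.Count; i++)
        {
            long d = Distance(candidate, known[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return bestDist <= threshold ? best : -1;
    }
}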
Perhaps you can find a way to get a binary copy of the image data of each frame in a variable. Hash that data (md5?) and store each of the hashes. Then you can see if you've ever seen that hash before. If you haven't, it's a new frame.
