C# - Working with high resolution screenshots

I need to capture an area within my desktop, but I need this area to be very high resolution (at least a few thousand pixels horizontally, and the same vertically). Is it possible to get a screen capture with a high density of pixels? How can I do this? I tried capturing the screen with an AutoIt script and got very good results (images that were 350 MB), and now I would like to do the same using C#.
Edit:
I am doing my read/write of a .tif file like this, and it already loses most of the data:
using (Bitmap bitmap = (Bitmap)Image.FromFile(@"ScreenShot.tif")) // this file has 350 MB
{
    using (Bitmap newBitmap = new Bitmap(bitmap))
    {
        newBitmap.Save("TESTRES.TIF", ImageFormat.Tiff); // now this file has about 60 MB, why?
    }
}
I am trying to capture my screen like this, but the best I can get is a few megabytes (nowhere near 350 MB):
using (var bmpScreenCapture = new Bitmap(window[2], window[3], PixelFormat.Format32bppArgb))
{
    using (var i = Graphics.FromImage(bmpScreenCapture))
    {
        i.InterpolationMode = InterpolationMode.High;
        i.CopyFromScreen(window[0], window[1], 0, 0, bmpScreenCapture.Size, CopyPixelOperation.SourceCopy);
    }
    bmpScreenCapture.Save("test2.tif", ImageFormat.Tiff);
}

You can't gather more information than the source has.
This is a basic truth and it applies here, too.
So you can't capture more than your 1920x1080 pixels at their color depth.
OTOH, since you want to feed the captured image into OCR, there are a few more things to consider and, in fact, to do.
OCR is very happy if you help it by optimizing the image. This should involve
reducing colors and adding contrast
enlarging to the recommended dpi resolution
adding even more contrast
Funnily, this will help OCR although the real information cannot increase above the original source. But a good resizing algorithm will add invented data, and this is often just what the OCR software needs.
You should also take care to use a good, i.e. non-lossy, format when you store the image to a file, like PNG or TIF, and never JPG.
The exact processing will have to be adjusted by trial and error until the OCR results are good enough.
Hint: Due to font anti-aliasing, most text on screenshots is surrounded by a halo of colorful pixels. Getting rid of it by reducing or even removing saturation is one way; maybe you want to turn anti-aliasing off in your display properties? (Check out ClearType!)
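To make the suggestions concrete, here is a minimal sketch of those steps in plain GDI+; the scale factor and threshold are assumptions you would tune by trial and error, and GetPixel/SetPixel is used only to keep the example short:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

static Bitmap PrepareForOcr(Bitmap source, float scale, byte threshold)
{
    // Enlarge with a high-quality filter so the OCR engine gets more pixels to work with.
    var enlarged = new Bitmap((int)(source.Width * scale), (int)(source.Height * scale),
                              PixelFormat.Format24bppRgb);
    using (var g = Graphics.FromImage(enlarged))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, enlarged.Width, enlarged.Height);
    }

    // Reduce colors to pure black and white; this also strips the anti-aliasing halo.
    for (int y = 0; y < enlarged.Height; y++)
    {
        for (int x = 0; x < enlarged.Width; x++)
        {
            Color c = enlarged.GetPixel(x, y);
            int gray = (c.R + c.G + c.B) / 3;
            enlarged.SetPixel(x, y, gray < threshold ? Color.Black : Color.White);
        }
    }

    return enlarged; // save the result as PNG or TIF afterwards, never JPG
}
A call like PrepareForOcr(screenshot, 3f, 160).Save("ocr-input.png", ImageFormat.Png) would then produce the OCR input; both numbers are just starting points.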


Compress an existing PDF using C# with freeware libraries

I have been searching a lot on Google about how to compress an existing PDF (its size).
My problem is:
I can't use any application, because it needs to be done by a C# program.
I can't use any paid library as my clients don't want to go over budget, so a PAID library is certainly a NO.
I did my homework for the last 2 days and came upon a solution using iTextSharp and BitMiracle, but to no avail, as the former reduced a file by just 1% and the latter is paid.
I also came across PDFcompressNET and pdftk, but I wasn't able to find their .dll.
Actually the PDF is an insurance policy with 2-3 images (black and white) and around 70 pages, amounting to a size of 5 MB.
I need the output in PDF only (it can't be in any other format).
Here's an approach to do this (and this should work without regard to the toolkit you use):
If you have a 24-bit RGB or 32-bit CMYK image, do the following:
determine if the image is really what it is. If it's CMYK, convert to RGB. If it's RGB and really gray, convert to gray. If it's gray or paletted and only has 2 real colors, convert to 1-bit. If it's gray and there is relatively little in the way of gray variation, consider converting to 1-bit with a suitable binarization technique. (A tiny sketch of the "really gray" check appears after this list.)
measure the image dimensions in relation to how it is being placed on the page - if it's 300 dpi or greater, consider resampling the image to a smaller size depending on the bit depth of the image - for example, you can probably go from 300 dpi gray or RGB to 200 dpi and not lose too much detail.
if you have an RGB image that is really color, consider palettizing it.
examine the contents of the image to see if you can help make it more compressible. For example, if you run through a color/gray image and find a lot of colors that cluster, consider smoothing them. If it's gray or black and white and contains a number of specks, consider despeckling.
choose your final compression wisely. JPEG2000 can do better than JPEG. JBIG2 does much better than G4. Flate is probably the best non-destructive compression for gray. Most implementations of JPEG2000 and JBIG2 are not free.
if you're a rock star, you want to try to segment the image and break it into areas that are really black and white and really color.
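As referenced in the first step above, a hedged sketch of the "really gray" check; the tolerance value is an assumption, and production code would use LockBits instead of GetPixel:
using System;
using System.Drawing;

static bool IsEffectivelyGray(Bitmap bmp, int tolerance)
{
    // An RGB image is "really gray" when every pixel's channels are (almost) equal.
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            if (Math.Abs(c.R - c.G) > tolerance || Math.Abs(c.G - c.B) > tolerance)
                return false;
        }
    }
    return true;
}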
That said, if you can do all of this well in an unsupervised manner, you have a commercial product in its own right.
I will say that you can do most of this with Atalasoft dotImage (disclaimers: it's not free; I work there; I've written nearly all the PDF tools; I used to work on Acrobat).
One particular way to do that with dotImage is to pull out all the pages that are image-only, recompress them, save them out to a new PDF, then build a new PDF by taking all the pages from the original document and replacing them with the recompressed pages, then saving again. It's not that hard.
List<int> pagesToReplace = new List<int>();
PdfImageCollection pagesToEncode = new PdfImageCollection();

using (Document doc = new Document(sourceStream, password)) {
    for (int i = 0; i < doc.Pages.Count; i++) {
        Page page = doc.Pages[i];
        if (page.SingleImageOnly) {
            pagesToReplace.Add(i);
            // a PDF image encapsulates an image and its compression parameters
            PdfImage image = ProcessImage(sourceStream, doc, page, i);
            pagesToEncode.Add(image);
        }
    }
}

PdfEncoder encoder = new PdfEncoder();
encoder.Save(tempOutStream, pagesToEncode, null); // re-encoded pages

tempOutStream.Seek(0, SeekOrigin.Begin);
sourceStream.Seek(0, SeekOrigin.Begin);

PdfDocument finalDoc = new PdfDocument(sourceStream, password);
PdfDocument replacementPages = new PdfDocument(tempOutStream);
for (int i = 0; i < pagesToReplace.Count; i++) {
    finalDoc.Pages[pagesToReplace[i]] = replacementPages.Pages[i];
}
finalDoc.Save(finalOutputStream);
What's missing here is ProcessImage(). ProcessImage will rasterize the page (and you wouldn't need to understand that the image might have been scaled to be on the PDF) or extract the image (and track the transformation matrix on the image), and go through the steps listed above. This is non-trivial, but it's doable.
I think you might want to make your clients aware that none of the libraries you mentioned is completely free:
iTextSharp is AGPL-licensed, so you must release source code of your solution or buy a commercial license.
PDFcompressNET is a commercial library.
pdftk is GPL-licensed, so you must release source code of your solution or buy a commercial license.
Docotic.Pdf is a commercial library.
Given all of the above, I assume I can drop the freeware requirement.
Docotic.Pdf can reduce the size of compressed and uncompressed PDFs to different degrees without introducing any destructive changes.
Gains depend on the size and structure of a PDF: for small files or files that are mostly scanned images the reduction might not be that great, so you should try the library with your files and see for yourself.
If you are most concerned about size, there are many images in your files, and you are fine with losing some of the quality of those images, then you can easily recompress existing images using Docotic.Pdf.
Here is code that makes all images bilevel and compresses them with fax compression:
static void RecompressExistingImages(string fileName, string outputName)
{
    using (PdfDocument doc = new PdfDocument(fileName))
    {
        foreach (PdfImage image in doc.Images)
            image.RecompressWithGroup4Fax();

        doc.Save(outputName);
    }
}
There are also RecompressWithFlate, RecompressWithGroup3Fax and RecompressWithJpeg methods.
The library will convert color images to bilevel ones if needed. You can specify deflate compression level, JPEG quality etc.
Docotic.Pdf can also resize big images (and recompress them at the same time) in a PDF. This might be useful if images in a document are actually bigger than needed or if the quality of the images is not that important.
Below is code that scales all images that have a width or height greater than or equal to 256. Scaled images are then encoded using JPEG compression.
public static void RecompressToJpeg(string path, string outputPath)
{
    using (PdfDocument doc = new PdfDocument(path))
    {
        foreach (PdfImage image in doc.Images)
        {
            // images that are used as masks or that have an attached mask are
            // not good candidates for recompression
            if (!image.IsMask && image.Mask == null && (image.Width >= 256 || image.Height >= 256))
                image.Scale(0.5, PdfImageCompression.Jpeg, 65);
        }

        doc.Save(outputPath);
    }
}
Images can also be resized to a specified width and height using one of the ResizeTo methods. Please note that ResizeTo won't try to preserve the aspect ratio of images; you should calculate the proper width and height yourself.
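For example, a small helper along these lines (the helper name is mine, not part of Docotic.Pdf) could compute a width/height pair that fits inside a target box while keeping the aspect ratio:
using System;

static void FitWithin(int width, int height, int maxWidth, int maxHeight,
                      out int newWidth, out int newHeight)
{
    // Scale by the tighter of the two constraints so the result fits the box.
    double scale = Math.Min((double)maxWidth / width, (double)maxHeight / height);
    newWidth = Math.Max(1, (int)Math.Round(width * scale));
    newHeight = Math.Max(1, (int)Math.Round(height * scale));
}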
Disclaimer: I work for Bit Miracle.
Using PdfSharp
public static void CompressPdf(string targetPath)
{
    using (var stream = new MemoryStream(File.ReadAllBytes(targetPath)) { Position = 0 })
    using (var source = PdfReader.Open(stream, PdfDocumentOpenMode.Import))
    using (var document = new PdfDocument())
    {
        var options = document.Options;
        options.FlateEncodeMode = PdfFlateEncodeMode.BestCompression;
        options.UseFlateDecoderForJpegImages = PdfUseFlateDecoderForJpegImages.Automatic;
        options.CompressContentStreams = true;
        options.NoCompression = false;

        foreach (var page in source.Pages)
        {
            document.AddPage(page);
        }

        document.Save(targetPath);
    }
}
Ghostscript is AGPL-licensed software that can compress PDFs. There is also an AGPL-licensed C# wrapper for it on GitHub.
You could use the GhostscriptProcessor class from that wrapper to pass custom commands to Ghostscript, like the ones found in this AskUbuntu answer describing PDF compression.
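If you go that route, a hedged sketch of driving the Ghostscript command line directly from C# might look like this; the executable path and the /ebook preset are assumptions you would adjust for your environment:
using System.Diagnostics;

static void CompressWithGhostscript(string inputPdf, string outputPdf)
{
    var psi = new ProcessStartInfo
    {
        // Adjust to wherever gswin64c.exe (or gs on Linux) lives on your machine.
        FileName = @"C:\Program Files\gs\gs10\bin\gswin64c.exe",
        Arguments = "-sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook " +
                    "-dNOPAUSE -dQUIET -dBATCH " +
                    "-sOutputFile=\"" + outputPdf + "\" \"" + inputPdf + "\"",
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using (var gs = Process.Start(psi))
    {
        gs.WaitForExit();
    }
}
Keep in mind that shipping Ghostscript alongside your program still carries the AGPL obligations discussed above.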

Saving a non-pow-2 dimension screen capture in Managed DirectX

I'm attempting to capture a rendered screen from a Managed DirectX application. Typically, the way to do this is as follows:
Surface renderTarget = device.GetRenderTarget(0);
SurfaceLoader.Save(snapshotName, ImageFileFormat.Bmp, renderTarget);
Which is (in my understanding) shorthand for something like:
Surface renderTarget = device.GetRenderTarget(0);
Surface destTarget = device.CreateOffscreenPlainSurface(ClientRectangle.Width, ClientRectangle.Height, graphicsSettings.WindowedDisplayMode.Format, Pool.SystemMemory);
device.GetRenderTargetData(renderTarget,destTarget);
SurfaceLoader.Save(snapshotName,ImageFileFormat.Bmp, destTarget);
The problem is that on older video cards which don't support non-power-of-two dimension textures, the above fails. I've tried a number of workarounds, but nothing seems to accomplish this seemingly simple task of saving arbitrary-dimensioned screen captures. For example, the following fails on new Bitmap() with an invalid parameter exception (note that this requires creating the device with PresentFlag.LockableBackBuffer):
Surface surf = m_device.GetRenderTarget(0);
GraphicsStream gs = surf.LockRectangle(LockFlags.ReadOnly);
Bitmap bmp = new Bitmap(gs);
bmp.Save(snapshotName, ImageFormat.Png);
surf.UnlockRectangle();
Any tips would be greatly appreciated...I've pretty much exhausted everything I can think of (or turn up on Google)...
Why not create a texture whose dimensions are the next highest power of 2 and then copy a sub-rect? It would get around your issues, even if the saved image has a whole load of blank space.
I'm surprised Bitmap has issues, to be honest. However, if that's the case, then the above will work even if it's not ideal.

Verify image sequence

Problem
Problem shaping
The image sequence position and size are fixed and known beforehand (it's not scaled). It will be quite short, a maximum of 20 frames, and in a closed loop. I want to verify (event-driven, by button click) that I have seen it before.
Let's say I have some image sequence, like:
http://img514.imageshack.us/img514/5440/60372aeba8595eda.gif
If it has been seen, I want to see the ID associated with it; if not, it will be analyzed and added as a new instance of an image sequence that has been seen. I have thought about this for quite a while, and I admit this might be a hard problem. I seem to be having a hard time putting this all together; can someone assist (in C#)?
Limitations and uses
I am not trying to recreate a copyright detection system, like the Content ID system YouTube has implemented (Margaret Gould Stewart at TED (link)). The image sequence can be thought of like a (.gif) file, but it is not one, and there is no direct way to get its binary. A similar method could be used to avoid duplicates in an "image sharing database", but that is not what I am trying to do.
My effort
Gaussian blur
Mathematica functions to generate Gaussian blur kernels:
getKernel[L_] := Transpose[{L}].{L}/(Total[Total[Transpose[{L}].{L}]])
getVKernel[L_] := L/Total[L]
It turns out that it is much more efficient to use 2 passes of a vector kernel than a matrix kernel. The kernels are based on the odd-length rows of Pascal's triangle:
{1d/4, 1d/2, 1d/4}
{1d/16, 1d/4, 3d/8, 1d/4, 1d/16}
{1d/64, 3d/32, 15d/64, 5d/16, 15d/64, 3d/32, 1d/64}
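For reference, a rough C# sketch of the two-pass (separable) blur those kernels are meant for, operating on a grayscale byte[,] with edge pixels clamped; the method name and array layout are my own assumptions:
using System;

static byte[,] SeparableBlur(byte[,] src, double[] kernel)
{
    int h = src.GetLength(0), w = src.GetLength(1), r = kernel.Length / 2;
    var tmp = new double[h, w];
    var dst = new byte[h, w];

    // Horizontal pass.
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0;
            for (int k = -r; k <= r; k++)
            {
                int xx = Math.Min(w - 1, Math.Max(0, x + k));
                sum += kernel[k + r] * src[y, xx];
            }
            tmp[y, x] = sum;
        }

    // Vertical pass with the same 1-D kernel.
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0;
            for (int k = -r; k <= r; k++)
            {
                int yy = Math.Min(h - 1, Math.Max(0, y + k));
                sum += kernel[k + r] * tmp[yy, x];
            }
            dst[y, x] = (byte)Math.Round(sum);
        }

    return dst; // e.g. SeparableBlur(gray, new[] { 0.25, 0.5, 0.25 });
}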
Data input, hashing, grayscaling and lightboxing
Examples of source bits that might be useful:
Lightbox around the known rectangle: FrameX
Using MD5CryptoServiceProvider to get an MD5 hash of the content inside the known rectangle at the moment.
Using ColorMatrix to grayscale the image
Source example (GUI; code):
Get the current content inside the defined rectangle.
private Bitmap getContentBitmap() {
    Rectangle r = f.r;
    Bitmap hc = new Bitmap(r.Width, r.Height);
    using (Graphics gf = Graphics.FromImage(hc)) {
        gf.CopyFromScreen(r.Left, r.Top, 0, 0,
            new Size(r.Width, r.Height), CopyPixelOperation.SourceCopy);
    }
    return hc;
}
Get the MD5 hash of a bitmap.
private byte[] getBitmapHash(Bitmap hc) {
    return md5.ComputeHash(c.ConvertTo(hc, typeof(byte[])) as byte[]);
}
Get grayscale of the image.
public static Bitmap getGrayscale(Bitmap hc) {
    Bitmap result = new Bitmap(hc.Width, hc.Height);
    ColorMatrix colorMatrix = new ColorMatrix(new float[][] {
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0.5f, 0.5f, 0.5f, 0, 0 },
        new float[] { 0, 0, 0, 1, 0 },
        new float[] { 0, 0, 0, 0, 1 } });
    using (Graphics g = Graphics.FromImage(result)) {
        ImageAttributes attributes = new ImageAttributes();
        attributes.SetColorMatrix(colorMatrix);
        g.DrawImage(hc, new Rectangle(0, 0, hc.Width, hc.Height),
            0, 0, hc.Width, hc.Height, GraphicsUnit.Pixel, attributes);
    }
    return result;
}
I think you have a few issues with this:
Not all image sequences [videos] are equal [but many are similar]
Where is your data coming from?
How will you represent the data related to your viewings?
Size of the data
Issue #1:
Many images can differ slightly by compression, watermarking, missing frames, and added clips. I would suggest sampling the video. For example, you may want to consider sub-sampling small sections of the images in the video. Additionally, to avoid noisy images and issues with lossy compression algorithms, you may want to consider grayscaling the frames sampled and doing a Gaussian blur. [Gaussian because it's "more natural" (short answer)] Once you have enough sub-samples that you have good confidence of similarity to the video, store it in a database. With the samples you can hash them, or store them to do a % similarity later.
Issue #2
Your datasource is going to influence the tool kits, and libraries that you use.
I would suggest keeping this simple [keep it with GIFs and create a custom viewer; don't try to write a browser plugin while developing your logic].
Issue #3
Using something like Postgres [if there are a lot of large-sized objects] or SQLite is highly suggested for indexing, storing, and recalling past metadata.
Issue #4
The size of the data will have a huge determination on recall, sampling, querying the database, etc.
Overall advice: Don't bite off more than you can handle at this stage. Start small and then grow.
Also take a look at Computer Vision algorithms for more help on the object representation/recall.
The question itself is surely very interesting and challenging; however, there are many practical issues, as stated by @monksy.
The opportunistic pragmatist in me would take a step back, look at the big picture and see if there is another way to solve the problem. For example, if you are building some kind of "image sharing community" and want to avoid duplicates in the database, you could do a simple MD5 on the file (animated GIFs on the web are usually the same; it's rare that people modify them).
Another example: if you are analyzing scientific samples (like meteo sequences) it may be easier to directly embed some kind of hash in every file when generating them.
This depends on whether you only want to know whether you've seen an absolutely identical movie again, or whether you also want to identify movies that are very similar but have been changed a bit (made lighter, had a watermark added, compression changed, etc.).
In the first case, just take any type of hash of the file and use that (because the file will be identical on the binary level).
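For the identical-file case, a hedged sketch of such a hash is as small as this:
using System;
using System.IO;
using System.Security.Cryptography;

static string FileHash(string path)
{
    // Any stable hash works here, because identical files give identical bytes.
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(path))
        return BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
}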
In the second case (which I think is what you want) you have an interesting image processing problem on your hands. You could find yourself at the front lines of image processing science with this if you wanted to. If that is the case, I suggest you start reading about SURF and OpenCV, and continue on from there.
If you want to match very similar, but not identical, videos, and don't want to go the ultra-robust scientific route, then I'd suggest the following process:
Do the Gaussian blur you already do.
Divide each image into a few equally sized rectangles (you'd have to test for the best number, but I'd suggest you start with 9).
For each rectangle in each frame compute the full-colour histogram, then find the most occurring colour in that rectangle. This gives you 9*20 = 180 numbers. This is the "fingerprint" of this movie.
Find the most similar fingerprint in your database; if it is similar enough you already know about it, otherwise you don't.
Step 4 is a bit vague because I'm not really into this field. You are currently using an MD5 hash as a sort of fingerprint, but this is unsuitable in this case because slight differences in the input of a good cryptographic hashing function produce very large differences in the hash. This will mean that two very similar frames will have a totally different MD5 hash, so from the hash you'd never know they were similar.
As long as speed of database lookups is not an issue I'd just go for the sum of square differences as a measure of fingerprint similarity, and set a threshold on that to identify equal movies. However, this is not very fast for huge datasets, and in those cases you'd probably need to transform your fingerprint to something that will allow you to find similar fingerprints faster. One thing you could do here is start by selecting all known movies with very similar average colour for the entire video, then from that select the movies that have very similar average colour in each frame, and in the ones that remain at that point do the full rectangle-by-rectangle fingerprint match. But I'm sure there are even faster options for matching 180 numbers.
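A rough sketch of that fingerprint-and-compare idea; the grid size, the coarse colour quantization, and the threshold you would apply to Distance are all assumptions to tune:
using System.Collections.Generic;
using System.Drawing;

static List<Color> ComputeFingerprint(IEnumerable<Bitmap> frames, int grid)
{
    var fingerprint = new List<Color>();
    foreach (Bitmap frame in frames)
    {
        int cellW = frame.Width / grid, cellH = frame.Height / grid;
        for (int gy = 0; gy < grid; gy++)
        for (int gx = 0; gx < grid; gx++)
        {
            // Most occurring colour in this cell, coarsely quantized to 4 bits per channel.
            var histogram = new Dictionary<int, int>();
            for (int y = gy * cellH; y < (gy + 1) * cellH; y++)
            for (int x = gx * cellW; x < (gx + 1) * cellW; x++)
            {
                Color c = frame.GetPixel(x, y);
                int key = ((c.R >> 4) << 8) | ((c.G >> 4) << 4) | (c.B >> 4);
                histogram[key] = histogram.TryGetValue(key, out int n) ? n + 1 : 1;
            }

            int best = 0, bestCount = -1;
            foreach (var kv in histogram)
                if (kv.Value > bestCount) { best = kv.Key; bestCount = kv.Value; }

            fingerprint.Add(Color.FromArgb((best >> 8) << 4, ((best >> 4) & 0xF) << 4, (best & 0xF) << 4));
        }
    }
    return fingerprint; // grid*grid values per frame, e.g. 9 * 20 = 180 for the whole sequence
}

// Sum of squared differences between two fingerprints of equal length.
static double Distance(List<Color> a, List<Color> b)
{
    double sum = 0;
    for (int i = 0; i < a.Count; i++)
    {
        sum += (a[i].R - b[i].R) * (a[i].R - b[i].R)
             + (a[i].G - b[i].G) * (a[i].G - b[i].G)
             + (a[i].B - b[i].B) * (a[i].B - b[i].B);
    }
    return sum;
}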
Perhaps you can find a way to get a binary copy of the image data of each frame in a variable. Hash that data (md5?) and store each of the hashes. Then you can see if you've ever seen that hash before. If you haven't, it's a new frame.

Faking a thermal image camera with AForge or any other .NET solution (EmguCV, tbeta)

I am simulating a thermal camera effect. I have a webcam at a party pointed at people in front of a wall. I went with a background subtraction technique, and using the AForge blob counter I get blobs that I want to fill with gradient coloring. My problem is that GetBlobsEdgePoints doesn't return a sorted point cloud, so I can't use it with, for example, PathGradientBrush from GDI+ to simply draw gradients.
I'm looking for a simple, fast algorithm to trace blobs into a path (it can make mistakes).
A way to track blobs received by the blob counter.
A suggestion for some other way to simulate the effect.
I took a quick look at Emgu.CV.VideoSurveillance but didn't get it to work (the examples are for v1.5 and I went with v2+), and I gave up because people on forums say it's slow.
Thanks for reading.
Sample code of AForge background removal:
Bitmap bmp = (Bitmap)e.VideoFrame.Clone();

if (backGroundFrame == null)
{
    backGroundFrame = (Bitmap)e.VideoFrame.Clone();
    difference.OverlayImage = backGroundFrame;
}

difference.ApplyInPlace(bmp);
bmp = grayscale.Apply(bmp);
threshold.ApplyInPlace(bmp);
Well, could you post a sample image of the result of GetBlobsEdgePoints? Then it might be easier to understand what types of image processing algorithms are needed.
1) You may try a greedy algorithm: first pick a point at random, mark that point as "taken", pick the closest point not marked as "taken", and so on (a minimal sketch of this is given after these suggestions).
You need to find suitable termination conditions. If there can be several disjoint paths, you need a definition of how far apart points need to be to belong to disjoint paths.
3) If you have a static background you can try to create a difference between two time-shifted images, like 200 ms apart. Just do a pixel-by-pixel difference and use abs(diff) as an index into your heat color map. That will give more of an edge-glow effect on moving objects.
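A minimal sketch of suggestion 1), assuming the List<IntPoint> that AForge's GetBlobsEdgePoints returns; the ordered result could then feed a GraphicsPath for the gradient fill:
using System.Collections.Generic;
using AForge;

static List<IntPoint> OrderGreedy(List<IntPoint> edgePoints)
{
    var remaining = new List<IntPoint>(edgePoints);
    var path = new List<IntPoint>();
    if (remaining.Count == 0) return path;

    // Start from an arbitrary point and repeatedly take the closest unused one.
    IntPoint current = remaining[0];
    remaining.RemoveAt(0);
    path.Add(current);

    while (remaining.Count > 0)
    {
        int bestIndex = 0;
        long bestDist = long.MaxValue;
        for (int i = 0; i < remaining.Count; i++)
        {
            long dx = remaining[i].X - current.X;
            long dy = remaining[i].Y - current.Y;
            long d = dx * dx + dy * dy;
            if (d < bestDist) { bestDist = d; bestIndex = i; }
        }
        current = remaining[bestIndex];
        remaining.RemoveAt(bestIndex);
        path.Add(current);
    }
    return path;
}
This is O(n^2) and will happily jump across gaps, so it is only a starting point, in line with "can make mistakes" above.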
This is the direction I'm going to take (looks best for now):
Define a set of points on the blob by my own logic (skin-colored blobs should be warmer, etc.)
Draw gradients around those points:
GraphicsPath gp = new GraphicsPath();
var rect = new Rectangle(CircumferencePoint.X - radius, CircumferencePoint.Y - radius, radius * 2, radius * 2);
gp.AddEllipse(rect);
GradientShaper = new PathGradientBrush(gp);
GradientShaper.CenterColor = Color.White;
GradientShaper.SurroundColors = surroundingColors;
drawBmp.FillPath(GradientShaper, gp);
Mask those gradients with the blob shape:
blobCounter.ExtractBlobsImage(bmp, blob, true);
mask.OverlayImage = blob.Image;
mask.ApplyInPlace(rslt);
Colorize with color remapping.
Thanks for the help @Albin.

Image manipulation in C#

I am loading a JPG image from hard disk into a byte[]. Is there a way to resize the image (reduce resolution) without the need to put it in a Bitmap object?
thanks
There are always ways, but whether they are better... A JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to look into understanding the JPEG spec, creating some kind of parser, etc. It might be that there are shortcuts that can be used without needing to do full interpretation of the original JPG, but I think it would be a bad idea.
Oh, and not to forget, there are different file formats for JPG apparently (JFIF and EXIF) that you will need to understand...
I'd think very hard before avoiding objects that are specifically designed for the sort of thing you are trying to do.
A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, it does a fine job decoding .jpeg files. The result is a Bitmap object, you can't get around that.
And it supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side effect of the Graphics.InterpolationMode setting. It tries to improve the appearance of the reduced image by running the pixels through a filter. The Bicubic filter does an excellent job of it.
Looks great to the human eye, doesn't look so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. These blended pixel values however make it harder on the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower quality filter. Produces a poorer image, but easier to compress. I doubt you'll appreciate the result though.
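To make the tinkering concrete, here is a hedged sketch that resizes with a chosen interpolation mode and saves with an explicit JPEG quality; the quality value is an arbitrary starting point:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.Linq;

static void ResizeJpeg(string inputPath, string outputPath, int newWidth, int newHeight, long quality)
{
    using (var source = new Bitmap(inputPath))
    using (var resized = new Bitmap(newWidth, newHeight))
    {
        using (var g = Graphics.FromImage(resized))
        {
            // Bicubic looks best but compresses worst; try NearestNeighbor to compare file sizes.
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, newWidth, newHeight);
        }

        // Explicit JPEG quality instead of the encoder default.
        ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
        var parameters = new EncoderParameters(1);
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
        resized.Save(outputPath, jpegCodec, parameters);
    }
}
A call such as ResizeJpeg("in.jpg", "out.jpg", 800, 600, 75L) lets you compare how the filter and the quality setting trade off against file size.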
Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent quality resizing involves using an interpolation/extrapolation algorithm; it can't just be "pick out every nth pixel", unless you can settle for nearest neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height) {
    var inStream = new MemoryStream(data);
    var outStream = new MemoryStream();

    var bmp = System.Drawing.Bitmap.FromStream(inStream);
    var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero);
    th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);

    return outStream.ToArray();
}
