Image Steganography - c#

I'm working on a steganography application. I need to hide a message inside an image file and secure it with a password, without much difference in the file size. I am using the Least Significant Bit algorithm and could do it successfully with BMP files, but it does not work with JPEG, PNG or TIFF files. Does this algorithm work with these files at all? Is there a better way to achieve this? Thanks.

This heavily depends on the way the particular image format works. You'll need to dive into the internals of the format you want to use.
For JPEG, you could fiddle with the last bits of the DCT coefficients for each block.
For palette-based files (GIFs, and some PNGs), you could add extra colours to the palette that look identical to the existing ones, and encode information based on which one you use.
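To make the palette idea concrete, here is a conceptual C# sketch (my own illustration; no GIF reading or writing shown). It assumes the palette entry at index original has already been duplicated at index duplicate with identical RGB values, so swapping between the two indices changes nothing visually but carries one bit per pixel that uses that colour.

// Conceptual only: pixelIndexes is the image's palette-index data.
static void EmbedBitsInIndexedPixels(byte[] pixelIndexes, byte original, byte duplicate, bool[] bits)
{
    int bit = 0;
    for (int p = 0; p < pixelIndexes.Length && bit < bits.Length; p++)
    {
        // Only pixels that use the duplicated colour can carry data.
        if (pixelIndexes[p] == original || pixelIndexes[p] == duplicate)
        {
            pixelIndexes[p] = bits[bit] ? duplicate : original; // 0 -> original, 1 -> duplicate
            bit++;
        }
    }
}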

You'll have to distinguish between pixel-based formats (Bitmap) and palette-based formats (GIF), for which the steganographic technique is quite different. Also be aware that there are image formats like JPG that lose information in the compression process.
I'd also advise reading a general introduction to steganography that covers the different formats.

The Least Significant Bit approach does not work with JPEG and GIF images because you are storing the hidden information in the pixel data (raw image) before compression. A pixel p with value 0x123456 will probably not keep that value after compression, because the result depends on the compression rate and on neighbouring pixels. These are algorithms that do not merely compact the image (like a ZIP, which preserves the content) but change the colour distribution, texture and quality in order to reduce the number of bits needed to represent it.
PNG, however, compacts the image in the same sense as a ZIP file, preserving the content. You can therefore use Least Significant Bit embedding with PNG images, which is why the Wikipedia Steganography page shows its example in this format.
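As an illustration of that point, here is a minimal C# sketch (my own example, not from the posts above) that hides a UTF-8 message in the blue-channel LSBs of a 24/32-bit image and writes the result as PNG so the bits survive. The method name EmbedMessage is made up, and the password/encryption layer the question asks about is omitted.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Text;

static class LsbEmbed
{
    // Hides message in the least significant bit of each pixel's blue channel.
    // Assumes a non-indexed (24/32 bpp) source image; saves as PNG (lossless).
    public static void EmbedMessage(string inPath, string outPath, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message + "\0"); // '\0' marks the end
        using (var bmp = new Bitmap(inPath))
        {
            if (payload.Length * 8 > bmp.Width * bmp.Height)
                throw new ArgumentException("Message is too long for this image.");

            int bitIndex = 0;
            for (int y = 0; y < bmp.Height && bitIndex < payload.Length * 8; y++)
            {
                for (int x = 0; x < bmp.Width && bitIndex < payload.Length * 8; x++)
                {
                    Color c = bmp.GetPixel(x, y);
                    int bit = (payload[bitIndex / 8] >> (bitIndex % 8)) & 1; // LSB-first
                    int blue = (c.B & ~1) | bit;                             // overwrite blue LSB
                    bmp.SetPixel(x, y, Color.FromArgb(c.A, c.R, c.G, blue));
                    bitIndex++;
                }
            }
            bmp.Save(outPath, ImageFormat.Png);
        }
    }
}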

As long as the image format is lossless, you can use LSB steganography on the pixels (BMP, PNG, TIFF, PPM). If it is lossy, you have to try something else, because compression and subsequent decompression cause small changes in the pixels and the message is gone. In GIF, you can embed your message into the palette. In JPEG you change the DCT coefficients, a low-level frequency representation of the image, which can be read from and saved back to a JPEG file losslessly.
There is extensive research on steganography in JPEG. For an introduction, I personally recommend Steganography in Digital Media: Principles, Algorithms, and Applications by Jessica Fridrich - must-read material for serious attempts in steganography. The approaches for various image formats are discussed in depth there.
Also, LSB embedding is inefficient and very easily detectable; you should not use it. There are better algorithms, though they are usually math-heavy and complex. Look for "steganography embedding distortion" and "steganography codes".
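To pair with the embedding sketch above (and with this answer's caveat that plain LSB is easily detected), here is a hedged C# sketch of the extraction side, assuming the same blue-channel, LSB-first layout and a '\0' terminator; the class and method names are illustrative.

using System.Collections.Generic;
using System.Drawing;
using System.Text;

static class LsbExtract
{
    // Reads blue-channel LSBs, LSB-first within each byte, until a '\0' byte is seen.
    public static string ExtractMessage(string stegoPath)
    {
        using (var bmp = new Bitmap(stegoPath))
        {
            var bytes = new List<byte>();
            int current = 0, bitIndex = 0;
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width; x++)
                {
                    current |= (bmp.GetPixel(x, y).B & 1) << (bitIndex % 8);
                    bitIndex++;
                    if (bitIndex % 8 == 0)
                    {
                        if (current == 0)                      // terminator reached
                            return Encoding.UTF8.GetString(bytes.ToArray());
                        bytes.Add((byte)current);
                        current = 0;
                    }
                }
            }
            return Encoding.UTF8.GetString(bytes.ToArray());   // no terminator found
        }
    }
}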

Related

Extract colors from jpeg file (without Bitmap)

I'm currently trying to figure out, out of interest, how JPEGs are put together in depth. I found documents on the different sections (SOI, SOF, SOS, EOI, etc.), which are pretty straightforward, but not how to get a single pixel out of there.
My first thought was to make a small image, 2x2 for example, but with all the headers and sections it's still too big to isolate the pixel information without knowing the exact location and method to extract it. I'm sure it's compressed, but is there a way to get it out manually (as RGB)?
Anyone has a clue on how to do this?
Getting the value of a single pixel of a JPEG image requires parsing some (if not most) of those sections anyway.
There's a good step-by-step guide available at https://www.imperialviolet.org/binary/jpeg/ (though the code is in Haskell, so it might be moderately inscrutable to mere mortals) that explains the concepts behind turning a JPEG into a bunch of RGB values.
This is the only source I know that explains JPEG end-to-end:
https://www.amazon.com/gp/product/B01JXRY4R0/ref=dbs_a_def_rwt_bibl_vppi_i4
Parsing the structure of a JPEG stream is easy. Decoding a JPEG scan is very difficult and involves several compression steps. Plus there are two different types of scan that are commonly in use (progressive & sequential).
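As a rough illustration of the "parsing the structure is easy" part, here is a hedged C# sketch that walks the marker/segment layout of a typical JPEG up to the Start of Scan marker; it does not decode the entropy-coded scan data, and the class and method names are made up for the example.

using System;
using System.IO;

static class JpegSegments
{
    // Prints each JPEG segment marker and its declared length, stopping at SOS.
    public static void ListSegments(string path)
    {
        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            if (br.ReadByte() != 0xFF || br.ReadByte() != 0xD8)
                throw new InvalidDataException("Not a JPEG (missing SOI marker).");

            while (true)
            {
                if (br.ReadByte() != 0xFF)
                    throw new InvalidDataException("Lost marker alignment.");
                byte marker = br.ReadByte();
                while (marker == 0xFF) marker = br.ReadByte();   // 0xFF fill bytes are allowed

                if (marker == 0xD9) { Console.WriteLine("EOI"); break; }

                // Segment length is big-endian and includes the two length bytes themselves.
                int length = (br.ReadByte() << 8) | br.ReadByte();
                Console.WriteLine("Marker 0xFF{0:X2}, length {1}", marker, length);

                if (marker == 0xDA)   // SOS: compressed scan data starts right after this header
                {
                    Console.WriteLine("Start of Scan - entropy-coded data follows.");
                    break;
                }
                br.BaseStream.Seek(length - 2, SeekOrigin.Current);
            }
        }
    }
}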

Reduce & Optimize Scanned Documents File Size

My customer has about 100,000 scanned documents (jpg) which they work with every day. I want to know how I can reduce the file size of those images for faster file transfer and browsing.
The documents are scanned in black/white, saved in jpg format. They have a resolution of 150dpi and size of 1275x1753 (width x height). The main problem is their size which is between ~150kb and ~500kb which I think is too high for a black/white picture.
Is there a chance that I can reduce their size by changing the resolution, the color mode, or something similar? I tried playing around with Photoshop but had no luck.
The scanned documents are just for the sole purpose of reviewing, so I don't think they need much detail or the original picture size.
I'm going to write the program in C#, so tell me if there is a good image library for this purpose.
If your images are JPEG-compressed then they are either grayscale (8 bits per pixel) or full color (24 or 32 bits per pixel). I am not aware of any other JPEG types out there.
Given that, you probably won't get much benefit if you try to convert these images to other formats without changes to their size (number of pixels in both directions) and/or color space.
There is a possibility that JPEG 2000 might compress your images better than JPEG, but another lossy compression will introduce some more artifacts. You might try for yourself and see if this approach is acceptable for you. I can't recommend you any tools for this approach, though.
I would recommend trying to convert your images to bilevel ones (i.e. with only two colors) and compressing them with one of the FAX compression schemes (Group 3 or Group 4). You might try to reduce the image sizes at the same time, too. This can be easily achieved using the Docotic.Pdf library (Disclaimer: I work for the vendor of the library).
Please take a look at my answer to a question similar to yours. The answer shows how to use RecompressWithGroup4Fax and/or Scale methods to recompress existing images in PDF.
There is also valuable advice from @plinth about JBIG2 compression and other stuff. Well worth reading.
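For readers who would rather not use a PDF library, here is a rough sketch using plain System.Drawing (not the Docotic.Pdf approach referenced above) that converts a scanned JPEG to 1 bit per pixel and saves it as a Group 4 (CCITT T.6) compressed TIFF. The Clone() call applies a simple threshold, which is an assumption that suits clean black/white scans; noisy scans may need proper binarization first.

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static class BilevelTiff
{
    public static void SaveAsGroup4Tiff(string jpegPath, string tiffPath)
    {
        using (var source = new Bitmap(jpegPath))
        // Convert to 1 bit per pixel with GDI+'s default thresholding.
        using (var bilevel = source.Clone(
            new Rectangle(0, 0, source.Width, source.Height),
            PixelFormat.Format1bppIndexed))
        {
            ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
                .First(c => c.MimeType == "image/tiff");

            var parameters = new EncoderParameters(1);
            parameters.Param[0] = new EncoderParameter(
                Encoder.Compression, (long)EncoderValue.CompressionCCITT4);

            bilevel.Save(tiffPath, tiffCodec, parameters);
        }
    }
}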

Improving the quality of TIFF images

We have around 600,000 images that were converted from JPEG to TIFF files and uploaded to our FileNet repository. These TIFF images are multi-page, made by stitching multiple JPEGs together.
This was done a couple of years ago. Now we have started getting complaints from users that the quality of the TIFF images is not the same as it was when they were JPEGs.
Is there any way we can improve the quality of the TIFF files? If I have to re-migrate this data, can JPEGs have multiple pages? Please advise.
You can't just add quality to an image, so you can either try improving the appearance of the current information or you'll need to re-create the images to get better information.
To me, it sounds like the initial creation process is the most likely cause of the quality issue. How you create the image is important.
For example, I had a large number of photos I needed to re-size, so I used irfanview's batch convert and the results were horrible. Perhaps I had the settings wrong, I don't know.
I then tried using ImageMagick, and the results were great.
The point being, the conversion process isn't trivial.
If I were you, I'd look at how the images were created, experiment with different settings to determine what gives the best appearance, then re-create your photo gallery.
For photographic material, there's no real reason to use anything other than a jpeg if the target market is the general consumer.
Both TIFF and JPEG support lossless and lossy storage of your images. You mentioned that there was a previous conversion. The conversion was probably a lossy one, so you probably won't be able to recover the data to the way it was previously.
That said, if you have the original source images you might be able to get back to where you were. Regarding multi-image JPEGs, there is such a format (*.mpo), but I haven't seen it used before, so your mileage may vary.
You probably converted grayscale or color JPEGs to TIFF. The most common TIFF flavour is G4, which is only 1 bit per pixel, so 24 or 8 bits were converted to 1 bit and you will see a lot of image loss. There are multiple methods to improve image quality, but I would have to see the images first to suggest one.

resize picture c# for web

Goal:
I have lots of pictures in many sizes (both dimensions and file size)
I'd like to convert each of these files twice:
thumbnail-size pictures
pictures that will look OK on a web page and will be as close to full screen as possible, while keeping the file size under 500 KB.
HTML Questions:
A. What is the best file format to use (jpg, png or other) ?
B. What is the best configuration for web ... as small as possible file size with reasonable quality?
C# Questions
A. Is there a good way to achieve this conversion using C# code (if yes, how)?
Try the code in this small C# app for resizing and compressing graphics. I have reused this code in an ASP.NET site without too much work; hopefully you can make use of it. You can run the app to check that the quality fits your needs, etc.
http://blog.bombdefused.com/2010/08/bulk-image-optimizer-in-c-full-source.html
You can pass the image twice, specifying dimensions for a thumbnail, and then again for your display image. It can handle multiple formats (jpg, png, bmp, tiff, gif) and reduce file size significantly without losing noticeable quality.
On .jpg vs .png, generally jpg is better as you will get a smaller file size than with png. I've generally used this code passing a quality of 90%, which reduces file size significantly, but still looks perfect.
I think PNG is a better format for the web than JPEG. JPEG always uses lossy compression, though its degree is selectable: higher quality gives larger files, lower quality gives smaller files. PNG uses deflate (ZIP-style) compression, which is lossless and slightly more effective than LZW (slightly smaller files).
In C# you can use the System.Drawing namespace types to load, resize and convert images. This namespace wraps the GDI+ API.
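As a hedged sketch of that approach (the method name, size cap and quality value are my own choices, not from the linked articles): load the image, draw it scaled with high-quality interpolation, and save it as JPEG with an explicit quality setting. The same method can be called once for the thumbnail size and once for the display size.

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.Linq;

static class WebResize
{
    // Scales the image down so its width is at most maxWidth, then saves it as JPEG.
    public static void ResizeToJpeg(string inPath, string outPath, int maxWidth, long quality)
    {
        using (var src = Image.FromFile(inPath))
        {
            double scale = Math.Min(1.0, (double)maxWidth / src.Width);
            int w = (int)(src.Width * scale), h = (int)(src.Height * scale);

            using (var dst = new Bitmap(w, h))
            using (var g = Graphics.FromImage(dst))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.SmoothingMode = SmoothingMode.HighQuality;
                g.DrawImage(src, 0, 0, w, h);

                ImageCodecInfo jpeg = ImageCodecInfo.GetImageEncoders()
                    .First(c => c.MimeType == "image/jpeg");
                var parameters = new EncoderParameters(1);
                parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                dst.Save(outPath, jpeg, parameters);
            }
        }
    }
}

// Example usage:
// WebResize.ResizeToJpeg("photo.jpg", "thumb.jpg", 160, 80L);
// WebResize.ResizeToJpeg("photo.jpg", "display.jpg", 1600, 90L);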
A. For graphics I would use PNG and for photos JPG.
B. Configuration?
C. There are tons of posts that explain that:
http://www.codeproject.com/KB/GDI-plus/imgresizoutperfgdiplus.aspx
Resizing an Image without losing any quality

High quality JPEG compression with c#

I am using C# and want to save images using the JPEG format. However, .NET reduces the quality of the images and saves them with compression that is not good enough.
I want to save the files with their original quality and size. I am using the following code, but the compression and quality are not like the originals.
using System.Drawing;
using System.Drawing.Imaging;

// Load the source image.
Bitmap bm = (Bitmap)Image.FromFile(FilePath);

// Find the JPEG encoder among the installed codecs.
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
ImageCodecInfo ici = null;
foreach (ImageCodecInfo codec in codecs)
{
    if (codec.MimeType == "image/jpeg")
        ici = codec;
}

// Save with the quality encoder parameter set to 100.
EncoderParameters ep = new EncoderParameters();
ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)100);
bm.Save("C:\\quality" + x.ToString() + ".jpg", ici, ep);
I am archiving studio photos and quality and compression is very important. Thanks.
The .NET encoder built into the framework (at least the default Windows library provided by Microsoft) is pretty bad:
http://b9dev.blogspot.com/2013/06/nets-built-in-jpeg-encoder-convenient.html
Partial Update
I'm now using an approach outlined here, that uses ImageMagick for the resize then jpegoptim for the final compression, with far better results. I realize that's a partial answer but I'll expand on this once time allows.
Older Answer
ImageMagick is the best choice I've found so far. It performs relatively solid jpeg compression.
http://magick.codeplex.com/
It has a couple downsides:
It's better but not perfect. In particular, its Chroma subsampling is set to high detail at 90% or above, then jumps down to a lower detail level - one that can introduce a lot of artifacts. If you want to ignore subsampling, this is actually pretty convenient. But if you wanted high-detail subsampling at say, 50%, you have a larger challenge ahead. It also still won't quite hit quality/compression levels of Photoshop or Google PageSpeed.
It has a special deployment burden on the server that's very easy to miss. It requires a Visual Studio 2008 SDK lib installed. This lib is available on any dev machine with Visual Studio on it, but then you hit the server for the first time and it implodes with an obscure error. It's one of those lurking gotchas most people won't have scripted/automated, and you'll trip over it during some future server migration.
Oldest Answer
I dug around and came across a project to implement a C# JPEG encoder by translating a C project over:
http://www.codeproject.com/Articles/83225/A-Simple-JPEG-Encoder-in-C
which I've simplified slightly:
https://github.com/b9chris/ArpanJpegEncoder
It produces much higher quality JPEGs than the .NET built-in encoder, but still not as good as Gimp's or Photoshop's. File sizes also tend to be larger.
BitMiracle's implementation is practically identical to the .Net built-in - same quality problems.
It's likely that just wrapping an existing open source implementation, like Google's jpeg_optimizer in PageSpeed Tools - seemingly libjpeg underneath, would be the most efficient option.
Update
ArpanJpegEncoder appears to have issues once it's deployed - maybe I need to increase the trust level of the code, or perhaps something else is going on. Locally it writes images fine, but once deployed I get a blank black image from it every time. I'll update if I determine the cause. Just a warning to others considering it.
It looks like you're setting the quality to 100%. That means that there will be no compression.
If you change the compression level (80, 50, etc.) and you're unsatisfied with the quality, you may want to try a different image library. LEADTools has a good (non-free) engine.
UPDATE: As a commenter mentioned, 100% quality still does not mean lossless compression when using JPEG. Loading the image, doing something to it, and then saving it again will ultimately result in image degradation. If you need to alter and save an image without losing any of the data you need to use a lossless format such as TIFF, PNG or BMP. I'd go with compressed TIFF (since it's still lossless even though it's compressed) or PNG.
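A small sketch of that suggestion, assuming GDI+/System.Drawing (the class and method names are illustrative): save the working copy as PNG, or as TIFF with LZW compression; both round-trip the pixels without loss.

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static class LosslessSave
{
    public static void Save(Image image, string basePath)
    {
        // PNG is always lossless under GDI+.
        image.Save(basePath + ".png", ImageFormat.Png);

        // TIFF with LZW: compressed, but still lossless.
        ImageCodecInfo tiff = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        var parameters = new EncoderParameters(1);
        parameters.Param[0] = new EncoderParameter(
            Encoder.Compression, (long)EncoderValue.CompressionLZW);
        image.Save(basePath + ".tif", tiff, parameters);
    }
}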
Compression and quality are always a trade off.
JPEGs are always going to be lossy.
You may want to consider using PNG and minifying the files using PNGCrush or PNGauntlet
Regarding the setup of the compression level in .NET, please check this link (everything included): http://msdn.microsoft.com/en-us/library/bb882583.aspx
Regarding your question:
Usually you will save the image uploaded by users as PNG, then use this PNG as the base to generate your JPGs in different sizes (and you put a watermark ONLY on the JPGs, never on the original PNG!).
The advantage of this is that if you later change your image dimensions for your platform, you still have the original PNG saved and can recompute any new image sizes from it.
"It must save the file like its original quality and size"
That doesn't make a lot of sense. When you are using lossy compression you are going to lose some information by definition. The point of compressing an image is to reduce the file size. If you need high quality and JPEG isn't doing it for you, you may have to go with some type of lossless compression, but your file sizes will not be reduced by much. You could always try using the 'standard' library for compressing to JPEG (libjpeg) and see if that gives you any different results (I doubt it, but I don't know what .NET is using under the hood).
JPEG compression by its very nature reduces quality. Perhaps you should look into file compression instead, such as #ziplib. You may be able to get reasonable compression over a group of files.
