Unable to scan 3.5 GB image using C# TWAIN/WIA

I'm trying to scan an A3-size image at 2400 DPI to TIFF with an Epson scanner using C#, which will result in a 3.5 GB uncompressed TIFF. I've tried twain-cs, twaindotnet and ntwain as wrappers (which should use the 64-bit capable twaindsm.dll), as well as WIA.
In all cases, when telling TWAIN to scan to file, it fails just past the halfway point (the expected 2 GB mark) with an error that the driver doesn't have enough memory, unless I set it to save with JPEG compression (the Epson driver doesn't seem to offer lossless compression for photo formats).
When telling TWAIN to do a memory transfer, the full scan completes, but when I transfer the memory and write it to a TIFF (using libTiff), the first half is okay (which I guess is around 2 GB) while the second half is just a single line repeated (I'm still not sure which line, as it doesn't seem to be the last line scanned). So even though no error is raised, something goes wrong after the 2 GB mark.
WIA gives me a hard limit of 1200 DPI, which I think is set in the Epson driver. Beyond that, I haven't been able to get WIA to transfer directly to file, as I can't find a way to set TYMED_FILE (and all the C++ code I find uses low-level minidriver calls). I also haven't found a way to get at the stream so I can write it to a file myself. Writing a minidriver would then leave me with an unsigned (and certainly not MS-certified) driver.
Any help or links that will point me in the right direction will be welcome!
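One guess, consistent with the symptoms: the "single repeated line after 2 GB" in the memory-transfer case is exactly what a 32-bit signed offset overflow looks like. If row offsets are computed in int arithmetic anywhere in the assembly code (buffer indices, strip offsets, copy lengths), they go negative past 2 GB and every subsequent row lands in the wrong place. A minimal sketch of the failure mode, with dimensions approximated for A3 at 2400 DPI:

```csharp
using System;

class OffsetOverflowDemo
{
    static void Main()
    {
        // A3 at 2400 DPI is roughly 28000 x 39700 pixels; 3 bytes/pixel
        // (24-bit RGB) gives ~3.3 GB of raw data.
        int bytesPerRow = 28000 * 3;         // 84000
        int row = 26000;                     // a row past the 2 GB mark

        long bad  = row * bytesPerRow;       // int * int stays int: wraps silently
        long good = (long)row * bytesPerRow; // widen BEFORE multiplying

        Console.WriteLine(bad);              // negative: wrapped past Int32.MaxValue
        Console.WriteLine(good);             // 2184000000, the correct byte offset
    }
}
```

Everything before the 2 GB mark lands correctly and everything after collapses onto wrong rows, which matches the observed output. Classic TIFF also caps offsets at 4 GB, so BigTIFF is worth considering for files this size.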


PostScript - Error when Using Ghostscript "pdfwrite"

I want to preface this with the understanding that I am working with legacy code, so I am having to live with less-than-ideal situations and am doing some quirky stuff because of that. Until I can get approval to rewrite, I will have to make do.
Context
Here is my situation. The application is a "simple" one in that it reports off of a SQL database. For better or worse, it builds its reports with PostScript, making use of Ghostscript DLLs embedded in the application directory. Here is the kicker: it has been requested that I include SSIS reports whose output is already in PDF format. For compatibility's sake, I need to convert these PDFs into PostScript, even though in most situations they will be converted right back to PDF later on. I know this is most likely bad design, but there is certain functionality that requires it, and it is what it is for the time being. I am using Ghostscript to handle the conversions.
Observed Behavior
The following behavior is what is observed once the PDF is converted to PS, passed through the application, and then converted back to PDF.
When using "-sDEVICE=pswrite", everything works except that the reports are compiled at poor resolution, no matter how I tweak the resolution option.
When using "-sDEVICE=ps2write", which I understand to be the currently accepted device, the PDF will not render back and produces the following error:
ERROR:
undefined
OFFENDING COMMAND:
U1!‘WVt92\a
STACK:
--nostringval--
20
The above error is only produced when using a report from a report server accessed via a web client. I can confirm that the PDF returns successfully and is not corrupt.
When running local SSIS packages in the application, the produced PDF is handled successfully.
When the original PDF is converted to PS using ps2write, the comments are populated as follows:
%!PS-Adobe-3.0
%%BoundingBox: 0 0 612 792
%%Creator: GPL Ghostscript 905 (ps2write)
%%LanguageLevel: 2
%%CreationDate: D:20171003154139-05'00'
%%Pages: 3
%%EndComments
pswrite produces:
%!PS-Adobe-3.0
%%Pages: (atend)
%%BoundingBox: 21 30 761 576
%%HiResBoundingBox: 21.600000 30.400000 760.566016 575.100000
%.....................................
%%Creator: GPL Ghostscript 905 (pswrite)
%%CreationDate: 2017/10/03 15:53:40
%%DocumentData: Clean7Bit
%%LanguageLevel: 2
%%EndComments
%%BeginProlog
Suspicion
I suspect either that the PDF uses an incompatible standard that cannot be converted to PostScript (for example, a newer PDF version that can't be handled), or that it contains something incompatible such as a font or image.
Is there any way to hunt this down for sure? Has anyone come across similar situations, and what was the solution? Any pointers as to what to look into or things to try?
To be honest, nobody is likely going to be able to help without seeing the original PDF file. Even a dummy file will be fine provided it exhibits the error.
However, the first thing that springs to mind is that you appear to be using Ghostscript 9.05. That is now five years old; the current release is (about to be) 9.22. There have been numerous fixes to ps2write in that time, at least 50 or more, and the first thing I would suggest is upgrading to see if the problem goes away.
Secondly, you haven't been clear on why you need to convert the PDF files to PostScript. If all you are doing is feeding them back through Ghostscript along with some additional PostScript in order to convert the assemblage into PDF, you do not need to turn the PDF files into PostScript first. Ghostscript is entirely capable of taking a mixture of PDF and PostScript files, so you can simply inject the PDF in between the PostScript from your SQL output to produce a single combined PDF.
This has a number of advantages. First and most obviously, you shouldn't get your conversion problem. Secondly, any construct in the PDF file which cannot be represented in PostScript (e.g. transparency) means the content will be rendered to an image and the PostScript will simply contain a big bitmap, just like the pswrite output; avoiding the conversion means that won't happen. Thirdly, it will be quicker than first converting all the PDF files to PostScript.
If you absolutely can't do that, then I would try the current code and see if it's better. If not, then you have found a bug, and I would suggest you report it at https://bugs.ghostscript.com; you will need to be able to supply an example file and command line, though.
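For example, if the application shells out to Ghostscript, combining the PostScript from the SQL reports with the SSIS PDFs directly could look something like this. This is a sketch: the helper names and the `gswin64c` executable name are mine (adjust for your install), but the `-sDEVICE=pdfwrite` invocation is standard Ghostscript usage.

```csharp
using System;
using System.Diagnostics;

static class GsCombine
{
    // Builds a Ghostscript command line that merges PostScript and PDF
    // inputs, in order, into a single output PDF. Ghostscript accepts
    // .ps and .pdf inputs mixed on one command line.
    public static string BuildArgs(string outputPdf, params string[] inputs)
    {
        string args = "-dBATCH -dNOPAUSE -dSAFER -sDEVICE=pdfwrite " +
                      "-sOutputFile=\"" + outputPdf + "\"";
        foreach (string input in inputs)
            args += " \"" + input + "\"";
        return args;
    }

    public static void Run(string outputPdf, params string[] inputs)
    {
        // "gswin64c" is assumed to be on PATH; use the console binary,
        // not gswin64, so the process exits when conversion finishes.
        var psi = new ProcessStartInfo("gswin64c", BuildArgs(outputPdf, inputs))
        {
            UseShellExecute = false
        };
        using (var p = Process.Start(psi))
            p.WaitForExit();
    }
}
```

Usage would be something like `GsCombine.Run("combined.pdf", "header.ps", "ssis_report.pdf", "footer.ps");`, producing one PDF without ever turning the SSIS PDF into PostScript.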

Compressing frames inside an AVI video

I've built a Windows Phone class to convert WriteableBitmaps into a full-frame (uncompressed) AVI. The videos are really huge. Is there a simple existing codec implementation, like a codec that just zips the images, or something that XORs each frame against the previous one and then zips the result?
Windows Phone doesn't allow any unsafe code, and most DLLs cannot be wrapped into a C# WP library, so I'm coding something from scratch. Note that I'm more efficient coding from scratch than studying existing C++ sources (I'm not a C++ coder), so what I'm looking for is information about a compressed AVI format that can be implemented without writing 100,000 lines. I've used AVI because the spec is simple.
[EDIT]
I've found something very interesting on CodeProject, from a 2004 article: a 100% C# source to convert frames to MPEG-1. Sadly it produces only I-frames, not P-frames, so files are about three times larger than an average MPEG-1 file.
[EDIT]
To describe my project further: I'll apply some effects to a captured movie, which will then be uploaded to YouTube or other websites. The user expects the exact resolution used on the phone, at least 25 frames/s, decent quality, and a short upload time. So I can't stop at MPEG-1 I-frames; I'll need to study prediction in MPEG-1.
I take this to be a continuation/re-post of your previous question (but with a few more details). As I mentioned in the comments of that post, there is a whole universe of video codecs out there. One reason for the proliferation is that a lot of people like to re-invent wheels. However, a more salient reason is that there are a lot of different use cases for video.
You seem to be asking for a lot, yet there are a lot of variables you have not presented:
You express a need for a video encoder that will run purely in software on a Windows Phone device, which is necessarily a fairly low-powered machine; do you need it to run in real time? I.e., do you expect a frame of video to be compressed almost immediately after you send in the uncompressed frame (within a few milliseconds)? Or can you let the device think about the compression for a while?
How large are the video frames? Are you doing screen capture on a WP device, i.e., computer-generated data? Or are you reading raw frames from the camera and hoping to compress those?
Following from the previous point, what type of video data? Computer-generated data will look better with a certain class of codecs. Photo-quality images (from camera) implies a different family of codecs.
What bitrate are you aiming for? If you have 1 second of video, what's the max amount of bytes it should occupy (or so you hope)?
Who is the eventual consumer of the video? In the last post, you indicated you wanted to upload to YouTube. If that's the case, you're in luck, since YouTube -- backed by FFmpeg -- handles nearly every codec in the universe, so you would have a lot of options.
I don't know much about Windows Phone programming. However, any WP device is going to technically have hardware video encoding capabilities. I've done some cursory Googling to determine if you get any access to that at the application programming level but I can't find any evidence that it's possible (and this SO answer states that the functionality is not there).
I hope to impress upon you that writing a video encoder is a LOT of work (look at my username; I know from whence I speak). Generally, they require quite a lot of CPU horsepower (and, consequently, battery power, especially when implemented in pure software). However, you have already made some guesses about a codec that uses standard zlib. In fact, there are a few video codecs based on straight zlib, namely MSZH and ZLIB, collectively the Lossless Codec Libraries. That wiki page has a basic bitstream description (disclosure: I operate that wiki site). I'm confident the WP libraries include access to zlib encoding, so this might be a starting point, and YouTube should be able to digest the resulting files.
There is also a video codec that combines XOR and zlib as you guessed (Dosbox Capture Codec), but it's probably not appropriate for your application.
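For what it's worth, the XOR-plus-deflate scheme can be prototyped entirely in managed code with no unsafe blocks. Assuming DeflateStream is available on your target (as suggested above), a sketch:

```csharp
using System;
using System.IO;
using System.IO.Compression;

static class DeltaCodec
{
    static byte[] Deflate(byte[] data)
    {
        using (var ms = new MemoryStream())
        {
            using (var ds = new DeflateStream(ms, CompressionMode.Compress))
                ds.Write(data, 0, data.Length);
            return ms.ToArray();   // valid after the MemoryStream is closed
        }
    }

    // XOR against the previous frame: unchanged pixels become zero bytes,
    // which Deflate compresses extremely well for mostly-static video.
    public static byte[] EncodeDelta(byte[] prev, byte[] curr)
    {
        var delta = new byte[curr.Length];
        for (int i = 0; i < curr.Length; i++)
            delta[i] = (byte)(curr[i] ^ prev[i]);
        return Deflate(delta);
    }

    // A key frame is just the raw frame deflated; decoders start here.
    public static byte[] EncodeKeyFrame(byte[] frame)
    {
        return Deflate(frame);
    }
}
```

Decoding is symmetric: inflate, then XOR against the previously decoded frame. This is the general shape of the MSZH/ZLIB-style codecs mentioned above, though their exact bitstreams differ; wrapping the output in AVI chunks with a matching FourCC is still up to you.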
Do the libraries provide access to standard JPEG (i.e., can it encode JPEG files)? Another option (depending on the video type) would be successive frames of still JPEG images stuffed in the AVI file. This is known as Motion JPEG or MJPEG. However, it's roughly equivalent in bitrate to intra-only MPEG-1, which you expressed as being inadequate.
I hope I have given you some ideas and useful avenues to pursue on your path to a solution.

Should I compress images captured by a webcam when sending through network

Recently, I finished my conference application. People can talk and watch to each other. Therefore I capture images (IntPtr of buffer converted to JPEG) from the webcam (DirectShow library). Right now I do not have any problems, since the program was used in a LAN only. But I'm planning to implement a internet version of it.
So my question is: Should I use something else than JPEG? Should I compare image x and image x+1 and only send differences? Should I use Motion-JPEG? (Sorry, I do not know anything about motion-jpeg, but it sounds relevant).
You are on the right track in recognizing that images change little from frame to frame, and that sending a sequence of JPEGs is not the way to go. I believe MJPEG sends a sequence of JPEGs and is a poor choice. I do not use C#, but I believe ffmpeg (a video compression library) has a C# wrapper.
FFmpeg is extremely fast, but it is not well documented and is pure ANSI C. I think a better approach in your case is, as you already thought, to compress the difference between image x and image x-1; this should be enough to provide a significant bandwidth saving.
You should also include a method to compress a whole frame every once in a while, or whenever the difference from the previous frame exceeds a certain threshold.
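That thresholding logic can be as simple as a mean-absolute-difference check. A sketch with illustrative numbers; the interval and threshold here are placeholders that would need tuning against real webcam footage:

```csharp
using System;

static class KeyFrameDecision
{
    // Mean absolute difference per byte between two equally sized frames.
    public static double MeanAbsDiff(byte[] prev, byte[] curr)
    {
        long sum = 0;
        for (int i = 0; i < curr.Length; i++)
            sum += Math.Abs(curr[i] - prev[i]);
        return (double)sum / curr.Length;
    }

    // Send a full frame when the scene changed too much for a cheap delta,
    // and periodically anyway, so a receiver that joins late (or drops a
    // packet) can resynchronize at the next key frame.
    public static bool NeedsKeyFrame(byte[] prev, byte[] curr,
                                     int frameIndex,
                                     int keyFrameInterval = 30,
                                     double threshold = 12.0)
    {
        if (frameIndex % keyFrameInterval == 0) return true;
        return MeanAbsDiff(prev, curr) > threshold;
    }
}
```

The periodic key frame matters over the internet: with delta-only encoding, a single lost frame corrupts every frame after it until the next full one.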

How do I check for corrupt TIFF images in C#?

I searched for how to check whether a TIFF file is corrupt. Most suggestions involve wrapping the Image.FromFile call in a try block: if it throws an OutOfMemoryException, it's corrupt. Has anyone used this? Is it effective? Any alternatives?
Please check out the freeware LibTiff.NET. It can check whether every page in a TIFF file is corrupt, and it handles even partially corrupt files without problems.
http://bitmiracle.com/libtiff/
Many TIFF files won't open in standard GDI+ .NET, at least if you're running on Windows XP; Windows 7 is much better. So any file not supported by GDI+ (i.e. fax, 16-bit grayscale, 48bpp RGB, tiled TIFF, pyramidal tiled TIFF, etc.) is then seen as 'corrupt'. And not just that: anything resulting in a bitmap over a few hundred megabytes on a 32-bit system will also cause an out-of-memory exception.
If your goal is to support as much of the TIFF standard as possible, please start from LibTiff (or its derivatives). I've used LibTiff.NET from BitMiracle (LGPL), which worked well for me. Please see my other posts.
Many of the TIFF utilities are also based on LibTIFF, and some of them have been ported to C#/.NET. This would be my suggestion if you want to validate TIFFs.
As for the TIFF specification suggested in other replies: of course that gives you bit-level control, but in my experience you won't need to go that low to have good TIFF support. The format is so versatile that it would cost you an enormous amount of time to build support from scratch.
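As an illustration of the LibTiff.NET route, a page-by-page check might look like this. This is a sketch against BitMiracle's LibTiff.Classic API, assuming strip-based images; tiled images would use ReadEncodedTile instead:

```csharp
using BitMiracle.LibTiff.Classic;

static class TiffValidator
{
    // Returns false if the file fails to open as a TIFF, or if any
    // strip on any page fails to decode (damaged image data).
    public static bool IsReadable(string path)
    {
        using (Tiff tif = Tiff.Open(path, "r"))
        {
            if (tif == null)
                return false;                    // not a TIFF at all

            do
            {
                int strips = tif.NumberOfStrips();
                var buffer = new byte[tif.StripSize()];
                for (int s = 0; s < strips; s++)
                {
                    // ReadEncodedStrip returns -1 on a decode failure.
                    if (tif.ReadEncodedStrip(s, buffer, 0, buffer.Length) < 0)
                        return false;
                }
            } while (tif.ReadDirectory());       // advance to the next page
            return true;
        }
    }
}
```

Unlike the Image.FromFile/OutOfMemoryException trick, this actually decodes every page's data, and it doesn't misreport GDI+-unsupported-but-valid files as corrupt.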
It will only be "corrupt" in the sense that the framework's methods can't open it.
There are some TIFF types that the framework cannot open (in my case I can't remember the exact one; I think it was one of the fax types).
That may be enough for you, if you are just looking at using the framework to manipulate images. After all, if you can't open it, you can't use it...
ImageMagick may give you more scope here.
Without looking at the TIFF, it may be difficult to tell visually whether it's corrupt. But if you have issues processing an image, just create a function that does a basic test for your type of processing and handle the error.

Displaying live video from a raw uncompressed byte source in C#: WPF vs. Win forms

I have a live 16-bit gray-scale video stream that is pushed through a ring-buffer in memory as a raw, uncompressed byte stream (2 bytes per pixel, 2^18 pixels/frame, 32 frames/sec). (This is coming from a scientific grade camera, via a PCI frame-grabber). I would like to do some simple processing on the video (clip dynamic range, colorize, add overlays) and then show it in a window, using C#.
I have this working using Windows Forms & GDI (for each frame, build a Bitmap object, write raw 32-bit RGB pixel values based on my post-processing steps, and then draw the frame using the Graphics class). But this uses a significant chunk of CPU that I'd like to use for other things. So I'm interested in using WPF for its GPU-accelerated video display. (I'd also like to start using WPF for its data binding & layout features.)
But I've never used WPF before, so I'm unsure how to approach this. Most of what I find online about video & WPF involves reading a compressed video file from disk (e.g. WMV), or getting a stream from a consumer-grade camera using a driver layer that Windows already understands. So it doesn't seem to apply here (but correct me if I'm wrong about this).
So, my questions:
Is there a straightforward, WPF-based way to play video from raw, uncompressed bytes in memory (even if just as 8-bit grayscale, or 24-bit RGB)?
Will I need to build DirectShow filters (or other DirectShow/Media Foundation-ish things) to get the post-processing working on the GPU?
Also, any general advice / suggestions for documentation, examples, blogs, etc that are appropriate to these tasks would be appreciated. Thanks!
Follow-up: After some experimentation, I found WriteableBitmap to be fast enough for my needs, and extremely easy to use correctly: Simply call WritePixels() and any Image controls bound to it will update themselves. InteropBitmap with memory-mapped sections is noticeably faster, but I had to write p/invokes to kernel32.dll to use it on .NET 3.5.
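For reference, the WriteableBitmap approach described above boils down to very little code. A sketch: the class name and buffer layout are illustrative, but WriteableBitmap and WritePixels are the standard WPF APIs (.NET 3.5 SP1 and later):

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// One WriteableBitmap is allocated once and reused for the life of the
// stream; any Image control whose Source is set to it repaints itself
// automatically after each WritePixels call.
class VideoSurface
{
    const int W = 512, H = 512;              // 2^18 pixels per frame

    readonly WriteableBitmap _bmp =
        new WriteableBitmap(W, H, 96, 96, PixelFormats.Bgr32, null);

    public ImageSource Source { get { return _bmp; } }

    // Call on the UI thread (marshal via Dispatcher from the grab thread).
    // 'pixels' is the post-processed 32-bit BGR buffer, W * H * 4 bytes,
    // i.e. the output of the clip/colorize/overlay steps.
    public void PushFrame(byte[] pixels)
    {
        _bmp.WritePixels(new Int32Rect(0, 0, W, H), pixels, W * 4, 0);
    }
}
```

The 16-bit-to-Bgr32 conversion (dynamic-range clip plus colorize lookup) still runs on the CPU here; only the final blit and scaling are GPU-accelerated, which matches the follow-up's experience that this is "fast enough" rather than free.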
My VideoRendererElement, though very efficient, does use some hackery to make it work. You may also want to experiment with the WriteableBitmap in .NET 3.5 SP1.
The InteropBitmap is very fast too; it's more efficient than the WriteableBitmap as it's not double-buffered, though it can be subject to video tearing.
Some further Google searching yielded this:
http://www.codeplex.com/VideoRendererElement
which I'm looking into now and which may be the right approach here. Of course, further thoughts/suggestions are still very much welcome.
