Our software needs to produce variable-sized reports, which can easily move past 100 pages. Several of these pages contain large images/Bitmaps.
Is there a reliable way to prevent the overall report from consuming all available memory? Once we have enough pages being generated, the app almost never finishes creating the report without running out of memory. Most of the memory is consumed by Bitmaps that we cannot release. (Attempting to dispose of them before the report is complete causes the report generation to fail.)
John
Have you tried the cache-to-disk feature in ActiveReports?
http://helpcentral.componentone.com/nethelp/AR7Help/OnlineEn/CacheToDiskAndResourceStorage.html
More details here:
http://helpcentral.componentone.com/nethelp/AR7Help/OnlineEn/GrapeCity.ActiveReports.Document.v7~GrapeCity.ActiveReports.Document.SectionDocument~CacheToDisk.html
Set this up prior to running the report. For example:
report.Document.CacheToDisk = true;
report.Run();
I think you could try splitting your report into smaller chunks, running them separately, and then merging them into a single report once all pages have been generated, as sketched below.
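A rough sketch of that idea, assuming the report exposes its pages through Document.Pages and that the collection supports AddRange; CreateChunkReport is a hypothetical helper, and the exact merge API may differ between ActiveReports versions:
// Hypothetical sketch: generate the report in smaller chunks and merge the pages.
// CreateChunkReport is an assumed helper that builds a report limited to one chunk
// of the data; Pages.AddRange is an assumption about the ActiveReports API.
var master = CreateChunkReport(0);
master.Document.CacheToDisk = true;
master.Run();

for (int chunk = 1; chunk < chunkCount; chunk++)
{
    var part = CreateChunkReport(chunk);
    part.Document.CacheToDisk = true;
    part.Run();
    master.Document.Pages.AddRange(part.Document.Pages); // assumed merge call
}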
One more suggestion, in addition to setting the CacheToDisk property of ActiveReports to true, is to use Image.FromStream instead of Image.FromFile to load the images.
Image.FromFile keeps file handles open, which can eventually lead to an OutOfMemoryException.
using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    using (Image original = Image.FromStream(fs))
    {
        // ... work with the image here ...
    }
}
Calling Dispose() explicitly, wrapping the bitmap in a using statement, or setting the reference to null generally doesn't solve the problem with Image.FromFile.
So if your app runs for a long time and opens a lot of files, consider using Image.FromStream() instead.
Regards,
Mohita
Related
I'm trying to figure out whether there is something seriously wrong with the following code. It reads binary data from the database, turns it into a picture and associates it with an Animal record object.
For each row (record of an animal):
byte[] ba = (byte[])x.ItemArray[1]; // reading binary from a DB row
using (MemoryStream m = new MemoryStream(ba))
{
    Image i = Image.FromStream(m); // exception thrown occasionally
    c.Photo = i;
    listOfAnimals.Add(c);
}
First of all, with 18 pictures loaded (the JPG files total about 105 MB), the running app uses 2 GB of memory. With no pictures loaded, it uses only 500 MB.
The exception is often raised at the marked point, and its source is System.Drawing.
Could anyone help me optimize the code or tell me what the problem is? I must have used some wrong functions...
According to the Image.FromStream Method documentation:
OutOfMemoryException
The stream does not have a valid image format.
Remarks
You must keep the stream open for the lifetime of the Image.
The stream is reset to zero if this method is called successively with the same stream.
For more information see: Loading an image from a stream without keeping the stream open and Returning Image using Image.FromStream
Try the following: create a method, ConvertByteArrayToImage, that converts the byte[] to an Image.
public static Image ConvertByteArrayToImage(byte[] buffer)
{
    using (MemoryStream ms = new MemoryStream(buffer))
    using (Image fromStream = Image.FromStream(ms))
    {
        // Copy into a new Bitmap so the returned Image no longer depends on the
        // stream, which the documentation says must otherwise stay open.
        return new Bitmap(fromStream);
    }
}
Then:
byte[] ba = (byte[])x.ItemArray[1]; //reading binary from a DB row
c.Photo = ConvertByteArrayToImage(ba);
listOfAnimals.Add(c);
Checking the documentation, one possible reason for out-of-memory exceptions is that the stream is not a valid image. If that is the case it should fail reliably for a given image, so check whether any particular source image is causing the issue; one way to do that is sketched below.
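A quick, illustrative check (rows stands in for whatever collection of DataRow objects you iterate, and x.ItemArray[0] is assumed here to be a record identifier; requires System.Data, System.Drawing and System.IO):
// Try to decode each record's bytes and report the records that fail.
foreach (DataRow x in rows)
{
    byte[] ba = (byte[])x.ItemArray[1];
    try
    {
        using (var ms = new MemoryStream(ba))
        using (var img = Image.FromStream(ms))
        {
            // decoded fine
        }
    }
    catch (OutOfMemoryException)
    {
        Console.WriteLine("Record {0} does not hold a valid image", x.ItemArray[0]);
    }
}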
Another possibility is that you simply run out of memory. JPEG typically achieves around 10:1 compression, so 105 MiB of compressed data can easily need more than 1 GiB once decoded. I would recommend switching to x64 if at all possible; I see little reason not to do so today.
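As a rough worked example: a single 24-megapixel photo (6000 x 4000 pixels) decoded at 32 bits per pixel occupies about 6000 * 4000 * 4 = 96,000,000 bytes, roughly 92 MiB in memory, regardless of how small the JPEG file on disk is.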
There could also be a memory leak; the best way to investigate that would be with a memory profiler. The leak might be in just about any part of your code, so it is difficult to know without profiling.
You might also need to care about memory fragmentation. Large data blocks are stored on the large object heap, which is not compacted automatically. So after running for a while you might still have memory available, just not in any contiguous block. Again, switching to x64 would mostly solve this problem.
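If you are on .NET 4.5.1 or later you can also request a one-off compaction of the large object heap; this is a minimal sketch of a mitigation, not a substitute for moving to x64:
// Requires .NET 4.5.1+ and the System.Runtime namespace.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the LOH is compacted during this blocking collection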
Also, as mjwills comments, please do not store large files in the database. I just spent several hours recovering a huge database, something that would have been much faster if the images had been stored as files instead.
I am using SqlFileStream, and when constructing the object I am not sure which FileOptions and allocation size to use. I got this from another article, but it did not explain why. Can someone explain, or give me a recommendation?
thanks!
using (var destination = new SqlFileStream(serverPathName, serverTxnContext, FileAccess.Write, FileOptions.Asynchronous, 4096))
{
    await file.CopyToAsync(destination);
}
Since it appears you are trying to copy the file asynchronously, you probably want FileOptions.Asynchronous. It is the most responsive way to access the file, because you aren't bound to one thread. FileOptions.RandomAccess and FileOptions.SequentialScan both rely on the file cache when accessing the file; however, FileOptions.SequentialScan isn't guaranteed to cache optimally. As the names imply, the main difference between them is whether the file is accessed randomly or sequentially. FileOptions.WriteThrough skips intermediate caching and writes straight through to disk, which is safer if the machine crashes but typically slower.
Allocation size is essentially the block size on the drive. If you pass 0, the default size is used, which for an NTFS-formatted drive is 4 KB. 4096 bytes is 4 KB, so the person here is just making sure the block size is 4 KB; passing 0 amounts to the same thing on a standard NTFS volume, as in the sketch below.
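For example, using the same variables as in the question (serverPathName, serverTxnContext and file), passing 0 simply lets the default allocation size be used:
// 0 = use the default allocation size; Asynchronous matches the awaited CopyToAsync call.
using (var destination = new SqlFileStream(serverPathName, serverTxnContext, FileAccess.Write, FileOptions.Asynchronous, 0))
{
    await file.CopyToAsync(destination);
}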
I am currently working on an app which loads and uploads a few pictures from isolated storage and from a RESTful web service via streams. The pictures themselves aren't that big (500 KB to 2 MB per stream). But after a varying number of operations (e.g. displaying or downloading a list of pictures) I get an OutOfMemoryException.
I also made sure, that in every case the streams are correctly closed.
using (MemoryTributary mem = new MemoryTributary(imageBytes))
{
    bitmapImage.SetSource(mem);
    bitmapImage.CreateOptions = BitmapCreateOptions.IgnoreImageCache;
    mem.Close();
}
In this particular case we also used the downloadable MemoryTributary class, which is supposed to handle large data better than MemoryStream:
http://www.codeproject.com/Articles/348590/A-replacement-for-MemoryStream
Somehow I think the resources aren't being freed after use, even though the streams are closed.
OK, we got it now.
The UriSource also has to be set to null. In addition, the Source of the XAML image element has to be updated after setting it to null, because otherwise it seems to keep the picture even though the Source was set to null. A sketch of the cleanup follows.
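A minimal sketch of that cleanup, assuming a BitmapImage assigned to an Image element in XAML (imageControl is an illustrative name, not taken from the original code):
// Release the picture held by a XAML Image element (Windows Phone / Silverlight).
BitmapImage bmp = imageControl.Source as BitmapImage;
if (bmp != null)
{
    bmp.UriSource = null;   // drop the reference to the backing picture
}
imageControl.Source = null; // update the Source so the element lets go of the bitmap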
A little background: I've been experimenting with the FILE_FLAG_NO_BUFFERING flag when doing IO with large files. We're trying to reduce the load on the cache manager in the hope that, with background IO, we'll reduce the impact of our app on user machines. Performance is not an issue; staying behind the scenes as much as possible is a big issue. I have a close-to-working wrapper for doing unbuffered IO, but I ran into a strange issue: I get this error when I call Read with an offset that is not a multiple of 4.
Handle does not support synchronous operations. The parameters to the FileStream constructor may need to be changed to indicate that the handle was opened asynchronously (that is, it was opened explicitly for overlapped I/O).
Why does this happen? And doesn't this message contradict itself? If I add the Asynchronous file option I get an IOException ("The parameter is incorrect.").
I guess the real question is what these requirements, http://msdn.microsoft.com/en-us/library/windows/desktop/cc644950%28v=vs.85%29.aspx, have to do with multiples of 4.
Here is the code that demonstrates the issue:
FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;
int MinSectorSize = 512;
byte[] buffer = new byte[MinSectorSize * 2];
int i = 0;
while (i < MinSectorSize)
{
    try
    {
        using (FileStream fs = new FileStream(@"<some file>", FileMode.Open, FileAccess.Read, FileShare.None, 8, FileFlagNoBuffering | FileOptions.Asynchronous))
        {
            fs.Read(buffer, i, MinSectorSize);
            Console.WriteLine(i);
        }
    }
    catch { }
    i++;
}
Console.ReadLine();
When using FILE_FLAG_NO_BUFFERING, the documented requirement is that the memory address for a read or write must be a multiple of the physical sector size. In your code, you've allowed the address of the byte array to be randomly chosen (hence unlikely to be a multiple of the physical sector size) and then you're adding an offset.
The behaviour you're observing is that the call works if the offset is a multiple of 4. It is likely that the byte array is aligned to a 4-byte boundary, so the call is working if the memory address is a multiple of 4.
Therefore, your question can be rewritten like this: why is the read working when the memory address is a multiple of 4, when the documentation says it has to be a multiple of 512?
The answer is that the documentation doesn't make any specific guarantees about what happens if you break the rules. It may happen that the call works anyway. It may happen that the call works anyway, but only in September on even-numbered years. It may happen that the call works anyway, but only if the memory address is a multiple of 4. (It is likely that this depends on the specific hardware and device drivers involved in the read operation. Just because it works on your machine doesn't mean it will work on anybody else's.)
It probably isn't a good idea to use FILE_FLAG_NO_BUFFERING with FileStream in the first place, because I doubt that FileStream guarantees it will pass the address you give it, unmodified, to the underlying ReadFile call. Instead, use P/Invoke to call the underlying API functions directly; a sketch follows. You may also need to allocate your memory this way, because I don't know whether .NET provides any way to allocate memory with a particular alignment.
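A rough sketch of that P/Invoke route, not production code: the declarations below are standard Win32 signatures, VirtualAlloc is used because it returns page-aligned (and therefore sector-aligned) memory, and UnbufferedRead / ReadFirstSector are illustrative names.
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class UnbufferedRead
{
    const uint GENERIC_READ = 0x80000000;
    const uint OPEN_EXISTING = 3;
    const uint FILE_FLAG_NO_BUFFERING = 0x20000000;
    const uint MEM_COMMIT = 0x1000, MEM_RELEASE = 0x8000, PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(string name, uint access, uint share,
        IntPtr security, uint creation, uint flags, IntPtr template);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadFile(SafeFileHandle handle, IntPtr buffer, uint bytesToRead,
        out uint bytesRead, IntPtr overlapped);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr address, UIntPtr size, uint type, uint protect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualFree(IntPtr address, UIntPtr size, uint freeType);

    public static void ReadFirstSector(string path, int sectorSize)
    {
        // Page-aligned buffer, so the address satisfies the sector-alignment rule.
        IntPtr buffer = VirtualAlloc(IntPtr.Zero, (UIntPtr)(uint)sectorSize, MEM_COMMIT, PAGE_READWRITE);
        try
        {
            using (SafeFileHandle handle = CreateFile(path, GENERIC_READ, 0, IntPtr.Zero,
                       OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, IntPtr.Zero))
            {
                if (handle.IsInvalid) throw new Win32Exception();
                uint read;
                // Offset (0), length (sectorSize) and buffer address are all sector-aligned.
                if (!ReadFile(handle, buffer, (uint)sectorSize, out read, IntPtr.Zero))
                    throw new Win32Exception();
            }
        }
        finally
        {
            VirtualFree(buffer, UIntPtr.Zero, MEM_RELEASE);
        }
    }
}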
Just call CreateFile directly with FILE_FLAG_NO_BUFFERING and then close it before opening with FileStream to achieve the same effect.
I must admit that I never understood what streams are all about; I always thought they were an internet thing. But now I've run into some code that uses a stream to load a file locally, and I wonder if there is any advantage to using a stream over... well, the way I've always loaded files:
private void loadingfromStream()
{
    DirectoryInfo dirInfo = new DirectoryInfo("c:/");
    FileInfo[] fileInfoArr = dirInfo.GetFiles();
    FileInfo fileInfo = fileInfoArr[0];
    // creating a bitmap from a stream
    FileStream fileStream = fileInfo.OpenRead();
    Bitmap bitmap = new Bitmap(fileStream);
    Image currentPicture = (Image)bitmap;
}
vs.
private void loadingUsingImageClass()
{
    Image currentPicture = Image.FromFile(originalPath);
}
If you know your code will be loading the data from a file, use Image.FromFile - it's obviously rather simpler code, and it's just possible that there are optimizations within the framework when it's dealing with files.
Using a stream is more flexible, but unless you need that flexibility, go with the file solution.
If you want to deal with image files, the second solution is better. In your first snippet you have Bitmap bitmap = new Bitmap(fileStream); but an image file is not always a bitmap, it can also be JPEG, PNG, TIFF and so on, while Image.FromFile is well suited to dealing with image files of different formats.
Generally speaking, FileStream is the general-purpose choice for file access, while Image.FromFile is specific to image files. It depends on what kind of files you are going to deal with.
Well, a file is often treated as a stream as well; that's why the primary class for opening files is called FileStream. But there's a specific operating-system feature that can make dealing with image files a lot more efficient: memory-mapped files, a feature that maps the contents of a file directly into memory. There's some smoke and mirrors involved, but it essentially makes the file directly available without having to read it, and the memory holding the file data doesn't take space in the paging file.
Very efficient, and you get it for free when you use FromFile() or the Bitmap(string) constructor for an image in the .bmp format. Loading an image from a stream tends to require twice the amount of memory, which is always a problem with big images. A conceptual sketch of what memory mapping looks like at the OS level follows.
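Purely to illustrate what memory mapping means at the OS level (GDI+ does this internally for .bmp files; you would not normally do it yourself), a sketch using System.IO.MemoryMappedFiles, where path is any local file path:
// The file's contents are mapped into the process address space and paged in on
// demand; nothing is copied up front, and the data is backed by the file itself.
using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
using (var accessor = mmf.CreateViewAccessor())
{
    byte firstByte = accessor.ReadByte(0); // touching the view faults the page in
}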
As an addition to Jon's answer:
As far as I can see, the two methods don't do the same thing either. The first gives you the first image found in "c:\", whereas the second just gives you an image from a path. So the added complexity of the first is not only because it is using streams.
This would be equivalent:
using (var fs = File.OpenRead(path))
using (var img = Image.FromStream(fs))
{
    //...
}
and in that case, it is certainly better to just do it with Image.FromFile, as Jon explained.