I have an application that works with Enhanced Metafiles.
I am able to create them, save them to disk as .emf and load them again no problem.
I do this by using the gdi32.dll methods and the DllImport attribute.
However, to enable Version Tolerant Serialization I want to save the metafile in an object along with other data.
This essentially means that I need to serialize the metafile data as a byte array and then deserialize it again in order to reconstruct the metafile.
The problem I have is that the deserialized data would appear to be corrupted in some way, since the method that I use to reconstruct the Metafile raises a "Parameter not valid exception".
At the very least the pixel format and resolutions have changed.
The code used is below.
[DllImport("gdi32.dll")]
public static extern uint GetEnhMetaFileBits(IntPtr hemf, uint cbBuffer, byte[] lpbBuffer);
[DllImport("gdi32.dll")]
public static extern IntPtr SetEnhMetaFileBits(uint cbBuffer, byte[] lpBuffer);
[DllImport("gdi32.dll")]
public static extern bool DeleteEnhMetaFile(IntPtr hemf);
The application creates a metafile image and passes it to the method below.
private byte[] ConvertMetaFileToByteArray(Image image)
{
    byte[] dataArray = null;
    Metafile mf = (Metafile)image;
    IntPtr enhMetafileHandle = mf.GetHenhmetafile();
    // First call with a null buffer returns the required buffer size.
    uint bufferSize = GetEnhMetaFileBits(enhMetafileHandle, 0, null);
    if (enhMetafileHandle != IntPtr.Zero)
    {
        dataArray = new byte[bufferSize];
        // Second call copies the metafile bits into the managed buffer.
        GetEnhMetaFileBits(enhMetafileHandle, bufferSize, dataArray);
    }
    DeleteEnhMetaFile(enhMetafileHandle);
    return dataArray;
}
At this point the dataArray is inserted into an object and serialized using a BinaryFormatter.
The saved file is then deserialized again using a BinaryFormatter and the dataArray retrieved from the object.
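For reference, the round trip looks roughly like the sketch below. The SerializableDocument container and its fields are only illustrative stand-ins for my real object; the BinaryFormatter calls themselves are the standard .NET API (System.Runtime.Serialization.Formatters.Binary).

[Serializable]
public class SerializableDocument          // hypothetical container, not the real class
{
    public byte[] MetafileBytes;           // output of ConvertMetaFileToByteArray
    public string OtherData;               // placeholder for the "other data"
}

// Serialize the container to disk
var doc = new SerializableDocument { MetafileBytes = dataArray, OtherData = "example" };
using (FileStream fs = new FileStream("document.bin", FileMode.Create))
{
    new BinaryFormatter().Serialize(fs, doc);
}

// Deserialize it again and pull the byte array back out
using (FileStream fs = new FileStream("document.bin", FileMode.Open))
{
    var loaded = (SerializableDocument)new BinaryFormatter().Deserialize(fs);
    byte[] roundTripped = loaded.MetafileBytes;   // fed to ConvertByteArrayToMetafile
}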
The dataArray is then used to reconstruct the original Metafile using the following method.
public static Image ConvertByteArrayToMetafile(byte[] data)
{
    Metafile mf = null;
    try
    {
        IntPtr hemf = SetEnhMetaFileBits((uint)data.Length, data);
        mf = new Metafile(hemf, true);
    }
    catch (Exception ex)
    {
        System.Windows.Forms.MessageBox.Show(ex.Message);
    }
    return (Image)mf;
}
The reconstructed metafile is then saved to disk as a .emf (Model), at which point it can be accessed by the Presenter for display.
private static void SaveFile(Image image, String filepath)
{
    try
    {
        byte[] buffer = ConvertMetaFileToByteArray(image);
        File.WriteAllBytes(filepath, buffer); // will overwrite the file if it exists
    }
    catch (Exception ex)
    {
        System.Windows.Forms.MessageBox.Show(ex.Message);
    }
}
The problem is that the save to disk fails. If this same method is used to save the original Metafile before it is serialized, everything is OK, so something is happening to the data during serialization/deserialization.
Indeed, if I check the Metafile properties in the debugger I can see that the ImageFlags, PropertyID, resolution and pixel format all change:
the original Format32bppRgb changes to Format32bppArgb
the original resolution of 81 changes to 96
I've trawled through Google and SO and this has helped me get this far, but I'm now stuck.
Does anyone have enough experience with Metafiles / serialization to help?
EDIT: If I serialize/deserialize the byte array directly (without embedding in another object) I get the same problem.
I have tried disposing objects and using the 'using' statement to prevent high memory usage, but it perplexes me that my memory doesn't get released.
Below is the method where I save the bitmap pictures into an Excel worksheet.
I ran 2 simulations:
1. I deleted the bitmap list before it comes in here, and it uses 200 MB+.
2. I loaded the bitmap list, and it uses 600 MB+ (normal) before this method. After entering the loop in this method, it adds another 600 MB, totalling 1.2 GB. After exiting this method, it goes back down to 600 MB+. What am I missing, given that I feel the memory should only be around 200-300 MB?
In the code, I used 2 'using' statements to enable auto dispose of the image and the stream.
Thank you for helping!
private void SaveBitMapIntoExcelSht(ref List<Bitmap> bmpLst, IXLWorksheet wksheet, int pixH)
{
    string inputCell;
    using (MemoryStream stream = new MemoryStream())
    {
        for (int i = 0; i < bmpLst.Count; i++)
        {
            Console.WriteLine(GC.GetTotalMemory(true));
            inputCell = "A" + (30).ToString();
            // Convert bitmap to hbitmap and store as a stream to save directly into the excel file.
            using (Image image = Image.FromHbitmap(bmpLst[i].GetHbitmap()))
            {
                Console.WriteLine(GC.GetTotalMemory(true));
                // Save image to stream.
                image.Save(stream, ImageFormat.Png);
                IXLPicture pic = wksheet.AddPicture(stream, XLPictureFormat.Png);
                pic.MoveTo(wksheet.Cell(inputCell));
                pic.Delete();
                pic.Dispose();
                image.Dispose();
                GC.Collect();
            }
        }
    }
    foreach (var bmp in bmpLst)
    {
        bmp.Dispose();
    }
    bmpLst.Clear();
    Dispose();
    GC.Collect();
    Console.WriteLine(GC.GetTotalMemory(true));
}
EDIT: ANSWER
For those who are interested, the code below is what worked.
Previously I got the hbitmap in the using statement, but there wasn't a reference to it, so I created a var handle so that I can delete it.
Add this to your class:
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
public static extern bool DeleteObject(IntPtr hObject);
using (MemoryStream stream = new MemoryStream())
{
    for (int i = 0; i < bmpLst.Count; i++)
    {
        Console.WriteLine(GC.GetTotalMemory(true));
        inputCell = "A" + (i * numberOfCells + 1).ToString();
        //using (Image image = Image.FromHbitmap(bmpLst[i].GetHbitmap()))
        var handle = bmpLst[i].GetHbitmap();
        // Convert bitmap to hbitmap and store as a stream to save directly into the excel file.
        using (Image image = Image.FromHbitmap(handle))
        {
            // Save image to stream.
            image.Save(stream, ImageFormat.Png);
            IXLPicture pic = wksheet.AddPicture(stream, XLPictureFormat.Png);
            pic.MoveTo(wksheet.Cell(inputCell));
            try
            {
                var source = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(handle, IntPtr.Zero, Int32Rect.Empty, System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
            }
            finally
            {
                DeleteObject(handle);
            }
        }
    }
}
From the documentation of GetHbitmap:
You are responsible for calling the GDI DeleteObject method to free the memory used by the GDI bitmap object. For more information about GDI bitmaps, see Bitmaps in the Windows GDI documentation.
and from the documentation of FromHbitmap:
The FromHbitmap method makes a copy of the GDI bitmap; so you can release the incoming GDI bitmap using the GDI DeleteObject method immediately after creating the new Image.
I.e. you are creating a GDI object that you are never removing, so you need to call the P/Invoke function DeleteObject on the IntPtr:
[DllImport("gdi32.dll", EntryPoint = "DeleteObject")]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern bool DeleteObject([In] IntPtr hObject);
Or use some other way to convert a bitmap to an image.
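For instance, a minimal sketch of one such alternative, which avoids the HBITMAP round trip entirely by saving the managed Bitmap straight to the stream (this reuses bmpLst, stream, wksheet and inputCell from the question's loop, and assumes the rest of the surrounding method stays the same):

// Instead of Image.FromHbitmap(bmpLst[i].GetHbitmap()), use the Bitmap directly;
// Bitmap already derives from Image, so no unmanaged GDI handle is created at all.
Bitmap bmp = bmpLst[i];
bmp.Save(stream, ImageFormat.Png);                                // write PNG bytes into the MemoryStream
IXLPicture pic = wksheet.AddPicture(stream, XLPictureFormat.Png);
pic.MoveTo(wksheet.Cell(inputCell));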
As a rule of thumb, if you ever see anything returning IntPtr it may represent some unmanaged resource, and if so you need to be extra careful to check if you need to manually dispose/delete something.
Another rule of thumb is to use a memory profiler if you suspect a memory leak. It might not have found this issue since it involves pointers to unmanaged memory, but it should be the first step to investigate.
I'm implementing a Custom Credential Provider in C#, using a C++ project as an example. The piece of C++ code below provides an image to Windows. The way I see it, phbmp is a pointer to the image bitmap. The code either updates the pointer so that it points to a new bitmap (read from a resource), or it loads the bitmap into the address pointed to by phbmp; I'm not sure whether the pointer itself is changed or not.
// Get the image to show in the user tile
HRESULT CSampleCredential::GetBitmapValue(DWORD dwFieldID, _Outptr_result_nullonfailure_ HBITMAP *phbmp)
{
    HRESULT hr;
    *phbmp = nullptr;
    if ((SFI_TILEIMAGE == dwFieldID))
    {
        HBITMAP hbmp = LoadBitmap(HINST_THISDLL, MAKEINTRESOURCE(IDB_TILE_IMAGE));
        if (hbmp != nullptr)
        {
            hr = S_OK;
            *phbmp = hbmp;
        }
        else
        {
            hr = HRESULT_FROM_WIN32(GetLastError());
        }
    }
    else
    {
        hr = E_INVALIDARG;
    }
    return hr;
}
Below is the C# equivalent I'm implementing:
public int GetBitmapValue(uint dwFieldID, IntPtr phbmp)
{
    if (dwFieldID == 2)
    {
        Bitmap image = Resource1.TileImage;
        ImageConverter imageConverter = new ImageConverter();
        byte[] bytes = (byte[])imageConverter.ConvertTo(image, typeof(byte[]));
        Marshal.Copy(bytes, 0, phbmp, bytes.Length);
        return HResultValues.S_OK;
    }
    return HResultValues.E_INVALIDARG;
}
What I'm trying to do:
- Load the image from the resource (this works, it has the correct length)
- Convert the Bitmap to an array of bytes
- Copy these bytes to the address pointed to by phbmp
This crashes, I assume because of memory allocation.
The parameters in this method are defined by an interface (in CredentialProvider.Interop.dll, which is provided by Microsoft - I think). So I'm pretty sure it's correct and phbmp is not an out-parameter.
Because it is not an out-parameter I cannot change phbmp to make it point to my bitmap, right? I have assigned phbmp to Bitmap.GetHbitmap(), and that doesn't crash, but it isn't working either; I assume that the change to phbmp is only local to this method.
I can understand that it is not possible to allocate memory at a predefined address. It's the other way around: you allocate memory and get a pointer to it. But then that change is local again. How does this work?
Although some people agreed that IntPtr should be an out-parameter (see the comments in https://syfuhs.net/2017/10/15/creating-custom-windows-credential-providers-in-net/), the answer was actually:
var bmp = new Bitmap(imageStream);
Marshal.WriteIntPtr(phbmp, bmp.GetHbitmap());
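Putting that together, a sketch of the full method might look like the following (HResultValues and the field ID of 2 are carried over from the question; the essential point is that phbmp is an HBITMAP*, so the native handle value is written to the address it points to rather than copying pixel data there):

public int GetBitmapValue(uint dwFieldID, IntPtr phbmp)
{
    if (dwFieldID == 2)
    {
        Bitmap image = Resource1.TileImage;
        // Create a GDI HBITMAP from the managed bitmap and write that handle
        // into the location the caller passed in (phbmp acts as HBITMAP*).
        Marshal.WriteIntPtr(phbmp, image.GetHbitmap());
        return HResultValues.S_OK;
    }
    return HResultValues.E_INVALIDARG;
}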
I am creating my own video file format and would like to write out a file header and frame headers.
At the moment I just have placeholders defined as such:
byte[] fileHeader = new byte[FILE_HEADER_SIZE * sizeof(int)];
byte[] frameHeader = new byte[FRAME_HEADER_SIZE * sizeof(int)];
I write them out using the following for the file header:
fsVideoWriter.Write(fileHeader, 0, FILE_HEADER_SIZE);
and this for the frame headers:
fsVideoWriter.Write(frameHeader, 0, FRAME_HEADER_SIZE);
Now that I actually need to make proper use of these headers, I'm not sure this is the most convenient way to write them, since it isn't obvious how I would read the individual fields of the headers back into separate variables.
I thought about doing something like the following:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct FileHeader
{
    public int x;
    public int y;
    public int z;
    // etc. etc.
}
I would like to define it in such a way that I can upgrade easily as the file format evolves (i.e. by including a version number). Is this the recommended way to define a file/frame header? If so, how should I read/write it using the .NET FileStream class? If not, please suggest the proper way to do this; maybe someone has already created a generic video file-related class that handles this sort of thing?
I settled upon the following solution:
Writing out file header
public static bool WriteFileHeader(FileStream fileStream, FileHeader fileHeader)
{
    try
    {
        byte[] buffer = new byte[FILE_HEADER_SIZE];
        GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        Marshal.StructureToPtr(fileHeader, gch.AddrOfPinnedObject(), false);
        gch.Free();
        fileStream.Seek(0, SeekOrigin.Begin);
        fileStream.Write(buffer, 0, FILE_HEADER_SIZE);
        return true;
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
Reading in file header
public static bool ReadFileHeader(FileStream fileStream, out FileHeader fileHeader)
{
    try
    {
        byte[] buffer = new byte[FILE_HEADER_SIZE];
        fileStream.Seek(0, SeekOrigin.Begin);
        fileStream.Read(buffer, 0, FILE_HEADER_SIZE);
        // Pin the buffer and marshal the bytes back into the struct.
        GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        fileHeader = (FileHeader)Marshal.PtrToStructure(gch.AddrOfPinnedObject(), typeof(FileHeader));
        gch.Free();
        // test for valid data
        bool isSuccessful = IsValidHeader(fileHeader);
        return isSuccessful;
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
I used a similar approach for the frame headers as well. The idea is basically to make use of byte buffers and Marshal.
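As a rough sketch, the frame-header version follows the same pattern (FrameHeader and FRAME_HEADER_SIZE mirror the file-header definitions above; the frameOffset parameter is only illustrative of where each frame's header lives in the file):

public static bool WriteFrameHeader(FileStream fileStream, FrameHeader frameHeader, long frameOffset)
{
    byte[] buffer = new byte[FRAME_HEADER_SIZE];
    // Pin the buffer and marshal the struct directly into it.
    GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    Marshal.StructureToPtr(frameHeader, gch.AddrOfPinnedObject(), false);
    gch.Free();
    fileStream.Seek(frameOffset, SeekOrigin.Begin);   // position of this frame's header in the file
    fileStream.Write(buffer, 0, FRAME_HEADER_SIZE);
    return true;
}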
You may want to try the BinaryFormatter Class. But it is more or less a black box. If you need precise control of your file format, you can write your own Formatter and use it to serialize your header object.
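As a rough illustration of that idea (this assumes the header type is marked [Serializable]; the field names are just the ones from the question, and the exact on-disk layout is then controlled by BinaryFormatter rather than by you):

// using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
struct FileHeader
{
    public int version;   // a version field makes later format changes easier to handle
    public int x;
    public int y;
    public int z;
}

var formatter = new BinaryFormatter();

// Write the header at the start of the file.
using (FileStream fs = new FileStream("video.dat", FileMode.Create))
{
    formatter.Serialize(fs, new FileHeader { version = 1, x = 640, y = 480, z = 0 });
    // ... frame data follows ...
}

// Read it back.
using (FileStream fs = new FileStream("video.dat", FileMode.Open))
{
    FileHeader header = (FileHeader)formatter.Deserialize(fs);
}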
How do I check the file type of a file uploaded using FileUploader control in an ASP.NET C# webpage?
I tried checking file extension, but it obviously fails when a JPEG image (e.g. Leonardo.jpg) is renamed to have a PDF's extension (e.g. Leonardo.pdf).
I tried
FileUpload1.PostedFile.ContentType.ToLower().Equals("application/pdf")
but this fails as the above code behaves the same way as the first did.
Is there any other way to check the actual file type, not just the extension?
I looked at ASP.NET how to check type of the file type irrespective of extension.
Edit: I tried the code below from one of the posts on Stack Overflow, but it doesn't work. Any ideas about this?
/// <summary>
/// This class allows access to the internal MimeMapping class in System.Web
/// </summary>
class MimeMappingWrapper
{
    static MethodInfo getMimeMappingMethod;

    static MimeMappingWrapper()
    {
        // dirty trick - Assembly.LoadWithPartialName has been deprecated
        Assembly ass = Assembly.LoadWithPartialName("System.Web");
        Type t = ass.GetType("System.Web.MimeMapping");
        getMimeMappingMethod = t.GetMethod("GetMimeMapping", BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Public);
    }

    /// <summary>
    /// Returns a MIME type depending on the passed file's extension
    /// </summary>
    /// <param name="fileName">File to get a MIME type for</param>
    /// <returns>MIME type according to the file's extension</returns>
    public static string GetMimeMapping(string fileName)
    {
        return (string)getMimeMappingMethod.Invoke(null, new[] { fileName });
    }
}
Don't use file extensions to work out MIME types; instead use "Winista" for binary analysis.
Say someone renames an exe with a jpg extension: you can still determine the real file format. It doesn't detect swf or flv files, but it handles pretty much every other well-known format, and you can use a hex editor to add more file signatures for it to detect.
Download Winista: here or my mirror or my GitHub https://github.com/MeaningOfLights/MimeDetect.
Where Winista fails to detect the real file format, I've resorted back to the URLMon method:
public class urlmonMimeDetect
{
    [DllImport(@"urlmon.dll", CharSet = CharSet.Auto)]
    private extern static System.UInt32 FindMimeFromData(
        System.UInt32 pBC,
        [MarshalAs(UnmanagedType.LPStr)] System.String pwzUrl,
        [MarshalAs(UnmanagedType.LPArray)] byte[] pBuffer,
        System.UInt32 cbSize,
        [MarshalAs(UnmanagedType.LPStr)] System.String pwzMimeProposed,
        System.UInt32 dwMimeFlags,
        out System.UInt32 ppwzMimeOut,
        System.UInt32 dwReserverd
    );

    public string GetMimeFromFile(string filename)
    {
        if (!File.Exists(filename))
            throw new FileNotFoundException(filename + " not found");

        byte[] buffer = new byte[256];
        using (FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
        {
            if (fs.Length >= 256)
                fs.Read(buffer, 0, 256);
            else
                fs.Read(buffer, 0, (int)fs.Length);
        }
        try
        {
            System.UInt32 mimetype;
            FindMimeFromData(0, null, buffer, 256, null, 0, out mimetype, 0);
            System.IntPtr mimeTypePtr = new IntPtr(mimetype);
            string mime = Marshal.PtrToStringUni(mimeTypePtr);
            Marshal.FreeCoTaskMem(mimeTypePtr);
            return mime;
        }
        catch (Exception e)
        {
            return "unknown/unknown";
        }
    }
}
From inside the Winista method, I fall back on the URLMon here:
public MimeType GetMimeTypeFromFile(string filePath)
{
    sbyte[] fileData = null;
    using (FileStream srcFile = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        byte[] data = new byte[srcFile.Length];
        srcFile.Read(data, 0, (Int32)srcFile.Length);
        fileData = Winista.Mime.SupportUtil.ToSByteArray(data);
    }
    MimeType oMimeType = GetMimeType(fileData);
    if (oMimeType != null) return oMimeType;

    //We haven't found the file using Magic (eg a text/plain file)
    //so instead use URLMon to try and get the files format
    Winista.MimeDetect.URLMONMimeDetect.urlmonMimeDetect urlmonMimeDetect = new Winista.MimeDetect.URLMONMimeDetect.urlmonMimeDetect();
    string urlmonMimeType = urlmonMimeDetect.GetMimeFromFile(filePath);
    if (!string.IsNullOrEmpty(urlmonMimeType))
    {
        foreach (MimeType mimeType in types)
        {
            if (mimeType.Name == urlmonMimeType)
            {
                return mimeType;
            }
        }
    }
    return oMimeType;
}
Update:
To work out more file types using magic numbers, here is a FILE SIGNATURES TABLE.
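As a small illustration of the magic-number idea (these particular signatures - %PDF for PDF, the 0xFF 0xD8 0xFF marker for JPEG, and the 8-byte PNG header - are well known; the helper itself is just a sketch, not part of Winista):

// Hypothetical helper: inspects the first bytes of an uploaded file
// instead of trusting its extension or the client-supplied content type.
public static string SniffMimeType(byte[] header)
{
    if (header.Length >= 4 && header[0] == 0x25 && header[1] == 0x50 && header[2] == 0x44 && header[3] == 0x46)
        return "application/pdf";                       // "%PDF"
    if (header.Length >= 3 && header[0] == 0xFF && header[1] == 0xD8 && header[2] == 0xFF)
        return "image/jpeg";                            // JPEG SOI marker
    if (header.Length >= 8 && header[0] == 0x89 && header[1] == 0x50 && header[2] == 0x4E && header[3] == 0x47
        && header[4] == 0x0D && header[5] == 0x0A && header[6] == 0x1A && header[7] == 0x0A)
        return "image/png";                             // PNG signature
    return "application/octet-stream";                  // unknown - fall back to another method
}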
Checking the name or extension is in no way reliable. The only way you can be sure is to actually read the content of the file.
That is, if you want to check whether the file is an image, you should try loading an image from the file; if that fails, you can be sure it is not an image file. This can be done easily using GDI objects, for example along the lines of the sketch below.
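A minimal sketch of that idea (the method name IsRealImage is just an example; this uses System.Drawing/GDI+ and System.IO):

// Returns true only if GDI+ can actually parse the uploaded bytes as an image.
private static bool IsRealImage(byte[] uploadedBytes)
{
    try
    {
        using (var ms = new MemoryStream(uploadedBytes))
        using (var img = System.Drawing.Image.FromStream(ms))
        {
            return true;
        }
    }
    catch (ArgumentException)
    {
        // GDI+ throws ArgumentException when the stream is not a valid image.
        return false;
    }
}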
Same is also true for PDF files.
The conclusion is: don't rely on the user-supplied name or extension.
You can check your file type in FileUpload by using:
ValidationExpression="^.+\.(([pP][dD][fF])|([jJ][pP][gG])|([pP][nN][gG]))$"
For example, you can add ([rR][aA][rR]) for the RAR file type, and so on.
I have to read image binaries from a database and save each one as a TIFF image on the filesystem. I was using the following code:
private static bool SavePatientChartImageFileStream(byte[] ImageBytes, string ImageFilePath, string IMAGE_NAME)
{
    bool success = false;
    try
    {
        using (FileStream str = new FileStream(Path.Combine(ImageFilePath, IMAGE_NAME), FileMode.Create))
        {
            str.Write(ImageBytes, 0, Convert.ToInt32(ImageBytes.Length));
            success = true;
        }
    }
    catch (Exception ex)
    {
        success = false;
    }
    return success;
}
Since these image binaries are transferred through merge replication, it sometimes happens that a binary has not been completely transferred, and because we request it with a NOLOCK hint this results in ImageBytes containing 1 byte of data, which is saved as a 0 KB corrupted TIFF image.
I have changed the above code to:
private static bool SavePatientChartImage(byte[] ImageBytes, string ImageFilePath, string IMAGE_NAME)
{
    bool success = false;
    System.Drawing.Image newImage;
    try
    {
        using (MemoryStream stream = new MemoryStream(ImageBytes))
        {
            using (newImage = System.Drawing.Image.FromStream(stream))
            {
                newImage.Save(Path.Combine(ImageFilePath, IMAGE_NAME));
                success = true;
            }
        }
    }
    catch (Exception ex)
    {
        success = false;
    }
    return success;
}
In this case, if ImageBytes is 1 byte or incomplete, it won't save the image and will return success as false.
I cannot remove NOLOCK as we are experiencing extreme locking.
The second version is slower than the first; I tried it for 500 images and there was a difference of 5 seconds.
I don't understand the difference between these two pieces of code and which one to use when. Please help me understand.
In the first version of the code, you are essentially taking a bunch of bytes and writing them to the filesystem. There's no verification of a valid TIFF file because the code neither knows nor cares it's a TIFF file. It's just a bunch of bytes without any business logic attached.
In the second code, you're taking the bytes, wrapping them in a MemoryStream, and then feeding them into an Image object, which parses the entire file as a TIFF. This gives you the validation you need - it can tell when the data is invalid - but you're essentially going over the entire file twice: once to read it in (with additional overhead for parsing) and once to write it to disk.
Assuming you don't need any validation that requires deep parsing of the image file (number of colors, image dimensions, etc.), you can skip this overhead by simply checking whether the byte[] ImageBytes has length 1 (or any other good indicator of corrupt data) and skipping the write when it does. In effect, do your own validation rather than using the Image class as a validator.
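For example, a lightweight check along those lines (the TIFF magic numbers II*\0 and MM\0* are standard; the helper name is only illustrative) could guard the fast FileStream version:

// Cheap sanity check before writing: enough bytes and a valid TIFF signature.
private static bool LooksLikeValidTiff(byte[] imageBytes)
{
    if (imageBytes == null || imageBytes.Length < 8)
        return false;

    // Little-endian TIFF: 0x49 0x49 0x2A 0x00; big-endian TIFF: 0x4D 0x4D 0x00 0x2A.
    bool littleEndian = imageBytes[0] == 0x49 && imageBytes[1] == 0x49 && imageBytes[2] == 0x2A && imageBytes[3] == 0x00;
    bool bigEndian    = imageBytes[0] == 0x4D && imageBytes[1] == 0x4D && imageBytes[2] == 0x00 && imageBytes[3] == 0x2A;
    return littleEndian || bigEndian;
}

// Usage: keep the fast byte-dump version, but only when the data looks sane.
// if (LooksLikeValidTiff(ImageBytes))
//     SavePatientChartImageFileStream(ImageBytes, ImageFilePath, IMAGE_NAME);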
I think the main difference between the two is that in the second code you write the source byte[] to a MemoryStream object first, which means the data becomes essentially independent of the database. So you could potentially incorporate this MemoryStream into the first code to achieve the same results.