I am trying to upload a selected image file from within my WPF application to be stored on Parse; however, I cannot find the correct method to do this anywhere.
At the moment I have selected my image via an OpenFileDialog and have the path for that image stored in a text box.
How do I now upload this file to Parse?
I am familiar with Parse and have no problems saving strings, images, video, etc. in Objective-C, but I cannot for the life of me work out how to get this working in a WPF application in C#.
Any help would be massively appreciated.
Here is a piece of code that loads an image file and saves the data into a byte array.
private byte[] LoadByteArrayFromFile(string fileName)
{
    try
    {
        using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
        {
            byte[] byteArray = new byte[fs.Length];
            int bytesRead = 0;
            int bytesToRead = (int)fs.Length;
            while (bytesToRead > 0)
            {
                // Read may return fewer bytes than requested, so loop until done
                int read = fs.Read(byteArray, bytesRead, bytesToRead);
                if (read == 0)
                    break;
                bytesToRead -= read;
                bytesRead += read;
            }
            return byteArray;
        }
    }
    catch (Exception)
    {
        return null;
    }
}
So you first get the data:
byte[] data = LoadByteArrayFromFile(filename); // OpenFileDialog.FileName, the full path to the image
And then construct a ParseFile; you should be familiar with the remaining steps:
if (data != null)
{
    ParseFile file = new ParseFile(System.IO.Path.GetFileName(filename), data);
    await file.SaveAsync();
    // then assign the ParseFile to a ParseObject, as the docs describe...
}
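For the last step, here is a minimal sketch, assuming a hypothetical "Photo" class and "imageFile" key in your data model:

// Attach the saved ParseFile to a ParseObject and persist it
ParseObject photo = new ParseObject("Photo");
photo["imageFile"] = file;
await photo.SaveAsync();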
I need help converting a VERY LARGE binary file (a ZIP file) to a Base64String and back again. The files are too large to be loaded into memory all at once (they throw OutOfMemoryExceptions); otherwise this would be a simple task. I do not want to process the contents of the ZIP file individually, I want to process the entire ZIP file.
The problem:
I can convert the entire ZIP file (test sizes vary from 1 MB to 800 MB at present) to Base64String, but when I convert it back, it is corrupted. The new ZIP file is the correct size, it is recognized as a ZIP file by Windows and WinRAR/7-Zip, etc., and I can even look inside the ZIP file and see the contents with the correct sizes/properties, but when I attempt to extract from the ZIP file, I get: "Error: 0x80004005" which is a general error code.
I am not sure where or why the corruption is happening. I have done some investigating, and I have noticed the following:
If you have a large text file, you can convert it to Base64String incrementally without issue. If calling Convert.ToBase64String on the entire file yielded: "abcdefghijklmnopqrstuvwx", then calling it on the file in two pieces would yield: "abcdefghijkl" and "mnopqrstuvwx".
Unfortunately, if the file is binary, the result is different. While the entire file might yield "abcdefghijklmnopqrstuvwx", trying to process it in two pieces would yield something like "oiweh87yakgb" and "kyckshfguywp".
Is there a way to incrementally Base64-encode a binary file while avoiding this corruption?
My code:
private void ConvertLargeFile()
{
    FileStream inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[MultipleOfThree];
    int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
    while (bytesRead > 0)
    {
        byte[] secondaryBuffer = new byte[buffer.Length];
        int secondaryBufferBytesRead = bytesRead;
        Array.Copy(buffer, secondaryBuffer, buffer.Length);
        bool isFinalChunk = false;
        Array.Clear(buffer, 0, buffer.Length);
        bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0)
        {
            isFinalChunk = true;
            buffer = new byte[secondaryBufferBytesRead];
            Array.Copy(secondaryBuffer, buffer, buffer.Length);
        }
        string base64String = Convert.ToBase64String(isFinalChunk ? buffer : secondaryBuffer);
        File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
    }
    inputStream.Dispose();
}
The decoding is more of the same. I use the size of the base64String variable above (which varies with the original buffer size I test with) as the buffer size for decoding. Then, instead of Convert.ToBase64String(), I call Convert.FromBase64String() and write to a different file name/path.
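For reference, a sketch of that decode direction, with illustrative names; the chunk size must be a multiple of 4 so each chunk stays a valid standalone piece of Base64:

private void DecodeLargeFile(string base64Path, string outputPath)
{
    char[] buffer = new char[4096]; // any multiple of 4
    using (var reader = new StreamReader(base64Path))
    using (var output = new FileStream(outputPath, FileMode.Create, FileAccess.Write))
    {
        int charsRead;
        // ReadBlock fills the buffer completely except on the final chunk
        while ((charsRead = reader.ReadBlock(buffer, 0, buffer.Length)) > 0)
        {
            byte[] decoded = Convert.FromBase64CharArray(buffer, 0, charsRead);
            output.Write(decoded, 0, decoded.Length);
        }
    }
}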
EDIT:
In my haste to reduce the code (I refactored it into a new project, separate from other processing, to eliminate code that isn't central to the issue) I introduced a bug. The Base64 conversion should be performed on the secondaryBuffer for all iterations except the last (identified by isFinalChunk), where buffer should be used instead. I have corrected the code above.
EDIT #2:
Thank you all for your comments/feedback. After correcting the bug (see the above edit), I re-tested my code, and it is actually working now. I intend to test and implement @rene's solution, as it appears to be the best, but I thought that I should let everyone know of my discovery as well.
Based on the code shown in the blog post by Wiktor Zychla, the following code works. The same solution is indicated in the Remarks section of Convert.ToBase64String, as pointed out by Ivan Stoev.
// using System.Security.Cryptography
private void ConvertLargeFile()
{
    // encode
    var filein = @"C:\Users\test\Desktop\my.zip";
    var fileout = @"C:\Users\test\Desktop\Base64Zip";
    using (FileStream fs = File.Open(fileout, FileMode.Create))
    using (var cs = new CryptoStream(fs, new ToBase64Transform(),
                                     CryptoStreamMode.Write))
    using (var fi = File.Open(filein, FileMode.Open))
    {
        fi.CopyTo(cs);
    }
    // the zip file is now stored in Base64Zip

    // and decode
    using (FileStream f64 = File.Open(fileout, FileMode.Open))
    using (var cs = new CryptoStream(f64, new FromBase64Transform(),
                                     CryptoStreamMode.Read))
    using (var fo = File.Open(filein + ".orig", FileMode.Create))
    {
        cs.CopyTo(fo);
    }
    // the original file is in my.zip.orig;
    // use the command-line tool
    //   fc my.zip my.zip.orig
    // to verify that the start file and the encoded-then-decoded file
    // are the same
}
The code uses standard classes from the System.Security.Cryptography namespace: a CryptoStream wrapping a ToBase64Transform, and its counterpart FromBase64Transform for the reverse direction.
You can avoid using a secondary buffer by passing offset and length to Convert.ToBase64String, like this:
private void ConvertLargeFile()
{
    using (var inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[MultipleOfThree];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            string base64String = Convert.ToBase64String(buffer, 0, bytesRead);
            File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
The above should work, but I think Rene's answer is actually the better solution.
Use this code (the buffer size must be a multiple of 3: Base64 encodes each 3-byte group as 4 characters, so chunking on a multiple of 3 keeps the groups aligned across chunks):
public void ConvertLargeFile(string source, string destination)
{
    using (FileStream inputStream = new FileStream(source, FileMode.Open, FileAccess.Read))
    {
        int buffer_size = 30000; // or any multiple of 3
        byte[] buffer = new byte[buffer_size];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            byte[] buffer2 = buffer;
            if (bytesRead < buffer_size)
            {
                buffer2 = new byte[bytesRead];
                Buffer.BlockCopy(buffer, 0, buffer2, 0, bytesRead);
            }
            string base64String = System.Convert.ToBase64String(buffer2);
            File.AppendAllText(destination, base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
I'm trying to read a local file and upload it to an FTP server. When I read an image file, everything is OK, but when I read a .doc or .docx file, the FileStream returns Length = 0. Here is my code:
I checked with some other files as well; it appears that the code only works with images and returns 0 for any other file.
if (!ftpClient.FileExists(fileName))
{
    try
    {
        ftpClient.ValidateCertificate += (control, e) => { e.Accept = true; };
        const int BUFFER_SIZE = 64 * 1024; // 64KB buffer
        byte[] buffer = new byte[BUFFER_SIZE];
        using (Stream readStream = new FileStream(tempFilePath, FileMode.Open, FileAccess.Read))
        using (Stream writeStream = ftpClient.OpenWrite(fileName))
        {
            while (readStream.Position < readStream.Length)
            {
                buffer.Initialize();
                int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
                writeStream.Write(buffer, 0, bytesRead);
            }
            readStream.Flush();
            readStream.Close();
            writeStream.Flush();
            writeStream.Close();
            DeleteTempFile(tempFilePath);
            return true;
        }
    }
    catch (Exception ex)
    {
        return false;
    }
}
I couldn't find what's wrong with it. Could you please help me?
While this doesn't answer your specific question, you don't actually need to know the length of your stream. Just keep reading until you hit a zero-length read; a zero-byte read is guaranteed to indicate the end of any stream. From the documentation of Stream.Read:
Return Value
Type: System.Int32
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
while (true)
{
    int bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE);
    if (bytesRead == 0)
    {
        break;
    }
    writeStream.Write(buffer, 0, bytesRead);
}
Alternatively:
readStream.CopyTo(writeStream);
is probably the most concise method of stating your goal...
It was just a silly mistake: I have two file-upload controls and I had saved from the other one, which created a zero-length file. As it turns out, the code works fine.
Thanks, everyone.
public static byte[] ReadMemoryMappedFile(string fileName)
{
    long length = new FileInfo(fileName).Length;
    using (var stream = File.Open(fileName, FileMode.OpenOrCreate, FileAccess.Read, FileShare.ReadWrite))
    using (var mmf = MemoryMappedFile.CreateFromFile(stream, null, length, MemoryMappedFileAccess.Read, null, HandleInheritability.Inheritable, false))
    using (var viewStream = mmf.CreateViewStream(0, length, MemoryMappedFileAccess.Read))
    using (BinaryReader binReader = new BinaryReader(viewStream))
    {
        var result = binReader.ReadBytes((int)length);
        return result;
    }
}
OpenFileDialog openfile = new OpenFileDialog();
openfile.Filter = "All Files (*.*)|*.*";
openfile.ShowDialog();

byte[] buff = ReadMemoryMappedFile(openfile.FileName);

// next line throws: A first chance exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
texteditor.Text = BitConverter.ToString(buff).Replace("-", " ");
I get a System.OutOfMemoryException when trying to read large files.
I've been reading all over the web for four weeks and have tried a lot, but I still can't find a good solution to my problem.
Please help.
Update
public byte[] FileToByteArray(string fileName)
{
    byte[] buff = null;
    FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
    BinaryReader br = new BinaryReader(fs);
    long numBytes = new FileInfo(fileName).Length;
    buff = br.ReadBytes((int)numBytes);
    //return buff;
    return File.ReadAllBytes(fileName);
}
OR
public static byte[] FileToByteArray(FileStream stream, int initialLength)
{
    // If we've been passed an unhelpful initial length, just
    // use 32K.
    if (initialLength < 1)
    {
        initialLength = 32768;
    }
    BinaryReader br = new BinaryReader(stream);
    byte[] buffer = new byte[initialLength];
    int read = 0;
    int chunk;
    while ((chunk = br.Read(buffer, read, buffer.Length - read)) > 0)
    {
        read += chunk;
        // If we've reached the end of our buffer, check to see if there's
        // any more information
        if (read == buffer.Length)
        {
            // Read from the underlying stream: Stream.ReadByte returns -1 at
            // end of stream, whereas BinaryReader.ReadByte would throw
            int nextByte = stream.ReadByte();
            // End of stream? If so, we're done
            if (nextByte == -1)
            {
                return buffer;
            }
            // Nope. Resize the buffer, put in the byte we've just
            // read, and continue
            byte[] newBuffer = new byte[buffer.Length * 2];
            Array.Copy(buffer, newBuffer, buffer.Length);
            newBuffer[read] = (byte)nextByte;
            buffer = newBuffer;
            read++;
        }
    }
    // Buffer is now too big. Shrink it.
    byte[] ret = new byte[read];
    Array.Copy(buffer, ret, read);
    return ret;
}
I still get a System.OutOfMemoryException when trying to read large files.
If your file is 4 GB, then BitConverter will turn each byte into an "XX " string: 2 bytes per char * 3 chars per byte * 4,294,967,295 bytes = 25,769,803,770 bytes. You need more than 25 GB of free memory to fit the entire string, and on top of that you already have the whole file in memory as a byte array.
Besides, by default no single object in a .NET program may be over 2 GB. The theoretical limit for a string would be 1,073,741,823 chars, and you would also need to be running a 64-bit process.
So the solution in your case: open a FileStream, read the first 16,384 bytes (or however much fits on your screen), convert to hex and display, and remember the file offset. When the user wants to navigate to the next or previous page, seek to that position in the file on disk, read, and display again, and so on.
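A minimal sketch of that paging approach, assuming a hypothetical ReadPageAsHex helper and a 16 KB page size:

// Read one page of the file at the given page index and render it as hex.
private string ReadPageAsHex(string fileName, long pageIndex, int pageSize = 16384)
{
    using (var fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        fs.Seek(pageIndex * pageSize, SeekOrigin.Begin); // jump straight to the page
        var buffer = new byte[pageSize];
        int bytesRead = fs.Read(buffer, 0, buffer.Length);
        // Convert only the bytes actually read (the last page may be short)
        return BitConverter.ToString(buffer, 0, bytesRead).Replace("-", " ");
    }
}

Each page costs one seek and one bounded read, so memory use stays constant regardless of file size.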
You need to read the file in chunks, keep track of where you are in the file, page the contents on screen, and use Seek and Position to move up and down in the file stream.
You will not be able to display a 4 GB file by any approach that reads all of it into memory first.
The approach is to virtualize the data, reading only the visible lines when the user scrolls. If you need a read-only text viewer, you can use a WPF ItemsControl with a virtualizing stack panel and bind it to a custom IList collection that lazily fetches lines from the file, calculating the file offset from the line index; a sketch follows.
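As an illustration of that lazy collection for a hex viewer (the class name and the fixed 16-bytes-per-line layout are assumptions; only the indexer and Count matter for read-only virtualized binding):

using System;
using System.Collections;
using System.IO;

// Read-only IList that materializes hex lines on demand, so a virtualizing
// ItemsControl only ever touches the rows it actually displays.
class LazyHexLines : IList
{
    const int BytesPerLine = 16;
    readonly string _path;
    readonly long _lineCount;

    public LazyHexLines(string path)
    {
        _path = path;
        _lineCount = (new FileInfo(path).Length + BytesPerLine - 1) / BytesPerLine;
    }

    public object this[int index]
    {
        get
        {
            using (var fs = new FileStream(_path, FileMode.Open, FileAccess.Read))
            {
                // The fixed line width makes the offset calculation trivial
                fs.Seek((long)index * BytesPerLine, SeekOrigin.Begin);
                var buf = new byte[BytesPerLine];
                int n = fs.Read(buf, 0, buf.Length);
                return BitConverter.ToString(buf, 0, n).Replace("-", " ");
            }
        }
        set { throw new NotSupportedException(); }
    }

    public int Count { get { return (int)_lineCount; } }
    public bool IsReadOnly { get { return true; } }
    public bool IsFixedSize { get { return true; } }
    public object SyncRoot { get { return this; } }
    public bool IsSynchronized { get { return false; } }

    // The remaining IList members are never called for read-only binding.
    public int Add(object value) { throw new NotSupportedException(); }
    public void Clear() { throw new NotSupportedException(); }
    public bool Contains(object value) { return false; }
    public int IndexOf(object value) { return -1; }
    public void Insert(int index, object value) { throw new NotSupportedException(); }
    public void Remove(object value) { throw new NotSupportedException(); }
    public void RemoveAt(int index) { throw new NotSupportedException(); }
    public void CopyTo(Array array, int index) { throw new NotSupportedException(); }
    public IEnumerator GetEnumerator()
    {
        for (int i = 0; i < Count; i++) yield return this[i];
    }
}

Opening the stream per indexer call keeps the sketch simple; a real viewer would cache an open FileStream.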
Help again, please. I managed to upload a file from ASP.NET to my WCF service and it works like a charm. Now I want to do the same thing from WinRT, without success. My file upload service is based on this post: http://www.seesharpdot.net/?p=214. From ASP.NET I upload the file using this code:
string filePath = Server.MapPath("~/Files/Happy.jpg");
string fileName = "Happy.jpg";

ServiceReference1.FileMetaData metadata = new ServiceReference1.FileMetaData();
metadata.LocalFilename = fileName;
metadata.FileType = ".jpg";

fileStream = new FileInfo(filePath).OpenRead();
oService.UploadFile(metadata, fileStream);

byte[] buffer = new byte[2048];
int bytesRead = fileStream.Read(buffer, 0, 2048);
while (bytesRead > 0)
{
    fileStream.Write(buffer, 0, 2048);
    bytesRead = fileStream.Read(buffer, 0, 2048);
}
From WinRT I thought this would work, but it does not. No exception is thrown.
FileOpenPicker openPicker = new FileOpenPicker();
openPicker.ViewMode = PickerViewMode.Thumbnail;
openPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
openPicker.FileTypeFilter.Add(".jpg");
openPicker.FileTypeFilter.Add(".jpeg");
openPicker.FileTypeFilter.Add(".png");

StorageFile file = await openPicker.PickSingleFileAsync();
if (file != null)
{
    byte[] bytes = await GetByteFromFile(file);
    await App.ServiceInstance.UploadFileAsync(bytes);
}
// This is the method to convert the StorageFile to a byte[]
private async Task<byte[]> GetByteFromFile(StorageFile storageFile)
{
    var stream = await storageFile.OpenReadAsync();
    using (var dataReader = new DataReader(stream))
    {
        var bytes = new byte[stream.Size];
        await dataReader.LoadAsync((uint)stream.Size);
        dataReader.ReadBytes(bytes);
        return bytes;
    }
}
What is interesting is that my WCF service method only accepts a byte array (byte[]) as a parameter and ignores the message contract. Do I need to change my WCF service? How would you recommend I go about fixing this? Any help appreciated.
My WCF Service:
public void UploadFile(FileUploadMessage request)
{
    Stream fileStream = null;
    Stream outputStream = null;
    try
    {
        fileStream = request.FileByteStream;
        string rootPath = HttpContext.Current.Server.MapPath("~\\Files"); // ConfigurationManager.AppSettings["RootPath"].ToString();
        string newFileName = Path.Combine(rootPath, request.MetaData.LocalFileName);
        outputStream = new FileInfo(newFileName).OpenWrite();

        const int bufferSize = 1024;
        byte[] buffer = new byte[bufferSize];
        int bytesRead = fileStream.Read(buffer, 0, bufferSize);
        while (bytesRead > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
            bytesRead = fileStream.Read(buffer, 0, bufferSize);
        }
    }
    catch (IOException ex)
    {
        throw new FaultException<IOException>(ex, new FaultReason(ex.Message));
    }
    finally
    {
        if (fileStream != null)
        {
            fileStream.Close();
        }
        if (outputStream != null)
        {
            outputStream.Close();
        }
    }
}
I had to implement the same thing, but the WinRT generation of the client library is different from the one generated for desktop (console application).
I had to take Mtom out of the binding and leave the WCF service parameter as a Stream type.
This still allowed me to upload the document as required. However, on the service I named the file after its MD5 checksum value. The Windows 8 app then sent another message to the service with the MD5 checksum (calculated on the WinRT device) along with the file metadata; the WCF service then looked for the file with that MD5 checksum and renamed it.
So it's a two-step process, which from what I can see is an immediate workaround, and I think I am happy with it.
Happy to share the code for the MD5 checksum on the service and WinRT side if required.
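In the meantime, a minimal sketch of the service-side checksum step (the method name is illustrative; the WinRT client would produce the same value with the Windows.Security.Cryptography APIs):

using System;
using System.IO;
using System.Security.Cryptography;

static string ComputeMd5(string filePath)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(filePath))
    {
        // ComputeHash streams the file, so large uploads are never loaded whole
        byte[] hash = md5.ComputeHash(stream);
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}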
(Warning: first time on Stack Overflow.) I want to be able to read in a PDF via binary, but I encounter an issue when writing it back to isolated storage: when I then try to open the file, I get an error message from Adobe Reader saying it is not a valid PDF. The source file is 102 KB, but when I write it to isolated storage it is 108 KB.
My reason for doing this is that I want to be able to split PDFs. I have tried PDFsharp (it doesn't open all PDF types). Here is my code:
public void pdf_split()
{
    string prefix = @"/PDFread;component/";
    string fn = originalFile;
    StreamResourceInfo sr = Application.GetResourceStream(new Uri(prefix + fn, UriKind.Relative));
    IsolatedStorageFile iStorage = IsolatedStorageFile.GetUserStoreForApplication();
    using (var outputStream = iStorage.OpenFile(sFile, FileMode.CreateNew))
    {
        Stream resourceStream = sr.Stream;
        long length = resourceStream.Length;
        byte[] buffer = new byte[32];
        int readCount = 0;
        while (readCount < length)
        {
            int read = sr.Stream.Read(buffer, 0, buffer.Length);
            readCount += read;
            outputStream.Write(buffer, 0, read);
        }
    }
}