New to Blazor (web-assembly), so go easy :)
I would like to be able to select an xml file from local disk (in the region of 100mb in size), via InputFile and have it loaded in to XmlDocument so I can interrogate it.
Upon trying to load a file of this size, it crashes out on XmlDocument.Load(), and I'm unsure why.
I can get it to load smaller files via OpenReadStream, setting maxAllowedSize to something like 105000000, but they take an age compared to, say, loading the same file from a WPF C# app.
I'm unsure whether the stream is causing the slowness, or whether I'm missing something fundamental about accessing local disk files of this size during the XmlDocument load.
Any help is greatly appreciated.
So basically, all i want to be able to do is something like this...
<InputFile OnChange="LoadFile">Select file...</InputFile>
@code {
    private void LoadFile()
    {
        XmlDocument newXml = new XmlDocument();
        newXml.Load(ChosenFilePath); // ChosenFilePath, or a hardcoded path for testing, e.g. @"C:\temp\TestFile.xml", still fails
    }
}
You need to accept the InputFileChangeEventArgs and get your file through it; that's relatively simple. Your event handler should be this:
private async Task LoadFile(InputFileChangeEventArgs args)
{
    // The argument raises the maximum size; by default OpenReadStream only
    // allows files up to 500 KiB. The number below is 200 MiB.
    // Alternatively you could pass 'args.File.Size' to allow opening an arbitrarily large file
    using var fileStream = args.File.OpenReadStream(200 * 1024 * 1024);

    // The browser file stream does not support the synchronous reads that
    // XmlDocument.Load performs, so buffer the file into memory first
    using var memoryStream = new MemoryStream();
    await fileStream.CopyToAsync(memoryStream);
    memoryStream.Position = 0;

    var newXml = new XmlDocument();
    newXml.Load(memoryStream);
}
I'm basically trying to add podcasts to an XML file and eventually retrieve them from that same XML, using the data in different parts of the application.
I successfully write the data to the XML but every time I reboot the application (debug) and press the "submit" button the XML file resets to 0 entries.
The submit code:
PodcastList podList = new PodcastList();
private void btnAddPod_Click(object sender, EventArgs e)
{
var podUrl = txtAddPod.Text;
var podName = txtEnterName.Text;
podList.AddPod(podUrl, podName, 0);
Xml.SaveListData(podList.GetPodcastList(), "Podcasts.xml");
}
Save to XML:
public static void SaveListData(object obj, string filename)
{
var serializer = new XmlSerializer(obj.GetType());
using (var stream = new StreamWriter(filename))
{
serializer.Serialize(stream, obj);
}
}
I guess the application creates a new XML file every time I press submit and starts with fresh objects. What am I doing wrong? Cheers
XML files are not generally appended to, due to the need for opening and closing tags, as opposed to other types of text file, like log files, where appending makes more sense.
When you call serializer.Serialize the whole file gets overwritten.
What your program needs to do is read in the already-existing XML file on startup and store that as a PodcastList(). Your program can then add to it (in memory) and save the whole list as a file.
PodcastList podList = new PodcastList(); is at the class level.
If you want to maintain state, reload the XML file and deserialize it into a PodcastList in the constructor or at load time. Then you will be able to retain and reuse the collection and write the data back to the XML file.
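A minimal sketch of that load step, mirroring the SaveListData method shown above. The LoadListData name is hypothetical, and this assumes the list type is XML-serializable with a parameterless constructor:

```csharp
using System.IO;
using System.Xml.Serialization;

public static class Xml
{
    public static T LoadListData<T>(string filename) where T : class, new()
    {
        // No file yet (first run, or the user deleted it): start fresh
        if (!File.Exists(filename))
            return new T();

        var serializer = new XmlSerializer(typeof(T));
        using (var stream = new StreamReader(filename))
        {
            return (T)serializer.Deserialize(stream);
        }
    }
}
```

Called once in the form's constructor, e.g. to rebuild the in-memory list before any new podcasts are added, so each save rewrites the full collection instead of only the newly added entries.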
An external Windows service I work with maintains a single text-based log file that it continuously appends to. This log file grows unbounded over time. I'd like to prune this log file periodically to maintain, say the most recent 5mb of log entries. How can I efficiently implement the file I/O code in C# .NET 4.0 to prune the file to say 5mb?
Updated:
The way service dependencies are set up, my service always starts before the external service. This means I get exclusive access to the log file to truncate it, if required. Once the external service starts up, I will not access the log file. I can gain exclusive access to the file on desktop startup. The problem is that the log file may be a few gigabytes in size, and I'm looking for an efficient way to truncate it.
It's going to take as much memory as you want to retain to process the "new" log file, but if you only want 5 MB it should be fine. If you are talking about GB+, you probably have other problems; however, it could still be accomplished using a temp file and some locking.
As noted before, you may experience a race condition, but that's not an issue if this is the only thread writing to the file. This would replace your current write to the file.
const int MAX_FILE_SIZE_IN_BYTES = 5 * 1024 * 1024; // 5 MiB
const string LOG_FILE_PATH = @"ThisFolder\log.txt";
string newLogMessage = "Hey this happened";
// Imitate WriteLine by ensuring the message ends with a newline
// (you could instead prepend the newline if you prefer)
if (!newLogMessage.EndsWith(Environment.NewLine))
    newLogMessage += Environment.NewLine;
byte[] newMessageBytes = Encoding.UTF8.GetBytes(newLogMessage);
int newMessageSize = newMessageBytes.Length;
byte[] buffer = new byte[MAX_FILE_SIZE_IN_BYTES];
using (FileStream logFile = File.Open(LOG_FILE_PATH, FileMode.Open, FileAccess.ReadWrite))
{
    int sizeOfRetainedLog = (int)Math.Min(MAX_FILE_SIZE_IN_BYTES - newMessageSize, logFile.Length);
    // Seek so that only the tail of the existing file is kept
    logFile.Position = logFile.Length - sizeOfRetainedLog;
    // Read the retained portion into the beginning of the buffer
    logFile.Read(buffer, 0, sizeOfRetainedLog);
    // Append the new message right after the retained portion
    System.Buffer.BlockCopy(newMessageBytes, 0, buffer, sizeOfRetainedLog, newMessageSize);
    // Truncate the file, then write back only the bytes we actually filled
    logFile.SetLength(0);
    logFile.Write(buffer, 0, sizeOfRetainedLog + newMessageSize);
}
I wrote this really quick, I apologize if I'm off by 1 somewhere.
Depending on how often it is written to, I'd say you might be facing a race condition when modifying the file without damaging the log. You could always write a service to monitor the file size; once it reaches a certain point, lock the file, dupe and clear the whole thing, and close it. Then store the data in another file whose size the service can easily control. Alternatively, you could see if the external service has an option for logging to a database, which would make it pretty simple to roll out the oldest data.
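The "lock, dupe and clear" idea above could be sketched like this. The names and the archive path are illustrative, not part of the external service:

```csharp
using System;
using System.IO;

static class LogRotator
{
    // Once the log exceeds maxBytes: lock it, copy ("dupe") it to an
    // archive file the monitoring service controls, then clear the original.
    public static void RotateIfNeeded(string logPath, string archivePath, long maxBytes)
    {
        var info = new FileInfo(logPath);
        if (!info.Exists || info.Length <= maxBytes)
            return;

        // FileShare.None: this throws if the external service still holds
        // the file open, so we never truncate a log being actively written
        using (var src = new FileStream(logPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
        using (var dst = File.Create(archivePath))
        {
            src.CopyTo(dst);   // dupe
            src.SetLength(0);  // clear
        }
    }
}
```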
You could use a file observer to monitor the file:
FileSystemWatcher logWatcher = new FileSystemWatcher();
logWatcher.Path = @"c:\logs";       // Path must be the directory, not the file
logWatcher.Filter = "example.log";  // the file name to watch
logWatcher.Changed += logWatcher_Changed;
logWatcher.EnableRaisingEvents = true;
Then when the event is raised you can use a StreamReader to read the file
private void logWatcher_Changed(object sender, FileSystemEventArgs e)
{
using (StreamReader readFile = new StreamReader(e.FullPath))
{
string line;
while ((line = readFile.ReadLine()) != null)
{
// Here you delete the lines you want or move it to another file, so that your log keeps small. Then save the file.
}
}
}
It's an option.
I have been trying to read a file, and calculate the hash of the contents to find duplicates. The problem is that in Windows 8 (or WinRT or windows store application or however it is called, I'm completely confused), System.IO has been replaced with Windows.Storage, which behaves differently, and is very confusing. The official documentation is not useful at all.
First I need to get a StorageFile object, which in my case, I get from browsing a folder from a file picker:
var picker = new Windows.Storage.Pickers.FolderPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.MusicLibrary;
picker.FileTypeFilter.Add("*");
var folder = await picker.PickSingleFolderAsync();
var files = await folder.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByName);
Now in files I have the list of files I need to index. Next, I need to open that file:
foreach (StorageFile file in files)
{
var filestream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
Now is the most confusing part: getting the data from the file. The documentation was useless, and I couldn't find any code example. Apparently, Microsoft thought getting pictures from the camera is more important than opening a file.
The file stream has a member ReadAsync which I think reads the data. This method needs a buffer as a parameter and returns another buffer (???). So I create a buffer:
var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024 * 10); // 10 mb should be enough for an mp3
var resultbuffer = await filestream.ReadAsync(buffer, 1024 * 1024 * 10, Windows.Storage.Streams.InputStreamOptions.ReadAhead);
I am wondering... what happens if the file doesn't have enough bytes? I haven't seen any info in the documentation.
Now I need to calculate the hash for this file. To do that, I need to create an algorithm object...
var alg = Windows.Security.Cryptography.Core.HashAlgorithmProvider.OpenAlgorithm("md5");
var hashbuff = alg.HashData(resultbuffer);
// Cleanup
filestream.Dispose();
I also considered reading the file in chunks, but how can I calculate the hash like that? I looked everywhere in the documentation and found nothing about this. Could it be the CryptographicHash class with its Append method?
Now I have another issue. How can I get the data from that weird buffer thing to a byte array? The IBuffer class doesn't have any 'GetData' member, and the documentation, again, is useless.
So all I could do now is wonder about the mysteries of the universe...
// ???
}
So the question is... how can I do this? I am completely confused, and I wonder why Microsoft chose to make reading a file so... so... so... impossible! Even in assembly I could figure things out more easily than... this.
WinRT, or Windows Runtime, should not be confused with .NET, because it is not .NET. WinRT has access to only a subset of the Win32 API, not to everything the way .NET does. Here is a pretty good article on the rules and restrictions in WinRT.
WinRT in general does not have access to the file system. It works with capabilities: you can request the file-access capability, but that restricts your app's access to certain areas. Here is a good example of how to do file access via WinRT.
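As for the chunked-hashing part of the question: yes, it is the CryptographicHash type with its Append method, and CryptographicBuffer converts between IBuffer and byte[]. A rough sketch under those WinRT APIs (a Windows Store project is assumed, so this is untested here):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;
using Windows.Storage;
using Windows.Storage.Streams;

static async Task<byte[]> HashFileAsync(StorageFile file)
{
    var provider = HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Md5);
    CryptographicHash hash = provider.CreateHash();

    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
    {
        var reader = new DataReader(stream);
        ulong remaining = stream.Size;  // no need to guess a "big enough" buffer
        while (remaining > 0)
        {
            uint chunk = (uint)Math.Min(1024 * 1024, remaining);
            await reader.LoadAsync(chunk);          // loads at most 'chunk' bytes
            hash.Append(reader.ReadBuffer(chunk));  // feed each IBuffer to the hash
            remaining -= chunk;
        }
    }

    // IBuffer -> byte[] via CryptographicBuffer
    CryptographicBuffer.CopyToByteArray(hash.GetValueAndReset(), out byte[] digest);
    return digest;
}
```

This also answers the "what if the file doesn't have enough bytes" worry: by walking stream.Size in chunks, you never request more than the file actually contains.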
I am having an xml file like:
<CurrentProject>
// Elements like
// last opened project file to reopen it when app starts
// and more global project independend settings
</CurrentProject>
Now I asked myself whether I should ship this XML file, with the above empty elements, in the installer for my app, or create it on the fly at application start if it does not exist and otherwise read the values from it.
Consider also that the user could delete this file, and that should not stop my application from working.
What is better and why?
UPDATE:
What I did felt OK to me, so I post my code here :) It just creates the XML structure on the fly, with some safety checks...
public ProjectService(IProjectDataProvider provider)
{
_provider = provider;
string applicationPath = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
_projectPath = Path.Combine(applicationPath, @"TBM\Settings.XML");
if (!File.Exists(_projectPath))
{
string dirPath = Path.Combine(applicationPath, @"TBM");
if (!Directory.Exists(dirPath))
Directory.CreateDirectory(dirPath);
using (var stream = File.Create(_projectPath))
{
XElement projectElement = new XElement("Project");
projectElement.Add(new XElement("DatabasePath"));
projectElement.Save(stream, SaveOptions.DisableFormatting);
}
}
}
In a similar scenario, I recently went for creating the initial file on the fly. The main reason I chose this was the fact that I wasn't depending on this file being there and being valid. As this was a file that's often read from/written to, there's a chance that it could get corrupted (e.g. if the power is lost while the file is being written).
In my code I attempted to open this file for reading and then read the data. If anywhere during these steps I encountered an error, I simply recreated the file with default values and displayed a corresponding message to the user.
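That "read, and on any error recreate with defaults" pattern could look like this, where Settings is a stand-in for whatever the file holds:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Serialization;

public class Settings
{
    public string DatabasePath { get; set; } = ""; // default values live here
}

public static class SettingsStore
{
    public static Settings LoadOrCreate(string path)
    {
        try
        {
            using (var stream = File.OpenRead(path))
            {
                return (Settings)new XmlSerializer(typeof(Settings)).Deserialize(stream);
            }
        }
        catch (Exception ex) when (ex is IOException || ex is InvalidOperationException || ex is XmlException)
        {
            // Missing, locked, or corrupt file: fall back to defaults and rewrite it
            var defaults = new Settings();
            using (var stream = File.Create(path))
            {
                new XmlSerializer(typeof(Settings)).Serialize(stream, defaults);
            }
            return defaults;
        }
    }
}
```

XmlSerializer wraps parse failures in InvalidOperationException, and a missing file throws FileNotFoundException (an IOException), so both the deleted-file and corrupted-file cases land in the same recovery branch.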
In our customized C# logging system, we use streamWriter = File.CreateText(fileNameStr); to create a file and open a stream for write.
Now we want to monitor the file size to see if it reach the max required size. What I did is the following:
create a FileInfo object for about file: currFileInfo = new FileInfo(fileNameStr);
get file size after each write: curFileInfo.Refresh(); fileSize = curFileInfo.Length;
compare the file size with max file size, if bigger, close the current one and create a new file.
I added print-outs to see how long it takes to refresh the FileInfo; it is often about 15 msec.
So I am thinking there may be a better way to do this. what's your suggestion?
This should work:
long fileSize = streamWriter.BaseStream.Position;
This contains the current position of the stream, and if you're using the writer for appending only, it equals the current file size, with no FileInfo.Refresh() call needed.
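Applied to the original requirement (close the current file and start a new one once the limit is reached), the per-write check can then be this cheap. The getNextLogFileName helper is hypothetical, standing in for however your logging system names files:

```csharp
using System;
using System.IO;

static class RollingLog
{
    // Returns the writer to keep using: either the same one, or a fresh
    // one if the size limit was reached after this write.
    public static StreamWriter WriteAndRollIfNeeded(StreamWriter writer, string message,
                                                    Func<string> getNextLogFileName, long maxBytes)
    {
        writer.WriteLine(message);
        writer.Flush(); // so BaseStream.Position reflects what has been written

        if (writer.BaseStream.Position >= maxBytes)
        {
            writer.Close();
            writer = File.CreateText(getNextLogFileName()); // start a fresh log file
        }
        return writer;
    }
}
```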
FileSystemWatcher fsw = new FileSystemWatcher(logDirectoryPath); // the directory, not the file
fsw.NotifyFilter = NotifyFilters.Size;
fsw.Filter = "fileName.log"; // the name of the log file
fsw.Changed += new FileSystemEventHandler(YourHandler);
fsw.EnableRaisingEvents = true;