how's everyone doing this morning?
I'm writing a program that will parse several XML files.
This stage of the program is going to focus on adding/editing skills/schools/abilities/etc. for a tabletop RPG (L5R). What I learn from this one example should carry me through the rest of the program.
So I've got the xml reading set up using XMLReader. The file I'm reading looks like...
<skills>
<skill>
<name>some name</name>
<description>a skill</description>
<type>high</type>
<stat>perception</stat>
<page>42</page>
<availability>all</availability>
</skill>
</skills>
I set up a Skill class, which holds the data, and a SkillEdit class which reads in the data, and will eventually have methods for editing and adding.
I'm currently able to read everything in correctly, but since the description can vary in length, I had the thought that once I write the edit method, the best way to ensure no data is overwritten would be to append the edited skill to the end of the file and wipe out its previous entry.
In order to do that, I would need to know the file offset of the <skill> element and of its closing </skill> tag, but I can't seem to find any way of getting those offsets.
Is there a way to do that, or can you guys suggest a better implementation for editing an already existing skill?
If you read your XML into LINQ to XML's XDocument (or XElement), everything could become very easy. You can read, edit, add stuff, etc. to XML files using a simple interface.
e.g.,
var xmlStr = @"<skills>
<skill>
<name>some name</name>
<description>a skill</description>
<type>high</type>
<stat>perception</stat>
<page>42</page>
<availability>all</availability>
</skill>
</skills>
";
var doc = XDocument.Parse(xmlStr);
// find the skill "some name"
var mySkill = doc
.Descendants("skill") // out of all skills
.Where(e => e.Element("name").Value == "some name") // that has the element name "some name"
.SingleOrDefault(); // select it
if (mySkill != null) // if found...
{
var skillType = mySkill.Element("type").Value; // read the type
var skillPage = (int)mySkill.Element("page"); // read the page (as an int)
mySkill.Element("description").Value = "an AWESOME skill"; // change the description
// etc...
}
No need to calculate offsets, do manual step-by-step reading, or maintain other state; it's all taken care of for you.
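If the skills live on disk rather than in a string, the whole round trip is just a load, an edit, and a save; a minimal sketch ("skills.xml" is a placeholder path):
// Load the file, edit elements as shown above, then write the whole document back.
// Rewriting the full file sidesteps any need to track byte offsets.
var doc = XDocument.Load("skills.xml");
// ... find and modify the skill as shown above ...
doc.Save("skills.xml");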
Don't do it! In general, you can't reliably know anything about physical offsets in the serialized XML because of possible character encoding differences, entity references, embedded comments and a host of other things that can cause the physical and logical layers to have a complex relationship.
If your XML is just sitting on the file system, your safest option is to have a method in your skill class which serializes to XML (you already have one that reads XML), and re-serialize whole objects when you need to.
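For example, a rough sketch of what that could look like with LINQ to XML (the property names simply mirror the sample file; adapt to whatever your Skill class already holds):
using System.Xml.Linq;

public class Skill
{
    public string Name { get; set; }
    public string Description { get; set; }
    public string Type { get; set; }
    public string Stat { get; set; }
    public int Page { get; set; }
    public string Availability { get; set; }

    // Build a Skill from a <skill> element.
    public static Skill FromXml(XElement e)
    {
        return new Skill
        {
            Name = (string)e.Element("name"),
            Description = (string)e.Element("description"),
            Type = (string)e.Element("type"),
            Stat = (string)e.Element("stat"),
            Page = (int)e.Element("page"),
            Availability = (string)e.Element("availability")
        };
    }

    // Serialize this Skill back to a <skill> element.
    public XElement ToXml()
    {
        return new XElement("skill",
            new XElement("name", Name),
            new XElement("description", Description),
            new XElement("type", Type),
            new XElement("stat", Stat),
            new XElement("page", Page),
            new XElement("availability", Availability));
    }
}
Editing then becomes: load the document, find the matching <skill> element, ReplaceWith(skill.ToXml()), and save the whole file.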
Tyler,
Umm, sounds like you're suffering from a text-book case of premature optimization... Have you PROVEN that reading and writing the COMPLETE skill list to/from the XML file is TOO slow? No? Well, until it's been proven otherwise, we assume there IS NO PERFORMANCE ISSUE, right? So we just write the simplest code that works (i.e. does what we want, without worrying too much about performance), and then move on directly to the next bit of tricky functionality... testing as we go.
Iff (which is short for if-and-only-if) I had a PROVEN performance problem, then-and-only-then would I consider writing each skill to an individual XML file, to avert the necessity of rewriting a potentially large list of skills each time a single skill was modified... But this is "reference data", right? I mean, you wouldn't de/serialize your (volatile) game data to/from an XML file, would you? Because an RDBMS is known to be much better at that job, right? So you're NOT going to be rewriting this file often?
Cheers. Keith.
Quick question; the following line should be pretty self-explanatory:
doc.Descendants("DOB").Select(dob => dob.ToString()).All(dob => DateTime.Parse(dob.ToString()) != DateTime.Parse(processing.DateOfBirth))
But just in case: I want to return false if any value of the DOB node is the same datetime as
processing.DateOfBirth, because I'll need to add the date of birth to the XML if it's not in there.
My two questions are
Is this the shortest amount of code to accomplish this with LINQ to XML? (I think it's not)
and
This will be run against several million records, is there a more efficient way to accomplish this?
EDIT
I miscommunicated, sorry. The XML is small. There are millions of rows in a database, each representing a single person, with a column PersonXml that just has name, DOB, number, and a few other things. The rows are read in through a SqlDataReader and validated/updated, this being part of that.
1) Not sure here; at least I can't think of a shorter way to write it.
2) If the format is always the same, you could consider working on the data directly. Either work on the string or on a stream directly. Parsing the string into an XDocument will always take its share of time. You mentioned the XML is very small and might not change.
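For example, if the PersonXml blobs always contain exactly one DOB element in the same shape, a plain string check avoids building a tree at all; a rough sketch (the element name and date format are assumptions here):
// Hypothetical helper: true if the xml string already contains the given date of birth.
// Assumes the element is spelled <DOB> and dates round-trip in one known format.
static bool ContainsDob(string personXml, DateTime dob)
{
    int start = personXml.IndexOf("<DOB>", StringComparison.Ordinal);
    if (start < 0)
        return false;

    start += "<DOB>".Length;
    int end = personXml.IndexOf("</DOB>", start, StringComparison.Ordinal);
    if (end < 0)
        return false;

    string value = personXml.Substring(start, end - start);
    return DateTime.Parse(value) == dob;
}
You'd then append the DOB element only when ContainsDob returns false.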
For a project where I am writing an XML file that is way too large to keep in memory, I have written a class (actually found parts of that code on SO) that works on a FileStream and looks for matching patterns byte by byte. In your case this might be bad style and a bit inconvenient to work with, but if speed matters it will beat XDocument anytime.
-edit- Reading your edits, I think speed is not really something you are too concerned about, and this is a task you might have to do only once to correct some old data? In that case I'd suggest you just stick with your solution; maybe use a TaskFactory to spawn a task for each row you receive from the DB and let it run overnight. In my mind that's the easiest and safest solution.
I'm kind of stuck having to use .NET 2.0, so LINQ to XML isn't available, although I would be interested in how it would compare...
I had to write an internal program to download, extract, and compare some large XML files (about 10 MB each) that are essentially build configurations. I first attempted using libraries, such as Microsoft's XML diff/patch, but comparing the files was taking 2-3 minutes, even when ignoring whitespace, namespaces, etc. (I tested each ignore option one at a time to try to figure out what was speediest). Then I tried to implement my own ideas - lists of nodes from XmlDocument objects, dictionaries keyed on the root's direct descendants (45,000 children, by the way) whose values were ints indicating the node's position in the XML document... all took at least 2 minutes to run.
My final implementation finishes in 1-2 seconds - I made a system process call to diff with a few lines of context and saved those results to display (our development machines include cygwin, thank goodness).
I can't help but think there is a better, XML-specific way to do this that would be just as fast as a plain text diff - especially since all I'm really interested in is the Name element that is the child of each direct descendant, and I could throw away 4/5 of the file for my purposes (we only need to know what files were included, not anything else involving language or version).
So, as popular as XML is, I'm sure somebody out there has had to do something similar. What is a fast, efficient way to compare these large XML files? (preferably open source or free)
edit: a sample of the nodes - I only need to find missing Name elements (there are over 45k nodes as well)
<file>
<name>SomeFile</name>
<version>10.234</version>
<countries>CA,US</countries>
<languages>EN</languages>
<types>blah blah</types>
<internal>N</internal>
</file>
XmlDocument source = new XmlDocument();
source.Load("source.xml");

// Index every file node in the first document by its name.
Dictionary<string, XmlNode> files = new Dictionary<string, XmlNode>();
foreach (XmlNode file in source.SelectNodes("//file"))
    files.Add(file.SelectSingleNode("./name").InnerText, file);

XmlDocument source2 = new XmlDocument();
source2.Load("source2.xml");

XmlNode value;
foreach (XmlNode file in source2.SelectNodes("//file"))
{
    if (files.TryGetValue(file.SelectSingleNode("./name").InnerText, out value))
    {
        // This file is in both source and source2.
    }
    else
    {
        // This file is only in source2.
    }
}
I am not sure exactly what you want; I hope that this example will help you in your quest.
Diffing XML can be done many ways. You're not being very specific regarding the details, though. What does come through is that the files are large and you only need a fraction of the information.
Well, then the algorithm is as follows:
Normalize and reduce the documents to the information that matters.
Save the results.
Compare the results.
And the implementation:
Use the XmlReader API, which is efficient, to produce plain text representations of your information (see the sketch after these steps). Why a plain text representation? Because diff tools are predicated on the assumption that they're comparing plain text. And so are our eyeballs. Why XmlReader? You could use SAX, which is memory-efficient, but XmlReader is more efficient. As for the precise spec of that plain text file... you're just not including enough information.
Save the plain text files to some temp directory.
Use a command-line diff utility like GnuWin32 diff to get some diff output. Yeah, I know, not pure and proper, but works out of the box and there's no coding to be done. If you are familiar with some C# diff API (I am not), well, then use that API instead, of course.
Delete the temp files. (Or optionally keep them if you're going to reuse them.)
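A minimal sketch of step 1 for the sample nodes above (the paths and per-line format are just placeholders; everything here is available in .NET 2.0):
// Walk the document with XmlReader and dump each <name> to one plain-text line,
// so the two dumps can be compared with any ordinary text diff tool.
static void DumpNames(string xmlPath, string textPath)
{
    using (XmlReader reader = XmlReader.Create(xmlPath))
    using (StreamWriter writer = new StreamWriter(textPath))
    {
        while (reader.ReadToFollowing("name"))
        {
            writer.WriteLine(reader.ReadElementContentAsString());
        }
    }
}
Run it once per input (e.g. DumpNames("source.xml", "source.txt") and DumpNames("source2.xml", "source2.txt")), then hand the two text files to the diff utility.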
I've searched a lot but I couldn't find a proper solution for my problem. I wrote an XML file containing all the episode information of a TV show. It's 38 KB and contains attributes and strings for about 680 variables. At first I simply read it with the help of XmlTextReader, which worked fine on my quad-core, but my wife's five-year-old laptop took about 30 seconds to read it. So I thought about multithreading, but I get an exception because the file is already opened.
Thread start looks like this
while (reader.Read())
{
...
else if (reader.NodeType == XmlNodeType.Element)
{
if (reader.Name.Equals("Season1"))
{
current.seasonNr = 0;
current.currentSeason = season[0];
current.reader = reader;
seasonThread[0].Start(current);
}
else if (reader.Name.Equals("Season2"))
{
current.seasonNr = 1;
current.currentSeason = season[1];
current.reader = reader;
seasonThread[1].Start(current);
}
And the parsing method like this
reader.Read();
for (episodeNr = 0; episodeNr < tmp.currentSeason.episode.Length; episodeNr++)
{
reader.MoveToFirstAttribute();
tmp.currentSeason.episode[episodeNr].id = reader.ReadContentAsInt();
...
}
But it doesn't work...
I pass the reader because I want the 'cursor' to be in the right position. But I also have no clue if this could work at all.
Please help!
EDIT:
Guys, where did I write about IE?? The program I wrote parses the file. I run it on my PC and on the laptop. No IE at all.
EDIT2:
I did some stopwatch research and figured out that parsing the XML file only takes about 200 ms on my PC and 800 ms on my wife's laptop. Is it WPF being so slow? What can I do?
I agree with most everyone's comments. Reading a 38Kb file should not take so long. Do you have something else running on the machine, antivirus / etc, that could be interfering with the processing?
The amount of time it would take you to create a thread will be far greater than the amount of time spent reading the file. If you could post the actual code used to read the file and the file itself, it might help analyze performance bottlenecks.
I think you can't parse XML in multiple threads, at least not in a way that would bring performance benefits, because to read from some point in the file, you need to know everything that comes before it, if nothing else, to know at what level you are.
Your code, if it worked, would do something like this:
main    season1    season2
read
        read
skip    read
skip    read
read
skip               read
skip               read
Note that to do “skip”, you need to fully parse the XML, which means you're doing the same amount of work as before on the main thread. The only difference is that you're doing some additional work on the background threads.
Regarding the slowness, just parsing such a small XML file should be very fast. If it's slow, you're most likely doing something else that is slow, or you're parsing the file multiple times.
If I am understanding how your .xml file is being used, you have essentially created an .xml database.
If correct, I would recommend breaking your XML into separate .xml files, with an indexed .xml document. I would think you could then query - using LINQ to XML - a set of .xml data from a specific .xml source.
Of course, this means you will still need to load an .xml file; however, you will be loading significantly smaller files and you would be able to, although highly discouraged, asynchronously load .xml document objects.
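As a rough sketch of that idea (all file and element names here are invented for illustration):
<!-- index.xml: one small entry per season, pointing at a per-season file -->
<seasons>
  <season number="1" file="season1.xml" />
  <season number="2" file="season2.xml" />
</seasons>
// Load only the index, find the file for the season you need, then load just that file.
XDocument index = XDocument.Load("index.xml");
string seasonFile = index.Descendants("season")
                         .Where(s => (int)s.Attribute("number") == 2)
                         .Select(s => (string)s.Attribute("file"))
                         .Single();
XDocument season2 = XDocument.Load(seasonFile);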
Your XML schema doesn't lend itself to parallelism, since you seem to have node names (Season1, Season2) that contain the same kind of data but must be parsed individually. You could redesign your schema to use the same node name (i.e. Season) and attributes that express the differences in the data (i.e. a Number attribute to indicate the season number). Then you can parallelize, e.g. using LINQ to XML and PLINQ:
XDocument doc = XDocument.Load(@"TVShowSeasons.xml");
var seasonData = doc.Descendants("Season")
.AsParallel()
.Select(x => new Season()
{
Number = (int)x.Attribute("Number"),
Description = x.Value
}).ToList();
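(Season here is just a small container class you'd define yourself, something like:)
// Hypothetical container for the parsed season data.
public class Season
{
    public int Number { get; set; }
    public string Description { get; set; }
}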
I started thinking about how to handle the save functionality of my app, and thought about 2 options:
The application has nodes like:
Blur
Contrast
Sharpen
Invert
...
1. Interpreting the saved file, like:
Blur name:"Blur01" Amount:5
...
2. Having the saved file in a self-executable format, like:
Blur blur = new Blur();
blur.Name = "Blur01";
blur.Amount = 5;
...
Which one should I go for? Is there a better way to handle this?
I want the saved file to be backwards and forwards compatible.
EDIT: Thanks for all the replies. Can anyone please explain why #2 would not be future-proof? Is it because one can change the load/open code for #1, but not for #2?
You could probably use XML Serialization, since it's widely accepted and human readable.
Here's a tutorial on that: XML Serialization
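A minimal sketch of what that might look like for a single filter node (the Blur class and file name are assumptions for illustration):
using System.IO;
using System.Xml.Serialization;

public class Blur
{
    public string Name { get; set; }
    public int Amount { get; set; }
}

class Program
{
    static void Main()
    {
        var blur = new Blur { Name = "Blur01", Amount = 5 };
        var serializer = new XmlSerializer(typeof(Blur));

        // Write the object out as XML...
        using (var writer = new StreamWriter("blur.xml"))
            serializer.Serialize(writer, blur);

        // ...and read it back in.
        using (var reader = new StreamReader("blur.xml"))
            blur = (Blur)serializer.Deserialize(reader);
    }
}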
I would go with something more like the first option.
Although, in general, I think XML would be a better approach than inventing your own syntax. It's much better from a compatibility/future-proofing standpoint than writing your own syntax parser for the file.
What about something like:
<Filters>
<Blur Name="Blur01" />
<Sharpen Name="Sharpen01" Amount="5" />
</Filters>
I too would go with an XML file as this will allow you to ensure compatibility both forwards and backwards.
This is because you look for properties rather than parsing the file line by line.
For example, if blur changes from:
<Blur>
<name>Blur01</name>
<amount>5</amount>
</Blur>
to:
<Blur>
<name>Blur01</name>
<amount>5</amount>
<feather>true</feather>
</Blur>
Older versions of the app will still be able to read the file as they won't look for the feather property. All you need to do is ensure that you set default values when you create your objects so that the older files can be read without leaving unset data.
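For example (a sketch; the property names follow the XML above):
public class Blur
{
    public string Name { get; set; }
    public int Amount { get; set; }
    public bool Feather { get; set; }

    public Blur()
    {
        // Defaults used when an older file doesn't contain these elements,
        // so loading an old file still yields a fully usable object.
        Amount = 5;
        Feather = false;
    }
}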
In response to the update - there's no reason why you couldn't make #2 future proof. You'd just have to do the versioning yourself.
The reason having a self-executing "save format" is generally bad is that today your "Blur" function might look like:
public class Blur
{
int Amount = 5;
}
but in the future, you might improve your blur "system" to instead have something like:
public class Blur
{
int HorizontalAmount = 5;
int VerticalAmount = 10;
}
and now when you execute that saved file, it will no longer compile because there is no longer an 'Amount' property. Then to get backwards compatibility you will need to 'interpret' the Amount value to now mean HorizontalAmount = 5 AND VerticalAmount = 5 (or whatever).
So really, in the long run, you will be better off by having an interpreted file format from the start.
Are you saving it in a text file?
If that is so wouldn't it be better to save it as XML?
<Blur>
<name>Blur01</name>
<amount>5</amount>
</Blur>
Otherwise I am not sure I understand the question :)
I've been trying to deal with some delimited text files that have non-standard delimiters (not comma/quote or tab delimited). The delimiters are random ASCII characters that don't show up often between the delimiters. After searching around, I seem to have found no solutions in .NET that will suit my needs, and the custom libraries people have written for this seem to have some flaws when it comes to gigantic input (a 4 GB file with some field values easily having several million characters).
While this seems to be a bit extreme, it is actually a standard in the Electronic Document Discovery (EDD) industry for some review software to have field values that contain the full contents of a document. For reference, I've previously done this in python using the csv module with no problems.
Here's an example input:
Field delimiter =
quote character = þ
þFieldName1þþFieldName2þþFieldName3þþFieldName4þ
þValue1þþValue2þþValue3þþSomeVery,Very,Very,Large value(5MB or so)þ
...etc...
Edit:
So I went ahead and created a delimited file parser from scratch. I'm kind of wary of using this solution as it may be prone to bugs. It also doesn't feel "elegant" or correct to have to write my own parser for a task like this. I also have a feeling that I probably didn't have to write a parser from scratch for this anyway.
Use the FileHelpers API. It's .NET and open source. It's extremely high-performance, using compiled IL code to set fields on strongly typed objects, and it supports streaming.
It supports all sorts of file types and custom delimiters; I've used it to read files larger than 4GB.
If for some reason that doesn't do it for you, try just reading line by line with a string.split:
public IEnumerable<string[]> CreateEnumerable(StreamReader input)
{
string line;
while ((line = input.ReadLine()) != null)
{
yield return line.Split('þ');
}
}
That'll give you simple string arrays representing the lines in a streamy fashion that you can even LINQ into ;) Remember, however, that the IEnumerable is lazily evaluated, so don't close or alter the StreamReader until you've iterated (or caused a full-load operation like ToList/ToArray or such - given your file size, however, I assume you won't do that!).
Here's a good sample use of it:
using (StreamReader sr = new StreamReader("c:\\test.file"))
{
var qry = from l in CreateEnumerable(sr).Skip(1)
where l[3].Contains("something")
select new { Field1 = l[0], Field2 = l[1] };
foreach (var item in qry)
{
Console.WriteLine(item.Field1 + " , " + item.Field2);
}
}
Console.ReadLine();
This will skip the header line, then print out the first two fields from the file where the 4th field contains the string "something". It will do this without loading the entire file into memory.
Windows and high-performance I/O means using I/O completion ports. You may have to do some extra plumbing to get it working in your case.
This is with the understanding that you want to use C#/.NET, and according to Joe Duffy
18) Don't use Windows Asynchronous Procedure Calls (APCs) in managed code.
I had to learn that one the hard way ;), but ruling out APC use, IOCP is the only sane option. It also supports many other types of I/O, frequently used in socket servers.
As far as parsing the actual text, check out Eric White's blog for some streamlined stream use.
I would be inclined to use a combination of memory-mapped files (MSDN points to a .NET wrapper here) and a simple incremental parse, yielding back an IEnumerable of your records / text lines (or whatever).
You mention that some fields are very, very big; if you try to read them in their entirety into memory, you may be getting yourself into trouble. I would read through the file in 8 KB (or similarly small) chunks, parse the current buffer, and keep track of state.
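A rough sketch of that chunked approach (ProcessChunk stands in for your own state-tracking parse logic, and path is whatever file you're reading):
// Read the file in fixed-size chunks instead of pulling whole multi-megabyte
// fields into memory; the parser keeps its own state between chunks.
using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ProcessChunk(buffer, bytesRead); // hypothetical: advances your state machine
    }
}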
What are you trying to do with this data that you are parsing? Are you searching for something? Are you transforming it?
I don't see a problem with you writing a custom parser. The requirements seem sufficiently different to anything already provided by the BCL, so go right ahead.
"Elegance" is obviously a subjective thing. In my opinion, if your parser's API looks and works like a standard BCL "reader"-type API, then that is quite "elegant".
As for the large data sizes, make your parser work by reading one byte at a time and use a simple state machine to work out what to do. Leave the streaming and buffering to the underlying FileStream class. You should be OK with performance and memory consumption.
Example of how you might use such a parser class:
using (var reader = new EddReader(new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 8192)))
{
    // Read a small field
    string smallField = reader.ReadFieldAsText();
    // Read a large field
    Stream largeField = reader.ReadFieldAsStream();
}
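EddReader isn't an existing library class; the inside of something like it could be a very small state machine. A bare-bones sketch (assuming a single-byte encoding where þ is 0xFE, and showing only the text-field path):
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

public class EddReader : IDisposable
{
    private readonly Stream _stream;
    private const byte Quote = 0xFE; // 'þ' in Windows-1252 / Latin-1 (an assumption about the encoding)

    public EddReader(Stream stream)
    {
        _stream = stream;
    }

    // Reads the next þ-quoted field one byte at a time and returns its text,
    // or null when the end of the file is reached.
    public string ReadFieldAsText()
    {
        int b;

        // State 1: skip delimiter bytes until the opening quote.
        while ((b = _stream.ReadByte()) != -1 && b != Quote) { }
        if (b == -1) return null;

        // State 2: collect bytes until the closing quote.
        List<byte> bytes = new List<byte>();
        while ((b = _stream.ReadByte()) != -1 && b != Quote)
            bytes.Add((byte)b);

        return Encoding.GetEncoding(1252).GetString(bytes.ToArray());
    }

    public void Dispose()
    {
        _stream.Dispose();
    }
}
A ReadFieldAsStream for the huge fields would follow the same states but hand back a stream instead of buffering the bytes.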
While this doesn't help address the large input issue, a possible solution to the parsing issue might include a custom parser that uses the strategy pattern to supply a delimiter.