JSON.NET writing invalid JSON? - c#

It appears that JSON.NET is writing invalid JSON, although I wouldn't be surprised if it were due to my misuse.
Specifically, it repeats the last few characters of the JSON:
/* ... */ "Teaser":"\nfoo.\n","Title":"bar","ImageSrc":null,"Nid":44462,"Vid":17}]}4462,"Vid":17}]}
The repeating string is:
4462,"Vid":17}]}
I printed it out to the console, so I don't think this is a bug in Visual Studio's text visualizer.
The serialization code:
static IDictionary<int, ObservableCollection<Story>> _sectionStories;

private static void writeToFile()
{
    IsolatedStorageFile storage = IsolatedStorageFile.GetUserStoreForApplication();
    using (IsolatedStorageFileStream stream = storage.OpenFile(STORIES_FILE, FileMode.OpenOrCreate))
    {
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.Write(JsonConvert.SerializeObject(_sectionStories));
        }
    }
#if DEBUG
    StreamReader reader = new StreamReader(storage.OpenFile(STORIES_FILE, FileMode.Open));
    string contents = reader.ReadToEnd();
    JObject data = JObject.Parse(contents);
    string result = "";
    foreach (char c in contents.Skip(contents.Length - 20))
    {
        result += c;
    }
    Debug.WriteLine(result);
    // crashes here with ArgumentException
    // perhaps because JSON is invalid?
    var foo = JsonConvert.DeserializeObject<Dictionary<int, List<Story>>>(contents);
#endif
}
Am I doing something wrong here? Or is this a bug? Are there any known workarounds?
Curiously, JObject.Parse() doesn't throw any errors.
I'm building a Silverlight app for Windows Phone 7.

When writing the file you specify
FileMode.OpenOrCreate
OpenOrCreate does not truncate an existing file. If the file already exists and is 16 bytes longer than the data you intend to write to it (from an older version of your data that just happens to end with the exact same characters), those trailing 16 bytes will still be present when you're done writing your new data.
Solution:
FileMode.Create
From http://msdn.microsoft.com/en-us/library/system.io.filemode.aspx:
FileMode.Create: Specifies that the operating system should create a new file. If the file already exists, it will be overwritten.
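A minimal sketch of the fix against the code above, assuming the rest of writeToFile stays the same: open the file with FileMode.Create so any previous contents are discarded before writing.
using (IsolatedStorageFileStream stream = storage.OpenFile(STORIES_FILE, FileMode.Create))
{
    using (StreamWriter writer = new StreamWriter(stream))
    {
        // The file is truncated first, so no bytes from an older, longer
        // serialization can survive at the end.
        writer.Write(JsonConvert.SerializeObject(_sectionStories));
    }
}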

Related

XML file from ZIP Archive is incomplete in C#

I work with large XML files (~1,000,000 lines, 34 MB) that are stored in a ZIP archive. The XML file is used at runtime to store and load app settings and measurements. It gets loaded with this function:
public static void LoadFile(string path, string name)
{
    using (var file = File.OpenRead(path))
    {
        using (var zip = new ZipArchive(file, ZipArchiveMode.Read))
        {
            var foundConfigurationFile = zip.Entries.First(x => x.FullName == ConfigurationFileName);
            using (var stream = new StreamReader(foundConfigurationFile.Open()))
            {
                var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
                var newObject = xmlSerializer.Deserialize(stream);
                CurrentConfiguration = null;
                CurrentConfiguration = newObject as ProjectConfiguration;
                AddRecentFiles(name, path);
            }
        }
    }
}
This works most of the time.
However, some files don't get read to the end and I get an error that the file contains invalid XML. I used
foundConfigurationFile.ExtractToFile();
and found that the extracted file stops at around line 800,000. But this only happens inside this code. When I open the file in an editor, everything is there.
It looks like the zip doesn't get loaded correctly, or for that matter, completely.
Am I running into some limitation? Or is there an error in my code that I can't find?
The file is saved via:
using (var file = File.OpenWrite(Path.Combine(dirInfo.ToString(), fileName.ToString()) + ".pwe"))
{
    var zip = new ZipArchive(file, ZipArchiveMode.Create);
    var configurationEntry = zip.CreateEntry(ConfigurationFileName, CompressionLevel.Optimal);
    var stream = configurationEntry.Open();
    var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
    xmlSerializer.Serialize(stream, CurrentConfiguration);
    stream.Close();
    zip.Dispose();
}
Update:
The problem was the File.OpenWrite() method.
If you try to overwrite a file with this method and the new file is shorter than the old one, you end up with a mix of the old and new contents, because File.OpenWrite() doesn't truncate the existing file first (as stated in the docs).
To do it correctly it was necessary to use the File.Create() method instead, because that method truncates the old file first.
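A short sketch of the corrected save, reusing the same names as above: File.Create truncates any existing .pwe file before the new archive is written.
using (var file = File.Create(Path.Combine(dirInfo.ToString(), fileName.ToString()) + ".pwe"))
using (var zip = new ZipArchive(file, ZipArchiveMode.Create))
{
    var configurationEntry = zip.CreateEntry(ConfigurationFileName, CompressionLevel.Optimal);
    using (var stream = configurationEntry.Open())
    {
        var xmlSerializer = new XmlSerializer(typeof(ProjectConfiguration));
        xmlSerializer.Serialize(stream, CurrentConfiguration);
    }
    // Disposing the ZipArchive (via the using block) finishes writing the
    // central directory; the truncated file contains only the new archive.
}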

GeoJSON File Read Exception Handling in .NET?

I'm getting this error somewhere within a 1 GB GeoJSON file.
System.ArgumentOutOfRangeException: 'According to the GeoJSON v1.0
spec a LineString must have at least two or more positions. (Parameter
'coordinates')'
The input file is an open-source US roadway file, made up of LineStrings. I need to log what is causing the exception and continue processing. How can that be done? The code looks like this:
var featuresAll = "jsonfiles\\MotorVehicleUseMapRoads.json".CreateFromJsonFile<FeatureCollection>();
...
public static T CreateFromJsonFile<T>(this String fileName)
{
    T data;
    using (FileStream fileStream = new FileStream(fileName, FileMode.Open))
    {
        data = CreateFromJsonStream<T>(fileStream);
    }
    return data;
}
Thanks.
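One possible approach, sketched under the assumption that the file is a standard FeatureCollection and that Newtonsoft.Json plus GeoJSON.Net's Feature type are in use: stream the "features" array, load each feature as a JObject, and convert it inside a try/catch so a bad LineString can be logged and skipped instead of aborting the whole read.
using System;
using System.Collections.Generic;
using System.IO;
using GeoJSON.Net.Feature;   // assumption: this is where Feature/FeatureCollection come from
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static List<Feature> ReadFeaturesSkippingInvalid(string fileName)
{
    var features = new List<Feature>();
    var serializer = new JsonSerializer();
    using (var fileStream = File.OpenRead(fileName))
    using (var textReader = new StreamReader(fileStream))
    using (var jsonReader = new JsonTextReader(textReader))
    {
        while (jsonReader.Read())
        {
            // Only objects sitting directly inside the top-level "features" array.
            if (jsonReader.TokenType != JsonToken.StartObject ||
                !jsonReader.Path.StartsWith("features[") ||
                jsonReader.Path.Contains("."))
                continue;

            var raw = JObject.Load(jsonReader); // consumes the whole feature object
            try
            {
                features.Add(raw.ToObject<Feature>(serializer));
            }
            catch (Exception ex)
            {
                // Log the offending feature and keep going.
                Console.WriteLine("Skipped invalid feature near line " + jsonReader.LineNumber + ": " + ex.Message);
            }
        }
    }
    return features;
}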

Out of memory exception reading "Large" file

I'm trying to serialize an object into a string.
The first problem I encountered was that the XmlSerializer.Serialize method threw an Out of memory exception. I've tried all kinds of solutions and none worked, so I serialized it into a file instead.
The file is about 300 MB (32-bit process, 8 GB RAM), and trying to read it with StreamReader.ReadToEnd also results in an Out of memory exception.
The XML format and loading it into a string are not optional; they are requirements.
The question is:
Is there any reason a 300 MB file would throw that kind of exception? 300 MB is not really a large file.
Serialization code that fails on .Serialize
using (MemoryStream ms = new MemoryStream())
{
    var type = obj.GetType();
    if (!serializers.ContainsKey(type))
        serializers.Add(type, new XmlSerializer(type));
    // new XmlSerializer(obj.GetType()).Serialize(ms, obj);
    serializers[type].Serialize(ms, obj);
    ms.Position = 0;
    using (StreamReader sr = new StreamReader(ms))
    {
        return sr.ReadToEnd();
    }
}
Serialization and read from file that fails on ReadToEnd
var type = obj.GetType();
if (!serializers.ContainsKey(type))
    serializers.Add(type, new XmlSerializer(type));
FileStream fs = new FileStream(@"c:/temp.xml", FileMode.Create);
TextWriter writer = new StreamWriter(fs, new UTF8Encoding());
serializers[type].Serialize(writer, obj);
writer.Close();
fs.Close();
using (StreamReader sr = new StreamReader(@"c:/temp.xml"))
{
    return sr.ReadToEnd();
}
The object is large because its an elaborate system entire configuration object...
UPDATE:
Reading the file in chunks (8*1024 chars) will load the file into a StringBuilder, but the builder then fails on ToString()... starting to think there is no way, which is really strange.
Yeah, if you're using 32-bit, trying to load 300MB in one chunk is going to be awkward, especially when using approaches that don't know the final size (number of characters, not bytes) in advance, thus have to keep doubling an internal buffer. And that is just when processing the string! It then needs to rip that into a DOM, which can often take several times as much space as the underlying data. And finally, you need to deserialize it into the actual objects, usually taking about the same again.
So - indeed, trying to do this in 32-bit will be tough.
The first thing to try is: don't use ReadToEnd - just use XmlReader.Create with either the file path or the FileStream, and let XmlReader worry about how to load the data. Don't load the contents for it.
After that... the next thing to do is: don't limit it to 32-bit.
Well, you could try enabling the 3GB switch, but... moving to 64-bit would be preferable.
Aside: xml is not a good choice for large volumes of data.
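A minimal sketch of the first suggestion, assuming the same serializers cache and a hypothetical configuration type SystemConfiguration: let XmlReader stream the file and deserialize directly from it, so the 300 MB of XML never has to exist as a single string.
// SystemConfiguration is a placeholder for the real configuration type.
using (var reader = System.Xml.XmlReader.Create(@"c:/temp.xml"))
{
    // XmlReader pulls from the file in small buffers; no ReadToEnd, no giant string.
    var config = (SystemConfiguration)serializers[typeof(SystemConfiguration)].Deserialize(reader);
    // work with `config` directly
}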
Exploring the source code for StreamReader.ReadToEnd reveals that it internally makes use of the StringBuilder.Append method:
public override String ReadToEnd()
{
    if (stream == null)
        __Error.ReaderClosed();
#if FEATURE_ASYNC_IO
    CheckAsyncTaskInProgress();
#endif
    // Call ReadBuffer, then pull data out of charBuffer.
    StringBuilder sb = new StringBuilder(charLen - charPos);
    do
    {
        sb.Append(charBuffer, charPos, charLen - charPos);
        charPos = charLen; // Note we consumed these characters
        ReadBuffer();
    } while (charLen > 0);
    return sb.ToString();
}
which most probably throws the exception that leads to this question/answer: interesting OutOfMemoryException with StringBuilder

Isolated Storage adding characters at the end of a Stream

I'm having problems converting a long into a string and back.
What I'm doing is trying to save the DateTime.Now.Ticks property in isolated storage, then retrieve it afterwards. This is what I did to save it:
IsolatedStorageFile appStorage = IsolatedStorageFile.GetUserStoreForApplication();
using (var file = appStorage.CreateFile("appState"))
{
    using (var sw = new StreamWriter(file))
    {
        sw.Write(DateTime.Now.Ticks);
    }
}
When I retrieve the file, I do it like this:
if (appStorage.FileExists("appState"))
{
    using (var file = appStorage.OpenFile("appState", FileMode.Open))
    {
        using (StreamReader sr = new StreamReader(file))
        {
            string s = sr.ReadToEnd();
        }
    }
    appStorage.DeleteFile("appState");
}
Up to this point I have no problem, but when I try to convert the string I retrieved, a FormatException is thrown at runtime. These are the two ways I tried:
long time = long.Parse(s);
long time = (long)Convert.ToDouble(s);
So is there any other way to do this?
EDIT:
The problem is not in the conversion but rather in the StreamWriter adding extra characters.
I suspect you are seeing some other data at the end. Something else may have written other data to the stream.
I think you should use StreamWriter.WriteLine() instead of StreamWriter.Write() to write the data and then call StreamReader.ReadLine() instead of StreamReader.ReadToEnd() to read it back in.
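A small sketch of that suggestion, reusing the code from the question: write the ticks as a complete line, then read exactly one line back and parse it.
// Writing: one full line, so the value has a well-defined end.
using (var file = appStorage.CreateFile("appState"))
using (var sw = new StreamWriter(file))
{
    sw.WriteLine(DateTime.Now.Ticks);
}

// Reading: take just that line back and parse it.
using (var file = appStorage.OpenFile("appState", FileMode.Open))
using (StreamReader sr = new StreamReader(file))
{
    long time = long.Parse(sr.ReadLine());
}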

MonoTouch - UIDevice.CurrentDevice.Name - UTF8

We've noticed that UTF8 characters don't come out correctly when using UIDevice.CurrentDevice.Name in MonoTouch.
It comes out as "iPad 2 ??" if you use some of the special characters, such as holding down the apostrophe key on the iPad keyboard. (Sorry, I don't know the equivalent way to produce these characters on Windows.)
Is there a recommended workaround to get the correct text? We don't mind converting to UTF8 ourselves. I also tried simulating this from a UITextField and it worked fine--no UTF8 problems.
The reason this is causing problems is we are sending this text off to a web service, and it's causing XML parsing issues.
Here is a snippet of the XmlWriter code (_parser.WriteRequest):
using (XmlWriter xmlWriter = XmlWriter.Create(textWriter, new XmlWriterSettings
{
#if DEBUG
    Indent = true,
#else
    Indent = false, NewLineHandling = NewLineHandling.None,
#endif
    OmitXmlDeclaration = true
}))
{
    xmlWriter.WriteStartDocument();
    xmlWriter.WriteStartElement("REQUEST");
    xmlWriter.WriteAttributeString("TYPE", "EXAMPLE");
    xmlWriter.WriteEndElement();
    xmlWriter.WriteEndDocument();
}
The TextWriter is passed in from:
public Response MakeRequest(Request request)
{
    var httpRequest = CreateRequest(request);
    WriteRequest(httpRequest.GetRequestStream(), request);
    using (var httpResponse = httpRequest.GetResponse() as HttpWebResponse)
    {
        using (var responseStream = httpResponse.GetResponseStream())
        {
            var response = new Response();
            ReadResponse(response, responseStream);
            return response;
        }
    }
}

private void WriteRequest(Stream requestStream, Request request)
{
    if (request.Type == null)
    {
        throw new InvalidOperationException("Request Type was null!");
    }
    if (_logger.Enabled)
    {
        var builder = new StringBuilder();
        using (var writer = new StringWriter(builder, CultureInfo.InvariantCulture))
        {
            _parser.WriteRequest(writer, request);
        }
        _logger.Log("REQUEST: " + builder.ToString());
        using (requestStream)
        {
            using (StreamWriter writer = new StreamWriter(requestStream))
            {
                writer.Write(builder.ToString());
            }
        }
    }
    else
    {
        using (requestStream)
        {
            using (StreamWriter writer = new StreamWriter(requestStream))
            {
                _parser.WriteRequest(writer, request);
            }
        }
    }
}
_logger writes to Console.WriteLine, it is enabled in #if DEBUG mode. Request is just a storage class with properties, sorry easy to confuse with HttpWebRequest.
I'm seeing ?? in both Xcode's console and MonoDevelop's console. I'm also assuming the server is receiving them incorrectly as well, since I get an error. Using UITextField.Text with the same strange characters instead of the device description works fine with no issues. That makes me think the device description is the culprit.
EDIT: this fixed it -
Encoding.UTF8.GetString (Encoding.ASCII.GetBytes(UIDevice.CurrentDevice.Name));
Okay, I think I know the problem. You're creating a StringWriter, which always reports its encoding as UTF-16 (unless you override the Encoding property). You're then taking the string from that StringWriter (which will start with <?xml version="1.0" encoding="UTF-16" ?>) and writing it to a StreamWriter which will default to UTF-8. That mixture of encodings is causing the problem.
The simplest approach would be to change your code to pass a Stream directly to the XmlWriter - a MemoryStream if you really want, or just requestStream. That way the XmlWriter can declare that it's using the exact encoding that it's actually writing the binary data in - you haven't got an intermediate step to mess things up.
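A minimal sketch of that first approach, adapted from the code above (assumed, not the poster's final version): create the XmlWriter over requestStream itself so the declared encoding and the bytes actually written always agree.
private void WriteRequest(Stream requestStream, Request request)
{
    using (requestStream)
    using (XmlWriter xmlWriter = XmlWriter.Create(requestStream, new XmlWriterSettings
    {
        Encoding = Encoding.UTF8,   // matches what actually goes on the wire
        OmitXmlDeclaration = true
    }))
    {
        xmlWriter.WriteStartDocument();
        xmlWriter.WriteStartElement("REQUEST");
        xmlWriter.WriteAttributeString("TYPE", request.Type);
        xmlWriter.WriteEndElement();
        xmlWriter.WriteEndDocument();
    }
}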
Alternatively, you could create a subclass of StringWriter which allows you to specify the encoding. See this answer for some sample code.
MonoTouch simply calls NSString.FromHandle on the value it receives from the call to UIDevice.CurrentDevice.Name. That's how most strings are created from NSString inside all the bindings.
That should get you a string that you can see in MonoDevelop (no ?), so I can't rule out a bug.
Can you tell us exactly how the device is named? If so, please open a bug report and we'll check this possibility.
