I'm running into an issue with serializing/deserializing using Convert.ToBase64String and Convert.FromBase64String.
My code currently takes a class, serializes it to a MemoryStream, then converts the stream's contents into a string using Convert.ToBase64String().
When I try to deserialize, the first thing I do is call Convert.FromBase64String().
Upon running FromBase64String(), though, it will sometimes throw an error stating that the string is an invalid length.
Would anyone be able to provide clarity as to why it's not properly converting to Base64?
Edit:
Thanks everyone. I was able to figure out what the issue was: I was forgetting to clear the MemoryStream before serializing more data.
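For anyone hitting the same symptom, a minimal sketch of that fix (the helper names are illustrative, not the original code): reset the MemoryStream with SetLength(0) before each write, or simply create a fresh stream per serialization, so stale bytes from a previous run never reach Convert.ToBase64String.
using System;
using System.IO;

static string ToBase64(MemoryStream ms, byte[] payload)
{
    ms.SetLength(0);                       // clear bytes left over from a previous serialization
    ms.Write(payload, 0, payload.Length);
    return Convert.ToBase64String(ms.ToArray());
}

static byte[] FromBase64(string base64)
{
    return Convert.FromBase64String(base64);   // throws FormatException on malformed input
}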
I am running into issues when I execute the following C# code:
byte[] addr = new byte[IntPtr.Size];
IntPtr conv = (IntPtr)(BitConverter.ToInt64(addr, 0));
The error I am getting is:
System.ArgumentException: Destination array is not long enough to copy all the items in the collection. Check array index and length.
at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)
at System.BitConverter.ToInt64(Byte[] value, Int32 startIndex)
I am still pretty much a noob in C#, learning as I go... Do you have an idea what I am missing here? I understand there appears to be an issue with the destination array, but I'm not really sure why... Any guidance highly appreciated.
The underlying error here is that the source array is only 4 bytes long (IntPtr.Size is 4 in a 32-bit process), while BitConverter.ToInt64 needs to read 8 bytes.
The oddity is that you get an error about a destination array. The method you call shouldn't be copying anything to any arrays, so why is it talking about a destination?
The error messages in the source use a mechanism where the error is identified by an id, in this case Arg_ArrayPlusOffTooSmall.
Reusing such an id seems reasonable when its name matches the error.
However, most uses of Arg_ArrayPlusOffTooSmall occur where an array is the destination of a write rather than the source of a read, so the message was made more 'helpful' with details about a destination array, details which are incorrect in this case.
Congratulations, you've found a bug in .net!
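A hedged fix sketch (keeping the question's variable names): since BitConverter.ToInt64 always consumes 8 bytes, branch on the pointer size and use ToInt32 in a 32-bit process.
using System;

byte[] addr = new byte[IntPtr.Size];          // 4 bytes on 32-bit, 8 on 64-bit
IntPtr conv = IntPtr.Size == 8
    ? (IntPtr)BitConverter.ToInt64(addr, 0)   // 64-bit: reads all 8 bytes
    : (IntPtr)BitConverter.ToInt32(addr, 0);  // 32-bit: reads only 4 bytes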
I am working on optimizing our code so that we can read, create, and send XML files that can be very large (around 2 GB).
For reading and creating we are using the XmlReader class.
We actually get the XML as a string from some other service. Storing that string in a variable takes the same amount of memory. That point aside, please suggest the best way to deal with the XML string so that an out-of-memory exception doesn't occur.
I cannot show code here due to company policy, but that should not matter: the code already works, except that with a large XML string it throws an out-of-memory exception, as mentioned.
EXPLANATION:
We get 2 GB of XML from a service.
We process it using streaming.
Since we need to read that XML using XmlReader, we pass the XML in the form of a string to create a new XML document of almost the same size (2 GB):
byte[] msg = Buffer.ExtractMessage(messageStart, messageEnd); // raw message bytes from the service
string msg1 = Encoding.UTF8.GetString(msg);                   // materializes the whole ~2 GB payload as one string
CreateNewXMLFileFromTheCurrentXmlString(msg1);
We then send that new xml to some other service.
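A hedged sketch of one way to avoid the giant intermediate string (CopyXml is a made-up helper, not the poster's code): wrap the raw bytes in a stream and let XmlReader/XmlWriter move the document across node by node, so the payload is never doubled into a UTF-16 string.
using System.IO;
using System.Xml;

static void CopyXml(Stream input, Stream output)
{
    using (var reader = XmlReader.Create(input))
    using (var writer = XmlWriter.Create(output))
    {
        writer.WriteNode(reader, defattr: true); // streams the document without buffering it whole
    }
}

// Usage with the bytes from the earlier snippet:
// using (var input = new MemoryStream(msg))
// using (var output = File.Create("new.xml"))
//     CopyXml(input, output);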
The best way would be to use a well-normalized and indexed database, if that's possible for you.
Then querying the data with LINQ should solve your problems.
The real problem is the source rather than your logic: XML files shouldn't be as big as yours.
Take a look here:
LINQ to XML
There isn't much documentation around it.
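For reference, a tiny LINQ to XML sketch (the file and element names here are made up); note that XDocument.Load pulls the entire document into memory, so it suits the database-backed route above rather than a raw 2 GB file:
using System.Linq;
using System.Xml.Linq;

var doc = XDocument.Load("posts.xml");           // hypothetical file
var ids = doc.Descendants("post")                // hypothetical element name
             .Select(p => (string)p.Attribute("id"))
             .ToList();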
http://msdn.microsoft.com/en-us/library/system.net.http.httpcontent.readasbytearrayasync(v=vs.118).aspx
If it does not guarantee that it reads the whole Content, how do I know when to stop reading?
You don't tell it when to stop reading. It returns a Task<byte[]>, so after some amount of time it will either:
- finish reading the entire body and then give it to you as a single byte[], or
- encounter a problem and throw an exception.
If no exception is thrown, it has successfully read the entire body as a byte[].
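A minimal sketch of the usual pattern (the URL and method name are placeholders): just await the task; when the await completes without throwing, the full body is in hand.
using System.Net.Http;
using System.Threading.Tasks;

static async Task<byte[]> DownloadBodyAsync(string url)
{
    using (var client = new HttpClient())
    using (var response = await client.GetAsync(url))
    {
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync(); // completes only once the whole body is buffered
    }
}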
So, I'm dealing with some pretty nasty legacy data, and I need to pass some of it to a RESTful API.
I'm using the WebApi Client NuGet package, and I'm running into a problem: sometimes one of my model objects contains a string with an invalid XML character (like 0xf1). There is no reason these values should be in the data, so I really just want to filter them out.
My problem: When the XmlMediaTypeFormatter attempts to serialize my object graph, and it encounters one of these bad values, it throws. (expected)
What I would like to do is to make it silently fallback to a character that can be encoded.
I tried replacing the UTF8Encoding (see the code below), but I still get the exception. It seems that somewhere deep in the bowels of the DataContractSerializer, it uses its own encoding object.
Anyone know of a way to get the XmlMediaFormatter to use fallback characters when an encoding error occurs?
Here is what I have tried so far:
var formatter = new System.Net.Http.Formatting.XmlMediaTypeFormatter();
formatter.SupportedEncodings.Clear();
// the second param in the ctor is throwOnInvalidBytes = false
var newUtf8Encoding = new System.Text.UTF8Encoding(false, false);
formatter.SupportedEncodings.Add(newUtf8Encoding);
var content = new System.Net.Http.ObjectContent(typeof(MyEntity), myInstance, formatter);
var stream = new MemoryStream();
content.CopyToAsync(stream).Wait(); // exception here, I hoped that fallback would occur
stream.Close();
I know that our long-term solution must be to fix the data.
The only way to keep the data fixed is to fix the legacy code that is writing the bad values, and that is going to take significant time and effort. We will do it, but I need a stop-gap.
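One hedged stop-gap sketch (an assumption, not a known XmlMediaTypeFormatter hook): scrub offending characters from the model's strings before the graph reaches the serializer, using XmlConvert.IsXmlChar to test each character against the XML 1.0 rules.
using System.Linq;
using System.Xml;

static string ScrubInvalidXmlChars(string value, char fallback = '?')
{
    if (string.IsNullOrEmpty(value)) return value;
    // Replace any character the XML 1.0 spec disallows with a safe fallback.
    return new string(value.Select(c => XmlConvert.IsXmlChar(c) ? c : fallback).ToArray());
}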
I've been looking to do some binary serialization to a file, and protobuf-net seems like a well-performing alternative. I'm a bit stuck getting started, though. Since I want to decouple the class definitions from the actual serialization, I'm not using attributes but opting for .proto files. I've got the structure for the object down (I think):
message Post {
    required uint64 id = 1;
    required int32 userid = 2;
    required string status = 3;
    required datetime created = 4;
    optional string source = 5;
}
(is datetime valid or should I use ticks as int64?)
but I'm stuck on how to use protogen and then serialize an IEnumerable<Post> to a file and read it back. Any help would be appreciated.
Another related question: are there any best practices for detecting corrupted binary files, e.g. if the computer is shut down while serializing?
Re DateTime: this isn't a standard proto type; I have added a BCL.DateTime (or similar) to my own library, which is intended to match the internal serialization that protobuf-net uses for DateTime, but I'm fairly certain I haven't (yet) updated the code generator to detect this as a special case. It would be fairly easy to add if you want me to try. If you want maximum portability, a "ticks" style approach might be pragmatic. Let me know...
Re serializing to a file: it should be about the same as the Getting Started example, but note that protobuf-net wants to work with data it can reconstruct; a bare IEnumerable<T> might cause problems, whereas IList<T> should be fine (it'll default to List<T> as the concrete type when reconstructing). A minimal save/load sketch follows.
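A minimal sketch, assuming a Post type generated from the .proto above and a List<Post>:
using System.Collections.Generic;
using System.IO;
using ProtoBuf;

static void Save(List<Post> posts, string path)
{
    using (var file = File.Create(path))
    {
        Serializer.Serialize(file, posts);      // writes the whole list to the stream
    }
}

static List<Post> Load(string path)
{
    using (var file = File.OpenRead(path))
    {
        return Serializer.Deserialize<List<Post>>(file);
    }
}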
Re corruption: perhaps use SerializeWithLengthPrefix; it can then detect issues even at a message boundary (where they are otherwise undetectable as an EOF). This (as the name suggests) writes the length first, so it knows whether it has enough data (via DeserializeWithLengthPrefix). Alternatively, reserve the first [n] bytes in your file for a hash/checksum: write a blank spacer, then the data, then calculate the hash/checksum and overwrite the spacer; verify it during deserialization. Much more work.
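A sketch of the length-prefixed variant (PrefixStyle.Base128 is one common choice, assumed here):
using System.IO;
using ProtoBuf;

static void AppendPost(Stream stream, Post post)
{
    Serializer.SerializeWithLengthPrefix(stream, post, PrefixStyle.Base128);
}

static Post ReadPost(Stream stream)
{
    // Fails loudly (rather than returning a silently truncated object)
    // if the stream ends before the declared length has been read.
    return Serializer.DeserializeWithLengthPrefix<Post>(stream, PrefixStyle.Base128);
}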