The .NET framework is full of examples where a method call will return a Stream that you can then read and implement as you need to. But how does this work under the covers? What backs the stream?
Say I am writing a parser that takes some input and produces data in a pre-defined format. If, for example, I create a MemoryStream, write my content to it with a StreamWriter, and then have the method return the stream, I run into a problem: disposing the writer closes the underlying stream, so the caller can't read it as expected.
How is this typically managed? Is the content for the stream stored in the object until needed (like a byte[]) and then when the method requesting the Stream is invoked it creates it at that time or what?
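For illustration, the problematic pattern boils down to something like this (the method and its contents are hypothetical):

using System.IO;

// Hypothetical parser method that returns a stream for the caller to read.
public Stream Parse(string input)
{
    var stream = new MemoryStream();
    using (var writer = new StreamWriter(stream))   // disposing the writer also closes the MemoryStream
    {
        writer.Write(input);                        // stand-in for the real parsing/formatting work
    }
    return stream;                                  // the caller receives a closed stream and reads will fail
}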
As MSDN puts it:
A stream is an abstraction of a sequence of bytes, such as a file, an input/output device, an inter-process communication pipe, or a TCP/IP socket. The Stream class and its derived classes provide a generic view of these different types of input and output, isolating the programmer from the specific details of the operating system and the underlying devices.
So I suppose what you need is serialization. To serialize using different formats in .NET through streams, you first need to define your requirements.
Serialization is the process of converting an object into bytes for persistent storage. Deserialization converts those bytes back into an object without any loss of data. Serialization is used for storing values in files or a database, sending an object over the network, and reconstructing the original object afterwards. The .NET Framework provides a set of Framework Class Libraries (FCL) that make the serialization process easy, and it is very useful for sending data between two different applications.
The .NET Framework supports binary and XML serialization formats. XML serialization serializes only public fields, whereas binary serialization serializes both private and public fields. Serialization can be performed either as basic or custom. Basic serialization happens when a class has the SerializableAttribute applied; it does not support versioning. A custom serialization class must be marked with SerializableAttribute and implement the ISerializable interface, and GetObjectData must be implemented to control what gets serialized. Custom serialization can be used with both the binary and XML formats. (The sample application in the article linked below uses custom serialization for both.) The .NET Framework also supports designer serialization, which is associated with development tools.
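For example, basic binary serialization needs nothing more than the attribute (the Person type below is just an illustration):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Person
{
    public string Name;      // public field: picked up by both XML and binary serialization
    private int _age = 30;   // private field: included by binary serialization, skipped by XmlSerializer
}

public static class Program
{
    public static void Main()
    {
        var formatter = new BinaryFormatter();
        using (var stream = File.Create("person.bin"))
        {
            // Basic serialization: the [Serializable] attribute alone is enough here.
            formatter.Serialize(stream, new Person { Name = "Alice" });
        }
    }
}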
Custom serialization
Custom serialization means controlling the serialization and deserialization process yourself. It can be implemented by running custom methods during and after serialization, or by implementing the ISerializable interface. Custom serialization is used for versioning the serialized object: if the object's state has changed (for example, a new field was added in a later version), custom serialization lets you recover the values without loss of data. Versioning of a serialized object can otherwise fail because of missing attributes.
To run custom methods during and after serialization, apply the OnDeserializedAttribute, OnDeserializingAttribute, OnSerializedAttribute and OnSerializingAttribute attributes to customize the data at the appropriate points. The OptionalFieldAttribute is used so that fields missing from old-version data are simply ignored during deserialization; the formatter does not raise an error for them. Together these allow you to update the object before and after serialization/deserialization.
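A minimal sketch of what those callback attributes look like in practice (the Settings class and its fields are made up for the example):

using System;
using System.Runtime.Serialization;

[Serializable]
public class Settings
{
    public string Name;

    [OptionalField(VersionAdded = 2)]   // added in a later version; old payloads simply don't contain it
    public string Theme;

    [OnDeserialized]                    // runs after deserialization, a good place to patch missing data
    private void OnDeserialized(StreamingContext context)
    {
        if (Theme == null)
            Theme = "default";          // supply a sensible value when reading version-1 data
    }
}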
I think the link below will help you:
http://www.codeproject.com/Articles/422474/Serialization-using-different-formats-in-NET
The stream can be backed by many different things. That's the whole idea of streams deriving from the Stream abstract base class.
The stream can be backed by an OS level file stream, by memory, by an HTTP connection, or anything else that can fulfill the Stream contract.
In the case of a MemoryStream the backing storage is just a block of memory.
In the case of StreamWriter, calling Dispose() on it will close the underlying stream, so make sure you don't dispose the writer as long as you still want to use the stream. Also, if you want to read a MemoryStream back after writing to it, be sure to reset the position to the beginning, e.g.:
memStream.Seek(0, SeekOrigin.Begin);
StreamWriter has an overloaded constructor (with a leaveOpen parameter) that you can use to instruct the writer not to close the underlying stream.
Also, Stream itself has a Write method that takes a byte array, which lets you avoid a StreamWriter altogether.
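Putting those points together, a method like the one in the question could look roughly like this (StreamHelpers and BuildStream are made-up names; the leaveOpen overload has been available since .NET 4.5):

using System.IO;
using System.Text;

public static class StreamHelpers
{
    public static MemoryStream BuildStream(string content)
    {
        var stream = new MemoryStream();
        // leaveOpen: true keeps the MemoryStream usable after the writer is disposed
        using (var writer = new StreamWriter(stream, Encoding.UTF8, bufferSize: 1024, leaveOpen: true))
        {
            writer.Write(content);
        }
        stream.Seek(0, SeekOrigin.Begin);   // rewind so the caller can read from the start
        return stream;
    }
}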
Related
I have some data that is persisted in a database. The serialized content was originally written using the default NetDataContractSerializer with a DataContract attribute on the classes.
Now I want to move to using classes that implement IXmlSerializable to have more control over the serialized content and make it leaner and faster.
How will I be able to read the existing content as well as store future content in the new way?
I have looked into ISurrogateSelector, but it doesn't seem to do the trick as the ISerializationSurrogate interface only supports getting and setting object data as the ISerializable interface, but no way to specify ReadXml and WriteXml as in IXmlSerializable interface. As I am reading in the stream, I don't have any information about the format other than the stream itself.
The deserialization must produce the same class instance regardless of the original serialization method.
The solution needs to target .NET Framework 4.6.
Does protobuf-net use BinaryFormatter or some other formatter as a base serializer to serialize an object as byte[] and then write it to a stream?
Edit:
I use protobuf-net to serialize data and want to deserialize it in Go. Is there any serializer that can do the job in Go?
Protobuf-net is a ground-up implementation of the "Protocol Buffers" serialization format, with an idiomatic .NET API. It has nothing to do with BinaryFormatter (although it can be used to create custom ISerializable implementations for use with BinaryFormatter, if you still play in that world).
If you want to use Protocol Buffers (protobuf) with Go, just pick one of the Go implementations from this list.
Most protobuf libraries are "contract first", meaning: you need a .proto schema; to get that from protobuf-net, use Serializer.GetProto<T>() for the T that you are using as a root type.
Note: if you are serializing DateTime or TimeSpan, it would be a good idea to make sure that you are using DataFormat.WellKnown on those members - it'll make it much easier to work in a cross-platform way with other libraries. But note that this is not a data-compatible change: it fundamentally changes how those values are stored, so if you have existing data you'll need to think of a migration strategy.
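For illustration, here is roughly what that looks like (LogEntry is a made-up type; the exact GetProto overloads vary between protobuf-net versions):

using System;
using ProtoBuf;

[ProtoContract]
public class LogEntry
{
    [ProtoMember(1)]
    public string Message { get; set; }

    // WellKnown maps DateTime to the standard protobuf Timestamp representation,
    // which Go protobuf libraries understand - but it changes the wire format.
    [ProtoMember(2, DataFormat = DataFormat.WellKnown)]
    public DateTime When { get; set; }
}

// Emit a .proto schema for the root type, then compile it with protoc for Go:
// string schema = Serializer.GetProto<LogEntry>();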
The book CLR Via C# presents a simple way to clone objects via binary serialization.
It specifies StreamingContextStates.Clone when creating the BinaryFormatter like so:
var formatter = new BinaryFormatter
{
Context = new StreamingContext(StreamingContextStates.Clone)
};
The documentation for StreamingContextStates.Clone says that it
Specifies that the object graph is being cloned. Users can assume that the cloned graph will continue to exist within the same process and be safe to access handles or other references to unmanaged resources.
Well fair enough - but I don't really know what this actually means. In what way does this actually change the behaviour of the BinaryFormatter? Can anyone list any concrete effects that using this flag has?
Serialization is the subject here.
MS provided an "abstract" mini-framework to allow serialization of objects.
BinaryFormatter is a specific implementation of that mini-framework's concepts.
A developer may choose to use those concepts to create their own custom formatter - or -
MS itself, when creating the mini-framework, anticipated further serialization implementations.
So they provided those flags as part of the framework.
To answer your specific question: those flags do not have any effect on BinaryFormatter itself, since it is already implemented as a tool (if you like) that walks the object graph and simply converts it into raw bytes.
If you create your own serializer - one that, for example, saves the object to a database, a file, shared memory or whatever - you would want the user of your serializer to specify the corresponding flag.
Unless I totally misunderstood MS devs since 2003 .. :) (which is possible!)
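For illustration, here is a rough sketch (with made-up type and member names) of how a custom ISerializable implementation can inspect that flag and decide whether it is safe to carry an unmanaged handle across:

using System;
using System.Runtime.Serialization;

[Serializable]
public class ResourceHolder : ISerializable
{
    private IntPtr _handle;   // unmanaged handle, only meaningful inside this process

    public ResourceHolder(IntPtr handle) { _handle = handle; }

    protected ResourceHolder(SerializationInfo info, StreamingContext context)
    {
        _handle = new IntPtr(info.GetInt64("handle"));
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        bool isClone = (context.State & StreamingContextStates.Clone) != 0;
        // A raw handle is only valid within this process, so only keep it for in-process clones.
        info.AddValue("handle", isClone ? _handle.ToInt64() : 0L);
    }
}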
I'm writing a program that builds up a tree structure made up of classes that inherit from an abstract Node class. There are a number of different type of nodes built into my program. However, I also want to allow more advanced users to be able to reference my library and write their own derivations of Node. These plug-in libraries are then loaded when my app starts up through Assembly.Load(). Thus all the potential Node types used by my application will not be known until run time.
In addition, I want to be able to serialize and deserialize these trees to and from XML files. I have some experience with XmlSerializer, DataContractSerializer, and implementing IXmlSerializable. Typically I go with DataContractSerializer, as it usually requires less code than implementing IXmlSerializable and can serialize private fields, which XmlSerializer cannot.
Yet with this project I also have to consider that other users will be creating classes that derive from my class, and will also have to add whatever code or attributes are required to serialize them as well.
Considering this, are there reasons I should go with one serialization mechanism over another?
If the serialization and deserialization will only occur within your application, and if there is no requirement that anyone else be able to read the serialized data, then the serialization format doesn't impact the API: as far as a user of the API is concerned, you will serialize into an opaque file and deserialize from the same.
In this case, use DataContractSerializer, as it can serialize into binary if necessary.
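What "serialize into binary" means here, roughly: you point DataContractSerializer at a binary XmlDictionaryWriter instead of a text writer. A sketch (TreePersistence and SaveBinary are made-up names):

using System.IO;
using System.Runtime.Serialization;
using System.Xml;

public static class TreePersistence
{
    // Writes the object graph in the compact binary XML format instead of text XML.
    public static void SaveBinary<T>(T root, string path)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = File.Create(path))
        using (var writer = XmlDictionaryWriter.CreateBinaryWriter(stream))
        {
            serializer.WriteObject(writer, root);
        }
    }
}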
What is the better approach to serializing a custom class: using XmlSerializer, or BinaryFormatter with the [Serializable] attribute on the class?
It's not possible to answer this without knowing how you will use the resulting file and what its lifetime is.
The decision is based on the fact that it is harder to "upgrade" the binary format. If your object model changes, it won't deserialise correctly. But if you've implemented a custom XML serialisation/deserialisation, then you can handle the "new" cases appropriately, and life will be good.
So decide more about how you will use it, who you are sharing information with, and what the possible changes to the model are.
FWIW, I sometimes use both types of serialisation in a given project.
That really depends on how you use the serialized class. If you want to pass it to other programs or want to be able to debug it easily, use XML (but mind that XmlSerializer might produce non-compliant XML output, like multiple root elements).
In all other cases you can use the binary formatter. But note that XML is more suitable if you change the class later - you can use XmlIgnore and the like to keep the XML format intact.
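For example (Order is just an illustrative class), attributes like these let the class evolve while the XML stays stable:

using System;
using System.Xml.Serialization;

public class Order
{
    public string Customer { get; set; }

    [XmlIgnore]             // a newer property that should not appear in (or break) the existing XML format
    public DateTime LastViewed { get; set; }

    [XmlElement("Total")]   // pins the element name even if the property is renamed later
    public decimal TotalAmount { get; set; }
}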
The decision will sometimes also be made for you based on what the serialized output will be used for. While you could expose a web service that takes a byte array containing a binary-serialized item, you couldn't easily consume that service from anything but .NET (and the end client would probably need a reference to the type).
Using XML means that the service can be consumed by any end client, regardless of its platform or environment.