I recently changed the Session State to use SQL server and then realized not all my entities were serializable.
Now trying to find out which entities are being put in session throughout the code seems to be a big pain, so I was thinking why not make all of them serializable?
Will this have any performance hit? Why are classes not marked as Serializable by default?
Adding [Serializable] by itself doesn't impact performance; it's simply that most classes don't need to be serializable, and the ability to persist internal state doesn't always make sense - it only really makes sense for "entity" or "DTO" objects.
Some things to watch: make sure events aren't serialized; mark them [field: NonSerialized] - otherwise you might serialize more data than you expect ;p
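As a minimal sketch (the type and event names here are illustrative), the attribute goes on the compiler-generated backing field of the event:

```csharp
using System;

[Serializable]
public class Order
{
    public int Id { get; set; }

    // Without [field: NonSerialized], BinaryFormatter would try to
    // serialize every subscriber attached to this event - potentially
    // dragging large object graphs (UI controls, services, etc.) along.
    [field: NonSerialized]
    public event EventHandler StatusChanged;

    public void RaiseStatusChanged() =>
        StatusChanged?.Invoke(this, EventArgs.Empty);
}
```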
Also; BinaryFormatter can be... rather voluminous. If you want smaller data, I would suggest looking at (for example) protobuf-net (100% free), which you can hook via ISerializable - it is generally significantly more efficient than BinaryFormatter. Let me know if you are interested in more details.
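As a sketch of that hook (based on protobuf-net v2's SerializationInfo overloads of Serializer.Serialize/Serializer.Merge; the type is hypothetical - adjust to the library version you're using):

```csharp
using System;
using System.Runtime.Serialization;
using ProtoBuf;

[Serializable, ProtoContract]
public class CustomerState : ISerializable
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }

    public CustomerState() { }

    // Deserialization path: let protobuf-net rehydrate the instance
    // from the compact payload stored in SerializationInfo.
    protected CustomerState(SerializationInfo info, StreamingContext context)
        => Serializer.Merge(info, this);

    // Serialization path: protobuf-net writes its compact payload into
    // the SerializationInfo that BinaryFormatter then persists.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
        => Serializer.Serialize(info, this);
}
```

With this pattern the session-state provider still talks to BinaryFormatter, but the actual bytes are the smaller protobuf-net encoding.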
Related
I have to apply [Serializable()] attribute for all classes, but I want to know is there any way to make classes Serializable globally instead of applying this attribute individually for all classes?
No, there isn't a way of applying this globally - you'd have to visit each type and add the attribute.
However: applying this globally is a really, really bad idea. Knowing exactly what you're serializing, when, and why is really important - whether this is for session-state, primary persistence, cache, or any other use-case. Statements like
I have to apply [Serializable()] attribute for all classes
tell me that you are not currently in control of what you are storing.
Additionally, since [Serializable] maps (usually) to BinaryFormatter, it is important to know that there are a lot of ways (when using BinaryFormatter) in which it is possible to accidentally drag unexpected parts of your model into the serialized data. The most notorious of these is "events", but: there are others.
When I see this type of question, what I envisage is that you're using types from your main data model as the thing that you are putting into session-state, but frankly: this is a mistake - and leads to questions like this. Instead, the far more manageable approach is to create a separate model that exists purely for this purpose:
it only has the data that you need to have available in session
it is marked [Serializable] if your provider needs that - or whatever other metadata is needed for the sole purpose for which it exists
it does not have any events
it doesn't involve any tooling like ORM contexts, database connections etc
ideally it is immutable (to avoid confusion over what happens if you make changes locally, which can otherwise sometimes behave differently for in-memory vs persisted storage)
just plain simple basic objects - very easy to reason about
can be iterated separately to your main domain objects, so you don't have any unexpected breaks because you changed something innocent-looking in your domain model and it broke the serializer
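A session model along those lines might look like this (the names are illustrative):

```csharp
using System;

// Exists purely to be stored in session-state: no events, no ORM
// context, no resource handles - just the data the pages need.
[Serializable]
public sealed class CustomerSessionModel
{
    public CustomerSessionModel(int id, string displayName)
    {
        Id = id;
        DisplayName = displayName;
    }

    // Immutable: changing anything means building a new instance, so
    // the in-memory copy can never silently drift from the stored copy.
    public int Id { get; }
    public string DisplayName { get; }
}
```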
I have serialized a C# class using protobuf-net. The resultant byte array is stored in a database. This is for performance reasons and I probably won't be able to change this design. C# can't prevent a class from being modified, and over time the class structure used for deserialization may no longer match the one used for serialization, causing retrieval to fail.
Other than the wrapper technique suggested here, is there a pattern or technique for handling this kind of problem?
The only other technique that comes to mind is to version the classes that need to be deserialized, so that nothing is lost when you need to make changes. When you serialize an instance of those classes, you also have to serialize the version of the class (it could be a field of the class itself).
I don't think this is the best solution, but it is a solution.
The versioning strategy can become very difficult to manage once the changes (and the versions) start to grow.
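A sketch of that idea, wrapping the stored bytes in a hypothetical envelope type so the version always travels with the payload:

```csharp
using ProtoBuf;

// Hypothetical envelope: the schema version is serialized alongside the
// payload, so the reader can pick the matching contract class before
// attempting to deserialize the inner bytes.
[ProtoContract]
public class VersionedEnvelope
{
    [ProtoMember(1)] public int SchemaVersion { get; set; }
    [ProtoMember(2)] public byte[] Payload { get; set; }
}

// Reading side (sketch): deserialize the envelope first, then
// dispatch on SchemaVersion to the v1/v2/... contract class.
```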
I'm using Azure Cache preview and need to make some classes Serializable.
Is there any disadvantage of making class to be Serializable - such as performance issue?
[Serializable]
public class MyClass {}
I found few related questions, but they are not about disadvantages.
Are all .NET exceptions serializable?
Drawbacks of marking a class as Serializable
Thanks in advance
No, there is no intrinsic overhead simply in being marked as [Serializable]. The only problem I'd have is that BinaryFormatter and NetDataContractSerializer are lousy serializers, and most other serializers aren't interested in this flag (ok, I may be biased)
Assuming you're talking about the cost of applying the Serializable attribute rather than the actual serialization process, one drawback is that a third party would normally assume the class is designed to be serializable. If you're just marking it serializable for the sake of it (and there's a chance a third party might interact with it), you'd probably want to spend time ensuring the class is actually suited to being serialized efficiently. So from that point of view it's more of a resource tradeoff than a technical one.
From a technical standpoint, as Marc Gravell said, just using the attribute won't really have any overhead.
I'm storing some objects in my viewstate and I was wondering if there are any disadvantages to making a class Serializable?
Is it bad practice to make all of the classes Serializable?
Firstly. Avoid viewstate.
Generally serialization (textual) is used for transferring objects.
You should avoid marking any class as serializable that is not a DTO (data transfer object) or message class, for several reasons. Whatever picks up your class in serialized form may not have the method information (which lives in the original assembly) of a non-DTO class. Also, a class may reference a resource (DB connection, file handle, etc.). Do NOT serialize these: deserialization does not re-establish resource connections and state unless explicitly designed to, and even then it is still a bad idea.
So, in summary: do NOT serialize when you have contextual methods and are storing data for a third party to use (a service response with methods is a bad idea), and do NOT serialize when the class contains a resource reference. Keep your serializable objects as free of methods as possible. This might involve a little refactoring towards a service-type pattern.
Do serialize DTO's and messages.
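To illustrate the distinction with hypothetical types - the first class is a safe candidate for serialization, the second is not:

```csharp
using System;
using System.Data.SqlClient;

// Plain data, no behaviour, no resources: safe to serialize.
[Serializable]
public class OrderDto
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

// Holds a live resource: serializing this would be meaningless, since
// deserialization cannot re-establish the open database connection.
public class OrderRepository
{
    private readonly SqlConnection _connection;

    public OrderRepository(SqlConnection connection)
        => _connection = connection;
}
```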
This is more of a design choice.
It is a good practice to make all classes that are actually Serializable as Serializable. I would just use common sense, and set it for those classes that are intended for crossing process boundaries (DTO classes).
So that means those classes where:
All their properties are simple types
And if they have complex properties, their types themselves are serializable
Marking it as [Serializable] (or ISerializable) is necessary for anything using BinaryFormatter, which may well include viewstate under the default configuration. As for good vs bad practice... well, most classes don't need to be serialized, and IMO even when they are, using BinaryFormatter is not always the best choice*. And specifically, marking it as both [Serializable] and [DataContract] will cause an exception IIRC.
*=actually, IMO BinaryFormatter is very rarely a good choice, but I might be biased... and I deliberately don't use viewstate ;p
What is the best solution for mapping a class object to a lightweight class object? For example:
Customer to CustomerDTO - both have the same property names. I was looking for the most optimized way of mapping between them. I know reflection slows me down badly, and writing methods for every mapping is time-consuming, so any ideas?
Thanks in advance.
AutoMapper. There's also ValueInjecter and Emit Mapper.
If reflection is slowing you down too much, try Fasterflect: http://www.codeproject.com/KB/library/fasterflect_.aspx
If you use the caching mechanism, it is not much slower than hand-written code.
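For example, a minimal AutoMapper setup might look like this (assuming AutoMapper's MapperConfiguration API; the Customer/CustomerDTO types are illustrative):

```csharp
using AutoMapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerDTO
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingExample
{
    // Properties with matching names are mapped by convention. In real
    // code, build the MapperConfiguration once at startup and reuse it -
    // the cached, compiled mapping is what makes repeated calls fast.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg =>
            cfg.CreateMap<Customer, CustomerDTO>()).CreateMapper();

    public static CustomerDTO ToDto(Customer customer) =>
        Mapper.Map<CustomerDTO>(customer);
}
```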
I've been playing with this and have the following observations. Should Customer inherit from CustomerDTO, or read from / write to a CustomerDTO? I've found that some DTO generators only generate dumb fixed-size array collections for vectors of data items within the DTO, while others let you specify a List<> or some other collection. The high-level collection does not need to appear in the serialized DTO, but it affects which approach you take: if your toolchain adds high-level collections, you can inherit; if it doesn't, you probably want to read from / write to an intermediate DTO.
I've used Protocol Buffers and XSDObjectGenerator for my DTO Generation (at different times!).
A new alternative is UltraMapper.
It is faster than anything I tried up to Feb-2017 (2x faster than AutoMapper in every scenario).
It is more reliable than AutoMapper (no StackOverflowExceptions, no depth limits, no self-reference limits).
UltraMapper is just 1,300 lines of code instead of AutoMapper's 4,500+, and it is easier to understand, maintain and contribute to.
It is actively developed but at this moment it needs community review.
Give it a try and leave feedback on the project page!