Is there any known issue or common mistake which would result in a list being deserialized with the correct number of items but for all of those items to be uninitialized? Note: The item type is marked with ProtoContract and has several ProtoMember(s).
My types....
[ProtoContract(SkipConstructor = true)]
[ProtoInclude(2, typeof(LoadBoardAction))]
public abstract class GameBoardAction : IGameBoardAction
{
}
[ProtoContract(SkipConstructor = true)]
public class LoadBoardAction : GameBoardAction
{
[ProtoMember(10)]
private List<GameTileState> _tiles;
}
[ProtoContract]
public struct GameTileState
{
[ProtoMember(1)]
public Point Coordinate;
[ProtoMember(2)]
public TileType Type;
}
Serialization...
// Note: action is a LoadBoardAction with 15 elements in _tiles
var stream = new MemoryStream();
Serializer.Serialize(stream, action);
var buffer = stream.ToArray();
stream.Close();
// buffer gets sent across network here...
Deserialization...
// stream is a MemoryStream initialized to the byte[] which has
// been read from a network packet.
var action = Serializer.Deserialize<GameBoardAction>(stream);
The deserialized action will have 15 elements in _tiles, but GameTileState.Coordinate is always (0,0) and GameTileState.Type is always TileType.Null.
Edit:
The enum TileType is actually serialized/deserialized correctly, the issue is Point, which is an XNA struct type and hence its public fields X/Y are not marked for serialization in any way. This is definitely the issue. I just assumed that it knew to read/write the public fields of a struct when that struct was marked as a ProtoMember of a known type.
While I wish this worked automatically, it looks like the workaround is to add something like this (for all XNA math types)...
RuntimeTypeModel.Default.Add(typeof(Point), false).Add("X", "Y");
Edit:
Verified that adding the above mentioned line fixes the issue.
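For completeness, the same registration can be extended to the other attribute-free XNA math types in one place at startup. This is only a sketch: the member names are taken from the XNA API, and protobuf-net assigns field numbers in the order the members are added.

```csharp
// Hypothetical one-time setup; Point/Vector2/Vector3 are XNA structs with
// public X/Y/Z fields and no protobuf-net attributes of their own.
using ProtoBuf.Meta;
using Microsoft.Xna.Framework;

public static class ProtoSetup
{
    public static void RegisterXnaTypes()
    {
        var model = RuntimeTypeModel.Default;
        // false = don't apply attribute-driven defaults; list members explicitly.
        model.Add(typeof(Point), false).Add("X", "Y");
        model.Add(typeof(Vector2), false).Add("X", "Y");
        model.Add(typeof(Vector3), false).Add("X", "Y", "Z");
    }
}
```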
Related
I have a library of fairly heavy-weight DTOs that is currently being used by some WCF services. We are attempting to bring it into the protobuf-net world with as little modification as possible. One particular set of items is giving me trouble in serialization. I'm going to simplify them here because it gets a little complicated, but the gist of the problem is:
public class Key
{
public string Id {get; set;}
}
public class KeyCollection : IEnumerable<Key>
{
private readonly List<Key> list;
#region IEnumerable
// etc...
#endregion
}
public class Item
{
public long Id { get; set; }
}
public abstract class ContainerBase
{ }
public abstract class ContainerBase<T> : ContainerBase
where T : Item
{ }
public abstract class ContainerType1Base : ContainerBase<Item>
{
public KeyCollection Keys { get; set; }
}
public class ContainerType1 : ContainerType1Base
{ }
I've left out the decorators because I don't think they're the problem, mostly because if I add void Add(Key item) { } to KeyCollection the whole thing seems to work. Otherwise, I run into problems attempting to serialize an instance of ContainerType1.
Actually, changing the signature of KeyCollection is kind of prohibitive, so I'm attempting to follow this answer to try to do it programmatically. Specifically, setting itemType and defaultType to null on the "Keys" ValueMember of ContainerType1, ContainerType1Base and ContainerBase<Item>. I also set IgnoreListHandling to true on KeyCollection... which totally doesn't work. I get a generic "failed to deserialize" exception on the client, which I can post here if it would help. On the server side, I serialize it out using Serializer.Serialize(), and I spit out Serializer.GetProto<>() as well as JSON of the object, and they all seem to work okay.
How can I turn off the list handling? Related to that, is there a way to turn on extra debugging while serializing to try to get some more information of the problem?
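For reference, the programmatic configuration I'm attempting looks roughly like this. It's a sketch only, assuming protobuf-net v2's RuntimeTypeModel/MetaType API; the field number is a placeholder, not my real contract number.

```csharp
// Hypothetical model setup mirroring the steps described above.
using ProtoBuf.Meta;

static class ModelBuilder
{
    public static RuntimeTypeModel BuildModel()
    {
        var model = RuntimeTypeModel.Create();

        // Stop protobuf-net from treating KeyCollection as a list at all.
        model.Add(typeof(KeyCollection), true).IgnoreListHandling = true;

        // Re-declare the Keys member with itemType and defaultType cleared
        // (null), so it is handled as a plain sub-message, not a collection.
        var meta = model.Add(typeof(ContainerType1Base), false);
        meta.AddField(1 /* placeholder field number */, "Keys", null, null);

        return model;
    }
}
```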
Fundamentally, the code shown looks fine. Unfortunately, there's currently a "feature" in gRPC that means that it discards the original exception when a marshaller (serializer) fails for some reason, so gRPC does not currently expose the actual problem. I have submitted a fix for this - it may or may not be accepted.
In the interim, I suggest that you simply remove gRPC from the equation, and simulate just the marshaller workload; to do this, on the server: generate the data you are trying to send, and do:
var ms = new MemoryStream();
Serializer.Serialize(ms, yourDataHere);
var payload = Convert.ToBase64String(ms.ToArray());
and obtain the value of payload (which is just a string). Now at the client, reverse this:
var ms = new MemoryStream(Convert.FromBase64String(thatStringValue));
Serializer.Deserialize<YourTypeHere>(ms);
My expectation here is that this should throw an exception that will tell you what the actual problem is.
If the gRPC change gets merged, then the fault should be available via:
catch (RpcException fault)
{
var originalFault = fault.Status.DebugException;
// ^^^
}
I'm thinking through a new class I want to implement, and I have an idea that I'm not sure is good or bad. I want to create a class that holds device settings (e.g. inch vs. metric) as well as the codes that correspond to those settings. I think it would be nice to have code that looks like this:
Device myDevice = new Device();
myDevice.units = Device.Inches;
myDevice.MoveTo(1,2,3, Device.Rapid);
and the Device class file would be:
class Device
{
public static DeviceUnit Inches = DeviceUnit("G21");
public static DeviceUnit Metric = DeviceUnit("G20");
public static DeviceMovement Rapid = DeviceMovement("G00");
public static DeviceMovement Feed = DeviceMovement("G01");
public DeviceUnit units;
public Device()
{
// Default to metric system
units = Device.Metric;
}
public Device(DeviceUnit customUnit)
{
units = customUnit;
}
public void MoveTo(float x, float y, float z, DeviceMovement movement)
{
string command = $"{units.gcode} {movement.gcode} ";
command += $"X{x} Y{y} Z{z}\r\n";
Debug.Write(command);
}
}
DeviceUnit struct:
public struct DeviceUnit
{
public string gcode;
public DeviceUnit(string code)
{
gcode = code;
}
}
DeviceMovement struct:
public struct DeviceMovement
{
public string gcode;
public DeviceMovement(string code)
{
gcode = code;
}
}
My worry is that I might end up going 'overkill' on the number of structs I use. Already I'm thinking I should make another to store Absolute (G90) vs. Incremental (G91) positioning. I'd like to make this flexible so that in the future I can load the gcode strings from an XML configuration file, letting me quickly create new XML files for new machine configurations.
Is using multiple structs too overkill for this task?
Should I combine the structs together somehow?
A struct makes sense when it has multiple properties that together represent a complex object.
Your structs DeviceUnit and DeviceMovement each hold only a single string property, so why use a struct at all?
You could make DeviceUnit and DeviceMovement plain string properties... but wait :)
Q: Is using multiple structs too overkill for this task?
A: No. A struct is not overkill when it describes an object (which may be a complex device property) with many properties.
example:
public struct Dimension
{
//avoid using constructor. You can initialize with object initializer
public int x;
public int y;
public int z;
}
For example, Windows stores all device information in WMI classes; the Win32_Printer WMI class has more than 40 properties, and most of them are complex objects.
Q: Should I combine the structs together somehow?
A: Simply define a class named Device that has properties and methods.
If one of the properties is a complex object, make it a struct or a class.
You are building an object model for the device, so choose the types of the properties carefully.
But in your code you really don't need the structs at all; use simple properties like:
public static string Inches { get; set; } = "G21"; // in C# 6 you can initialize auto-properties
My question: Why static properties?
My question: Why initialize the properties with default values?
A: You can create an XML file for every device and load it during object instantiation, which gives you more capability:
Use one class (or more specialized classes) to represent your device
You can add the following method to your device class:
public void LoadDevice(string xmlFilename)
{
// read the xml file, e.g. with LINQ to XML
// set the properties
}
From here, the sky is the limit :)
BTW, you should use the new keyword if the struct has a constructor, so it should be:
public static DeviceUnit Inches = new DeviceUnit("G21"); //:)
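For example, a minimal LoadDevice sketch using LINQ to XML; the XML layout and property names here are invented purely for illustration:

```csharp
// Assumed file layout: <device><units>G21</units><rapid>G00</rapid></device>
using System.Xml.Linq;

public class Device
{
    public string UnitsCode { get; private set; }
    public string RapidCode { get; private set; }

    // Load the gcode strings for one machine configuration from a file.
    public void LoadDevice(string xmlFilename)
    {
        var root = XDocument.Load(xmlFilename).Root;
        UnitsCode = (string)root.Element("units");
        RapidCode = (string)root.Element("rapid");
    }
}
```

With one XML file per machine configuration, supporting a new machine is just a matter of writing a new file.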
I hope this is not a repeated question.
I have a class like this:
[Serializable]
class MyClass
{
int type;
List <FileStream> listfile;
string content_text;
public MyClass(int t)
{
type = t;
}
public MyClass()
{
type = 0;
}
}
I need to send an object of MyClass over a Socket with the method Socket.Send(byte[]), so I have to serialize the object. But even if I add [Serializable], FileStream isn't serializable, and I get a runtime exception.
Can someone help me?
Thank you very much.
When you add Serializable to a class it doesn't change anything by itself; it simply tells the CLR that your class can be serialized.
Because of this, any types that are part of your object and are not serializable by default are not changed in any way, and as such will cause the exception you see when you attempt to serialize.
The following link will show you the MSDN documentation for the Serializable attribute :
http://msdn.microsoft.com/en-us/library/system.serializableattribute(v=vs.110).aspx
You can, however, mark individual fields in your class as not serializable by using the NonSerializedAttribute, as follows:
[Serializable]
class MyClass
{
int type;
[NonSerialized]
List<FileStream> listfile;
string content_text;
public MyClass(int t)
{
type = t;
}
public MyClass()
{
type = 0;
}
}
This will prevent the exception, BUT it does so by removing that field from the serialized output entirely, which means that if you want to pass the FileStream objects themselves over your socket, you're out of luck: you're just not going to be able to.
That said, you can read the contents of the file stream(s) into byte arrays, and you can then easily send those, since byte[] serializes without any issues.
I would, however, recommend that if you're going to start sending byte arrays of arbitrary length over a socket, you look at a binary serialization protocol (perhaps Google Protocol Buffers) rather than the default serialized objects you get out of the box.
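To illustrate the byte-array approach, here is a sketch (the member names are mine, not from your code) that copies each stream's contents into a byte[] before serialization:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

[Serializable]
class MyClass
{
    public int Type;
    public string ContentText;
    // byte[] is serializable, unlike FileStream.
    public List<byte[]> FileContents = new List<byte[]>();

    // Copy the stream's remaining contents into a byte array and store it.
    public void AddFile(Stream file)
    {
        using (var ms = new MemoryStream())
        {
            file.CopyTo(ms);
            FileContents.Add(ms.ToArray());
        }
    }
}
```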
I am switching some of my DataContractSerializer usage over to protocol-buffers serialization (specifically using protobuf-net) with the goal of faster serialization and smaller serialized data size for storing in a database blob.
I found that changing my object model has a big impact on the message size. I take this to mean that my serialized data is being artificially inflated due to my choice of object model, and I'd like to fix that.
Specifically my question is: could I change my protobuf-net usage, or possibly serialization library, to get a smaller message size? I'll give an object model and what I have been able to figure out so far below.
In my case I'm serializing OCR data... here is a simplified object model:
[ProtoContract(SkipConstructor = true, UseProtoMembersOnly = true)]
public class OcrTable
{
[ProtoMember(1)]
public List<OcrTableCell> Cells;
}
[ProtoContract(SkipConstructor = true, UseProtoMembersOnly = true)]
public class OcrTableCell
{
[ProtoMember(1)]
public int Row;
[ProtoMember(2)]
public int Column;
[ProtoMember(3)]
public int RowSpan;
//...
[ProtoMember(10)]
public int Height;
[ProtoMember(11)]
public List<OcrCharacter> Characters;
}
[ProtoContract(SkipConstructor = true, UseProtoMembersOnly = true)]
public class OcrCharacter
{
[ProtoMember(1)]
public int Code;
[ProtoMember(2)]
public int Data;
[ProtoMember(3)]
public int Confidence;
//...
[ProtoMember(11)]
public int Width;
}
Since the data is ultimately just a bunch of associated primitives (mostly ints), I assume the benefits of packed encoding would be helpful, but in the current class structure all the actual lists are of custom types.
To allow for packed bits serialization, I tinkered with dropping the custom types altogether, and having multiple lists of primitives, correlated by their sequence. For example:
[ProtoContract(SkipConstructor = true, UseProtoMembersOnly = true)]
public class OcrTableCell
{
[ProtoMember(1)]
public int Row;
//...
[ProtoMember(10)]
public int Height;
[ProtoMember(11, IsPacked=true)]
public List<int> CharacterCode;
[ProtoMember(12, IsPacked=true)]
public List<int> CharacterData;
//...
[ProtoMember(21, IsPacked=true)]
public List<int> CharacterWidth;
}
Here you can see I replaced List<OcrCharacter> with multiple lists: one for each field in OcrCharacter. This has a fairly large impact on serialized data size, in some cases reducing it by two-thirds (even after gzipping).
I don't think it's practical to make changes like these to my object model just to support serialization, and keeping a second "helper" model to prepare for serialization seems undesirable.
Still it bugs me that I have an artificially inflated serialized data size just because of the object model for the data.
Is there a better choice of serialization parameters or library to serialize this type of object graph? I did try setting DataFormat = DataFormat.Group on the ProtoMember attributes applied to lists, but saw no change in the message size, which confused me.
There is nothing inside protobuf-net that is going to magically rearrange your object model to exploit specific features; that requires detailed knowledge of the data, which is obvious to a human but pretty hard to generalize. Without investing significant time, the answer here is simply: it is going to serialize the model as it is laid out - and if that isn't the perfect scenario: so be it.
As for the Group data-format not helping: grouped sub-messages only applies to things like List<OcrCharacter>; since the field-number is 11, it guarantees to need 2 bytes overhead: 1 byte for the start-group marker, one byte for the end-group marker. The alternative is length-prefixed, which will need 1 byte for the field-header, and a variable number of bytes for the length of the sub-message, encoded as a varint. If each sub-message is less than 128 bytes, this will still only require one byte to encode the length (so 2 bytes overall) - which is probably why it isn't making any difference: each individual OcrCharacter is small enough (less than 128 bytes) that Group can't help.
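If you want to check this kind of overhead empirically, a small harness along these lines (a sketch, assuming protobuf-net; the types are invented) lets you compare the packed and unpacked encodings of the same data directly:

```csharp
using System.Collections.Generic;
using System.IO;
using ProtoBuf;

[ProtoContract]
class Unpacked
{
    [ProtoMember(1)]
    public List<int> Values = new List<int>();
}

[ProtoContract]
class Packed
{
    [ProtoMember(1, IsPacked = true)]
    public List<int> Values = new List<int>();
}

static class SizeProbe
{
    // Serialize and report the encoded length in bytes.
    public static long Measure<T>(T obj)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, obj);
            return ms.Length;
        }
    }
}
```

Packed encoding writes one field header plus a length prefix for the entire list instead of one header per element, so for lists of small varints the packed measurement should never come out larger.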
Given a Queue<MyMessage>, where MyMessage is the base class for several message types: the message types all have different fields, so they use different numbers of bytes. It would therefore make sense to measure the fill level of this queue in bytes rather than in elements.
In fact, since this queue is associated with a connection, I could better control the message flow, reducing the traffic if the queue is nearly full.
In order to get this target, I thought to wrap a simple Queue with a custom class MyQueue.
public class MyQueue
{
private Queue<MyMessage> _outputQueue;
private Int32 _byteCapacity;
private Int32 _currentSize; // number of used bytes
public MyQueue(int byteCapacity)
{
this._outputQueue = new Queue<MyMessage>();
this._byteCapacity = byteCapacity;
this._currentSize = 0;
}
public void Enqueue(MyMessage msg)
{
this._outputQueue.Enqueue(msg);
this._currentSize += Marshal.SizeOf(msg.GetType());
}
public MyMessage Dequeue()
{
MyMessage result = this._outputQueue.Dequeue();
this._currentSize -= Marshal.SizeOf(result.GetType());
return result;
}
}
The problem is that this does not work for classes: Marshal.SizeOf throws an ArgumentException for them.
Is it possible to calculate in some way the size of an object (instance of a class)?
Are there some alternatives to monitor the fill level of a queue in terms of bytes?
Are there any queues that can be managed in this way?
UPDATE: As an alternative solution I could add an int SizeBytes() method to each message type. This seems a little ugly, although it would perhaps be the most efficient option, since you cannot easily measure a reference type.
public interface MyMessage
{
Guid Identifier
{
get;
set;
}
int SizeBytes();
}
The classes that implement this interface must, in addition to implementing the SizeBytes() method, also implement an Identifier property.
public class ExampleMessage : MyMessage
{
public Guid Identifier { get; set; } // so I have a field and its Identifier property
public String Request { get; set; }
public int SizeBytes()
{
return Marshal.SizeOf(Identifier); // returns 16
}
}
The sizeof operator cannot be used with Guid because it does not have a predefined size, so I use Marshal.SizeOf(). But at this point perhaps I should just use experimentally determined values: for example, since Marshal.SizeOf() returns 16 for a Guid, and a string consists of N chars, the SizeBytes() method could be the following:
public int SizeBytes()
{
return (16 + Request.Length * sizeof(char));
}
If you can edit the MyMessage base class to add a virtual SizeOf() method, then the message classes can use the C# sizeof operator on their primitive types. If you can do that, the rest of your code is gold.
You can get an indication of the size of your objects by measuring the length of their binary serialization. Note that this figure will typically be higher than you expect, since .NET may also include metadata in the serialized representation. This approach would also require all your classes to be marked with the [Serializable] attribute.
public static long GetSerializedSize(object root)
{
using (var memoryStream = new MemoryStream())
{
var binaryFormatter = new BinaryFormatter();
binaryFormatter.Serialize(memoryStream, root);
return memoryStream.Length;
}
}