C# protobuf-net serialized object to java - c#

So I have a small problem:
A message is sent using MQTT; it consists of a series of objects serialized with protobuf-net in C# (I can't modify this code, but I have access to the source). At the other end I receive the serialized objects in Java, but I can't seem to deserialize them using protobuf. If anyone has had this problem and solved it, please help :)
Example of object from C#:
using ProtoBuf;

namespace Concentrator.Services
{
    [ProtoContract]
    public class MeterID
    {
        private byte[] _id;

        [ProtoMember(1)]
        public byte[] ID
        {
            get { return _id; }
            set { _id = value.Length == 16 ? value : null; }
        }

        [ProtoMember(2)] public string MeterType;
    }
}
My attempt at recreating the same object in Java (the .proto file):
syntax = "proto2";
package mqtt.entity;
option java_package = "mqtt.entity";
option java_outer_classname = "ProtoMeter";

message Meter {
    optional bytes ID = 1;
    optional string MeterType = 2;
}

message MeterID {
    repeated Meter mid = 1;
}
A solution to this example would be a huge help. Thanks a lot.
The code where the object is deserialized in C#:
var ms = new MemoryStream(data, 7, data.Length - 9);
var res = Serializer.Deserialize<List<MeterID>>(ms);
This works in C#; I'm trying to achieve the same thing in Java.

The message in your C# code matches just:
message MeterID {
    optional bytes ID = 1;
    optional string MeterType = 2;
}
There is no need for a 2-level model (unless you're using *WithLengthPrefix in the C# code). You can also get that output by using:
var proto = Serializer.GetProto<MeterID>();
With your edit, a List<MeterID> could be mapped as
message List_MeterID {
    repeated MeterID items = 1;
}
to be used in combination with the previous MeterID fragment, which is what you have in the question. So it comes down to: what currently happens?
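To see what that wire format looks like from the Java side, here is a minimal JDK-only sketch (no generated classes, no protobuf runtime) that decodes the repeated length-prefixed MeterID records described above. The field numbers follow the C# [ProtoMember] attributes; the sample bytes in main are hand-built for illustration, not taken from a real MQTT payload:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class MeterIdDecoder {
    /** Reads a base-128 varint from the stream. */
    static int readVarint(InputStream in) throws IOException {
        int result = 0, shift = 0, b;
        while ((b = in.read()) >= 0) {
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
            shift += 7;
        }
        throw new IOException("truncated varint");
    }

    /** Decodes repeated field 1 (length-delimited MeterID messages),
        collecting each record's MeterType (field 2). */
    static List<String> decodeMeterTypes(byte[] payload) throws IOException {
        List<String> types = new ArrayList<>();
        InputStream in = new ByteArrayInputStream(payload);
        int tag;
        while ((tag = in.read()) >= 0) {
            // 0x0A = field 1, wire type 2 (length-delimited)
            if (tag != 0x0A) throw new IOException("unexpected tag " + tag);
            byte[] msg = in.readNBytes(readVarint(in));
            InputStream inner = new ByteArrayInputStream(msg);
            int t;
            while ((t = inner.read()) >= 0) {
                // both MeterID fields (bytes, string) are length-delimited
                byte[] field = inner.readNBytes(readVarint(inner));
                if (t == 0x12) { // field 2, wire type 2: MeterType string
                    types.add(new String(field, StandardCharsets.UTF_8));
                } // t == 0x0A would be field 1: the 16-byte ID
            }
        }
        return types;
    }

    public static void main(String[] args) throws IOException {
        // Hand-built sample: one MeterID with MeterType = "gas" and no ID field.
        byte[] sample = {0x0A, 0x05, 0x12, 0x03, 'g', 'a', 's'};
        System.out.println(decodeMeterTypes(sample)); // [gas]
    }
}
```

In practice you would generate classes from the .proto and call List_MeterID.parseFrom(...) instead; the sketch just shows that the bytes on the wire match the two-message layout from the answer.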

Try regenerating the proto file via GetProto<T>.

Related

unity - Error In Read From JSON

This is part of a code that someone wrote for me in answer to a question:
JSONObject jsonRead = new JSONObject(jsonString);
packsInfo = new Pack[jsonRead.Count];
for (int i = 0; i < jsonRead.Count; i++)
{
    Pack pack = new Pack();
    pack.number = (int)jsonRead[i]["number"].i;
    pack.angle = (int)jsonRead[i]["angle"].i;
    pack.zPosition = jsonRead[i]["z_position"].f;
    pack.beatCaseDistance = jsonRead[i]["beat_case_distance"].f;
    pack.gunDistance = jsonRead[i]["gun_distance"].f;
    packsInfo[i] = pack;
}
I used it in my code. Every thing worked right but these lines:
pack.number = (int)jsonRead[i]["number"].i;
pack.angle = (int)jsonRead[i]["angle"].i;
I realized that .f is a getter for reading a float from the JSON, and I found nothing for reading an int. And if I delete .i, it gives me an error saying it can't convert JSON to int implicitly.
.i returns a long, so you cast from long to int:
int i = (int)json["number"].i;
Why are there no fields or methods that return an int? I don't know! But you can create one in JSONObject.cs. It would look something like this:
public int Int {
    get { return (int)i; }
}

Deserialize data into protobuf class

I am trying to deserialize data from MongoDB into MyClass, which was generated by protoc (version 3).
I am using MongoDB.Driver (version 4):
BsonSerializer.Deserialize(result, MyClass.GetType());
which uses XmlSerializer to deserialize the data.
The problem lies in how protobuf represents its collections.
A List is created as Google.Protobuf.Collections.RepeatedField
with no setter. XmlSerializer cannot deserialize fields with no setters.
How can I solve this problem in a way that it doesn't become an ugly hack?
Options I have come up with:
1. Create my own serializer that MongoDB can use to handle properties with no setter.
2. Create my own class generator that adds a public setter.
3. Create a new class as a mapper between MongoDB's serialized data and the protobuf class.
Option 1 seems daunting; option 2 makes me create a new fork, which is a pain to maintain. Option 3 is the easiest by far, but also messy; I'd prefer not to create another layer of complexity if there are better ways to deal with this.
So my question is:
Are there any other ways to solve this problem? Is there anything already built in that I am missing, or maybe something trivial I've overlooked?
Edit:
This is a snippet of what is getting generated by protoc version 3:
/// <summary>Field number for the "Recipients" field.</summary>
public const int RecipientsFieldNumber = 5;
private static readonly pb::FieldCodec<string> _repeated_recipients_codec =
    pb::FieldCodec.ForString(42);
private readonly pbc::RepeatedField<string> recipients_ = new pbc::RepeatedField<string>();
public pbc::RepeatedField<string> Recipients {
    get { return recipients_; }
}
It comes from this proto-file:
syntax = "proto3";
package DataModels.Data;

message MailTemplateMessage {
    string UUID = 1;
    string SubjectLine = 2;
    string Body = 3;
    string Sender = 4;
    repeated string Recipients = 5;
}

Protobuf-net lazy streaming deserialization of fields

Overall aim: to skip a very long field when deserializing and, when the field is accessed, to read elements from it directly from the stream without loading the whole field.
Example classes: the object being serialized/deserialized is FatPropertyClass.
[ProtoContract]
public class FatPropertyClass
{
    [ProtoMember(1)]
    private int smallProperty;

    [ProtoMember(2)]
    private FatArray2<int> fatProperty;

    [ProtoMember(3)]
    private int[] array;

    public FatPropertyClass()
    {
    }

    public FatPropertyClass(int sp, int[] fp)
    {
        smallProperty = sp;
        fatProperty = new FatArray2<int>(fp);
    }

    public int SmallProperty
    {
        get { return smallProperty; }
        set { smallProperty = value; }
    }

    public FatArray2<int> FatProperty
    {
        get { return fatProperty; }
        set { fatProperty = value; }
    }

    public int[] Array
    {
        get { return array; }
        set { array = value; }
    }
}
[ProtoContract]
public class FatArray2<T>
{
    [ProtoMember(1, DataFormat = DataFormat.FixedSize)]
    private T[] array;

    private Stream sourceStream;
    private long position;

    public FatArray2()
    {
    }

    public FatArray2(T[] array)
    {
        this.array = new T[array.Length];
        Array.Copy(array, this.array, array.Length);
    }

    [ProtoBeforeDeserialization]
    private void BeforeDeserialize(SerializationContext context)
    {
        position = ((Stream)context.Context).Position;
    }

    public T this[int index]
    {
        get
        {
            // logic to get the relevant index from the stream
            return default(T);
        }
        set
        {
            // only relevant when the full array is available, for example
        }
    }
}
I can deserialize like so:
FatPropertyClass d = model.Deserialize(fileStream, null, typeof(FatPropertyClass), new SerializationContext { Context = fileStream }) as FatPropertyClass;
where the model can be, for example:
RuntimeTypeModel model = RuntimeTypeModel.Create();
MetaType mt = model.Add(typeof(FatPropertyClass), false);
mt.AddField(1, "smallProperty");
mt.AddField(2, "fatProperty");
mt.AddField(3, "array");
MetaType mtFat = model.Add(typeof(FatArray2<int>), false);
This will skip the deserialization of array in FatArray2<T>. However, I then need to read random elements from that array at a later time. One thing I tried is to remember the stream position before deserialization in the BeforeDeserialize(SerializationContext context) method of FatArray2<T>, as in the above code: position = ((Stream)context.Context).Position;. However, this always seems to be the end of the stream.
How can I remember the stream position where the fat array begins, and how can I read from it at a random index?
Note: the parameter T in FatArray2<T> can be of other types marked with [ProtoContract], not just primitives. Also, there could be multiple properties of type FatArray2<T> at various depths in the object graph.
Method 2: serialize the FatArray2<T> fields after the serialization of the containing object. So, serialize FatPropertyClass with a length prefix, then serialize with a length prefix all the fat arrays it contains. Mark all of these fat array properties with an attribute, and at deserialization we can remember the stream position for each of them.
Then the question is how we read primitives out of it. This works OK for classes, using T item = Serializer.DeserializeItems<T>(sourceStream, PrefixStyle.Base128, Serializer.ListItemTag).Skip(index).Take(1).ToArray(); to get the item at index index. But how does this work for primitives? An array of primitives does not seem to be deserializable using DeserializeItems.
Is DeserializeItems with LINQ used like that even OK? Does it do what I assume it does (internally skip through the stream to the correct element, at worst reading each length prefix and skipping it)?
Regards,
Iulian
This question depends an awful lot on the actual model: it isn't a scenario that the library specifically targets to make convenient. I suspect that your best bet here would be to write the reader manually using ProtoReader. Note that there are some tricks when it comes to reading selected items if the outermost object is a List<SomeType> or similar, but internal objects are typically either simply read or skipped.
By starting again from the root of the document via ProtoReader, you could seek fairly efficiently to the nth item. I can do a concrete example later if you like (I haven't leapt in yet, in case it turns out not to be useful). For reference, the reason the stream's position isn't useful here is that the library aggressively over-reads and buffers data, unless you specifically tell it to limit its length. This is because data like "varint" is hard to read efficiently without lots of buffering, as it would end up being a lot of individual calls to ReadByte() rather than just working with a local buffer.
This is a completely untested version of reading the n-th array item of the sub-property directly from a reader; note that it would be inefficient to call this lots of times one after the other, but it should be obvious how to change it to read a range of consecutive values, etc:
static int? ReadNthArrayItem(Stream source, int index, int maxLen)
{
    using (var reader = new ProtoReader(source, null, null, maxLen))
    {
        int field, count = 0;
        while ((field = reader.ReadFieldHeader()) > 0)
        {
            switch (field)
            {
                case 2: // fat property; a sub-object
                    var tok = ProtoReader.StartSubItem(reader);
                    while ((field = reader.ReadFieldHeader()) > 0)
                    {
                        switch (field)
                        {
                            case 1: // the array field
                                if (count++ == index)
                                    return reader.ReadInt32();
                                reader.SkipField();
                                break;
                            default:
                                reader.SkipField();
                                break;
                        }
                    }
                    ProtoReader.EndSubItem(tok, reader);
                    break;
                default:
                    reader.SkipField();
                    break;
            }
        }
    }
    return null;
}
Finally, note that if this is a large array, you might want to use "packed" arrays (see the protobuf documentation, but this basically stores them without the header per-item). This would be a lot more efficient, but note that it requires slightly different reading code. You enable packed arrays by adding IsPacked = true onto the [ProtoMember(...)] for that array.
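To illustrate what "packed" changes on the wire, here is a minimal JDK-only sketch (deliberately not using the protobuf runtime) that decodes a packed varint field body: one length prefix, then the raw values back to back, with no per-item field header. The sample bytes are hand-built for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class PackedFieldDemo {
    /** Reads a base-128 varint from the stream. */
    static int readVarint(InputStream in) throws IOException {
        int result = 0, shift = 0, b;
        while ((b = in.read()) >= 0) {
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
            shift += 7;
        }
        throw new IOException("truncated varint");
    }

    /** Decodes a packed varint field body: one byte-length prefix,
        then consecutive varints with no per-item headers. */
    static List<Integer> decodePacked(byte[] body) throws IOException {
        InputStream in = new ByteArrayInputStream(body);
        byte[] payload = in.readNBytes(readVarint(in));
        InputStream pin = new ByteArrayInputStream(payload);
        List<Integer> values = new ArrayList<>();
        while (pin.available() > 0) values.add(readVarint(pin));
        return values;
    }

    public static void main(String[] args) throws IOException {
        // Packed [1, 2, 300]: length 4, then varints 0x01, 0x02, 0xAC 0x02.
        byte[] body = {0x04, 0x01, 0x02, (byte) 0xAC, 0x02};
        System.out.println(decodePacked(body)); // [1, 2, 300]
    }
}
```

With the non-packed encoding, each of those three values would carry its own field header byte, which is exactly the per-item overhead the answer above says packed arrays avoid.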

File Parsing where one column of an array needs to be converted to a different type

Still learning and looking for a little guidance - I've got a CSV file that I'm reading into my program using a method with two foreach loops like so:
int i = 0;
int j = 0;
string File = File.ReadAllText("c:\\employees.txt");
string[,] filesArray = new string[File.ReadLines("c:\\employees.txt").Count(), 4];

foreach (string row in rawFile.Split('\n'))
{
    foreach (string col in row.Trim().Split(','))
    {
        filesArray[i, j] = col;
        j++;
    }
    j = 0;
    i++;
}
return filesArray;
All well and good and I can dumbly display the text fine but the format of the CSV file is
Z0003, EmployeeNameHere, 00001
and I want to do some math and other calculations based on the values in filesArray[2, 0] and so on but I'm trying to find what would be best practice for this situation.
I can think of ways that don't seem elegant and honestly it's been a bit of a mess trying to find the exact answer to this question through Google. I don't want to pick up bad habits at such an early stage!
Your problem right now is that you have the data (even though getting it is ugly), but it is all strings. No matter what you do, you will have to convert to a decimal or other numeric format to do math on it.
I would recommend using a library like FileHelpers to read your CSV data into an "Employee" class first. This will give you strongly typed objects. See the 'Quick Start Delimited' entry on the left side. Your class would look something like this:
[DelimitedRecord(",")]
public class Employee
{
    // fields in same order as in the file
    public string EmployeeId { get; set; }
    public string EmployeeName { get; set; }
    public int MyNumber { get; set; }
}
Suggestions for your current code:
What is rawFile? You get the lines using ReadAllLines().
Follow the .NET naming guidelines: var file = ..., not var File = ....
Don't use the same name as common .NET classes (e.g. File); call it fileLines, etc.
Don't read the file twice to get the number of lines; use new string[fileLines.Length, 4].
You can use LINQ and calls to Split more easily if you don't use a [,] multi-dimensional array.
To convert between string and int, you will need int.Parse or int.TryParse.
Add error checking to make sure your lines are the correct length, etc.
Some sample code:
var data = fileLines.Select(line => line.Split(','))
                    .Where(arr => arr.Length == 4) // should have 4 fields
                    .ToArray();

var xxxx = data[0][1]; // should be "EmployeeNameHere"

// this assumes your data is ALL valid
var numbers = data.Select(x => int.Parse(x[2])).ToList();
var sum = numbers.Sum();

// there is no "pretty" way to do TryParse
var numbers = new List<int>();
foreach (var arr in data)
{
    int temp = 0;
    if (int.TryParse(arr[2], out temp))
    {
        numbers.Add(temp);
    }
}
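The same strongly-typed idea can be sketched in Java for comparison: parse each line into a small record up front, converting the numeric column once, then do the math on typed values. The Employee record and field names below are illustrative, mirroring the three-column sample line from the question:

```java
import java.util.Arrays;
import java.util.List;

public class EmployeeCsv {
    // Illustrative typed record mirroring the file layout: id, name, number.
    record Employee(String id, String name, int number) {}

    /** Parses CSV lines into typed records, skipping malformed rows. */
    static List<Employee> parse(List<String> lines) {
        return lines.stream()
                .map(line -> line.split(","))
                .filter(parts -> parts.length == 3) // expect 3 fields per row
                .map(parts -> new Employee(parts[0].trim(), parts[1].trim(),
                        Integer.parseInt(parts[2].trim())))
                .toList();
    }

    public static void main(String[] args) {
        List<Employee> staff = parse(Arrays.asList("Z0003, EmployeeNameHere, 00001"));
        // Math on the typed column is now trivial.
        int sum = staff.stream().mapToInt(Employee::number).sum();
        System.out.println(sum); // 1
    }
}
```

As with the C# version, real code would use TryParse-style handling (a try/catch around parseInt here) rather than assuming every row is valid.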

Protobuf-net r282 having problems deserializing object serialized with r249

I've just updated from r249 to r282. Other than replacing the DLL, I've made no changes. Unfortunately, deserializing the objects created before the update now takes significantly longer: what used to take two seconds now takes five minutes.
Were there syntax changes between versions? Is there anything it no longer supports?
My classes all use ProtoContract, ProtoMember, and ProtoInclude. I am running VS2010. As far as I was concerned, there were no problems with my protocol buffer code; I'm only upgrading because I figured it's good to have the most recent version.
Edit - 2010.09.09
One of the properties of my object is an array of ushorts. I've just noticed that this property does not serialize/deserialize properly with r282: the resulting values of the array are all zeros. The array had values before being serialized (r282) but not after deserialization (r282).
It turns out that despite my efforts, yes there was a breaking change in data format in one of the earlier builds. This only impacts ushort data, which was omitted from the handling at one point. This is regrettable, but the good news is that no data is lost - it is simply a bit inconvenient to access (it is essentially written via a string at the moment).
Here's my suggested workaround; for a member like:
[ProtoBuf.ProtoMember(1)]
public ushort[] Data { get; set; }
Replace that with:
[ProtoBuf.ProtoMember(1)]
private string[] LegacyData { get; set; }

private bool LegacyDataSpecified { get { return false; } set { } }

/* where 42 is just an unused new field number */
[ProtoBuf.ProtoMember(42, Options = MemberSerializationOptions.Packed)]
public ushort[] Data { get; set; }
[ProtoBuf.ProtoAfterDeserialization]
private void SerializationCallback()
{
    if (LegacyData != null && LegacyData.Length > 0)
    {
        ushort[] parsed = Array.ConvertAll<string, ushort>(
            LegacyData, ushort.Parse);
        if (Data != null && Data.Length > 0)
        {
            int oldLen = parsed.Length;
            Array.Resize(ref parsed, parsed.Length + Data.Length);
            Array.Copy(Data, 0, parsed, oldLen, Data.Length);
        }
        Data = parsed;
    }
    LegacyData = null;
}
This imports old-style data into LegacyData and merges it during (after) deserialization, or writes new-style data from Data. Faster, smaller, and it supports both old and new data.
