C# JSON.NET: How can I create a generic telegram?

I am building a C# (.NET 4.5) generic "telegram" - for lack of a better word - that communicates between a PLC and a regular PC using TCP and JSON, specifically JSON.NET. I have a TCP server set up and working, and I can create JSON strings out of objects, which can then be passed on. So I am left with defining a standard way to pass information, commands and events back and forth. What I am now struggling with is how to easily format the "payload". I have a JsonTelegram class with a dynamic property called "payload" - more or less this:
private dynamic _payload;
public dynamic payload
{
    get
    {
        return _payload;
    }
    set
    {
        _payload = value;
        updateContents();
    }
}
So if I update this variable like this:
myTelegram.payload = rampTime;
I would like to see the final JSON be this:
{
    "purpose": "Update this variable",
    "payload": {
        "rampTime": 2000
    },
    "timeStamp": "2000-01-01T00:00:00.000-00:00",
    "returnID": "2000-01-01T00:00:00.000-00:00"
}
It is the payload part that I am really struggling with, because that code there will end up looking like this:
{
    "purpose": "Update this variable",
    "payload": 2000,
    "timeStamp": "2000-01-01T00:00:00.000-00:00",
    "returnID": "2000-01-01T00:00:00.000-00:00"
}
when updateContents() uses this:
... Newtonsoft.Json.JsonConvert.SerializeObject(_payload) ...
Do you see how the name "rampTime" gets replaced with "payload"? I know I could just write a method and pass in both the object and its name - that would definitely work. But I would like to get trickier than that, partly because it should be possible, and partly because down the road I can see other reasons this design may work better. I want the code to simply know the name of the variable that was assigned to it and use that. I have seen many suggestions using techniques I barely understand, but when I try to adapt them, I get hung up on one issue or another. So I thought I would ask a fresh question, because surely many people want to do the same thing with JSON and the new dynamic features?
Stack Overflow has information on attributes and dictionaries, reflection, something to do with <T>, expression trees, and so on. For instance, these questions: "How do you get a variable's name as it was physically typed in its declaration?" and "Finding the variable name passed to a function".
I don't mind making the class code as messy and hacky as all get out, but I want the calling code to be as clean as "myTelegram.payload = rampTime;"
I'm sorry that I can't wade through all the suggestions well enough - I spend most of my time writing PLC code, and only now have a project where one side of the PLC link is C#.
Also, if there is simply a more elegant way to do things that is totally different, I am eager to hear it.
Thanks!

It sounds like you're looking for an ExpandoObject. With an ExpandoObject, your telegram looks something like:
class Telegram
{
    private dynamic _payload = new ExpandoObject();

    public dynamic payload
    {
        get
        {
            return _payload;
        }
    }

    public string ToJson()
    {
        ...
    }
}
And in your code:
var telegram = new Telegram();
telegram.payload.rampTime = 2000;
Console.WriteLine(telegram.ToJson());
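For completeness, one minimal way the elided ToJson() body could be filled in - a sketch only, assuming JSON.NET does the serialization (as in the rest of the question):

    public string ToJson()
    {
        // Serialize the whole telegram; the ExpandoObject payload becomes a nested
        // JSON object whose property names are the ones you assigned at runtime.
        return Newtonsoft.Json.JsonConvert.SerializeObject(this);
    }

With the usage above, that would produce {"payload":{"rampTime":2000}}; adding purpose, timeStamp and returnID as ordinary properties on Telegram would give you the full envelope from the question.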


Easiest way to retrieve values from D365 dataverse JSON?

I've recently started working with JSON structures from Microsoft Dynamics 365 Dataverse. A lot of their OData is structured like this:
{
    "@odata.context": "https://....",
    "value": [
        {
            "@odata.etag": "W/\"Jz....",
            "dataAreaId": "foo",
            "ItemNumber": "TEST",
            "IsPhantom": "No"
        }
    ]
}
I simply want to get the value of ItemNumber, which in this case is TEST. This seems like it should be very simple, but after an hour and a dozen different approaches, I'm wondering what I'm missing.
When using Newtonsoft, it seems like all their approaches require a fully-baked class that perfectly matches the JSON structure, and the correct specification of complex combinations of <T>. This is tedious because I work with many different JSON data sets, and almost all differ from each other in their attributes and types. I tried things like:
dynamic try1 = JsonConvert.DeserializeObject<dynamic>(jsonResult);
dynamic try2 = JObject.Parse(jsonResult);
dynamic try3 = JsonConvert.DeserializeObject<ExpandoObject>(jsonResult, new ExpandoObjectConverter());
JObject try4 = (JObject)JsonConvert.DeserializeObject<dynamic>(jsonResult);
Then I gave up on Newtonsoft and tried JavaScriptSerializer, but ran into similar dead ends.
var try5 = new JavaScriptSerializer().Deserialize<Dictionary<string, object>>(jsonResult);
string try6 = Utilities.SafeTrim(jsonObj["value"]);
In every case I end up with a valid object, but I can never figure out how to traverse down into the object and grab the value I want. I frequently end up with completely useless constructs like this from the VS watch window, which it delivers just to mock me:
new System.Linq.SystemCore_EnumerableDebugView<System.Collections.Generic.KeyValuePair<string, Newtonsoft.Json.Linq.JToken>>(new System.Linq.SystemCore_EnumerableDebugView<Newtonsoft.Json.Linq.JToken>((new System.Linq.SystemCore_EnumerableDebugView<System.Collections.Generic.KeyValuePair<string, Newtonsoft.Json.Linq.JToken>>(try1).Items[1]).Value).Items[0]).Items[2]
I'm sure someone has the correct and elegant way to solve this, which would be fine. But what I would really love is a way to take any JSON string and convert to an object, array, or collection that I can easily tease out any value with nested array syntax (or something equally simple). So for my example, maybe something like this:
jsonObject["value"]["ItemNumber"]
Is something like that so difficult?
This code:
const string jsonInput = @"{
    '@odata.context':'https://....',
    'value':[{
        '@odata.etag':'W/z....',
        'dataAreaId':'foo',
        'ItemNumber':'TEST',
        'IsPhantom':'No'
    }]}";
dynamic try1 = JsonConvert.DeserializeObject(jsonInput)!;
Console.WriteLine(try1["value"][0]["ItemNumber"]);
returns
TEST
as expected
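If you would rather not use dynamic at all, LINQ to JSON gives the same indexer-style access through a typed JObject - a short sketch, reusing the jsonInput string from above:

    using Newtonsoft.Json.Linq;

    JObject root = JObject.Parse(jsonInput);

    // Indexer syntax, very close to the jsonObject["value"]["ItemNumber"] you asked for
    string itemNumber = (string)root["value"][0]["ItemNumber"];

    // Or a JSONPath query, which skips the explicit array index
    string viaPath = (string)root.SelectToken("value[0].ItemNumber");

Both lines return "TEST" for the sample data.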

Get a specific part of my JSON data in C#

I have this JSON:
{
    "response":
    {
        "data":
        [
            {
                "start": 1,
                "subjects": ["A"]
            },
            {
                "start": 3,
                "subjects": ["B"]
            },
            {
                "start": 2,
                "subjects": ["C"]
            }
        ]
    }
}
And I want to get only the "subjects" data from the object whose "start" value is the smallest one greater than 1.3, which in this case would be C. Would anybody happen to know how such a thing can be achieved using C#?
I want to extend a bit on the other answers and shed some more light on the subject.
JSON (JavaScript Object Notation) is just a way to move data "on a wire". Inside .NET, you shouldn't really think of your object as JSON, although you may colloquially refer to a data structure that way.
Having said that, what is JSON "inside" .NET? It's your call. You can, for instance, treat it as a string, but you will have a hard time doing this operation of finding a specific node based on certain parameters/rules.
Since JSON is a tree-like structure, you could build your own data structure or use one of the many available on the web. This is great if you are learning the workings of the language and programming in general, bad if you are doing this professionally, because you will probably be reinventing the wheel. And parsing JSON is not an easy thing to do (again, a good exercise).
So, what is the most time-effective way of doing it? You have two options:
Use a dynamic object to represent your JSON data. dynamic is an "extension" to .NET (actually an extension to the CLR, called the DLR) which lets you create objects that don't have classes (they can be considered "untyped" or, better, to use duck typing).
Use a typed structure that you defined to hold your data. This is the canonical, object-oriented, .NET way of doing it, but there's a trade-off in declaring classes and typing everything, which is costly in terms of time. The payoff is that you get better IntelliSense, better performance (DLR objects are slower than traditional objects) and safer code.
To go with the first approach, you can refer to @YouneS's answer. You need to add a dependency to your project, Newtonsoft.Json (a NuGet package), and call deserialize to convert the JSON string to a dynamic object. As you can see from his answer, you can access properties on this object as you would in JavaScript. But you'll also realize that you have no IntelliSense, and things such as myObj.unexistentField = "asd" will be allowed. That is the nature of dynamically typed objects.
The second approach is to declare all types. Again, this is time consuming and in many cases you'll prefer not to do it. Refer to Microsoft Docs to get more insight.
You should first create your data contracts, as below (forgive me for any typos, I'm not compiling the code).
[DataContract]
class DataItem
{
    [DataMember(Name = "start")]
    public double Start { get; set; }

    [DataMember(Name = "subjects")]
    public string[] Subjects { get; set; }
}

[DataContract]
class ResponseItem
{
    [DataMember(Name = "data")]
    public DataItem[] Data { get; set; }
}

[DataContract]
class ResponseContract
{
    [DataMember(Name = "response")]
    public ResponseItem Response { get; set; }
}
Once you have all those data structures declared, deserialize your JSON into them:
using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
{
    var deserializer = new DataContractJsonSerializer(typeof(ResponseContract));
    return (ResponseContract)deserializer.ReadObject(ms);
}
The code above may seem a bit complicated, but it follows standard .NET / BCL patterns. DataContractJsonSerializer works only with streams, so you need to open a stream that contains your string; that is why you create a MemoryStream over the bytes of the JSON string.
You can also use Newtonsoft to do that, which is much simpler but, of course, still requires that extra dependency:
ResponseContract contract = JsonConvert.DeserializeObject<ResponseContract>(output);
If you use this approach you don't need the annotations (all those DataMember and DataContract attributes) on your classes, which makes the code a bit cleaner. I very much prefer this approach over DataContractJsonSerializer, but it's your call.
I've talked a lot about serializing and deserializing objects, but your question was, "How do I find a certain node?". All the discussion above was just a prerequisite.
There are, again and as usual, a few ways of achieving what you want:
@YouneS's answer. It's very straightforward and achieves what you are looking for.
Use the second approach above, and then use your typed object to get what you want. For instance:
var contract = JsonConvert.DeserializeObject<ResponseContract>(output);
var query = from dataItem in contract.Response.Data
            where dataItem.Start > 1.3
            orderby dataItem.Start
            select dataItem;
var item = query.FirstOrDefault();
This returns the first item which, since the query is ordered, is the one with the smallest Start greater than 1.3. Remember to test the result for null.
You can use a feature from Newtonsoft that lets you find the node you want directly. Refer to the documentation. A warning: it's a bit advanced and probably overkill for simple cases.
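If the feature meant here is SelectToken/JSONPath querying (an assumption - the answer only points to the documentation), a sketch for this question could look like:

    using System.Linq;
    using Newtonsoft.Json.Linq;

    var root = JObject.Parse(json);

    // JSONPath filter: every data element whose start is greater than 1.3,
    // then order in code and take the one with the smallest start.
    var match = root.SelectTokens("$.response.data[?(@.start > 1.3)]")
                    .OrderBy(t => (double)t["start"])
                    .FirstOrDefault();

    string[] subjects = match != null ? match["subjects"].ToObject<string[]>() : null;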
You can make it work with something like the following code:
// Dynamic object that will hold your deserialized JSON string
dynamic myObj = JsonConvert.DeserializeObject<dynamic>(YOUR_JSON_STRING);
// Will hold the value you are looking for
string[] mySubjectValue = null;
// Looking for your subject value
foreach (var o in myObj.response.data)
{
    if (o.start > 1.3)
        mySubjectValue = o.subjects.ToObject<string[]>();
}
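Note that the loop above keeps the last element it sees with start > 1.3, not necessarily the smallest one. To pick the smallest qualifying start, one option is to order the elements first - a sketch using LINQ to JSON, assuming the question's JSON is in a string called json:

    using System.Linq;
    using Newtonsoft.Json.Linq;

    var root = JObject.Parse(json);

    var best = ((JArray)root["response"]["data"])
        .Where(d => (double)d["start"] > 1.3)
        .OrderBy(d => (double)d["start"])
        .FirstOrDefault();

    // For the sample data this selects the element with start = 2, i.e. subjects ["C"]
    string[] subjects = best != null ? best["subjects"].ToObject<string[]>() : null;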

Parsing (many) different JSON objects to C# classes. Is strongly typed better?

I have been working on a client-server project. The server side is implemented in PHP, the client in C#. A websocket is used for the connection between them.
So, here is the problem. The client will make a request. JSON is used for sending objects and validating them against the schema. The request MUST HAVE its name and MAY contain args. Args are like an associative array (key => value).
The server will give a response. A response MAY contain args, objects, or arrays of objects. For example, the client sends a request like:
{
    "name": "AuthenticationRequest",
    "username": "root",
    "password": "password",
    "etc": "etc"
}
For this, the server will reply with an AuthSuccess or AuthFailed response like:
{
    "name": "AuthFailed",
    "args": [
        {
            "reason": "Reason text"
        }
    ]
}
If the response is AuthSuccess, the client will send a request asking who is online. The server must send an array of users.
And so on. So the problem is how to store those responses on the client side. Creating a new class for each response type seems insane. There will be hundreds of request types, and each of them requires its own response. And any change in the request structure will be very, very hard to propagate...
I need some kind of pattern or trick. I know it's kind of a noob question... But if anyone has a better idea for implementing the request/response structure, please share it.
Best regards!
I'd definitely go with a new class for each request type. Yes, you may need to write a lot of code, but it'll be safer. The point (to me) is: who will write this code? Read this answer to the end (or jump directly to the last suggested option).
In these examples I'll use Dictionary<string, string> for generic objects, but you may/should use a proper class (one which doesn't expose the dictionary), arrays, generic enumerations, or whatever you feel comfortable with.
1. (Almost) Strongly Typed Classes
Each request has its own strongly typed class, for example:
abstract class Request
{
    protected Request(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }
    public Dictionary<string, string> Args { get; set; }
}

sealed class AuthenticationRequest : Request
{
    public AuthenticationRequest() : base("AuthenticationRequest")
    {
    }

    public string UserName { get; set; }
    public string Password { get; set; }
}
Note that you may switch to a fully typed approach as well, dropping the Dictionary for Args in favor of typed classes.
Pros
What you saw as a drawback (changes are harder) is IMO a big benefit. If you change anything server-side then your request will fail because properties won't match. No subtle bugs where fields are left uninitialized because of typos in strings.
It's strongly typed, so your C# code is easier to maintain; you have compile-time checks (both for names and types).
Refactoring is easier because IDE can do it for you, no need to blind search and replace raw strings.
It's easy to implement complex types; your arguments aren't limited to plain strings (this may not be an issue now, but you may require it later).
Cons
You have more code to write at the very beginning (however, the class hierarchy will also help you spot dependencies and similarities).
2. Mixed Approach
Common parameters (name and arguments) are typed but everything else is stored in a dictionary.
sealed class Request
{
    public string Name { get; set; }
    public Dictionary<string, string> Args { get; set; }
    public Dictionary<string, string> Properties { get; set; }
}
With a mixed approach you keep some benefits of typed classes but you don't have to define each request type.
Pros
It's faster to implement than an almost/fully typed approach.
You have some degree of compile-time checks.
You can reuse all the code (I'd suppose your Request class will also be reused for the Response class, and if you move your helper methods - such as GetInt32() - to a base class then you'll write that code once).
Cons
It's unsafe: wrong types (for example, reading an integer from a string property) aren't detected until the error actually occurs at run-time.
Changes won't break compilation: if you change a property name then you have to manually search for each place you used that property. Automatic refactoring won't work. This may cause bugs that are pretty hard to detect.
Your code will be polluted with string constants (yes, you may define const string fields) and casts.
It's hard to use complex types for your arguments and you're limited to string values (or types that can be easily serialized/converted to a plain string).
3. Dynamic
Dynamic objects let you define an object and access its properties/methods as if it were a typed class, but they are actually resolved dynamically at run-time.
dynamic request = new ExpandoObject();
request.Name = "AuthenticationRequest";
request.UserName = "test";
Note that you may also use this easy-to-use syntax:
dynamic request = new {
    Name = "AuthenticationRequest",
    UserName = "test"
};
Pros
If you add a property to your schema you don't need to update your code if you don't use it.
It's a little bit safer than an untyped approach. For example, if request is filled with:
request.UserName = "test";
If you wrongly write this:
Console.WriteLine(request.User);
You will have a run-time error and you still have some basic type checking/conversion.
Code is a little bit more readable than a completely untyped approach.
It's easy and possible to have complex types.
Cons
Even if the code is a little bit more readable than a completely untyped approach, you still can't use the refactoring features of your IDE and you have almost no compile-time checks.
If you change a property name or structure in your schema and you forget to update your code (somewhere), you will get an error only at run-time, when that code happens to execute.
4. Auto-generated Strongly Typed Classes
Last but best... so far we forgot an important thing: JSON has a schema format against which it can be validated (see json-schema.org).
How can it be useful? Your fully typed classes can be generated from that schema; take a look at JSON Schema to POCO. If you don't have, or don't want to use, a schema, you can still generate classes from JSON examples: take a look at the JSON C# Class Generator project.
Just create one example (or schema) for each request/response and use a custom code generator/build task to build C# classes from that, something like this (see also MSDN about custom build tools):
Cvent.SchemaToPoco.Console.exe -s %(FullPath) -o .\%(Filename).cs -n CsClient.Model
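Purely for illustration, classes generated from the AuthFailed example earlier might look roughly like this (hypothetical output - the exact shape depends on the schema and the generator):

    // Hypothetical generator output; class and property names are assumptions.
    public class AuthFailedArg
    {
        public string Reason { get; set; }
    }

    public class AuthFailedResponse
    {
        public string Name { get; set; }
        public AuthFailedArg[] Args { get; set; }
    }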
Pros
All the pros of the above solutions.
Cons
Nothing I can think of...
Why is it a problem to create a class for each kind of Request / Response? If you have hundreds of different kinds of Requests and Responses, you might want to try and categorize them better.
I would argue there are common patterns across your requests and responses. E.g. a FailureResponse might always contain some status information and maybe a UserData object (which could be anything depending on the use case). This can be applied to other categories likewise (e.g. SuccessResponse); see the sketch below.
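A minimal sketch of that categorization idea - every name here is an assumption for illustration, not part of the original answer:

    // One envelope per category instead of one class per concrete response.
    public class FailureResponse
    {
        public string Name { get; set; }        // e.g. "AuthFailed"
        public string Reason { get; set; }      // status information
        public object UserData { get; set; }    // use-case specific, can be anything
    }

    public class SuccessResponse
    {
        public string Name { get; set; }
        public object UserData { get; set; }
    }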
dynamic is a static type that acts like a placeholder for a type not known until runtime. Once the dynamic object is declared, it is possible to call operations on it, get and set its properties, and even pass the dynamic instance around pretty much as if it were any normal type. dynamic gives us a lot of rope to hang ourselves with. When dealing with objects whose types can be known at compile time, you should avoid the dynamic keyword at all costs.
You can read more about dynamic.

How to parse any constant Lua table, preferably without loading it in the Lua VM?

I have a bunch of data in the form of a Lua table and I would like to parse that data into a usable structure in C#.
The problem with Lua tables is that there are optional fields, tables are very dynamic and are not just a dictionary with one type for the key and one type for the value. It's possible to have one Lua table with both string and integer keys, and values of type integer, string and even table.
Sadly, the data that I'm parsing makes use of the dynamic nature of the language and isn't really structured in any straight-forward way. This requires a dynamic representation of the data, using for example Dictionary<object, dynamic>.
The format of the data is e.g. (from http://ideone.com/9nzXvt)
local mainNode =
{
    [0] =
    {
        Name = "First element",
        Comments = "I have comments!",
        Data = { {123, 456}; foo = { "bar" } }
    },
    [1337] =
    {
        Name = "Another element",
        Data = { {0}; foo = nil }
    }
}
Are there any libraries out there to do this? Is there any way to accomplish this without parsing the data character by character?
You can use the LuaInterface library.
There's some sample code here.
You'll want to use a combination of DoFile (to load the file) and GetTable (to read the table into a LuaTable object) so that you can use the result from C#. The LuaTable exposes an IDictionaryEnumerator through GetEnumerator.
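A rough sketch of how that could look from C# - hedged, because the exact LuaInterface API surface varies by version, and it assumes mainNode is made global (drop the local) so the C# side can see it:

    using System;
    using System.Collections;
    using LuaInterface;

    var lua = new Lua();
    lua.DoFile("data.lua");                       // runs the chunk that defines mainNode

    LuaTable mainNode = lua.GetTable("mainNode");
    IDictionaryEnumerator it = mainNode.GetEnumerator();
    while (it.MoveNext())
    {
        // Keys may be numbers or strings; values may themselves be nested LuaTables
        var element = it.Value as LuaTable;
        if (element != null)
            Console.WriteLine("{0}: {1}", it.Key, element["Name"]);
    }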
EDIT:
If you had this table constructor:
local t = { os.time() }
print(t[1]);
The function call in the constructor would need to be executed to initialize the data.
For constant literals, you can have string constants like so:
local a = [==[
hello
there""\"]==]
with different levels of equal signs.
A numeric literal can have the form:
0X1.921FB54442D18P+1
with P as a binary exponent.
Faithfully reproducing the Lua syntax for constant literals without using the lightweight Lua VM would require re-implementing a good chunk of the Lua language spec. There's not much benefit in re-inventing the wheel.
I know this is an old post, but this could be useful for people who arrive here after this post...
You could also look at Xanathar's MoonSharp (Moon#) project; I have just started to try it and it seems to work well with wrapping up the dynamic tables, nested tables and so on. You just give the interpreter a script and it will parse it and hold the parsed objects in the interpreter context.
Links:
http://www.moonsharp.org
https://github.com/xanathar/moonsharp/
Example:
[TestMethod]
public void Test_GetProperty_ForValidTypeAndKey_ReturnsValue()
{
    // Arrange
    String luaScript = MockLuaScripts.TEST_OBJECT_WITH_STRING;
    Script context = new Script();
    String expectedResult = MockLuaScripts.ValidValue1;

    // Act
    /* Run the script */
    context.DoString(luaScript);
    /* Get the object */
    DynValue resultObject = context.Globals.Get(MockLuaScripts.ObjectInstance1);
    /* Get the property */
    DynValue tableValue = resultObject.Table.Get(MockLuaScripts.ValidKey1);
    String actualResult = tableValue.String;
    /* Or you can use..
    String actualResult = tableValue.ToObject<String>();
    */

    // Assert
    Assert.AreEqual(expectedResult, actualResult);
}
Apologies if the above code is not exactly correct as it is taken from one of my test classes and converted for posting here. Please excuse the wrapped up mock-data constants, but they are in essence the Lua script and expected values.
When trying to access entries in a Lua table via an incorrect key, the returned DynValue has a DataType of "Nil", so it is easy to handle with a conditional check.
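For example, something along these lines (a sketch; the key name is made up):

    DynValue missing = resultObject.Table.Get("noSuchKey");   // key that doesn't exist
    if (missing.Type == DataType.Nil)
    {
        // Key was not present in the table - fall back or report as needed
    }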
More examples of the usage of Xanathar's MoonSharp can be found on his website and his GitHub repo (see the links above). He seems to be very helpful with any issues or questions that you may come across, too.
I have started to write some extensions, with unit tests that show further usage, in one of my repos (see the link below):
Links:
https://github.com/dibley1973/MoonsharpExtensions

What is a good design when trying to build objects from a list of key value pairs?

So if I have a method that parses a text file and returns a list of lists of key-value pairs, and I want to create objects from the kvps returned (each list of kvps represents a different object), what would be the best approach?
The first method that pops into mind is pretty simple: just keep a list of keywords:
private const string NAME = "name";
private const string PREFIX = "prefix";
and check the keys I get against the constants I want, defined above. This is a fairly core piece of the project I'm working on though, so I want to do it well; does anyone have any more robust suggestions (not saying there's anything inherently un-robust about the above method - I'm just asking around)?
Edit:
More details have been asked for. I'm working on a little game in my spare time, and I am building up the game world with configuration files. There are four - one defines all creatures, another defines all areas (and their locations in a map), another all objects, and a final one defines various configuration options and things that don't fit elsewhere. With the first three configuration files, I will be creating objects based on the content of the files - it will be quite text-heavy, so there will be a lot of strings, things like names, plurals, prefixes - that sort of thing. The configuration values are all like so:
-
key: value
key: value
-
key: value
key: value
-
Where the '-' line denotes a new section/object.
Take a deep look at the XmlSerializer. Even if you are constrained not to use XML on disk, you might want to copy some of its features. It could then look like this:
public class DataObject
{
    [Column("name")]
    public string Name { get; set; }

    [Column("prefix")]
    public string Prefix { get; set; }
}
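To make the analogy concrete, a minimal reflection-based mapper could read those attributes and fill a DataObject from one parsed section. This is a sketch only - the Column attribute and the SectionMapper helper below are assumptions, not an existing API:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical attribute matching the snippet above.
    [AttributeUsage(AttributeTargets.Property)]
    public class ColumnAttribute : Attribute
    {
        public string Name { get; private set; }
        public ColumnAttribute(string name) { Name = name; }
    }

    public static class SectionMapper
    {
        // Builds a T from one section's key/value pairs using the [Column] names.
        // Assumes string-typed properties, like the DataObject above.
        public static T Map<T>(IEnumerable<KeyValuePair<string, string>> section) where T : new()
        {
            var result = new T();
            var byKey = section.ToDictionary(kv => kv.Key, kv => kv.Value);
            foreach (var prop in typeof(T).GetProperties())
            {
                var col = (ColumnAttribute)Attribute.GetCustomAttribute(prop, typeof(ColumnAttribute));
                string value;
                if (col != null && prop.CanWrite && byKey.TryGetValue(col.Name, out value))
                    prop.SetValue(result, value, null);
            }
            return result;
        }
    }

    // Usage: var obj = SectionMapper.Map<DataObject>(sectionKvps);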
Be careful though to include some kind of format version in your files, or you will be in hell's kitchen come the next format change.
Making a lot of unwarranted assumptions, I think that the best approach would be to create a Factory that will receive the list of key value pairs and return the proper object or throw an exception if it's invalid (or create a dummy object, or whatever is better in the particular case).
private class Factory
{
    public static IConfigurationObject Create(List<string> keyValuePair)
    {
        switch (keyValuePair[0])
        {
            case "x":
                return new x(keyValuePair[1]);
            /* etc. */
            default:
                throw new ArgumentException("Wrong parameter in the file");
        }
    }
}
The strongest assumption here is that all your objects can be treated partly the same (i.e., they implement the same interface (IConfigurationObject in the example) or belong to the same inheritance tree).
If they don't, then it depends on your program flow and what you are doing with them. But nonetheless, they should :)
EDIT: Given your explanation, you could have one Factory per file type; the switch in it would be the authoritative source on the allowed types per file type, and they probably share something in common. Reflection is possible, but it's riskier because it's less obvious and self-documenting than this approach.
What do you need objects for? The way you describe it, you'll use them as some kind of (key-wise) restricted map anyway. If you do not need some kind of inheritance, I'd simply wrap a map-like structure in an object like this:
[java-inspired pseudo-code:]
class RestrictedKVDataStore {
    const ALLOWED_KEYS = new Collection('name', 'prefix');
    Map data = new Map();

    void put(String key, Object value) {
        if (ALLOWED_KEYS.contains(key))
            data.put(key, value)
    }

    Object get(String key) {
        return data.get(key);
    }
}
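Since the rest of this thread is C#, the same sketch in C# might look like this (names and types are just illustrative):

    using System.Collections.Generic;

    class RestrictedKVDataStore
    {
        private static readonly HashSet<string> AllowedKeys =
            new HashSet<string> { "name", "prefix" };

        private readonly Dictionary<string, object> _data = new Dictionary<string, object>();

        public void Put(string key, object value)
        {
            if (AllowedKeys.Contains(key))
                _data[key] = value;
        }

        public object Get(string key)
        {
            object value;
            return _data.TryGetValue(key, out value) ? value : null;
        }
    }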
You could create an interface that matched the column names, and then use the Reflection.Emit API to create a type at runtime that gave access to the data in the fields.
EDIT:
Scratch that; this still applies, but I think what you're doing is reading a configuration file and parsing it into this:
List<List<KeyValuePair<String,String>>> itemConfig =
new List<List<KeyValuePair<String,String>>>();
In this case, we can still use a reflection factory to instantiate the objects; I'd just pass the nested inner list to it, instead of passing each individual key/value pair (see the sketch after the factory code below).
OLD POST:
Here is a clever little way to do this using reflection:
The basic idea:
Use a common base class for each Object class.
Put all of these classes in their own assembly.
Put this factory in that assembly too.
Pass in the KeyValuePair that you read from your config, and in return it finds the class that matches KV.Key and instantiates it with KV.Value
public class KeyValueToObjectFactory
{
    private Dictionary<string, Type> _kvTypes = new Dictionary<string, Type>();

    public KeyValueToObjectFactory()
    {
        // Preload the Types into a dictionary so we can look them up later
        // Obviously, you want to reuse the factory to minimize overhead, so don't
        // do something stupid like instantiate a new factory in a loop.
        foreach (Type type in typeof(KeyValueToObjectFactory).Assembly.GetTypes())
        {
            if (type.IsSubclassOf(typeof(KVObjectBase)))
            {
                _kvTypes[type.Name.ToLower()] = type;
            }
        }
    }

    public KVObjectBase CreateObjectFromKV(KeyValuePair<string, string> kv)
    {
        // Keys were stored lower-cased above, so look the name up the same way.
        string kvName = kv.Key.ToLower();

        // If the Type information is in our Dictionary, instantiate a new instance of that class.
        Type kvType;
        if (_kvTypes.TryGetValue(kvName, out kvType))
        {
            return (KVObjectBase)Activator.CreateInstance(kvType, kv.Value);
        }
        else
        {
            throw new ArgumentException("Unrecognized KV Pair");
        }
    }
}
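Tying this back to the EDIT above, a hypothetical extra overload on KeyValueToObjectFactory that accepts one nested inner list (one object's worth of pairs) might look like the following. The convention (first pair picks the type, remaining pairs set same-named string properties) is my assumption, not part of the original answer, and it assumes using System.Linq; and using System.Reflection; at the top of the file:

    public KVObjectBase CreateObjectFromKV(List<KeyValuePair<string, string>> kvps)
    {
        // First pair selects the concrete type and feeds its constructor.
        KVObjectBase instance = CreateObjectFromKV(kvps[0]);

        // Remaining pairs are mapped onto public string properties by (case-insensitive) name.
        foreach (var kv in kvps.Skip(1))
        {
            var prop = instance.GetType().GetProperty(kv.Key,
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
            if (prop != null && prop.CanWrite)
                prop.SetValue(instance, kv.Value, null);
        }
        return instance;
    }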
#David:
I already have the parser (and most of these will be hand written, so I decided against XML). But that looks like a really nice way of doing it; I'll have to check it out. Excellent point about versioning too.
#Argelbargel:
That looks good too. :')
...This is a fairly core piece of the project I'm working on though...
Is it really?
It's tempting to just abstract it and provide a basic implementation with the intention of refactoring later on.
Then you can get on with what matters: the game.
Just a thought
Is it really?
Yes; I have thought this out. Far be it from me to do more work than necessary. :')
