Pros and Cons of using json array and json object - c#

My question is a little theoretical, but it's an interesting one.
I want to know which approach is better when building a Web API.
I have two methods.
The first takes a List<Customer> as its parameter:
public void Create(List<Customer> customers) { //... }
The second takes a CustomerList as its parameter (where the CustomerList class has a property public List<Customer> customers { get; set; }):
public void Create(CustomerList customer) { //... }
I know that for method one I need to pass a JSON array, and for method two I need to wrap the JSON array in a JSON object.
But my question is: which approach is better, and why?

When it comes to APIs, I always try to be as explicit as possible about the contract they expose. So when you expect a List, use a JSON array. It will be much clearer to users of your API what is expected.
Having said that, the advantage of using a JSON object is that it won't be a breaking change if you decide to accept extra properties with your request later on.
In the end, it all depends on your use case. Is this going to be an endpoint exposed to people other than just you? Do you already know you're going to want to accept additional data along with the customer list in the future? These are some of the questions you need to ask yourself.
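For illustration, here is roughly what the two request bodies would look like (the Customer property names are made up for the example):
Method 1 (bare JSON array):
[ { "name": "Alice" }, { "name": "Bob" } ]
Method 2 (array wrapped in an object, which leaves room for extra top-level properties later without breaking the contract):
{ "customers": [ { "name": "Alice" }, { "name": "Bob" } ] }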


Get a specific part of my JSON data in C#

I have this JSON:
{
    "response":
    {
        "data":
        [
            {
                "start": 1,
                "subjects": ["A"]
            },
            {
                "start": 3,
                "subjects": ["B"]
            },
            {
                "start": 2,
                "subjects": ["C"]
            }
        ]
    }
}
And I want to get only the "subjects" data from the object whose "start" value is the smallest one greater than 1.3, which in this case would be C. Would anybody happen to know how such a thing can be achieved in C#?
I want to extend a bit on the other answers and shed more light into the subject.
JSON (JavaScript Object Notation) is just a way to move data "on the wire". Inside .NET, you shouldn't really consider your object to be "a JSON", although you may colloquially refer to a data structure as such.
Having said that, what is a JSON "inside" .NET? It's your call. You can, for instance, treat it as a string, but you will have a hard time doing this operation of finding a specific node based on certain parameters/rules.
Since JSON is a tree-like structure, you could build your own data structure or use one of the many available on the web. This is great if you are learning the workings of the language and programming in general, but bad if you are doing this professionally, because you will probably be reinventing the wheel. And parsing JSON is not an easy thing to do (again, a good exercise).
So, what is the most time-effective way of doing it? You have two options:
Use a dynamic object to represent your JSON data. dynamic is an "extension" to .NET (actually an extension to the CLR, called the DLR) which lets you create objects that don't have classes (they can be considered "untyped", or, better, to use duck typing).
Use a typed structure that you define to hold your data. This is the canonical, object-oriented, .NET way of doing it, but there's a trade-off in declaring classes and typing everything, which is costly in terms of time. The payoff is that you get better IntelliSense, better performance (DLR objects are slower than traditional objects) and safer code.
To go with the first approach, you can refer to the answer from #YouneS. You need to add a dependency to your project, Newtonsoft.Json (a NuGet package), and call deserialize to convert the JSON string to a dynamic object. As you can see from that answer, you can access properties on this object as you would in JavaScript. But you'll also realize that you have no IntelliSense and things such as myObj.unexistentField = "asd" will be allowed. That is the nature of dynamically typed objects.
The second approach is to declare all the types. Again, this is time consuming and in many cases you'll prefer not to do it. Refer to the Microsoft Docs for more insight.
You should first create your data contracts, as below (forgive me for any typos, I'm not compiling the code).
[DataContract]
class DataItem
{
    [DataMember(Name = "start")]
    public double Start { get; set; }

    [DataMember(Name = "subjects")]
    public string[] Subjects { get; set; }
}

[DataContract]
class ResponseItem
{
    [DataMember(Name = "data")]
    public DataItem[] Data { get; set; }
}

[DataContract]
class ResponseContract
{
    [DataMember(Name = "response")]
    public ResponseItem Response { get; set; }
}
Once you have all those data structures declared, deserialize your json to it:
using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
{
    var deserializer = new DataContractJsonSerializer(typeof(ResponseContract));
    return (ResponseContract)deserializer.ReadObject(ms);
}
The code above may seem a bit complicated, but it follows the usual .NET / BCL patterns. DataContractJsonSerializer works only with streams, so you need to open a stream that contains your string. That is why you create a memory stream over the bytes of the JSON string.
You can also use Newtonsoft to do that, which is much simpler but, of course, still requires that extra dependency:
var contract = JsonConvert.DeserializeObject<ResponseContract>(json);
If you use this approach you don't need the annotations (all those DataMember and DataContract attributes) on your classes, which makes the code a bit cleaner. I very much prefer this approach over DataContractJsonSerializer, but it's your call.
I've talked a lot about serializing and deserializing objects, but your question was, "How do I find a certain node?". All the discussion above was just a prerequisite.
There are, again and as usual, a few ways of achieving what you want:
#YouneS answer. It's very straightforward and achieves what you are looking for.
Use the second approach above, and then use your typed object to get what you want. For instance:
var contract = JsonConvert.DeserializeObject<ResponseContract>(json);
var query = from dataItem in contract.Response.Data
            where dataItem.Start > 1.3
            orderby dataItem.Start
            select dataItem;
var item = query.FirstOrDefault();
Which will return the first item which, since it's ordered, should be the smallest. Remember to test the result for null.
You can use a feature from Newtonsoft that enables you to directly find the node you want. Refer to the documentation. A warning: it's a bit advanced and probably overkill for simple cases.
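Assuming that feature refers to SelectToken/JSONPath querying in Newtonsoft.Json (and that Newtonsoft.Json.Linq and System.Linq are imported), a minimal sketch could look like this; treat it as a starting point rather than the definitive way:
var root = JObject.Parse(json);

// Filter the data array down to items whose "start" is greater than 1.3,
// order them by "start" and take the "subjects" of the smallest one.
var subjects = root.SelectTokens("$.response.data[?(@.start > 1.3)]")
                   .OrderBy(t => (double)t["start"])
                   .Select(t => t["subjects"].ToObject<string[]>())
                   .FirstOrDefault();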
You can make it work with something like the following code:
// Dynamic object that will hold your deserialized JSON string
dynamic myObj = JsonConvert.DeserializeObject<dynamic>(YOUR-JSON-STRING);

// Will hold the value you are looking for
string[] mySubjectValue = null;
double smallestStart = double.MaxValue;

// Look for the object whose "start" is the smallest value greater than 1.3
// and keep its "subjects"
foreach (var o in myObj.response.data)
{
    if (o.start > 1.3 && o.start < smallestStart)
    {
        smallestStart = (double)o.start;
        mySubjectValue = o.subjects.ToObject<string[]>();
    }
}

Parsing (many) JSON different objects to C# classes. Is strongly typed better?

I have been working on a client - server project. The server side is implemented in PHP, the client in C#. A WebSocket is used for the connection between them.
So, here is the problem. The client will make a request. JSON is used for sending objects and validating them against the schema. A request MUST HAVE a name and MAY contain args. Args are like an associative array (key => value).
The server will give a response. A response MAY contain args, objects, or arrays of objects. For example, the client sends a request like:
{
    "name": "AuthenticationRequest",
    "username": "root",
    "password": "password",
    "etc": "etc"
}
For this, the server will reply with an AuthSuccess or AuthFailed response like:
{
    "name": "AuthFailed",
    "args": [
        {
            "reason": "Reason text"
        }
    ]
}
If the response is AuthSuccess, the client will send a request asking who is online. The server must send an array of users.
And so on. So the problem is how to store those responses on the client side. I mean, creating a new class for each response type seems insane. There will be hundreds of request types, and each of them requires its own response. And any change in the structure of a request would be very, very hard...
I need some kind of pattern or trick. I know it's kind of a noob question... But if anyone has a better idea of implementing a request/response structure, please share it.
Best regards!
I'd definitely go with a new class for each request type. Yes, you may need to write a lot of code, but it'll be safer. The point (to me) is: who will write this code? Let's read this answer to the end (or jump directly to the last suggested option).
In these examples I'll use Dictionary<string, string> for generic objects but you may/should use a proper class (which doesn't expose dictionary), arrays, generic enumerations or whatever you'll feel comfortable with.
1. (Almost) Strongly Typed Classes
Each request has its own strongly typed class, for example:
abstract class Request
{
    protected Request(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }
    public Dictionary<string, string> Args { get; set; }
}

sealed class AuthenticationRequest : Request
{
    public AuthenticationRequest() : base("AuthenticationRequest")
    {
    }

    public string UserName { get; set; }
    public string Password { get; set; }
}
Note that you may also switch to a fully typed approach, dropping the Dictionary for Args in favor of typed classes.
Pros
What you saw as a drawback (changes are harder) is IMO a big benefit. If you change anything server-side then your request will fail because properties won't match. No subtle bugs where fields are left uninitialized because of typos in strings.
It's strongly typed, so your C# code is easier to maintain and you have compile-time checks (both for names and types).
Refactoring is easier because IDE can do it for you, no need to blind search and replace raw strings.
It's easy to implement complex types; your arguments aren't limited to plain strings (it may not be an issue now, but you may require it later).
Cons
You have more code to write at the very beginning (however, the class hierarchy will also help you spot dependencies and similarities).
2. Mixed Approach
Common parameters (name and arguments) are typed but everything else is stored in a dictionary.
sealed class Request
{
    public string Name { get; set; }
    public Dictionary<string, string> Args { get; set; }
    public Dictionary<string, string> Properties { get; set; }
}
With a mixed approach you keep some benefits of typed classes but you don't have to define each request type.
Pros
It's faster to implement than an almost/fully typed approach.
You have some degree of compile-time checks.
You can reuse all the code (I'd suppose your Request class will also be reused for the Response class, and if you move your helper methods, such as GetInt32(), to a base class then you'll write that code only once; a sketch of such a helper follows after the cons below).
Cons
It's unsafe: wrong types (for example, retrieving an integer from a string property) aren't detected until the error actually occurs at run-time.
Changes won't break compilation: if you change a property name then you have to manually search for each place you used that property. Automatic refactoring won't work. This may cause bugs that are pretty hard to detect.
Your code will be polluted with string constants (yes, you may define const string fields) and casts.
It's hard to use complex types for your arguments and you're limited to string values (or types that can be easily serialized/converted to a plain string).
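As mentioned in the pros above, a minimal sketch of such a shared helper might look like the following (the class and property names are my own, not from the question; System.Collections.Generic and System.Globalization are assumed to be imported):
// A possible shared base for Request/Response; GetInt32 centralizes the
// string-to-int conversion so it is written only once.
abstract class Message
{
    public string Name { get; set; }
    public Dictionary<string, string> Properties { get; set; }

    // Reads a property by name and converts it, throwing a descriptive error
    // instead of failing silently when the key is missing.
    protected int GetInt32(string key)
    {
        if (!Properties.TryGetValue(key, out var raw))
            throw new KeyNotFoundException($"Missing property '{key}'.");

        return int.Parse(raw, CultureInfo.InvariantCulture);
    }
}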
3. Dynamic
Dynamic objects let you define an object and access its properties/methods like a typed class, but they are actually resolved dynamically at run-time.
dynamic request = new ExpandoObject();
request.Name = "AuthenticationRequest";
request.UserName = "test";
Note that you may also have this easy to use syntax:
dynamic request = new
{
    Name = "AuthenticationRequest",
    UserName = "test"
};
Pros
If you add a property to your schema, you don't need to update your code as long as you don't use it.
It's a little bit safer than an untyped approach. For example, if request is filled with:
request.UserName = "test";
and you wrongly write this:
Console.WriteLine(request.User);
you will have a run-time error, and you still have some basic type checking/conversion.
Code is a little bit more readable than with a completely untyped approach.
It's easy and possible to have complex types.
Cons
Even if the code is a little bit more readable than with a completely untyped approach, you still can't use the refactoring features of your IDE and you have almost no compile-time checks.
If you change a property name or structure in your schema and you forget to update your code (somewhere), you will get an error only at run-time, whenever you happen to use it.
4. Auto-generated Strongly Typed Classes
Last but best... so far we forgot an important thing: JSON has a schema against which it can be validated (see json-schema.org).
How can that be useful? Your fully typed classes can be generated from that schema; take a look at JSON Schema to POCO. If you don't have or don't want to use a schema, you can still generate classes from JSON examples: take a look at the JSON C# Class Generator project.
Just create one example (or schema) for each request/response and use a custom code generator/build task to build the C# classes from it, something like this (see also MSDN about custom build tools):
Cvent.SchemaToPoco.Console.exe -s %(FullPath) -o .\%(Filename).cs -n CsClient.Model
Pros
All the pros of the above solutions.
Cons
Nothing I can think of...
Why is it a problem to create a class for each kind of request/response? If you have hundreds of different kinds of requests and responses, you might want to try to categorize them better.
I would argue there are common patterns across your requests or responses. E.g. a FailureResponse might always contain some status information and maybe a UserData object (which could be anything depending on the use case). This can be applied to other categories likewise (e.g. SuccessResponse).
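To make that concrete, here is a rough sketch of what such categories could look like; the class and property names are assumptions, not part of the original design:
public abstract class Response
{
    public string Name { get; set; }
    public Dictionary<string, string> Args { get; set; }
}

public sealed class FailureResponse : Response
{
    public string Reason { get; set; }        // e.g. the "reason" arg of AuthFailed
}

public sealed class SuccessResponse<TData> : Response
{
    public List<TData> Items { get; set; }    // e.g. the array of online users
}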
dynamic is a new static type that acts like a placeholder for a type not known until run-time. Once a dynamic object is declared, it is possible to call operations and get and set properties on it, and even pass the dynamic instance around pretty much as if it were any normal type. dynamic gives us a lot of rope to hang ourselves with. When dealing with objects whose types can be known at compile time, you should avoid the dynamic keyword at all costs.
You can read more about Dynamic

Why does SignalR use IList in its contracts and everywhere in its internals instead of IEnumerable?

I'm sending messages to individual users depending on their roles. To accomplish that, I have the following piece of code:
public static void Add(Guid userId, IEnumerable<SnapshotItem> snapshot)
{
    var hub = GlobalHost.ConnectionManager.GetHubContext<FeedbackHub>();
    var items = ApplicationDbContext.Instance.InsertSnapshot(userId, Guid.NewGuid(), snapshot);

    foreach (var sendOperation in ConnectedUsers.Instance.EnumerateSendOperations(items))
    {
        hub.Clients.Users(sendOperation.Groups.SelectMany(x => x.Users).Select(x => x.Id).ToList()).OnDataFeedback(sendOperation.Items);
    }
}
I'm not sure why I have to invoke .ToList() each time I need to send something. My backing store is a HashSet<string> and I want SignalR to work with that type of store instead of converting it to a List each time, since that obviously consumes processing power and memory.
Since behind the scenes SignalR does a simple iteration over the users or connectionIds argument, wouldn't it be wiser to use IEnumerable instead of IList? I've looked into the SignalR sources and it shouldn't be too hard to achieve. Is there a particular reason for using IList?
Edit
I created an issue on the SignalR GitHub page; I'll have to wait for one of the actual devs to clear things up...
There's no good reason for this as far as I can see from digging through the older source code. The irony of it is that the IList<string> gets handed into the MultipleSignalProxy class, where it is promptly mapped to a different format using another LINQ expression, and then that is .ToList()'d. So, based on that exact usage in the implementation, they really don't need anything more than IEnumerable<string>.
My best answer would be that SignalR internally uses the extra capabilities of IList, like getting the count, iterating over the collection and, additionally, the index-based access you get with IList but not with ICollection. The only reason to use the more capable interface is that somewhere they are using, or feel the need for, that additional functionality. Otherwise I would assume the best practice of using the lighter interface, ICollection or IEnumerable, essentially the base interfaces in the IEnumerable -> ICollection -> IList hierarchy.
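For reference, here's a small sketch of what each interface in that hierarchy adds (plain BCL behaviour, nothing SignalR-specific; System.Linq is assumed for ToList()):
IEnumerable<string> ids = new HashSet<string> { "a", "b", "c" };  // can only be enumerated (foreach)
ICollection<string> asCollection = ids.ToList();                  // adds Count, Add, Remove, Contains
IList<string> asList = (IList<string>)asCollection;               // adds index-based access, e.g. asList[0]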

Strategies for "Flexible Webservice"

I am building web services for many different clients to connect to a database of automotive parts. These parts have a wide variety of properties. Different clients will need different subsets of properties to 'do their thing.'
All clients will need at least an ID, a part number, and a name. Some might need prices, some might need URLs to images, etc. The next client might be written years from now and require yet another subset of properties. I'd rather not send more than they need.
I have been building separate 'PartDTOs' with subsets of properties for each of these requirements, and serving them up as separate web service methods that return the same list of parts but with different properties for each one. Rather than build this up for each client and come up with logical names for the DTOs and methods, I'd like a way for the client to specify what they want. I'm returning JSON, so I was thinking about the client passing me a JSON object listing the properties they want in the result set:
ret = { ImageUrl: true, RetailPrice: true, ... }
First off, does this make sense?
Second, what I'd rather not lose here is the nice syntax of returning an IEnumerable<DTO> and letting the JSON tools serialize it. I could certainly build up a 'JSON' string and return that, but that seems pretty kludgy.
Suggestions? C# 'dynamic'?
This is a very good candidate for the Entity-Attribute-Value model. Basically you have a table of ID, Name, Value and you allow each customer/facet to store whatever they want... Then when they query you return their name-value pairs and let them use them as they please.
PROS: super flexible. Good for situations where a strong schema adds tons of complexity vs value. Single endpoint for multiple clients.
CONS: A generally disliked pattern, very hard to select from efficiently and also hard to index. However, if all you do is store and return collections of name-value pairs, it should be fine.
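A minimal sketch of what such a storage row could look like as a C# entity (the names here are illustrative, not from the answer):
// One row per part attribute; the client decides how to interpret Value.
public class PartAttribute
{
    public int PartId { get; set; }     // the part this value belongs to
    public string Name { get; set; }    // e.g. "RetailPrice" or "ImageUrl"
    public string Value { get; set; }   // stored as a string and converted by the consumer
}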
I ended up going the dictionary-route. I defined a base class:
public abstract class DictionaryAsDTO<T> : IReadOnlyDictionary<string, object>
{
    protected DictionaryAsDTO(T t, string listOfProperties)
    {
        // Populate an internal dictionary with a subset of t's properties based on the string
    }
}
Then a DTO for Part like so:
public class PartDTO : DictionaryAsDTO<Part>
{
    public PartDTO(Part p, string listOfProperties) : base(p, listOfProperties) { }

    // Override method to populate the base's dictionary with Part properties based on
    // listOfProperties
}
Then I wrote a JSON.NET converter for DictionaryAsDTO which emits JSON-y object properties instead of key-value pairs.
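The converter itself isn't shown in the answer; a rough sketch of what it could look like with Newtonsoft.Json (write-only, since the DTO is only ever serialized; the class name is assumed) is:
public class DictionaryAsDtoConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) =>
        typeof(IReadOnlyDictionary<string, object>).IsAssignableFrom(objectType);

    // Emit each dictionary entry as a regular JSON property instead of a key/value pair.
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        writer.WriteStartObject();
        foreach (var pair in (IReadOnlyDictionary<string, object>)value)
        {
            writer.WritePropertyName(pair.Key);
            serializer.Serialize(writer, pair.Value);
        }
        writer.WriteEndObject();
    }

    public override bool CanRead => false;

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) =>
        throw new NotSupportedException("This converter is write-only.");
}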
The web service builds an IEnumerable<PartDTO> based on queries that return IEnumerable<Part> and serializes that.
Voila!

Deserializing JSON object to runtime type in WinRT (C#)

I have a small WinRT client app for my online service (an Azure Web Service). The server sends a JSON-encoded object (with potential additional metadata) to the client, and the client's responsibility is to deserialize this data properly into classes and forward it to the appropriate handlers.
Currently, the objects received can be deserialized with a simple
TodoItem todo = JsonConvert.DeserializeObject<TodoItem>(message.Content);
However, there can be multiple types of items received. So what I am currently thinking is this:
1. I include the type info in a header server-side, such as "Content-Object: TodoItem"
2. I define attributes on TodoItem on the client side (see below)
3. Upon receiving a message from the server, I find the class using the attribute I defined
4. I call the deserialization method with the resolved type
(Example of the attribute mentioned in 2.)
[BackendObjectType("TodoItem")]
public class TodoItem
My problem with this approach, however, is going from the Type to the generic type parameter in the deserialization, as I can't call:
Type t = ResolveType(message);
JsonConvert.DeserializeObject<t>(message.Content);
I tried finding some solutions to this, and getting the method info for DeserializeObject and calling it via reflection seemed to be the way to go. However, GetMethod() does not exist in WinRT and I was not able to find an alternative I could use to retrieve the generic version of DeserializeObject (fetching by name gives me the non-generic overload). I don't mind using reflection and GetMethod, as I can cache the methods and call them every time a message is received without having to resolve them every time.
So how do I achieve the latter part and/or is there another way to approach this?
Alright, I feel like this was not really a problem at all to begin with, as I discovered the DeserializeObject(string, Type, JsonSerializerSettings) overload of the method. It works splendidly. However, I would still like to hear some feedback on the approach. Do you think using attributes to resolve the type names is reasonable, or are there better ways? I don't want to use the class names directly, because I don't want to risk any sort of man-in-the-middle being able to initialize whatever it wants.
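In case it helps others, a minimal sketch of that overload in use (ResolveType stands in for the attribute-based lookup described above and is not shown here):
// Resolve the runtime type from the message header/attribute, then deserialize without generics.
Type resolvedType = ResolveType(message);
object item = JsonConvert.DeserializeObject(message.Content, resolvedType);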
We posted an alternative way to do what you want just a few minutes ago. Please take a look here; if you have any questions, feel free to ask:
Prblem in Deserialization of JSON
Try this:
http://json2csharp.com/
Paste your JSON string there and it will generate a class for you. Then:
public static T DeserializeFromJson<T>(string json)
{
    T deserializedProduct = JsonConvert.DeserializeObject<T>(json);
    return deserializedProduct;
}
var container = DeserializeFromJson<ClassName>(JsonString);
