I hope someone can help me. This is code I use to retrieve a bunch of data about a particular device through an API (in this case, a Twinkly light string).
Here's my code, which is partially functional.
HttpResponseMessage result = await httpClient.GetAsync(uri);
string response = await result.Content.ReadAsStringAsync();
JObject jObject = JObject.Parse(response);
Layout layout = new Layout();
layout = JsonConvert.DeserializeObject<Layout>(response);
I say it's "partially" functional because every property at the root level deserializes into the model just fine, but the JSON also returns a property called "coordinates", which consists of an array entry for each bulb, and each entry has three values for x, y, z.
I have tried a lot of things to get the data out of the coordinates array, and in break mode I can see that the data is in there.
However, it doesn't deserialize properly. I have the correct number of elements in the coordinates array, but they are all x:0, y:0, z:0.
Here is my model schema. I hope someone can help me with this. This is my first foray into API work, and the first time I've had a nested model like this.
internal class Layout
{
public int aspectXY { get; set; }
public int aspectXZ { get; set; }
public LedPosition[] coordinates { get; set; }
public string source { get; set; } //linear, 2d, 3d
public bool synthesized { get; set; }
public string uuid { get; set; }
}
internal class LedPosition
{
double x { get; set; }
double y { get; set; }
double z { get; set; }
}
Note: I've tried assigning the properties manually like this:
JToken dataToken = jObject.GetValue("coordinates");
and that indeed retrieved the data, but it didn't help me; it merely moved the issue.
You don't need to both parse and deserialize at the same time; this is enough:
var response = await result.Content.ReadAsStringAsync();
var layout = JsonConvert.DeserializeObject<Layout>(response);
To make the LedPosition properties visible to the serializer, make them public too:
public class LedPosition
{
public double x { get; set; }
public double y { get; set; }
public double z { get; set; }
}
Since it is used by another class, this class should be public too:
public class Layout
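With both classes public, the single deserialization call should populate the nested array. A quick sanity check, reusing the response string from the question (the loop is just illustrative):
var layout = JsonConvert.DeserializeObject<Layout>(response);
foreach (var led in layout.coordinates)
{
    // each entry should now carry the real values instead of 0, 0, 0
    Console.WriteLine($"x={led.x}, y={led.y}, z={led.z}");
}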
One thing I've learned recently from the big-dog CTO at my work: you can copy the JSON you're expecting, go to Edit -> Paste Special -> Paste JSON As Classes in Visual Studio, and it will paste it as the classes you need, with the proper names and properties. Really slick. Maybe try that and see if it comes out with a different model than what you have now.
"This is my first foray into API work"
Two things I want to point out, then:
1. Does the API you're using publish a Swagger/OpenAPI document?
No - see point 2 below.
Yes - take a look at tools like NSwag (Studio), AutoRest and others. You feed the swagger.json into them and they crank out a few thousand lines of code that create a client which does all the HTTP calling, the deserializing, the data classes and so on. It means your code would end up looking like:
var client = new TwinklyLightClient();
var spec = client.GetTwinklyLightSpec();
foreach (var coord in spec.Coords)
    Console.Write(coord.X);
This is how APIs are supposed to be; the tools that create them operate to rules, and the tools that describe them operate to rules, so consuming them can also be done by tools operating to rules. Writing boilerplate JSON and HTTP request bodies is a job for a computer, because it's repetitive and always follows the same pattern.
2. The API doesn't publish a spec we can use to get the computer to write the boring bits for us. Durn. Well, you can either write the spec yourself (not so hard) or go slightly more manual:
Take your JSON (view it raw and copy it).
Go to any one of a number of websites that turn JSON into code - I like http://QuickType.io because it does a lot of languages, has a lot of customization and gives advanced examples of custom type deserialization, but there are others - and paste that JSON in.
Instantly it's transformed into, for example, C# and can be pasted into your project.
It gives an example of how to use it in the comments - a one-liner, something like:
var json = httpCallHereTo.GetTheResponseAsJsonString();
var twinklyLightSpec = TwinklyLightSpec.FromJson(json);
Yes, Visual Studio can make classes from JSON, but it's not very sophisticated - it does the job, but these JSON-to-C# sites go further: they let you choose arrays or lists, name the root object, and decorate every property with a JsonProperty attribute that specifies the JSON name while keeping the C# property to C# naming conventions (or letting you rename it to suit you)...
...and they work out of the box, which would resolve the problem you're having right now.
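For illustration only (the exact output varies by tool and by the JSON you feed in), the generated code is roughly shaped like this - note the JsonProperty attributes doing the name mapping, and the FromJson helper the generators typically emit:
using Newtonsoft.Json;

public partial class TwinklyLightSpec
{
    [JsonProperty("coordinates")]
    public Coord[] Coords { get; set; }

    // one-liner entry point, as mentioned above
    public static TwinklyLightSpec FromJson(string json) =>
        JsonConvert.DeserializeObject<TwinklyLightSpec>(json);
}

public partial class Coord
{
    [JsonProperty("x")]
    public double X { get; set; }

    [JsonProperty("y")]
    public double Y { get; set; }

    [JsonProperty("z")]
    public double Z { get; set; }
}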
I am using .NET Framework and ASP.NET Core to create a REST Web API.
This Web API has a call that receives a request model to save data, and another call that later retrieves the data.
Most of the data is structured information I need in the backend and it is saved into different fields and tables in the database. On retrieval it is loaded from those tables and returned.
This all works.
However, I now have a requirement where the caller wants to save and later retrieve arbitrary data (let's just say a random JSON document) as one of those fields. I can save and load JSON from the database, that is not a problem; my problem is how to build the Web API model for my request.
[HttpPost]
public IActionResult Save([FromBody] ApiCallRequestModel request)
{
// ...
}
public sealed class ApiCallRequestModel
{
// structured, well known information
public int? MaybeSomeNumber { get; set; }
[Required]
public string SomeText { get; set; }
[Required]
public SubModel SomeData { get; set; }
// one field of unknown json data
public ??? CustomData { get; set; }
}
I could think of dynamic or maybe even ExpandoObject or JObject to try, and I might, but I would like a solution that works because it's best practice, not just because I tried it and it didn't fail today with my simple tests.
If everything else fails, I could just make the field a string and tell the client to put serialized json into it. But that's a workaround I would see as a last resort if this question yields no answers.
It has proven to be extremely hard to google this topic, since all the words I would use lead me to pages explaining JSON serialization of the request model itself. I know how that works and it's not a problem. The mix of structured data and free-form JSON is what I cannot find out about from a somewhat authoritative source.
So what type would you use here, what is the best practice for receiving arbitrary json in one property of your model?
So to sum this up: as suggested, I used a JToken from the Json.NET NuGet package, since I already had that package in my project.
[HttpPost]
public IActionResult Save([FromBody] ApiCallRequestModel request)
{
// ...
}
public sealed class ApiCallRequestModel
{
// structured, well known information
public int? MaybeSomeNumber { get; set; }
[Required]
public string SomeText { get; set; }
[Required]
public SubModel SomeData { get; set; }
// one field of unknown json data
public JToken CustomData { get; set; }
}
Works like a charm.
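If you need to persist that field afterwards, the JToken round-trips to and from a plain JSON string; a minimal sketch, assuming Json.NET and the request model above:
// saving: turn the arbitrary payload into a string for the database column
string customJson = request.CustomData?.ToString(Newtonsoft.Json.Formatting.None);

// loading: parse the stored string back into a JToken for the response model
var restored = Newtonsoft.Json.Linq.JToken.Parse(customJson);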
Excuse me right off the bat. I am sort of new.
I have an object with a few properties, one of which is itself a List. We do not know how big the list in the input payload will be (it could be 1,000 entries, it could be 100,000). We are logging this request payload before we process it.
We use _logger.Verbose("Some String... {@Payload}", payload);
When we log, Serilog saves it to a text file as JSON, with huge values.
Now, when the input is too big, the logger tries to log, fails, and retries many times because of the big payload.
I am looking for a way to split the list, or do some looping, and log it in pieces. I don't know how to do this in C#. I tried googling and researched a lot, but to no avail. I found LINQ's Skip and Take methods but I'm unsure how to use them.
Code below. In this case, imagine the Models list has 1,000 or 100,000 entries; it could be anything. I am just looking for loop logic in C# to divide it into reasonably sized chunks and process them.
public class Make
{
public int ID { get; set;}
public string Name { get; set;}
public string Category { get; set;}
public List<Model> Models { get;set;}
}
public class Model
{
public string Name { get; set;}
public string County { get; set;}
public string Submodel { get; set;}
}
public void ProcessCars(Make make)
{
    _logger.Verbose("Some String... {@Make}", make);
    // Processing
    // ...
}
I understand that your purpose is to view or debug the values in your list.
If I were you, I would ask myself a few questions:
Do I need to write all the values? Why can't I filter before logging?
What's the purpose of writing to a text file when you can log to a database? Serilog supports DB logging.
Is it best practice to log large values to a text file?
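That said, if you still need the whole payload, logging it in chunks keeps each entry small. A rough sketch using LINQ's Skip and Take inside the question's ProcessCars method (requires using System.Linq; the batch size is just illustrative):
const int batchSize = 100;
for (int i = 0; i < make.Models.Count; i += batchSize)
{
    // materialize one slice of the list and log it as its own entry
    var batch = make.Models.Skip(i).Take(batchSize).ToList();
    _logger.Verbose("Models batch {BatchNumber}: {@Batch}", i / batchSize, batch);
}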
Suppose I have a model with 20 fields, and on my index page I want to list all models stored in my database.
On the index page, instead of listing all the fields of the model, I only want to list 3 fields.
So, I made two classes:
class CompleteModel {
public int Id { get; set; }
public string Field01 { get; set; }
public string Field02 { get; set; }
public string Field03 { get; set; }
public string Field04 { get; set; }
public string Field05 { get; set; }
...
public string Field20 { get; set; }
}
Now, in my controller, I can use:
await _context.CompleteModel.ToListAsync();
but I feel that this does not seem to be the right way to do it, because I'm fetching all the fields and using only 3.
So, I wrote this code:
class ViewModel {
public string Field02 { get; set; }
public string Field04 { get; set; }
public string Field08 { get; set; }
}
var results = await _context.CompleteModel.Select(
    x => new {
        x.Field02,
        x.Field04,
        x.Field08
    }).ToListAsync();
var listResults = new List<ViewModel>();
if (results != null)
{
    listResults.AddRange(results.Select(x => new ViewModel
    {
        Field02 = x.Field02,
        Field04 = x.Field04,
        Field08 = x.Field08
    }));
}
I think this is a lot of code to do something so simple.
First I selected all the fields I want, and then I copied everything into another object.
Is there a more direct way to do the same thing?
Something like:
_context.CompleteModel.Select(x => new ViewModel { Field02, Field04, Field08 });
You could use AutoMapper to reduce the boilerplate so you're not manually copying field values over.
If you include the AutoMapper NuGet package, you'd need the following in your startup somewhere to configure it for your classes:
Mapper.Initialize(cfg => cfg.CreateMap<CompleteModel, ViewModel>());
You could then do something like the following:
var results = await _context.CompleteModel.ToListAsync();
var viewModelResults = results.Select(Mapper.Map<ViewModel>).ToList();
There are a lot of configuration options for the package so do take a look at the documentation to see if it suits your needs and determine the best way to use it if it does.
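Depending on your AutoMapper version, the ProjectTo extension from AutoMapper.QueryableExtensions may also be worth a look: it pushes the mapping into the query itself, so only the mapped columns are fetched. A sketch, assuming the map configured above and a using AutoMapper.QueryableExtensions; directive (exact overloads vary by version):
var viewModelResults = await _context.CompleteModel
    .ProjectTo<ViewModel>(Mapper.Configuration) // mapping is translated into the SELECT
    .ToListAsync();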
In my view this is one of the weaknesses of over-abstraction and layering. The VM contains the data that is valuable to your application within its context of use (screen, process, etc.). The data model contains all the data that could be stored and might be relevant. At some point you need to match the two.
Use EF projection to fetch only the data you need from the database into projected data-model classes (using the EF POCO layer to define the query, but not to store the resulting data), as sketched below.
Map the projected classes onto your VM, if there is a naive mapping, using AutoMapper or similar. However, unless you are just writing CRUD screens, a simple field-by-field mapping is of little value; the data you fetch from your data store via EF is in its raw, probably relational form, and the data required by your VM is probably not going to fit that form very neatly (again, unless you are doing a simple CRUD form). So you are going to need to add some value by coding the relationship between the data store and the view model.
I think concentrating on the number of lines of code leads to the wrong approach. Look at that code and ask "is it adding any value?". If you can delegate the task to AutoMapper, then great; but if all you can consistently delegate is copying data from the data model to the VM, your VM isn't really pulling its weight beyond adding some validation annotations.
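For the specific three-field case in the question, the projection step can go straight into the view model, which is about as direct as it gets. A minimal sketch using the ViewModel class from the question:
var listResults = await _context.CompleteModel
    .Select(x => new ViewModel
    {
        // EF translates this into a SELECT of just these three columns
        Field02 = x.Field02,
        Field04 = x.Field04,
        Field08 = x.Field08
    })
    .ToListAsync();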
I am working on a REST API for a project using Visual Studio 2013 with C# and ASP.NET, and I need some guidance.
When the webpage performs a POST, I am passing along a number of fields as a JSON object. By defining a data transfer object in my C# code, I can easily read the values from the JSON, but only if I define all the fields (with the same names).
Here is my current (working) code:
public class AgencyPostDTO
{
public string AgencyName { get; set; }
public string Address1 { get; set; }
public string Address2 { get; set; }
public string City { get; set; }
public string State { get; set; }
public string ZIP { get; set; }
}
// POST: api/Agency
public string Post(AgencyPostDTO Agency)
{
int success;
success = SQLUpdateAgency(Agency);
if (success < 1)
{
return "Failed";
}
else
{
return "Success";
}
}
So far no problems. I need to pass the data over to a second function, where I will perform some data processing (including converting the data into XML) and send the data/XML to MS SQL using a stored procedure:
public int SQLUpdateAgency(AgencyPostDTO Agency)
{
string xml = Agency.SerializeObject();
    // ... code to call the SQL stored procedure omitted here
}
Now to my problem. I would prefer not to have to define the properties of the data transfer object AgencyPostDTO in the code; instead, the code would just read the incoming JSON and pass it along to the next function, where I create the XML containing everything that was passed along.
As it works now, if the JSON contains, for example, an email address field, it will be dropped unless I define it in AgencyPostDTO.
So why do I want to do this? For future ease of maintenance. The users may come and say they want to add fields to the web form. I could then simply have our SQL expert add that column to the table and give me its name, and I would add an input field to the HTML form and make sure it is included in the JSON sent over. That way we never have to touch the already written, tested and working code. The new field is simply passed through the whole process.
Can this be done? If so, any suggestions on how?
If you use JSON.NET to handle the deserialization of your objects, it has support for dynamic properties. Once you've read your JSON string, you can convert it to a JArray or JObject and, from there, use the .Children() call to get a list of all the properties and convert it to whatever XML object you need.
Have a look here:
Deserialize json object into dynamic object using Json.net
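A minimal sketch of that idea, assuming Json.NET and that json holds the raw request body (the root element name passed to DeserializeXmlNode is arbitrary):
// bind to a JObject instead of a fixed DTO, so unknown fields survive
JObject agency = JObject.Parse(json);
foreach (JProperty prop in agency.Properties())
{
    Console.WriteLine($"{prop.Name} = {prop.Value}");
}

// or let Json.NET build the XML directly, wrapped in a root element
var xmlDoc = JsonConvert.DeserializeXmlNode(json, "Agency");
string xml = xmlDoc.OuterXml;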
I recently encountered a (hopefully) small issue while toying around with a Web API project that involves returning object graphs so that they can be read as JSON.
Example of a Task object (generated through EF):
//A Task Object (Parent) can consist of many Activities (Child Objects)
public partial class Task
{
public Task()
{
this.Activities = new HashSet<Activity>();
}
public int TaskId { get; set; }
public string TaskSummary { get; set; }
public string TaskDetail { get; set; }
public virtual ICollection<Activity> Activities { get; set; }
}
Within my ApiController, I am requesting a specific Task (by Id) along with all of its associated Activities, via:
Example of a single Task request:
//Simple example of pulling an object along with the associated activities.
return repository.Single(t => t.Id == id).Include("Activities");
Everything appears to be working fine - however, when I attempt to navigate to a URL to access this, such as /api/tasks/1, the method executes as it should, but no object is returned (just a simple "cannot find that page").
If I request a Task that contains no Activities, everything works as expected and it returns the proper JSON object with Activities: [].
I'm sure there are many ways to tackle this issue - I just thought I would get some insight as to what people consider the best way of handling it.
Considered Methods (so far):
Using an alternative JSON serializer (such as Newtonsoft.Json), which fixed the issue but appended $id and $ref properties throughout the returned data, which I believe could make parsing for Knockout difficult.
Using projection and leveraging anonymous types to return the data. (Untested so far)
Removing the Include entirely and simply accessing the Child Data through another request.
Any and all suggestions would be greatly appreciated.
I had a similar issue with EF types and Web API recently. Depending on how your generated EF models are set up, the navigation properties may result in circular dependencies. So if your generated Activity class has a Task reference, the serializer will try to walk the object graph and get thrown into a nasty little cycle.
One solution would be to create a simple view model to get the serializer working:
public class TaskViewModel {
public TaskViewModel ()
{
this.Activities = new List<ActivityViewModel>();
}
public int TaskId { get; set; }
public string TaskSummary { get; set; }
public string TaskDetail { get; set; }
public virtual IList<ActivityViewModel> Activities { get; set; }
}
public class ActivityViewModel{
public ActivityViewModel()
{
}
//Activity stuff goes here
//No reference to Tasks here!!
}
Depending on what you're doing, you may even be able to create a flatter model than this, but removing the Task reference will help the serialization. That's probably why it worked when Activities was empty.
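A minimal sketch of filling that view model from the EF entity (assuming task is the entity loaded in the question's controller action, and that the action now returns the view model instead of the entity):
var model = new TaskViewModel
{
    TaskId = task.TaskId,
    TaskSummary = task.TaskSummary,
    TaskDetail = task.TaskDetail
};

foreach (var activity in task.Activities)
{
    // copy whichever Activity fields the client needs - but no Task back-reference
    model.Activities.Add(new ActivityViewModel());
}

return model; // serializes cleanly because the graph has no cycles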