I'm attempting to update my RestSharp calls to v107 and noticed a slight difference in how parameters are added. What I have working in 106 doesn't appear to work in 107.
What I have is that I'm adding an object with nested objects to a POST request. My object looks like this:
class CallParameters
{
    public CallResources Resources { get; set; }
    public CallFilters Filters { get; set; }
}
class CallResources
{
    public bool IncludeOrgData { get; set; }
    public bool IncludeDemoData { get; set; }
}
class CallFilters
{
    public string LastModified { get; set; }
    public List<string> Users { get; set; }
}
I know I can serialize that (once I set the fields) to look like:
{"Resources" : {"IncludeOrgData" : true, "IncludeDemoData" : true}, "Filters" : { "LastModifed" : "1/1/21", "Users" : ["User1", User2", "User3"]}}
With 106, I was able to create my REST call by serializing my object to a string and adding it via AddJsonBody, like this:
CallParameters NewCallParameters = new CallParameters();
// ... set the fields within the class ...
string JsonBody = JsonConvert.SerializeObject(NewCallParameters);
var request = new RestRequest { Method = Method.POST };
var client = new RestClient();
var response = client.Execute(request.AddJsonBody(JsonBody));
Worked great; it passed all the parameters correctly.
In 107 I know it's slightly different. Using the same serialize-then-AddJsonBody approach, I see a single parameter added with no name, whose value is the whole serialized object. This causes my call to return:
The request is invalid - An error has occurred
If I use 'AddObject' to add the entire CallParameters class as a whole, when I trace through and look at my request's parameters collection, I do see two parameters listed. They are named correctly ('Resources' and 'Filters'), but the values are 'MyApp.CallResources' and 'MyApp.CallFilters'. It's like they are not being serialized when added. This returns:
At least one additional resource is required: IncludeOrgData, IncludeDemoData
If I add the objects separately as 'CallResources' and 'CallFilters', I wind up with four parameters, one named for each of the fields within. The values are correct aside from the Users list, which just shows 'System.Collections.Generic.List[System.String]'. Having four parameters doesn't seem right either, and the call fails with invalid parameters. This returns:
The request is invalid - Resources field is required
I have experimented with 'AddParameter' some. If I add the whole class as one parameter I'm not sure what to name it, and the call fails since it has no name. If I add them separately like this:
string MyResources = JsonSerializer.Serialize(NewCallParameters.Resources);
string MyFilters = JsonSerializer.Serialize(NewCallParameters.Filters);
request.AddParameter("Resources", MyResources);
request.AddParameter("Filters", MyFilters);
I do get two parameters named correctly, and the values look good and show all the entries. The REST service I'm calling, though, returns:
At least one additional resource is required: IncludeOrgData, IncludeDemoData
So it seems it's not seeing the values in the 'Resources' parameter. Do I need to add each field as a separate parameter? How would I add nested parameters that way? I feel like I'm close but missing something.
It helps to read the documentation. RestSharp has built-in serialization, so don't serialize your object yourself when using AddJsonBody. As you need Pascal-case fields, you'd need to either change the JSON options for the default serializer or use the RestSharp.Serializers.NewtonsoftJson package.
If you prefer to add a pre-serialized string, use AddStringBody.
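For illustration, a minimal sketch of what the v107 call might look like (the base URL and resource name here are made up; check the exact extension methods against the version you have installed):
using RestSharp;
using RestSharp.Serializers.NewtonsoftJson; // only needed for UseNewtonsoftJson()

var client = new RestClient("https://api.example.com"); // hypothetical base URL
client.UseNewtonsoftJson();                             // Newtonsoft's defaults keep Pascal-case property names

var request = new RestRequest("orders", Method.Post);   // hypothetical resource; note Method.Post, not Method.POST
request.AddJsonBody(NewCallParameters);                 // pass the object itself; RestSharp serializes it

// Alternatively, if you already have a serialized string:
// request.AddStringBody(JsonBody, DataFormat.Json);

var response = await client.ExecutePostAsync(request);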
Okay, what do I need?
I'm looking for a class (an object defined by a schema) that has a maximal number of hard-defined fields, plus a dynamic way to "use" (create, read, update, delete) a "sub-object" of it.
Like this:
public class Books
{
    public int Id;
    public string Title;
    public string Isbn;
    public int Pages;
    public int Price;
    public string Author;
    public string DescriptionSmall;
    public string DescriptionLong;
    public string Publisher;

    // create constructor:
    public Books(int id, string title, string isbn, ...)
    {
        Id = id;
        Title = title;
        Isbn = isbn;
        ...
        // only the fields that were set should be usable
    }

    // add fields (only pre-defined ones should be possible)
    public void|bool Add(Dictionary<string, object> fields) // or a List<> overload for field names only (without values)
    {
        // add fields
    }

    // get a result (only pre-defined fields should be possible)
    public void|bool Get(Dictionary<string, object> fields)
    {
        // return as a sub-object
    }

    // delete fields
    public void|bool Delete(Dictionary<string, object> fields)
    {
        // delete fields
    }

    // update fields
    public void|bool Update(...)
    {
        ...
    }
}
// and then I can use it like an object, created by instance or whatever. :/
var smallBooks = new Books(id: 1, title: "Lord of tha weed"); // can use all methods, but for the moment only the created fields; more fields can be added via Add()...
I don't want to have hundreds of models/entities for all possible field combinations.
The problem is that I'm trying to update a database via a GraphQL server.
Is there a way to return a part of the object which is itself a "sub-object"?
Yes, I know I could also create a dynamic object with the help of ExpandoObject, or create a Collection/Dictionary to send.
It's important that the unused fields are not simply NULL, because some fields in my database are nullable and NULL is a valid value for them.
-------------------------[Addition: 2021-07-27]-------------------------
Okay, I obviously expressed myself ambiguously. I have a table in the DB and its fields are firmly defined. When updating data, however, I only need a few fields from the complete list. The problem: I would like to keep the selection of fields dynamic instead of having to create numerous entities or DTOs as part of the main model.
I'm looking for a simple way to create a dynamic sub-object in code that contains the same methods but only a (freely selectable) subset of the total fields.
Within my project it happens from time to time that data is collected and processed before finally being sent to the server as a dynamic subset, e.g. as an update (GraphQL update mutation). An existing data set (the main model) should reflect the current state of the database, but only individual fields are required for an update.
I just want to avoid having to create countless classes for all combinations and choices.
Is there a way to derive a dynamic partial selection from the main class which, for example, only contains 1 to x fields but possibly keeps a reference to the main class and its fields (kept in sync)? In other words: create an instance of the main class and dynamically derive further sub-objects from that object.
I used to simply create a Dictionary<string, object> with field names and values and use mainDict.Keys or mainDict.Select(f => f.Key / f.Value) on this selection in a complex method to create a "small copy" to update only specific fields (by uploading it to the server). But I'd like to use OOP with classes, instances and maybe dynamic objects (ExpandoObject). But I don't know how. ;) Sorry.
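For reference, a minimal sketch of that dictionary-based selection (the Book class and the Pick helper are only placeholders, not a finished design):
using System.Collections.Generic;
using System.Linq;

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int? Pages { get; set; }
    public string Author { get; set; }
}

public static class PartialUpdate
{
    // Copies only the requested properties into a dictionary, so fields that are
    // not selected are simply absent from the payload instead of being sent as NULL.
    public static Dictionary<string, object> Pick<T>(T source, params string[] fields) =>
        typeof(T).GetProperties()
                 .Where(p => fields.Contains(p.Name))
                 .ToDictionary(p => p.Name, p => p.GetValue(source));
}

// Usage: only Title and Pages end up in the update payload.
// var payload = PartialUpdate.Pick(myBook, nameof(Book.Title), nameof(Book.Pages));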
I'm open to any suggestion. If possible, simple and with little code, but as dynamic as possible.
To expand on my comment, a more complex setup would look like this:
In this example the Course has some fixed number of fields but expands on that using the CourseField (many-to-many) connection to connect several FormFields. The FieldValue entity holds the values for the fields, in combination with the UserFieldValue (many-to-many) for each user (in this case).
Maintaining all of this by hand is a lot of work, but you can use Entity Framework to maintain the many-to-many connections for you. No need to do all the coding yourself; it will be generated automatically.
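For illustration only, a rough sketch of how those entities might be laid out (EF Core style; the property names are guesses, not an actual schema):
using System.Collections.Generic;

public class Course
{
    public int Id { get; set; }
    public string Name { get; set; }                          // one of the fixed fields
    public List<CourseField> CourseFields { get; set; }       // many-to-many to FormField
}

public class FormField
{
    public int Id { get; set; }
    public string FieldName { get; set; }
    public List<CourseField> CourseFields { get; set; }
}

public class CourseField                                      // join entity: Course <-> FormField
{
    public int CourseId { get; set; }
    public Course Course { get; set; }
    public int FormFieldId { get; set; }
    public FormField FormField { get; set; }
    public List<FieldValue> Values { get; set; }              // values stored for this field
}

public class FieldValue
{
    public int Id { get; set; }
    public string Value { get; set; }
    public List<UserFieldValue> UserFieldValues { get; set; } // many-to-many to users
}

public class UserFieldValue                                   // join entity: FieldValue <-> User
{
    public int UserId { get; set; }
    public int FieldValueId { get; set; }
    public FieldValue FieldValue { get; set; }
}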
My task is to create a asp.net API in which:
User posts XML to my APi endpoint
API Passes the whole of the XML directly to a stored procedure
Stored procedure returns some XML back to the API for the user
Example call to api:
https://localhost:44308/api/OrderProcessing?<?xml version="1.0" encoding="utf-8"?><Param1>abc123</Param1><Param2>5</Param2><Param3>123456</Param3>etc.. etc
I've found many examples of calling a procedure from asp.net and all of them show the XML nodes being parsed into parameters to pass to the stored procedure:
usp_myproc @param1 = abc123, @param2 = 5, @param3 = 123456 etc.
I can't find out how to pass the XML as a single string, as shown below, e.g.
usp_myproc @param1 = '<?xml version="1.0" encoding="utf-8"?><Param1>abc123</Param1><Param2>5</Param2><Param3>123456</Param3>etc.. etc'
So far I have the following code, which of course just passes the first parameter (e.g. usp_myproc @param1), not the whole string:
namespace OrderProcessingApp.Controllers
{
    public class OrderProcessingController : ApiController
    {
        [HttpGet]
        public IHttpActionResult StockOrder(string param1)
        {
            StockFinderEntities sd = new StockFinderEntities();
            var results = sd.fn_ProcessOrder(param1).ToList();
            return Ok(results);
        }
    }
}
Any help very gratefully received
There's a lot to unpack here, and quite some information is missing; see also my comments on your question. There are answerable parts in your question, though things are far from optimal.
If I get this straight, you want this:
An API endpoint that accepts GET requests with a query string consisting entirely of an XML string.
In this action method, pass the XML string to a SQL user defined function.
This function parses the XML, does its thing, and returns zero or more records.
These records are then returned as a list to the caller.
So first things first, you've got to read that XML. Your query string appears to have no keys (it directly starts with ?<xml...>). Is this intentional, or an oversight in creating this question? Are you really sure you want to read XML from the query string, or can you change this design?
If you can change it, the idiomatic way would be to use query string parameters, which are then bound to your action method parameters:
// GET .../StockOrder?param1=42&param2=Foo&param3=Bar
public IHttpActionResult StockOrder(string param1, string param2, string param3) { ... }
If the XML were instead sent as a single query string parameter, you could bind it directly:
// GET .../StockOrder?xmlString=<xml...>
public IHttpActionResult StockOrder(string xmlString) { ... }
But to read the raw query string itself as XML, as in your example, you need no action method parameters at all:
// GET .../StockOrder?<xml...>
public IHttpActionResult StockOrder()
{
string xmlString = this.Request.RequestUri.Query;
...
}
So now you have the XML string, and you'll pass it to a user-defined function in your database. Since Entity Framework will pack it up in a SQL parameter, at least you'll be safe from SQL injection here:
// GET .../StockOrder?<xml...>
public IHttpActionResult StockOrder()
{
string xmlString = this.Request.RequestUri.Query;
if (xmlString?.Length > 0)
{
// Remove the question mark from the start
xmlString = xmlString.Substring(1);
}
var results = sd.fn_ProcessOrder(xmlString).ToList();
...
}
But this is still far from optimal! The caller could pass invalid XML, which would only be detected by SQL Server, which is way too late in the process. Or worse, it could be a specially crafted XML that SQL Server would choke on (like an XML bomb), or worse (executing arbitrary code contained within the XML). So I would not be a fan of sending user input in the form of XML directly to a SQL Server, no.
The optimal way would look like this:
public class GetStockOrderModel
{
[Required]
public string Param1 { get; set; }
[Required]
public string Param2 { get; set; }
[Required]
public string Param3 { get; set; }
}
// GET .../StockOrder?param1=42&param2=Foo&param3=Bar
public IHttpActionResult StockOrder([FromUri] GetStockOrderModel model)
{
if (!ModelState.IsValid)
{
// ...
}
var records = sd.fn_ProcessOrder(model.Param1, model.Param2, model.Param3).ToList();
var models = records.Select(o => new
{
FieldToReturn = o.Field1,
OtherField = o.Field2,
// ...
}).ToList();
return Ok(models);
}
Here you have a model to bind to and perform validation on, the database function is altered to accept actual parameters instead of XML, and a model decoupled from the database is returned.
Now if you say you cannot alter the function because other applications call it, you might still be able to. The function you call consists of two parts: one that reads the parameters from the XML, and one that executes the query using those parameters.
So you keep the original function, and let it call the new one once it extracted the parameters. You can then also call that new function, using the parameters you have in code.
And really, you don't want to pass user XML to your database server. Sure, it'll only be called by trusted parties, you'll say. But is everyone at those parties to be trusted? And don't they ever make mistakes?
So if you really don't want to change anything about the endpoint or the function, and you're going to use that fourth (or second) code block, at least read the XML string in your controller, parse it, extract the parameters, validate them, and rebuild the XML from a safe template string, or something like that.
Because to ASP.NET, that XML is treated as a plain old string, so you have none of its validation or security measures.
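To make that last suggestion concrete, here is a rough sketch of the "parse, extract, validate, rebuild" idea. It assumes the caller wraps its values in a single root element (the snippet in the question has several sibling roots, which XDocument would reject), and the element names and length checks are purely illustrative:
using System;
using System.Linq;
using System.Web.Http;
using System.Xml.Linq;

public class OrderProcessingController : ApiController
{
    [HttpGet]
    public IHttpActionResult StockOrder()
    {
        string raw = Uri.UnescapeDataString(this.Request.RequestUri.Query.TrimStart('?'));

        XDocument doc;
        try
        {
            doc = XDocument.Parse(raw); // reject malformed XML here, not in SQL Server
        }
        catch (System.Xml.XmlException)
        {
            return BadRequest("The request is not well-formed XML.");
        }

        string param1 = (string)doc.Root.Element("Param1");
        string param2 = (string)doc.Root.Element("Param2");
        if (string.IsNullOrEmpty(param1) || param1.Length > 50)
            return BadRequest("Param1 is missing or too long.");

        // Rebuild a clean document from the validated values before handing it to the database.
        var safeXml = new XDocument(
            new XElement("Order",
                new XElement("Param1", param1),
                new XElement("Param2", param2)));

        var sd = new StockFinderEntities(); // the EF context from the question
        var results = sd.fn_ProcessOrder(safeXml.ToString()).ToList();
        return Ok(results);
    }
}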
I'm trying to work with JSON files to store a Class and I'm stuck with the deserialization.
I'm using the following namespace:
using System.Text.Json.Serialization;
I have a very simple class, made of 2 properties:
public EnumOfType Type { get; set; }
public double Price { get; set; }
I have 4 instances of this class that I store in a list. When quitting the application, this list is saved to a JSON file.
string jsonString;
jsonString = JsonSerializer.Serialize(myListOfInstances);
File.WriteAllText(FileName, jsonString);
When I'm opening the Application, I want the JSON file to be loaded to recreate the instances.
I'm using the following method, which apparently works well.
string jsonString = File.ReadAllText(FileName);
myListOfInstances = JsonSerializer.Deserialize<List<MyClass>>(jsonString);
So far so good. When I check the content of the list, it is correctly populated and my 4 instances are there.
But then... how to use them?
Before using JSON, I was creating each instance explicitly (for example):
MyClass FirstInstance = new MyClass();
FirstInstance.Type = EnumOfType.Type1;
FirstInstance.Price = 100.46;
Then I could manipulate it easily, simply calling FirstInstance.
myWindow.Label1.Content = FirstInstance.Price.ToString("C");
FirstInstance.Method1...
Now that the instances are in my list, I don't know how to manipulate them individually because I don't know how to call them.
It's probably obvious to most, but I'm still in the learning process.
Thank you for your help,
Fab
Based on how you have loaded the JSON file into your program, it looks like your variable myListOfInstances already contains all four MyClass objects ready to go. At this point you can use List accessors (or Linq if you want to be fancy) and do things such as the following:
myListOfInstances[0] //Gives you the first item in the list accessed by index
myListOfInstances.First() //Gives you the first item in the list (using linq)
foreach (var item in myListOfInstances) {
    // this will iterate through all four items in the list,
    // storing each instance in the 'item' variable
}
etc...
EDIT: From my comment below. If you need to access values in a list directly, you can search for specific conditions in the list using LINQ's 'Where' method. The syntax is something like this:
myListOfInstances.Where(x => x.Property == SomePropertyToMatch)
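For example, with the class and window from the question (the property and control names come from there, so this is only a usage sketch):
using System.Linq;

// first instance of a given type, or null if none matches
var firstOfType1 = myListOfInstances.FirstOrDefault(x => x.Type == EnumOfType.Type1);
if (firstOfType1 != null)
{
    myWindow.Label1.Content = firstOfType1.Price.ToString("C");
}

// all instances above a certain price
var expensive = myListOfInstances.Where(x => x.Price > 100).ToList();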
I'm writing a Web API ApiController with several PUT methods that receive JSON data. The JSON is not deterministic and hence cannot be hard-mapped to a custom C# object; it needs to be received as Dictionaries/Sequences (Maps/Lists).
I have tried using an IDictionary for the data parameter of the PUT method in the controller, and this sort of works -- the data appears to be mapped from JSON to the dictionary. However, it's necessary to declare the dictionary as <String,Object>, and there's no clear way to then retrieve the Object values as their appropriate types. (I've found a few suggested kluges in my searching, but they are just that.)
There is also a System.Json.JsonObject type which I finally managed to get loaded via NuGet, but when I use that the system does not appear to know how to map the data.
How is this typically done? How do you implement an ApiController method that receives generic JSON?
I can see three basic approaches:
Somehow make Dictionary/Sequence work with Object or some such.
Make something like System.Json.JsonObject work, perhaps by swizzling the routing info.
Receive the JSON as a byte array and then parse explicitly using one of the C# JSON toolkits available.
(As to how dynamic the data is: JSON objects may have missing entries or extraneous entries, and in some cases a particular entry may be represented as either a single JSON value or a JSON array of values, where "value" is a JSON array, object, string, number, Boolean, or null. In general, except for the array/not-array ambiguity, the relation between keys and value types is known.)
(But I should note that this is a large project and I'll be receiving JSON strings from several other components by other authors. Being able to examine the received type and assert that it's as expected would be quite useful, and may even be necessary from a security standpoint.)
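To illustrate the third approach and the "examine the received type" idea, here is a sketch using Newtonsoft's JToken (the key name "users" is made up):
using Newtonsoft.Json.Linq;

public static class JsonShapeCheck
{
    public static void Inspect(string json)
    {
        JObject root = JObject.Parse(json);

        JToken users = root["users"];
        if (users == null)
        {
            // the entry is missing entirely
        }
        else if (users.Type == JTokenType.Array)
        {
            foreach (JToken u in (JArray)users)
            {
                // handle each value in the array
            }
        }
        else
        {
            string single = users.ToString(); // the entry arrived as a single value
        }
    }
}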
(I should add that I'm a relative novice with C# -- have only been working with it for about 6 months.)
You've got to know what kind of data you're expecting, but I have had success doing this in the past using dynamic typing.
Something like this:
[Test]
public void JsonTester()
{
string json = "{ 'fruit':'banana', 'color':'yellow' }";
dynamic data = JsonConvert.DeserializeObject(json);
string fruit = data["fruit"];
string color = data["color"];
Assert.That(fruit == "banana");
Assert.That(color == "yellow");
}
Edit:
You either need to know the type you want to deserialize to beforehand - in which case you can deserialize it to that type immediately.
Or you can deserialize it to a dynamic type, and then convert it to your static type once you know what you want to do with it.
using Newtonsoft.Json;
using NUnit.Framework;
public class DTO
{
public string Field1;
public int Field2;
}
public class JsonDeserializationTests
{
[Test]
public void JsonCanBeDeserializedToDTO()
{
string json = "{ 'Field1':'some text', 'Field2':45 }";
var data = JsonConvert.DeserializeObject<DTO>(json);
Assert.That(data.Field1 == "some text");
Assert.That(data.Field2 == 45);
}
[Test]
public void JsonCanBeDeserializedToDynamic_AndConvertedToDTO()
{
string json = "{ 'Field1':'some text', 'Field2':45 }";
var dynamicData = JsonConvert.DeserializeObject<dynamic>(json);
var data = new DTO { Field1 = dynamicData["Field1"], Field2 = dynamicData["Field2"] };
Assert.That(data.Field1 == "some text");
Assert.That(data.Field2 == 45);
}
}
I have a request to read string messages from a queue and "process them". Each message has a 4-digit "identifier/key" at the start, followed by a date, a time and another number... from then on, each message is different and requires different processing.
My thought was to use a factory to create an object of the required type and ALSO call the abstract constructor at the same time.
Is this a sensible approach to take?
If so...how?
e.g.
1000,2013-02-13,09:00:00,492,....................
4000,2013-02-13,09:00:01,492,....................
1000,2013-02-13,09:00:02,74664,....................
4003,2013-02-13,09:00:03,1010,....................
4000,2013-02-13,09:00:04,493,....................
To build objects of the classes:
Message1000 : AbstractMessage, IMessageThing
Message4000 : AbstractMessage, IMessageThing
Message4003 : AbstractMessage, IMessageThing
Where AbstractMessage contains a default constructor and properties for key, date, time, number etc.
Whether it makes sense depends on your requirements.
You could analyse the string like this:
// inside your actual factory method...
var lines = ...;
foreach (var line in lines)
{
    var tokens = line.Split(',');
    // for Split you can also specify the maximum number of items, in case the
    // '....' part can itself contain commas
    CreateMessageObject(tokens); // eventually add to a list of AbstractMessage or whatever
}
private static readonly Dictionary<string, Type> _typeMap;

static MessageFactory() // static constructor of the (here hypothetically named) factory class
{
    _typeMap = new Dictionary<string, Type>();
    _typeMap.Add("Message1000", typeof(Message1000));
    // todo: add the other message types
    // you could also write a method that uses the class name of the
    // type returned by typeof(XYZ) to ensure the correct value is used as the key
}

private static AbstractMessage CreateMessageObject(string[] tokens)
{
    // simple error checking
    if (tokens.Length != 5)
        return null; // todo: error handling

    var type = _typeMap["Message" + tokens[0]];
    var instance = (AbstractMessage)Activator.CreateInstance(type);
    instance.Date = DateTime.Parse(tokens[1]);
    instance.Time = DateTime.Parse(tokens[2]);
    // todo: initialize the other properties
    return instance;
}
Of course you still need to do some error handling, but I hope this gives you a good starting point.
The reason why I would use a dictionary is performance: Activator.CreateInstance is not very fast, and the lookup with Type.GetType is also slow.
Instead of using the Type as the value in the dictionary, you could also use something like this:
Dictionary<string, Func<IMessageThing>> _factories;
_factories = new Dictionary<string, Func<IMessageThing>>();
_factories.Add("Message1000", () => new Message1000());
and to create your object you could call:
var instance = _factories["Message1000"]();
Yes you can, and it is a correct and sensible approach. Things change a little depending on whether you can have a default constructor or not, and also on whether the constructor differs from one concrete implementation to another. The simplest approach is to have a parameterless constructor.
With this prerequisite you can have something like this:
Type t = Type.GetType(string.Format("Handlers.MyHandlers.Message{0}",messageType));
var handler = Activator.CreateInstance(t) as IMessageThing;
In order to pass the string to the message, you can have a function defined in the IMessageThing interface, let's call it Init, that you call immediately after the message creation; or, probably better, have a constructor taking a string in the AbstractMessage class and call it in the activator like this:
var handler = Activator.CreateInstance(t,body) as IMessageThing;
In the constructor of AbstractMessage, call an abstract function Init(string body), so each concrete message needs to implement its own parser.
Add some more error handling, and you're done.
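A short sketch of the constructor-plus-Init idea (IMessageThing and Message1000 follow the names in the question; the parsing details are illustrative):
using System;

public interface IMessageThing { }

public abstract class AbstractMessage : IMessageThing
{
    public string Key { get; protected set; }
    public DateTime Date { get; protected set; }

    protected AbstractMessage(string body)
    {
        // split into at most 5 parts so the trailing payload keeps its commas
        var tokens = body.Split(new[] { ',' }, 5);
        Key = tokens[0];
        Date = DateTime.Parse(tokens[1]);
        Init(tokens.Length == 5 ? tokens[4] : string.Empty); // concrete class parses the rest
    }

    protected abstract void Init(string body);
}

public class Message1000 : AbstractMessage
{
    public Message1000(string body) : base(body) { }

    protected override void Init(string body)
    {
        // message-specific parsing goes here
    }
}

// Created via the activator as described above:
// var handler = Activator.CreateInstance(t, body) as IMessageThing;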
One way is to split the string on ',' but set the max count to, say, 5; this should group all the values AFTER the number as one value:
var parts = your_string.Split(new char[] { ',' }, 5);
Then you just need to use Activator.CreateInstance() to create your message instance. For example:
Type type = Type.GetType(String.Format("Message{0}",parts[0]));
var instance = Activator.CreateInstance(type) as IMessageThing;
You can then fill out the rest of the properties from parts.
You can either pass each message to a handler; the handler will check whether it can handle this type of message. If so, it will parse it and return some object; otherwise it will return e.g. null and you will know to ask a different handler.
Or you build a parser that knows about the initial part that follows a common format, and then use a lookup table to find a specific message handler that will parse the remaining message and instantiate the correct type. (Pass the common parts to its constructor).
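A minimal sketch of the first option, a chain of handlers (the interface shape is made up; IMessageThing and Message1000 are the question's types, and Message1000 is assumed to have a constructor taking the raw line):
using System.Collections.Generic;
using System.Linq;

public interface IMessageHandler
{
    // returns null when this handler cannot handle the line
    IMessageThing TryParse(string line);
}

public class Message1000Handler : IMessageHandler
{
    public IMessageThing TryParse(string line) =>
        line.StartsWith("1000,") ? new Message1000(line) : null;
}

public static class MessageDispatcher
{
    // ask each handler in turn until one accepts the message
    public static IMessageThing Parse(string line, IEnumerable<IMessageHandler> handlers) =>
        handlers.Select(h => h.TryParse(line)).FirstOrDefault(m => m != null);
}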
I don't understand what you mean by "create an object of the required type and ALSO call the abstract constructor". There is no such thing as an abstract constructor. If you mean a constructor of an abstract base class, it is inevitable that it will get called when a subclass is instantiated.