When I use Eazfuscator and Newtonsoft.Json, it encrypts all my JSON data. Is there a way to decrypt it in C#, or to disable the encryption of the data?
I use virtualization and encryption with a password.
I worked with Eazfuscator back in 2020 and was able to integrate this nicely using the constructor.
One thing you MUST do, however, is use the [JsonProperty] name attribute, because obfuscated properties get renamed: you can't be sure what name a property will get, and you also can't guarantee that the next compilation/obfuscation generates the same name.
Anyway, it is easy enough to fix. Method 1: use the names from the JSON and map them via the constructor.
private class MyProprietaryData
{
    [JsonConstructor] // required so Json.NET uses this non-public parameterized constructor
    internal MyProprietaryData(string a1, int a2)
    {
        AccountNumber = a1;
        Code = a2;
    }

    [JsonProperty("a1")]
    public string AccountNumber { get; }

    [JsonProperty("a2")]
    public int Code { get; }
}
Method 2: use the JSON to populate the fields and not the properties.
private class MyProprietaryData
{
    [JsonProperty("a1")]
    string _accountNumber;

    [JsonProperty("a2")]
    int _code;

    internal MyProprietaryData(string accountNumber, int code)
    {
        _accountNumber = accountNumber; // assign the backing fields; the properties below are read-only
        _code = code;
    }

    [JsonIgnore]
    public string AccountNumber => _accountNumber;

    [JsonIgnore]
    public int Code => _code;
}
You can use all the combinations .NET gives you; the only thing to remember is that member renaming affects any run-time, reflection-based mapper, which is exactly what JSON serialization is. This works with the new System.Text.Json as well as with Newtonsoft.Json.
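For the System.Text.Json side, a minimal sketch (my own illustration, not part of the original setup) could look like the following. Note that System.Text.Json binds constructor parameters to properties by name, so that matching also has to survive your renaming settings; test it under obfuscation.
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class MyProprietaryData
{
    // System.Text.Json (.NET 5+) binds through the [JsonConstructor]-marked constructor;
    // parameter names must match the property names (case-insensitive).
    [JsonConstructor]
    public MyProprietaryData(string accountNumber, int code)
    {
        AccountNumber = accountNumber;
        Code = code;
    }

    [JsonPropertyName("a1")]
    public string AccountNumber { get; }

    [JsonPropertyName("a2")]
    public int Code { get; }
}

public static class Demo
{
    public static void Main()
    {
        var data = JsonSerializer.Deserialize<MyProprietaryData>("{\"a1\":\"0001-42\",\"a2\":7}");
        Console.WriteLine($"{data.AccountNumber} / {data.Code}"); // 0001-42 / 7
    }
}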
You didn't ask, but a free piece of advice: when accepting external data (JSON, XML, gRPC, ...), always assume the data may have been manipulated and validate it. One way is to carry a checksum field in the serialized data and verify it when you read it back; in the JSON case you can do that in the [JsonConstructor]-annotated constructor.
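A minimal sketch of that idea with Newtonsoft.Json; the names (SignedPayload, chk) and the checksum itself are made up for illustration, in real code use a proper hash or HMAC:
using Newtonsoft.Json;

internal class SignedPayload
{
    [JsonConstructor]
    internal SignedPayload(string a1, int a2, int chk)
    {
        // Recompute the checksum from the received values and reject tampered data.
        if (chk != ComputeChecksum(a1, a2))
            throw new JsonSerializationException("Checksum mismatch - payload was modified.");

        AccountNumber = a1;
        Code = a2;
        Checksum = chk;
    }

    [JsonProperty("a1")] public string AccountNumber { get; }
    [JsonProperty("a2")] public int Code { get; }
    [JsonProperty("chk")] public int Checksum { get; }

    private static int ComputeChecksum(string accountNumber, int code)
        => (accountNumber?.Length ?? 0) ^ code; // placeholder only
}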
I am making a .NET 6 project, and I need to accomplish a task against an external API.
Let's define a model to set up the context.
class MyModel
{
public bool field1;
public string field2;
}
So, that model will be used in two actions:
first, a GET request, which can NOT contain some of the fields above, and
second, a POST request, which also can NOT contain some of the fields above.
For example:
// GET
{
MyModel: {
field1: true
}
}
// POST
{
MyModel: {
field1: false,
field2: "some value"
}
}
So, when I perform a GET operation, I want to include some particular fields and send them to another external API; I read from API1 and send to API2.
When I perform a POST operation, I read from API2 and insert data into API1, but I just want to include some predefined fields.
My desire is to create a custom attribute to reflect that behaviour:
class MyModel
{
[Read, Write]
public bool field1;
[Write]
public string field2;
}
Also note that a field can be used for the GET operation as well as for the POST operation.
I am getting lost in how to implement this behaviour with the Newtonsoft.Json package, i.e. how to make it understand which specific fields to serialize and deserialize based on the operation.
My first approach is to define Read and Write interfaces: when a write request is made to the external service, it serializes the fields that have a Read annotation to send the data, and when a read request is made, the model knows how to deserialize the content into the attributes that have a Write annotation.
As you may notice, the concepts are inverted: Read means sending data to the external API (because it is read from another, internal API of the project), and Write means reading data from the external API (because it will be inserted into our internal API).
Thanks.
I would suggest simply going with two models.
A Read Model
class MyReadModel
{
public bool field1;
}
And a Write Model
class MyWriteModel
{
public bool field1;
public string field2;
}
This makes it very clear.
Think of other situations: e.g. if you want to generate API documentation with Swagger, you would also have to deal with your custom attributes there.
In your controller you will have two methods:
[YourAllowAttribute]
[HttpGet]
public async Task<ActionResult<MyReadModel>> Get()
{
// stuff
}
[YourPostAttribute]
[HttpPost]
public async Task<ActionResult> Write(MyWriteModel model)
{
// stuff
}
Security trimming can be achieved by implementing two attributes or policies e.g. [YourAllowAttribute] or [YourPostAttribute].
With this approach you will not mix concerns in your model.
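To connect this back to the external APIs from the question, here is a rough sketch of where each model would be used with Newtonsoft.Json; the ExternalApiClient class and the URLs passed to it are made up for illustration:
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class ExternalApiClient
{
    private readonly HttpClient _http = new HttpClient();

    // GET from the external API: only the read model's fields are expected back.
    public async Task<MyReadModel> GetAsync(string url)
    {
        var json = await _http.GetStringAsync(url);
        return JsonConvert.DeserializeObject<MyReadModel>(json);
    }

    // POST to the external API: the write model carries field1 and field2.
    public async Task PostAsync(string url, MyWriteModel model)
    {
        var content = new StringContent(JsonConvert.SerializeObject(model), Encoding.UTF8, "application/json");
        await _http.PostAsync(url, content);
    }
}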
I am trying to get around the fact that C# prefers to have classes generated (I know they are easy to generate, but currently my format and parameters are changing a lot due to development on both the client and server end).
An example of what I most often find when I look up how to deserialize: you first have to know the exact structure, then build a class, and then you can refer to it later on (it's fine, but it's just not what I would like to do):
Json format 1:
[{"FirstName":"Bob","LastName":"Johnson"},{"FirstName":"Dan","LastName":"Rather"}]
public class People
{
public string FirstName { get; set; }
public string LastName { get; set;}
}
public List<People> peopleList;
. . . // (same as first example)
//Change the deserialize code to type <List<Class>>
peopleList = deserial.Deserialize<List<People>>(response);
That of course is easy as long as the reply doesn't change format. If, for example, the reply changes to include a nested field:
Json format 2:
[{"FirstName":"Bob","LastName":"Johnson"},{"data":{"nestedfield1"
:"ewr"} }]
I would of course have to change the class to represent that, but at the moment we are moving back and forth between formats, and I would like it if there were a way to access JSON elements directly in the string.
For example, like I can do in Python:
mystring1 = reply["firstName"]
mystring2 = reply["data"]["nestedfield1"]
Is there any way to achieve this in C#? It would speed up development a lot if I could access the data without first defining the structure in code and then referencing the class members that were created for it.
And note it's for rapid development, not necessarily for the final implementation, where I can see the advantages of the class approach.
Another way of asking: can it deserialize JSON of any shape and dynamically build up a structure that I can access with named keys rather than as class members?
To deserialize JSON without defining classes you can use Newtonsoft.Json.
Here's the code:
using System;
using Newtonsoft.Json;
using System.Text;
public class Program
{
public static void Main()
{
var myJSONString = "[{\"FirstName\":\"Bob\",\"LastName\":\"Johnson\"},{\"FirstName\":\"Dan\",\"LastName\":\"Rather\"}]";
dynamic obj = JsonConvert.DeserializeObject<dynamic>(myJSONString);
Console.WriteLine(obj[0].FirstName);
}
}
The obj will behave the same way as the objects you get when generating classes;
it can take any JSON string and deserialize it into a dynamic object regardless of the structure of the JSON. Keep in mind that you won't get VS IntelliSense support.
UPDATE
Here's fiddle:
https://dotnetfiddle.net/xeLDpK
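If you prefer the LINQ-to-JSON types over dynamic, the same named-key access from the Python example also works with JArray/JToken indexers; a small sketch of my own using the question's second (nested) format:
using System;
using Newtonsoft.Json.Linq;

public class Program
{
    public static void Main()
    {
        var json = "[{\"FirstName\":\"Bob\",\"LastName\":\"Johnson\"},{\"data\":{\"nestedfield1\":\"ewr\"}}]";

        JArray reply = JArray.Parse(json);

        // Index with named keys, much like the Python example.
        string firstName = (string)reply[0]["FirstName"];
        string nested = (string)reply[1]["data"]["nestedfield1"];

        Console.WriteLine(firstName); // Bob
        Console.WriteLine(nested);    // ewr
    }
}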
I have a JSON string (array) like this:
[{"user":{"name":"abc","gender":"Male","birthday":"1987-8-8"}},
{"user":{"name":"xyz","gender":"Female","birthday":"1987-7-7"}}]
and I want to parse it into a JSON object using the .NET 4.0 framework only. I cannot use DataContractJsonSerializer because it requires a class, and the data I receive over web services (roughly every minute) keeps changing and is in name-value format. I tried using JavaScriptSerializer, but I am unable to add System.Web.Extensions to my VS2010 project targeting .NET 4.0, and I don't want to use any third-party library. I am actually a newbie in WPF, so any help would be greatly appreciated. Thanks in advance!
Well, there are two main issues I can see here, plus one additional note.
1) If you do not have a particular class that you could deserialize the JSON to, then you have to rely on some "dictionary-like" structure (e.g. dynamic or JToken) to access all the fields; a short sketch follows after point 2. However, the data you presented seems to be structured, so consider creating a POCO to get the advantage of a strongly-typed structure. Both can easily be achieved using ready-to-use libraries.
2) You say you don't want to use any third-party library, but actually there is nothing wrong with that. In fact you should do so, to avoid reinventing the wheel, as Tewr mentioned. It's perfectly fine to use an industry-standard library such as Newtonsoft.Json so you can avoid tons of bugs, unnecessary work and future trouble. If your point is to learn by writing a JSON (de)serializer, that's perfectly fine, but I'd recommend against using it in production code.
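As a rough illustration of the JToken option from point 1 (assuming Newtonsoft.Json is allowed, as argued above), with the JSON from the question:
using System;
using Newtonsoft.Json.Linq;

public class Program
{
    public static void Main()
    {
        var json = "[{\"user\":{\"name\":\"abc\",\"gender\":\"Male\",\"birthday\":\"1987-8-8\"}}," +
                   "{\"user\":{\"name\":\"xyz\",\"gender\":\"Female\",\"birthday\":\"1987-7-7\"}}]";

        // Parse the top-level array and read nested values by name, no classes required.
        JArray entries = JArray.Parse(json);
        foreach (JToken entry in entries)
        {
            Console.WriteLine((string)entry["user"]["name"]); // abc, xyz
        }
    }
}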
Side note: you mentioned you receive the data over a web service, and it seems you receive a plain JSON array as the top-level object. That is considered a security hole. More information may be found here:
https://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/
What are "top level JSON arrays" and why are they a security risk?
EDIT 2017-11-05:
OK, so you should create classes representing the response from your web service (you can use the VS feature Edit > Paste Special > Paste JSON As Classes):
public class Entry
{
    public User user { get; set; }
}
public class User
{
    public string name { get; set; }
    public string gender { get; set; }
    public string birthday { get; set; }
}
Now install the Newtonsoft.Json package via NuGet, and with the following code you'll deserialize the JSON response (a top-level array) into those .NET classes:
string responseText = ""; // Get it from the web service
var response = JsonConvert.DeserializeObject<List<Entry>>(responseText);
Hope this helps!
I am using a 3rd-party server that exposes an API via REST (so it is not possible to change the JSON). The JSON it returns is in a format like:
[
{
"#noun":"tag",
"#version":0,
"#tag":"myFoo"
}
]
I created a C# object to represent this item
public class ResponseItem
{
public string noun {get;set;}
.....
}
however, when I try to use the JavaScriptSerializer to deserialize this object, the properties do NOT get assigned. The serializer seems to be unable to handle the properties with the # symbol in front of the name.
Any ideas on how to solve this?
Ok, so after some finagling, I ditched the JavaScriptSerializer. I switched over to the DataContractJsonSerializer. I then use well defined data contracts and use the DataMember attribute to specify the name.
i.e.
[DataContract]
public class ResponseItem
{
[DataMember(Name="#noun")]
public string Noun {get;set;}
....
}
There may be a better/different way, but this works and is an acceptable solution.
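For completeness, the actual deserialization call might look roughly like the sketch below; ResponseParser is just a hypothetical helper, and since the response is a top-level JSON array it maps to a list:
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

public static class ResponseParser
{
    // Deserialize the top-level JSON array into a list of ResponseItem.
    public static List<ResponseItem> Parse(string json)
    {
        var serializer = new DataContractJsonSerializer(typeof(List<ResponseItem>));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (List<ResponseItem>)serializer.ReadObject(stream);
        }
    }
}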
I'm currently attempting to build a service to retrieve and serialize a Sitecore data item to JSON, so our Javascript code can access Sitecore content data.
I've tried serializing the object directly with JavaScriptSerializer and JSON.NET; both broke on recursion, most likely due to the various circular references in the child properties.
I've also attempted to serialize the item to XML (via item.GetOuterXml()) and then convert the XML to JSON. The conversion worked fine, but it only retrieves fields that were set on the item itself, not the fields that were set in the _standardvalues. I tried calling item.Fields.ReadAll() before serializing, as well as a foreach loop with calls to item.Fields.EnsureField(Field.id); neither resulted in retrieving the missing fields. Debugging the code, the Fields array appears to contain all the fields inherited from the base template as well as the ones set on the item, so I'm guessing GetOuterXml simply ignores all fields that weren't set specifically on the item.
The more I look at this, the more it looks like I'm going to need a custom model class to encapsulate the data item and the necessary fields, decorate it with the appropriate JSON.Net serialization attributes, and serialize from there. This feels like a dirty hack though.
So before I go down this road; I wanted to know if anyone here had experience serializing Sitecore content items to JSON for client-side consumption, and is there an easier way that I'm missing. Any constructive input is greatly appreciated.
Cheers,
Frank
I would suggest pursuing your approach of creating a custom model class to encapsulate just the item data you need to pass to the client. Then serialize that class to JSON. This cuts down on the amount of data you're sending over the wire and allows you to be selective about which data are being sent (for security reasons).
The CustomItem pattern and partial classes lend themselves to this approach very well. In the code samples below, the .base class is your base custom item wrapper. You can use this class to access fields and field values in a strongly-typed manner. The .instance class could be used for JSON serialization.
By splitting out the properties you want serialized, you have granular control over the data being sent back to the requesting client and you don't have to worry as much about circular references. If you need to make any changes to field definitions, you could simply change your .base class with minimal impact on your JSON serialization.
Hope this helps!
MyCustomItem.base.cs
public partial class MyCustomItem : Sitecore.Data.Items.CustomItem
{
public const string TitleFieldName = "Title";
public MyCustomItem(Item innerItem) : base(innerItem)
{
}
public static implicit operator MyCustomItem(Item innerItem)
{
return innerItem != null ? new MyCustomItem(innerItem) : null;
}
public static implicit operator Item(MyCustomItem customItem)
{
return customItem != null ? customItem.InnerItem : null;
}
public string Title
{
get { return InnerItem[TitleFieldName]; }
}
}
MyCustomItem.instance.cs
[JsonObject(MemberSerialization.OptIn)]
public partial class MyCustomItem
{
[JsonProperty("Title")]
public string JsonTitle
{
get { return Title; }
}
}
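Usage could then be as simple as the sketch below (ItemJson and the way the item is obtained are my own example); because of the opt-in serialization, only the [JsonProperty] members end up in the JSON, so the circular references inside InnerItem never reach the serializer:
using Newtonsoft.Json;
using Sitecore.Data.Items;

public static class ItemJson
{
    public static string ToJson(Item item)
    {
        MyCustomItem custom = item;                  // implicit conversion from Item
        return JsonConvert.SerializeObject(custom);  // e.g. {"Title":"..."}
    }
}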
I wonder if you wouldn't be better off using an XSLT to recursively build the JSON?