How do I convert JSON to JSON-LD in .NET (C#)?

I am trying to convert JSON to JSON-LD. So far I have tried the json-ld.net library from NuGet (it is part of NuGet v3): https://www.nuget.org/packages/json-ld.net/
var jtoken = JsonLD.Util.JSONUtils.FromString(response);
var options = new JsonLdOptions();
options.SetBase("http://json-ld.org/test-suite/tests/");
options.SetProduceGeneralizedRdf(true);
var context = JsonLD.Util.JSONUtils.FromString(Properties.Resources.jasonldcontext);
options.SetExpandContext((JObject)context);
var jtokenout = JsonLdProcessor.Compact(jtoken, context, options);
var sz = JSONUtils.ToString(jtokenout);
the context resource:
{"#context": {
"ex": "http://example.org/",
"term1": {"#id": "ex:term1", "#type": "ex:datatype"},
"term2": {"#id": "ex:term2", "#type": "#id"}
}}
My JSON is present and valid; it comes from a REST service (response), and jtoken is populated. However, sz contains only the context:
context":{"ex":"http://example.org/","term1":
{"#id":"ex:term1","#type":"ex:datatype"},"term2":
{"#id":"ex:term2","#type":"#id"}}}

MXTires Microdata .NET is a good one. It converts .NET classes to Schema.org structured data in the form of JSON-LD.
NuGet Link | Usage Link

I think I framed the question incorrectly. POCO to JSON-LD can be accomplished easily with JsonLD.Entities on GitHub. If I start with a POCO, or convert the JSON to a POCO first, this works easily.
var person = new Person
{
Id = new Uri("http://t-code.pl/#tomasz"),
Name = "Tomasz",
LastName = "Pluskiewicz"
};
var @context = JObject.Parse("{ '@context': 'http://example.org/context/Person' }");
var contextProvider = new StaticContextProvider();
contextProvider.SetContext(typeof(Person), @context);
// when
IEntitySerializer serializer = new EntitySerializer(contextProvider);
dynamic json = serializer.Serialize(person);
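For the JSON-to-POCO step mentioned above, a minimal sketch (this assumes the REST payload matches the Person shape and uses Newtonsoft.Json, which is already in play here):
// Deserialize the raw REST response into the POCO, then serialize it to JSON-LD as above
var personFromJson = JsonConvert.DeserializeObject<Person>(response);
dynamic jsonLd = serializer.Serialize(personFromJson);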

Related

How to serialize/deserialize from ksqlDB Avro format to C# using the Confluent platform

I am using ksqlDB with a table of the following form:
KSQL-DB Query
create table currency (id integer,name varchar) with (kafka_topic='currency',partitions=1,value_format='avro');
C# model
public class Currency
{
public int Id{get;set;}
public string Name{get;set;}
}
Now I want to know how I should write/read data from this topic in C# using the Confluent library:
Writing
IProducer<int, Currency> producer=....
Currency cur=new Currency();
Message<int,Currency> message = new Message<int, Currency>
{
Key = cur.Id,
Timestamp = new Timestamp(DateTime.UtcNow, TimestampType.CreateTime),
Value = cur
};
DeliveryResult<int,Currency> delivery = await producer.ProduceAsync(topic, message);
Reading
IConsumer<int,Currency> iconsumer = new ConsumerBuilder<int, Currency>(config)
.SetKeyDeserializer(Deserializers.Int32) //i assume i need to use the id from my dto
.SetValueDeserializer(...) //what deserializer
.Build();
ConsumeResult<int,Currency> result = consumer.Consume();
Currency message = ...; // what deserializer? e.g. JsonSerializer.Deserialize<Currency>(result.Message.Value)?
I am not sure how to go about this, so I tried looking for a serializer. I found this library, AvroSerializer, but I do not get where the author fetches the schema.
Any help on how to read/write to a specific topic so that it matches my ksqlDB models?
Update
After some research and some answers here, I have started using the schema registry:
var config = new ConsumerConfig
{
GroupId = kafkaConfig.ConsumerGroup,
BootstrapServers = kafkaConfig.ServerUrl,
AutoOffsetReset = AutoOffsetReset.Earliest
};
var schemaRegistryConfig = new SchemaRegistryConfig
{
Url = kafkaConfig.SchemaRegistryUrl
};
var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);
IConsumer<int,Currency> consumer = new ConsumerBuilder<int, Currency>(config)
.SetKeyDeserializer(new AvroDeserializer<int>(schemaRegistry).AsSyncOverAsync())
.SetValueDeserializer(new AvroDeserializer<Currency>(schemaRegistry).AsSyncOverAsync())
.Build();
ConsumeResult<int, Currency> result = consumer.Consume();
Now I am getting another error:
Expecting data framing of length 5 bytes or more but total data size
is 4 bytes
As someone kindly pointed out, it seems I am retrieving only the id from the schema registry.
How can I just run insert into currency (id,name) values (1,3) and retrieve it in C# as the POCO listed above?
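One likely cause of the framing error, hedged: ksqlDB's default key format is KAFKA, i.e. the key is a plain 4-byte int without the 5-byte Confluent Avro wire-format header, so an Avro key deserializer fails on it. A sketch that keeps a plain key deserializer and uses Avro only for the value (assuming the table was created with the default key format):
// Key stays a plain Int32 (KAFKA format); only the value carries the Confluent Avro framing
IConsumer<int, Currency> consumer = new ConsumerBuilder<int, Currency>(config)
    .SetKeyDeserializer(Deserializers.Int32)
    .SetValueDeserializer(new AvroDeserializer<Currency>(schemaRegistry).AsSyncOverAsync())
    .Build();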
Update 2
After I found this source program, it seems I am not able to publish messages to tables for some reason. There is no error when sending the message, but it is not published to Kafka.
I found this library AvroSerializer , but i do not get where the author fetches the schema.
It's unclear why you need to use a library other than the Confluent one, but they get it from the Schema Registry. You can use CachedSchemaRegistryClient to get the schema string easily; however, you shouldn't need this in your code, as the deserializer will download the schema from the registry on its own.
If you refer to the examples/ in the confluent-kafka-dotnet repo for specific Avro consumption, you can see they generate the User class from the User.avsc file, which seems to be exactly what you want to do here for Currency, rather than writing it yourself.
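If you do want to inspect the registered schema yourself, a minimal sketch (the subject name assumes the default <topic>-value naming strategy):
// Fetch the latest value schema for the 'currency' topic from the registry
using var schemaRegistry = new CachedSchemaRegistryClient(
    new SchemaRegistryConfig { Url = kafkaConfig.SchemaRegistryUrl });
var latest = await schemaRegistry.GetLatestSchemaAsync("currency-value");
Console.WriteLine(latest.SchemaString);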
I have solved the problem by defining a custom serializer, implementing the ISerializer<T> and IDeserializer<T> interfaces, which underneath are just wrappers over System.Text.Json.JsonSerializer (or Newtonsoft.Json).
Serializer
public class MySerializer<T> : ISerializer<T>, IDeserializer<T>
{
    public byte[] Serialize(T data, SerializationContext context)
    {
        var str = System.Text.Json.JsonSerializer.Serialize(data); // you can also use Newtonsoft here
        return Encoding.UTF8.GetBytes(str);
    }

    public T Deserialize(ReadOnlySpan<byte> data, bool isNull, SerializationContext context)
    {
        if (isNull) return default;
        return System.Text.Json.JsonSerializer.Deserialize<T>(data);
    }
}
Usage
var config = new ConsumerConfig
{
GroupId = kafkaConfig.ConsumerGroup,
BootstrapServers = kafkaConfig.ServerUrl,
AutoOffsetReset = AutoOffsetReset.Earliest
};
IConsumer<int,Currency> consumer = new ConsumerBuilder<int, Currency>(config)
.SetValueDeserializer(new MySerializer<Currency>())
.Build();
ConsumeResult<int, Currency> result = consumer.Consume();
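The producer side can mirror this with the same wrapper, since it also implements ISerializer<T> (a sketch; the topic name and the illustrative Currency values are assumptions):
IProducer<int, Currency> producer = new ProducerBuilder<int, Currency>(
        new ProducerConfig { BootstrapServers = kafkaConfig.ServerUrl })
    .SetValueSerializer(new MySerializer<Currency>())
    .Build();
var delivery = await producer.ProduceAsync("currency",
    new Message<int, Currency> { Key = 1, Value = new Currency { Id = 1, Name = "USD" } });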
P.S. I am not even using the schema registry here after I implemented the interface.

How to create a new AutoML dataset for simple classification (C#)

As part of an ML automation process I want to dynamically create a new AutoML model. I'm using C# (.NET Framework) and Google.Cloud.AutoML.V1.
After trying to run the CreateDataset code:
var autoMlClient = AutoMlClient.Create();
var parent = LocationName.FromProjectLocation(_projectId, _locationId);
var dataset = new Google.Cloud.AutoML.V1.Dataset();
dataset.DisplayName = "NewDataSet";
var response = autoMlClient.CreateDataset(parent, dataset);
I get the following error:
Field: dataset.dataset_metadata; Message: Required field not set
According to this user manual I should set the dataset metadata type, but the list contains only specific types of classification (Translation/ImageClassification etc.); I can't find a simple classification type.
How do I create a simple classification dataset with the API? In the AutoML UI it is just a simple button click ("NEW DATASET"), and you only have to provide a name and region, not a classification type.
I also tried to set:
dataset.TextClassificationDatasetMetadata =
new TextClassificationDatasetMetadata() { ClassificationType = ClassificationType.Multiclass };
But I was unable to import data into it (I got too many errors about invalid inputs from the input CSV file); I guess this is because the input format is not suitable for text classification.
UPDATE
I've just noticed that the NuGet package works with AutoML v1, but v1beta1 does contain the TablesDatasetMetadata dataset metadata type for normal classifications. I'm speechless.
I also experienced this scenario today while creating a dataset using the Node.js client. Since the Google AutoML Tables service is at the beta level, you need to use the beta version of the AutoML client. In the Google Cloud documentation they use the beta client to create a dataset.
In Node.js, importing the beta version, require('@google-cloud/automl').v1beta1.AutoMlClient, instead of the normal version (v1), require('@google-cloud/automl').v1, worked for me to successfully execute the create-dataset functionality.
In C# you can achieve the same through a POST request. Hope this helps :)
After @RajithaWarusavitarana's comment, and my last question update, below is the code that did the trick. The token is generated by the GoogleClientAPI NuGet package and AutoML is handled by REST.
string GcpGlobalEndPointUrl = "https://automl.googleapis.com";
string GcpGlobalLocation = "us-central1"; // api "parent" parameter
public string GetToken(string jsonFilePath)
{
var serviceAccountCredentialFileContents = System.IO.File.ReadAllText(jsonFilePath);
var credentialParameters = NewtonsoftJsonSerializer.Instance.Deserialize<JsonCredentialParameters>(serviceAccountCredentialFileContents);
var initializer = new ServiceAccountCredential.Initializer(credentialParameters.ClientEmail)
{
Scopes = new List<string> { "https://www.googleapis.com/auth/cloud-platform" }
};
var cred = new ServiceAccountCredential(initializer.FromPrivateKey(credentialParameters.PrivateKey));
string accessToken = cred.GetAccessTokenForRequestAsync("https://oauth2.googleapis.com/token").Result;
return accessToken;
}
public void GetDataSetList(string projectId, string token)
{
var restClient = new RestClient(GcpGlobalEndPointUrl);
var createDataSetReqUrl = $"v1beta1/projects/{projectId}/locations/{GcpGlobalLocation}/datasets";
var createDataSetReq = new RestRequest(createDataSetReqUrl, Method.GET);
createDataSetReq.AddHeader("Authorization", $"Bearer {token}");
var createDatasetResponse = restClient.Execute(createDataSetReq);
createDatasetResponse.Dump();
}
I took the token generation code from google-api-dotnet-client Test File
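For the dataset creation itself, a hedged sketch of the POST call; tablesDatasetMetadata is the v1beta1 field for plain tabular classification, and the exact request shape here is an assumption based on the v1beta1 REST reference:
public void CreateTablesDataSet(string projectId, string token, string displayName)
{
    var restClient = new RestClient(GcpGlobalEndPointUrl);
    var createDataSetReqUrl = $"v1beta1/projects/{projectId}/locations/{GcpGlobalLocation}/datasets";
    var createDataSetReq = new RestRequest(createDataSetReqUrl, Method.POST);
    createDataSetReq.AddHeader("Authorization", $"Bearer {token}");
    // Only the display name and an (empty) tablesDatasetMetadata object are sent to create the dataset
    createDataSetReq.AddJsonBody(new { displayName, tablesDatasetMetadata = new { } });
    var createDatasetResponse = restClient.Execute(createDataSetReq);
    createDatasetResponse.Dump();
}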

Deserialize Avro Spark

I'm pushing a stream of data to Azure Event Hubs with the following code, leveraging Microsoft.Hadoop.Avro. This code runs every 5 seconds and simply sends the same two Avro-serialised items:
var strSchema = File.ReadAllText("schema.json");
var avroSerializer = AvroSerializer.CreateGeneric(strSchema);
var rootSchema = avroSerializer.WriterSchema as RecordSchema;
var itemList = new List<AvroRecord>();
dynamic record_one = new AvroRecord(rootSchema);
record_one.FirstName = "Some";
record_one.LastName = "Guy";
itemList.Add(record_one);
dynamic record_two = new AvroRecord(rootSchema);
record_two.FirstName = "A.";
record_two.LastName = "Person";
itemList.Add(record_two);
using (var buffer = new MemoryStream())
{
using (var writer = AvroContainer.CreateGenericWriter(strSchema, buffer, Codec.Null))
{
using (var streamWriter = new SequentialWriter<object>(writer, itemList.Count))
{
foreach (var item in itemList)
{
streamWriter.Write(item);
}
}
}
eventHubClient.SendAsync(new EventData(buffer.ToArray()));
}
The schema used here is, again, very simple:
{
"type": "record",
"name": "User",
"namespace": "SerDes",
"fields": [
{
"name": "FirstName",
"type": "string"
},
{
"name": "LastName",
"type": "string"
}
]
}
I have validated this is all good with a simple view in Azure Stream Analytics on the portal.
So far so good, but I cannot, for the life of me, correctly deserialize this in Databricks leveraging the from_avro() command in Scala.
Load (the exact same) schema as a string:
val sampleJsonSchema = dbutils.fs.head("/mnt/schemas/schema.json")
Configure EventHub
val connectionString = ConnectionStringBuilder("<CONNECTION_STRING>")
.setEventHubName("<NAME_OF_EVENT_HUB>")
.build
val eventHubsConf = EventHubsConf(connectionString).setStartingPosition(EventPosition.fromEndOfStream)
val eventhubs = spark.readStream.format("eventhubs").options(eventHubsConf.toMap).load()
Read the data..
// this works, and i can see the serialised data
display(eventhubs.select($"body"))
// this fails, and with an exception: org.apache.spark.SparkException: Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
display(eventhubs.select(from_avro($"body", sampleJsonSchema)))
So essentially, what is going on here? I am serialising the data with the same schema I use to deserialize, but something is malformed, and the documentation is incredibly sparse on this front (very minimal on the Microsoft website).
The issue
After additional investigation (and mainly with the help of this article), I found what my problem was: from_avro(data: Column, jsonFormatSchema: String) expects Spark's schema format and not the Avro schema format. The documentation is not very clear on this.
Solution 1
Databricks provides a handy method, from_avro(column: Column, subject: String, schemaRegistryUrl: String), that fetches the needed Avro schema from a Kafka schema registry and automatically converts it to the correct format.
Unfortunately, it is not available for pure Spark, nor is it possible to use it without a Kafka schema registry.
Solution 2
Use the schema conversion provided by Spark:
// define avro deserializer
class AvroDeserializer() extends AbstractKafkaAvroDeserializer {
override def deserialize(payload: Array[Byte]): String = {
val genericRecord = super.deserialize(payload).asInstanceOf[GenericRecord] // call the base deserializer, not this method, to avoid infinite recursion
genericRecord.toString
}
}
// create deserializer instance
val deserializer = new AvroDeserializer()
// register deserializer
spark.udf.register("deserialize_avro", (bytes: Array[Byte]) =>
deserializer.deserialize(bytes)
)
// get avro schema from registry (but I presume that it should also work with schema read from a local file)
val registryClient = new CachedSchemaRegistryClient(kafkaSchemaRegistryUrl, 128)
val avroSchema = registryClient.getLatestSchemaMetadata(topic + "-value").getSchema
val sparkSchema = SchemaConverters.toSqlType(new Schema.Parser().parse(avroSchema))
// consume data
df.selectExpr("deserialize_avro(value) as data")
.select(from_json(col("data"), sparkSchema.dataType).as("data"))
.select("data.*")

Dynamic object to build reports with the Aspose.Words LINQ Reporting Engine

I'm getting a JSON string from the client:
{ "Client": { "Name": "John" } }
And in the document I have the following tag:
<<[client.name]>>
and I try to inject it:
var obj = JsonConvert.DeserializeObject(input.DataJson);
var engine = new ReportingEngine();
engine.BuildReport(document, obj);
But it doesn't work.
Can I inject that JSON with case-insensitive checking of properties, or do I have to modify the JSON so its properties are lowercase?
How can I do that?
I am afraid the LINQ Reporting Engine currently does not support dynamic objects as data sources. We have logged a new feature request for your scenario; the ID of this issue is WORDSNET-16421. We will inform you via this thread as soon as the requested feature is implemented. In the meantime, you may convert the JSON string to a DataSet to make it work, as described in the following example:
// Assume you have following in document
// <<[Client.Name]>>
string json = "{ \"Client\": { \"Name\": \"John\" } }";
XmlDocument Xml = (XmlDocument)JsonConvert.DeserializeXmlNode(json);
DataSet ds = new DataSet();
ds.ReadXml(new MemoryStream(Encoding.UTF8.GetBytes(Xml.InnerXml)));
Document doc = new Document(MyDir + @"in.docx");
ReportingEngine engine = new ReportingEngine();
engine.BuildReport(doc, ds.Tables[0].Rows[0], "Client");
doc.Save(MyDir + @"18.2.docx");
I work with Aspose as Developer Evangelist.

Dynamics CRM: how to get a list of all entities

Working with CRM 2013, how can I get a list of all entities in the CRM via the connectionManager class? I want to get all the entities for the current connection.
Thank you for your comment and answer, it works now.
This is my function:
public static EntityMetadata[] GetEntities ( IOrganizationService organizationService)
{
Dictionary<string, string> attributesData = new Dictionary<string, string>();
RetrieveAllEntitiesRequest metaDataRequest = new RetrieveAllEntitiesRequest();
RetrieveAllEntitiesResponse metaDataResponse = new RetrieveAllEntitiesResponse();
metaDataRequest.EntityFilters = EntityFilters.Entity;
// Execute the request.
metaDataResponse = (RetrieveAllEntitiesResponse)organizationService.Execute(metaDataRequest);
var entities = metaDataResponse.EntityMetadata;
return entities;
}
and I call my function in the Windows Forms app like this:
var allEntities = CRMHelpers.GetEntities(service);
foreach (EntityMetadata Entity in allEntities)
{
cbxEntity.Items.Add(Entity.LogicalName);
}
If you are looking to get the entity metadata using code (C#), there are built-in messages to get all entities and, if required, attribute-level information as well. You can use the RetrieveAllEntitiesRequest message. Sample code to achieve this would be as follows:
RetrieveAllEntitiesRequest retrieveAllEntityRequest = new RetrieveAllEntitiesRequest
{
RetrieveAsIfPublished = true,
EntityFilters = EntityFilters.Attributes
};
RetrieveAllEntitiesResponse retrieveAllEntityResponse = (RetrieveAllEntitiesResponse)serviceProxy.Execute(retrieveAllEntityRequest);
If you need information for a specific entity, you may use the RetrieveEntityRequest message. A sample would be as follows:
RetrieveEntityRequest entityRequest = new RetrieveEntityRequest
{
EntityFilters = EntityFilters.Attributes,
LogicalName = entityName,
RetrieveAsIfPublished = true
};
RetrieveEntityResponse entityResponse = (RetrieveEntityResponse)serviceProxy.Execute(entityRequest);
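If you then need the attribute-level details from that response, a small sketch (assuming the Microsoft.Xrm.Sdk.Metadata types):
// List each attribute's logical name and type from the retrieved entity metadata
foreach (AttributeMetadata attribute in entityResponse.EntityMetadata.Attributes)
{
    Console.WriteLine($"{attribute.LogicalName} ({attribute.AttributeType})");
}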
Hope this is what you were looking for. Let us know if you need any more information on the same.
