Create index with multi field mapping syntax with NEST 2.x - c#

I just can't seem to get the syntax right for multi-field mapping in NEST 2.0, if that's the correct terminology. Every example I've found for mapping seems to target NEST 1.x. I'm new to Elasticsearch and NEST, and I've been reading their documentation, but the NEST documentation hasn't been completely updated for 2.x.
Basically, I don't need to index or store the entire type. Some fields I need for indexing only, some fields I'll need to index and retrieve, and some I don't need for indexing, just for retrieval.
MyType
{
// Index this & allow for retrieval.
int Id { get; set; }
// Index this & allow for retrieval.
// **Also**, in my searching & sorting, I need to sort on this **entire** field, not just individual tokens.
string CompanyName { get; set; }
// Don't index this for searching, but do store for display.
DateTime CreatedDate { get; set; }
// Index this for searching BUT NOT for retrieval/displaying.
string CompanyDescription { get; set; }
// Nest this.
List<MyChildType> Locations { get; set; }
}
MyChildType
{
// Index this & allow for retrieval.
string LocationName { get; set; }
// etc. other properties.
}
I have been able to index the entire object and child as-is using the following as an example:
client.Index(item, i => i.Index(indexName));
However, the actual object is a lot larger than this, and I really don't need most of it. I've found this, which looks like what I think I want to do, but in an older version: multi field mapping elasticsearch
I think "mapping" is what I'm going for, but like I said, I'm new to Elasticsearch and NEST and I'm trying to learn the terminology.
Be gentle! :) It's my first time to ask a question on SO. Thanks!

In addition to Colin's and Selçuk's answers, you can also fully control the mapping through the fluent (and object initializer syntax) mapping API. Here's an example based on your requirements:
void Main()
{
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var connectionSettings = new ConnectionSettings(pool);
var client = new ElasticClient(connectionSettings);
client.Map<MyType>(m => m
.Index("index-name")
.AutoMap()
.Properties(p => p
.String(s => s
.Name(n => n.CompanyName)
.Fields(f => f
.String(ss => ss
.Name("raw")
.NotAnalyzed()
)
)
)
.Date(d => d
.Name(n => n.CreatedDate)
.Index(NonStringIndexOption.No)
)
.String(s => s
.Name(n => n.CompanyDescription)
.Store(false)
)
.Nested<MyChildType>(n => n
.Name(nn => nn.Locations.First())
.AutoMap()
.Properties(pp => pp
/* properties of MyChildType */
)
)
)
);
}
public class MyType
{
// Index this & allow for retrieval.
public int Id { get; set; }
// Index this & allow for retrieval.
// **Also**, in my searching & sorting, I need to sort on this **entire** field, not just individual tokens.
public string CompanyName { get; set; }
// Don't index this for searching, but do store for display.
public DateTime CreatedDate { get; set; }
// Index this for searching BUT NOT for retrieval/displaying.
public string CompanyDescription { get; set; }
// Nest this.
public List<MyChildType> Locations { get; set; }
}
public class MyChildType
{
// Index this & allow for retrieval.
public string LocationName { get; set; }
// etc. other properties.
}
This produces the mapping:
{
"properties": {
"id": {
"type": "integer"
},
"companyName": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
},
"createdDate": {
"type": "date",
"index": "no"
},
"companyDescription": {
"type": "string",
"store": false
},
"locations": {
"type": "nested",
"properties": {
"locationName": {
"type": "string"
}
}
}
}
}
Calling .AutoMap() causes NEST to infer the mapping based on the property types and any attributes applied to them. Then .Properties() overrides any of the inferred mappings. For example
CompanyName is mapped as a multi_field with the field companyName analyzed using the standard analyzer and companyName.raw not analyzed. You can reference the latter in your queries using .Field(f => f.CompanyName.Suffix("raw"))
Locations is mapped as a nested type (automapping by default would infer this as an object type mapping). You can then define any specific mappings for MyChildType using .Properties() inside of the Nested<MyChildType>() call.
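For instance, a search that matches on the analyzed field but sorts on the whole, not-analyzed value could look like the following. This is a sketch against NEST 2.x; "index-name" and the query text are placeholders:

```csharp
// Match on the analyzed companyName, sort on the raw sub-field so the
// whole value (not individual tokens) determines the order.
var searchResponse = client.Search<MyType>(s => s
    .Index("index-name")
    .Query(q => q
        .Match(m => m
            .Field(f => f.CompanyName)
            .Query("acme")
        )
    )
    .Sort(ss => ss
        .Ascending(f => f.CompanyName.Suffix("raw"))
    )
);
```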

As far as I can see, you don't have any complex types that you are trying map. So you can easily use NEST attributes to map your objects.
Check this out:
[Nest.ElasticsearchType]
public class MyType
{
// Index this & allow for retrieval.
[Nest.Number(Store=true)]
int Id { get; set; }
// Index this & allow for retrieval.
// **Also**, in my searching & sorting, I need to sort on this **entire** field, not just individual tokens.
[Nest.String(Store = true, Index=Nest.FieldIndexOption.Analyzed, TermVector=Nest.TermVectorOption.WithPositionsOffsets)]
string CompanyName { get; set; }
// Don't index this for searching, but do store for display.
[Nest.Date(Store=true, Index=Nest.NonStringIndexOption.No)]
DateTime CreatedDate { get; set; }
// Index this for searching BUT NOT for retrieval/displaying.
[Nest.String(Store=false, Index=Nest.FieldIndexOption.Analyzed)]
string CompanyDescription { get; set; }
[Nest.Nested(Store=true, IncludeInAll=true)]
// Nest this.
List<MyChildType> Locations { get; set; }
}
[Nest.ElasticsearchType]
public class MyChildType
{
// Index this & allow for retrieval.
[Nest.String(Store=true, Index = Nest.FieldIndexOption.Analyzed)]
string LocationName { get; set; }
// etc. other properties.
}
After this declaration, to create this mapping in Elasticsearch you need to make a call similar to:
var mappingResponse = elasticClient.Map<MyType>(m => m.AutoMap());
With the AutoMap() call, NEST will read the attributes from your POCO and create a mapping request accordingly.
Also see the "Attribute Based Mapping" section from here.
Cheers!

At the time of writing, NEST does not offer a way to map a property in your class to multiple fields in your document mapping using the built-in attributes. However, it does provide the facilities needed to do anything with your mappings that you could do if you wrote the JSON yourself.
Here's a solution I've put together for my own needs. It shouldn't be hard to use it as the starting point for whatever you need to do.
First, here's an example of the mapping I want to generate
{
"product":{
"properties":{
"name":{
"type":"string",
"index":"not_analyzed",
"fields":{
"standard":{
"type":"string",
"analyzer":"standard"
}
}
}
}
}
}
The product document would then have the name field, which is indexed but not analyzed, and the name.standard field, which uses the standard analyzer.
The C# class that I generate the mapping from looks like this
[ElasticsearchType]
public class Product
{
[WantsStandardAnalysisField]
public string Name { get; set; }
}
Note the WantsStandardAnalysisField attribute. That's a custom attribute with no special properties added. Literally just:
public class WantsStandardAnalysisField : Attribute {}
If I were to use AutoMap as-is, my custom attribute would be ignored and I would get a mapping that has the name field, but not name.standard. Luckily, AutoMap accepts an instance of IPropertyVisitor. A base class called NoopPropertyVisitor implements the interface and does nothing at all, so you can subclass it and override only the methods you care about. When you use a property visitor with AutoMap, it will generate a document mapping for you but give you a chance to modify it before it gets sent to Elastic Search. All we need to do is look for properties marked with our custom attribute and add a field to them.
Here's an example that does that:
public class ProductPropertyVisitor : NoopPropertyVisitor
{
public override void Visit(IStringProperty type, PropertyInfo propertyInfo, ElasticsearchPropertyAttributeBase attribute)
{
base.Visit(type, propertyInfo, attribute);
var wsaf = propertyInfo.GetCustomAttribute<WantsStandardAnalysisField>();
if (wsaf != null)
{
type.Index = FieldIndexOption.NotAnalyzed;
type.Fields = new Properties
{
{
"standard",
new StringProperty
{
Index = FieldIndexOption.Analyzed,
Analyzer = "standard"
}
}
};
}
}
}
As you can see, we can do pretty much anything we want with the generated property, including turning off analysis for the main property and adding a new field with its own settings. For fun, you could add a couple properties to the custom attribute allowing you to specify the name of the field you want and the analyzer to use. You could even modify the code to see if the attribute has been added multiple times, letting you add as many fields as you want.
If you were to run this through any method that generates a mapping using AutoMap, such as:
new TypeMappingDescriptor<Product>().AutoMap(new ProductPropertyVisitor())
You'll get the desired multi-field mapping. Now you can customize mappings to your heart's content. Enjoy!
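As a possible extension (an untested sketch, not part of the original solution), the custom attribute could carry the field name and analyzer so the visitor doesn't hard-code "standard":

```csharp
// Hypothetical parameterized version of the marker attribute.
public class WantsAnalyzedFieldAttribute : Attribute
{
    public WantsAnalyzedFieldAttribute(string fieldName, string analyzer)
    {
        FieldName = fieldName;
        Analyzer = analyzer;
    }

    public string FieldName { get; }
    public string Analyzer { get; }
}

// Inside the visitor, read the values from the attribute instead:
// var attr = propertyInfo.GetCustomAttribute<WantsAnalyzedFieldAttribute>();
// if (attr != null)
//     type.Fields = new Properties
//     {
//         { attr.FieldName, new StringProperty { Analyzer = attr.Analyzer } }
//     };
```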

I think you have at least 2 possibilities to solve your problem:
On indexing: create something like a metadata model which is stored just for retrieval. See the _source field to limit the return to those fields.
On searching: specify only the fields you want to query: if you don't want to query CreatedDate, just don't include it in your search.
In my case I am using both of these approaches to get very fast results :-)
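A rough NEST sketch of both ideas together (descriptor shapes vary a bit between NEST versions, so treat this as an outline rather than exact API):

```csharp
var response = client.Search<MyType>(s => s
    // only bring back the fields needed for display
    .Source(src => src
        .Include(i => i
            .Field(p => p.Id)
            .Field(p => p.CompanyName)
            .Field(p => p.CreatedDate)
        )
    )
    // query only the fields you actually want to search
    .Query(q => q
        .Match(m => m
            .Field(f => f.CompanyDescription)
            .Query("search terms")
        )
    )
);
```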


How to map two class properties based on Customer

I have an application in which I receive Shopify orders via Shopify webhooks. When I had only one client, I mapped the Shopify order the traditional way, with fixed object properties, and saved it into the database.
Now I have two more clients, and each client wants the class fields mapped according to their own scheme.
Let's suppose I got a Shopify order object with the following details:
{
"id": 4655316533326,
"checkout_id": 29212941516878,
"user_id": 71894499406,
"note": "hello world",
"subtotal_price_set": {
"shop_money": {
"amount": "109.00",
"currency_code": "USD"
}
},
"total_tax": "0.00",
"total_tax_set": {
"shop_money": {
"amount": "0.00",
"currency_code": "USD"
},
"presentment_money": {
"amount": "0.00",
"currency_code": "USD"
}
}
}
I have a class name Order as well in my application which contains some of the following properties:
public class Order {
public string OrderNo { get; set; }
public string DebtorID { get; set; }
public virtual string DeliveryAddress1 { get; set; }
public virtual string DeliveryAddress2 { get; set; }
public virtual string RIRTNo { get; set; }
public List<OrderLine> Lines { get; set; }
public string OrderNote { get; set; }
public Customer Customer { get; set; }
}
Now some customers want me to map the Shopify order's Note property to my Order.Customer.Note, and one wants to map that Note property to Order.Note.
The same goes for OrderNo: one wants it mapped directly to Order.OrderNo, and others want it mapped to RIRTNo.
How to handle this kind of situation?
I created a table in the database to keep this mapping information.
I can get the customer detail from the Shopify order and then query the DB for the mapping info, but the problem is that in the database the detail is in a dictionary-like structure.
Consider the mapping info below for customer X, where the key is a Shopify object property:
"Order": {
"note": "Customer.Note"
"Id": "OrderId"
}
The key is the Shopify property name and the value is the property name in my application's object, but I can't figure out how to do the mapping based on this info.
First, you can build a web form to administer this task.
We need a table; call it MappingInfo.
The idea is that for each field you have 2 or 3 options for mapping it to the other class,
so you insert all of your customers' mapping options into this table.
After that, you use AutoMapper like this:
var config = new MapperConfiguration(GetMapperConfigurationExpression(customerId));
var _mapper = config.CreateMapper();
private MapperConfigurationExpression GetMapperConfigurationExpression(int customerId)
{
    var mapperConfigurationExpression = new MapperConfigurationExpression();
    // mappingInfo: the row fetched from the MappingInfo table where
    // MappingInfo.CustomerId == customerId (from the parameters)
    // and MappingInfo.PropertyName == "AppId"
    if (mappingInfo.OptionId == 1)
        mapperConfigurationExpression.CreateMap<srcMyClass, destMyClass>()
            .ForPath(dest => dest.AppId.Code, opt => opt.MapFrom(src => src.InvoiceId));
    if (mappingInfo.OptionId == 2)
        mapperConfigurationExpression.CreateMap<srcMyClass, destMyClass>()
            .ForPath(dest => dest.AppId.Code, opt => opt.MapFrom(src => src.LogicalId));
    return mapperConfigurationExpression;
}
Basically, this depends on what each customer wants.
So if you don't have a mapping mechanism yet,
you can divide your customers into 3 or 4 groups or more,
then use a factory design pattern,
or simply use an "if" statement on a new field, CustomerGroupId, which should come to your code with the entity.
Then:
if (Customer.CustomerGroupId == 1)
{
// do mapping like Order.Note = shopify.Note
}
else if (Customer.CustomerGroupId == 2)
{
// do mapping like Order.Customer.Note = shopify.Note
}
A very simple option is to use the "mapping info" within the Order class, and then modify the OrderNote property to use the mapping for getting/setting the note.
For example, create a method that provides the mapping:
private string _map = "note"; // default value when SetMap is not used
public void SetMap(string map)
=> _map = map;
And then OrderNote becomes:
private string _note;
public string OrderNote {
get => GetNote();
set => SetNote(value);
}
private string GetNote()
{
if(_map == "note") return _note;
if(_map == "Customer.Note") return Customer.Note;
throw new ArgumentOutOfRangeException("Invalid _map");
}
private void SetNote(string value)
{
if(_map == "note") { _note = value; return; }
if(_map == "Customer.Note") { Customer.Note = value; return; }
throw new ArgumentOutOfRangeException("Invalid _map");
}
Use the same pattern for each type of mapping.
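Hypothetical usage of the pattern above, assuming the map string comes from your MappingInfo lookup:

```csharp
// Pick the route before applying the incoming Shopify payload.
var order = new Order { Customer = new Customer() };
order.SetMap("Customer.Note");        // value read from the mapping table

order.OrderNote = shopifyOrder.Note;  // lands in Customer.Note via SetNote
var note = order.OrderNote;           // read back through the same map
```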

Using Contains() in a Realm query

Let's say we have Realm results obtained with
RealmDb.All<Entry>();
Then I want to do some searching over those results using techniques that aren't supported yet, like StartsWith on a function's return value or on a property that is not mapped in Realm, so I get a subset:
IEnumerable<Entry> subset = bgHaystack;
var results = subset.Where(entry => entry.Content.ToLower().StartsWith(needle));
To somehow get these back as part of a RealmResults, I extract the entry ids like this:
List<int> Ids = new List<int>();
foreach (Entry entry in entries)
{
Ids.Add(entry.Id);
}
return Ids;
and finally I want to return a subset of RealmResults (not IEnumerable) of only those Entries that contain those ids, how can I do that? IDE says the Contains method is not supported.
Can I use some kind of predicate or a comparer for that?
Entry is my model class
using System.ComponentModel.DataAnnotations.Schema;
using Realms;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System;
namespace Data.Models
{
[Table("entry")]
public class Entry : RealmObject
{
public class EntryType
{
public const byte Word = 1;
public const byte Phrase = 2;
public const byte Text = 3;
};
[Key]
[PrimaryKey]
[Column("entry_id")]
public int Id { get; set; }
[Column("user_id")]
public int UserId { get; set; }
[Column("source_id")]
public int SourceId { get; set; }
[Indexed]
[Column("type")]
public byte Type { get; set; }
[Column("rate")]
public int Rate { get; set; }
[Column("created_at")]
public string CreatedAt { get; set; }
[Column("updated_at")]
public string UpdatedAt { get; set; }
[NotMapped]
public Phrase Phrase { get; set; }
[NotMapped]
public Word Word { get; set; }
[NotMapped]
public Text Text { get; set; }
[NotMapped]
public IList<Translation> Translations { get; }
[NotMapped]
public string Content
{
get {
switch (Type)
{
case EntryType.Phrase:
return Phrase?.Content;
case EntryType.Word:
return Word?.Content;
case EntryType.Text:
return Text?.Content;
}
return "";
}
}
}
}
According to the documentation, Realm .NET supports LINQ, so that's promising. In your specific example, you indicate that StartsWith isn't supported, but I see that on the above page, specifically here.
Now, your example makes clear that Entry is a RealmObject, so it's not clear where you'd possibly get a RealmResult from (nor does their documentation on that page mention a RealmResult). Specifically, the home page indicates that you're really only going to ever work with Realm, RealmObject and Transaction, so I'm going to just assume that you meant that you'll need a resulting RealmObject per their examples.
The way you presently have your data object set up, you're rather stuck calling it like you are (though if I may make a recommendation, it can be simplified a little):
var entries = RealmDb.All<Entry>().ToList();
var results = entries.Where(entry => entry.Content.ToLower().StartsWith(needle));
var ids = results.Select(a => a.Id).ToList();
Now, your big issue with just combining the filter predicate in line 2 with the end of line 1: Content itself is marked with a [NotMapped] attribute. Per the documentation again:
As a general rule, you can only create predicates with conditions that
rely on data in Realm. Imagine a class
class Person : RealmObject
{
// Persisted properties
public string FirstName { get; set; }
public string LastName { get; set; }
// Non-persisted property
public string FullName => FirstName + " " + LastName;
}
Given this class, you can create queries with conditions that apply to
the FirstName and LastName properties but not to the FullName
property. Likewise, properties with the [Ignored] attribute cannot be
used.
Because you're using [NotMapped], I've got to believe that's going to behave similarly to [Ignored] and further, because it's just a computed value, it's not something that Realm is going to be able to process as part of the query - it simply doesn't know it because you didn't map it to the information Realm is storing. Rather, you'll have to compute the Content property when you've actually got the instances of your Entry objects to enumerate through.
Similarly, I expect you'll have issues pulling values from Phrase, Word and Text since they're also not mapped, and thus not stored in the record within Realm (unless you're populating those in code you didn't post before executing your Where filter).
As such, you might instead consider storing separate records as a PhraseEntry, WordEntry, and TextEntry so you can indeed perform exactly that filter and execute it on Realm. What if you instead used the following?
public class Entry : RealmObject
{
[Key]
[PrimaryKey]
[Column("entry_id")]
public int Id { get; set; }
[Column("user_id")]
public int UserId { get; set; }
[Column("source_id")]
public int SourceId { get; set; }
[Column("rate")]
public int Rate { get; set; }
[Column("created_at")]
public string CreatedAt { get; set; }
[Column("updated_at")]
public string UpdatedAt { get; set; }
[Column("content")]
public string Content { get; set; }
[NotMapped]
public IList<Translation> Translations { get; }
}
[Table("wordEntry")]
public class WordEntry : Entry
{
}
[Table("phraseEntry")]
public class PhraseEntry : Entry
{
}
[Table("textEntry")]
public class TextEntry : Entry
{
}
And now, you can offload the filtering to Realm:
var wordEntries = RealmDb.All<WordEntry>().Where(entry =>
entry.Content.StartsWith(needle, StringComparison.OrdinalIgnoreCase)).ToList();
var phraseEntries = RealmDb.All<PhraseEntry>().Where(entry => entry.Content.StartsWith(needle, StringComparison.OrdinalIgnoreCase)).ToList();
var textEntries = RealmDb.All<TextEntry>().Where(entry => entry.Content.StartsWith(needle, StringComparison.OrdinalIgnoreCase)).ToList();
var entries = new List<Entry>();
entries.AddRange(wordEntries);
entries.AddRange(phraseEntries);
entries.AddRange(textEntries);
var ids = entries.Select(entry => entry.Id).ToList();
It's not quite as brief as storing it all in one table, but I'm not immediately seeing any Realm documentation that indicates support for executing the same query against multiple tables simultaneously, so at least this would allow you to leave the filtering to the database and work against a more limited subset of values locally.
Finally, so we have all that and I missed your final question up top. You indicate that you want to return a subset of your entries based on some collection of ids you create. In the logic you provide, you're retrieving all the Id properties in all your results, so there's really no further subset to pull.
That said, let's assume you have a separate list of ids that for whatever complicated reason, you were only able to derive after retrieving the list of Entry types from above (themselves all PhraseEntry, WordEntry or TextEntry objects).
At this point, since you've already pulled all the values from Realm and have them locally, just execute another Where statement against them. Because a List implements IEnumerable, you can thus execute the LINQ locally without any of the Realm restrictions:
var myLimitedIdSet = new List<int>()
{
10, 15, 20, 25 //Really complicated logic to narrow these down locally
};
var resultingEntries = entries.Where(entry => myLimitedIdSet.Contains(entry.Id)).ToList();
And you're set. You'll have only those entries that match the IDs listed in myLimitedIdSet.
Edit to address comment
You see this error because of the detail provided at the top of this page in the documentation. Specifically (and adapting to your code):
The first statement gives you a new instance of Entry of a class that implements IQueryable... This is standard LINQ implementation - you get an object representing the query. The query doesn't do anything until you made a further call that needs to iterate or count the results.
Your error is then derived by taking the result from RealmDb.All<Entry>() and trying to cast it to an IEnumerable<Entry> to operate against it as though you have local data. Until you call ToList() on RealmDb.All<Entry>(), you simply have a LINQ representation of what the call will be, not the data itself. As such, when you further refine your results with a Where statement, you're actually adding that to a narrowed version of the IQueryable statement, which will also fail because you lack the appropriate mapping in the Realm dataset.
To skip the optimization I provided above, the following should resolve your issue here:
var bgHaystack = realm.All<Entry>().ToList(); //Now you have local data
var results = bgHaystack.Where(entry => entry.Content.ToLower().StartsWith(needle));
Unfortunately, given your provided code, I don't expect that you'll see any matches here unless needle is an empty string. Not only is your Content property not part of the Realm data and you thus cannot filter on it within Realm, but neither are your Phrase, Word or Text properties mapped either. As a result, you will only ever see an empty string when getting your Content value.
You can further refine the results variable above to yield only those instances with a provided ID as you see fit with normal LINQ (as again, you'll have pulled the data from Realm in the first line).
var limitedIds = new List<int>{10, 20, 30};
var resultsLimitedById = results.Where(a => limitedIds.Contains(a.Id)).ToList();
I've updated my examples above to reflect the use of ToList() in the appropriate places as well.

Mongo .Net Driver PipelineStageDefinitionBuilder.Project automatically ignores all Id values with a facet

When using PipelineStageDefinitionBuilder to create projection stages for an aggregation pipeline, it always ignores any Id values in the dataset. I'm using the Mongo .NET driver 2.8 in a .NET Core app. Below are the steps for reproduction.
The same projection worked when using the IAggregateFluent syntax on Aggregate(); however, I needed to use the builders for a facet. When running the builder against Aggregate() it also works, but within a facet it fails to bind any Id values.
Just empty classes with id for testing (Added Type to show normal mapping works):
public class DatabaseModel
{
public Guid Id { get; set; }
public string Type { get; set; }
}
public class ProjectionClass
{
public Guid Id { get; set; }
public string Type { get; set; }
}
When I create the projection with the below, it produces a query successfully; however, in all models returned the Id value is set to null. The query seems to have an "_id" : 0 value, but the same also seems to be produced in normal aggregation, so I don't think this is related.
var typeFilter = Builders<DatabaseModel>.Filter.Eq(x => x.Type, "Full");
var aggregationPipeline = new EmptyPipelineDefinition<DatabaseModel>()
.AppendStage(PipelineStageDefinitionBuilder.Match(typeFilter))
.AppendStage(PipelineStageDefinitionBuilder.Project<DatabaseModel, ProjectionClass>(x => new ProjectionClass
{
Id = x.Id,
Type = x.Type,
}));
var normalAggregationResult = await db.Aggregate(aggregationPipeline).ToListAsync();//The id's appear here
var databaseModelsFacet = AggregateFacet.Create("DatabaseModels", aggregationPipeline);
var facetResult = db.Aggregate().Facet(databaseModelsFacet).SingleOrDefault().Facets;
var projectionModels = facetResult.
Single(x => x.Name == "DatabaseModels")
.Output<ProjectionClass>();// This results in missing Id's (Including in nested objects with anything named Id)
Resulting mongo query:
[
{ "$match" : { "Type" : "Full" } },
{ "$project" : { "Id" : "$_id", "Type" : "$Type", "_id" : 0 } }
]
Is there any way to run a projection using the pipeline builders with a facet without it ignoring the Id? I have seen examples using similar queries but haven't seen this issue mentioned. It could be an issue with Facet, as it only appears to happen when using it.
Thanks!
UPDATE 6/1/2020: Updated question after finding it only seems to occur with facet
It seems to be a driver issue (or the issue occurs when the structure does not match the fields), as Id can't simply be serialized to Id. But if you choose any other property name, it will work. For example:
[BsonNoId]
public class DatabaseModel
{
[BsonRepresentation(BsonType.ObjectId)]
public string Identifier { get; set; }
public string Type { get; set; }
}

Create RavenDB index based on dynamic properties

In RavenDB 4.2, I want to create an Index/Map based on a dynamic object. Better put, on dynamic properties which are not known at compile-time.
Here is an example of the raw JSON I'm ingesting:
{
"id": "A",
"detections":
[
{
"steps": [
{
"object": {
"id": "A1",
"target": {
"domain_name": "foobar.com"
}
},
"object": {
"id": "A2",
"target": {
"ipv4": "127.0.0.1"
}
}
}
]
}
]
}
The above sample is ingested from a 3rd party and stored in a RavenDB collection. Roughly translated, the following model illustrates the challenge:
public class Step
{
public string Id { get; set; }
public DateTime When {get; set;}
public dynamic Object { get; set; } // aware that it's not handy naming
}
The pickle in this is that the object.target.X property name is dynamic. They cannot be strong-typed and can be a lot of things, like: domain_name, ipv4, ipv6, dns, shoe_size, hair_colour etc. This is why the entire steps.object is ingested and stored as either System.Object or dynamic.
My objective is to basically do a SelectMany() on each object.target and extract the property name (key) and value. This would make my RavenDB Index something like this:
public class StepsIndex : AbstractIndexCreationTask<Models.Step, StepsIndex.Result>
{
public class Result
{
public DateTime When { get; set; }
public string TargetKey { get; set; }
public string TargetValue { get; set; }
// ... removed other properties for brevity
}
public StepsIndex()
{
Map = blocks =>
from block in blocks
from detection in block.Detections
from step in detection.Steps
select new Result
{
// extract property name (key), like 'domain_name'
TargetKey = step.Object.target.GetType().GetProperties()[0].Name,
// extract property value, like 'foobar.com'
TargetValue = step.Object.target.GetType().GetProperty(step.Object.target.GetType().GetProperties()[0].Name).GetValue(step.Object.target, null)
};
}
}
Unfortunately this doesn't work, because step.Object is dynamic, which results in the following compile-time error:
Error [CS1963] An expression tree may not contain a dynamic operation
Second option I've tried is to cast it to JSON in the expression, which also fails because Raven's projection is not aware of Newtonsoft.Json during runtime:
// Error CS0103: The name 'JObject' does not exist in the current context
// Error CS0103: The name 'JsonConvert' does not exist in the current context
TargetKey = JObject.Parse(JsonConvert.SerializeObject(step.Object))["target"][0].Value<string>(),
A third option I thought of was perhaps changing the dynamic Object to System.Object, but I haven't found a neat way to extract the property keys/values without knowing the property.
The question: how can I extract these dynamic property keys and values and Map them to a RavenDB index?
RavenDB allows you to index dynamic fields.
See:
https://ravendb.net/docs/article-page/4.2/Csharp/indexes/using-dynamic-fields
https://github.com/ravendb/book/blob/v4.0/Ch10/Ch10.md#dynamic-data
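Following those docs, a sketch for this case could use CreateField to emit one index field per dynamic target property. The Block type name and the IDictionary cast over the dynamic target are assumptions about how the document materializes inside the index (you may also need to declare Object as object rather than dynamic for the expression tree to compile), so treat this as a starting point rather than working code:

```csharp
public class StepsIndex : AbstractIndexCreationTask<Block>
{
    public StepsIndex()
    {
        Map = blocks =>
            from block in blocks
            from detection in block.Detections
            from step in detection.Steps
            select new
            {
                // _ is the conventional name for dynamic index fields;
                // each target property becomes its own index field
                _ = ((IDictionary<string, object>)step.Object.target)
                        .Select(kvp => CreateField(kvp.Key, kvp.Value))
            };
    }
}
```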

In MongoDb, how can you set a value on an object in a array property?

My goal is to put a "deleted at" timestamp on a specific object in an array within a document.
If the document looks like this:
{
"subdoc": [
{
"key": 1,
"value": "abc",
"isActive": true
},
{
"key":5,
"value": "ade",
"isActive":true
}
]
}
I would like to be able to say "look for the document that has subdoc.key == 5 and subdoc.value == "ade"; set subdoc.isActive to false and set subdoc.deleteAt = current db timestamp. With a resulting document like this:
{
"subdoc": [
{
"key": 1,
"value": "abc",
"isActive": true
},
{
"key":5,
"value": "ade",
"isActive": false,
"deletedAt": Timestamp(1425911075,1)
}
]
}
Is this doable?
Update: After further review of the mongo docs, this does seem doable with the "$ (update)" operator. That gets me what I need, but I'm hoping for a less magical-strings way of doing this using the C# driver?
My working find/update looks like this:
// find
{
"subdoc.key":"2",
"subdoc.value":"ade"
}
// update
{
"$currentDate": {
"subdoc.$.deleteAt": {
"$type": "timestamp"
}
}
}
Update: I should clarify that this updated time stamp field is used for synchronization by many sometimes-connected mobile clients in a load-balanced environment (multiple web servers, multiple worker processes, and a mongo cluster) with a high transaction volume, which makes it crucial that this time stamp has a single point truth, is logically sequential in the context of the app, and is as high precision as possible (fractions of a second). Otherwise, records could be missed in a sync.
For the moment, I'm using the above approach to ensure time stamped values are generated by the mongo database instance. And I'm pretty satisfied with this approach.
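For reference, a sketch of the same update with the 2.x C# driver's Builders, which avoids most of the raw JSON (the array-element paths are still strings, and `collection` is assumed to be an `IMongoCollection<BsonDocument>` in an async context):

```csharp
// Find the document whose subdoc element has key == 5 and value == "ade".
var filter = Builders<BsonDocument>.Filter.ElemMatch<BsonDocument>(
    "subdoc",
    Builders<BsonDocument>.Filter.Eq("key", 5) &
    Builders<BsonDocument>.Filter.Eq("value", "ade"));

// Flip isActive and stamp deletedAt with a server-side timestamp via the
// positional $ operator.
var update = Builders<BsonDocument>.Update
    .Set("subdoc.$.isActive", false)
    .CurrentDate("subdoc.$.deletedAt", UpdateDefinitionCurrentDateType.Timestamp);

await collection.UpdateOneAsync(filter, update);
```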
You can use the C# driver to wrap the mongo entities in C# objects. Then in your code you can use LINQ to query the DB and update your objects as required, and just save them to the DB to persist your changes.
Below is a small piece of code that queries a Parent collection in the test DB.
The C# driver provides an AsQueryable extension that allows us to write our queries directly in LINQ. The driver will then automatically build the required query and execute it against the collection.
The sample below looks for any sub-documents in the subdoc list that have a value of 5 in the key field.
If it finds any, it updates the deletedAt date and then saves the parent back to the DB.
var client = new MongoClient();
var database = client.GetServer().GetDatabase("test");
var parentCollection = database.GetCollection<Parent>("Parent");
var parent = parentCollection.AsQueryable().FirstOrDefault(p => p.subdoc.Any(f => f.key == 5));
if (parent != null)
{
    var fooList = parent.subdoc.Where(f => f.key == 5);
    foreach (var foo in fooList)
    {
        foo.deletedAt = DateTime.UtcNow;
    }
    parentCollection.Save(parent);
}
Below are the two C# entities used to map to the Mongo documents. We can use the [BsonIgnoreIfNull] attribute of the mongo C# driver to only serialize the deletedAt field if it contains a value. We also use a nullable DateTime in our code to allow nulls to be stored if required.
public class Foo
{
[BsonId]
public ObjectId Id { get; set; }
public int key { get; set; }
public string value { get; set; }
public bool isActive { get; set; }
[BsonIgnoreIfNull]
public DateTime? deletedAt { get; set; }
}
public class Parent
{
[BsonId]
public ObjectId Id { get; set; }
public List<Foo> subdoc { get; set; }
}
See most recent update. A combination of the positional and $currentDate operators is serving my purpose.
