I have a Conversations collection, containing Conversation documents like so (simplified to get to the point):
public class Conversation
{
    public ObjectId Id { get; set; }
    public Dictionary<ObjectId, DateTime> Members { get; set; }
}
The Members dictionary maps each member's Id to the date they entered the conversation.
To query all the Conversations a particular User is in, I'm doing the following:
public List<Conversation> GetConversations(in User user)
{
    Query = Builder.Eq(doc => doc.Members.ContainsKey(user.Id), true);
    return Find(Query).ToList();
}
The question being: is this the right way to do it?
If it is, why? How does MongoDB process this kind of query (which at first glance looks complex and demanding of a lot of computing power)?
If it's not, how would you do it better?
Query = Builder.Eq(doc => doc.Members.ContainsKey(user.Id), true);
Looks like a collection scan: because the dictionary keys become field names in the document, this predicate can't use a regular index, so every document has to be examined.
Is it possible for you to track conversations in every Member document?
It can be achieved with an array or a subcollection.
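As a sketch of the array variant (the Member class and field names here are assumptions, not your actual model), each member could carry the ids of its conversations, maintained on join/leave:

```csharp
public class Member
{
    public ObjectId Id { get; set; }

    // Conversations this member belongs to; updated when the
    // member joins or leaves a conversation.
    public List<ObjectId> ConversationIds { get; set; }
}

// A member's conversations are then a single _id lookup followed by
// an indexed $in query against the Conversations collection:
var member = members.Find(m => m.Id == user.Id).First();
var filter = Builders<Conversation>.Filter.In(c => c.Id, member.ConversationIds);
var conversations = conversationsCollection.Find(filter).ToList();
```

The trade-off is that both documents must be kept in sync whenever membership changes.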
I'm developing a cross-platform app with Xamarin.Forms and I'm looking for a way to store a List of objects directly in Elasticsearch so I can later search for results based on the objects in the lists. My scenario is the following:
public class Box
{
    [String(Index = FieldIndexOption.NotAnalyzed)]
    public string id { get; set; }

    public List<Category> categories { get; set; }
}

public class Category
{
    [String(Index = FieldIndexOption.NotAnalyzed)]
    public string id { get; set; }

    public string name { get; set; }
}
My aim is to be able to search for all the boxes that have a specific category.
I have tried to map everything properly as the documentation says, but when I store a box, only the first category is stored.
Is there actually a way to do it or is it just not possible with NEST?
Any tips are very welcome!
Thanks
It should just work fine with AutoMap using the code in the documentation:
If the index does not exist:
var descriptor = new CreateIndexDescriptor("indexyouwant")
    .Mappings(ms => ms
        .Map<Box>(m => m.AutoMap()));
and then call something like:
await client.CreateIndexAsync(descriptor).ConfigureAwait(false);
or, when not using async:
client.CreateIndex(descriptor);
If the index already exists
Then forget about creating the CreateIndexDescriptor part above and just call:
await client.MapAsync<Box>(m => m.Index("existingindexname").AutoMap()).ConfigureAwait(false);
or, when not using async:
client.Map<Box>(m => m.Index("existingindexname").AutoMap());
Once you have successfully created a mapping for a type, you can index the documents.
Is it possible that you first had just one category in a box and mapped that to the index (before you made it a List)? In that case I guess you have to edit the mapping manually, for example in Sense.
I don't know if you already have important data in your index, but you could also delete the whole index (the mapping will be deleted too) and try again. You'll lose all the documents you already indexed in that index, though.
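For reference, the manual edit in Sense could look roughly like this (a sketch for the 1.x-era mapping syntax that NotAnalyzed implies; the index and type names are assumptions):

```
PUT /yourindex/_mapping/box
{
  "properties": {
    "categories": {
      "properties": {
        "id":   { "type": "string", "index": "not_analyzed" },
        "name": { "type": "string" }
      }
    }
  }
}
```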
I have a database table which represents accounts with a multi-level hierarchy. Each row has an "AccountKey" which represents the current account and possibly a "ParentKey" which represents the "AccountKey" of the parent.
My model class is "AccountInfo" which contains some information about the account itself, and a List of child accounts.
What's the simplest way to transform this flat database structure into a hierarchy? Can it be done directly in LINQ or do I need to loop through after the fact and build it manually?
Model
public class AccountInfo
{
    public int AccountKey { get; set; }
    public int? ParentKey { get; set; }
    public string AccountName { get; set; }
    public List<AccountInfo> Children { get; set; }
}
LINQ
var accounts =
    from a in context.Accounts
    select new AccountInfo
    {
        AccountKey = a.AccountKey,
        AccountName = a.AccountName,
        ParentKey = a.ParentKey
    };
The structure you currently have is actually a hierarchy (an adjacency list model). The question is, do you want to keep this hierarchical model? If you do, there's a Nuget package called MVCTreeView. This package works directly with the table structure you describe - in it, you can create a Tree View for your UI, implement CRUD operations at each level, etc. I had to do exactly this and I wrote an article on CodeProject that shows how to cascade delete down an adjacency list model table in SQL via C#. If you need more specifics, leave a comment, and I'll edit this post.
http://www.codeproject.com/Tips/668199/How-to-Cascade-Delete-an-Adjace
You can simply create an association property for the parent key:
public class AccountInfo
{
    ... // stuff you already have
    public virtual AccountInfo Parent { get; set; }
}

// in the configuration (this is using Code-first configuration)
conf.HasOptional(a => a.Parent)
    .WithMany(p => p.Children)
    .HasForeignKey(a => a.ParentKey);
With this setup, you can traverse the hierarchy in either direction, in queries or outside of queries via lazy loading. If you want lazy loading of the children, make sure to make the Children property virtual as well.
To select all children for a given parent, you might run the following query:
var children = context.Accounts
    .Where(a => a.AccountKey == someKey)
    .SelectMany(a => a.Children)
    .ToArray();
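If you'd rather keep the flat query from the question and assemble the hierarchy in memory afterwards, a single pass over a lookup does it (a sketch against the AccountInfo model above):

```csharp
public static List<AccountInfo> BuildTree(IEnumerable<AccountInfo> flat)
{
    var all = flat.ToList();

    // Group accounts by parent key so children can be attached in one pass.
    var byParent = all.ToLookup(a => a.ParentKey);

    foreach (var account in all)
        account.Children = byParent[account.AccountKey].ToList();

    // The roots are the accounts without a parent; each comes back
    // with its full subtree attached, since every element's Children
    // list was populated in the same pass.
    return byParent[null].ToList();
}
```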
I'm wondering how to handle an object that is used in multiple locations. Take the following example code:
public class Group
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
    public List<Person> People { get; set; }
    public List<Meeting> Meetings { get; set; }
}

public class Meeting
{
    public string Subject { get; set; }
    public List<Person> Attendees { get; set; }
}

public class Person
{
    public string Name { get; set; }
}
If I store the group as a MongoDB document, it will serialize all the people and meetings. However, the same Person object can be referred to both in the People list and as an attendee of a meeting, and once serialized they become separate objects. How can I maintain that the same "Person" object is in both the People list and the Meetings list?
Or is there a better way to model this? One option is to put the "People" in a separate document and embed/reference it, but that starts to create more and more separate collections; ideally I'd like to maintain references within a single document.
Or, within a document, should I give each person an Id, keep one master list, and store only lists of Ids in the "Meetings", with some kind of helper method to resolve each Id from the master list? It can be done, but it's a little ugly.
I'm not an expert with MongoDB but I think in this scenario each of these items should be a separate collection with references to get the results you are after.
Meetings have a Group ID and a list of Person ID attendees.
Groups have a list of Person ID members (people).
If a person can only belong to one group then they can have a single group ID.
Once they go into the database, the only option you have with your existing design is checking for name equality, which, as you say, can be done but doesn't seem like the right approach.
Essentially you are using the embedded relationship model in how you store 'Person' in 'Group' and 'Meeting', but if you want the same 'Person' object in both, then you need to use references, at least for Attendees. This seems like the simplest approach to me while not 'fighting' against the standard behaviour.
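A sketch of the reference-based shape described above (the Id fields and the AttendeeIds/PersonIds names are illustrative, not an existing API):

```csharp
public class Person
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}

public class Meeting
{
    public ObjectId GroupId { get; set; }
    public string Subject { get; set; }

    // References into the People collection instead of embedded copies.
    public List<ObjectId> AttendeeIds { get; set; }
}

public class Group
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }

    // Members by reference; the same id can also appear
    // in a meeting's AttendeeIds, so there is one Person document.
    public List<ObjectId> PersonIds { get; set; }
}
```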
I have a collection in RavenDB of this class...
public class Report
{
    public string User { get; set; }
    public int Quarter { get; set; }
    public int Year { get; set; }
    public string ReportData { get; set; }
}
There is only one report per quarter, per year, for each user (so the identifying key is { User, Quarter, Year }). I want to create a function that saves a list of these Reports, overwriting old ones or inserting new ones as needed. I came up with this:
public void Save(IList<Report> reports)
{
    session.Query<Report>()
        .Join(reports,
            x => new { x.User, x.Quarter, x.Year },
            y => new { y.User, y.Quarter, y.Year },
            (x, y) => new { OldReport = x, NewReport = y })
        .ForEach(report =>
        {
            if (report.OldReport != null)
                report.OldReport.InjectFrom(report.NewReport);
            else
                session.Store(report.NewReport);
        });

    session.SaveChanges();
}
However, RavenDB does not support the .Join operator. Edit: I just realized this would also need to be a right outer join to work, but I think it communicates my intent. I know I need to do some sort of map/reduce to make this happen, but I'm new to RavenDB and can't find any good examples relevant to what I am doing. Has anyone tried something like this?
P.S. The .InjectFrom() operation is using Omu.ValueInjecter, if anyone was wondering.
There are multiple ways to do this, but the easiest way would be to provide your own document key instead of using the one Raven generates. This is often referred to as using "structured" or "semantic" keys. Here is a good description of the general technique.
Simply add a string Id property to your class. You want the document key to reflect the unique key you described, so probably it should have a value such as "reports/2013/Q1/bob" (but you might want a more unique value for user).
You can let .Net construct the key for you in the property getter, such as:
public class Report
{
    public string Id
    {
        get { return string.Format("reports/{0}/Q{1}/{2}", Year, Quarter, User); }
    }

    public string User { get; set; }
    public int Quarter { get; set; }
    public int Year { get; set; }
    public string ReportData { get; set; }
}
Now you simply store the documents:

foreach (var report in reports)
    session.Store(report);
If there is already a document with the same key, it will be overwritten with your new data. Otherwise, a new document will be written.
If you can't manipulate the document key, other techniques you could look into are:
You could run a query to delete any documents matching your changed data first. Then you could insert all of the data. But getting the query right will be difficult since there are multiple fields to match on. It is possible, but the technique is challenging.
You could use the Patching API to manipulate the data of the document already stored. Although you would still have to query to figure out which are new inserts and which are updates. Also, the patch would have to be tested against your entire database, so it would be slow.
I'm sure there are a few other ideas, but your safest and easiest bet is to go with semantic keys for the reports.
I have an entity that looks like this:
public class Album
{
    public virtual string Name { get; set; }

    public virtual IEnumerable<Media> { get; set; }

    public virtual IEnumerable<Picture>
    {
        get { return Media.OfType<Picture>(); }
    }

    public virtual IEnumerable<Video>
    {
        get { return Media.OfType<Video>(); }
    }

    public virtual IEnumerable<Audio>
    {
        get { return Media.OfType<Audio>(); }
    }
}
Where Media is the abstract base class and Picture, Video, and Audio are subtypes of Media, so the IEnumerable<Media> collection is heterogenous.
I have a DTO for Album that looks like this:
public class AlbumDTO
{
    public string Name { get; set; }
    public int PictureCount { get; set; }
    public int VideoCount { get; set; }
    public int AudioCount { get; set; }
}
Each count is populated by calling <collection>.Count(). Although this code works and I get the count for each media type, the generated SQL is less than ideal:
SELECT * FROM Media WHERE media.Album_id = 1
SELECT * FROM Media WHERE media.Album_id = 2
SELECT * FROM Media WHERE media.Album_id = 3
In other words, it grabs all the Media from the database first, and then performs the OfType<T>().Count() in memory. The problem is that if I do this over all the Albums, it will select all the Media from the database, which could be thousands of records. I'd prefer to see something like this (I'm using table-per-hierarchy mapping):
SELECT COUNT(*) FROM Media WHERE media.Album_id = 1 AND discriminator = 'Picture'
SELECT COUNT(*) FROM Media WHERE media.Album_id = 1 AND discriminator = 'Video'
SELECT COUNT(*) FROM Media WHERE media.Album_id = 1 AND discriminator = 'Audio'
Does anyone know how I can configure NHibernate to do this? Or will I have to modify my Album entity in order to get the correct behavior?
First off, your code won't compile; you're missing the property name of the IEnumerable<Media> (I assume it's Media), and the names of the filtered properties as well.
Second, you have to understand a little about what's going on. From this behavior, I'm pretty sure you've mapped your Album with a HasMany relationship to Media. NH lazy-loads by default, so when you first retrieve the Album from the DB, Media is given a reference to an NHibernate object called a PersistentBag. This is simply a placeholder that looks like an IEnumerable, and holds the logic to populate the real list when it is actually needed. All it can do is pull the records as mapped in the HBM when its GetEnumerator() method is called (and that happens in virtually every Linq method). So, when you call OfType, you're not working with an NHibernate IQueryable anymore, which can build a SQL statement that does exactly what you want. Instead, you're asking for each element in the list you think you already have, and NHibernate complies.
You have some options if all you want is the Count. The easiest, if possible, is simply to go back to the Session and ask for a whole new query on Album:

session.Linq<Album>()
    .Where(a => a.Id == 1)
    .SelectMany(a => a.Media.OfType<Picture>())
    .Count();

This directly builds a statement that goes to the DB and gets the count of the records. You're not lazy-loading anything; you're asking the repository for a count, which it knows how to translate directly to SQL.