RavenDB: Asynchronous SaveChanges affecting later updates? - c#

As part of learning RavenDB, I am trying to update a collection of stocks based on a list I download nightly.
I have a Stock class where Id is the stock symbol:
public class Stock
{
public string Id { get; set; }
public StockStatus Status { get; set; }
}
I'm trying to sync the list with this algorithm:
Update or insert all stocks downloaded now as "StillActive".
Any stocks with the status of "Active" from last time means they weren't in the update and need to be "Deleted".
All stocks still "StillActive" become the new "Active" stocks.
Here is the implementation:
List<Stock> stocks = DownloadStocks();
using (var session = RavenContext.Store.OpenSession())
{
foreach (Stock stock in stocks)
{
stock.Status = StockStatus.StillActive;
session.Store(stock);
}
session.SaveChanges();
session.PatchUpdateCutoffNow("Stocks/ByStatus", "Status:Active", "Status", StockStatus.Deleted);
session.PatchUpdateCutoffNow("Stocks/ByStatus", "Status:StillActive", "Status", StockStatus.Active);
}
PatchUpdateCutoffNow is an extension method that does an UpdateByIndex with a Cutoff of now:
public static void PatchUpdateCutoffNow(this IDocumentSession session, string indexName, string query, string name, object val)
{
session.Advanced.DatabaseCommands.UpdateByIndex(indexName,
new IndexQuery() { Query = query, Cutoff = DateTime.Now },
new[]
{
new PatchRequest
{
Type = PatchCommandType.Set,
Name = name,
Value = val.ToString()
}
});
}
I end up with a lot of stocks marked Deleted that shouldn't be. My guess is that SaveChanges is asynchronous and doesn't finish by the time PatchUpdateCutoffNow starts, so some stocks end up with a status of "Deleted" when they should be "Active". I guess the IndexQuery cutoff doesn't apply because SaveChanges isn't applied directly against the "Stocks/ByStatus" index.
Is there a way to make SaveChanges synchronous or some other way to do this that fits more with the NoSQL/RavenDB way of thinking?

The documents are stored right away, but the patch commands work on an index which hasn't been updated yet. Maybe this helps, inserted between SaveChanges() and the patching:
using (var s = RavenContext.Store.OpenSession()) {
s.Query<Stock>("Stocks/ByStatus")
.Customize(c => c.WaitForNonStaleResultsAsOfNow())
.Take(0)
.ToArray();
}
By the way, you don't need a session for DatabaseCommands, you can call them directly on the store.
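Putting both points together, a sketch of the deletion patch could look like this (assuming the older client API the question uses; the `allowStale: false` overload of UpdateByIndex, which fails fast instead of patching against a stale index, is an assumption based on that client version):

```csharp
// Wait for the index to catch up with the documents just saved...
using (var session = RavenContext.Store.OpenSession())
{
    session.Query<Stock>("Stocks/ByStatus")
        .Customize(c => c.WaitForNonStaleResultsAsOfNow())
        .Take(0)
        .ToArray(); // forces the wait without fetching documents
}

// ...then patch directly on the store, no session needed.
RavenContext.Store.DatabaseCommands.UpdateByIndex(
    "Stocks/ByStatus",
    new IndexQuery { Query = "Status:Active" },
    new[]
    {
        new PatchRequest
        {
            Type = PatchCommandType.Set,
            Name = "Status",
            Value = StockStatus.Deleted.ToString()
        }
    },
    allowStale: false);
```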

Related

Sorted and indexed WinForms ListBox items

There is a Client app with a ListBox containing Records sorted by the Record's Time attribute. On application start the Client loads Records from the server and starts listening for server updates. When the server informs the Client about a new Record, the Client adds the Record to the ListBox (possibly with a Time before the last shown Record's Time). When the server informs about an update or delete, the Client finds the Record by ID and updates or deletes it (when a Record's Time changes, the order of Records in the ListBox must change too).
I guess something like a sorted dictionary with a comparer over the value's Time is required.
public partial class RecordsForm : Form
private System.Windows.Forms.ListBox recordsListBox;
private SortedDictionary<long, Record> recordsDictionary;
public RecordsForm()
{
InitializeComponent();
// this is not working because comparer must compare keys
recordsDictionary = new SortedDictionary<long, Record>(new RecordComparer());
var recordsBinding = new BindingSource();
recordsBinding.DataSource = recordsDictionary;
recordsListBox.DataSource = recordsBinding;
}
public void HandleCreateUpdate(Record record)
{
recordsDictionary[record.Id] = record;
}
public void HandleDelete(Record record)
{
if (recordsDictionary.ContainsKey(record.Id))
{
recordsDictionary.Remove(record.Id);
}
}
}
class Record {
public long Id { get; set; }
public DateTime Time { get; set; }
public String Title { get; set; }
}
class RecordComparer : Comparer<Record>
{
public override int Compare(Record left, Record right)
{
return left.Time.CompareTo(right.Time);
}
}
Or is there another pattern used for something like this?
EDIT: Added screenshot
The list of Records is always sorted by Time descending. I want to synchronize this list across clients. When one client edits/adds/deletes a record, other clients should reflect the changes without reloading the entire list or iterating over all items.
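One common pattern is to key the sorted container by a composite (Time, Id) key, so the comparer compares keys (as SortedDictionary requires) while ties on Time are still allowed, and to re-insert a record on update so it moves to its new position. A minimal sketch (names are illustrative, not WinForms-specific):

```csharp
using System;
using System.Collections.Generic;

class Record
{
    public long Id { get; set; }
    public DateTime Time { get; set; }
    public string Title { get; set; }
}

class RecordStore
{
    // Composite key: newest Time first, Id breaks ties between equal Times.
    private readonly SortedDictionary<(DateTime Time, long Id), Record> _byTime =
        new SortedDictionary<(DateTime Time, long Id), Record>(
            Comparer<(DateTime Time, long Id)>.Create((a, b) =>
            {
                int cmp = b.Time.CompareTo(a.Time); // descending by Time
                return cmp != 0 ? cmp : a.Id.CompareTo(b.Id);
            }));

    // Secondary index so updates/deletes by Id stay O(log n).
    private readonly Dictionary<long, Record> _byId = new Dictionary<long, Record>();

    public void CreateOrUpdate(Record record)
    {
        Delete(record.Id);                          // drop the old position, if any
        _byId[record.Id] = record;
        _byTime[(record.Time, record.Id)] = record; // re-insert at the new position
    }

    public void Delete(long id)
    {
        if (_byId.TryGetValue(id, out var old))
        {
            _byTime.Remove((old.Time, old.Id));
            _byId.Remove(id);
        }
    }

    public IEnumerable<Record> Ordered => _byTime.Values;
}
```

After each change you would rebind the ListBox (e.g. set DataSource to `store.Ordered.ToList()` or resync a BindingList), since SortedDictionary raises no change notifications of its own.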

How to query a specific item from a JSON object which has a dictionary?

How do I query the below JSON object collection using SQL/LINQ to get the 'ordername' of orders whose item(s) come ONLY from a particular 'factory', i.e. factory1?
Note: the JSON object collections are Cosmos DB collections.
I know the JSON structure below can be improved by replacing the dictionary with an array, but that change is not feasible at the moment.
{
id: 123,
ordername: "order1",
itemdictionary: {
item1: {
id: "item1",
name: "milk",
manufacture: {
name: "factory1",
location: "location1"
}
},
item2: {
id: "item2",
name: "curd",
manufacture: {
name: "factory2",
location: "location2"
}
}
},
orderamt: "5$"
}
{
id: 1234,
ordername: "order2",
itemdictionary: {
item1: {
id: "item1",
name: "honey",
manufacture: {
name: "factory3",
location: "location3"
}
},
item2: {
id: "item2",
name: "milk",
manufacture: {
name: "factory1",
location: "location1"
}
}
},
orderamt: "7$"
}
c# representation:
public class OrderModel
{
public string OrderAmt{ get; set; }
public string Id{ get; set; }
public Dictionary<string,ItemDescription> ItemDictionary{ get; set; }
public class ItemDescription
{
public string Id { get; set; }
public string Name{ get; set; }
public Manufacture Manufacture{ get; set; }
}
}
public class Manufacture
{
public string Location{ get; set; }
public string Name{ get; set; }
}
Query tried, which returns all records without applying the filter; I need to apply a filter like (Manufacture.Name == "factory1"):
var query = dbClient.CreateDocumentQuery<OrderModel>(collectionUri,
feedOptions).Where(w => w.ItemDictionary.Values.Where(i =>
i.Manufacture.Name == "factory1") != null).AsDocumentQuery();
Firstly, as you mentioned yourself, this dictionary is a rather inefficient model for your needs. You really should plan to migrate to a better model at some point; it may be easier (and cheaper) than constantly hassling with issues like the one you are having now.
I don't think you can do it with a clean indexed SQL query alone. What you could try:
Append factory summary to model
If you do have some tools set up to upgrade models, then you could normalize the used factories to /factories[] or similar. This extra field could be indexed and queried trivially and efficiently.
Note that adding a field to Cosmos DB will not break your existing applications. It's cheap and simple as long as you have a way of ensuring the new array is kept in sync on insert/update (e.g. via some pipe in the data layer) and of upgrading any old documents via your schema upgrade tools.
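For example, assuming a hypothetical denormalized `factories` array (the distinct factory names used by an order's items, kept in sync on every write), the "only from factory1" question becomes a cheap, indexable query:

```csharp
// Hypothetical: each document carries
//   "factories": ["factory1", "factory2"]
// kept in sync with itemdictionary on insert/update.
// "Items ONLY from factory1" = the distinct set is exactly {factory1}.
var spec = new SqlQuerySpec(
    "SELECT c.ordername FROM c " +
    "WHERE ARRAY_LENGTH(c.factories) = 1 " +
    "AND ARRAY_CONTAINS(c.factories, @factory)",
    new SqlParameterCollection { new SqlParameter("@factory", "factory1") });

var query = dbClient.CreateDocumentQuery<OrderModel>(collectionUri, spec, feedOptions);
```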
Prefilter in server, final filter in client
If you really, really can't touch the model at all, then the best you can do is apply a partial filter on the Cosmos DB side and the final filter in the client. Depending on the data, this can be costly and messy.
For example:
Set up index for first item:
/itemdictionary/item1/manufacture/name
SQL query:
SELECT *
FROM c
where (c.itemdictionary.item1.manufacture.name = @factory)
and ( not is_defined(c.itemdictionary.item2.manufacture.name) or c.itemdictionary.item2.manufacture.name = @factory)
and ( not is_defined(c.itemdictionary.item3.manufacture.name) or c.itemdictionary.item3.manufacture.name = @factory)
and ( not is_defined(c.itemdictionary.item4.manufacture.name) or c.itemdictionary.item4.manufacture.name = @factory)
In the client, after the SQL query has returned its results, eliminate the results that include other factories. That is, execute .AsDocumentQuery() first and then apply the predicate .Where(w => w.ItemDictionary.Values.All(i => i.Manufacture.Name == "factory1")) to the fetched results.
This is inefficient if orders have many items on average and the chosen N items do not limit the results sufficiently. Note that you don't have to choose the prefilter indexes sequentially: depending on the data it may make more sense to precheck items [1, 2, 10, 20] or similar. You can also tune N to get the best out of the prefilter for the average case.
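A sketch of the two-stage filter; `prefilterSqlSpec` stands in for the server-side SQL shown above, and the code assumes the same DocumentDB SDK usage as the question:

```csharp
// Inside an async method.
// Stage 1: server-side prefilter (the SQL above) narrows candidates cheaply.
var documentQuery = dbClient
    .CreateDocumentQuery<OrderModel>(collectionUri, prefilterSqlSpec, feedOptions)
    .AsDocumentQuery();

// Stage 2: final filter on the client, over the fetched pages.
var matches = new List<OrderModel>();
while (documentQuery.HasMoreResults)
{
    var page = await documentQuery.ExecuteNextAsync<OrderModel>();
    matches.AddRange(page.Where(o =>
        o.ItemDictionary.Values.All(i => i.Manufacture.Name == "factory1")));
}
```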
Filter in client with UDF
I usually hate UDFs due to their development cost and maintenance pains, but it is an option to script the filter with JS (which can do anything) and do all the filtering on server side.
With a UDF you do save on bandwidth, but note:
A UDF mixes your querying logic into the data layer. All clients of this data will share a single UDF version on the server.
You still need to prefilter with an indexable property to avoid a costly full-scan.
UDF queries are RU-costly. Full-scan + UDF is usually a no-go.

Wpf - How to control observable collection updates

In the parent there is an ObservableCollection PendingPayment that holds a list of all pending payments of sales, with an amount-paid column.
Then the user can select a particular sale and open it in new child window.
What's going wrong is that if the user just edits the paid-amount text box in the child window and closes the window without saving the new paid amount to the database, the ObservableCollection backing the amount-paid column in the parent window still gets updated.
What I want is for the collection to be updated only when the values are updated in the database.
This can be achieved by creating a copy of your Sale object when the user selects it in the list, and then using this copy as the view model of your child view.
You will then be able to set the new values on the original object from your list only once the save button has been clicked and the database update has succeeded.
Another way to proceed, if you need to edit only a few of the object's properties, would be to create an editor object and use it as the child window's view model.
Something like this :
public class Sale
{
public int PaidAmount { get; set; }
public int Some { get; set; }
public int More { get; set; }
public int Properties { get; set; }
}
public class SaleEditor
{
private Sale _sale;
public int PaidAmount { get; set; }
public SaleEditor(Sale sale)
{
_sale = sale;
PaidAmount = sale.PaidAmount;
}
public void Save()
{
// update your data here
_sale.PaidAmount = PaidAmount;
}
}
If you need your original object to update the database, then the Save method could first update the object and then revert the changes if the DB update failed:
public void Save()
{
var oldAmount = _sale.PaidAmount;
_sale.PaidAmount = PaidAmount;
if (!SalesDB.Update(_sale))
_sale.PaidAmount = oldAmount;
// you could also read back the value from DB
}
Whenever possible (I've never seen a reason why not), use proxy or flattened objects for listing purposes; you can implement this using projection queries. The user then selects an item from the list, and the only thing you need to grab is a key to load the full object with whatever object graph the use case dictates.
Here is a sample implementation using Entity Framework and C# lambda expressions:
Using anonymous object:
var anonymousListProjection = DbContext.PendingPayments.Select( pp=>
new { pp.Order, pp.Amount})
Using a hardcoded proxy:
var hardcodedListProjection = DbContext.PendingPayments.Select( pp=>
new PendingPaymentProxy { Order = pp.Order, Amount = pp.Amount})
//To return an observable:
var observableColl = new ObservableCollection<PendingPaymentProxy>(hardcodedListProjection.ToList());
public class PendingPaymentProxy
{
public string Order { get; set; }
public decimal Amount{ get; set; }
}
Apart from avoiding possible performance problems due to unintentionally loading real objects, this way you only have to worry about your list when the user saves in the detail view.

Code-First Entity Framework w/ Stored Procedure returning results from complex Full-text Searches

I am looking for design advice for the following scenario:
I have a code-first EF5 MVC application. I am building a full-text search function which will incorporate multiple weighted columns from many tables. As I cannot create an indexed view over these tables (some of them contain text/binary columns), I have created a stored procedure which outputs the ID of my object (e.g. PersonID) and the rank associated with that object based on the search terms.
My current approach is to create a helper class for executing full text searches which call the stored procedure(s) and load all the objects from the context based on the returned IDs.
My questions are:
Does my approach seem sensible / follow reasonable best practice?
Has anyone else done something similar with any lessons learned?
Is there a way to do this more efficiently (i.e. have the results of the stored procedure return/map to the entities directly without an additional look-up required?)
UPDATE
Moved my detailed implementation from an edit of the question into its own answer, to be more in line with what is frequently recommended at meta.stackexchange.com.
Seeing as you can't use SQL constructs like CONTAINSTABLE with Entity Framework code-first, which the rest of your application could be using, you could be 'forced' to do something with a stored procedure like you describe. Whether it's best practice I don't know, but if it gets the job done I don't see why it wouldn't be sensible.
Yes - I have, and still am, working on a project built around EF code-first where I had to do a fairly complex search that included several search parameters marked as 'must have' and several values marked as 'nice to have', and from that return a weighted result.
Depending on the complexity of the result set, I don't think you need to do a second roundtrip to the database; I will show you a way I have been doing it below.
Bear in mind that below is simply an example:
public List<Person> GetPeople(params string[] p)
{
var people = new List<Person>();
using (var db = new DataContext())
{
var context = ((IObjectContextAdapter)db).ObjectContext;
db.Database.Connection.Open();
var command = db.Database.Connection.CreateCommand();
command.CommandText = "SomeStoredProcedureReturningWeightedResultSetOfPeople";
command.CommandType = System.Data.CommandType.StoredProcedure;
//Add parameters to command object
people = context.Translate<Person>(command.ExecuteReader()).ToList();
}
return people;
}
Even though the stored procedure will have a column for the weight value, it won't get mapped when you translate it.
You could potentially derive a class from Person that includes the weight value if you needed it.
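For instance, a hypothetical derived type would let Translate<T> map the weight column too (assuming the stored procedure's column name matches the property name):

```csharp
// Hypothetical: Person plus the weight column the stored procedure returns.
public class RankedPerson : Person
{
    public double Weight { get; set; }
}

// Then, inside GetPeople, translate to the derived type instead:
// people = context.Translate<RankedPerson>(command.ExecuteReader()).ToList();
```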
Posting this as an answer rather than an edit to my question:
Taking some of the insight provided by @Drauka's answer (and Google), here is what I did for my initial iteration.
Created the stored procedure to do the full-text searching. It was really too complex to be done in EF even if supported (as one example, some of my entities are related via business logic and I wanted to group them, returning them as a single result). The stored procedure maps to a DTO with the entity IDs and a Rank.
I modified this blogger's snippet / code to make the call to the stored procedure, and populate my DTO: http://www.lucbos.net/2012/03/calling-stored-procedure-with-entity.html
I populate my results object with totals and paging information from the results of the stored procedure and then just load the entities for the current page of results:
int[] projectIDs = new int[Settings.Default.ResultsPerPage];
int index = 0;
foreach (ProjectFTS_DTO dto in
RankedSearchResults
.Skip(Settings.Default.ResultsPerPage * (pageNum - 1))
.Take(Settings.Default.ResultsPerPage)) {
projectIDs[index] = dto.ProjectID;
index++;
}
IEnumerable<Project> projects = _repository.Projects
.Where(o=>projectIDs.Contains(o.ProjectID));
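One caveat with the Contains() lookup above: EF returns the entities in database order, not in the stored procedure's rank order, so it may be worth re-ordering on the client. A small sketch:

```csharp
// Re-apply the stored procedure's ranking after the unordered Contains() fetch.
var byId = projects.ToDictionary(p => p.ProjectID);
List<Project> ordered = projectIDs
    .Where(byId.ContainsKey)   // skips array slots never filled on the last page
    .Select(id => byId[id])
    .ToList();
```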
Full Implementation:
As this question receives a lot of views I thought it may be worth while to post more details of my final solution for others help or possible improvement.
The complete solution looks like:
DatabaseExtensions class:
public static class DatabaseExtensions {
public static IEnumerable<TResult> ExecuteStoredProcedure<TResult>(
this Database database,
IStoredProcedure<TResult> procedure,
string spName) {
var parameters = CreateSqlParametersFromProperties(procedure);
var format = CreateSPCommand<TResult>(parameters, spName);
return database.SqlQuery<TResult>(format, parameters.Cast<object>().ToArray());
}
private static List<SqlParameter> CreateSqlParametersFromProperties<TResult>
(IStoredProcedure<TResult> procedure) {
var procedureType = procedure.GetType();
var propertiesOfProcedure = procedureType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
var parameters =
propertiesOfProcedure.Select(propertyInfo => new SqlParameter(
string.Format("@{0}",
(object) propertyInfo.Name),
propertyInfo.GetValue(procedure, new object[] {})))
.ToList();
return parameters;
}
private static string CreateSPCommand<TResult>(List<SqlParameter> parameters, string spName)
{
var queryString = spName;
parameters.ForEach(x => queryString = string.Format("{0} {1},", queryString, x.ParameterName));
return queryString.TrimEnd(',');
}
public interface IStoredProcedure<TResult> {
}
}
Class to hold stored proc inputs:
class AdvancedFTS :
DatabaseExtensions.IStoredProcedure<ResultsFTSDTO> {
public string SearchText { get; set; }
public int MinRank { get; set; }
public bool IncludeTitle { get; set; }
public bool IncludeDescription { get; set; }
public int StartYear { get; set; }
public int EndYear { get; set; }
public string FilterTags { get; set; }
}
Results object:
public class ResultsFTSDTO {
public int ID { get; set; }
public decimal weightRank { get; set; }
}
Finally calling the stored procedure:
public List<ResultsFTSDTO> getAdvancedFTSResults(
string searchText, int minRank,
bool IncludeTitle,
bool IncludeDescription,
int StartYear,
int EndYear,
string FilterTags) {
AdvancedFTS sp = new AdvancedFTS() {
SearchText = searchText,
MinRank = minRank,
IncludeTitle=IncludeTitle,
IncludeDescription=IncludeDescription,
StartYear=StartYear,
EndYear = EndYear,
FilterTags=FilterTags
};
IEnumerable<ResultsFTSDTO> resultSet = _context.Database.ExecuteStoredProcedure(sp, "ResultsAdvancedFTS");
return resultSet.ToList();
}

Performing an efficient upsert in MongoDB

I have the following C# model class:
public class Thingy
{
public ObjectId Id { get; set; }
public string Title { get; set; }
public DateTime TimeCreated { get; set; }
public string Content { get; set; }
public string UUID { get; set; }
}
and the following ASP.MVC controller action:
public ActionResult Create(Thingy thing)
{
var query = Query.EQ("UUID", thing.UUID);
var update = Update.Set("Title", thing.Title)
.Set("Content", thing.Content);
var t = _collection.Update(query, update, SafeMode.True);
if (t.UpdatedExisting == false)
{
thing.TimeCreated = DateTime.Now;
thing.UUID = System.Guid.NewGuid().ToString();
_collection.Insert(thing);
}
/*
var t = _collection.FindOne(query);
if (t == null)
{
thing.TimeCreated = DateTime.Now;
thing.UUID = System.Guid.NewGuid().ToString();
_collection.Insert(thing);
}
else
{
_collection.Update(query, update);
}
*/
return RedirectToAction("Index", "Home");
}
This method either does an update or insert. If it needs to do an insert, it must set the UUID and TimeCreated members. If it needs to do an update, it must leave UUID and TimeCreated alone, but must update the members Title and Content.
The code that's commented out works, but does not seem to be the most efficient. The call to FindOne is one trip to MongoDB; then if it goes to the else clause, it does another query and an update operation, so that's two more trips.
What is a more efficient way to do what I'm trying to accomplish?
As mentioned in the linked SO answer, for upserts to work, you need to update the entire document, not just a few properties.
Personally I would separate the Create and Edit into separate MVC actions. SRP. Creating a Thingy has different considerations from updating it.
If you still want to do an upsert instead of separate insert/update calls, you will need to use the following code:
_collection.Update(
Query.EQ("UUID", thing.UUID),
Update.Replace(thing),
UpdateFlags.Upsert
);
The question now becomes: how do we ensure the thing has the appropriate values in both cases, i.e. insert as well as update?
My assumption (based on your code model-binding to a Thingy instance) is that your view is sending back all fields (including UUID and TimeCreated). This implies that, in the case of an update, the view already has UUID and TimeCreated pre-populated, so the thing object holds the latest values.
Now in the case of a create, when the view is rendered, you could store DateTime.MinValue in the TimeCreated field. In your Create MVC action, you could check whether TimeCreated is DateTime.MinValue and, if so, set it to the current time and also store a new value for UUID.
This way the thing has the latest values in the insert case as well, and we can safely do an upsert.
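Putting that together, the Create action could look like this (a sketch against the legacy driver API used in the question; UpdateFlags.Upsert is that driver's upsert flag):

```csharp
public ActionResult Create(Thingy thing)
{
    // A default (MinValue) timestamp signals a brand-new document.
    if (thing.TimeCreated == DateTime.MinValue)
    {
        thing.TimeCreated = DateTime.Now;
        thing.UUID = System.Guid.NewGuid().ToString();
    }

    // Single round trip: replaces the matching document, or inserts if none.
    _collection.Update(
        Query.EQ("UUID", thing.UUID),
        Update.Replace(thing),
        UpdateFlags.Upsert,
        SafeMode.True);

    return RedirectToAction("Index", "Home");
}
```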
I take this approach when doing upserts for Mongo from the controller:
public ActionResult Create(Thingy model)
{
var thing = _collection.FindOneAs<Thingy>(Query.EQ("UUID", model.UUID));
if(thing == null)
{
thing = new Thingy {
TimeCreated = DateTime.Now,
UUID = System.Guid.NewGuid().ToString(),
Id = ObjectId.GenerateNewId()
};
else
{
thing.Content = model.Content;
//other updates here
}
_collection.Save<Thingy>(thing);
return View();
}
