Solr 4.0 and Solrnet Atomic Updates - c#

As we all know, Solr 4.0 supports atomic updates:
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22field.22
Is this supported in SolrNet yet? If so, what is the syntax?
Thanks a ton.

Thanks to the link you provided, do the following (with the obvious changes to match your requirements, and assuming you're using a DI container so that your ISolrOperations and ISolrConnection are taken care of via registration of SolrFacility):
private readonly ISolrOperations<Document> _solr;
private readonly ISolrConnection _solrConnection;

public SolrRecordRepository(ISolrOperations<Document> solr, ISolrConnection solrConnection)
{
    _solr = solr;
    _solrConnection = solrConnection;
}

...

public void UpdateField(int id, string fieldName, int value, bool optimize = false)
{
    var updateXml = string.Format(
        "<add><doc><field name='id'>{0}</field><field name='{1}' update='set'>{2}</field></doc></add>",
        id, fieldName, value);
    _solrConnection.Post("/update", updateXml);
    _solr.Commit();
    if (optimize)
        _solr.Optimize();
}

To the best of my knowledge, SolrNet does not yet support atomic updates and I do not see it listed on the SolrNet Project Issues List or any mention in the SolrNet Commits on GitHub.

Please note that atomic updates are quite limited. If you expect an update feature à la a relational database, it's not there yet. Under the hood the document is recreated from its stored fields, so it is convenient when you don't want to resend all fields and don't mind storing all the fields in the index.
As far as I know, a 'real' update is due to come soon.
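To round this out, the same raw-XML approach extends to Solr's other atomic modifiers: besides update='set' there are 'add' (append to a multi-valued field) and 'inc' (increment a numeric field). A minimal sketch; the helper name and the field values in the usage note are hypothetical:

```csharp
using System;

// Builds a Solr atomic-update XML message for a single field.
// "modifier" is one of Solr's atomic attributes: "set", "add", or "inc".
public static class AtomicUpdateXml
{
    public static string Build(string id, string fieldName, object value, string modifier)
    {
        return string.Format(
            "<add><doc><field name='id'>{0}</field><field name='{1}' update='{2}'>{3}</field></doc></add>",
            id, fieldName, modifier, value);
    }
}
```

Usage would mirror the repository above, e.g. posting `AtomicUpdateXml.Build("42", "views", 1, "inc")` to "/update" via the ISolrConnection and then committing.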

Related

Update Azure search document schema

I'm encountering an error when trying to update an Azure search document schema through a code-first approach.
Our current entity has the schema:
public class SearchDocument
{
    [DataAnnotations.Key]
    public string ID;

    [IsSearchable]
    public string Title;

    [IsSearchable]
    public string Content;
}
but I want to add a field so it becomes this:
public class SearchDocument
{
    [DataAnnotations.Key]
    public string ID;

    [IsSearchable]
    public string Title;

    [IsSearchable]
    public string Content;

    [IsSortable]
    public bool Prioritize;
}
and when running a re-indexing query, I get the error:
"The request is invalid. Details: parameters : The property 'Prioritize' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type."
Which makes sense... but is there a way to check beforehand whether the schema matches, and to update the Azure schema? I know other cloud databases, like Parse (now dead) and Backendless (which may have changed since I last used it), had automatic entity updating based on the entity schema POSTed, so is there any way to do this in Azure?
I've found a couple of articles on MSDN about updating the indexers, but haven't been able to test them (due to a closed dev environment... I know, I'm sorry).
One thing in particular that concerns me is the warning in the first link:
Important
Currently, there is limited support for index schema updates. Any schema
updates that would require re-indexing such as changing field types are not
currently supported.
So is this even possible? Or do I have to update the schema locally and then log into the Azure portal and update the indexer schema manually? Any help is very appreciated!
Azure Search does support incremental changes to an Azure Search index. What this means is that you are able to add new fields (as you are doing here with your Prioritize field). Based on what you have stated, it looks like you are using the .NET SDK. For that reason, I wonder if you have tried the CreateOrUpdate index operation? For example:
serviceClient.Indexes.CreateOrUpdate
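For example, a minimal sketch assuming the Microsoft.Azure.Search .NET SDK; the service name, admin key, and index name are placeholders, and CreateOrUpdate applies additive changes such as the new Prioritize field without recreating the index:

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Placeholder credentials -- substitute your own service name and admin key.
var serviceClient = new SearchServiceClient(
    "my-search-service", new SearchCredentials("my-admin-key"));

var definition = new Index
{
    Name = "searchdocuments", // hypothetical index name
    // FieldBuilder derives the field list from the attribute-decorated
    // SearchDocument class, so the new Prioritize field is picked up.
    Fields = FieldBuilder.BuildForType<SearchDocument>()
};

// Creates the index if missing, or applies the additive schema change.
serviceClient.Indexes.CreateOrUpdate(definition);
```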

What is the best way to set up a Provider base-state using Pact.Net and .Net Core?

The (Ruby) documentation of Pact describes the possibility of adding a provider base state in the provider states. I'm using Pact.Net with the ProviderStateMiddleware, but I can't figure out how to set up the base state with this implementation. Is it possible to do this, and/or does anyone have experience setting it up?
Thanks in advance!
There is no built-in functionality for a base state (that I know of; Neil Campbell, the maintainer, may correct me). I would recommend implementing a method that is called at the start of each provider-state setup, clears the datastore completely, and then sets up the base-state data.
Thank you for asking. I have recently started researching pact-net, and I am also interested in guidance.
I found this Example Workshop for .Net Core very helpful.
Supplementing the example ProviderMiddleware, we added something like the following (assuming relational db with EF):
private void EnsureBaseState()
{
    // Drop and recreate the database so every state starts from scratch.
    // (After EnsureDeleted there is no database to save changes against,
    // so EnsureCreated is needed before any state adds data.)
    _context.Database.EnsureDeleted();
    _context.Database.EnsureCreated();
}

private void EnsureSecondState()
{
    EnsureBaseState();
    _context.ExampleItems.Add(new ExampleItem { Id = 1, Name = "sample item" });
    _context.SaveChanges();
}
This is how we are currently managing multiple states, with the additional states calling a base state in the middleware.
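The dispatch side of the middleware can be sketched without EF at all; a minimal illustration of the pattern, where the state descriptions and the in-memory list are hypothetical stand-ins for real provider states and a real datastore:

```csharp
using System;
using System.Collections.Generic;

// Maps each Pact provider-state description to a setup action.
// Every action starts from the same base state, mirroring the
// EnsureBaseState()/EnsureSecondState() structure above.
public class ProviderStateSetup
{
    public List<string> Items { get; } = new List<string>();
    private readonly Dictionary<string, Action> _states;

    public ProviderStateSetup()
    {
        _states = new Dictionary<string, Action>
        {
            ["there are no items"] = EnsureBaseState,
            ["an item with id 1 exists"] = () =>
            {
                EnsureBaseState();            // every state builds on the base state
                Items.Add("sample item");
            }
        };
    }

    private void EnsureBaseState() => Items.Clear();

    // Called by the middleware with the providerState from the request body.
    public void Apply(string providerState) => _states[providerState]();
}
```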

Why does SignalR use IList in its contracts and everywhere in its internals instead of IEnumerable?

I'm sending messages to individual users depending on their roles, to accomplish that I have the following piece of code:
public static void Add(Guid userId, IEnumerable<SnapshotItem> snapshot)
{
    var hub = GlobalHost.ConnectionManager.GetHubContext<FeedbackHub>();
    var items = ApplicationDbContext.Instance.InsertSnapshot(userId, Guid.NewGuid(), snapshot);
    foreach (var sendOperation in ConnectedUsers.Instance.EnumerateSendOperations(items))
    {
        hub.Clients.Users(sendOperation.Groups
                .SelectMany(x => x.Users)
                .Select(x => x.Id)
                .ToList())
            .OnDataFeedback(sendOperation.Items);
    }
}
I'm not sure why I have to invoke .ToList() each time I need to send something. My backing store is a HashSet<string>, and I want SignalR to work with that type of store directly instead of converting it to a List each time, since that obviously costs processing power and memory.
Since behind the scenes SignalR simply iterates over the users or connectionIds argument, wouldn't it be wiser to use IEnumerable instead of IList? I've looked into the SignalR sources, and it shouldn't be too hard to achieve. Is there a particular reason for using IList?
Edit
I created an issue on the SignalR GitHub page; I'll have to wait for one of the actual devs to clear things up...
There's no good reason for this as far as I can see from digging through the older source code. The irony of it is that the IList<string> gets handed to the MultipleSignalProxy class, where it is promptly mapped to a different format using another LINQ expression and then .ToList()'d again. So, based on that exact usage in the implementation, they really don't need anything more than IEnumerable<string>.
My best guess would be that SignalR internally relies on capabilities beyond plain iteration, such as getting the count (available on ICollection) or index-based access (available only on IList, not ICollection). The only reason to use the richer interface is that somewhere they use, or anticipate needing, that additional functionality. Otherwise I would expect the usual best practice of asking for the lightest interface that suffices, ICollection or IEnumerable, following the IEnumerable -> ICollection -> IList hierarchy.
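To make the cost concrete, here is a minimal illustration; SendTo is a hypothetical stand-in for SignalR's Users(IList<string>) signature, not the real API. The caller must copy its HashSet<string> into a fresh List<string> on every call, even though the callee only iterates:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SendDemo
{
    // A parameter typed IList<string> forces callers holding other
    // collection types to copy, even though the body only iterates.
    public static int SendTo(IList<string> userIds)
    {
        int sent = 0;
        foreach (var id in userIds)   // only iteration: IEnumerable<string> would suffice
            sent++;
        return sent;
    }

    public static int SendFromSet(HashSet<string> connectedIds)
    {
        // ToList() allocates a brand-new List<string> on every call,
        // purely to satisfy the IList<string> parameter.
        return SendTo(connectedIds.ToList());
    }
}
```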

How to send argument to class in Quartz.Net

I'm using Quartz.Net (version 2) to run a method in a class every day at 8:00 and 20:00 (IntervalInHours = 12).
Everything is OK as long as I use the same job and triggers as the Quartz.Net tutorials, but I need to pass some arguments to the class and run the method based on those arguments.
Can anyone help me with how to use arguments with Quartz.Net?
You can use JobDataMap:
jobDetail.JobDataMap["jobSays"] = "Hello World!";
jobDetail.JobDataMap["myFloatValue"] = 3.141f;
jobDetail.JobDataMap["myStateData"] = new ArrayList();
public class DumbJob : IJob
{
    // Note: in Quartz.Net 2.x the parameter is IJobExecutionContext,
    // and the job's name/group live on JobDetail.Key.
    public void Execute(IJobExecutionContext context)
    {
        string instName = context.JobDetail.Key.Name;
        string instGroup = context.JobDetail.Key.Group;
        JobDataMap dataMap = context.JobDetail.JobDataMap;

        string jobSays = dataMap.GetString("jobSays");
        float myFloatValue = dataMap.GetFloat("myFloatValue");
        ArrayList state = (ArrayList)dataMap["myStateData"];
        state.Add(DateTime.UtcNow);

        Console.WriteLine("Instance {0} of DumbJob says: {1}", instName, jobSays);
    }
}
To expand on #ArsenMkrt's answer, if you're doing the 2.x-style fluent job config, you could load up the JobDataMap like this:
var job = JobBuilder.Create<MyJob>()
    .WithIdentity("job name")
    .UsingJobData("x", x)
    .UsingJobData("y", y)
    .Build();
Abstract
Let me extend #arsen-mkrtchyan's post with a significant note which might avoid painful support of Quartz code in production:
Problem (for persistent JobStore)
Keep JobDataMap versioning in mind if you're using a persistent JobStore, e.g. AdoJobStore.
Summary (TL;DR)
Think carefully when constructing or editing your JobData; otherwise it will lead to issues when triggering future jobs.
Enable the “quartz.jobStore.useProperties” config parameter, as the official documentation recommends, to minimize versioning problems, and use JobDataMap.PutAsString() accordingly.
Details
This is also stated in the documentation, though not prominently highlighted; it can lead to a big maintenance problem if, for example, you remove some parameter in the next version of your app:
If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the object in it will be serialized, and they therefore become prone to class-versioning problems.
There is also a related note about configuring the JobStore in the relevant document:
The “quartz.jobStore.useProperties” config parameter can be set to “true” (defaults to false) in order to instruct AdoJobStore that all values in JobDataMaps will be strings, and therefore can be stored as name-value pairs, rather than storing more complex objects in their serialized form in the BLOB column. This is much safer in the long term, as you avoid the class versioning issues that there are with serializing your non-String classes into a BLOB.
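Putting the two recommendations together, a sketch assuming Quartz.Net 2.x with “quartz.jobStore.useProperties” set to “true”; the key names here are hypothetical:

```csharp
// With useProperties enabled, keep every value in the map a string so
// AdoJobStore stores name-value pairs instead of serialized BLOBs.
// PutAsString converts primitives to their string form for you.
var job = JobBuilder.Create<MyJob>()
    .WithIdentity("job name")
    .Build();

job.JobDataMap.PutAsString("retryCount", 3);   // stored as the string "3"
job.JobDataMap.PutAsString("notify", true);    // stored as a string, not a BLOB
```

Reading the values back with dataMap.GetInt("retryCount") etc. then avoids the class-versioning issues described above.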

MongoDb and self referencing objects

I am just starting to learn about MongoDB and was wondering if I am doing something wrong... I have two objects:
public class Part
{
    public Guid Id;
    public IList<Material> Materials;
}

public class Material
{
    public Guid MaterialId;
    public Material ParentMaterial;
    public IList<Material> ChildMaterials;
    public string Name;
}
When I try to save this particular object graph, I get a stack overflow error because of the circular reference. My question is: is there a way around this? In WCF I am able to set the "IsReference" attribute on the data contract to true, and it serializes just fine.
What driver are you using?
In NoRM you can create a DbReference like so
public DbReference<Material> ParentMaterial;
Mongodb-csharp does not offer strongly typed DbReferences, but you can still use them.
public DBRef ParentMaterial;
You can follow the reference with Database.FollowReference(ParentMaterial).
Just for future reference: relationships between objects that are not embedded within a sub-document structure are handled extremely well by a NoSQL ODB, which is generally designed to deal with transparent relations in arbitrarily complex object models.
If you are familiar with Hibernate, imagine that without any mapping file at all and with orders-of-magnitude better performance, because there is no runtime JOIN behind the scenes; all relations are resolved with the speed of a b-tree lookup.
Here is a video from Versant (disclosure: I work for them), so you can see how it works.
It is a little boring in the beginning, but it shows every single step needed to take a Java application and make it persistent in an ODB... then make it fault tolerant, distributed, do some parallel queries, optimize cache load, etc.
If you want to skip to the cool part, jump about 20 minutes in; you will skip the building of the application and just see how easy it is to dynamically evolve the schema and add distribution and fault tolerance to any existing application :)
If you want to store object graphs with relationships between them that require multiple 'joins' to get to the answer, you are probably better off with a SQL-style database. The document-centric approach of MongoDB and others would probably structure this rather differently.
Take a look at MongoDB nested sets which suggests some ways to represent data like this.
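If you'd rather not depend on driver-specific references at all, one common workaround is to break the cycle in the model itself: store the parent's id rather than the parent object, so the serializer never walks a loop. A sketch mirroring the classes in the question; the id-based field names are my own:

```csharp
using System;
using System.Collections.Generic;

// The cycle is broken by referencing related materials by id.
// Resolving a parent or child then takes a second lookup by id
// instead of an embedded (and circular) object reference.
public class Material
{
    public Guid MaterialId;
    public Guid? ParentMaterialId;                      // was: Material ParentMaterial
    public IList<Guid> ChildMaterialIds = new List<Guid>(); // was: IList<Material> ChildMaterials
    public string Name;
}
```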
I was able to accomplish exactly what I needed by using a modified NoRM MongoDB driver.
