I am currently working on a site to allow users to search through a custom product catalog. I have been looking around and would love to leverage Orchard CMS to help me develop this site. I have gone through Ron Peterson's YouTube series on custom Orchard modules and the Skywalker blog series.
I feel like my goal is possible, but I'm looking for some validation on whether my strategy will work within the Orchard framework.
This is my current situation:
I have a default Orchard configuration pointing to a SQL DB (named Product-Orchard).
I have a custom DAL that points to another SQL DB (named Products).
Products are made up of your typical information (Product Name, Description, Price, etc.).
The custom DAL has a POCO model called Product (with a Repository to interact with) with the properties Name, Description, Price.
Now, based on the information I have read about creating Orchard modules, it seems like the method of creating a custom module with custom content is to (a rough sketch of these pieces follows the list):
Create a Module through code gen tools (We'll call it ProductModule)
Create a custom Content Part (ProductPart)
Create a custom Content Part Record (ProductPartRecord) to act as the data model for the part.
Create a custom ContentPartHandler (ProductPartHandler) that handles the persistence of the Content Part.
Create a custom Driver that is the entry point for preparing the Shapes used to render the UI.
Potentially create a Service that interacts with the Drivers?
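For reference, a minimal sketch of the part, record, and migration from those steps might look like this (assuming Orchard 1.x conventions; this is only an outline, not final code):

public class ProductPartRecord : ContentPartRecord {
    public virtual string Name { get; set; }
    public virtual string Description { get; set; }
    public virtual decimal Price { get; set; }
}

public class ProductPart : ContentPart<ProductPartRecord> {
    public string Name {
        get { return Record.Name; }
        set { Record.Name = value; }
    }
    // Description and Price follow the same pattern.
}

public class Migrations : DataMigrationImpl {
    public int Create() {
        // Table for the part record in the Orchard database
        SchemaBuilder.CreateTable("ProductPartRecord", table => table
            .ContentPartRecord()
            .Column<string>("Name")
            .Column<string>("Description")
            .Column<decimal>("Price"));

        // The Product content type, composed from the part
        ContentDefinitionManager.AlterTypeDefinition("Product", type => type
            .WithPart("ProductPart")
            .Creatable());

        return 1;
    }
}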
This is where things start to get jumbled and I'm not sure if this is possible or not. What I would like to do is to create a custom Content Type that is backed by my custom DAL rather than having the data be stored through the ContentPartRecord inside the Product-Orchard DB, but still allow it to be indexed by the Lucene module to allow for searching of the Product catalog.
Is it possible to create a custom ContentType and/or ContentPart that is backed by a different datasource and still leverage the Lucene search capabilities?
In high level terms I'd like a Product ContentType where the ContentItems are actually stored in my secondary database, not the Orchard database (and still want to be able to leverage Lucene search via Projections).
For those searching for a similar answer, the following solution is what I settled on. There is no easy mechanism I could find to interact with a separate DAL and perform the Lucene indexing.
Create the Orchard Module
Create a new Content Part/Type via a Migration
Use the Orchard Command infrastructure to import data from your secondary database (a sketch of such a command follows the handler code below)
Use the OnIndexing event in the Content Part handler to allow Lucene to index your datasource.
Create a lookup property (I called mine ConcreteProperty) that is populated in the OnLoaded event through a Service I created in the module to interact with the secondary DAL.
My final Handler looked like this:
public class HomePartHandler : ContentHandler {
    public HomePartHandler(IRepository<HomePartRecord> repository, IHomeSearchMLSService homeSearchService) {
        // Standard storage of the part record in the Orchard database
        Filters.Add(StorageFilter.For(repository));

        // Populate the lookup property from the secondary DAL on load
        OnLoaded<HomePart>((ctx, part) => {
            part.ConcreteProperty = homeSearchService.GetByMlsNumber(part.MlsId) ?? new PropertyDetail();
        });

        // Feed each field into the Lucene document index
        OnIndexing<HomePart>((context, homePart) => context.DocumentIndex
            .Add("home_StreetFullName", homePart.Record.StreetFullName).RemoveTags().Analyze().Store()
            .Add("home_City", homePart.Record.City).RemoveTags().Analyze().Store()
            .Add("home_State", homePart.Record.State).RemoveTags().Analyze().Store()
            .Add("home_Zip", homePart.Record.Zip).RemoveTags().Analyze().Store()
            .Add("home_Subdivision", homePart.Record.Subdivision).RemoveTags().Analyze().Store()
            .Add("home_Beds", homePart.Record.Beds).RemoveTags().Analyze().Store()
            .Add("home_Baths", homePart.Record.Baths).RemoveTags().Analyze().Store()
            .Add("home_SquareFoot", homePart.Record.SquareFoot).RemoveTags().Analyze().Store()
            .Add("home_PropertyType", homePart.Record.PropertyType).RemoveTags().Analyze().Store()
            .Add("home_ListPrice", homePart.Record.ListPrice).RemoveTags().Analyze().Store()
            .Add("home_MlsId", homePart.Record.MlsId).RemoveTags().Analyze().Store()
            .Add("home_Latitude", (double)homePart.Record.Latitude).RemoveTags().Analyze().Store()
            .Add("home_Longitude", (double)homePart.Record.Longitude).RemoveTags().Analyze().Store()
        );
    }
}
This allows me to create a search service for searching through all my data and then hook it up to the model via the ConcreteProperty, which actually works better from a performance standpoint anyway.
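A command along these lines could handle the import step (a sketch only; IProductRepository and the product property names are hypothetical stand-ins for your own secondary DAL, while DefaultOrchardCommandHandler, CommandName, and IContentManager are standard Orchard pieces):

public class ProductImportCommands : DefaultOrchardCommandHandler {
    private readonly IContentManager _contentManager;
    private readonly IProductRepository _productRepository; // hypothetical secondary-DAL repository

    public ProductImportCommands(IContentManager contentManager, IProductRepository productRepository) {
        _contentManager = contentManager;
        _productRepository = productRepository;
    }

    [CommandName("product import")]
    [CommandHelp("product import: imports products from the secondary database")]
    public void Import() {
        foreach (var product in _productRepository.GetAll()) {
            var item = _contentManager.New("Product");
            var part = item.As<ProductPart>();
            part.Name = product.Name;
            part.Description = product.Description;
            part.Price = product.Price;
            // Creating the item persists it and triggers the OnIndexing event
            _contentManager.Create(item);
        }
    }
}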
I'm currently working on a custom CRM-style solution (EF/WinForms/OData WebApi) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined in the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible and even set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
    public Task(Project p) {
        this.Responsible = p.DefaultTaskResponsible;
        ...
    }
}
But how should I implement something like this in a CRM-World with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field. It does not make sense to use a custom Task constructor.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (I'm not sure)? But how should I deal with the WebApi/OData client?
If I receive a Post to the Task endpoint without a Responsible I would like to use the DefaultTaskResponsible, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)"
}.
No Responsible was sent (maybe because it is an older client), so use the default one. But if a Responsible is set, the passed value should be used instead, e.g.
POST [Organization URI]/api/data/tasks
{
"project#odata.bind":"[Organization URI]/api/data/projects(xxx-1)",
"responsible#odata.bind": null
}.
In my TaskController I only see the Task model with the Responsible being null, but I don't know if it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?
This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
    this.Responsible = parent.DefaultResponsible; // A property of ITaskEntity
}
This logic should be enforced at the database "pre operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation, then persist the task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
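For illustration, a pre-operation step in a custom pipeline could apply the default like this (a minimal sketch; the type and member names here are hypothetical, not from any specific framework, apart from ITaskEntity above):

using System.Collections.Generic;

public class TaskPreCreateStep
{
    // fieldsInRequest: the property names actually present in the incoming
    // request body, captured before deserialization flattens them.
    public void Execute(ISet<string> fieldsInRequest, Task task, ITaskEntity parent)
    {
        // "Not sent" and "explicitly null" must be distinguished here,
        // because after deserialization both look like null.
        if (!fieldsInRequest.Contains("responsible"))
        {
            task.Responsible = parent.DefaultResponsible;
        }
    }
}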
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.
There are plenty of examples of filtering by singleValueExtendedProperties and multiValueExtendedProperties in Microsoft Graph, but these appear to be legacy. How do you filter by an OpenTypeExtension created like so?
ev.Extensions.Add(new OpenTypeExtension() {
    AdditionalData = bag,
    Id = Constants.LibraryId
});
The ultimate goal is to filter events by whether our library created them.
Did you create these as schema extensions (registering the shape of the extension in advance) or as open extensions? In order to use the filter operation on extended properties, you'll need to be using schema extensions. In that case, you can filter as you normally would on a resource's property (e.g. ~/me/messages?$filter=myExtension/favoriteColor eq 'green'). Here is more information about creating schema extensions on Graph resources.
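With the .NET Graph SDK (v1-style request builders), that same filter would look roughly like this, assuming a schema extension named myExtension with a favoriteColor property as in the example above:

// Hypothetical extension name/property, mirroring the example above
var messages = await graphClient.Me.Messages
    .Request()
    .Filter("myExtension/favoriteColor eq 'green'")
    .GetAsync();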
I am attempting to use the SharedVariable of the IPluginExecutionContext between different calls to the same plugin. I have the following scenario:
The user is attempting to create a new Entity Record and the plugin has been triggered on the Pre stage. Based on some logic, I am setting a SharedVariable like so:
var context = (IPluginExecutionContext) serviceProvider.GetService(typeof (IPluginExecutionContext));
context.SharedVariables.Add("MySharedVariable", true);
I then attempt to update other records of the same entity like so:
var qe = new QueryExpression("new_myentity");
qe.Criteria.AddCondition("ecs_myfield", ConditionOperator.Equal, "someValue");
var results = service.RetrieveMultiple(qe);
foreach (var foo in results.Entities)
{
    // Do something to foo
    service.Update(foo);
}
I also have a plugin registered for Update on the Pre stage, however, I want to check MySharedVariable and do something else based on whether or not it is set.
In the Update, the context does not contain the key for 'MySharedVariable'. I have confirmed this by using the ITracingService.
Is there some restriction on passing shared variables between plugins that are run on different records?
The plugin execution mode for both the Create and Update is set to Synchronous and as already explained, both are registered on the Pre Operation stage
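For reference, the check in the Update plugin is essentially this (sketched from the description above):

var context = (IPluginExecutionContext) serviceProvider.GetService(typeof (IPluginExecutionContext));

// Returns false here, even though the variable was added by the
// Create plugin whose service.Update call triggered this plugin.
if (context.SharedVariables.Contains("MySharedVariable"))
{
    // do something else
}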
I don't use SharedVariables often, but I'm sure they are available in the same Execution Context (for example from a Pre Event to a Post Event for the same message on the same record).
They can't be used to share values between different plugins on different messages on different records (as in your case: set the value inside the Create of one record and retrieve the value inside the Update message of a different record)
For your situation I think it is preferable to use a custom entity to store the values, or to create an additional attribute on the entity.
Hi, looking at the scenario you explained: I'm not able to test this myself, but if you change the Update plugin from Pre-Operation to Post-Operation, you will definitely get the SharedVariable in the execution context.
Pass Data Between Plug-Ins
CRM 2011 Plugins – Shared Variables
OK, I'm still getting the hang of ASP.NET and the MVC framework and converting my knowledge over from classic ASP and VB, so please be gentle.
I've got my first view (/home/details/X) functioning well thanks to previous help pointing me in the right direction, now I need to add data from multiple tables and queries/views to the MVC view (I hate that SQL and MVC both use the word view for different meanings).
I'm not looking for someone to write the answer for me (unless they're feeling really energetic), more so for someone to point me in the right direction of what I should be looking at and reading up on to understand it and do this myself.
My problem
There are multiple datasets which I need to display in this view, and each different data set has a proper PK/FK 1-M relationship established, and the resultant records would need to be looped through.
How I would have done this previously
In my classic ASP days, I would have just defined the SQL query at the head of the page where the data was to be used, with a select statement along the lines of:
SELECT * FROM query_name
WHERE query_uniquecolumnname = Request.QueryString("value")
Once that was done, you'd set up the Do While Not query_name.EOF loop, then drop in the field names you wanted from that query, and it was all done.
How do I achieve this now?
So, fast-forwarding from my classic ASP knowledge, how do I achieve the same outcome with MVC?
The tables/views I wish to use are already defined within my data model (and the relationships are showing up in there which I would assume is a plus), I just need to work out how I could call these within the page and use the ID of the record being displayed in the Details view to ensure only related data is displayed.
Thanks in advance
The concept you are looking for is called a ViewModel. Essentially this is a custom class that you write that contains all the data that would be used in your view. So it is responsible for amalgamating all the data from the different tables and exposing it as properties. If you're using a data access layer, this is often as simple as bringing a few entities together. If you're using raw SQL to do it, then you would execute your queries when the properties were accessed.
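For example, a minimal ViewModel for a details page might look like the following (MyDataContext, Record, and RelatedItem are hypothetical names; substitute the entities from your own data model):

using System.Collections.Generic;
using System.Linq;

public class MyViewModel
{
    public Record Details { get; private set; }
    public IList<RelatedItem> RelatedItems { get; private set; }

    public MyViewModel(int detailsId)
    {
        var db = new MyDataContext(); // your existing data model
        Details = db.Records.Single(r => r.Id == detailsId);
        // The PK/FK relationship in the model gives us the child rows directly.
        RelatedItems = db.RelatedItems.Where(i => i.RecordId == detailsId).ToList();
    }
}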
Then you would make your View strongly typed to your ViewModel, like so:
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<MvcApplication1.Models.MyViewModel>" %>
Now in your View, you can access all the different properties of your object simply by writing statements like:
<%= Html.TextBox("MyProperty", Model.MyProperty) %>
To construct your view from your controller, create a new instance of your class (MyViewModel), pass it the ID of the details record that you need, and the logic in your class will take care of getting the right data. Then return your view from your controller like normal.
var myDetailsModel = new MyViewModel(detailsID);
return View(myDetailsModel);
I would recommend reading this primer on ASP.NET MVC
http://weblogs.asp.net/scottgu/archive/2009/04/28/free-asp-net-mvc-nerddinner-tutorial-now-in-html.aspx
It covers most basic scenarios you'll need to get up and running.
If, however, you want to combine multiple result sets into one and then return it as a view, you should create a custom object, map the result sets to it, and then bind against your custom object in the view.
When I need to display multiple things like this on a web page, I typically use RenderAction to do it.
RenderAction allows you to use a controller method dedicated to that particular part of the view (a subview, in effect), so you can pass a single data set of strongly-typed data to that "subview".
RenderAction is in the Microsoft.Web.Mvc ("futures") assembly.
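A rough usage sketch, using the expression-based overload from the futures assembly (the controller and action names are made up):

<%-- In the parent Details view: render the subview inline, passing the record ID --%>
<% Html.RenderAction<DetailsController>(c => c.RelatedData(Model.Id)); %>

// In DetailsController: an action dedicated to that part of the page
public ActionResult RelatedData(int id)
{
    var items = _repository.GetRelatedItems(id); // hypothetical data access
    return PartialView(items);
}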
If you are new at all of this, I apologize; this is a bit bleeding edge, but you're going to need to know it anyway. Be sure to check out the NerdDinner tutorial first.
http://www.andreas-schlapsi.com/2008/11/01/renderaction-and-subcontrollers-in-aspnet-mvc/
http://davidhayden.com/blog/dave/archive/2009/04/04/...
I am attempting to have a ReportHandler service handle report creation. Reports can have multiple parameters, and the number of parameters differs between reports. The system currently has several different methods of creating reports (MS Reporting Services, HTML reports, etc.), and the way the data is generated for each report is different. I am trying to consolidate everything into ActiveReports. I can't alter the system and change the parameters, so in some cases I will essentially get a where clause to generate the results, and in other cases I will get key/value pairs that I must use to generate the results. I thought about using the factory pattern, but because of the differing numbers of query filters this won't work.
I would love to have a single ReportHandler that would take my varied inputs and spit out a report. At this point I'm not seeing any other way than to use a big switch statement to handle each report based on the reportName. Any suggestions on how I could solve this better?
From your description, if you're looking for a pattern that matches better than Factory, try Strategy:
Strategy Pattern
Your context could be a custom class which encapsulates and abstracts the different report inputs (you could use the AbstractFactory pattern for this part)
Your strategy could implement any number of different query filters or additional logic needed. And if you ever need to change the system in the future, you can switch between report tools by simply creating a new strategy.
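A minimal sketch of the idea (all type names are hypothetical, and the string-built SQL is purely illustrative; use parameterized queries in real code):

using System.Collections.Generic;
using System.Linq;

public class ReportRequest
{
    public string ReportName { get; set; }
    public string WhereClause { get; set; }                      // one input style
    public IDictionary<string, string> Parameters { get; set; }  // the other
}

public interface IReportStrategy
{
    string BuildQuery(ReportRequest request);
}

// Strategy for reports where the system hands you a raw where clause
public class WhereClauseStrategy : IReportStrategy
{
    public string BuildQuery(ReportRequest request)
    {
        return "SELECT * FROM ReportSource WHERE " + request.WhereClause;
    }
}

// Strategy for reports where the inputs arrive as key/value pairs
public class KeyValueStrategy : IReportStrategy
{
    public string BuildQuery(ReportRequest request)
    {
        var clauses = request.Parameters.Select(p => p.Key + " = '" + p.Value + "'");
        return "SELECT * FROM ReportSource WHERE " + string.Join(" AND ", clauses);
    }
}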
Hope that helps!
In addition to the Strategy pattern, you can also create one adapter for each of your underlying solutions, then use Strategy to vary them. I've built something similar, with each report solution supported by what I called engines. In addition to the variable report solution, we had a variable storage solution as well: output can be stored in SQL Server or the file system.
I would suggest using a container and then initializing it with the correct engines, e.g.:

public class ReportContainer
{
    public ReportContainer(IReportEngine reportEngine, IStorageEngine storage, IDeliveryEngine delivery /* ... */)
    {
    }
}

// In your service layer you resolve which engines to use,
// either with a bunch of if statements / Factory / config ...
IReportEngine rptEngine = EngineFactory.GetEngine<IReportEngine>(/* pass in some values */);
IStorageEngine stgEngine = EngineFactory.GetEngine<IStorageEngine>(/* pass in some values */);
IDeliveryEngine delEngine = EngineFactory.GetEngine<IDeliveryEngine>(/* pass in some values */);
ReportContainer currentContext = new ReportContainer(rptEngine, stgEngine, delEngine);

Then ReportContainer delegates work to the dependent engines...
We had a similar problem and went with the concept of "connectors" that are interfaces between the main report generator application and the different report engines. By doing this, we were able to create a "universal report server" application. You should check it out at www.versareports.com.