So, I'm using CRM 2011. In order to improve performance I have started using the ExecuteMultipleRequest. It works fine when creating many records at once. Great! The issue I have is that right after I have done a
context.Execute(myMultipleRequest);
and gotten a valid response with IDs back, if I then do a
context.myEntitiesSet.Where(x => x.Name == "foo")
(basically querying the objects just created), I don't get valid objects back, meaning their IDs are empty (Guid.Empty).
So, it seems I have to choose between:
context.Create(), context.Update(), context.Where(...), etc., or
context.Execute(multiple) and context.RetrieveMultiple().
There doesn't seem to be a middle ground, as the context doesn't appear to update which entities it is tracking when I use ExecuteMultipleRequest. That is my basic problem: I can create objects just fine, but if I want to query them I can't use a LINQ query on the context; I have to use RetrieveMultiple instead.
Have I gotten this backwards, or is this well known when using CRM? I am an experienced developer, but relatively new to CRM.
Do I have to call context.AttachObject() myself for all newly created entities when using ExecuteMultipleRequest?
Any help would be appreciated. Oh, and I'm using early bound objects.
I don't believe the CrmLinqProvider has been extended to handle this scenario. The ExecuteMultipleRequest returns an ExecuteMultipleResponse object that contains the result of each request. You'll need to loop through it to determine the IDs and update your entities yourself.
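For example, a rough sketch of that loop (not the original poster's code; entitiesToCreate is assumed to be the list the requests were built from, in the same order):

var multipleResponse = (ExecuteMultipleResponse)context.Execute(myMultipleRequest);
foreach (var item in multipleResponse.Responses)
{
    // item.RequestIndex maps back to the position in myMultipleRequest.Requests.
    var createResponse = item.Response as CreateResponse;
    if (createResponse != null)
    {
        entitiesToCreate[item.RequestIndex].Id = createResponse.id;
    }
}

Once the IDs are written back, you can re-attach the entities to the context if you need it to track them (as you suggested with AttachObject).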
I have spent 2 days bashing my head against this problem, and I can't seem to crack it (the problem that is). The same code was working fine until I added database relationships, and I have since read a lot about lazy-loading.
I have two database tables with a 1:1 relationship between them. PromoCode table tracks codes, and has a PK column named id. CustomerPromo table has a column PromoId which is linked to the PromoCode table id. These two tables have no other relationships. I generated all this in SQL Server Management Studio, then generated the model from the database.
To make matters slightly more complicated, I'm doing this inside a WCF data service, but I don't believe that should make a difference (it worked before database relationships were added). After enabling logging, I always get an Exception in the log file with text:
DataContext accessed after Dispose.
My function currently returns all entries from the table:
using (MsSqlDataContext db = new MsSqlDataContext())
{
    // This causes issues with lazy-loading
    return db.PromoCodes.ToArray();
}
I have read numerous articles/pages/answers and they all say to use the .Include() method. But this doesn't work for me:
return db.PromoCodes.Include(x => x.CustomerPromos).ToArray();
I've tried the "magic string" version as well:
return db.PromoCodes.Include("CustomerPromos").ToArray();
The only code I've managed to get to work is this:
PromoCode[] toReturn = db.PromoCodes.ToArray();
foreach (var p in toReturn)
    p.CustomerPromos.Load();
return toReturn;
I've tried adding a .Where() criterion to the query, I've tried .Select(), and I've tried moving the .Include() after the .Where() (this answer says to do it last, but I think that's only due to nested queries). I've read about scenarios where .Include() will silently fail, and after all this I'm no closer.
What am I missing? Syntax problem? Logic problem? Once I get this "simple" case working, I also need to have nested Includes (i.e. if CustomerPromo table had a relationship to Customer).
Edit
Including all relevant code. The rest is either LINQ to SQL, or WCF Data Services configuration. This is all there is:
[WebGet]
[OperationContract]
public PromoCode[] Test()
{
    using (MsSqlDataContext db = new MsSqlDataContext())
    {
        return db.PromoCodes.Include(x => x.CustomerPromos).ToArray();
    }
}
If I call that through a browser directly (e.g. http://<address>:<port>/DataService.svc/Test) I get a connection reset message and have to look in the WCF logs to find "DataContext accessed after Dispose.". If I make the same query through an AJAX call in a webpage, I get an AJAX error with status "error" (that's all!).
I prematurely posted the previous answer when I didn't actually have any child data to fetch. At the time I was only interested in fetching parent data, and that answer worked.
Now when I actually need child data as well I find it didn't work completely. I found this article which indicates that .Include() (he says Including() but I'm not sure if that's a typo) has been removed, and the correct solution is to use DataLoadOptions. In addition, I also needed to enable Unidirectional Serialisation.
And to top it off, I no longer need DeferredLoadingEnabled. So now the final code looks like this:
using (MsSqlDataContext db = new MsSqlDataContext())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<PromoCode>(p => p.CustomerPromos);
    db.LoadOptions = options;
    return db.PromoCodes.ToArray();
}
After setting Unidirectional Serialisation it will happily return a parent object without having to load the child, or explicitly set DeferredLoadingEnabled = false;.
Edit: This did not solve the problem entirely. At the time of testing there wasn't any child data, and I wasn't trying to use it. This only allowed me to return the parent object, it doesn't return child objects. For the full solution see this answer.
Contrary to everything I've read, the answer is not to use .Include() but rather to change the context options.
using (MsSqlDataContext db = new MsSqlDataContext())
{
    db.DeferredLoadingEnabled = false; // THIS makes all the difference
    return db.PromoCodes.ToArray();
}
This link posted in the question comments (thanks @Virgil) hints at the answer. However, I couldn't find a way to access LazyLoadingEnabled for LINQ to SQL (I suspect it's for Entity Framework instead). This page indicated that the solution for LINQ to SQL is DeferredLoadingEnabled.
Here is a link to the MSDN documentation on DeferredLoadingEnabled.
I have a question about saving a list of objects in ASP.NET MVC.
First, I'm not using Entity Framework, NHibernate, or a similar ORM tool, just plain ADO.NET.
Suppose I have an object Product, and I want to collect all the product data via JavaScript and batch-update the product list in one call.
My question is: where should I work out which items are to be inserted, updated, or deleted?
1. Have an enum status property on the DTO object and on the JavaScript view model. When I add an item to the view model I mark it as added, and when I change an item I mark it as updated, so when the request reaches the action I know which items to insert and which to update.
Pros: it's easy on the server side; there's no need to work out the object's status there.
Cons: if I want to publish this action as a Web API called by a third party, the third-party user may need to track the state of each object.
2. Work out the changes on the server side: the client just sends the list of objects, and the server first retrieves the current data from the database, compares it, and then decides which records to insert or update.
Pros: all the comparison is done on the server side.
Cons: performance.
3. Whatever data is passed from the client, just delete the current data and insert the new data.
I hope someone can give me advice on the best practice for handling this situation. I think it's quite common, but I can't find a good solution.
I've seen option 1, where added/deleted/modified items are maintained in JavaScript arrays and posted back to the server. But for some reason I didn't like it, maybe because of having to write client-side code to maintain state.
So I used the second option, and LINQ made the task easier. Assuming the list has some unique ID, below is pseudo-code. Note: newly added items should have unique random IDs, otherwise there is a chance they are treated as already existing items. In my case the IDs are GUIDs, so there was no chance of a collision.
var submittedIds = vmList.Select(a=>a.Id).ToList();
var dbIds = dbList.Select(d=>d.Id).ToList();
//Added items
var newIds = submittedIds.Except(dbIds).ToList();
//loop over newIds and construct list object to contain newly added items
//Deleted items
var deletedIds = dbIds.Except(submittedIds).ToList();
//Modified items
var modifiedIds = dbIds.Intersect(submittedIds).ToList();//if the values don't change, update statement won't do any harm here
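To turn those ID sets back into objects, a rough sketch (vmList is the submitted list and dbList the current database rows, as above):

var addedItems    = vmList.Where(v => newIds.Contains(v.Id)).ToList();
var deletedItems  = dbList.Where(d => deletedIds.Contains(d.Id)).ToList();
var modifiedItems = vmList.Where(v => modifiedIds.Contains(v.Id)).ToList();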
This approach gives reasonable performance unless you are dealing with huge lists.
I think the third option is not good. For example, if you plan to implement audit features on your tables, it will give you the wrong result: when a single new record is added, the audit will show all records as deleted and then re-inserted, which is wrong because only one was inserted.
The 3rd strategy is suitable for simple situations, e.g. when you want to update a purchase order's items; an order will not have too many OrderLineItems. However, you have to take care of concurrency issues.
I think your first strategy is the most suitable in the general case. It's also easy to implement. When you publish your service to a 3rd party, it's usual that a client must follow the service definition and requirements.
Update
For the 1st strategy: if you don't want your clients to have to specify a status for their data, then do it for them. You can split the SaveOrder service into smaller services: CreateOrder, UpdateOrder, and DeleteOrder.
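A minimal sketch of that split, with illustrative names only (OrderDto and the SQL comments are placeholders, not from the original post):

public class OrderDto { public Guid Id { get; set; } public string Name { get; set; } }

public class OrderController : Controller
{
    [HttpPost]
    public ActionResult CreateOrder(OrderDto order)
    {
        // INSERT with plain ADO.NET (SqlCommand) here; the client never has to send a status flag.
        return Json(new { id = order.Id });
    }

    [HttpPost]
    public ActionResult UpdateOrder(OrderDto order)
    {
        // UPDATE ... WHERE Id = @Id
        return Json(new { ok = true });
    }

    [HttpPost]
    public ActionResult DeleteOrder(Guid id)
    {
        // DELETE ... WHERE Id = @Id
        return Json(new { ok = true });
    }
}

The intent (create, update, delete) is then carried by which action the client calls, rather than by a status property on the data.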
Currently our website is facing a problem with slow response times (more than 1 minute) when we query CRM from our website. We are using CRM 2011 through a web service. When we investigated, we found that the time was being spent querying CRM.
We have used the CrmSvcUtil.exe to generate our proxy classes that map to CRM entities. Then we create an instance of context and query CRM using LINQ with C#.
When we query, we load our parent object with LINQ to CRM and then use LoadProperty to load the related children.
I would like to know if anyone out there is using a different method of querying CRM, and whether you have come across issues like this in your implementation.
I’ve included a simplified sample query below.
public void SelectEventById(Guid id)
{
    var crmEventDelivery = this.ServiceContext.EventDeliverySet.FirstOrDefault(eventDelivery => eventDelivery.Id == id);
    if (crmEventDelivery != null)
    {
        this.SelectCrmEventDeliveryWithRelationships(crmEventDelivery);
    }
}

private void SelectCrmEventDeliveryWithRelationships(EventDelivery crmEventDelivery)
{
    // Loading List of Venue Delivery on parent crmEventDelivery that's been passed
    this.ServiceContext.LoadProperty(crmEventDelivery, Attributes.EventDelivery.eventdelivery_venuedelivery);
    foreach (var venueDelivery in crmEventDelivery.eventdelivery_venuedelivery)
    {
        // Loading Venue on each Venue Delivery
        ServiceContext.LoadProperty(venueDelivery, Attributes.VenueDelivery.venue_venuedelivery);
    }

    // Loading List of Session Delivery on parent crmEventDelivery that's been passed
    this.ServiceContext.LoadProperty(crmEventDelivery, Attributes.EventDelivery.eventdelivery_sessiondelivery);
    foreach (var sessionDelivery in crmEventDelivery.eventdelivery_sessiondelivery)
    {
        // Loading Presenters on each Session Delivery
        ServiceContext.LoadProperty(sessionDelivery, Attributes.SessionDelivery.sessiondelivery_presenterbooking);
    }
}
As mentioned in the other answers, your main problem is the number of web service calls. What no one has mentioned is that you can retrieve many objects with a single call using query joins. So you could try something like:
var query_join = (from e in ServiceContext.EventDeliverySet
                  join v in ServiceContext.VenueDeliverySet on e.EventDeliveryId equals v.EventDeliveryId.Id
                  join vn in ServiceContext.VenueSet on v.VenueDeliveryId equals vn.VenueDeliveryId.Id
                  join s in ServiceContext.SessionDeliverySet on e.EventDeliveryId equals s.EventDeliveryId.Id
                  where e.EventDeliveryId == id // *important (see below)
                  select new { EventDelivery = e, VenueDelivery = v, Venue = vn, SessionDelivery = s }).ToList();
Then you can run a foreach on query_join and put it together.
*important: do not use the base Id property (e.Id); stick with e.EntityNameId.Value (I don't know why, but it took me a while to figure out; Id returns the default Guid value "00000..").
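For "putting it together", a rough sketch (it assumes the anonymous type from the query above; the typed ID property names are guesses, chosen per the note about avoiding the base Id):

var eventDelivery = query_join.First().EventDelivery;

var venueDeliveries = query_join
    .GroupBy(r => r.VenueDelivery.VenueDeliveryId)
    .Select(g => g.First().VenueDelivery)
    .ToList();

var sessionDeliveries = query_join
    .GroupBy(r => r.SessionDelivery.SessionDeliveryId)
    .Select(g => g.First().SessionDelivery)
    .ToList();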
Based on what you've provided this looks like a standard lazy-load issue, except my guess is that each lazy load is resulting in a web service call. This would be called a "chatty" service architecture. Your goal should be to make as few service calls as possible to retrieve data for a single request.
Calling to fill in details can seem like a good idea because you can re-use the individual service methods for cases where you only want data 1 or 2 levels deep, or all the way down, but you pay a steep performance penalty.
You would be better off defining a web service call that returns a complete object graph in scenarios like this. I don't know if/what you're using for an ORM layer within the CRM but if you make a specific call to fetch a complete graph of Deliveries then the ORM can eager-fetch the data into fewer SQL statements. Fewer calls to the web service (and subsequently fewer calls into the CRM's data store) should noticeably improve your performance.
So I can see why this might take a while. As everyone else has commented, you are making quite a few web service calls. If you get a moment it would be interesting to know whether the individual calls are slow or it's just that you are making so many; I would suggest profiling this.
In any case, I suspect you would get better performance by not using the strongly typed entities.
I would suggest using a FetchXml query; this will allow you to build a SQL-like, XML-based query. Basically you should be able to replace your many web service calls with a single call. MSDN has an example; also check out the Stunnware FetchXml designer (Products > Stunnware Tools > Download and Evaluation). It was built for CRM 4 but supports virtually all the features you will need.
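A rough sketch of the idea (the entity, attribute, and relationship names below are placeholders for your custom schema, and service is assumed to be an IOrganizationService):

string fetchXml = @"
  <fetch>
    <entity name='new_eventdelivery'>
      <all-attributes />
      <filter>
        <condition attribute='new_eventdeliveryid' operator='eq' value='" + id + @"' />
      </filter>
      <link-entity name='new_venuedelivery' from='new_eventdeliveryid' to='new_eventdeliveryid' alias='vd' />
      <link-entity name='new_sessiondelivery' from='new_eventdeliveryid' to='new_eventdeliveryid' alias='sd' />
    </entity>
  </fetch>";

EntityCollection results = service.RetrieveMultiple(new FetchExpression(fetchXml));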
If you don't fancy that, you could also try a QueryExpression or OData, both of which should allow you to get your data in one hit.
After trying all the tips suggested in the other answers and doing further profiling, given our particular scenario, our use of CRM, and how it was set up, we decided to simply bypass it.
We ended up using some of the built-in views. This is not a recommended approach in the CRM documentation, but we really needed to achieve higher performance, and in this instance the CRM approach was just in our way.
To anyone else reading this, see the other answers too.
Because the query does not know what fields will be needed later, all columns are returned from the entity when only the entity is specified in the select clause. In order to specify only the fields you will use, you must return a new object in the select clause, specifying the fields you want to use.
So instead of this:
var accounts = from acct in xrm.AccountSet
               where acct.Name.StartsWith("Test")
               select acct;
Use this:
var accounts = from acct in xrm.AccountSet
               where acct.Name.StartsWith("Test")
               select new Account()
               {
                   AccountId = acct.AccountId,
                   Name = acct.Name
               };
Check out this post for more details.
To Linq or not to Linq
I use DataContexts for 90% of my data access. But if, for instance, User1 modifies a record and User2 then queries the DataContext, User2 won't see the modification. So I recreate my DataContext EVERY time I access data (before every LINQ to SQL use).
There must be a better way to query the tables! Either I have to get DataContexts to stay synchronized with the tables, or I have to find a way to query the tables directly.
Any help would be appreciated!
Thank you!
So I recreate my DataContext EVERY time I access data (before every LINQ to SQL use). There must be a better way to query the tables!
No, there is no better way and there is no need for one.
Creating a DataContext is relatively cheap; the only expensive part is the connection, and that is handled by the connection pool.
So you can just think and reason in terms of queries (they are the real expense) and the alternative, caching result sets.
You can clear the cache in Linq2Sql as per the following blog post:
http://blog.robustsoftware.co.uk/2008/11/clearing-cache-of-linq-to-sql.html
The author also puts the code inside an extension method for us :)
Note: you will probably need to change the Context in the method signature. Once this has been done you can call db.ClearCache(); and it will all be lovely... tm.
I think there is another way of getting the data, but I can't remember where I found the information, so it may take me a while; I'll comment on this answer if I find it.
hth,
Stu
I believe that you may misunderstand the nature of the DataContext.
In most situations, you'll want to create an instance of the DataContext with a using statement. Within the using block you then do your LINQ queries for your CRUD operations.
using (YourDataContext ctx = new YourDataContext())
{
    someTable stObj = (from st in ctx.someTable
                       select st).FirstOrDefault();
    if (stObj != null)
    {
        stObj.SomeColumn = 1001;
        ctx.SubmitChanges(); // LINQ to SQL's DataContext uses SubmitChanges(), not SaveChanges()
    }
}
Any other DataContext that is opened after the DataContext above has submitted its changes will see the change that was made to SomeColumn.
I've got a list of Individual entity objects for an employee survey app; an Individual represents an employee or an outside rater. The Individual has the parent objects Team and Team.Organization, and the child objects Surveys and Surveys.Responses. Responses, in turn, are related to Questions.
So usually, when I want to check the complete information about an Individual, I need to fetch Individuals.Include(Team.Organization).Include(Surveys.Responses.Question).
That's obviously a lot of includes, and has a performance cost, so when I fetch a list of Individuals and don't need their related objects, I don't bother with the Includes... but then the user wants to manipulate an Individual. So here's the challenge. I seem to have 3 options, all bad:
1) Modify the query that downloads the big list of Individuals to .Include(Team.Organization).Include(Surveys.Responses.Question). This gives it bad performance.
2) Individuals.Load(), TeamReference.Load(), OrganizationReference.Load(), Surveys.Load(), (and iterate through the list of Surveys and load their Responses and the Responses' Questions).
3) When a user wishes to manipulate an Individual, I drop that reference and fetch a whole brand new Individual from the database by its primary key. This works, but is ugly because it means I have two different kinds of Individuals, and I can never use one in place of the other. It also creates ugly problems if I'm iterating across a list repeatedly, as it's tricky to avoid loading and dropping the fully-included Individuals repeatedly, which is wasteful.
Is there any way to say
myIndividual.Include("Team.Organization").Include("Surveys.Responses.Question");
with an existing Individual entity, instead of taking approach (3)?
That is, is there any middle-ground between "fetch everything from the database up-front" and "late-load one relationship at a time"?
Possible solution that I'm hoping I could get insight about:
So there's no way to do a manually-implemented explicit load on a navigational-property? No way to have the system interpret
Individual.Surveys = from survey in MyEntities.Surveys.Include("Responses.Question")
                     where survey.IndividualID == Individual.ID
                     select survey; // Individual.Surveys is the navigation collection property holding Surveys on the Individual.

Individual.Team = from team in MyEntities.Teams.Include("Organization")
                  where team.ID == Individual.TeamID
                  select team;
as just loading Individual's related objects from the database instead of being an assignment/update operation? If this means no actual change in X and Y, can I just do that?
I want a way to manually implement a lazy or explicit load that isn't doing it the dumb (one relation at a time) way. Really, the Teams and Organizations aren't the problem, but the Survey.Responses.Questions are a massive buttload of database hits.
I'm using 3.5, but for the sake of others (and when my project finally migrates to 4) I'm sure responses relevant to 4 would be appreciated. In that context, similar customization of lazy loading would be good to hear about too.
edit: Switched the alphabet soup to my problem domain, edited for clarity.
Thanks
The Include statement is designed to do exactly what you're hoping to do. Having multiple includes does indeed eager load the related entities.
Here is a good blog post about it:
http://thedatafarm.com/blog/data-access/the-cost-of-eager-loading-in-entity-framework/
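For instance, a rough sketch against your entities (the context, set, and property names are guesses based on your description; in EF 3.5 the Include paths are strings):

var individual = context.Individuals
                        .Include("Team.Organization")
                        .Include("Surveys.Responses.Question")
                        .Where(i => i.IndividualID == individualId)
                        .FirstOrDefault();

With that in place, individual.Team.Organization and the whole Surveys/Responses/Question graph come back in a single query instead of being loaded one relationship at a time.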
In addition, you can use strongly typed "Includes" using some nifty ObjectContext extension methods. Here is an example:
http://blogs.microsoft.co.il/blogs/shimmy/archive/2010/08/06/say-goodbye-to-the-hard-coded-objectquery-t-include-calls.aspx
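That post's code isn't reproduced here, but a minimal sketch of the idea (an extension method that turns a property expression into the string path that EF 3.5's ObjectQuery<T>.Include expects) could look like this:

using System;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

public static class ObjectQueryExtensions
{
    // Turns query.Include(x => x.Team.Organization) into query.Include("Team.Organization").
    // Only handles simple member-access chains; anything fancier still needs the string overload.
    public static ObjectQuery<T> Include<T, TProperty>(
        this ObjectQuery<T> query, Expression<Func<T, TProperty>> path)
    {
        // e.g. "x.Team.Organization" -> "Team.Organization"
        string[] segments = path.Body.ToString().Split('.');
        string includePath = string.Join(".", segments.Skip(1).ToArray());
        return query.Include(includePath);
    }
}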