Can I have a dynamically updated DataContext? - c#

I use DataContexts for 90% of my data access. But if, for instance, User1 modifies a record and User2 then queries his own DataContext, User2 won't see the modification. So I recreate my DataContext EVERY time I access data (before every LINQ to SQL use).
There must be a better way to query the tables! Either I have to get the DataContexts to stay synchronized with the tables, or I have to find a way to query the tables directly.
Any help would be appreciated!
Thank you!

So I recreate my DataContext EVERY time I access data (before every LINQ to SQL use). There must be a better way to query the tables!
No, there is no better way, and there is no need for one.
Creating a DataContext is relatively cheap; the only expensive part is the connection, and that is handled by the connection pool.
So you can just think and reason in terms of queries (they are the real expense) and the alternative, caching result sets.
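To make that concrete, here is a minimal sketch of the DataContext-per-unit-of-work pattern this implies (YourDataContext, Customers and GetCustomer are placeholder names, not anything from the question):

public Customer GetCustomer(int id)
{
    // A fresh context per operation: construction is cheap, and the underlying
    // connection is borrowed from the connection pool only for the duration of the query.
    using (var db = new YourDataContext())
    {
        return db.Customers.FirstOrDefault(c => c.Id == id);
    }
}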

You can clear the cache in LINQ to SQL as per the following blog post:
http://blog.robustsoftware.co.uk/2008/11/clearing-cache-of-linq-to-sql.html
The author also puts the code inside an extension method for us :)
Note: you will probably need to change the context type in the method signature. Once that's done you can call db.ClearCache(); and it will all be lovely... tm.
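An extension method along the lines of the one in that post might look like this (a sketch only: it invokes the DataContext's undocumented internal ClearCache method via reflection, so treat it as an unsupported hack):

using System.Data.Linq;
using System.Reflection;

public static class DataContextExtensions
{
    // Reflects into the non-public ClearCache method on DataContext.
    public static void ClearCache(this DataContext context)
    {
        MethodInfo clearCache = typeof(DataContext).GetMethod(
            "ClearCache", BindingFlags.Instance | BindingFlags.NonPublic);
        clearCache.Invoke(context, null);
    }
}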
I think there is another way of getting the data, but I can't remember where I found the information, so it may take me a while; I will comment on this answer if I find it.
hth,
Stu

I believe that you may misunderstand the nature of the DataContext.
In most situations, you'll want to create an instance of the DataContext within a using statement. Inside the using block you should then do your LINQ queries for your CRUD operations.
using (YourDataContext ctx = new YourDataContext())
{
    someTable stObj = (from st in ctx.someTable
                       select st).FirstOrDefault();
    stObj.SomeColumn = 1001;
    ctx.SubmitChanges(); // LINQ to SQL uses SubmitChanges, not SaveChanges
}
Any other DataContext that is opened after the DataContext above has submitted its changes will see the change that was made to SomeColumn.


Eager-loading using LINQ to SQL with Include()

I have spent 2 days bashing my head against this problem, and I can't seem to crack it (the problem that is). The same code was working fine until I added database relationships, and I have since read a lot about lazy-loading.
I have two database tables with a 1:1 relationship between them. PromoCode table tracks codes, and has a PK column named id. CustomerPromo table has a column PromoId which is linked to the PromoCode table id. These two tables have no other relationships. I generated all this in SQL Server Management Studio, then generated the model from the database.
To make matters slightly more complicated, I'm doing this inside a WCF data service, but I don't believe that should make a difference (it worked before database relationships were added). After enabling logging, I always get an Exception in the log file with text:
DataContext accessed after Dispose.
My function currently returns all entries from the table:
using (MsSqlDataContext db = new MsSqlDataContext())
{
    // This causes issues with lazy-loading
    return db.PromoCodes.ToArray();
}
I have read numerous articles/pages/answers and they all say to use the .Include() method. But this doesn't work for me:
return db.PromoCodes.Include(x => x.CustomerPromos).ToArray();
I've tried the "magic string" version as well:
return db.PromoCodes.Include("CustomerPromos").ToArray();
The only code I've managed to get to work is this:
PromoCode[] toReturn = db.PromoCodes.ToArray();
foreach (var p in toReturn)
    p.CustomerPromos.Load();
return toReturn;
I've tried adding a .Where() clause to the query, I've tried .Select(), and I've tried moving the .Include() after the .Where() (this answer says to do it last, but I think that's only due to nested queries). I've read about scenarios where .Include() will silently fail, and after all this I'm no closer.
What am I missing? Syntax problem? Logic problem? Once I get this "simple" case working, I also need to have nested Includes (i.e. if CustomerPromo table had a relationship to Customer).
Edit
Including all relevant code. The rest is either LINQ to SQL, or WCF Data Services configuration. This is all there is:
[WebGet]
[OperationContract]
public PromoCode[] Test()
{
    using (MsSqlDataContext db = new MsSqlDataContext())
    {
        return db.PromoCodes.Include(x => x.CustomerPromos).ToArray();
    }
}
If I call that through a browser directly (e.g. http://<address>:<port>/DataService.svc/Test) I get a reset-connection message and have to look at the WCF logs to find "DataContext accessed after Dispose.". If I make the same query through an AJAX call in a webpage I just get an AJAX error with status error (that's all!).
I prematurely posted my previous answer when I didn't actually have any child data to fetch. At the time I was only interested in fetching parent data, and that answer worked.
Now that I actually need child data as well, I find it didn't work completely. I found this article which indicates that .Include() (he says Including(), but I'm not sure if that's a typo) has been removed, and the correct solution is to use DataLoadOptions. In addition, I also needed to enable Unidirectional Serialisation.
And to top it off, I no longer need DeferredLoadingEnabled. So now the final code looks like this:
using (MsSqlDataContext db = new MsSqlDataContext())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<PromoCode>(p => p.CustomerPromos);
    db.LoadOptions = options;
    return db.PromoCodes.ToArray();
}
After setting Unidirectional Serialisation it will happily return a parent object without having to load the child, or explicitly set DeferredLoadingEnabled = false;.
Edit: This did not solve the problem entirely. At the time of testing there wasn't any child data, and I wasn't trying to use it. This only allowed me to return the parent object, it doesn't return child objects. For the full solution see this answer.
Contrary to everything I've read, the answer is not to use .Include() but rather to change the context options.
using (MsSqlDataContext db = new MsSqlDataContext())
{
    db.DeferredLoadingEnabled = false; // THIS makes all the difference
    return db.PromoCodes.ToArray();
}
This link posted in the question comments (thanks @Virgil) hints at the answer. However, I couldn't find a way to access LazyLoadingEnabled in LINQ to SQL (I suspect it's for Entity Framework instead). This page indicated that the solution for LINQ to SQL is DeferredLoadingEnabled.
Here is a link to the MSDN documentation on DeferredLoadingEnabled.

How to get updated entities with Linq after ExecuteMultipleRequest

So, I'm using CRM 2011. In order to improve performance I have started using the ExecuteMultipleRequest. It works fine when creating many records at once. Great! The issue I have is that right after I have done a
context.Execute(myMultipleRequest);
and gotten a valid response with ids back, if I then do a
context.myEntitiesSet.Where(x => x.Name == "foo")
(basically querying the objects just created) I don't get valid objects back; their ids are empty (Guid.Empty).
So, it seems I have to choose to either:
use context.Create(), context.Update(), context.Where(...), etc., or
use context.Execute(multiple) and context.RetrieveMultiple().
There doesn't seem to be a middle ground, as the context doesn't seem to update which entities it is tracking when I'm using ExecuteMultipleRequest. That is my basic problem. I can create objects just fine, but if I want to query them I can't use a LINQ query on the context; I must use RetrieveMultiple instead.
Have I gotten this backwards, or is this well known when using CRM? I am an experienced developer, but relatively new to CRM.
Should I have to call context.AttachObject() myself for all newly created entities when using ExecuteMultipleRequest?
Any help would be appreciated. Oh, and I'm using early bound objects.
I don't believe the CrmLinqProvider has been extended to handle your scenario. ExecuteMultipleRequest returns an ExecuteMultipleResponse object that contains the result of each request. You'll need to loop through these to determine the ids and update your local entities yourself.
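Something along these lines should do it (a sketch only, assuming the batch was made up of CreateRequests against early-bound entities and that ExecuteMultipleSettings.ReturnResponses was set to true; all names are illustrative):

var response = (ExecuteMultipleResponse)context.Execute(myMultipleRequest);

foreach (var item in response.Responses)
{
    // Each response item points back at the request in the same slot of the batch.
    var createResponse = item.Response as CreateResponse;
    if (createResponse == null)
        continue; // faulted, or not a create

    // Copy the generated id back onto the entity instance that was sent.
    var sent = (Entity)myMultipleRequest.Requests[item.RequestIndex].Parameters["Target"];
    sent.Id = createResponse.id;
}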

How to cache database data into memory for use by MVC application?

I have a somewhat complex permission system that uses six database tables in total, and in order to speed it up I would like to cache these tables in memory instead of having to hit the database every page load.
However, I'll need to update this cache whenever a new user is added or a permission is changed. I'm not sure how to go about building this in-memory cache, or how to update it safely without causing problems if it's accessed at the same time as it's being updated.
Does anyone have an example of how to do something like this or can point me in the right direction for research?
Without knowing more about the structure of the application, there are lots of possible options. One such option might be to abstract the data access behind a repository interface and handle in-memory caching within that repository. Something as simple as a private IEnumerable<T> on the repository object.
So, for example, say you have a User object which contains information about the user (name, permissions, etc.). You'd have a UserRepository with some basic fetch/save methods on it. Inside that repository, you could maintain a private static HashSet<User> which holds User objects which have already been retrieved from the database.
When you fetch a User from the repository, it first checks the HashSet for an object to return, and if it doesn't find one it gets it from the database, adds it to the HashSet, then returns it. When you save a User it updates both the HashSet and the database.
Again, without knowing the specifics of the codebase and overall design, it's hard to give a more specific answer. This should be a generic enough solution to work in any application, though.
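A rough sketch of that idea, where User, UserRepository and the data-access helpers are all placeholders; a dictionary keyed by id is used instead of a HashSet to make lookups simpler, but the principle is the same:

using System.Collections.Generic;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> Permissions { get; set; }
}

public class UserRepository
{
    // Shared across requests, so guard it with a lock (MVC handles requests on many threads).
    private static readonly Dictionary<int, User> _cache = new Dictionary<int, User>();
    private static readonly object _sync = new object();

    public User Get(int id)
    {
        lock (_sync)
        {
            User cached;
            if (_cache.TryGetValue(id, out cached))
                return cached;
        }

        User user = LoadFromDatabase(id); // your existing data access goes here
        lock (_sync)
        {
            _cache[id] = user;
        }
        return user;
    }

    public void Save(User user)
    {
        SaveToDatabase(user); // your existing data access goes here
        lock (_sync)
        {
            _cache[user.Id] = user; // keep the cache and the database in step
        }
    }

    private User LoadFromDatabase(int id) { /* placeholder */ return new User { Id = id }; }
    private void SaveToDatabase(User user) { /* placeholder */ }
}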
I would cache items as you use them: in your data layer, when you are fetching data, check whether it is already available in your cache; otherwise go to the database and cache the result afterwards.
public AccessModel GetAccess(string accessCode)
{
    var cached = cache.Get<AccessModel>(accessCode);
    if (cached != null)
        return cached;

    var fromDb = GetFromDatabase(accessCode);
    cache.Set(accessCode, fromDb); // cache.Set is a placeholder, like the generic cache.Get above
    return fromDb;
}
Then I would think about my cache invalidation strategy next. You can go one of two ways:
One would be to set the data to expire after an hour, so you only hit the database once an hour (see the sketch after the note below).
Or invalidate the cache whenever you update the data. That is certainly the best option, but it is a bit more complex.
Hope it helps.
Note: you can either use the ASP.NET Cache or another solution like memcached, depending on your infrastructure.
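As a sketch of the first option using the built-in ASP.NET cache (the key prefix, the accessModel variable and the one-hour window are just examples):

// Cache the value for one hour; after that the next request falls through to the database.
HttpContext.Current.Cache.Insert(
    "access:" + accessCode,                        // cache key
    accessModel,                                   // the value loaded from the database
    null,                                          // no cache dependency
    DateTime.UtcNow.AddHours(1),                   // absolute expiration
    System.Web.Caching.Cache.NoSlidingExpiration);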
Is it hitting the database every page load that's the problem, or is it joining six tables that's the problem?
If it's just that the join is slow, why not create a database table that summarizes the data in a way that is much easier and faster to query?
This way, you just have to update your summary table each time you add a user or update a permission. If you group all of this into a single transaction, you shouldn't have issues with out-of-sync data.
You can take advantage of ASP.NET caching and the SqlCacheDependency class. There is an article on MSDN.
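A minimal sketch of that approach, assuming SQL cache dependencies have already been enabled for the database and table (aspnet_regsql.exe) and registered in web.config; the entry name, table name and loader method here are placeholders:

// The cached entry is invalidated automatically when the watched table changes.
var dependency = new SqlCacheDependency("PermissionsDb", "Permissions");
HttpContext.Current.Cache.Insert("permissions", LoadPermissionsFromDb(), dependency);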
You can use the Cache object built into ASP.NET. Here is an article that explains how.
I can suggest caching such data in the Application state object. For thread-safe usage, consider using the lock operator. Your code would look something like this:
public void ClearTableCache(string tableName)
{
    lock (System.Web.HttpContext.Current)
    {
        System.Web.HttpContext.Current.Application[tableName] = null;
    }
}

public SomeDataType GetTableData(string tableName)
{
    lock (System.Web.HttpContext.Current)
    {
        if (System.Web.HttpContext.Current.Application[tableName] == null)
        {
            // get data from DB then put it into application state
            System.Web.HttpContext.Current.Application[tableName] = dataFromDb;
            return dataFromDb;
        }
        return (SomeDataType)System.Web.HttpContext.Current.Application[tableName];
    }
}

EF4 update a value for all rows in a table without doing a select

I need to reset a boolean field in a specific table before I run an update.
The table could have a million or so records, and I'd prefer not to have to do a select before the update as it's taking too much time.
Basically, what I need the code to produce is the following T-SQL:
update tablename
set flag = 0
where flag = 1
I have something close to what I need here: http://www.aneyfamily.com/terryandann/post/2008/04/Batch-Updates-and-Deletes-with-LINQ-to-SQL.aspx
but I have yet to implement it and was wondering if there is a more standard way.
To keep within the restrictions we have for this project, we can't use SPROCs or directly write T-SQL in an ExecuteStoreCommand parameter on the context, which I believe you can do.
I'm aware that what I need to do may not be directly supported in EF4 and we may need to look at a SPROC for the job [in the total absence of any other way], but I just need to explore all the possibilities fully first.
In an ideal EF world, the call above to update the flag would be possible directly. Alternatively, it would be possible to fetch each entity with only the id and the boolean flag (minus the associated entities), loop through them setting the flag, and do a single SaveChanges call, but that may not be the way it works.
Any ideas,
Thanks in advance.
Liam
I would go to the stakeholder who introduced the restrictions about not using SQL or SPROCs directly and present these facts:
Updates in an ORM (like Entity Framework) work this way: you load the object, you modify it, you save it. That is the only valid way.
Obviously, in your case that would mean loading 1M entities and executing 1M updates separately (EF has no command batching; each command runs in its own roundtrip to the database), which is usually an absolutely useless solution.
The example you provided looks very interesting, but it is for LINQ to SQL, not Entity Framework. Unless you implement it you can't be sure that it will work for EF, because the infrastructure in EF is much more complex. So you could spend several man-days on this without any result; that risk should be approved by the stakeholder.
A solution with a SPROC or direct SQL will take you a few minutes and it will simply work.
In both solutions you will have to deal with another problem. If you already have materialized entities and you run such a command (via the mentioned extension or via SQL), the changes will not be mirrored in the already-loaded entities; you will have to iterate over them and set the flag.
Both scenarios break the unit of work, because some data changes are executed before the unit of work is completed.
It is all about using the right tool for the right requirement.
Btw. loading of related tables can be avoided; it is just about the query you run. Do not use Include and do not access navigation properties (in the case of lazy loading) and you will not load the relations.
It is possible to select only the Id (via projection), create a dummy entity (set only the id and the flag to true) and execute only updates of the flag, but it will still execute up to 1M updates:
using (var myContext = new MyContext(connectionString))
{
    var query = from o in myContext.MyEntities
                where o.Flag == false
                select o.Id;

    foreach (var id in query)
    {
        var entity = new MyEntity
        {
            Id = id,
            Flag = true
        };

        myContext.Attach(entity);
        myContext.ObjectStateManager.GetObjectStateEntry(entity).SetModifiedProperty("Flag");
    }

    myContext.SaveChanges();
}
Moreover, it will only work with an empty object context (or at least no entity from the updated table can be attached to the context). So in some scenarios, running this before other updates will require two ObjectContext instances, which means manually sharing a DbConnection or using two database connections, and in the case of transactions a distributed transaction and another performance hit.
Make a new EF model and only add the one table you need to update. That way none of the joins occur, which will greatly speed up your processing.
ObjectContext.ExecuteStoreCommand ( _
commandText As String, _
ParamArray parameters As Object() _
) As Integer
http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.executestorecommand.aspx
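In C# the call would look something like this (a sketch only, with MyEntities as a placeholder context name; note that it embeds T-SQL directly, which the question rules out):

using (var ctx = new MyEntities())
{
    // Runs the update in a single round trip and returns the number of rows affected.
    int rows = ctx.ExecuteStoreCommand("update tablename set flag = 0 where flag = 1");
}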
Edit
Sorry, did not read the post all the way.

Ensure LINQ to SQL Entities Delete On Submit

What is the best way to mark some entities for DeleteOnSubmit()? Is there a way to check and tell the context that this is for deletion?
Example: I have an entity which references an EntitySet<>, and I delete 4 of the 8 entities in the EntitySet<>. When submitting changes I want to call DeleteOnSubmit() on those 4! The same scenario applies to a single EntityRef<> too.
Of course the DataContext lives in another layer, so... grabbing, changing and sending back is the job.
Thank you.
This is pretty hard to answer based on the description of your architecture. Just because you're using a layered approach doesn't mean that you can't call DeleteOnSubmit... you'd just call your own method that wraps it, I presume.
Unless, of course, you're instantiating your DataContext object in the update routine. In that case you'd have to do something else. Your data layer could expose a method like MarkForDelete() which just adds the entity to a collection, then expose a separate SubmitChanges() that iterates over the collected items for deletion, attaches them to the DataContext and then does the actual DeleteAllOnSubmit() call.
That said, I've never really bothered with the whole entity serialization/deserialization/reattach thing, as it seems fraught with peril. I usually just collect the primary keys in a list, select out the entities again and delete them. It's no more work, really.
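For what it's worth, a sketch of that key-based approach (YourDataContext, ChildEntities and the int key are placeholders):

public void DeleteChildren(List<int> idsToDelete)
{
    using (var ctx = new YourDataContext())
    {
        // Re-select the entities by primary key, then delete them all in one submit.
        var doomed = ctx.ChildEntities
                        .Where(c => idsToDelete.Contains(c.Id))
                        .ToList();

        ctx.ChildEntities.DeleteAllOnSubmit(doomed);
        ctx.SubmitChanges();
    }
}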
Take a look at DeleteAllOnSubmit(). You pass this method a list of entities to be deleted.
