I get this error
Cannot add an entity with a key that is already in use
when I try to save an Item
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(Item item)
{
Global.DataContext.Items.Attach(item);
Global.DataContext.SubmitChanges();
return View(item);
}
That's because I cannot attach the item to the global DataContext.
Is it possible to save an item without creating a new DataContext, and without having to assign each field of the item manually?
(I am very new to LINQ)
EDIT: I realised a static DataContext would cause problems thanks to the comments below; it now looks like this:
public static AbcDataContext DataContext
{
get
{
if (!HttpContext.Current.Items.Contains("DataContext"))
HttpContext.Current.Items["DataContext"] = new AbcDataContext(ConnectionString);
return (AbcDataContext)HttpContext.Current.Items["DataContext"];
}
}
(Rex might not agree to that, but I can't be bothered changing the whole code at the moment - may be later)
Don't have a global/static DataContext, that is setting yourself up for pain. A DataContext should represent a single logical transaction ("get in, do x/y/z and get out"). They are cheap to create and easy to dispose; there is absolutely no reason to try to minimize them, much less keep a global/static one.
Suppose the primary key of your Item class is ItemId.
Suppose the ItemID for the instance you are attempting to update is 5.
The DataContext has seen an original state for ItemID 5, so it won't let you Attach().
http://msdn.microsoft.com/en-us/library/bb300517.aspx
In this version of Attach, the entity is assumed to be in its original value state. After calling this method, you can then update its fields, for example with additional data sent from the client.
There are three normal ways to perform an update in LINQ to SQL.
If the parameter to this Edit method was originally loaded up from the DataContext, then all you need to do is:
public ActionResult Edit(Item item)
{
Global.DataContext.SubmitChanges();
return View(item);
}
The DataContext tracks changes against objects that it loaded. As a nasty side effect, any modified objects that were loaded by the DataContext are also going to be updated. This is a big reason not to use a single app-level DataContext.
If the parameter to this Edit method was new'd up in your code, loaded by a different DataContext, or passed to your code (in other words, the instance has no attached DataContext) then you can do either of these:
public ActionResult Edit(Item item)
{
using(MyDataContext dc = new MyDataContext())
{
//this new DataContext has never heard of my item, so I may Attach.
dc.Items.Attach(item);
//this loads the database record in behind your changes
// to allow optimistic concurrency to work.
//if you turn off the optimistic concurrency in your item class
// then you won't have to do this
dc.Refresh(RefreshMode.KeepCurrentValues, item);
dc.SubmitChanges();
}
return View(item);
}
public ActionResult Edit(Item item)
{
var original = Global.DataContext.Items.Single(x => x.ItemID == item.ItemID);
//play the changes against the original object.
original.Property1 = item.Property1;
original.Property2 = item.Property2;
Global.DataContext.SubmitChanges();
return View(item);
}
With your question answered, allow me to echo the concern that others have stated for using a static DataContext. This is a poor practice and goes against Microsoft's intended use of the DataContext class.
http://msdn.microsoft.com/en-us/library/system.data.linq.datacontext.aspx
In general, a DataContext instance is designed to last for one "unit of work" however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations.
DataContext discussion. Note I'm not commenting on your code.
DataContexts implement IDisposable, and therefore you should be disposing of the data context when it's no longer needed. Your website works well enough in development, but in production you will get nailed. You might as well do it right before your code gets too entrenched and changing it will be a big hassle. At best you'll just develop bad habits.
A better alternative to what you've written is to have your own controller base class that manages the lifetime for you.
public class MyBaseController : System.Web.Mvc.Controller
{
private AbcDataContext abcDataContext;
protected AbcDataContext DataContext
{
get
{ // lazy-create of DataContext
if (abcDataContext == null)
abcDataContext = new AbcDataContext(ConnectionString);
return abcDataContext;
}
}
protected override void Dispose(bool disposing)
{
base.Dispose(disposing);
if (disposing)
{
if( abcDataContext != null )
abcDataContext.Dispose();
}
}
}
which allows you to do
public class MyController : MyBaseController
{
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(Item item)
{
DataContext.Items.Attach(item);
DataContext.SubmitChanges();
return View(item);
}
}
While this works, I personally find it can get annoying and tricky.
Best: If you're going to follow MVC as you're supposed to, you should populate the model completely and not rely on lazy-loading entities. The best way to achieve this is to get rid of your DataContext as soon as you can.
Typically, we enforce this at a code level via the following pattern:
using( var dc = new AbcDataContext(ConnectionString))
{
var itemUpdater = new ItemUpdater(dc);
item = itemUpdater.Update(item);
}
return View(item);
The idea is that you will get an ObjectDisposedException if your view attempts to get any additional data via lazy-loading.
A static global DataContext? If my understanding of your question is correct, this will result in everyone connecting to your app sharing the same data context, which will cause a lot of security/synchronization issues. Avoid it.
According to this discussion about the same problem, it seems to be a type-mapping bug that may be worked around if you delete the Item class in the designer and just drag over the table into the designer again.
Related
I have a problem with updating data from my data source (a database accessed through Entity Framework) to WPF windows. I generate files using Entity Framework, so I'm accessing data from the database this way:
public partial class sampleWindow : Window
{
myEntity en = new myEntity();
public sampleWindow()
{
InitializeComponent();
Bind();
}
private void Bind()
{
var list = from o in en.table select o;
someDatagrid.ItemsSource = list.ToList();
}
}
This method was adequate for my program at first; I was calling 'Bind' again after doing operations on the database, so the data in my datagrids or combos stayed fresh. The problem occurs when I change the database in different WPF windows. I have read that I should implement the observable interface and use Load instead of ItemsSource. I tried to do it, but I'm a beginner and my attempts failed miserably. Could someone tell me, step by step, what I should do?
You need a singleton to manage your data, combined with an ObservableCollection to expose the data. When the collection is changed by any view, any subscribers are notified and update automatically.
See: Example of bindable list in XAML app (first part)
Example of Singleton
You would want to use a singleton for the instance of your entity, as The Sharp Ninja mentioned; the article he linked does a good job of explaining. You will want to bind your ItemsSource to an ObservableCollection. When an item is added to or removed from an ObservableCollection, the UI is automatically notified. The problem is that there is no .ToObservableCollection() extension method built into .NET, so you will have to implement your own.
I use this extension method
public static ObservableCollection<T> ToObservableCollection<T>(
this IEnumerable<T> enumeration)
{
return new ObservableCollection<T>(enumeration);
}
So now your bind method can set your ItemSource to the observable collection
private void Bind()
{
var list = from o in en.table select o;
someDatagrid.ItemsSource = list.ToObservableCollection();
}
There are so many better ways (the MVVM pattern, for one) to accomplish this than your approach. To keep it simple, it can be done this way:
// Gets the Load() extension method available for DbSet
using System.Data.Entity;
private void Bind()
{
myEntity.table.Load();
/* Local returns an observable collection, convenient for data binding.
   This is a synchronized local view of your data: any item added, deleted, or updated will be reflected in your controls. */
var obsColl = myEntity.table.Local;
someDatagrid.ItemsSource = obsColl;
}
Hello good people of stackoverflow.
EDIT: Source code is available at:
https://www.dropbox.com/sh/yq4qbznl4b6gm4h/AADdjd_hb-OQXV5KL8OU5cbqa?dl=0
More specifically, in I4PRJ4 --> Backend --> Backend.sln.
I'm currently making a product management system that has a GUI. We've decided to use MVVM, and we're still learning it.
Though I have a problem. On the main screen, a list of categories is shown, and the products in the selected category are also shown.
As of now, we've bound the data to an ObservableCollection. But the problem arises when we need to add another product, using a different view and view model. In that case we need to have Categories with data. We open the add-product view through a command in the main window, so to get the data to the view model we have to pass the object from MainWindowViewModel to AddProductView and then to AddProductViewModel - and that's not the coupling I want.
So I tried using the singleton pattern, binding to the observable collections as:
xmlns:models="clr-namespace:Backend.Models"
..
..
<ListBox Margin="0, 30, 0, 0" ItemsSource="{Binding Source={x:Static models:GlobalCategories.CategoryList}, Path=Name}" DisplayMemberPath="Name"/>
Where GlobalCategories is as follows:
[Models: GlobalCategories]
public class GlobalCategories
{
private static BackendProductCategoryList _list;
public static BackendProductCategoryList CategoryList
{
get
{
if (_list == null)
{
MessageBox.Show("Made new list");
return new BackendProductCategoryList();
}
MessageBox.Show("Returned old list");
return _list;
}
set { _list = value; }
}
}
As you can see in the code, a message box appears, telling me what was returned. It was my understanding that the XAML above wouldn't create the object, and that you would have to initialize it yourself - but it actually creates the object, and the message box says that a new list has been made.
Though if I then do the following in MainWindowViewModel
public class MainWindowViewModel
{
public MainWindowViewModel()
{
MessageBox.Show("" + CategoryList.Count);
}
}
Then it creates ANOTHER list, but if I then do another operation, I get an "old list" message. What is going on - why is this happening?
What am I doing wrong? Oh - and the binding doesn't work; nothing is shown when I do this. And it's driving me nuts. I'm coming from a background in C and C++, and have been working with C# and XAML for a couple of months - I NEED CONTROL. AND POINTERS :-D
I really hope you guys can help me out here, giving me an understanding of what is going on and how to solve it.
Or even better - is there a better way of sharing data between view models? Because to be honest, I'm not the biggest fan of singletons, and would really appreciate another solution for sharing data between view models.
Thank you so much for your help!
Best regards,
Benjamin
I think there may be some confusion on your end as to how property accessors work. Set and get simply allow you to declare members that look like regular properties to everything referencing them but are implemented with code. When something needs to access the list it calls your getter and is expecting that function to return a value (or null). Your current implementation is creating a new list and returning it:
return new BackendProductCategoryList();
But you're not storing the value in _list, so the next time the getter is called, _list is still null and you create and return a new list again. And again, and so on. All you need to do is store it so that the list only gets created once:
public static BackendProductCategoryList CategoryList
{
get
{
if (_list == null)
{
MessageBox.Show("Made new list");
_list = new BackendProductCategoryList();
}
else
MessageBox.Show("Returned old list");
return _list;
}
set { _list = value; }
}
One additional tip: don't call MessageBox.Show in accessors; you should do as little work in accessors as possible. Using statics like this really isn't a good idea either, for a host of reasons, but I'll leave that for another question (look up "dependency injection").
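For what it's worth, the usual alternative to the static is to create one shared object at startup and hand it to each view model through its constructor. Here is a minimal sketch of that idea; all the type names are hypothetical, not from your project:

```csharp
using System.Collections.ObjectModel;

// Hypothetical model type for illustration.
public class Category
{
    public string Name { get; set; }
}

// One shared instance, created at startup and passed to each view model.
public class CategoryStore
{
    public ObservableCollection<Category> Categories { get; } =
        new ObservableCollection<Category>();
}

public class MainWindowViewModel
{
    private readonly CategoryStore _store;
    public MainWindowViewModel(CategoryStore store) { _store = store; }

    // The view binds to this; no static lookup needed.
    public ObservableCollection<Category> Categories => _store.Categories;
}

public class AddProductViewModel
{
    private readonly CategoryStore _store;
    public AddProductViewModel(CategoryStore store) { _store = store; }

    public void AddCategory(string name) =>
        _store.Categories.Add(new Category { Name = name });
}
```

Because both view models hold the same CategoryStore, a category added from the add-product view shows up in the main window automatically through the ObservableCollection's change notification, and nothing is global.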
My C# application uses the Repository pattern, and I have a terrible doubt as to how to implement the "Update" part of CRUD operations. Specifically, I don't know how to "tell" the repository which object I want to replace (so that persistence can be carried out afterwards).
I have the following code in a console application (written just as example) that uses the libraries from the application:
class Program
{
static void Main(string[] args) {
var repo = new RepositorioPacientes();
var listapacientes = repo.GetAll();
// Choosing an element by index
// (should be done via clicking on a WPF ListView or DataGrid)
var editando = listapacientes[0];
editando.Nome = "Novo Helton Moraes";
repo.Update(editando);
}
}
Question is: How am I supposed to tell the repository which element it has to update? Should I traverse the whole repository using an equality comparer to find the element?
NOTE: this repository encapsulates data-access using XML serialization, one file per entity, and my entities (of type Paciente in this example) have the [Serializable] attribute. That said, the "Update" operation would end up replacing the XML file of the given entity with another with updated data, via Serialize method.
I am not concerned with that, though. What I cannot figure out is how to implement repo.Update(entity) so that the repo knows the entity being passed back is the same one that was selected from listapacientes, which is not the repository itself.
Thanks for reading!
Ultimately, this comes down to the time-space trade-off. Your suggestion of implementing an equality comparer and iterating through the entire repository costs the most time but uses little space, with a List&lt;T&gt; as the data structure backing the repository. In the worst case, where you update the last element of the list, you will need to iterate through the entire thing and run the equality operation on each element until it matches the last one. This is feasible for smaller repositories.
Another very common solution is to override GetHashCode on your repository's T type and use a HashSet&lt;T&gt; or Dictionary&lt;T, V&gt; as the repository's data structure. This minimizes time to O(1) but takes more space. It is probably a better solution for much larger repositories, especially if each type-T object has a unique property, like a GUID or database identifier, because then you have a very easy hash value.
There are other data structures you can consider for your repository based on the exact use-case of your repository. For example, if you are trying to maintain an ordering of elements in the repository where only the highest or lowest element is fetched at a time, a PriorityQueue or Heap might be for you. If you spend time thinking about the data structure that backs your repository, the rest of the implementation should solve itself.
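To make the dictionary option concrete, here is a rough sketch of a repository keyed on a unique Id, which makes Update an O(1) lookup instead of a list scan. It assumes your Paciente type can expose such an Id; the XML serialization step is left as a comment since that part is already solved in your code:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical entity: assumes a unique Id, as suggested above.
public class Paciente
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Nome { get; set; }
}

public class RepositorioPacientes
{
    // Keyed by Id, so lookups and updates don't scan the whole collection.
    private readonly Dictionary<Guid, Paciente> _items =
        new Dictionary<Guid, Paciente>();

    public void Add(Paciente p) => _items[p.Id] = p;

    public Paciente GetById(Guid id) => _items[id];

    public void Update(Paciente p)
    {
        if (!_items.ContainsKey(p.Id))
            throw new InvalidOperationException("Unknown entity; add it first.");
        _items[p.Id] = p;
        // ...then serialize p to its per-entity XML file here.
    }
}
```

The key point is that identity lives in the Id, not in object reference equality, so a detached or edited copy can still find "its" slot in the repository.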
Don't load everything into memory. Try something like this:
class Program
{
static void Main(string[] args) {
var repo = new RepositorioPacientes();
var editando = repo.SingleOrDefault(p => p.Id == 1);
editando.Nome = "Novo Helton Moraes";
repo.Update(editando);
}
}
You can use this link: http://www.codeproject.com/Articles/644605/CRUD-Operations-Using-the-Repository-Pattern-in-MV
and try this code:
public ActionResult Edit(int id)
{
Book book = _bookRepository.GetBookByID(id);
return View(book);
}
[HttpPost]
public ActionResult Edit(Book book)
{
try
{
if (ModelState.IsValid)
{
_bookRepository.UpdateBook(book);
_bookRepository.Save();
return RedirectToAction("Index");
}
}
catch (DataException)
{
ModelState.AddModelError("", "Unable to save changes. Try again, " +
"and if the problem persists see your system administrator.");
}
return View(book);
}
I have a complex object using inheritance that I map with AutoMapper. It maps perfectly during a GET request, but during a POST request the exact same code doesn't map the inherited types correctly.
Let me explain. (See code below)
In the first case, when I map the object during a simple GET request, it maps perfectly fine and the Parent property of class A below is of its specific type, B or C.
But when the exact same mapping happens during a POST, the Parent property of A is of type A!?
Now, the code is the same and the data model coming back from the DB is the same (I use NHibernate, and the types are as I expect); the only difference is that it is a POST request?!
Is there something I should know about AutoMapper in this case?
Class definitions (ViewModels follow the same structure):
public class A
{
public A Parent { get; set;}
}
public class B : A
{ }
public class C : A
{ }
And mapped like this:
CreateMap<A, AViewModel>()
.Include<B, BViewModel>()
.Include<C, CViewModel>();
CreateMap<B, BViewModel>();
CreateMap<C, CViewModel>();
Calling map:
var aModel = _aManager.Get("same parameter");
var aViewModel = Mapper.Map<AViewModel>(aModel);
Edit #1 - This depicts the logic in the post Action:
[Transaction] // Commits the nhibernate transaction on OnActionExecuted
[HttpPost]
public ActionResult UpdateA(OtherModelViewModel viewModel)
{
var a = _aManager.Get("same parameter");
var otherModel = Mapper.Map<OtherModel>(viewModel);
a.AddOtherModel(otherModel);
_otherModelRepository.New(otherModel);
// Eeek, writing this out I am seeing a problem here, I suspect this is where my problem would be, loading the model again from the db, after updating it in session without commiting it? I am going to change the logic and see if it fixes it.
var aModel = _aManager.Get("same parameter");
var aViewModel = Mapper.Map<AViewModel>(aModel);
// return result.
}
Sorry I was being silly and letting the complexity get the better of me.
I use a transaction attribute, to persist the information in OnActionExecuted.
So what I was doing was: loading the model > modifying it > then loading it again and trying to map it before it had even been persisted.
I know that nHibernate really doesn't like it when you try and do things like that, so I think the in memory object graph was in a state of flux (pending commit), which was affecting the mapping.
I have changed my logic to do an ActionRedirect after the update, which has resolved the mapping issue.
Much happier all around.
I have multiple business objects in my application (C#, Winforms, WinXP). When the user executes some action on the UI, each of these objects are modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive tracking of activity going on in the application.
Many among these objects contain lists of other objects, and this nesting can be several levels deep. The two main requirements for any solution are:
capture changes as accurately as possible
keep performance cost to a minimum.
eg of a business object:
public class MainClass1
{
public MainClass1()
{
detailCollection1 = new ClassDetailCollection1();
detailCollection2 = new ClassDetailCollection2();
}
private Int64 id;
public Int64 ID
{
get { return id; }
set { id = value; }
}
private DateTime timeStamp;
public DateTime TimeStamp
{
get { return timeStamp; }
set { timeStamp = value; }
}
private string category = string.Empty;
public string Category
{
get { return category; }
set { category = value; }
}
private string action = string.Empty;
public string Action
{
get { return action; }
set { action = value; }
}
private ClassDetailCollection1 detailCollection1;
public ClassDetailCollection1 DetailCollection1
{
get { return detailCollection1; }
}
private ClassDetailCollection2 detailCollection2;
public ClassDetailCollection2 DetailCollection2
{
get { return detailCollection2; }
}
//more collections here
}
public class ClassDetailCollection1
{
private List<DetailType1> detailType1Collection;
public List<DetailType1> DetailType1Collection
{
get { return detailType1Collection; }
}
private List<DetailType2> detailType2Collection;
public List<DetailType2> DetailType2Collection
{
get { return detailType2Collection; }
}
}
public class ClassDetailCollection2
{
private List<DetailType3> detailType3Collection;
public List<DetailType3> DetailType3Collection
{
get { return detailType3Collection; }
}
private List<DetailType4> detailType4Collection;
public List<DetailType4> DetailType4Collection
{
get { return detailType4Collection; }
}
}
//more other Types like MainClass1 above...
I can assume that I will have access to the old and new values of the object.
In that case, I can think of two ways to do this without being told what has explicitly changed.
1. Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed. This approach seems more flexible, in that I would not have to worry if new properties are added to any of the objects. But it also seems performance heavy.
2. Log changes in the setters of all the properties for all the objects. Other than the fact that this will need me to change a lot of code, it seems more brute force. It will be maintenance heavy and inflexible if someone updates any of the object types. But it may also be performance light, since I will not need to check what changed and can log exactly the properties that are changed.
Suggestions for any better approaches and/or improvements to the above approaches are welcome.
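For reference, option 1 could be sketched roughly like this. This only compares top-level readable properties; handling nested collections would require recursion on top of it:

```csharp
using System.Collections.Generic;
using System.Reflection;

// Rough sketch of option 1: compare the readable public properties of two
// instances of the same type and report the ones whose values differ.
public static class ChangeLogger
{
    public static List<string> Diff<T>(T oldObj, T newObj)
    {
        var changes = new List<string>();
        foreach (PropertyInfo prop in typeof(T).GetProperties(
                     BindingFlags.Public | BindingFlags.Instance))
        {
            // Skip write-only properties and indexers.
            if (!prop.CanRead || prop.GetIndexParameters().Length > 0)
                continue;

            object oldValue = prop.GetValue(oldObj, null);
            object newValue = prop.GetValue(newObj, null);
            if (!Equals(oldValue, newValue))
                changes.Add(string.Format("{0}: '{1}' -> '{2}'",
                    prop.Name, oldValue, newValue));
        }
        return changes;
    }
}
```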
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually all method execution, getters and setters are just special methods) allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
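Short of a full AOP framework, the effect of that injected "advice" on a setter can be approximated by hand, which may make the idea clearer. This is a simplified sketch with hypothetical names, not the PostSharp or Spring.NET API:

```csharp
using System;

// Hand-rolled setter "advice": each setter reports its change to whoever
// subscribed, which can then persist the entries like version control.
public class TrackedItem
{
    // (propertyName, oldValue, newValue)
    public event Action<string, object, object> PropertyChanged;

    private string category;
    public string Category
    {
        get { return category; }
        set
        {
            if (!Equals(category, value))
                PropertyChanged?.Invoke("Category", category, value);
            category = value;
        }
    }
}
```

An AOP tool weaves this interception into every setter for you, so the domain classes stay free of logging code; the sketch above is what the woven result effectively does for one property.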
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge to the domain about the business logic.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it's processed, and check the current object against those it's handled before processing it (making anti-backtracking an N^2 operation). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern, and include the creation of audit logs or any pre-persistence logic in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
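A minimal sketch of that audit-logger idea follows; all the names are made up for illustration. The unit of work records a strongly typed change object, and a registered logger that recognizes it produces the message, so there is no reflection or graph walking involved:

```csharp
using System.Collections.Generic;

// Each logger knows how to describe one kind of change (Strategy pattern).
public interface IAuditLogger
{
    bool CanLog(object change);
    string Log(object change);
}

// A strongly typed description of what the business logic just did.
public class LoanApprovedChange
{
    public string Borrower { get; set; }
    public decimal Amount { get; set; }
}

public class LoanApprovalLogger : IAuditLogger
{
    public bool CanLog(object change) => change is LoanApprovedChange;

    public string Log(object change)
    {
        var c = (LoanApprovedChange)change;
        return string.Format("Loan of {0} approved for {1}", c.Amount, c.Borrower);
    }
}

public class AuditLog
{
    private readonly List<IAuditLogger> _loggers = new List<IAuditLogger>();
    public List<string> Entries { get; } = new List<string>();

    public void Register(IAuditLogger logger) => _loggers.Add(logger);

    // Called from the unit of work: the matching logger writes the entry.
    public void Record(object change)
    {
        foreach (var logger in _loggers)
            if (logger.CanLog(change))
                Entries.Add(logger.Log(change));
    }
}
```

Because the code making the change states exactly what happened, the rules about what is worth logging (the Borrower/Loan example above) live in small, focused logger classes instead of one brittle graph walker.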
See how Adempiere did its change log: http://wiki.adempiere.net/Change_Log