I'm building a repository with caching using Spring.NET. Can I update/add/delete one item in the cached list without having to rebuild the whole list?
Looking at the documentation and the example project from their site, they always clear the whole cache whenever they update/add/delete a single item. As long as you only read an object or the list of objects, the caching works well, but it feels wasteful to rebuild the whole cache just because I changed one item.
Example:
// Cache per item and a list of items
[CacheResult("DefaultCache", "'AllMovies'", TimeToLive = "2m")]
[CacheResultItems("DefaultCache", "'Movie-' + ID")]
public IEnumerable<Movie> FindAll()
{
    return movies.Values;
}
// Update or add an item invalidating the list of objects
[InvalidateCache("DefaultCache", Keys = "'AllMovies'")]
public void Save([CacheParameter("DefaultCache", "'Movie-' + ID")]Movie movie)
{
    if (this.movies.ContainsKey(movie.ID))
    {
        this.movies[movie.ID] = movie;
    }
    else
    {
        this.movies.Add(movie.ID, movie);
    }
}
Having mutable objects stored in the cache seems to me a fountain of horrible side effects, yet IMHO that is what you would need if you want to add/remove entries from a cached list.
The implementations of CacheResultAdvice and InvalidateCacheAdvice allow you to store and invalidate an object (key) -> object (value) combination. You could add another layer and retrieve movies one by one (a sketch follows the links below), but I think that is just a case of premature optimization (with the opposite effect).
CacheResultAdvice
InvalidateCacheAdvice
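If you do want that extra per-item layer, a rough sketch could look like the following. The method argument reference in the key expression ("#id") is my assumption about Spring.NET's expression syntax, so verify it against the Spring.NET caching documentation before relying on it:
// Hedged sketch: cache each movie under its own key, so Save only needs to
// invalidate that single entry plus the 'AllMovies' list key.
[CacheResult("DefaultCache", "'Movie-' + #id", TimeToLive = "2m")]
public Movie FindOne(long id)
{
    return movies[id];
}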
Edit:
Btw, if you use a mature ORM, look for integrated second-level caching if you want to avoid hitting the DB server: http://www.klopfenstein.net/lorenz.aspx/using-syscache-as-secondary-cache-in-nhibernate
My C# application uses the Repository Pattern, and I have a terrible doubt about how to implement the "Update" part of the CRUD operations. Specifically, I don't know how to "tell" the repository which object I want to replace (so that persistence can be carried out afterwards).
I have the following code in a console application (written just as an example) that uses the libraries from the application:
class Program
{
    static void Main(string[] args)
    {
        var repo = new RepositorioPacientes();
        var listapacientes = repo.GetAll();

        // Choosing an element by index
        // (should be done via clicking on a WPF ListView or DataGrid)
        var editando = listapacientes[0];
        editando.Nome = "Novo Helton Moraes";
        repo.Update(editando);
    }
}
The question is: how am I supposed to tell the repository which element it has to update? Should I traverse the whole repository using an equality comparer to find the element?
NOTE: this repository encapsulates data access using XML serialization, one file per entity, and my entities (of type Paciente in this example) have the [Serializable] attribute. That said, the "Update" operation would end up replacing the XML file of the given entity with another containing the updated data, via the Serialize method.
I am not concerned with that, though. What I cannot figure out is how to implement repo.Update(entity) so that the repo knows that the entity being passed back is the same one that was selected from listapacientes, which is not the repository itself.
Thanks for reading!
Ultimately, this comes down to the time-space trade-off. Your suggestion of implementing an equality comparer and iterating through the entire repository uses little space, with a List<T> as the data structure backing the repository, but it costs time: in the worst case, where you update the last element of the list, you iterate through the entire thing and run the equality comparison on each element until it matches the last one. This is feasible for smaller repositories.
Another very common solution is to override GetHashCode on your T types and use a HashSet<T> or Dictionary<T, V> as the backing data structure. That reduces lookup time to O(1) on average but takes more space. This is probably a better solution for much larger repositories, especially if each type-T object has a property, like a GUID or database identifier, that is unique, because then you have a very easy hash value.
There are other data structures you can consider based on the exact use-case of your repository. For example, if you need to maintain an ordering of elements where only the highest or lowest element is fetched at a time, a priority queue or heap might be for you. If you spend time thinking about the data structure that backs your repository, the rest of the implementation should solve itself.
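As a rough illustration of the dictionary-backed approach, here is a minimal sketch. The Id property name and its int type are assumptions, since the question doesn't show the Paciente entity:
using System.Collections.Generic;

public class RepositorioPacientes
{
    // Keyed by the entity's unique identifier, giving O(1) lookup on update.
    private readonly Dictionary<int, Paciente> pacientes = new Dictionary<int, Paciente>();

    public void Update(Paciente entity)
    {
        if (!pacientes.ContainsKey(entity.Id))
        {
            throw new KeyNotFoundException("No entity with Id " + entity.Id);
        }

        pacientes[entity.Id] = entity;
        // Persistence (re-serializing the entity's XML file) would go here.
    }
}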
Don't load everything into memory. Try something like this:
class Program
{
    static void Main(string[] args)
    {
        var repo = new RepositorioPacientes();
        var editando = repo.SingleOrDefault(p => p.Id == 1);
        editando.Nome = "Novo Helton Moraes";
        repo.Update(editando);
    }
}
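For repo.SingleOrDefault to work like that, the repository needs to expose a query surface without materializing every entity up front. A hedged sketch of one way to do it, assuming the one-XML-file-per-entity layout from the question (the folder name and the Paciente shape are illustrative):
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public class RepositorioPacientes
{
    private static readonly XmlSerializer serializer = new XmlSerializer(typeof(Paciente));

    public Paciente SingleOrDefault(Func<Paciente, bool> predicate)
    {
        // Entities are deserialized lazily, one file at a time, so only as
        // much as needed is ever loaded into memory.
        return EnumerateAll().SingleOrDefault(predicate);
    }

    private IEnumerable<Paciente> EnumerateAll()
    {
        foreach (var file in Directory.EnumerateFiles("pacientes", "*.xml"))
        {
            using (var stream = File.OpenRead(file))
            {
                yield return (Paciente)serializer.Deserialize(stream);
            }
        }
    }
}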
You can use this link: http://www.codeproject.com/Articles/644605/CRUD-Operations-Using-the-Repository-Pattern-in-MV
And try this code:
public ActionResult Edit(int id)
{
    Book book = _bookRepository.GetBookByID(id);
    return View(book);
}

[HttpPost]
public ActionResult Edit(Book book)
{
    try
    {
        if (ModelState.IsValid)
        {
            _bookRepository.UpdateBook(book);
            _bookRepository.Save();
            return RedirectToAction("Index");
        }
    }
    catch (DataException)
    {
        ModelState.AddModelError("", "Unable to save changes. Try again, " +
            "and if the problem persists see your system administrator.");
    }
    return View(book);
}
I'm using .NET C# for a project.
I have a list of products which I want to cache, as they're used company-wide. If the products drop out of cache, I already know how to lock the cache and rebuild it, as per the patterns on various authority/blog sites.
In my pages/user controls etc, I might grab a reference to the cache, like this:
var myCacheInstance = cachedProducts;
However, I might also want to do something like this:
myCacheInstance.Add(new product(...));
Which will also update the cache as it's the same object.
I have two queries:
1) If I have a reference to the cached object, is it guaranteed to remain in cache for the lifetime of my variable?
2) In the scenario outlined above, how do I go about ensuring integrity? I'm only planning on adding in this instance, but suppose I was updating and deleting objects as well?
1) If I have a reference to the cached object, is it guaranteed to remain in cache for the lifetime of my variable?
If I interpret this question correctly, the answer is no.
cache.Add("key", new object()); // ADD KEY
var obj = cache["key"]; // GET REFERENCE TO CACHED OBJECT
cache.Remove("key"); // REMOVE OBJECT FROM CACHE
obj.DoSomething(..); //PERFECTLY VALID, STILL WORK ..
2) In the scenario outlined above, how do I go about ensuring integrity? I'm only planning on adding in this instance, but suppose I was updating and deleting objects as well?
You can add a bool property, for example:
public bool IsValid
{
    get; private set;
}
When the object is removed from the cache, the class sets this property to false. This is just an example; whether it really fits your needs, only you can tell.
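A minimal sketch of how that could hang together; the CacheEntry and InvalidatingCache names are made up for illustration, the point being that only the cache itself can flip the flag:
using System.Collections.Generic;

public class CacheEntry
{
    public bool IsValid { get; private set; }

    public CacheEntry()
    {
        IsValid = true;
    }

    internal void Invalidate()
    {
        IsValid = false;
    }
}

public class InvalidatingCache
{
    private readonly Dictionary<string, CacheEntry> items = new Dictionary<string, CacheEntry>();

    public void Add(string key, CacheEntry entry)
    {
        items[key] = entry;
    }

    public void Remove(string key)
    {
        CacheEntry entry;
        if (items.TryGetValue(key, out entry))
        {
            entry.Invalidate(); // holders of stale references can now detect the removal
            items.Remove(key);
        }
    }
}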
Do not pass around a reference to your cache!
Use an object for your cache, and if a client wants the cached items, return a new list of the cached items, or a read-only collection.
If you want to add items to the cache, use a method on the cache object, and in that method lock the cache and add the item. Same with remove.
Question 1: If you pass around references, you cannot guarantee anything.
Question 2: Use an object to cache all your items, as described above, for example:
public class Cache
{
    private readonly List<Item> cachedItems = new List<Item>();

    // Callers get a snapshot, never the internal list itself.
    public List<Item> GetItems()
    {
        lock (cachedItems) { return new List<Item>(cachedItems); }
    }

    public void Add(Item item)
    {
        lock (cachedItems) { cachedItems.Add(item); }
    }

    public void Remove(Item item)
    {
        lock (cachedItems) { cachedItems.Remove(item); }
    }
}
Hello. In order to ensure integrity, you must add a key:
Cache.Add("YourKey", yourValue);
Here you can find a helper for all operations:
http://johnnycoder.com/blog/2008/12/10/c-cache-helper-class/
For duration or timeout you have this overload, where you specify absoluteExpiration:
public Object Add(string key, Object value, CacheDependency dependencies,
    DateTime absoluteExpiration, TimeSpan slidingExpiration,
    CacheItemPriority priority, CacheItemRemovedCallback onRemoveCallback)
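A hedged usage sketch of that overload (it is the System.Web.Caching.Cache.Add signature) with a 10-minute absolute expiration; the key and value here are placeholders:
using System;
using System.Web;
using System.Web.Caching;

public static class CacheUsage
{
    public static void AddWithTimeout(string key, object value)
    {
        HttpContext.Current.Cache.Add(
            key,
            value,
            null,                             // no CacheDependency
            DateTime.UtcNow.AddMinutes(10),   // absoluteExpiration: 10 minutes from now
            Cache.NoSlidingExpiration,        // sliding expiration is not used with absolute
            CacheItemPriority.Normal,
            null);                            // no removal callback
    }
}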
I just installed membase and the enyim client for .NET, and came across an article that mentions this technique for integrating linq:
public static IEnumerable<T> CachedQuery<T>(this IQueryable<T> query, MembaseClient cache, string key)
    where T : class
{
    object result;
    if (cache.TryGet(key, out result))
    {
        return (IEnumerable<T>)result;
    }
    else
    {
        IEnumerable<T> items = query.ToList();
        cache.Store(StoreMode.Set, key, items);
        return items;
    }
}
It checks whether the required data is in the cache first and, if not, caches it and then returns it.
Currently I am using a Dictionary<string, List<T>> in my application and want to replace this with a membase/memcached type approach.
What about a similar pattern for adding items to a List<T>, or using LINQ operators on a cached list? It seems to me that it could be a bad idea to store an entire List<T> in cache under a single key and have to retrieve it, add to it, and then re-set it each time you want to add an element. Or is this an acceptable practice?
public bool Add(T item)
{
    object list;
    if (cache.TryGet(this.Key, out list))
    {
        var _list = list as List<T>;
        _list.Add(item);
        return cache.Store(StoreMode.Set, this.Key, _list);
    }
    else
    {
        var _list = new List<T>(new T[] { item });
        return cache.Store(StoreMode.Set, this.Key, _list);
    }
}
How are collections usually handled in a caching situation like this? Are hashing algorithms usually used instead, or some sort of key-prefixing system to identify 'Lists' of type T within the key-value store of the cache?
It depends on several factors:
Is this supposed to be scalable? Is the list user-specific, and can you be certain that "Add" won't be called twice at the same time for the same list? Race conditions are a risk.
I did implement such a thing where I stored a generic list in membase, but it's user-specific, so I can be pretty certain that there will be no race condition.
You should also consider the volume of the serialized list, which may be large. In my case the lists were pretty small.
Not sure if it helps, but I implemented a very basic iterable list with random access over membase (via double indirection). Random access is done via a composite key (which is composed of several fields).
You need to:
Have a key that holds the list's length.
Have the ability to build the composite key (e.g. one or more fields from your object).
Have the value that you'd like save (e.g. another field).
E.g.:
list_length = 3
prefix1_0 -> prefix2_[field1.value][field2.value][field3.value] -> field4.value
prefix1_1 -> prefix2_[field1.value][field2.value][field3.value] -> field4.value
prefix1_2 -> prefix2_[field1.value][field2.value][field3.value] -> field4.value
To perform serial access, you iterate over the keys with "prefix1". To perform random access, you use the keys with "prefix2" and the fields that compose the key.
I hope it's clear enough.
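To make the serial-access walk concrete, here is a hedged sketch using the TryGet call from the question above; the length key and prefixes are illustrative, and "cache" is the MembaseClient instance:
// Serial access: walk the index keys, then dereference each composite key.
object lengthObj;
if (cache.TryGet("list_length", out lengthObj))
{
    int length = (int)lengthObj;
    for (int i = 0; i < length; i++)
    {
        object compositeKey;
        if (cache.TryGet("prefix1_" + i, out compositeKey))   // first hop: index -> composite key
        {
            object value;
            cache.TryGet((string)compositeKey, out value);    // second hop: composite key -> value
        }
    }
}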
It appears that AutoMapper's BeforeMap and AfterMap methods have a critical bug: if one attempts to iterate over a collection of the source object to populate a property of the destination object, those mapping methods execute more than once. See: Extra iterations in a foreach in an AutoMapper map
What I'm trying to do is a bit complicated, so please bear with me.
I have an EF4 many-to-many graph (Games-to-Platforms) I'm trying to build based on incoming form data. In order to build the graph, I take the raw integer IDs that come from the form and then grab the correct Platforms from my repository in order to add them to the Game's collection. You can see my attempt at doing this within BeforeMap in the link I provided above.
The problem is that I'm not sure how to proceed. I need to be able to grab a hold of the destination (Game) object in order to successfully Add the Platforms to the Game. Is something like this possible in ForMember? From what I've read, it doesn't look like a custom resolver would work for me, and I'm not sure how I'd implement a custom type converter given all the moving parts (two entities, repository).
Any ideas or suggestions?
I simply decided to make my own static mapper. Not an ideal, or even great, solution, but it works. It can definitely be made more abstract, but I figure it's a band-aid until AutoMapper is fixed. My solution:
public static class GameMapper
{
    public static Game Map(IGameRepository repo, AdminGameEditModel formData, Game newGame)
    {
        newGame.GameID = formData.GameID;
        newGame.GameTitle = formData.GameTitle;
        newGame.GenreID = formData.GenreID;
        newGame.LastModified = DateTime.Now;
        newGame.ReviewScore = (short)formData.ReviewScore;
        newGame.ReviewText = formData.ReviewText;
        newGame.Cons = String.Join("|", formData.Cons);
        newGame.Pros = String.Join("|", formData.Pros);
        newGame.Slug = formData.Slug;

        if (newGame.Platforms != null && newGame.Platforms.Count > 0)
        {
            var oldPlats = newGame.Platforms.ToArray();
            foreach (var oldPlat in oldPlats)
            {
                newGame.Platforms.Remove(oldPlat);
            }
        }

        foreach (var platId in formData.PlatformIDs)
        {
            var plat = repo.GetPlatform(platId);
            newGame.Platforms.Add(plat);
        }

        return newGame;
    }
}
Unfortunately, I can't make the third parameter an out parameter due to my need to overwrite existing entity data during updating. Again, it's definitely not a pretty, or even good solution, but it does the job. I'm sure the OO gods will smite me at a later date.
I have multiple business objects in my application (C#, Winforms, WinXP). When the user executes some action on the UI, each of these objects are modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive tracking of activity going on in the application.
Many among these objects contain lists of other objects, and this nesting can be several levels deep. The two main requirements for any solution would be:
capture changes as accurately as possible
keep performance cost to a minimum.
Example of a business object:
public class MainClass1
{
    public MainClass1()
    {
        detailCollection1 = new ClassDetailCollection1();
        detailCollection2 = new ClassDetailCollection2();
    }

    private Int64 id;
    public Int64 ID
    {
        get { return id; }
        set { id = value; }
    }

    private DateTime timeStamp;
    public DateTime TimeStamp
    {
        get { return timeStamp; }
        set { timeStamp = value; }
    }

    private string category = string.Empty;
    public string Category
    {
        get { return category; }
        set { category = value; }
    }

    private string action = string.Empty;
    public string Action
    {
        get { return action; }
        set { action = value; }
    }

    private ClassDetailCollection1 detailCollection1;
    public ClassDetailCollection1 DetailCollection1
    {
        get { return detailCollection1; }
    }

    private ClassDetailCollection2 detailCollection2;
    public ClassDetailCollection2 DetailCollection2
    {
        get { return detailCollection2; }
    }

    // more collections here
}

public class ClassDetailCollection1
{
    private List<DetailType1> detailType1Collection;
    public List<DetailType1> DetailType1Collection
    {
        get { return detailType1Collection; }
    }

    private List<DetailType2> detailType2Collection;
    public List<DetailType2> DetailType2Collection
    {
        get { return detailType2Collection; }
    }
}

public class ClassDetailCollection2
{
    private List<DetailType3> detailType3Collection;
    public List<DetailType3> DetailType3Collection
    {
        get { return detailType3Collection; }
    }

    private List<DetailType4> detailType4Collection;
    public List<DetailType4> DetailType4Collection
    {
        get { return detailType4Collection; }
    }
}

// more other types like MainClass1 above...
I can assume that I will have access to the old values and new values of the object.
In that case I can think of two ways to try to do this without being told what has explicitly changed:
1. Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed (a sketch of this follows the list). This approach seems more flexible, in that I would not have to worry if new properties are added to any of the objects, but it also seems performance-heavy.
2. Log changes in the setter of all the properties of all the objects. Other than the fact that this will require me to change a lot of code, it seems more brute-force. It will be maintenance-heavy and inflexible if someone updates any of the object types. But it may also be performance-light, since I will not need to check what changed: the setters log exactly the properties that change.
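As a rough illustration of option 1, here is a shallow sketch of the reflective comparison; nested collections like DetailType1Collection would need their own recursive walk, which is exactly where the cost mounts:
using System;
using System.Collections.Generic;
using System.Reflection;

public static class ChangeLogger
{
    // Compares the public properties of two snapshots of the same type and
    // yields a description of every property whose value differs.
    public static IEnumerable<string> Diff<T>(T oldObj, T newObj)
    {
        foreach (PropertyInfo prop in typeof(T).GetProperties())
        {
            if (prop.GetIndexParameters().Length > 0)
            {
                continue; // skip indexers
            }

            object oldVal = prop.GetValue(oldObj, null);
            object newVal = prop.GetValue(newObj, null);
            if (!object.Equals(oldVal, newVal))
            {
                yield return string.Format("{0}: '{1}' -> '{2}'", prop.Name, oldVal, newVal);
            }
        }
    }
}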
Suggestions for better approaches and/or improvements to the above approaches are welcome.
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually into any method execution; getters and setters are just special methods), allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
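As a hedged sketch of what such advice could look like with Spring.NET's AOP support (it uses the AopAlliance interceptor interfaces), here is an around-advice that logs property-setter calls; the pointcut/proxy configuration is omitted and the Console target is just for illustration:
using System;
using AopAlliance.Intercept;

public class ChangeLoggingInterceptor : IMethodInterceptor
{
    public object Invoke(IMethodInvocation invocation)
    {
        // Property setters are compiled to methods named set_<PropertyName>.
        if (invocation.Method.Name.StartsWith("set_"))
        {
            string property = invocation.Method.Name.Substring(4);
            Console.WriteLine("Setting {0} to '{1}'", property, invocation.Arguments[0]);
        }

        return invocation.Proceed();
    }
}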
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; it seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge to the domain about the business logic.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it's processed, and check the current object against those it's handled before processing it (making anti-backtracking an N^2 operation). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph-walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern, and include the creation of audit logs or any pre-persistence logic in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
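To make the "audit loggers" idea concrete, here is a hedged sketch in which the unit of work raises an explicit change descriptor and a type-specific logger formats it; all the names below are illustrative, echoing the Borrower/Loan example above:
using System;

// Hypothetical change descriptor created by the business-layer unit of work.
public class LoanApproved
{
    public int LoanId { get; set; }
    public int BorrowerId { get; set; }
}

public interface IAuditLogger<TChange>
{
    void Log(TChange change);
}

// One logger per change type: it knows exactly which message the business wants,
// so no graph-walking or reflection is needed to decide what to record.
public class LoanApprovedLogger : IAuditLogger<LoanApproved>
{
    public void Log(LoanApproved change)
    {
        Console.WriteLine("Loan {0} approved for borrower {1}",
            change.LoanId, change.BorrowerId);
    }
}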
See how Adempiere did its change log: http://wiki.adempiere.net/Change_Log