I'm working with the common scenario where I'd like to access a subset of a repository without worrying about keeping it updated, e.g. 'get all orders whose price is greater than 10'. I have implemented a solution but have two issues with it (listed at the end).
A subset of a repository can be achieved with something equivalent to
var expensiveOrders = Repository.GetOrders().Where(o => o.Price > 10);
But this is an IEnumerable and will not be updated when the original collection is updated. I could add handlers for CollectionChanged, but what if we want to access a further subset?
var expensiveOrdersFromBob = expensiveOrders.Where(o => o.Name == "Bob");
We'd have to wire up a collection-changed handler for this one as well. The concept of live updates led me to think of Rx, so I set about building an ObservableCache which contains both the ObservableCollection of items that keeps itself up to date, and an Rx stream for notification. (The stream is also what updates the cache under the hood.)
class ObservableCache<T> : IObservableCache<T>
{
    private readonly ObservableCollection<T> _cache;
    private readonly IObservable<Tuple<T, CRUDOperationType>> _updates;

    public ObservableCache(IEnumerable<T> initialCache,
        IObservable<Tuple<T, CRUDOperationType>> currentStream, Func<T, bool> filter)
    {
        _cache = new ObservableCollection<T>(initialCache.Where(filter));
        _updates = currentStream.Where(tuple => filter(tuple.Item1));
        _updates.Subscribe(ProcessUpdate);
    }

    private void ProcessUpdate(Tuple<T, CRUDOperationType> update)
    {
        var item = update.Item1;
        lock (_cache)
        {
            switch (update.Item2)
            {
                case CRUDOperationType.Create:
                    _cache.Add(item);
                    break;
                case CRUDOperationType.Delete:
                    _cache.Remove(item);
                    break;
                case CRUDOperationType.Replace:
                case CRUDOperationType.Update:
                    _cache.Remove(item); // ToDo: implement some key-based equality
                    _cache.Add(item);
                    break;
            }
        }
    }

    public ObservableCollection<T> Cache
    {
        get { return _cache; }
    }

    public IObservable<T> Updates
    {
        get { return _updates.Select(tuple => tuple.Item1); }
    }

    public IObservableCache<T> Where(Func<T, bool> predicate)
    {
        return new ObservableCache<T>(_cache, _updates, predicate);
    }
}
You can then use it like this:
var expensiveOrders = new ObservableCache<Order>(_orders,
    updateStream,
    o => o.Price > 10);
expensiveOrders.Updates.Subscribe(
    o => Console.WriteLine("Got new expensive order: " + o));
_observableBoundToSomeCtrl = expensiveOrders.Cache;

var expensiveOrdersFromBob = expensiveOrders.Where(o => o.Name == "Bob");
expensiveOrdersFromBob.Updates.Subscribe(
    o => Console.WriteLine("Got new expensive order from Bob: " + o));
_observableBoundToSomeOtherCtrl = expensiveOrdersFromBob.Cache;
And so forth, the idea being that you can keep projecting the cache into narrower and narrower subsets and never have to worry about it being out of sync. So what are my problems then?
1. I'm wondering whether I can do away with the CRUD stuff by having Rx intrinsically update the collections. Maybe 'project' the updates with a Select, or something like that?
2. There is a race condition intrinsic to the repository-with-update pattern, in that I might miss some updates while I'm constructing the new cache. I think I need some sort of sequencing, but that would mean having all my T objects implement an ISequenceableItem interface. Is there any better way to do this? Rx is great because it handles all the threading for you; I'd like to leverage that.
The OLinq project at http://github.com/wasabii/OLinq is designed for this kind of reactive updating, and the ObservableView is, I think, what you are after.
Have a look at these two projects which achieve what you want albeit by different means:
https://github.com/RolandPheasant/DynamicData
https://bitbucket.org/mendelmonteiro/reactivetables [disclaimer: this is my project]
Suppose you have a definition like this:
class SetOp<T>
{
    public T Value { get; private set; }
    public bool Include { get; private set; }

    public SetOp(T value, bool include)
    {
        Value = value;
        Include = include;
    }
}
Using Observable.Scan and System.Collections.Immutable you can do something like this:
IObservable<SetOp<int>> ops = ...;
IImmutableSet<int> empty = ImmutableSortedSet<int>.Empty;
var observableSet = ops
    .Scan(empty, (s, op) => op.Include ? s.Add(op.Value) : s.Remove(op.Value))
    .StartWith(empty);
Using the immutable collection type is the key trick here: any observer of observableSet can do whatever it wants with the values that are pushed at it, because they are immutable. It is also efficient, because each Add or Remove reuses the majority of the set data structure between consecutive values.
Here is an example of an ops stream and the corresponding observableSet.
ops        observableSet
--------   ------------------
           {}
Add 7      {7}
Add 4      {4,7}
Add 5      {4,5,7}
Add 6      {4,5,6,7}
Remove 5   {4,6,7}
Add 8      {4,6,7,8}
Remove 4   {6,7,8}
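If it helps to see the stream driven end to end, here is a minimal sketch of wiring the ops source up by hand; the Subject and the Console output are my own illustration, not part of the answer above:

// Illustrative only: drive the ops stream manually with a Subject
var ops = new Subject<SetOp<int>>();
var observableSet = ops
    .Scan((IImmutableSet<int>)ImmutableSortedSet<int>.Empty,
          (s, op) => op.Include ? s.Add(op.Value) : s.Remove(op.Value));
observableSet.Subscribe(s => Console.WriteLine("{" + string.Join(",", s) + "}"));

ops.OnNext(new SetOp<int>(7, true));   // prints {7}
ops.OnNext(new SetOp<int>(4, true));   // prints {4,7}
ops.OnNext(new SetOp<int>(4, false));  // prints {7}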
You should not need to lock _cache within ProcessUpdate. If your source observable currentStream honors the Rx guidelines, you are guaranteed to only be within a single call to OnNext at a time. In other words, you will not receive another value from the stream while you are still processing the previous value.
The only reliable way to solve your race condition is to make sure you create the cache before the updateStream starts producing data.
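If you cannot control when the stream starts, one hedged sketch of closing the gap is to connect to the stream before taking the snapshot and let Replay buffer whatever arrives in between. Note this buffers without bound until the subscription is made, and any update already reflected in the snapshot must be safe to apply twice, which is another argument for key-based upserts:

// Sketch: listen first, snapshot second, so nothing falls in the gap
var published = updateStream.Replay();   // buffers notifications from Connect() onward
var connection = published.Connect();    // start buffering before the snapshot
var snapshot = Repository.GetOrders().ToList();
var cache = new ObservableCache<Order>(snapshot, published, o => o.Price > 10);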
You may want to take a look at Extensions for Reactive Extensions (Rxx). I believe Dave has built a number of utilities for binding UI controls to observable data. Documentation is sparse. I don't know if there is anything there for what you are doing.
I have got stuck in a scenario where I have a custom collection class which implements the ICollection interface, and I have a code segment like the following:

myCustomCollectionObject.Where(obj => obj.isValid).ToList().Sort(mycustomerComparer);

The above code filters the original collection and then sorts it, but in this kind of scenario the sorting is performed on a different collection rather than the original one. So, is there any way or workaround to first filter and then sort the original collection?
If you can't use the immutable/functional goodness of Linq, then you have to go old-skool:
//Remove unwanted items, iterating backwards so removals don't shift unvisited items
for (int i = myCustomCollectionObject.Count - 1; i >= 0; i--)
{
    if (!myCustomCollectionObject[i].IsValid)
        myCustomCollectionObject.Remove(myCustomCollectionObject[i]);
}
myCustomCollectionObject.Sort(mycustomerComparer);
Just happened to learn myCustomCollectionObject isn't List<T>, hence a complete rewrite.
Approach 1:
Have a Sort method in your class:

List<T> backingStructure; //assuming this is what you have

public void Sort(IComparer<T> comparer)
{
    backingStructure = backingStructure.Where(obj => obj.isValid).ToList();
    backingStructure.Sort(comparer);
}

and call Sort on the internal backing structure. I assume it has to be List<T> or an array, both of which have Sort on them. I have added the filtering logic inside your Sort method.
Approach 2:
If you don't want that, i.e. you want your filtering logic to stay external to the class, then have a method to repopulate your backing structure from an IEnumerable<T>, like:

List<T> backingStructure; //assuming this is what you have

//return type chosen to make the method name meaningful; up to you to have void
public UndoRedoObservableCollection<T> From(IEnumerable<T> list)
{
    backingStructure.Clear();
    foreach (var item in list)
        backingStructure.Add(item);
    return this;
}
Call it like:

myCustomCollectionObject = myCustomCollectionObject.From
(
    myCustomCollectionObject.Where(obj => obj.isValid)
                            .OrderBy(x => x.Key)
);

But you will need a key to specify the ordering.
Approach 3 (the best of all):
Have a RemoveInvalid method:

List<T> backingStructure; //assuming this is what you have

public void RemoveInvalid()
{
    //you can go for a non-Linq (for loop) removal approach as well
    backingStructure = backingStructure.Where(obj => obj.isValid).ToList();
}

public void Sort(IComparer<T> comparer)
{
    backingStructure.Sort(comparer);
}
Call it:
myCustomCollectionObject.RemoveInvalid();
myCustomCollectionObject.Sort(mycustomerComparer);
I'm working on the architecture for what is essentially a document parsing and analysis framework. Given the lines of the document, the framework will ultimately produce a large object (call it Document) representing the document.
Early filters in the pipeline will need to operate on a line-by-line basis. However, filters further down will need to transform (and ultimately produce) the Document object.
To implement this, I was thinking of using a filter definition like this:
public interface IFilter<in TIn, out TOut>
{
    TOut Execute(TIn data);
}
All filters will be registered with a PipelineManager class (as opposed to using the 'linked-list' style approach.) Before executing, PipelineManager will verify the integrity of the pipeline to ensure that no filter is given the wrong input type.
My question: Is it architecturally sound to have a pipeline with a changing data type (i.e. a good idea)?
P.S. The reason I'm implementing my application as a pipeline is because I feel it will be easy for plugin authors to replace/extend existing filters. Just swap out the filter you want to change with a different implementation, and you're set.
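For what it's worth, here is a rough sketch of how I imagine the integrity check might look; this is my own guess at a PipelineManager, not an established API, with reflection over IFilter<,> being the only trick:

public class PipelineManager
{
    private readonly List<object> _filters = new List<object>();

    public void Register<TIn, TOut>(IFilter<TIn, TOut> filter)
    {
        _filters.Add(filter);
    }

    // Throws if any filter's output type cannot feed the next filter's input type
    public void VerifyPipeline()
    {
        for (int i = 0; i < _filters.Count - 1; i++)
        {
            Type output = FilterInterface(_filters[i]).GetGenericArguments()[1];
            Type nextInput = FilterInterface(_filters[i + 1]).GetGenericArguments()[0];
            if (!nextInput.IsAssignableFrom(output))
                throw new InvalidOperationException(string.Format(
                    "Filter {0} produces {1} but filter {2} expects {3}.",
                    i, output.Name, i + 1, nextInput.Name));
        }
    }

    private static Type FilterInterface(object filter)
    {
        return filter.GetType().GetInterfaces()
            .First(t => t.IsGenericType && t.GetGenericTypeDefinition() == typeof(IFilter<,>));
    }
}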
EDIT: Note, have removed other answer to replace with this wall'o'text grin
NINJAEDIT: Fun fact: PowerShell (mentioned in @Loudenvier's answer) was once going to be named 'Monad'. Also found Wes Dyer's blog post on the topic: The Marvels of Monads
One veryveryvery simplistic way of looking at this whole "Monad" thing is to think of it as a box with a very basic interface:
Return
Bind
Zero (optional)
The uses are similarly simple in concept - let's say you have a "thing":
You can wrap your "thing" in the box (this would be the "return") and have a "BoxOfThing"
You can give instructions on how to take the thing out of this box and put it into another box (Bind)
You can get an empty box (the "Zero": think of it as a sort of "no-op", like multiplying by one or adding zero)
(there are other rules, but these three are the most interesting)
The Bind bit is the really interesting part, and also the part that makes most people's heads explode; basically, you're giving a specification of sorts for how to chain boxes together. Let's take a fairly simple Monad, the "Option" or "Maybe": a bit like Nullable<T>, but way cooler.

So everybody hates checking for null everywhere, but we're forced to due to the way reference types work; what we'd love is to be able to code something like this:

var zipcodesNearby = order.Customer.Address.City.ZipCodes;

And either get back a valid answer if (customer is valid + address is valid + ...), or "Nothing" if any bit of that logic fails... but no, we need to:
List<string> zipcodesNearBy = new List<string>();
if (goodOrder.Customer != null)
{
    if (goodOrder.Customer.Address != null)
    {
        if (goodOrder.Customer.Address.City != null)
        {
            if (goodOrder.Customer.Address.City.ZipCodes != null)
            {
                zipcodesNearBy = goodOrder.Customer.Address.City.ZipCodes;
            }
            else { /* do something else? throw? */ }
        }
        else { /* do something else? throw? */ }
    }
    else { /* do something else? throw? */ }
}
else { /* do something else? throw? */ }
(note: you can also rely on null coalescing, when applicable - although it's pretty nasty looking)
List<string> nullCoalescingZips =
    ((((goodOrder ?? new Order())
        .Customer ?? new Person())
        .Address ?? new Address())
        .City ?? new City())
        .ZipCodes ?? new List<string>();
The Maybe monad "rules" might look a bit like:
(note:C# is NOT ideal for this type of Type-mangling, so it gets a bit wonky)
public static Maybe<T> Return(T value)
{
    return ReferenceEquals(value, null) ? Maybe<T>.Nothing : new Maybe<T>(value);
}

public static Maybe<U> Bind<U>(Maybe<T> me, Func<T, Maybe<U>> map)
{
    return me != Maybe<T>.Nothing ?
        // extract, map, and rebox
        map(me.Value) :
        // We have nothing, so we pass along nothing...
        Maybe<U>.Nothing;
}
But this leads to some NASTY code:
var result1 =
    Maybe<string>.Bind(Maybe<string>.Return("hello"), hello =>
        Maybe<string>.Bind(Maybe<string>.Return((string)null), doh =>
            Maybe<string>.Bind(Maybe<string>.Return("world"), world =>
                Maybe<string>.Return(hello + doh + world))));
Luckily, there's a neat shortcut: SelectMany is very roughly equivalent to "Bind":
If we implement SelectMany for our Maybe<T>...
public class Maybe<T>
{
    public static readonly Maybe<T> Nothing = new Maybe<T>();

    private Maybe() { }

    public Maybe(T value) { Value = value; }

    public T Value { get; private set; }
}

public static class MaybeExt
{
    public static bool IsNothing<T>(this Maybe<T> me)
    {
        return me == Maybe<T>.Nothing;
    }

    public static Maybe<T> May<T>(this T value)
    {
        return ReferenceEquals(value, null) ? Maybe<T>.Nothing : new Maybe<T>(value);
    }

    // Note: this is basically just "Bind"
    public static Maybe<U> SelectMany<T, U>(this Maybe<T> me, Func<T, Maybe<U>> map)
    {
        return me != Maybe<T>.Nothing ?
            // extract, map, and rebox
            map(me.Value) :
            // We have nothing, so we pass along nothing...
            Maybe<U>.Nothing;
    }

    // This overload is the one that "turns on" query comprehension syntax...
    public static Maybe<V> SelectMany<T, U, V>(this Maybe<T> me, Func<T, Maybe<U>> map, Func<T, U, V> selector)
    {
        return me.SelectMany(x => map(x).SelectMany(y => selector(x, y).May()));
    }
}
Now we can piggyback on LINQ comprehension syntax!
var result1 =
    from hello in "Hello".May()
    from oops in ((string)null).May()
    from world in "world".May()
    select hello + oops + world;
// prints "Was Nothing!"
Console.WriteLine(result1.IsNothing() ? "Was Nothing!" : result1.Value);

var result2 =
    from hello in "Hello".May()
    from space in " ".May()
    from world in "world".May()
    select hello + space + world;
// prints "Hello world"
Console.WriteLine(result2.IsNothing() ? "Was Nothing!" : result2.Value);

var goodOrder = new Order { Customer = new Person { Address = new Address { City = new City { ZipCodes = new List<string> { "90210" } } } } };
var badOrder = new Order { Customer = new Person { Address = null } };

var zipcodesNearby =
    from ord in goodOrder.May()
    from cust in ord.Customer.May()
    from add in cust.Address.May()
    from city in add.City.May()
    from zip in city.ZipCodes.May()
    select zip;
// prints "90210"
Console.WriteLine(zipcodesNearby.IsNothing() ? "Nothing!" : zipcodesNearby.Value.FirstOrDefault());

var badZipcodesNearby =
    from ord in badOrder.May()
    from cust in ord.Customer.May()
    from add in cust.Address.May()
    from city in add.City.May()
    from zip in city.ZipCodes.May()
    select zip;
// prints "Nothing!"
Console.WriteLine(badZipcodesNearby.IsNothing() ? "Nothing!" : badZipcodesNearby.Value.FirstOrDefault());
Hah, just realized I forgot to mention the whole point of this: basically, once you've figured out what the equivalent of "bind" is at each stage of your pipeline, you can use the same type of pseudomonadic code to handle the wrapping, unwrapping, and processing of each of your type transformations.
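As a concrete (and entirely illustrative) sketch of that idea: if each pipeline stage is a Func<TIn, Maybe<TOut>>, the stages chain with the SelectMany defined above, and a failing stage short-circuits everything after it. The stage names and logic here are made up:

// Illustrative stage functions for a toy document pipeline
Func<string, Maybe<string[]>> splitLines = doc => doc.Split('\n').May();
Func<string[], Maybe<int>> countWords =
    lines => lines.Sum(l => l.Split(' ').Length).May();

Maybe<int> wordCount = "hello world\nsecond line".May()
    .SelectMany(splitLines)
    .SelectMany(countWords);
// wordCount.Value == 4; a null document anywhere in the chain yields Nothing instead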
This won't answer your question, but a great place to look for inspiration on pipelines in the .NET world is PowerShell. They've implemented the pipeline model in a very clever way, and the objects flowing through the pipeline change all the time.
I've had to produce a database-to-PDF document creation pipeline in the past and did it as PowerShell commandlets. It was so extensible that years later it is still being actively used and developed; it has only migrated from PowerShell 1 to 2 and now possibly to 3.
You can get great ideas here: http://blogs.technet.com/b/heyscriptingguy/
I'm trying to make my application thread safe. I hold my hands up and admit I'm new to threading, so I'm not sure how to proceed.
To give a simplified version, my application contains a list.
Most of the application accesses this list and doesn't change it, but may enumerate through it. All this happens on the UI thread.

Thread one will periodically look for items to be added to and removed from the list.

Thread two will enumerate the list and update the items with extra information. This has to run at the same time as thread one, as it can take anything from seconds to hours.

The first question is: does anyone have a recommended strategy for this?

Secondly, I was trying to make separate copies of the list that the main application will use, periodically getting a new copy when something is updated/added/removed, but this doesn't seem to be working.
I have my list and a copy:
public class MDGlobalObjects
{
    public List<T> mainList = new List<T>();

    public List<T> copyList
    {
        get
        {
            return new List<T>(mainList);
        }
    }
}
If I get copyList, modify it, save mainList, restart my application, load mainList and look again at copyList, then the changes are present. I presume I've done something wrong, as copyList seems to still refer to mainList.
I'm not sure if it makes a difference but everything is accessed through a static instance of the class.
public static MDGlobalObjects CacheObjects = new MDGlobalObjects();
This is the gist using a ConcurrentDictionary:
public class Element
{
    public string Key { get; set; }
    public string Property { get; set; }

    public Element CreateCopy()
    {
        return new Element
        {
            Key = this.Key,
            Property = this.Property,
        };
    }
}

var d = new ConcurrentDictionary<string, Element>();

// thread 1
// prune
foreach (var kv in d)
{
    if (kv.Value.Property == "ToBeRemoved")
    {
        Element dummy = null;
        d.TryRemove(kv.Key, out dummy);
    }
}

// thread 1
// add
Element toBeAdded = new Element();
// set basic properties here
d.TryAdd(toBeAdded.Key, toBeAdded);

// thread 2
// populate element
Element unPopulated = null;
if (d.TryGetValue("ToBePopulated", out unPopulated))
{
    Element nowPopulated = unPopulated.CreateCopy();
    nowPopulated.Property = "Populated";
    // either
    d.TryUpdate(unPopulated.Key, nowPopulated, unPopulated);
    // or
    d.AddOrUpdate(unPopulated.Key, nowPopulated, (key, value) => nowPopulated);
}

// read threads
// enumerate
foreach (Element element in d.Values)
{
    // do something with each element
}

// read threads
// try to get specific element
Element specific = null;
if (d.TryGetValue("SpecificKey", out specific))
{
    // do something with specific element
}
In thread 2, if you can set properties so that the whole object is consistent after each atomic write, then you can skip making a copy and just populate the properties with the object in place in the collection.
There are a few race conditions in this code, but they should be benign in that readers always have a consistent view of the collection.
Actually, copyList is just a shallow copy of mainList: the list itself is new, but the references to the objects contained in the list are still the same. To achieve what you are trying to do, you have to make a deep copy of the list, something like this:
public static IEnumerable<T> Clone<T>(this IEnumerable<T> collection) where T : ICloneable
{
    return collection.Select(item => (T)item.Clone());
}
and use it in the copyList getter like

return mainList.Clone().ToList();
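For the extension method to apply, the element type has to implement ICloneable. A minimal illustrative element (not your actual class) would look like:

public class TrackedItem : ICloneable
{
    public string Name { get; set; }
    public int Progress { get; set; }

    // Copy the fields so the clone is independent of the original
    public object Clone()
    {
        return new TrackedItem { Name = this.Name, Progress = this.Progress };
    }
}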
Looking at your question again, I would like to suggest an overall change of approach: since you are on .NET 4.0, you should use ConcurrentDictionary. With a concurrent collection you won't have to use locks, as it always maintains a valid state. So your code will look something like this.
Thread 1's code:

var obj = download_the_object();
dic.TryAdd("SomeUniqueKeyOfTheObject", obj);
// TryAdd will return false if the key already exists, so implement some sort of retry mechanism
Thread 2's code:

foreach (var item in dic)
{
    var obj = item.Value;
    var extraInfo = downloadExtraInfoforObject(obj);
    // build some new object with the extra info added, then swap it in with TryUpdate
    dic.TryUpdate(obj.uniqueKey, someNewObjectWithExtraInfoAdded, obj);
}
I have a series of about 30 lookup tables in my database schema, all with the same layout (and I would prefer to keep them as separate tables rather than one lookup table), and thus my Linq2SQL context has 30 entities for these lookup tables.
I have a standard class that I would use for CRUD operations on each of these 30 entities, for example:
public class ExampleAttributes : IAttributeList
{
    #region IAttributeList Members

    public bool AddItem(string Item, int SortOrder)
    {
        MyDataContext context = ContextHelper.GetContext();
        ExampleAttribute a = new ExampleAttribute();
        a.Name = Item;
        a.SortOrder = SortOrder;
        context.ExampleAttributes.InsertOnSubmit(a);
        try
        {
            context.SubmitChanges();
            return true;
        }
        catch
        {
            return false;
        }
    }

    public bool DeleteItem(int Id)
    {
        MyDataContext context = ContextHelper.GetContext();
        ExampleAttribute a = (from m in context.ExampleAttributes
                              where m.Id == Id
                              select m).FirstOrDefault();
        if (a == null)
            return true;

        // Make sure nothing is using it
        int Count = (from m in context.Businesses
                     where m.ExampleAttributeId == a.Id
                     select m).Count();
        if (Count > 0)
            return false;

        // Delete the item
        context.ExampleAttributes.DeleteOnSubmit(a);
        try
        {
            context.SubmitChanges();
            return true;
        }
        catch
        {
            return false;
        }
    }

    public bool UpdateItem(int Id, string Item, int SortOrder)
    {
        MyDataContext context = ContextHelper.GetContext();
        ExampleAttribute a = (from m in context.ExampleAttributes
                              where m.Id == Id
                              select m).FirstOrDefault();
        a.Name = Item;
        a.SortOrder = SortOrder;
        try
        {
            context.SubmitChanges();
            return true;
        }
        catch
        {
            return false;
        }
    }

    public String GetItem(int Id)
    {
        MyDataContext context = ContextHelper.GetContext();
        var Attribute = (from a in context.ExampleAttributes
                         where a.Id == Id
                         select a).FirstOrDefault();
        return Attribute.Name;
    }

    public Dictionary<int, string> GetItems()
    {
        MyDataContext context = ContextHelper.GetContext();
        context.ObjectTrackingEnabled = false;
        return (from o in context.ExampleAttributes
                orderby o.Name
                select new { o.Id, o.Name })
               .AsEnumerable()
               .ToDictionary(k => k.Id, v => v.Name);
    }

    #endregion
}
I could replicate this class 30 times with very minor changes for each lookup entity, but that seems messy somehow. Can this class be genericised so I can pass it the type I want, and have it handle the type differences in the LINQ queries internally? That way I would have one class to make additions to and one class to bug fix, which seems the way it should be done.
UPDATE:
Andrew's answer below gave me the option I was really considering while thinking about the question (passing the type in), but I need more clarification on how to genericise the LINQ queries. Can anyone clarify this?

Cheers,
Moo
There are a couple things you can try.
One is to define an interface that has all the relevant fields that the thirty entity classes share. Then, you would be able to have each entity class implement this interface (let's call it IMyEntity) by doing something like
public partial class EntityNumber1 : IMyEntity
{
}
for each entity (where EntityNumber1 is the name of one of the entity classes). Granted, this is still thirty different definitions, but your CRUD operation class could then operate on IMyEntity instead of having to write a new class each time.
A second way to do this is simply to genericize the CRUD operation class, as you suggest:
public class ExampleAttributes<T> : IAttributeList
{
...
which allows you to use T as the type on which to operate. Granted, this might be easier in combination with the first method, since you would still have to check for the presence of the attributes and cast the entity to the appropriate type or interface.
Edit:
To check for the presence of the appropriate properties on the entity, you might need to use reflection. One way to check whether the given type T has a particular property is:

typeof(T).GetProperties().Any(pi => pi.Name == "MyPropertyName" && pi.PropertyType == typeof(TypeIWant))

Of course, replace TypeIWant with the type you are expecting the property to be, and replace MyPropertyName with the name of the property you are checking for.
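Combining the two methods, a hedged and abridged sketch of the genericised class might look like this (my own naming, not a drop-in; the remaining members follow the same pattern, though entity-specific logic like the Businesses check in DeleteItem will still resist genericisation). GetTable<T>() is LINQ to SQL's accessor for a table by type, but note that LINQ to SQL sometimes refuses to translate queries against members declared on an interface, which is why GetItems drops to AsEnumerable first:

public interface IMyEntity
{
    int Id { get; set; }
    string Name { get; set; }
    int SortOrder { get; set; }
}

public class AttributeList<T> where T : class, IMyEntity, new()
{
    public bool AddItem(string item, int sortOrder)
    {
        MyDataContext context = ContextHelper.GetContext();
        T a = new T { Name = item, SortOrder = sortOrder };
        // GetTable<T>() gives the table for whichever entity T is
        context.GetTable<T>().InsertOnSubmit(a);
        try
        {
            context.SubmitChanges();
            return true;
        }
        catch
        {
            return false;
        }
    }

    public Dictionary<int, string> GetItems()
    {
        MyDataContext context = ContextHelper.GetContext();
        context.ObjectTrackingEnabled = false;
        // AsEnumerable sidesteps SQL translation of the interface members,
        // at the cost of pulling the whole (small) lookup table first
        return context.GetTable<T>()
                      .AsEnumerable()
                      .OrderBy(o => o.Name)
                      .ToDictionary(k => k.Id, v => v.Name);
    }
}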
Add a parameter to the constructors which specifies the type. Then you can work with it internally. One class, with perhaps a switch statement in the constructor.
For genericising a LINQ query, the biggest problem is that your DataContext exposes its collections based on type. There are a few ways this can be circumvented. You could try to access them using reflection, but that would require quite a bit of hacking and would pretty much destroy all the efficiency that LINQ to SQL provides.
The easiest way seems to be to use Dynamic LINQ. I have not used it personally, but it seems like it should support it. You can find more information in this thread: Generic LINQ query predicate?
and on http://aspalliance.com/1569_Dynamic_LINQ_Part_1_Using_the_LINQ_Dynamic_Query_Library.1
Maybe someone else can provide more information about this?
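I haven't used it myself, so treat this as a hedged sketch, but with the Dynamic Query library from the VS samples (System.Linq.Dynamic) a lookup by Id could look roughly like this; GetTable(Type) is the non-generic LINQ to SQL table accessor:

// Sketch only: assumes the System.Linq.Dynamic sample library is referenced
IQueryable table = context.GetTable(typeof(ExampleAttribute));
var match = table.Where("Id == @0", id)       // predicate parsed from a string
                 .Cast<ExampleAttribute>()
                 .FirstOrDefault();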
This isn't necessarily an answer to the question, but may be a solution to your problem. Have you considered generating all the classes that you need? T4 is built into Visual Studio, and can generate code for you. The link below describes it fairly broadly, but contains heaps of links for further information.
http://www.hanselman.com/blog/T4TextTemplateTransformationToolkitCodeGenerationBestKeptVisualStudioSecret.aspx
That way, you can define all the methods in one place, and generate the class files for your 30-odd lookup models. One place to make changes etc.
Maybe worth considering, and if not, still worth knowing about.
If I have a static method like this
private static bool TicArticleExists(string supplierIdent)
{
    using (TicDatabaseEntities db = new TicDatabaseEntities())
    {
        if ((from a in db.Articles
             where a.SupplierArticleID.Equals(supplierIdent)
             select a).Count() > 0)
            return true;
    }
    return false;
}
and use this method in various places, in foreach loops, or just call it numerous times:

1. Does it create and open a new connection every time?
2. If so, how can I tackle this? Should I cache the results somewhere, like in this case caching the entire Classifications table in a MemoryCache, and then running queries against this cached object?
3. Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? Because right now it is not.
4. Also, I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it will throw an exception (with FirstOrDefault() there is no exception; it returns null).

Thank you for clarification.
New connections are inexpensive thanks to connection pooling: basically, it grabs an already open connection (I think they are kept open for two minutes for reuse).

Still, caching may be better. I really do not like the FirstOrDefault. Think about whether you can actually pull in more in ONE statement, then work from that.

For the rest, I cannot say anything; too much depends on what you actually do there logically. What IS TicDatabaseEntities? CAN it be cached? How long? Same with (3): we do not know, because we do not know what else is in there.
If this is something like getting just some lookup strings for later use, I would say:

Build a key out of class I, class II, class III.
Load all classifications in (I assume there are only a couple of hundred).
Put them into a static / cached dictionary, assuming they normally do not change (and I think I have that idea here; is this a financial tick-stream database?).

Without business knowledge this cannot be answered.
4: Yes, that is as documented. First returns the first element or throws an exception; FirstOrDefault returns default(T) (a struct initialized with zeros for value types, null for classes).
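A two-line illustration of the difference:

var empty = new List<int>();
int a = empty.FirstOrDefault(); // 0, i.e. default(int)
int b = empty.First();          // throws InvalidOperationException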
Thanks Dan and TomTom, I've come up with this. Could you please comment on it if you see anything out of order?
public static IEnumerable<Article> TicArticles
{
    get
    {
        ObjectCache cache = MemoryCache.Default;
        if (cache["TicArticles"] == null)
        {
            CacheItemPolicy policy = new CacheItemPolicy();
            using (TicDatabaseEntities db = new TicDatabaseEntities())
            {
                IEnumerable<Article> articles = (from a in db.Articles select a).ToList();
                cache.Set("TicArticles", articles, policy);
            }
        }
        return (IEnumerable<Article>)MemoryCache.Default["TicArticles"];
    }
}

private static bool TicArticleExists(string supplierIdent)
{
    if (TicArticles.Count(p => p.SupplierArticleID.Equals(supplierIdent)) > 0)
        return true;
    return false;
}
If this is OK, I'm going to make all my methods follow this pattern.
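One hedged refinement to consider: the null check and Set above are not atomic, so two threads can both miss the cache and both hit the database. MemoryCache.AddOrGetExisting closes that window (a sketch using the same names as the code above):

public static IEnumerable<Article> TicArticles
{
    get
    {
        ObjectCache cache = MemoryCache.Default;
        var cached = (IEnumerable<Article>)cache["TicArticles"];
        if (cached != null)
            return cached;

        List<Article> articles;
        using (TicDatabaseEntities db = new TicDatabaseEntities())
            articles = db.Articles.ToList();

        // Returns the existing entry if another thread added one first
        var existing = (IEnumerable<Article>)cache.AddOrGetExisting(
            "TicArticles", articles, new CacheItemPolicy());
        return existing ?? articles;
    }
}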
does it create and open new connection every time?
No. Connections are pooled.
Should I cache the results somewhere
No. Do not cache entire tables.
should I make TicDatabaseEntities variable static and initialize it at class level?
No. Do not retain a DataContext instance longer than a UnitOfWork.
Should my class be static if it contains only static methods?
Sure... doing so will prevent anyone from creating useless instances of the class.
Also I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it will issue an exception
That is the behavior of First. As such, I typically restrict the use of First to IGroupings or to collections previously checked with .Any().
I'd rewrite your existing method as:
using (TicDatabaseEntities db = new TicDatabaseEntities())
{
    bool result = db.Articles
        .Any(a => a.SupplierArticleID.Equals(supplierIdent));
    return result;
}
If you are calling the method in a loop, I'd rewrite to:
private static Dictionary<string, bool> TicArticleExists(List<string> supplierIdents)
{
    using (TicDatabaseEntities db = new TicDatabaseEntities())
    {
        HashSet<string> queryResult = new HashSet<string>(db.Articles
            .Where(a => supplierIdents.Contains(a.SupplierArticleID))
            .Select(a => a.SupplierArticleID));
        Dictionary<string, bool> result = supplierIdents
            .ToDictionary(s => s, s => queryResult.Contains(s));
        return result;
    }
}
I'm trying to find the article where I read this, but I think it's better to do (if you're just looking for a count):
from a in db.Articles where a.SupplierArticleID.Equals(supplierIdent) select 1
Also, use Any instead of Count > 0.
Will update when I can cite a source.