Caching best practices - Single object or multiple entries? - c#

Does anyone have any advice on which method is better when caching data in a C# ASP.net application?
I am currently using a combination of two approaches, with some data (lists, dictionaries, the usual domain-specific information) being put directly into the cache and cast back to its type when needed, and some data being kept inside a GlobalData class and retrieved through that class (i.e. the GlobalData class is cached, and its properties are the actual data).
Is either approach preferable?
I get the feeling that caching each item separately would be more sensible from a concurrency point of view; however, it creates a lot more work in the long run, with more functions in a Utility class that deal purely with getting data out of a cache location.
Suggestions would be appreciated.

Generally the cache's performance is so much better than the underlying source (e.g. a DB) that the performance of the cache itself is not a problem. The main goal is rather to get as high a cache-hit ratio as possible (unless you are developing at really large scale, because then it pays off to optimize the cache as well).
To achieve this I usually try to make it as straightforward as possible for the developer to use the cache (so that we don't miss any chances of cache hits just because the developer is too lazy to use the cache). In some projects we've used a modified version of the CacheHandler available in Microsoft's Enterprise Library.
With CacheHandler (which uses Policy Injection) you can easily make a method "cacheable" by just adding an attribute to it. For instance this:
[CacheHandler(0, 30, 0)]
public Object GetData(Object input)
{
}
would make all calls to that method cached for 30 minutes. All invocations get a unique cache key based on the input data and the method name, so calls with different input are cached separately, but if you call it more than once within the timeout interval with the same input then the method only gets executed once.
Our modified version looks like this:
using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;
using System.Runtime.Remoting.Contexts;
using System.Text;
using System.Web;
using System.Web.Caching;
using System.Web.UI;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.Unity.InterceptionExtension;
namespace Middleware.Cache
{
/// <summary>
/// An <see cref="ICallHandler"/> that implements caching of the return values of
/// methods. This handler stores the return value in the ASP.NET cache or the Items object of the current request.
/// </summary>
[ConfigurationElementType(typeof (CacheHandler)), Synchronization]
public class CacheHandler : ICallHandler
{
/// <summary>
/// The default expiration time for the cached entries: 5 minutes
/// </summary>
public static readonly TimeSpan DefaultExpirationTime = new TimeSpan(0, 5, 0);
private readonly object cachedData;
private readonly DefaultCacheKeyGenerator keyGenerator;
private readonly bool storeOnlyForThisRequest = true;
private TimeSpan expirationTime;
private GetNextHandlerDelegate getNext;
private IMethodInvocation input;
public CacheHandler(TimeSpan expirationTime, bool storeOnlyForThisRequest)
{
keyGenerator = new DefaultCacheKeyGenerator();
this.expirationTime = expirationTime;
this.storeOnlyForThisRequest = storeOnlyForThisRequest;
}
/// <summary>
/// This constructor is used when we wrap cached data in a CacheHandler so that
/// we can reload the object after it has been removed from the cache.
/// </summary>
/// <param name="expirationTime"></param>
/// <param name="storeOnlyForThisRequest"></param>
/// <param name="input"></param>
/// <param name="getNext"></param>
/// <param name="cachedData"></param>
public CacheHandler(TimeSpan expirationTime, bool storeOnlyForThisRequest,
IMethodInvocation input, GetNextHandlerDelegate getNext,
object cachedData)
: this(expirationTime, storeOnlyForThisRequest)
{
this.input = input;
this.getNext = getNext;
this.cachedData = cachedData;
}
/// <summary>
/// Gets or sets the expiration time for cache data.
/// </summary>
/// <value>The expiration time.</value>
public TimeSpan ExpirationTime
{
get { return expirationTime; }
set { expirationTime = value; }
}
#region ICallHandler Members
/// <summary>
/// Implements the caching behavior of this handler.
/// </summary>
/// <param name="input"><see cref="IMethodInvocation"/> object describing the current call.</param>
/// <param name="getNext">delegate used to get the next handler in the current pipeline.</param>
/// <returns>Return value from target method, or cached result if previous inputs have been seen.</returns>
public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
{
lock (input.MethodBase)
{
this.input = input;
this.getNext = getNext;
return loadUsingCache();
}
}
public int Order
{
get { return 0; }
set { }
}
#endregion
private IMethodReturn loadUsingCache()
{
//We need to synchronize calls to the CacheHandler on method level
//to prevent duplicate calls to methods that could be cached.
lock (input.MethodBase)
{
if (TargetMethodReturnsVoid(input) || HttpContext.Current == null)
{
return getNext()(input, getNext);
}
var inputs = new object[input.Inputs.Count];
for (int i = 0; i < inputs.Length; ++i)
{
inputs[i] = input.Inputs[i];
}
string cacheKey = keyGenerator.CreateCacheKey(input.MethodBase, inputs);
object cachedResult = getCachedResult(cacheKey);
if (cachedResult == null)
{
var stopWatch = Stopwatch.StartNew();
var realReturn = getNext()(input, getNext);
stopWatch.Stop();
if (realReturn.Exception == null && realReturn.ReturnValue != null)
{
AddToCache(cacheKey, realReturn.ReturnValue);
}
return realReturn;
}
var cachedReturn = input.CreateMethodReturn(cachedResult, input.Arguments);
return cachedReturn;
}
}
private object getCachedResult(string cacheKey)
{
//When the method uses input that is not serializable
//we cannot create a cache key and can therefore not
//cache the data.
if (cacheKey == null)
{
return null;
}
object cachedValue = !storeOnlyForThisRequest ? HttpRuntime.Cache.Get(cacheKey) : HttpContext.Current.Items[cacheKey];
var cachedValueCast = cachedValue as CacheHandler;
if (cachedValueCast != null)
{
//This is an object that is reloaded when it is being removed.
//It is therefore wrapped in a CacheHandler-object and we must
//unwrap it before returning it.
return cachedValueCast.cachedData;
}
return cachedValue;
}
private static bool TargetMethodReturnsVoid(IMethodInvocation input)
{
var targetMethod = input.MethodBase as MethodInfo;
return targetMethod != null && targetMethod.ReturnType == typeof (void);
}
private void AddToCache(string key, object valueToCache)
{
if (key == null)
{
//When the method uses input that is not serializable
//we cannot create a cache key and can therefore not
//cache the data.
return;
}
if (!storeOnlyForThisRequest)
{
HttpRuntime.Cache.Insert(
key,
valueToCache,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
expirationTime,
CacheItemPriority.Normal, null);
}
else
{
HttpContext.Current.Items[key] = valueToCache;
}
}
}
/// <summary>
/// This interface describes classes that can be used to generate cache key strings
/// for the <see cref="CacheHandler"/>.
/// </summary>
public interface ICacheKeyGenerator
{
/// <summary>
/// Creates a cache key for the given method and set of input arguments.
/// </summary>
/// <param name="method">Method being called.</param>
/// <param name="inputs">Input arguments.</param>
/// <returns>A (hopefully) unique string to be used as a cache key.</returns>
string CreateCacheKey(MethodBase method, object[] inputs);
}
/// <summary>
/// The default <see cref="ICacheKeyGenerator"/> used by the <see cref="CacheHandler"/>.
/// </summary>
public class DefaultCacheKeyGenerator : ICacheKeyGenerator
{
private readonly LosFormatter serializer = new LosFormatter(false, "");
#region ICacheKeyGenerator Members
/// <summary>
/// Create a cache key for the given method and set of input arguments.
/// </summary>
/// <param name="method">Method being called.</param>
/// <param name="inputs">Input arguments.</param>
/// <returns>A (hopefully) unique string to be used as a cache key.</returns>
public string CreateCacheKey(MethodBase method, params object[] inputs)
{
try
{
var sb = new StringBuilder();
if (method.DeclaringType != null)
{
sb.Append(method.DeclaringType.FullName);
}
sb.Append(':');
sb.Append(method.Name);
TextWriter writer = new StringWriter(sb);
if (inputs != null)
{
foreach (var input in inputs)
{
sb.Append(':');
if (input != null)
{
//Different instances of DateTime which represent the same value
//sometimes serialize differently due to some internal variables being different.
//We therefore serialize it using Ticks instead.
var inputDateTime = input as DateTime?;
if (inputDateTime.HasValue)
{
sb.Append(inputDateTime.Value.Ticks);
}
else
{
//Serialize the input and write it to the key StringBuilder.
serializer.Serialize(writer, input);
}
}
}
}
return sb.ToString();
}
catch
{
//Something went wrong when generating the key (probably an input value was not serializable).
//Return a null key.
return null;
}
}
#endregion
}
}
Microsoft deserves most of the credit for this code. We've only added things like caching at request level instead of across requests (more useful than you might think) and fixed some bugs (e.g. equal DateTime objects serializing to different values).

Under what conditions do you need to invalidate your cache? Objects should be stored so that when they are invalidated repopulating the cache only requires re-caching the items that were invalidated.
For example, say you have cached a Customer object that contains the delivery details for an order along with the shopping basket. Invalidating the shopping basket because an item was added or removed would also force the delivery details to be repopulated unnecessarily.
(NOTE: This is an obtuse example and I'm not advocating this design, just trying to demonstrate the principle; my imagination is a bit off today.)

Ed, I assume those lists and dictionaries contain almost-static data with low chances of expiration. Then there's data that gets frequent hits but also changes more frequently, so you're caching it using the HttpRuntime cache.
Now, you should think about all that data and all of the dependencies between the different types. If you logically find that the HttpRuntime-cached data somehow depends on your GlobalData items, you should move those into the cache as well and set up the appropriate dependencies in there, so you'll benefit from the "cascading expiration".
Even if you do use your custom caching mechanism, you'd still have to provide all the synchronization yourself, so you won't save on that by avoiding the other approach.
If you need (pre-ordered) lists of items with a really low change frequency, you can still do that using the HttpRuntime cache. You could just cache a dictionary and either use it to list your items or to index them and access them by your custom key.
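As a rough sketch of the cascading-expiration idea (untested; the key names and the GlobalData entry are purely illustrative), a dependent item can be tied to a parent cache key via CacheDependency, so removing or replacing the parent evicts the dependents automatically:

using System;
using System.Web;
using System.Web.Caching;

public static class DependentCacheExample
{
    // Cache the parent entry first; dependents are tied to this key.
    public static void CacheGlobalData(object globalData)
    {
        HttpRuntime.Cache.Insert("GlobalData", globalData);
    }

    // When the "GlobalData" entry is removed or replaced,
    // this entry is evicted automatically (cascading expiration).
    public static void CacheDependentItem(string key, object value)
    {
        var dependsOnGlobalData = new CacheDependency(null, new[] { "GlobalData" });
        HttpRuntime.Cache.Insert(
            key,
            value,
            dependsOnGlobalData,
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(30),
            CacheItemPriority.Normal,
            null);
    }
}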

How about the best (worst?) of both worlds?
Have the globaldata class manage all the cache access internally. The rest of your code can then just use globaldata, meaning that it doesn't need to be cache-aware at all.
You could change the cache implementation as/when you like just by updating globaldata, and the rest of your code won't know or care what's going on inside.
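A minimal sketch of that idea, assuming a hypothetical GlobalData facade with a read-through accessor (the Customer type and the load call are made up for illustration):

using System;
using System.Web;

// Hypothetical facade: callers only ever talk to GlobalData,
// so the caching strategy can change without touching them.
public static class GlobalData
{
    public static Customer GetCustomer(int id)
    {
        string key = "Customer:" + id;
        var customer = HttpRuntime.Cache[key] as Customer;
        if (customer == null)
        {
            customer = LoadCustomerFromDatabase(id);  // assumed data-access call
            HttpRuntime.Cache.Insert(key, customer);  // expiration policy can be added here
        }
        return customer;
    }

    private static Customer LoadCustomerFromDatabase(int id)
    {
        return new Customer { Id = id };  // placeholder for real data access
    }
}

public class Customer
{
    public int Id { get; set; }
}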

There's much more than that to consider when architecting your caching strategy. Think of your cache store as if it were your in-memory DB, so carefully handle the dependencies and the expiration policy for each and every type stored in there. It really doesn't matter what you use for caching (System.Web, another commercial solution, rolling your own...).
I'd try to centralize it though, and also use some sort of pluggable architecture. Make your data consumers access it through a common API (an abstract cache that exposes it) and plug your caching layer in at runtime (let's say the ASP.NET cache).
You should really take a top-down approach when caching data to avoid any kind of data integrity problems (proper dependencies, like I said) and then take care of providing synchronization.
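For instance, a minimal sketch of such an abstraction (interface and class names are illustrative), with the ASP.NET cache as one pluggable implementation:

using System;
using System.Web;
using System.Web.Caching;

// Consumers depend only on this interface; the concrete cache is plugged in at runtime.
public interface ICacheProvider
{
    object Get(string key);
    void Set(string key, object value, TimeSpan slidingExpiration);
    void Remove(string key);
}

public class AspNetCacheProvider : ICacheProvider
{
    public object Get(string key)
    {
        return HttpRuntime.Cache.Get(key);
    }

    public void Set(string key, object value, TimeSpan slidingExpiration)
    {
        HttpRuntime.Cache.Insert(
            key, value, null,
            Cache.NoAbsoluteExpiration, slidingExpiration,
            CacheItemPriority.Normal, null);
    }

    public void Remove(string key)
    {
        HttpRuntime.Cache.Remove(key);
    }
}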


Is ISerializable backwards-compatible with previous versions of classes with fewer fields?

Sorry if I've worded the question a bit oddly! Basically, I have a serializable class that has a single field at the moment, but it will definitely gain more in the future as we add features to the system. The serialization process will be used both for passing instances to a WCF service and for reading/writing them from/to file. Of course, the latter may be a big problem if I'm continually updating the class with extra fields.
Luckily, I think I've solved the problem by adding a try/catch block around the field setter in the constructor, which I'll also do for any other fields that are added to the class.
/// <summary>
/// Represents launch options supplied to an executable.
/// </summary>
[Serializable]
public sealed class ExecutableLaunchOptions : ISerializable
{
/// <summary>
/// Creates a new set of executable launch options.
/// </summary>
public ExecutableLaunchOptions() { }
/// <summary>
/// Creates a new set of executable launch options via deserialization.
/// </summary>
/// <param name="info">the serialization information.</param>
/// <param name="context">the streaming context.</param>
public ExecutableLaunchOptions(
SerializationInfo info, StreamingContext context) : this()
{
// Get the value of the window style from the serialization information.
try { this.windowStyle = (ProcessWindowStyle)info.GetValue(nameof(this.windowStyle), typeof(ProcessWindowStyle)); } catch { }
}
// Instance variables.
private ProcessWindowStyle windowStyle = ProcessWindowStyle.Normal;
/// <summary>
/// Gets or sets the window style to apply to the executable when it launches.
/// </summary>
public ProcessWindowStyle WindowStyle
{
get { return this.windowStyle; }
set { this.windowStyle = value; }
}
/// <summary>
/// Gets the information required for the serialization of this set of launch options.
/// </summary>
/// <param name="info">the serialization information.</param>
/// <param name="context">the streaming context.</param>
public void GetObjectData(
SerializationInfo info, StreamingContext context)
{
// Add the value of the window style to the serialization information.
info.AddValue(nameof(this.windowStyle), this.windowStyle, typeof(ProcessWindowStyle));
}
}
I'm guessing this will allow me to retain backwards-compatibility with files containing instances of previous versions of the class when I deserialize them, as the code will simply throw and subsequently catch exceptions for each field that doesn't exist in the serialization information, leaving their values at their default. Am I correct here, or am I missing something?
Serializing objects with the FormatterAssemblyStyle.Simple formatter will allow you to read old versions of your serialized objects. Marking new fields with [OptionalField] will allow old versions of your app to open new serialization files without throwing. So should you use them? No. Nooo no noooo no no no.
Serialization was designed for exchanging data between processes, it was not, and is not, a data persistence mechanism. The format has changed in the past, and may change in the future, so it is unsafe to use it for persistent data you expect to open again in the future with a new .NET version.
The BinaryFormatter algorithm is proprietary, so it will be very difficult to write non-.NET applications using such data.
Serialization is inefficient for data that contains multiple properties because it requires deserializing the entire object to access any field. This is especially problematic if the data contains large data like images.
If your data does not require random access, I suggest serializing it to a text format like JSON or XML. If the data is large, you should consider compressing the text-encoded data.
If you require random-access to data you should investigate data stores like MySQL or SQL Server CE.
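To illustrate the text-format suggestion above, here is a minimal sketch using the built-in XmlSerializer instead of ISerializable/BinaryFormatter (class and method names are illustrative; DataContractJsonSerializer would work similarly for JSON). Properties missing from an older file are simply left at their defaults, which keeps old files readable:

using System.Diagnostics;
using System.IO;
using System.Xml.Serialization;

public class LaunchOptions
{
    public LaunchOptions()
    {
        WindowStyle = ProcessWindowStyle.Normal;
    }

    // Missing from an old file? The default set in the constructor is kept.
    public ProcessWindowStyle WindowStyle { get; set; }
}

public static class LaunchOptionsStorage
{
    private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(LaunchOptions));

    public static void Save(LaunchOptions options, string path)
    {
        using (var stream = File.Create(path))
        {
            Serializer.Serialize(stream, options);
        }
    }

    public static LaunchOptions Load(string path)
    {
        using (var stream = File.OpenRead(path))
        {
            return (LaunchOptions)Serializer.Deserialize(stream);
        }
    }
}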
You can use SerializationInfo.GetEnumerator() to loop through the name-value pairs contained in the SerializationInfo to look for items that might only be conditionally present:
public ExecutableLaunchOptions(
SerializationInfo info, StreamingContext context) : this()
{
// Get the value of the window style from the serialization information.
var enumerator = info.GetEnumerator();
while (enumerator.MoveNext())
{
var current = enumerator.Current;
if (current.Name == "windowStyle" && current.ObjectType == typeof(ProcessWindowStyle))
{
this.windowStyle = (ProcessWindowStyle)current.Value;
}
}
}
Note that this class is so old (it dates from C# 1.0) that, despite having a GetEnumerator() method, it doesn't actually implement IEnumerable.
If you need to do this often, you can introduce an extension method like so:
public static class SerializationInfoExtensions
{
public static IEnumerable<SerializationEntry> AsEnumerable(this SerializationInfo info)
{
if (info == null)
throw new ArgumentNullException("info");
var enumerator = info.GetEnumerator();
while (enumerator.MoveNext())
{
yield return enumerator.Current;
}
}
}
And then do:
public ExecutableLaunchOptions(
SerializationInfo info, StreamingContext context) : this()
{
foreach (var current in info.AsEnumerable())
{
if (current.Name == "windowStyle" && current.ObjectType == typeof(ProcessWindowStyle))
{
this.windowStyle = (ProcessWindowStyle)current.Value;
}
}
}
Which is a bit more modern and readable.

Multithreaded caching in SQL CLR

Are there any multithreaded caching mechanisms that will work in a SQL CLR function without requiring the assembly to be registered as "unsafe"?
As also described in this post, simply using a lock statement will throw an exception on a safe assembly:
System.Security.HostProtectionException:
Attempted to perform an operation that was forbidden by the CLR host.
The protected resources (only available with full trust) were: All
The demanded resources were: Synchronization, ExternalThreading
I want any calls to my functions to all use the same internal cache, in a thread-safe manner so that many operations can do cache reads and writes simultaneously. Essentially - I need a ConcurrentDictionary that will work in a SQLCLR "safe" assembly. Unfortunately, using ConcurrentDictionary itself gives the same exception as above.
Is there something built-in to SQLCLR or SQL Server to handle this? Or am I misunderstanding the threading model of SQLCLR?
I have read as much as I can find about the security restrictions of SQLCLR. In particular, the following articles may be useful to understand what I am talking about:
SQL Server CLR Integration Part 1: Security
Deploy/Use assemblies which require Unsafe/External Access with CLR and T-SQL
This code will ultimately be part of a library that is distributed to others, so I really don't want to be required to run it as "unsafe".
One option that I am considering (brought up in comments below by Spender) is to reach out to tempdb from within the SQLCLR code and use that as a cache instead. But I'm not quite sure exactly how to do that. I'm also not sure if it will be anywhere near as performant as an in-memory cache. See update below.
I am interested in any other alternatives that might be available. Thanks.
Example
The code below uses a static concurrent dictionary as a cache and accesses that cache via SQL CLR user-defined functions. All calls to the functions will work with the same cache. But this will not work unless the assembly is registered as "unsafe".
public class UserDefinedFunctions
{
private static readonly ConcurrentDictionary<string,string> Cache =
new ConcurrentDictionary<string, string>();
[SqlFunction]
public static SqlString GetFromCache(string key)
{
string value;
if (Cache.TryGetValue(key, out value))
return new SqlString(value);
return SqlString.Null;
}
[SqlProcedure]
public static void AddToCache(string key, string value)
{
Cache.TryAdd(key, value);
}
}
These are in an assembly called SqlClrTest, and they use the following SQL wrappers:
CREATE FUNCTION [dbo].[GetFromCache](@key nvarchar(4000))
RETURNS nvarchar(4000) WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SqlClrTest].[SqlClrTest.UserDefinedFunctions].[GetFromCache]
GO
CREATE PROCEDURE [dbo].[AddToCache](@key nvarchar(4000), @value nvarchar(4000))
WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SqlClrTest].[SqlClrTest.UserDefinedFunctions].[AddToCache]
GO
Then they are used in the database like this:
EXEC dbo.AddToCache 'foo', 'bar'
SELECT dbo.GetFromCache('foo')
UPDATE
I figured out how to access the database from SQLCLR using the Context Connection. The code in this Gist shows both the ConcurrentDictionary approach, and the tempdb approach. I then ran some tests, with the following results measured from client statistics (average of 10 trials):
Concurrent Dictionary Cache
10,000 Writes: 363ms
10,000 Reads : 81ms
TempDB Cache
10,000 Writes: 3546ms
10,000 Reads : 1199ms
So that throws out the idea of using a tempdb table. Is there really nothing else I can try?
I've added a comment that says something similar, but I'm going to put it here as an answer instead, because I think it might need some background.
ConcurrentDictionary, as you've correctly pointed out, requires UNSAFE ultimately because it uses thread-synchronisation primitives beyond even lock - these explicitly require access to lower-level OS resources, and therefore require the code to reach outside the SQL hosting environment.
So the only way you can get a solution that doesn't require UNSAFE is to use one which doesn't use any locks or other thread-synchronisation primitives. However, if the underlying structure is a .NET Dictionary then the only truly safe way to share it across multiple threads is to use lock or Interlocked.CompareExchange (see here) with a spin wait. I can't seem to find any information on whether the latter is allowed under the SAFE permission set, but my guess is that it's not.
I'd also question the validity of applying a CLR-based solution to this problem inside a database engine, whose indexing and lookup capability is likely to be far in excess of any hosted CLR solution.
The accepted answer is not correct. Interlocked.CompareExchange is not an option since it requires a shared resource to update, and there is no way to create said static variable, in a SAFE Assembly, that can be updated.
There is (for the most part) no way to cache data across calls in a SAFE Assembly (nor should there be). The reason is that there is a single instance of the class (well, within the App Domain which is per-database per-owner) that is shared across all sessions. That behavior is, more often than not, highly undesirable.
However, I did say "for the most part" it was not possible. There is a way, though I am not sure if it is a bug or intended to be this way. I would err on the side of it being a bug since again, sharing a variable across sessions is a very precarious activity. Nonetheless, you can (do so at your own risk, AND this is not specifically thread safe, but might still work) modify static readonly collections. Yup. As in:
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
using System.Collections;
public class CachingStuff
{
private static readonly Hashtable _KeyValuePairs = new Hashtable();
[SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true)]
public static SqlString GetKVP(SqlString KeyToGet)
{
if (_KeyValuePairs.ContainsKey(KeyToGet.Value))
{
return _KeyValuePairs[KeyToGet.Value].ToString();
}
return SqlString.Null;
}
[SqlProcedure]
public static void SetKVP(SqlString KeyToSet, SqlString ValueToSet)
{
if (!_KeyValuePairs.ContainsKey(KeyToSet.Value))
{
_KeyValuePairs.Add(KeyToSet.Value, ValueToSet.Value);
}
return;
}
[SqlProcedure]
public static void UnsetKVP(SqlString KeyToUnset)
{
_KeyValuePairs.Remove(KeyToUnset.Value);
return;
}
}
And running the above, with the database set as TRUSTWORTHY OFF and the assembly set to SAFE, we get:
EXEC dbo.SetKVP 'f', 'sdfdg';
SELECT dbo.GetKVP('f'); -- sdfdg
SELECT dbo.GetKVP('g'); -- NULL
EXEC dbo.UnsetKVP 'f';
SELECT dbo.GetKVP('f'); -- NULL
That all being said, there is probably a better way that is not SAFE but also not UNSAFE. Since the desire is to use memory for caching of repeatedly used values, why not set up a memcached or redis server and create SQLCLR functions to communicate with it? That would only require setting the assembly to EXTERNAL_ACCESS.
This way you don't have to worry about several issues:
consuming a bunch of memory which could/should be used for queries.
there is no automatic expiration of the data held in static variables. It exists until you remove it or the App Domain gets unloaded, which might not happen for a long time. But memcached and redis do allow for setting an expiration time.
this is not explicitly thread safe. But cache servers are.
SQL Server locking functions sp_getapplock and sp_releaseapplock can be used in SAFE context. Employ them to protect an ordinary Dictionary and you have yourself a cache!
The price of locking this way is much worse than ordinary lock, but that may not be an issue if you are accessing your cache in a relatively coarsely-grained way.
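A rough, untested sketch of that approach from inside SQLCLR, using the context connection and an ordinary Dictionary (the lock resource name and cache shape are made up; production code should also check sp_getapplock's return value):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class AppLockCache
{
    private static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();

    // Acquire or release a session-scoped application lock protecting the dictionary.
    private static void AppLock(SqlConnection conn, bool acquire)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = acquire ? "sp_getapplock" : "sp_releaseapplock";
            cmd.Parameters.AddWithValue("@Resource", "MyClrCache");
            if (acquire)
            {
                cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            }
            cmd.Parameters.AddWithValue("@LockOwner", "Session");
            cmd.ExecuteNonQuery();
        }
    }

    public static string Get(string key)
    {
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            AppLock(conn, true);
            try
            {
                string value;
                return Cache.TryGetValue(key, out value) ? value : null;
            }
            finally
            {
                AppLock(conn, false);
            }
        }
    }

    public static void Set(string key, string value)
    {
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            AppLock(conn, true);
            try
            {
                Cache[key] = value;
            }
            finally
            {
                AppLock(conn, false);
            }
        }
    }
}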
--- UPDATE ---
The Interlocked.CompareExchange can be used on a field contained in a static instance. The static reference can be made readonly, but a field in the referenced object can still be mutable, and therefore usable by Interlocked.CompareExchange.
Both Interlocked.CompareExchange and static readonly are allowed in SAFE context. Performance is much better than sp_getapplock.
Based on Andras' answer, here is my implementation of a "SharedCache" to read from and write to a dictionary under the SAFE permission set.
EvalManager (Static)
using System;
using System.Collections.Generic;
using Z.Expressions.SqlServer.Eval;
namespace Z.Expressions
{
/// <summary>Manager class for eval.</summary>
public static class EvalManager
{
/// <summary>The cache for EvalDelegate.</summary>
public static readonly SharedCache<string, EvalDelegate> CacheDelegate = new SharedCache<string, EvalDelegate>();
/// <summary>The cache for SQLNETItem.</summary>
public static readonly SharedCache<string, SQLNETItem> CacheItem = new SharedCache<string, SQLNETItem>();
/// <summary>The shared lock.</summary>
public static readonly SharedLock SharedLock;
static EvalManager()
{
// ENSURE to create lock first
SharedLock = new SharedLock();
}
}
}
SharedLock
using System.Threading;
namespace Z.Expressions.SqlServer.Eval
{
/// <summary>A shared lock.</summary>
public class SharedLock
{
/// <summary>Acquires the lock on the specified lockValue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
public static void AcquireLock(ref int lockValue)
{
do
{
// TODO: it's possible to wait 10 ticks? Thread.Sleep doesn't really support it.
} while (0 != Interlocked.CompareExchange(ref lockValue, 1, 0));
}
/// <summary>Releases the lock on the specified lockValue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
public static void ReleaseLock(ref int lockValue)
{
Interlocked.CompareExchange(ref lockValue, 0, 1);
}
/// <summary>Attempts to acquire lock on the specified lockvalue.</summary>
/// <param name="lockValue">[in,out] The lock value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public static bool TryAcquireLock(ref int lockValue)
{
return 0 == Interlocked.CompareExchange(ref lockValue, 1, 0);
}
}
}
SharedCache
using System;
using System.Collections.Generic;
namespace Z.Expressions.SqlServer.Eval
{
/// <summary>A shared cache.</summary>
/// <typeparam name="TKey">Type of key.</typeparam>
/// <typeparam name="TValue">Type of value.</typeparam>
public class SharedCache<TKey, TValue>
{
/// <summary>The lock value.</summary>
public int LockValue;
/// <summary>Default constructor.</summary>
public SharedCache()
{
InnerDictionary = new Dictionary<TKey, TValue>();
}
/// <summary>Gets the number of items cached.</summary>
/// <value>The number of items cached.</value>
public int Count
{
get { return InnerDictionary.Count; }
}
/// <summary>Gets or sets the inner dictionary used to cache items.</summary>
/// <value>The inner dictionary used to cache items.</value>
public Dictionary<TKey, TValue> InnerDictionary { get; set; }
/// <summary>Acquires the lock on the shared cache.</summary>
public void AcquireLock()
{
SharedLock.AcquireLock(ref LockValue);
}
/// <summary>Adds or updates a cache value for the specified key.</summary>
/// <param name="key">The cache key.</param>
/// <param name="value">The cache value used to add.</param>
/// <param name="updateValueFactory">The cache value factory used to update.</param>
/// <returns>The value added or updated in the cache for the specified key.</returns>
public TValue AddOrUpdate(TKey key, TValue value, Func<TKey, TValue, TValue> updateValueFactory)
{
try
{
AcquireLock();
TValue oldValue;
if (InnerDictionary.TryGetValue(key, out oldValue))
{
value = updateValueFactory(key, oldValue);
InnerDictionary[key] = value;
}
else
{
InnerDictionary.Add(key, value);
}
return value;
}
finally
{
ReleaseLock();
}
}
/// <summary>Adds or update a cache value for the specified key.</summary>
/// <param name="key">The cache key.</param>
/// <param name="addValueFactory">The cache value factory used to add.</param>
/// <param name="updateValueFactory">The cache value factory used to update.</param>
/// <returns>The value added or updated in the cache for the specified key.</returns>
public TValue AddOrUpdate(TKey key, Func<TKey, TValue> addValueFactory, Func<TKey, TValue, TValue> updateValueFactory)
{
try
{
AcquireLock();
TValue value;
TValue oldValue;
if (InnerDictionary.TryGetValue(key, out oldValue))
{
value = updateValueFactory(key, oldValue);
InnerDictionary[key] = value;
}
else
{
value = addValueFactory(key);
InnerDictionary.Add(key, value);
}
return value;
}
finally
{
ReleaseLock();
}
}
/// <summary>Clears all cached items.</summary>
public void Clear()
{
try
{
AcquireLock();
InnerDictionary.Clear();
}
finally
{
ReleaseLock();
}
}
/// <summary>Releases the lock on the shared cache.</summary>
public void ReleaseLock()
{
SharedLock.ReleaseLock(ref LockValue);
}
/// <summary>Attempts to add a value in the shared cache for the specified key.</summary>
/// <param name="key">The key.</param>
/// <param name="value">The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryAdd(TKey key, TValue value)
{
try
{
AcquireLock();
if (!InnerDictionary.ContainsKey(key))
{
InnerDictionary.Add(key, value);
}
return true;
}
finally
{
ReleaseLock();
}
}
/// <summary>Attempts to remove a key from the shared cache.</summary>
/// <param name="key">The key.</param>
/// <param name="value">[out] The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryRemove(TKey key, out TValue value)
{
try
{
AcquireLock();
var isRemoved = InnerDictionary.TryGetValue(key, out value);
if (isRemoved)
{
InnerDictionary.Remove(key);
}
return isRemoved;
}
finally
{
ReleaseLock();
}
}
/// <summary>Attempts to get value from the shared cache for the specified key.</summary>
/// <param name="key">The key.</param>
/// <param name="value">[out] The value.</param>
/// <returns>true if it succeeds, false if it fails.</returns>
public bool TryGetValue(TKey key, out TValue value)
{
try
{
return InnerDictionary.TryGetValue(key, out value);
}
catch (Exception)
{
value = default(TValue);
return false;
}
}
}
}
Source Files:
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/EvalManager/EvalManager.cs
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/Shared/SharedLock.cs
https://github.com/zzzprojects/Eval-SQL.NET/blob/master/src/Z.Expressions.SqlServer.Eval/Shared/SharedCache.cs
Would your needs be satisfied with a table variable? They're kept in memory, as long as possible anyway, so performance should be excellent. Not so useful if you need to maintain your cache between app calls, of course.
Created as a type, you can also pass such a table into a sproc or UDF.

Thread-safe enumeration of shared memory that can be updated or deleted

I have a shared object between threads that is used to hold file state information. The object that holds the information is this class:
/// <summary>
/// A synchronized dictionary class.
/// Uses ReaderWriterLockSlim to handle locking. The dictionary does not allow recursion by enumeration. It is purely used for quick read access.
/// </summary>
/// <typeparam name="T">Type that is going to be kept.</typeparam>
public sealed class SynchronizedDictionary<U,T> : IEnumerable<T>
{
private System.Threading.ReaderWriterLockSlim _lock = new System.Threading.ReaderWriterLockSlim();
private Dictionary<U, T> _collection = null;
public SynchronizedDictionary()
{
_collection = new Dictionary<U, T>();
}
/// <summary>
/// if getting:
/// Enters read lock.
/// Tries to get the value.
///
/// if setting:
/// Enters write lock.
/// Tries to set value.
/// </summary>
/// <param name="key">The key to fetch the value with.</param>
/// <returns>Object of T</returns>
public T this[U key]
{
get
{
_lock.EnterReadLock();
try
{
return _collection[key];
}
finally
{
_lock.ExitReadLock();
}
}
set
{
Add(key, value);
}
}
/// <summary>
/// Enters write lock.
/// Removes key from collection
/// </summary>
/// <param name="key">Key to remove.</param>
public void Remove(U key)
{
_lock.EnterWriteLock();
try
{
_collection.Remove(key);
}
finally
{
_lock.ExitWriteLock();
}
}
/// <summary>
/// Enters write lock.
/// Adds the value to the collection if the key does not exist.
/// </summary>
/// <param name="key">Key to add.</param>
/// <param name="value">Value to add.</param>
private void Add(U key, T value)
{
_lock.EnterWriteLock();
try
{
if (!_collection.ContainsKey(key))
{
_collection[key] = value;
}
}
finally
{
_lock.ExitWriteLock();
}
}
/// <summary>
/// Collection does not support iteration.
/// </summary>
/// <returns>Throw NotSupportedException</returns>
public IEnumerator<T> GetEnumerator()
{
throw new NotSupportedException();
}
/// <summary>
/// Collection does not support iteration.
/// </summary>
/// <returns>Throw NotSupportedException</returns>
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
throw new NotSupportedException();
}
}
I call this dictionary like this:
SynchronizedDictionary<string, AssemblyInformation> _cache = new SynchronizedDictionary<string, AssemblyInformation>();
Other threads can be spawned and use the dictionary like this:
_cache["key"];
The dictionary can be modified at runtime. I see no problem here. Or am I wrong?
The problem, in my eyes, lies in the enumerator, because I want to make an enumerator that iterates over the collection. How do I do this? I have thought of these three solutions:
Making a Enumerator like this:
http://www.codeproject.com/Articles/56575/Thread-safe-enumeration-in-C
(but using ReaderWriterLockSlim)
Expose the lock object, like SyncRoot does (but with
ReaderWriterLockSlim), so a caller calls the enter and exit read methods.
Use a database (SQLite fx) instead, holding the information.
The problem with number 1) is:
it uses the constructor to enter read mode. What if GetEnumerator() is called manually, not via a foreach, and the caller forgets to call Dispose?
I do not know if this is a good coding style, even though I like the code.
If the caller uses a foreach, I do not know what the caller might do between the instantiation of the enumerator and the call to Dispose. If I have understood the documentation correctly, this can end up blocking the writer as long as there is one reader left doing some heavy work.
The problem with number 2) is:
I do not like exposing this. I know that the .NET API does it, but I do not like it.
It is up to the caller to enter and exit the lock properly.
There is no problem with 3) in my eyes. But I am doing this as a small spare-time project and I want to learn more about multi-threading and reflection, so I want to keep it as a last option.
The reason why I want to iterate over the collection at runtime is that I want to find the values that match some criteria.
Maybe I have just invented a problem?
I know of ConcurrentDictionary, but I do not want to use this. I am using this project as a playground. Playing with threading and reflection.
EDIT
I have been asked what it is that I am reading and writing. And I am going to tell this in this edit. I am reading and writing this class:
public class AssemblyInformation
{
public string FilePath { get; private set; }
public string Name { get; private set; }
public AssemblyInformation(string filePath, string name)
{
FilePath = filePath;
Name = name;
}
}
I am doing a lot of reads and almost no writes at runtime, maybe 2000 reads for every write. There are not going to be a lot of objects either, maybe 200.
I'll treat your questions as a request for feedback which helps you learn. Let me address the three solutions you have already identified:
Yes, this is why such a design should never be exposed as an API to a 3rd-party (or even other developers). It is tricky to use correctly. This codeproject article has some nasty advice.
Much better, because this model would be explicit about locking, not implicit. However, this violates separation of concerns in my opinion.
Not sure what you mean here. You could have a Snapshot() method on your dictionary which does a read-only copy which can be safely passed around and read. This is a different trade-off than solution 1.
There is a different solution entirely: Use an immutable dictionary. Such a dictionary could be passed around, read and enumerated safely even under concurrent write access. Such dictionaries/maps are commonly implemented using trees.
I'll elaborate more on a key point: you need to think about the concurrent system as a whole. You cannot make your app correct just by making all of its components thread-safe (in your case a dictionary). You need to define what you are using the dictionary for.
You say:
The reason why I want to iterate over the collection at runtime is
that I want to find the values, that matches some criteria.
So you have concurrent writes happening to the data and want to get a consistent snapshot atomically from the dictionary (maybe to show a progress report in the UI?). Now that we know this goal, we can devise a solution:
You could add a Clone method to your dictionary which clones all data while taking the read-lock. This will give the caller a fresh object which it can then enumerate over independently. This would be a clean and safely exposable API.
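For instance, a minimal sketch of such a Clone method, meant to be added to the SynchronizedDictionary<U, T> class from the question:

/// <summary>
/// Enters the read lock and returns a point-in-time copy of the collection.
/// The copy can be enumerated freely without holding any lock.
/// </summary>
public Dictionary<U, T> Clone()
{
    _lock.EnterReadLock();
    try
    {
        return new Dictionary<U, T>(_collection);
    }
    finally
    {
        _lock.ExitReadLock();
    }
}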
Instead of implementing IEnumerable directly I would add a Values property (like Dictionary.Values):
public IEnumerable<T> Values {
get {
_lock.EnterReadLock();
try {
foreach (T v in _collection.Values) {
yield return v;
}
} finally {
_lock.ExitReadLock();
}
}
}

MemoryCache with regions support?

I need to add cache functionality and found a new shiny class called MemoryCache. However, I find MemoryCache a little bit crippled as it is (I'm in need of regions functionality). Among other things I need to add something like ClearAll(region). Authors made a great effort to keep this class without regions support, code like:
if (regionName != null)
{
throw new NotSupportedException(R.RegionName_not_supported);
}
flies in almost every method.
I don't see an easy way to override this behaviour. The only way to add region support that I can think of is to add a new class as a wrapper around MemoryCache rather than as a class that inherits from MemoryCache. Then in this new class create a Dictionary and let each method "buffer" region calls. Sounds nasty and wrong, but eventually...
Do you know of better ways to add regions to MemoryCache?
I know it is a long time since you asked this question, so this is not really an answer to you, but rather an addition for future readers.
I was also surprised to find that the standard implementation of MemoryCache does NOT support regions. It would have been so easy to provide right away. I therefore decided to wrap the MemoryCache in my own simple class to provide the functionality I often need.
I enclose my code here to save time for others having the same need!
/// <summary>
/// =================================================================================================================
/// This is a static encapsulation of the Framework provided MemoryCache to make it easier to use.
/// - Keys can be of any type, not just strings.
/// - A typed Get method is provided for the common case where type of retrieved item actually is known.
/// - Exists method is provided.
/// - Except for the Set method with custom policy, some specific Set methods are also provided for convenience.
/// - One SetAbsolute method with remove callback is provided as an example.
/// The Set method can also be used for custom remove/update monitoring.
/// - Domain (or "region") functionality missing in default MemoryCache is provided.
/// This is very useful when adding items with identical keys but belonging to different domains.
/// Example: "Customer" with Id=1, and "Product" with Id=1
/// =================================================================================================================
/// </summary>
public static class MyCache
{
private const string KeySeparator = "_";
private const string DefaultDomain = "DefaultDomain";
private static MemoryCache Cache
{
get { return MemoryCache.Default; }
}
// -----------------------------------------------------------------------------------------------------------------------------
// The default instance of the MemoryCache is used.
// Memory usage can be configured in standard config file.
// -----------------------------------------------------------------------------------------------------------------------------
// cacheMemoryLimitMegabytes: The amount of maximum memory size to be used. Specified in megabytes.
// The default is zero, which indicates that the MemoryCache instance manages its own memory
// based on the amount of memory that is installed on the computer.
// physicalMemoryPercentage: The percentage of physical memory that the cache can use. It is specified as an integer value from 1 to 100.
// The default is zero, which indicates that the MemoryCache instance manages its own memory
// based on the amount of memory that is installed on the computer.
// pollingInterval: The time interval after which the cache implementation compares the current memory load with the
// absolute and percentage-based memory limits that are set for the cache instance.
// The default is two minutes.
// -----------------------------------------------------------------------------------------------------------------------------
// <configuration>
// <system.runtime.caching>
// <memoryCache>
// <namedCaches>
// <add name="default" cacheMemoryLimitMegabytes="0" physicalMemoryPercentage="0" pollingInterval="00:02:00" />
// </namedCaches>
// </memoryCache>
// </system.runtime.caching>
// </configuration>
// -----------------------------------------------------------------------------------------------------------------------------
/// <summary>
/// Store an object and let it stay in cache until manually removed.
/// </summary>
public static void SetPermanent(string key, object data, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from write.
/// </summary>
public static void SetAbsolute(string key, object data, double minutes, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTime.Now + TimeSpan.FromMinutes(minutes) };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from write.
/// callback is a method to be triggered when item is removed
/// </summary>
public static void SetAbsolute(string key, object data, double minutes, CacheEntryRemovedCallback callback, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { AbsoluteExpiration = DateTime.Now + TimeSpan.FromMinutes(minutes), RemovedCallback = callback };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an object and let it stay in cache x minutes from last write or read.
/// </summary>
public static void SetSliding(object key, object data, double minutes, string domain = null)
{
CacheItemPolicy policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(minutes) };
Set(key, data, policy, domain);
}
/// <summary>
/// Store an item and let it stay in cache according to specified policy.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="data">Object to store</param>
/// <param name="policy">CacheItemPolicy</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static void Set(object key, object data, CacheItemPolicy policy, string domain = null)
{
Cache.Add(CombinedKey(key, domain), data, policy);
}
/// <summary>
/// Get typed item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static T Get<T>(object key, string domain = null)
{
return (T)Get(key, domain);
}
/// <summary>
/// Get item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static object Get(object key, string domain = null)
{
return Cache.Get(CombinedKey(key, domain));
}
/// <summary>
/// Check if item exists in cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static bool Exists(object key, string domain = null)
{
return Cache[CombinedKey(key, domain)] != null;
}
/// <summary>
/// Remove item from cache.
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
public static void Remove(object key, string domain = null)
{
Cache.Remove(CombinedKey(key, domain));
}
#region Support Methods
/// <summary>
/// Parse domain from combinedKey.
/// This method is exposed publicly because it can be useful in callback methods.
/// The key property of the callback argument will in our case be the combinedKey.
/// To be interpreted, it needs to be split into domain and key with these parse methods.
/// </summary>
public static string ParseDomain(string combinedKey)
{
return combinedKey.Substring(0, combinedKey.IndexOf(KeySeparator));
}
/// <summary>
/// Parse key from combinedKey.
/// This method is exposed publicly because it can be useful in callback methods.
/// The key property of the callback argument will in our case be the combinedKey.
/// To be interpreted, it needs to be split into domain and key with these parse methods.
/// </summary>
public static string ParseKey(string combinedKey)
{
return combinedKey.Substring(combinedKey.IndexOf(KeySeparator) + KeySeparator.Length);
}
/// <summary>
/// Create a combined key from given values.
/// The combined key is used when storing and retrieving from the inner MemoryCache instance.
/// Example: Product_76
/// </summary>
/// <param name="key">Key within specified domain</param>
/// <param name="domain">NULL will fallback to default domain</param>
private static string CombinedKey(object key, string domain)
{
return string.Format("{0}{1}{2}", string.IsNullOrEmpty(domain) ? DefaultDomain : domain, KeySeparator, key);
}
#endregion
}
You can create more than just one MemoryCache instance, one for each partition of your data.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx :
you can create multiple instances of the MemoryCache class for use in the same application and in the same AppDomain instance
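A minimal sketch of that approach (names are illustrative): keep one named MemoryCache per region, and implement the "ClearAll(region)" idea by disposing that region's instance and starting a fresh one. Note that the name "default" is reserved for MemoryCache.Default and can't be used for a new instance:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class RegionedCache
{
    private static readonly object Sync = new object();
    private static readonly Dictionary<string, MemoryCache> Regions =
        new Dictionary<string, MemoryCache>();

    private static MemoryCache GetRegion(string region)
    {
        lock (Sync)
        {
            MemoryCache cache;
            if (!Regions.TryGetValue(region, out cache))
            {
                cache = new MemoryCache(region);   // region name must not be "default"
                Regions[region] = cache;
            }
            return cache;
        }
    }

    public static void Set(string region, string key, object value, DateTimeOffset absoluteExpiration)
    {
        GetRegion(region).Set(key, value, absoluteExpiration);
    }

    public static object Get(string region, string key)
    {
        return GetRegion(region).Get(key);
    }

    // "ClearAll(region)": dispose the region's cache and drop it; the next Set recreates it.
    public static void ClearRegion(string region)
    {
        lock (Sync)
        {
            MemoryCache cache;
            if (Regions.TryGetValue(region, out cache))
            {
                cache.Dispose();
                Regions.Remove(region);
            }
        }
    }
}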
I just recently came across this problem. I know this is an old question but maybe this might be useful for some folks. Here is my iteration of the solution by Thomas F. Abraham
namespace CLRTest
{
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Globalization;
using System.Linq;
using System.Runtime.Caching;
class Program
{
static void Main(string[] args)
{
CacheTester.TestCache();
}
}
public class SignaledChangeEventArgs : EventArgs
{
public string Name { get; private set; }
public SignaledChangeEventArgs(string name = null) { this.Name = name; }
}
/// <summary>
/// Cache change monitor that allows an app to fire a change notification
/// to all associated cache items.
/// </summary>
public class SignaledChangeMonitor : ChangeMonitor
{
// Shared across all SignaledChangeMonitors in the AppDomain
private static ConcurrentDictionary<string, EventHandler<SignaledChangeEventArgs>> ListenerLookup =
new ConcurrentDictionary<string, EventHandler<SignaledChangeEventArgs>>();
private string _name;
private string _key;
private string _uniqueId = Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture);
public override string UniqueId
{
get { return _uniqueId; }
}
public SignaledChangeMonitor(string key, string name)
{
_key = key;
_name = name;
// Register instance with the shared event
ListenerLookup[_uniqueId] = OnSignalRaised;
base.InitializationComplete();
}
public static void Signal(string name = null)
{
// Raise shared event to notify all subscribers
foreach (var subscriber in ListenerLookup.ToList())
{
subscriber.Value?.Invoke(null, new SignaledChangeEventArgs(name));
}
}
protected override void Dispose(bool disposing)
{
// Set delegate to null so it can't be accidentally called in Signal() while being disposed
ListenerLookup[_uniqueId] = null;
EventHandler<SignaledChangeEventArgs> outValue = null;
ListenerLookup.TryRemove(_uniqueId, out outValue);
}
private void OnSignalRaised(object sender, SignaledChangeEventArgs e)
{
if (string.IsNullOrWhiteSpace(e.Name) || string.Compare(e.Name, _name, true) == 0)
{
// Cache objects are obligated to remove entry upon change notification.
base.OnChanged(null);
}
}
}
public static class CacheTester
{
private static Stopwatch _timer = new Stopwatch();
public static void TestCache()
{
MemoryCache cache = MemoryCache.Default;
int size = (int)1e6;
Start();
for (int idx = 0; idx < size; idx++)
{
cache.Add(idx.ToString(), "Value" + idx.ToString(), GetPolicy(idx, cache));
}
long prevCnt = cache.GetCount();
Stop($"Added {prevCnt} items");
Start();
SignaledChangeMonitor.Signal("NamedData");
Stop($"Removed {prevCnt - cache.GetCount()} entries");
prevCnt = cache.GetCount();
Start();
SignaledChangeMonitor.Signal();
Stop($"Removed {prevCnt - cache.GetCount()} entries");
}
private static CacheItemPolicy GetPolicy(int idx, MemoryCache cache)
{
string name = (idx % 10 == 0) ? "NamedData" : null;
CacheItemPolicy cip = new CacheItemPolicy();
cip.AbsoluteExpiration = System.DateTimeOffset.UtcNow.AddHours(1);
var monitor = new SignaledChangeMonitor(idx.ToString(), name);
cip.ChangeMonitors.Add(monitor);
return cip;
}
private static void Start()
{
_timer.Start();
}
private static void Stop(string msg = null)
{
_timer.Stop();
Console.WriteLine($"{msg} | {_timer.Elapsed.TotalSeconds} sec");
_timer.Reset();
}
}
}
His solution involved using an event to keep track of the ChangeMonitors, but the Dispose method was slow when the number of entries was more than 10k. My guess is that the code SignaledChangeMonitor.Signaled -= OnSignalRaised removes a delegate from the invocation list by doing a linear search, so when you remove a lot of entries it becomes slow. I decided to use a ConcurrentDictionary instead of an event, in the hope that Dispose becomes faster. I ran some basic performance tests and here are the results:
Added 10000 items | 0.027697 sec
Removed 1000 entries | 0.0040669 sec
Removed 9000 entries | 0.0105687 sec
Added 100000 items | 0.5065736 sec
Removed 10000 entries | 0.0338991 sec
Removed 90000 entries | 0.1418357 sec
Added 1000000 items | 6.5994546 sec
Removed 100000 entries | 0.4176233 sec
Removed 900000 entries | 1.2514225 sec
I am not sure if my code does not have some critical flaws. I would like to know if that is the case.
Another approach is to implement a wrapper around MemoryCache that implements regions by composing the key and region name e.g.
public interface ICache
{
...
object Get(string key, string regionName = null);
...
}
public class MyCache : ICache
{
private readonly MemoryCache cache
public MyCache(MemoryCache cache)
{
this.cache = cache;
}
...
public object Get(string key, string regionName = null)
{
var regionKey = RegionKey(key, regionName);
return cache.Get(regionKey);
}
private string RegionKey(string key, string regionName)
{
// NB Implements region as a suffix, for prefix, swap order in the format
return string.IsNullOrEmpty(regionName) ? key : string.Format("{0}{1}{2}", key, "::", regionName);
}
...
}
It's not perfect but it works for most use cases.
I've implemented this and it's available as a NuGet package: Meerkat.Caching

WCF service proxy not setting "FieldSpecified" property

I've got a WCF DataContract that looks like the following:
namespace MyCompanyName.Services.Wcf
{
[DataContract(Namespace = "http://mycompanyname/services/wcf")]
[Serializable]
public class DataContractBase
{
[DataMember]
public DateTime EditDate { get; set; }
// code omitted for brevity...
}
}
When I add a reference to this service in Visual Studio, this proxy code is generated:
/// <remarks/>
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://mycompanyname/services/wcf")]
public partial class DataContractBase : object, System.ComponentModel.INotifyPropertyChanged {
private System.DateTime editDateField;
private bool editDateFieldSpecified;
/// <remarks/>
[System.Xml.Serialization.XmlElementAttribute(Order=0)]
public System.DateTime EditDate {
get {
return this.editDateField;
}
set {
this.editDateField = value;
this.RaisePropertyChanged("EditDate");
}
}
/// <remarks/>
[System.Xml.Serialization.XmlIgnoreAttribute()]
public bool EditDateSpecified {
get {
return this.editDateFieldSpecified;
}
set {
this.editDateFieldSpecified = value;
this.RaisePropertyChanged("EditDateSpecified");
}
}
// code omitted for brevity...
}
As you can see, besides generating a backing property for EditDate, an additional <propertyname>Specified property is generated. All good, except that when I do the following:
DataContractBase myDataContract = new DataContractBase();
myDataContract.EditDate = DateTime.Now;
new MyServiceClient().Update(new UpdateRequest(myDataContract));
the EditDate was not getting picked up by the endpoint of the service (does not appear in the transmitted XML).
I debugged the code and found that, although I was setting EditDate, the EditDateSpecified property wasn't being set to true as I would expect; hence, the XML serializer was ignoring the value of EditDate, even though it's set to a valid value.
As a quick hack I modified the EditDate property to look like the following:
/// <remarks/>
[System.Xml.Serialization.XmlElementAttribute(Order=0)]
public System.DateTime EditDate {
get {
return this.editDateField;
}
set {
this.editDateField = value;
// hackhackhack
if (value != default(System.DateTime))
{
this.EditDateSpecified = true;
}
// end hackhackhack
this.RaisePropertyChanged("EditDate");
}
}
Now the code works as expected, but of course every time I re-generate the proxy, my modifications are lost. I could change the calling code to the following:
DataContractBase myDataContract = new DataContractBase();
myDataContract.EditDate = DateTime.Now;
myDataContract.EditDateSpecified = true;
new MyServiceClient().Update(new UpdateRequest(myDataContract));
but that also seems like a hack-ish waste of time.
So finally, my question: does anyone have a suggestion on how to get past this unintuitive (and IMO broken) behavior of the Visual Studio service proxy generator, or am I simply missing something?
It might be a bit unintuitive (and caught me off guard and reeling, too!) - but it's the only proper way to handle elements that might or might not be specified in your XML schema.
And it also might seem counter-intuitive that you have to set the xyzSpecified flag yourself - but ultimately, this gives you more control, and WCF is all about the Four Tenets of SOA of being very explicit and clear about your intentions.
So basically - that's the way it is, get used to it :-) There's no way "past" this behavior - it's the way the WCF system was designed, and for good reason, too.
What you can always do is handle the PropertyChanged event that this.RaisePropertyChanged("EditDate") raises, and set the EditDateSpecified flag in the handler for that event.
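A minimal sketch of that idea against the generated DataContractBase proxy from the question (untested):

var myDataContract = new DataContractBase();

// Set the Specified flag whenever the corresponding property changes.
myDataContract.PropertyChanged += (sender, e) =>
{
    if (e.PropertyName == "EditDate")
    {
        myDataContract.EditDateSpecified = true;
    }
};

myDataContract.EditDate = DateTime.Now;   // EditDateSpecified is now true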
try this
[DataMember(IsRequired=true)]
public DateTime EditDate { get; set; }
This should omit the EditDateSpecified property since the field is specified as required
Rather than change the setters of the autogenerated code, you can use an extension class to 'autospecify' (bind the change handler event). This could have two implementations -- a "lazy" one (Autospecify) using reflection to look for fieldSpecified based on the property name, rather than listing them all out for each class in some sort of switch statement like Autonotify:
Lazy
public static class PropertySpecifiedExtensions
{
private const string SPECIFIED_SUFFIX = "Specified";
/// <summary>
/// Bind the <see cref="INotifyPropertyChanged.PropertyChanged"/> handler to automatically set any xxxSpecified fields when a property is changed. "Lazy" via reflection.
/// </summary>
/// <param name="entity">the entity to bind the autospecify event to</param>
/// <param name="specifiedSuffix">optionally specify a suffix for the Specified property to set as true on changes</param>
/// <param name="specifiedPrefix">optionally specify a prefix for the Specified property to set as true on changes</param>
public static void Autospecify(this INotifyPropertyChanged entity, string specifiedSuffix = SPECIFIED_SUFFIX, string specifiedPrefix = null)
{
entity.PropertyChanged += (me, e) =>
{
foreach (var pi in me.GetType().GetProperties().Where(o => o.Name == specifiedPrefix + e.PropertyName + specifiedSuffix))
{
pi.SetValue(me, true, BindingFlags.SetField | BindingFlags.SetProperty, null, null, null);
}
};
}
/// <summary>
/// Create a new entity and <see cref="Autospecify"/> its properties when changed
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="specifiedSuffix"></param>
/// <param name="specifiedPrefix"></param>
/// <returns></returns>
public static T Create<T>(string specifiedSuffix = SPECIFIED_SUFFIX, string specifiedPrefix = null) where T : INotifyPropertyChanged, new()
{
var ret = new T();
ret.Autospecify(specifiedSuffix, specifiedPrefix);
return ret;
}
}
This simplifies writing convenience factory methods like:
public partial class MyRandomClass
{
/// <summary>
/// Create a new empty instance and <see cref="PropertySpecifiedExtensions.Autospecify"/> its properties when changed
/// </summary>
/// <returns></returns>
public static MyRandomClass Create()
{
return PropertySpecifiedExtensions.Create<MyRandomClass>();
}
}
A downside (other than reflection, meh) is that you have to use the factory method to instantiate your classes or use .Autospecify before (?) you make any changes to properties with specifiers.
No Reflection
If you don't like reflection, you could define another extension class + interface:
public static class PropertySpecifiedExtensions2
{
/// <summary>
/// Bind the <see cref="INotifyPropertyChanged.PropertyChanged"/> handler to automatically call each class's <see cref="IAutoNotifyPropertyChanged.Autonotify"/> method on the property name.
/// </summary>
/// <param name="entity">the entity to bind the autospecify event to</param>
public static void Autonotify(this IAutoNotifyPropertyChanged entity)
{
entity.PropertyChanged += (me, e) => ((IAutoNotifyPropertyChanged)me).WhenPropertyChanges(e.PropertyName);
}
/// <summary>
/// Create a new entity and <see cref="Autonotify"/> it's properties when changed
/// </summary>
/// <typeparam name="T"></typeparam>
/// <returns></returns>
public static T Create<T>() where T : IAutoNotifyPropertyChanged, new()
{
var ret = new T();
ret.Autonotify();
return ret;
}
}
/// <summary>
/// Used by <see cref="PropertySpecifiedExtensions.Autonotify"/> to standardize implementation behavior
/// </summary>
public interface IAutoNotifyPropertyChanged : INotifyPropertyChanged
{
void WhenPropertyChanges(string propertyName);
}
And then each class itself defines the behavior:
public partial class MyRandomClass: IAutoNotifyPropertyChanged
{
public void WhenPropertyChanges(string propertyName)
{
switch (propertyName)
{
case "field1": this.field1Specified = true; return;
// etc
}
}
}
The downside to this is, of course, magic strings for property names making refactoring difficult, which you could get around with Expression parsing?
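One hedged way around the magic strings (a sketch, not from the original answer) is to resolve the member name from a lambda expression and compare against it, instead of using a switch, since case labels need compile-time constants:

using System;
using System.Linq.Expressions;

public static class PropertyName
{
    // Usage: PropertyName.Of(() => this.field1) returns "field1".
    public static string Of<T>(Expression<Func<T>> property)
    {
        var member = property.Body as MemberExpression;
        if (member == null)
        {
            throw new ArgumentException("Expression must be a member access.", "property");
        }
        return member.Member.Name;
    }
}

WhenPropertyChanges would then use if (propertyName == PropertyName.Of(() => this.field1)) { this.field1Specified = true; } instead of the switch cases, which survives renames at the cost of a little runtime overhead.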
Further information
On the MSDN here
In her answer, Shreesha explains that:
"Specified" fields are only generated on optional parameters that are structs. (int, datetime, decimal etc). All such variables will have additional variable generated with the name Specified.
This is a way of knowing if a parameter is really passed between the client and the server.
To elaborate, an optional integer, if not passed, would still have the default value of 0. How do you differentiate between this and one that was actually passed with a value of 0? The "Specified" field lets you know whether the optional integer was passed or not. If the "Specified" field is false, the value was not passed across. If it is true, the integer was passed.
So essentially, the only way to have these fields set is to set them manually, and if they aren't set to true for a field that has been set, then that field will be left out of the SOAP message of the web-service call.
What I did in the end was build a method that loops through all the members of the object and, if a property has been set and there is a matching property called <name>Specified, sets that to true.
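A sketch of that kind of helper (my own illustration, assuming the generated <property>Specified naming convention; value-type properties are compared against their default value to decide whether they have been set):

using System;

public static class SpecifiedFlagHelper
{
    // For every public FooSpecified bool property, set it to true when the
    // matching Foo property holds a non-default value.
    public static void SetSpecifiedFlags(object proxyObject)
    {
        var type = proxyObject.GetType();
        foreach (var flag in type.GetProperties())
        {
            if (!flag.Name.EndsWith("Specified") || flag.PropertyType != typeof(bool))
                continue;

            var valueProperty = type.GetProperty(
                flag.Name.Substring(0, flag.Name.Length - "Specified".Length));
            if (valueProperty == null)
                continue;

            object value = valueProperty.GetValue(proxyObject, null);
            object defaultValue = valueProperty.PropertyType.IsValueType
                ? Activator.CreateInstance(valueProperty.PropertyType)
                : null;

            if (!Equals(value, defaultValue))
            {
                flag.SetValue(proxyObject, true, null);
            }
        }
    }
}

Calling SpecifiedFlagHelper.SetSpecifiedFlags(myDataContract) just before sending the request then replaces the manual EditDateSpecified = true lines.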
Ian,
Please ignore my previous answers, was explaining how to suck eggs. I've voted to delete them.
Could you tell me which version of Visual Studio you're using, please?
In VS2005 client - in the generated code, I get the <property>Specified flags, but no event raised on change of values. To pass data I have to set the <property>Specified flag.
In Visual Web Developer 2008 Express client - in the generated code, I get no <property>Specified flags, but I do get the event on change of value.
Seems to me that this functionality has evolved and the Web Dev 2008 is closer to what you're after and is more intuitive, in that you don't need to set flags once you've set a value.
Bowthy
Here's a simple project that can modify the setters in generated WCF code for optional properties to automatically set the *Specified flags to true when setting the related value.
https://github.com/b9chris/WcfClean
Obviously there are situations where you want manual control over the *Specified flag so I'm not recommending it to everyone, but in most simple use cases the *Specified flags are just an extra nuisance and automatically setting them saves time, and is often more intuitive.
Note that Mustafa Magdy's comment on another answer here will solve this for you IF you control the Web Service publication point. However, I usually don't control the Web Service publication and am just consuming one, and have to cope with the *Specified flags in some simple software where I'd like this automated. Thus this tool.
Change the proxy class properties to nullable types, for example:
bool? confirmed
DateTime? checkDate
