I have a project in ASP.NET Core. This project has an ICacheService as below:
public interface ICacheService
{
T Get<T>(string key);
T Get<T>(string key, Func<T> getdata);
Task<T> Get<T>(string key, Func<Task<T>> getdata);
void AddOrUpdate(string key, object value);
}
The implementation is simply based on ConcurrentDictionary<string, object>, so it's not that complicated: it just stores and retrieves data from this dictionary. In one of my services I have a method as below:
public async Task<List<LanguageInfoModel>> GetLanguagesAsync(string frontendId, string languageId, string accessId)
{
async Task<List<LanguageInfoModel>> GetLanguageInfoModel()
{
var data = await _commonServiceProxy.GetLanguages(frontendId, languageId, accessId);
return data;
}
_scheduler.ScheduleAsync($"{CacheKeys.Jobs.LanguagesJob}_{frontendId}_{languageId}_{accessId}", async () =>
{
_cacheService.AddOrUpdate($"{CacheKeys.Languages}_{frontendId}_{languageId}_{accessId}", await GetLanguageInfoModel());
return JobStatus.Success;
}, TimeSpan.FromMinutes(5.0));
return await _cacheService.Get($"{CacheKeys.Languages}_{frontendId}_{languageId}_{accessId}", async () => await GetLanguageInfoModel());
}
The problem is that I use three params in this method as a cache key. This works fine, but the number of combinations of the three params is pretty high, so there will be many duplicated objects in the cache. I was thinking of creating a cache without duplication, like below:
To have a cache with a list as a key, where I can store more than one key for one object. When I get new elements, I check each of them against the cache: if it is already in the cache, I only add a key to the key list; otherwise I insert a new element into the cache. The problem here is that testing whether an object is already in the cache is expensive. It would consume a lot of resources and would need some serialization into a specific form to make the comparison possible, which again makes the comparison expensive.
The cache might look something like this: CustomDictionary<List<string>, object>
Does anybody know a good approach to solving this issue so that objects are not duplicated in the cache?
EDIT 1:
My main concern is when I retrieve List<MyModel> from my web services, because 80% of the objects might contain the same data, which drastically increases the memory footprint. But this would be relevant for simple cases as well.
Let's suppose I have something like this:
MyClass o1 = new MyClass();
_cache.Set("key1", o1);
_cache.Set("key2", o1);
In this case, when trying to add the same object twice, I would like not to duplicate it but to have key2 somehow point to the same object as key1. If this is achieved, it will be a problem to invalidate them, but I expect to have something like this:
_cache.Invalidate("key2");
This would check whether another key points to the same object. If so, it would only remove the key; otherwise it would destroy the object itself.
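That key2-points-to-the-same-object behavior, together with the invalidation rule, can be sketched with reference counting by object identity (an illustrative class, not a production cache):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Sketch: many keys can point at one stored object; Invalidate removes
// the key and releases the object only when no other key references it.
public class SharedObjectCache
{
    private sealed class RefComparer : IEqualityComparer<object>
    {
        bool IEqualityComparer<object>.Equals(object a, object b) => ReferenceEquals(a, b);
        int IEqualityComparer<object>.GetHashCode(object o) => RuntimeHelpers.GetHashCode(o);
    }

    private readonly Dictionary<string, object> _byKey = new Dictionary<string, object>();
    // Reference counts, keyed by object identity rather than value equality.
    private readonly Dictionary<object, int> _refCounts =
        new Dictionary<object, int>(new RefComparer());

    public void Set(string key, object value)
    {
        lock (_byKey)
        {
            Invalidate(key); // drop any previous mapping for this key first
            _byKey[key] = value;
            _refCounts.TryGetValue(value, out var n);
            _refCounts[value] = n + 1;
        }
    }

    public object Get(string key)
    {
        lock (_byKey)
            return _byKey.TryGetValue(key, out var v) ? v : null;
    }

    public void Invalidate(string key)
    {
        lock (_byKey)
        {
            if (!_byKey.TryGetValue(key, out var v)) return;
            _byKey.Remove(key);
            if (--_refCounts[v] == 0)
                _refCounts.Remove(v); // last reference: object leaves the cache
        }
    }
}
```

With this, `cache.Set("key1", o1); cache.Set("key2", o1); cache.Invalidate("key2");` removes only the key, and o1 stays reachable via key1.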
Maybe we could reformulate this problem into two separate issues:
executing the call for each combination, and
storing the identical result n times, wasting a lot of memory.
For 1, I don't have any idea how to prevent it, as we do not know prior to execution whether we will fetch a duplicate in this setup. We would need more information based on when these values vary, which may or may not be available.
For 2, one solution would be to override GetHashCode (and Equals) so they are based on the actual returned values. A good solution would be generic and walk through the object tree (which can be expensive). I would like to know if there are any pre-made solutions for this.
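For 2, a hand-written sketch of value-based equality for a flat model might look like this (generic object-tree walking would need reflection or a library; the property names here simply mirror the question's model):

```csharp
public class LanguageInfoModel
{
    public string FrontendId { get; set; }
    public string LanguageId { get; set; }
    public string AccessId { get; set; }

    // Value-based equality: instances with identical property values hash
    // and compare as equal, so a cache can recognize them as duplicates.
    public override bool Equals(object obj) =>
        obj is LanguageInfoModel o &&
        (FrontendId, LanguageId, AccessId).Equals((o.FrontendId, o.LanguageId, o.AccessId));

    public override int GetHashCode() =>
        (FrontendId, LanguageId, AccessId).GetHashCode();
}
```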
This answer is specifically for returning List<TItem>s, rather than just individual TItems, and it avoids duplication of any TItem as well as of any list. It uses arrays, because you're trying to save memory and arrays use less of it than a List.
Note that for this (and really any) solution to work, you MUST override Equals and GetHashCode on TItem so that it knows what a duplicate item is. (Unless the data provider returns the same object instance each time, which is unlikely.) If you don't have control of TItem but can determine yourself whether two TItems are equal, you can use an IEqualityComparer to do this, though the solution below would need to be modified slightly for that.
View the solution with a basic test at:
https://dotnetfiddle.net/pKHLQP
public class DuplicateFreeCache<TKey, TItem> where TItem : class
{
private ConcurrentDictionary<TKey, int> Primary { get; } = new ConcurrentDictionary<TKey, int>();
private List<TItem> ItemList { get; } = new List<TItem>();
private List<TItem[]> ListList { get; } = new List<TItem[]>();
private Dictionary<TItem, int> ItemDict { get; } = new Dictionary<TItem, int>();
private Dictionary<IntArray, int> ListDict { get; } = new Dictionary<IntArray, int>();
public IReadOnlyList<TItem> GetOrAdd(TKey key, Func<TKey, IEnumerable<TItem>> getFunc)
{
int index = Primary.GetOrAdd(key, k =>
{
var rawList = getFunc(k);
lock (Primary)
{
int[] itemListByIndex = rawList.Select(item =>
{
if (!ItemDict.TryGetValue(item, out int itemIndex))
{
itemIndex = ItemList.Count;
ItemList.Add(item);
ItemDict[item] = itemIndex;
}
return itemIndex;
}).ToArray();
var intArray = new IntArray(itemListByIndex);
if (!ListDict.TryGetValue(intArray, out int listIndex))
{
lock (ListList)
{
listIndex = ListList.Count;
ListList.Add(itemListByIndex.Select(ii => ItemList[ii]).ToArray());
}
ListDict[intArray] = listIndex;
}
return listIndex;
}
});
lock (ListList)
{
return ListList[index];
}
}
public override string ToString()
{
StringBuilder sb = new StringBuilder();
sb.AppendLine($"A cache with:");
sb.AppendLine($"{ItemList.Count} unique Items;");
sb.AppendLine($"{ListList.Count} unique lists of Items;");
sb.AppendLine($"{Primary.Count} primary dictionary items;");
sb.AppendLine($"{ItemDict.Count} item dictionary items;");
sb.AppendLine($"{ListDict.Count} list dictionary items;");
return sb.ToString();
}
//We have this to make Dictionary lookups on int[] find identical arrays.
//One could also just make an IEqualityComparer, but I felt like doing it this way.
public class IntArray
{
private readonly int _hashCode;
public int[] Array { get; }
public IntArray(int[] arr)
{
Array = arr;
unchecked
{
_hashCode = 0;
for (int i = 0; i < arr.Length; i++)
_hashCode = (_hashCode * 397) ^ arr[i];
}
}
protected bool Equals(IntArray other)
{
return Array.SequenceEqual(other.Array);
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj)) return false;
if (ReferenceEquals(this, obj)) return true;
if (obj.GetType() != this.GetType()) return false;
return Equals((IntArray)obj);
}
public override int GetHashCode() => _hashCode;
}
}
It occurred to me that a ReaderWriterLockSlim would be better than the lock(ListList), if the lock is causing performance to lag, but it's very slightly more complicated.
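As an illustration, a ReaderWriterLockSlim-guarded list might look like this (a standalone sketch, not a drop-in patch to the class above):

```csharp
using System.Collections.Generic;
using System.Threading;

// Sketch: a list guarded by ReaderWriterLockSlim, so concurrent readers
// don't serialize on a single monitor the way lock(ListList) does.
public class GuardedList<T>
{
    private readonly List<T[]> _lists = new List<T[]>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public T[] Get(int index)
    {
        _lock.EnterReadLock();
        try { return _lists[index]; }
        finally { _lock.ExitReadLock(); }
    }

    public int Add(T[] items)
    {
        _lock.EnterWriteLock();
        try
        {
            _lists.Add(items);
            return _lists.Count - 1;
        }
        finally { _lock.ExitWriteLock(); }
    }
}
```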
Similar to @MineR's answer, this solution performs a 'double caching' operation: it caches the keyed lists (lookups) as well as the individual objects, performing automatic deduplication.
It is a fairly simple solution using two ConcurrentDictionaries - one acting as a HashSet and one as a keyed lookup. This allows most of the threading concerns to be handled by the framework.
You can also pass in and share the hash set between multiple CachedLookups, allowing lookups with different keys.
Note that object equality or an IEqualityComparer are required to make any such solution function.
Class:
public class CachedLookup<T, TKey>
{
private readonly ConcurrentDictionary<T, T> _hashSet;
private readonly ConcurrentDictionary<TKey, List<T>> _lookup = new ConcurrentDictionary<TKey, List<T>>();
public CachedLookup(ConcurrentDictionary<T, T> hashSet)
{
_hashSet = hashSet;
}
public CachedLookup(IEqualityComparer<T> equalityComparer = default)
{
_hashSet = equalityComparer is null ? new ConcurrentDictionary<T, T>() : new ConcurrentDictionary<T, T>(equalityComparer);
}
public List<T> Get(TKey key) => _lookup.ContainsKey(key) ? _lookup[key] : null;
public List<T> Get(TKey key, Func<TKey, List<T>> getData)
{
if (_lookup.ContainsKey(key))
return _lookup[key];
var result = DedupeAndCache(getData(key));
_lookup.TryAdd(key, result);
return result;
}
public async ValueTask<List<T>> GetAsync(TKey key, Func<TKey, Task<List<T>>> getData)
{
if (_lookup.ContainsKey(key))
return _lookup[key];
var result = DedupeAndCache(await getData(key));
_lookup.TryAdd(key, result);
return result;
}
public void Add(T value) => _hashSet.TryAdd(value, value);
public List<T> AddOrUpdate(TKey key, List<T> data)
{
var deduped = DedupeAndCache(data);
_lookup.AddOrUpdate(key, deduped, (k,l)=>deduped);
return deduped;
}
private List<T> DedupeAndCache(IEnumerable<T> input) => input.Select(v => _hashSet.GetOrAdd(v,v)).ToList();
}
Example Usage:
public class ExampleUsage
{
private readonly CachedLookup<LanguageInfoModel, (string frontendId, string languageId, string accessId)> _lookup
= new CachedLookup<LanguageInfoModel, (string frontendId, string languageId, string accessId)>(new LanguageInfoModelComparer());
public ValueTask<List<LanguageInfoModel>> GetLanguagesAsync(string frontendId, string languageId, string accessId)
{
return _lookup.GetAsync((frontendId, languageId, accessId), GetLanguagesFromDB);
}
private Task<List<LanguageInfoModel>> GetLanguagesFromDB((string frontendId, string languageId, string accessId) key) => throw new NotImplementedException();
}
public class LanguageInfoModel
{
public string FrontendId { get; set; }
public string LanguageId { get; set; }
public string AccessId { get; set; }
public string SomeOtherUniqueValue { get; set; }
}
public class LanguageInfoModelComparer : IEqualityComparer<LanguageInfoModel>
{
public bool Equals(LanguageInfoModel x, LanguageInfoModel y)
{
return (x?.FrontendId, x?.AccessId, x?.LanguageId, x?.SomeOtherUniqueValue)
.Equals((y?.FrontendId, y?.AccessId, y?.LanguageId, y?.SomeOtherUniqueValue));
}
public int GetHashCode(LanguageInfoModel obj) =>
(obj.FrontendId, obj.LanguageId, obj.AccessId, obj.SomeOtherUniqueValue).GetHashCode();
}
Notes:
The CachedLookup class is generic on both the value and key. The example use of ValueTuple makes it easy to have compound keys. I have also used ValueTuples to simplify the equality comparisons.
This usage of ValueTask fits nicely with its intended purpose, returning the cached list synchronously.
If you have access to the lower level data access layer, one optimization would be to move the deduplication to happen before the objects are instantiated (based on property value equality). This would reduce the allocations and load on the GC.
If you have control over your complete solution, then you can do something like this.
First, identify every type of object that is capable of being stored in the cache.
All such objects implement a common interface:
public interface ICacheable
{
string ObjectId(); // Computes an identity for each object. A hash code alone is usually not enough; combine it with other identifying values.
}
Now when you store an object in the cache, you store it in two places: one cache maps ObjectIds to keys, and another maps ObjectIds to the objects themselves.
The overall idea is that when you get an object, you search the first cache and see whether the key you want is already there against its ObjectId. If yes, no further action is needed; otherwise you create a new ObjectId-to-key entry in the first cache.
If the object itself is not present, you create entries in both caches.
Note: you have to watch out for performance here, because your keys form a kind of list, which makes searching expensive.
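As an illustration of computing such an identity (a sketch; deriving the id from the object's content is an assumption, and the model class name is made up):

```csharp
public interface ICacheable
{
    string ObjectId();
}

// Sketch: identity is derived from the object's content, so two instances
// with the same values share one ObjectId (and thus one cache slot).
public class LanguageInfo : ICacheable
{
    public string FrontendId { get; set; }
    public string LanguageId { get; set; }
    public string AccessId { get; set; }

    public string ObjectId() => $"{FrontendId}|{LanguageId}|{AccessId}";
}
```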
It sounds to me as though you need to implement some sort of index. Assuming that your model is fairly large, which is why you want to save memory, you could do this with two concurrent dictionaries.
The first would be a ConcurrentDictionary<string, int> (or whatever unique id applies to your model object) and would contain your key values. Each key is obviously different, as per all your combinations, but you only duplicate the int unique id for each of your objects, not the entire object.
The second dictionary would be a ConcurrentDictionary<int, object> or ConcurrentDictionary<int, T> and would contain your unique large objects indexed via their unique key.
When building the cache you would need to populate both dictionaries, the exact method would depend upon how you are doing it at the moment.
To retrieve an object you would build the key as you do at the moment, retrieve the unique id from the first dictionary, and then use that to locate the actual object in the second dictionary.
It is also possible to invalidate one key without invalidating the main object while another key is still using it, although it does require iterating over the index dictionary to check whether any other key points to the same object.
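Putting the index approach together, including the invalidation that scans for other keys, might look like this (a sketch with illustrative names):

```csharp
using System.Collections.Concurrent;

public class IndexedCache<T> where T : class
{
    // Key combinations map to a small int id...
    private readonly ConcurrentDictionary<string, int> _index =
        new ConcurrentDictionary<string, int>();
    // ...and each large object is stored once, under its id.
    private readonly ConcurrentDictionary<int, T> _objects =
        new ConcurrentDictionary<int, T>();

    public void Set(string key, int id, T value)
    {
        _objects.TryAdd(id, value);
        _index[key] = id;
    }

    public T Get(string key) =>
        _index.TryGetValue(key, out var id) && _objects.TryGetValue(id, out var v)
            ? v : null;

    // Remove one key; drop the object only if no remaining key references its id.
    public void Invalidate(string key)
    {
        if (!_index.TryRemove(key, out var id)) return;
        if (!_index.Values.Contains(id)) // requires scanning the index
            _objects.TryRemove(id, out _);
    }
}
```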
I think this is not a typical caching concern, where one key maps to one and only one piece of data; your case doesn't fit that model. You are effectively building a local in-memory data repository that behaves like cached data.
You are creating mappings between keys and objects loaded from a remote service. One key can map to many objects, and one object can be mapped by many keys, so the relationship is n <======> n.
I have created a sample model as follows:
Key, KeyMyModel and MyModel are classes for the caching handler.
RemoteModel is the class that you get from the remote service.
With these models you are able to meet the requirements. This utilizes the entity Id to identify an object, so no hashing is needed to detect duplicates. This is very basic; I have implemented the Set method, and invalidating a key is very similar. You must also write code to ensure thread safety.
public class MyModel
{
public RemoteModel RemoteModel { get; set; }
public List<KeyMyModel> KeyMyModels { get; set; } = new List<KeyMyModel>();
}
public class RemoteModel
{
public string Id { get; set; } // Identity property this get from remote service
public string DummyProperty { get; set; } // Some properties returned by remote service
}
public class KeyMyModel
{
public string Key { get; set; }
public string MyModelId { get; set; }
}
public class Key
{
public string KeyStr { get; set; }
public List<KeyMyModel> KeyMyModels { get; set; } = new List<KeyMyModel>();
}
public interface ICacheService
{
List<RemoteModel> Get(string key);
List<RemoteModel> Get(string key, Func<List<RemoteModel>> getdata);
Task<List<RemoteModel>> Get(string key, Func<Task<List<RemoteModel>>> getdata);
void AddOrUpdate(string key, object value);
}
public class CacheService : ICacheService
{
public List<MyModel> MyModels { get; private set; }
public List<Key> Keys { get; private set; }
public List<KeyMyModel> KeyMyModels { get; private set; }
public CacheService()
{
MyModels = new List<MyModel>();
Keys = new List<Key>();
KeyMyModels = new List<KeyMyModel>();
}
public List<RemoteModel> Get(string key)
{
return MyModels.Where(s => s.KeyMyModels.Any(t => t.Key == key)).Select(s => s.RemoteModel).ToList();
}
public List<RemoteModel> Get(string key, Func<List<RemoteModel>> getdata)
{
var remoteData = getdata();
Set(key, remoteData);
return MyModels.Where(s => s.KeyMyModels.Any(t => t.Key == key)).Select(t => t.RemoteModel).ToList();
}
public Task<List<RemoteModel>> Get(string key, Func<Task<List<RemoteModel>>> getdata)
{
throw new NotImplementedException();
}
public void AddOrUpdate(string key, object value)
{
throw new NotImplementedException();
}
public void Invalidate(string key)
{
}
public void Set(string key, List<RemoteModel> data)
{
var Key = Keys.FirstOrDefault(s => s.KeyStr == key) ?? new Key()
{
KeyStr = key
};
foreach (var remoteModel in data)
{
var exist = MyModels.FirstOrDefault(s => s.RemoteModel.Id == remoteModel.Id);
if (exist == null)
{
// add data to the cache
var myModel = new MyModel()
{
RemoteModel = remoteModel
};
var keyMyModel = new KeyMyModel()
{
Key = key,
MyModelId = remoteModel.Id
};
myModel.KeyMyModels.Add(keyMyModel);
Key.KeyMyModels.Add(keyMyModel);
KeyMyModels.Add(keyMyModel);
MyModels.Add(myModel);
if (!Keys.Contains(Key))
Keys.Add(Key);
}
else
{
exist.RemoteModel = remoteModel;
var existKeyMyModel =
KeyMyModels.FirstOrDefault(s => s.Key == key && s.MyModelId == exist.RemoteModel.Id);
if (existKeyMyModel == null)
{
existKeyMyModel = new KeyMyModel()
{
Key = key,
MyModelId = exist.RemoteModel.Id
};
Key.KeyMyModels.Add(existKeyMyModel);
exist.KeyMyModels.Add(existKeyMyModel);
KeyMyModels.Add(existKeyMyModel);
}
}
}
// Remove MyModels if need
var remoteIds = data.Select(s => s.Id);
var currentIds = KeyMyModels.Where(s => s.Key == key).Select(s => s.MyModelId);
var removingIds = currentIds.Except(remoteIds);
var removingKeyMyModels = KeyMyModels.Where(s => s.Key == key && removingIds.Any(i => i == s.MyModelId)).ToList();
removingKeyMyModels.ForEach(s =>
{
KeyMyModels.Remove(s);
Key.KeyMyModels.Remove(s);
});
}
}
class CacheConsumer
{
private readonly CacheService _cacheService = new CacheService();
public List<RemoteModel> GetMyModels(string frontendId, string languageId, string accessId)
{
var key = $"{frontendId}_{languageId}_{accessId}";
return _cacheService.Get(key, () =>
{
// call to remote service here
return new List<RemoteModel>();
});
}
}
I'm writing a validation engine.
I'm given an object payload containing a list of ~40 different properties, and every property undergoes different validation.
The validations include checking whether the field is a string, validating whether its length exceeds the permissible limit set by the database, and there are conditions to check for null values and empty fields as well.
So, I thought of picking the strategy pattern.
Code:
interface IValidationEngine
{
Error Validate(string propName, dynamic propValue);
}
public class StringLengthValidator : IValidationEngine
{
static Dictionary<string, int> sLengths = new Dictionary<string, int>();
public StringLengthValidator()
{
if (sLengths.Count == 0)
{
sLengths = new Dictionary<string, int>(){....};
}
}
public Error Validate(string name, dynamic value)
{
var err = default(Error);
//logic here
return err;
}
}
public class MandatoryValidator : IValidationEngine
{
public Error Validate(string name, dynamic value)
{
var err = default(Error);
//logic here
return err;
}
}
public class MandatoryStringLengthValidator : IValidationEngine
{
public Error Validate(string name, dynamic value)
{
var e = default(Error);
if (value == null || (value.GetType() == typeof(string) && string.IsNullOrWhiteSpace(value)))
{
e = new Error()
{
//.. err info
};
}
else
{
StringLengthValidator sl = new StringLengthValidator();
e = sl.Validate(name, value);
}
return e;
}
}
There is another class which I named ValidationRouter. Its job is to route to the specific validator (think of this as the Strategy in the pattern).
I wrote the initial router to hold a dictionary mapping keys to their respective validator objects, so that whatever comes in is matched by key and the Validate() method is called on the dictionary value. But that meant constructing every validator up front, so I think the class would take a lot of memory, as all the object creation happens in the constructor.
So I wrapped the object creation in Lazy<T>, and it ended up like this.
Here is the code for router
public class ValidationRouter
{
public Dictionary<string, Lazy<IValidationEngine>> strategies = new Dictionary<string, Lazy<IValidationEngine>>();
public ValidationRouter()
{
var mandatoryStringLength = new Lazy<IValidationEngine>(() => new MandatoryStringLengthValidator());
var mandatory = new Lazy<IValidationEngine>(() => new MandatoryValidator());
var stringLength = new Lazy<IValidationEngine>(() => new StringLengthValidator());
strategies.Add("position", mandatoryStringLength);
strategies.Add("pcode", stringLength);
strategies.Add("username", mandatoryStringLength);
strategies.Add("description", stringLength);
strategies.Add("sourcename", stringLength);
//OMG: 35 more fields to be added to route to specific routers
}
public Error Execute(string name, dynamic value)
{
//Find appropriate field and route to its strategy
var lowered = name.ToLower();
if (!strategies.ContainsKey(lowered)) return default(Error);
return strategies[lowered].Value.Validate(name, value);
}
}
As you can see I need to add 35 more keys to dictionaries with appropriate strategies in the constructor.
Is this approach correct, or is there a better way of routing to specific validator algorithms?
I thought object creation could be done with a factory pattern using Activator.CreateInstance, but I'm not sure how to achieve that, as each of my properties has a different strategy.
Any ideas would be appreciated.
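One possible sketch of the Activator.CreateInstance idea: a hypothetical [Validates] attribute (not from the question) lets each validator declare the fields it handles, and the router discovers validators by reflection, keeping the Lazy instantiation. object is used in place of dynamic for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class Error { public string Message { get; set; } }

public interface IValidationEngine
{
    Error Validate(string name, object value);
}

// Hypothetical attribute: each validator declares the field names it handles,
// so the router needs no hand-written 40-entry registration list.
[AttributeUsage(AttributeTargets.Class)]
public class ValidatesAttribute : Attribute
{
    public string[] Fields { get; }
    public ValidatesAttribute(params string[] fields) => Fields = fields;
}

[Validates("position", "username")]
public class MandatoryValidator : IValidationEngine
{
    public Error Validate(string name, object value) =>
        value == null ? new Error { Message = name + " is required" } : null;
}

// Router builds its strategy map once by scanning the assembly for
// [Validates] attributes; instances are still created lazily.
public class ReflectionRouter
{
    private readonly Dictionary<string, Lazy<IValidationEngine>> _strategies;

    public ReflectionRouter()
    {
        _strategies = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => typeof(IValidationEngine).IsAssignableFrom(t) && !t.IsAbstract)
            .SelectMany(t => (t.GetCustomAttribute<ValidatesAttribute>()?.Fields
                              ?? Array.Empty<string>())
                .Select(f => (Field: f, Type: t)))
            .ToDictionary(
                x => x.Field,
                x => new Lazy<IValidationEngine>(
                    () => (IValidationEngine)Activator.CreateInstance(x.Type)));
    }

    public Error Execute(string name, object value)
    {
        var lowered = name.ToLower();
        return _strategies.TryGetValue(lowered, out var strategy)
            ? strategy.Value.Validate(name, value)
            : null;
    }
}
```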
Say I have the following (simplified):
public class Item
{
public String Name { get; set; }
public String Type { get; set; }
}
public class Armor : Item
{
public int AC { get; set; }
public Armor () { Type = "Armor"; }
}
public class Weapon : Item
{
public int Damage { get; set; }
public Weapon () { Type = "Weapon"; }
}
public class Actor
{
...
}
public class HasItem : Relationship<ItemProps>, IRelationshipAllowingSourceNode<Actor>, IRelationshipAllowingTargetNode<Item>
{
public readonly string TypeKey = "HasItem";
public HasItem ( NodeReference targetItem, int count = 1 )
: base(targetItem, new ItemProps { Count = count })
{
}
public override string RelationshipTypeKey
{
get { return TypeKey; }
}
}
With this setup I can easily create a heterogeneous list of Weapons, Armor, etc. related to the Actor. But I can't seem to figure out how to get them out. I have this method (again simplified) to get a list of all the related items, but it gets them all out as Items; I can't figure out how to get them as their actual types. I can use the Type field to determine the type, but there doesn't seem to be any way of dynamically building the return:
public IEnumerable<Item> Items
{
get
{
return
GameNode
.GraphClient
.Cypher
.Start(new { a = Node.ByIndexLookup("node_auto_index", "Name", Name) })
.Match("(a)-[r:HasItem]-(i)")
.Return<Item>("i") // Need something here to return Armor, Weapon, etc as needed based on the Type property
.Results;
}
}
I found a bad workaround where I return the Type and NodeID and run the list through a switch statement that does a .Get with the NodeID and casts it to the right type, but this is inflexible and inefficient. I could run one query for each derived class and concatenate the results, but the thought of that makes my skin crawl.
This seems like it would be a common problem, but I couldn't find anything online. Any ideas?
The problem is how the data is stored in Neo4j and serialized back via Json.NET.
Let's say I have a sword:
var sword = new Weapon{
Name = "Sword 12.32.rc1",
Type = "Sword",
Damage = 12
};
If I serialize this to neo4j: graphClient.Create(sword); all is fine, internally we now have a Json representation which will look something like this:
{ "Name" : "Sword 12.32.rc1", "Type": "Sword", "Damage": "12"}
There is no information here that the computer can use to derive that this is in fact of type 'Sword', so if you bring back a collection of type Item it can only bring back the two properties Name and Type.
So, there are two solutions that I can think of, neither one of which is great, but both do get you with a one query solution. The first (most sucky) is to create a 'SuperItem' which has all the properties from the derived classes together, so:
public class SuperItem { Name, Type, Damage, AC } //ETC
But that is horrible, and kind of makes having a hierarchy pointless. The second option, whilst not great, is better: use a Dictionary to get the data:
var query = GraphClient
.Cypher
.Start(new {n = actorRef})
.Match("n-[:HasItem]->item")
.Return(
item => new
{
Item = item.CollectAs<Dictionary<string,string>>()
});
var results = query.Results.ToList();
Which if you run:
foreach (var data in results.SelectMany(item => item.Item, (item, node) => new {item, node}).SelectMany(@t => @t.node.Data))
Console.WriteLine("Key: {0}, Value: {1}", data.Key, data.Value);
Would print out:
Key: Type, Value: Sword
Key: Damage, Value: 12
Key: Name, Value: Sword 12.32.rc1
So, now we have a dictionary of the properties, we can create an extension class to parse it:
public static class DictionaryExtensions
{
public static Item GetItem(this Dictionary<string, string> dictionary)
{
var type = dictionary.GetTypeOfItem().ToLowerInvariant();
var json = dictionary.ToJson();
switch (type)
{
case "sword":
return GetItem<Weapon>(json);
case "armor":
return GetItem<Armor>(json);
default:
throw new ArgumentOutOfRangeException("dictionary", type, string.Format("Unknown type: {0}", type));
}
}
private static string GetTypeOfItem(this Dictionary<string, string> dictionary)
{
if(!dictionary.ContainsKey("Type"))
throw new ArgumentException("Not valid type!");
return dictionary["Type"];
}
private static string ToJson(this Dictionary<string, string> dictionary)
{
// Join with commas so we don't emit a trailing comma before the closing brace.
var pairs = dictionary.OrderBy(k => k.Key)
.Select(p => string.Format("\"{0}\":\"{1}\"", p.Key, p.Value));
return "{" + string.Join(",", pairs) + "}";
}
private static Item GetItem<TItem>(string json) where TItem: Item
{
return JsonConvert.DeserializeObject<TItem>(json);
}
}
and use something like:
var items = new List<Item>();
foreach (var data in results)
foreach (Node<Dictionary<string, string>> item in data.Item)
items.Add(item.Data.GetItem());
Where items will be the types you're after.
I know this isn't great, but it does get you to one query.
I have a class that inherits from the abstract Configuration class, and then each class implements the reader for INI files, XML, conf, or proprietary formats. I am having a problem in creating the objects to be tested using FakeItEasy.
The object I am trying to test uses the configuration object via Dependency Injection, so it can simply read configuration settings by calling the ReadString(), ReadInteger() etc... functions, and then the text for the location (Section, Key for instance, with an INI) may be retrieved from the appropriate section in whatever format of configuration file (INI, XML, conf, etc...).
Sample code being used:
public class TestFile
{
private readonly ConfigurationSettings ConfigurationController_ ;
...
public TestFile(ConfigurationSettings ConfigObject)
{
this.ConfigurationController_ = ConfigObject;
}
public TestFile(XMLFile ConfigObject)
{
this.ConfigurationController_ = ConfigObject;
}
public TestFile(INIFile ConfigObject)
{
this.ConfigurationController_ = ConfigObject;
}
...
private List<string> GetLoginSequence()
{
List<string> ReturnText = new List<string>();
string SignOnFirst = ConfigurationController_.ReadString("SignOn", "SignOnKeyFirst", "Y");
string SendEnterClear = ConfigurationController_.ReadString("SignOn", "SendEnterClear", "N");
if (SendEnterClear.Equals("Y", StringComparison.CurrentCultureIgnoreCase))
{
ReturnText.Add("enter");
ReturnText.Add("clear");
}
if (SignOnFirst.Equals("N", StringComparison.CurrentCultureIgnoreCase))
{
ReturnText.AddRange(MacroUserPassword("[$PASS]"));
ReturnText.Add("sign_on");
}
else
{
ReturnText.Add("sign_on");
ReturnText.AddRange(MacroUserId("[$USER]"));
ReturnText.AddRange(MacroUserPassword("[$PASS]"));
}
return ReturnText;
}
A Simple Test example:
[TestMethod]
public void TestSignOnSequence()
{
IniReader FakeINI = A.Fake<IniReader>();
//Sample Reads:
//Config.ReadString("SignOn", "SignOnKeyFirst", "Y");
//Config.ReadString("SignOn", "SendEnterClear", "N"); // Section, Key, Default
A.CallTo(() => FakeINI.ReadString(A<string>.That.Matches(s => s == "SignOn"), A<string>.That.Matches(s => s == "SendEnterClear"))).Returns("N");
A.CallTo(() => FakeINI.ReadString(A<string>.That.Matches(s => s == "SignOn"), A<string>.That.Matches(s => s == "SignOnKeyFirst"))).Returns("N");
A.CallTo(FakeINI).Where( x => x.Method.Name == "ReadInteger").WithReturnType<int>().Returns(1000);
TestFile TestFileObject = new TestFile(FakeINI);
List<string> ReturnedKeys = TestFileObject.GetLoginSequence();
Assert.AreEqual(2, ReturnedKeys.Count, "Ensure all keystrokes are returned");
}
This compiles fine, but when I execute the code, I get the following Exception:
Test threw Exception:
FakeItEasy.Configuration.FakeConfigurationException:
The current proxy generator can not intercept the specified method for the following reason:
- Non virtual methods can not be intercepted.
If I create the fake as follows instead, it works without an exception, but I cannot get different values for the various calls to the same function.
A.CallTo(FakeINI).Where( x => x.Method.Name == "ReadString").WithReturnType<string>().Returns("N");
The above method does not allow me to control the return value for the different calls that the function makes to the INI.
How can I combine the two approaches: the Where filter and the test of the parameters?
Additional definitions as requested:
public abstract class ConfigurationSettings
{
...
abstract public int ReadInteger(string Section, string Key, int Default);
abstract public string ReadString(string Section, string Key, string Default);
public int ReadInteger(string Section, string Key)
{ return ReadInteger(Section, Key, 0); }
public int ReadInteger(string Key, int Default)
{ return ReadInteger("", Key, Default); }
public int ReadInteger(string Key)
{ return ReadInteger(Key, 0); }
public string ReadString(string Section, string Key)
{ return ReadString(Section, Key, null); }
}
public class IniReader : ConfigurationSettings
{
...
public IniReader()
{
}
public IniReader(string PathAndFile)
{
this.PathAndFileName = PathAndFile;
}
public override int ReadInteger(string Section, string Key, int Default)
{
return GetPrivateProfileInt(Section, Key, Default, PathAndFileName);
}
public override string ReadString(string Section, string Key, string Default)
{
StringBuilder WorkingString = new StringBuilder(MAX_ENTRY);
int Return = GetPrivateProfileString(Section, Key, Default, WorkingString, MAX_ENTRY, PathAndFileName);
return WorkingString.ToString();
}
}
You're getting the
The current proxy generator can not intercept the specified method for the following reason:
- Non virtual methods can not be intercepted.
error because you're trying to fake the 2-parameter version of ReadString. Only virtual members, abstract members, or interface members can be faked. Since your two-parameter ReadString is none of these, it can't be faked. I think you should either make the 2-parameter ReadString virtual or fake the 3-parameter ReadString.
Your example pushes me towards faking the 3-parameter ReadString, especially since GetLoginSequence uses that one. Then I think you could just constrain using expressions (rather than method name strings) and it would all work out.
I made a little test with bits of your code (mostly from before your update) and had success with faking the 3-parameter ReadString:
[Test]
public void BlairTest()
{
IniReader FakeINI = A.Fake<IniReader>();
A.CallTo(() => FakeINI.ReadString("SignOn", "SendEnterClear", A<string>._)).Returns("N");
A.CallTo(() => FakeINI.ReadString("SignOn", "SignOnKeyFirst", A<string>._)).Returns("N");
// Personally, I'd use the above syntax for this one too, but I didn't
// want to muck too much.
A.CallTo(FakeINI).Where(x => x.Method.Name == "ReadInteger").WithReturnType<int>().Returns(1000);
TestFile TestFileObject = new TestFile(FakeINI);
List<string> ReturnedKeys = TestFileObject.GetLoginSequence();
Assert.AreEqual(2, ReturnedKeys.Count, "Ensure all keystrokes are returned");
}
I would like some way to hard code information in C# as follows:
1423, General
5298, Chiro
2093, Physio
9685, Dental
3029, Optics
I would like to then refer to this data as follows:
"The description for category 1423 is " & MyData.GetDescription[1423]
"The id number for General is " & MyData.GetIdNumber("General")
What would be the best way to do this in C#?
Well you could use Tuple<int, string> - but I'd suggest creating a class to store the two values:
public sealed class Category
{
    private readonly int id;
    public int Id { get { return id; } }

    private readonly string description;
    public string Description { get { return description; } }

    public Category(int id, string description)
    {
        this.id = id;
        this.description = description;
    }

    // Possibly override Equals etc
}
Then for lookup purposes, you could either have a Dictionary<string, Category> for description lookups and a Dictionary<int, Category> for ID lookups - or if you were confident that the number of categories would stay small, you could just use a List<Category>.
The benefits of having a named type for this over using just a Tuple or simple Dictionary<string, int> and Dictionary<int, string> are:
You have a concrete type you can pass around, use in your data model etc
You won't end up confusing a Category with any other data type which is logically just an int and a string
Your code will be clearer to read when it uses Id and Description properties than Item1 and Item2 from Tuple<,>.
If you need to add another property later, the changes are minimal.
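To make the two-dictionary lookup concrete, here is a minimal sketch; the `CategoryRegistry` name and its methods are illustrative, not part of the answer above.

```csharp
using System;
using System.Collections.Generic;

public sealed class Category
{
    public int Id { get; private set; }
    public string Description { get; private set; }

    public Category(int id, string description)
    {
        Id = id;
        Description = description;
    }
}

public sealed class CategoryRegistry
{
    // Two dictionaries over the same Category instances give O(1) lookup in both directions.
    private readonly Dictionary<int, Category> byId = new Dictionary<int, Category>();
    private readonly Dictionary<string, Category> byDescription = new Dictionary<string, Category>();

    public void Add(Category category)
    {
        // Dictionary.Add throws on duplicates, enforcing uniqueness of both id and description.
        byId.Add(category.Id, category);
        byDescription.Add(category.Description, category);
    }

    public string GetDescription(int id) { return byId[id].Description; }
    public int GetIdNumber(string description) { return byDescription[description].Id; }
}
```

Since both dictionaries hold references to the same `Category` objects, the memory overhead over a single dictionary is just one extra entry per item, not a copy of the data.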
You can use a Dictionary<TKey, TValue>:

var items = new Dictionary<int, string>();
items.Add(1423, "General");
...
var valueOf1423 = items[1423];
var keyOfGeneral = items.FirstOrDefault(x => x.Value == "General").Key;

Note that if no entry has the value "General", FirstOrDefault does not throw; it returns the default KeyValuePair, so keyOfGeneral will be 0 and you can't distinguish "not found" from an entry whose key is actually 0. To handle this you could wrap the Dictionary in a custom class that checks whether a matching entry exists and returns whatever you need.
Also note that values are not unique: a Dictionary allows you to store the same value under different keys, so a lookup by value only finds the first match.
A wrapper class could look something like this:
public class Category {
    private Dictionary<int, string> items = new Dictionary<int, string>();

    public void Add(int id, string description) {
        if (GetId(description) != -1) {
            // Entry with this description already exists.
            // Handle accordingly to enforce uniqueness if required.
        } else {
            items.Add(id, description);
        }
    }

    public string GetDescription(int id) {
        return items[id];
    }

    public int GetId(string description) {
        // KeyValuePair<int, string> is a struct, so it can't be compared to null;
        // iterate and return -1 when no entry matches.
        foreach (var entry in items) {
            if (entry.Value == description)
                return entry.Key;
        }
        return -1;
    }
}
In an application of mine, I need a large constant (actually static readonly) array of objects. The array is initialized in the type's static constructor.
The array contains more than a thousand items, and when the type is first used, my program experiences a serious slowdown. I would like to know if there is a way to initialise a large array quickly in C#.
public static class XSampa {
    public class XSampaPair : IComparable<XSampaPair> {
        public XSampaPair GetReverse() {
            // Swap key and target so the reversed pair maps IPA back to X-SAMPA.
            return new XSampaPair(Target, Key);
        }

        public string Key { get; private set; }
        public string Target { get; private set; }

        internal XSampaPair(string key, string target) {
            Key = key;
            Target = target;
        }

        public int CompareTo(XSampaPair other) {
            if (other == null)
                throw new ArgumentNullException("other",
                    "Cannot compare with Null.");
            if (Key == null)
                throw new NullReferenceException("Key is null!");
            if (other.Key == null)
                throw new NullReferenceException("Key is null!");
            if (Key.Length == other.Key.Length)
                return string.Compare(Key, other.Key,
                    StringComparison.InvariantCulture);
            return other.Key.Length - Key.Length; // longer keys sort first
        }
    }

    private static readonly XSampaPair[] pairs, reversedPairs;

    public static string ParseXSampaToIpa(this string xsampa) {
        // Parsing code here...
    }

    public static string ParseIpaToXSampa(this string ipa) {
        // reverse code here...
    }

    static XSampa() {
        pairs = new[] {
            new XSampaPair("a", "\u0061"),
            new XSampaPair("b", "\u0062"),
            new XSampaPair("b_<", "\u0253"),
            new XSampaPair("c", "\u0063"),
            // And many more pairs initialized here...
        };
        var temp = pairs.Select(x => x.GetReverse());
        reversedPairs = temp.ToArray();
        Array.Sort(pairs);
        Array.Sort(reversedPairs);
    }
}
PS: I use the array to convert X-SAMPA phonetic transcription to a Unicode string with the corresponding IPA characters.
You can serialize a completely initialized object into a binary file, add that file as a resource, and load it into your array on startup. If your constructors are CPU-intensive, you might get an improvement. Since your code appears to perform some sort of parsing, the chances of getting a decent improvement there are fairly high.
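As a minimal sketch of that idea, assuming each pair boils down to two strings (the `PairCache` name and the simple length-prefixed format are illustrative, not from the question): write the fully built pairs to a compact binary file once, then read them straight back at startup instead of re-running the constructors.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class PairCache
{
    // Write all (key, target) pairs to a compact binary file, e.g. at build time.
    public static void Save(string path, IList<KeyValuePair<string, string>> pairs)
    {
        using (var writer = new BinaryWriter(File.Create(path)))
        {
            writer.Write(pairs.Count);
            foreach (var pair in pairs)
            {
                writer.Write(pair.Key);
                writer.Write(pair.Value);
            }
        }
    }

    // Load them back at startup; this is a straight read with no per-item parsing.
    public static KeyValuePair<string, string>[] Load(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            var result = new KeyValuePair<string, string>[reader.ReadInt32()];
            for (int i = 0; i < result.Length; i++)
                result[i] = new KeyValuePair<string, string>(reader.ReadString(), reader.ReadString());
            return result;
        }
    }
}
```

If the file is sorted before saving, the startup path can also skip the `Array.Sort` calls entirely.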
You could use an IEnumerable<YourObj> and lazily yield return items only as they are needed.
The drawback is that you won't be able to index into it the way you can with an array.
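If indexing matters, a middle ground is to defer the whole build with Lazy<T>: the cost is paid on first access rather than at type load, and you still get an array afterwards. A minimal sketch (the `XSampaTable` name and placeholder data are illustrative):

```csharp
using System;

static class XSampaTable
{
    // The expensive array is built only when Pairs is first touched,
    // not in a static constructor that runs on first use of the type.
    private static readonly Lazy<string[]> pairs = new Lazy<string[]>(BuildPairs);

    public static string[] Pairs { get { return pairs.Value; } }

    private static string[] BuildPairs()
    {
        // Stand-in for the real, expensive initialization of 1000+ items.
        return new[] { "a", "b", "b_<", "c" };
    }
}
```

After the first access (`XSampaTable.Pairs[2]`), subsequent reads hit the cached array directly, and Lazy<T> is thread-safe by default.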