I am using the Microsoft Enterprise Library Caching library (version 5) with a FileDependency.
On the class that I want cached, I have a static property that will either return the item from the cache, or else create a new class and add it to the cache.
This initially works well: the class is created once and, from then on, the cached copy is returned. However, once the dependency file changes, the cached item is never returned.
I have put together a sample program below to illustrate the issue.
The output from this is
999 cached , 1 uncached
999 cached , 1001 uncached
I would expect the results to be
999 cached , 1 uncached
1998 cached , 2 uncached
It looks as though the object is added back to the cache, but is then immediately removed as expired.
Any ideas why?
using System;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;
namespace TestCache
{
static class Program
{
[STAThread]
static void Main()
{
Cache.Create();
for (int i = 0; i < 1000; i++)
TestClass.Current.DummyMethod();
Console.WriteLine(String.Format("{0} cached , {1} uncached", TestClass.CachedItems, TestClass.UncachedItems));
System.IO.File.AppendAllText(Cache.dependencyFileName, "Test");
for (int i = 0; i < 1000; i++)
TestClass.Current.DummyMethod();
Console.WriteLine(String.Format("{0} cached , {1} uncached", TestClass.CachedItems, TestClass.UncachedItems));
Console.ReadLine();
}
}
public class Cache
{
public static CacheManager cacheManager = null;
public static string dependencyFileName;
public static FileDependency objFileDependency;
public static void Create()
{
var builder = new ConfigurationSourceBuilder();
builder.ConfigureCaching()
.ForCacheManagerNamed("TestCache")
.UseAsDefaultCache()
.StoreInMemory();
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
cacheManager = (CacheManager)EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>("TestCache");
dependencyFileName = "testCache.xml";
if (!System.IO.File.Exists(dependencyFileName))
using (System.IO.File.Create(dependencyFileName)) { }
objFileDependency = new FileDependency(dependencyFileName);
}
}
public class TestClass
{
public static int CachedItems = 0;
public static int UncachedItems = 0;
public void DummyMethod()
{
}
public static TestClass Current
{
get
{
TestClass current = (Cache.cacheManager.GetData("Test") as TestClass);
if (current != null)
CachedItems++;
else
{
UncachedItems++;
current = new TestClass();
Cache.cacheManager.Add("Test", current, CacheItemPriority.Normal, null, new ICacheItemExpiration[] { Cache.objFileDependency });
}
return current;
}
}
}
}
Your issue is that you are using a static FileDependency. This causes the LastUpdateTime of the FileDependency never to be updated, which in turn causes all items added to the cache to show as expired (HasExpired() == true). Even though you are adding items to the cache, since they are expired you can never retrieve them.
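To see this in isolation, here is a minimal sketch (not part of the original program) of how a single FileDependency instance reports itself as expired once the file it watches is modified:
// Minimal sketch; assumes testCache.xml already exists, as in the sample above.
var dependency = new FileDependency("testCache.xml");
System.IO.File.AppendAllText("testCache.xml", "Test");
// HasExpired() now returns true, so any CacheItem still holding this instance is evicted on its next access.
Console.WriteLine(dependency.HasExpired());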
The solution is to use a new FileDependency object for all additions to the cache. The easiest change would be to replace the objFileDependency field with a property. Using your existing names and approach, the code would look like:
public class Cache
{
public static CacheManager cacheManager = null;
public static readonly string dependencyFileName = "testCache.xml";
public static FileDependency objFileDependency
{
get
{
return new FileDependency(dependencyFileName);
}
}
public static void Create()
{
var builder = new ConfigurationSourceBuilder();
builder.ConfigureCaching()
.ForCacheManagerNamed("TestCache")
.UseAsDefaultCache()
.StoreInMemory();
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
cacheManager = (CacheManager)EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>("TestCache");
if (!System.IO.File.Exists(dependencyFileName))
using (System.IO.File.Create(dependencyFileName)) { }
}
}
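The call site in TestClass.Current stays exactly as it was; the only difference is that Cache.objFileDependency now hands back a fresh FileDependency (with an up-to-date last-write time) on every addition:
Cache.cacheManager.Add("Test", current, CacheItemPriority.Normal, null, new ICacheItemExpiration[] { Cache.objFileDependency });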
I am using the ElasticClient.Search function to get the values of the fields.
The issue is:
The code I wrote below performs the mapping correctly, but the search returns null values for the fields that were mapped before.
Main.cs
using Nest;
using System;
using System.Linq;
using System.Threading;
namespace DataAccessConsole
{
class Program
{
public static Uri node;
public static ConnectionSettings settings;
public static ElasticClient client;
static void Main(string[] args)
{
{
node = new Uri("http://localhost:9200");
settings = new ConnectionSettings(node).DefaultIndex("getallcommissionspermanentes");
settings.DefaultFieldNameInferrer(p => p);
client = new ElasticClient(settings);
var indexSettings = new IndexSettings();
indexSettings.NumberOfReplicas = 1;
indexSettings.NumberOfShards = 1;
client.Indices.Create("getallcommissionspermanentes", index => index
.Map<GetAllCommissionsPermanentes>(
x => x
.AutoMap<GetAllCommissionsPermanentes>()
));
client.Search<GetAllCommissionsPermanentes>(s => s
.AllIndices()
);
}
}
}
}
GetAllCommissionsPermanentes.cs
The table is located in an EDMX model of Entity Framework, and the data comes from a SQL Server database.
public partial class GetAllCommissionsPermanentes
{
public int ID { get; set; }
public string NomAr { get; set; }
public string NomFr { get; set; }
}
If you need more information, just leave a comment below.
Thanks
The code is correct, but .AllIndices() searches across all indexes, so results that do not match the model are also returned. This code will return more accurate results:
client.Search<GetAllCommissionsPermanentes>(s => s.Index("getallcommissionspermanentes"));
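To actually read the mapped fields, capture the response and enumerate its Documents collection. A hedged sketch, reusing the index name and model from the question:
// Hedged sketch: query the single index and print the hydrated documents.
var response = client.Search<GetAllCommissionsPermanentes>(s => s
    .Index("getallcommissionspermanentes")
    .Query(q => q.MatchAll()));
foreach (var commission in response.Documents)
{
    Console.WriteLine($"{commission.ID} - {commission.NomFr} / {commission.NomAr}");
}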
We are using C1 Azure Redis Cache in our application. Recently we are experiencing lots of time-outs on GET operations.
According to this article, one of the possible solutions is to implement a pool of ConnectionMultiplexer objects.
Another possible solution is to use a pool of ConnectionMultiplexer
objects in your client, and choose the “least loaded”
ConnectionMultiplexer when sending a new request. This should prevent
a single timeout from causing other requests to also timeout.
What would an implementation of a pool of ConnectionMultiplexer objects look like in C#?
Edit:
Related question that I asked recently.
You can also accomplish this in an easier way by using StackExchange.Redis.Extensions
Sample code:
using StackExchange.Redis;
using StackExchange.Redis.Extensions.Core.Abstractions;
using StackExchange.Redis.Extensions.Core.Configuration;
using System;
using System.Collections.Concurrent;
using System.Linq;
namespace Pool.Redis
{
/// <summary>
/// Provides redis pool
/// </summary>
public class RedisConnectionPool : IRedisCacheConnectionPoolManager
{
private static ConcurrentBag<Lazy<ConnectionMultiplexer>> connections;
private readonly RedisConfiguration redisConfiguration;
public RedisConnectionPool(RedisConfiguration redisConfiguration)
{
this.redisConfiguration = redisConfiguration;
Initialize();
}
public IConnectionMultiplexer GetConnection()
{
Lazy<ConnectionMultiplexer> response;
var loadedLazys = connections.Where(lazy => lazy.IsValueCreated);
if (loadedLazys.Count() == connections.Count)
{
response = connections.OrderBy(x => x.Value.GetCounters().TotalOutstanding).First();
}
else
{
response = connections.First(lazy => !lazy.IsValueCreated);
}
return response.Value;
}
private void Initialize()
{
connections = new ConcurrentBag<Lazy<ConnectionMultiplexer>>();
for (int i = 0; i < redisConfiguration.PoolSize; i++)
{
connections.Add(new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(redisConfiguration.ConfigurationOptions)));
}
}
public void Dispose()
{
var activeConnections = connections.Where(lazy => lazy.IsValueCreated).ToList();
activeConnections.ForEach(connection => connection.Value.Dispose());
Initialize();
}
}
}
Where RedisConfiguration is something like this:
return new RedisConfiguration()
{
AbortOnConnectFail = true,
Hosts = new RedisHost[] {
new RedisHost()
{
Host = ConfigurationManager.AppSettings["RedisCacheAddress"].ToString(),
Port = 6380
},
},
ConnectTimeout = Convert.ToInt32(ConfigurationManager.AppSettings["RedisTimeout"].ToString()),
Database = 0,
Ssl = true,
Password = ConfigurationManager.AppSettings["RedisCachePassword"].ToString(),
ServerEnumerationStrategy = new ServerEnumerationStrategy()
{
Mode = ServerEnumerationStrategy.ModeOptions.All,
TargetRole = ServerEnumerationStrategy.TargetRoleOptions.Any,
UnreachableServerAction = ServerEnumerationStrategy.UnreachableServerActionOptions.Throw
},
PoolSize = 50
};
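A hedged usage sketch, assuming the configuration above is returned by a helper (GetRedisConfiguration is a hypothetical name): create the pool once, then ask it for the least-loaded connection before each operation.
// GetRedisConfiguration() is a hypothetical helper returning the RedisConfiguration shown above.
var pool = new RedisConnectionPool(GetRedisConfiguration());
// GetConnection() hands back an idle or least-loaded multiplexer from the pool.
var db = pool.GetConnection().GetDatabase();
db.StringSet("some-key", "some-value");
Console.WriteLine(db.StringGet("some-key"));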
If you're using StackExchange.Redis, according to this github issue, you can use the TotalOutstanding property on the connection multiplexer object.
Here is an implementation I came up with that is working correctly:
public static int POOL_SIZE = 100;
private static readonly Object lockPookRoundRobin = new Object();
private static Lazy<Context>[] lazyConnection = null;
//Static initializer to be executed once on the first call
private static void InitConnectionPool()
{
lock (lockPookRoundRobin)
{
if (lazyConnection == null) {
lazyConnection = new Lazy<Context>[POOL_SIZE];
}
for (int i = 0; i < POOL_SIZE; i++){
if (lazyConnection[i] == null)
lazyConnection[i] = new Lazy<Context>(() => new Context("YOUR_CONNECTION_STRING", new CachingFramework.Redis.Serializers.JsonSerializer()));
}
}
}
private static Context GetLeastLoadedConnection()
{
//choose the least loaded connection from the pool
/*
var minValue = lazyConnection.Min((lazyCtx) => lazyCtx.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding);
var lazyContext = lazyConnection.Where((lazyCtx) => lazyCtx.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding == minValue).First();
*/
// UPDATE following #Luke Foust comment below
Lazy<Context> lazyContext;
var loadedLazys = lazyConnection.Where((lazy) => lazy.IsValueCreated);
if (loadedLazys.Count() == lazyConnection.Count()){
var minValue = loadedLazys.Min((lazy) => lazy.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding);
lazyContext = loadedLazys.Where((lazy) => lazy.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding == minValue).First();
}else{
lazyContext = lazyConnection[loadedLazys.Count()];
}
return lazyContext.Value;
}
private static Context Connection
{
get
{
lock (lockPookRoundRobin)
{
return GetLeastLoadedConnection();
}
}
}
public RedisCacheService()
{
InitConnectionPool();
}
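A hedged usage sketch for the members above (assumed to live inside the same RedisCacheService class): each operation goes through the least-loaded Context via the GetConnectionMultiplexer() accessor already used in the commented-out block.
// Hypothetical method, for illustration only.
public string GetCachedValue(string key)
{
    var db = Connection.GetConnectionMultiplexer().GetDatabase();
    return db.StringGet(key);
}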
Ninject doesn’t provide an InSessionScope binding for websites, so we have created our own extension:
public static IBindingNamedWithOrOnSyntax<T> InSessionScope<T>(this IBindingInSyntax<T> parent)
{
return parent.InScope(SessionScopeCallback);
}
private const string _sessionKey = "Ninject Session Scope Sync Root";
private static object SessionScopeCallback(IContext context)
{
if (HttpContext.Current.Session[_sessionKey] == null)
{
HttpContext.Current.Session[_sessionKey] = new object();
}
return HttpContext.Current.Session[_sessionKey];
}
This extension worked fine while we were using the standard local SessionStore.
But we changed the SessionStore and now use the "AppFabricCacheSessionStoreProvider", so the store is no longer on the local machine; it is on the server.
The problem is that Ninject tries to resolve the reference of an object that was serialized and deserialized and comes from the server rather than from local memory, so Ninject can't find the reference. The result is that Ninject always creates a new object and the SessionScope no longer works.
Edit 1:
We are using this functionality
https://msdn.microsoft.com/en-us/library/hh361711%28v=azure.10%29.aspx
and there I can use the standard HttpContext.Current.Session object, while the list content is stored on the server and not on the local machine.
So architecturally you have a problem in that you need to store the settings for AppFabric somewhere, and this is an issue with your static method. But assume you create a public static class like so:
public static class AppCache
{
public static DataCache Cache { get; private set; }
static AppCache()
{
List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>(1);
servers.Add(new DataCacheServerEndpoint("ServerName", 22233)); //22233 is the default port
DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration
{
Servers = servers,
LocalCacheProperties = new DataCacheLocalCacheProperties(),
SecurityProperties = new DataCacheSecurity(),
RequestTimeout = new TimeSpan(0, 0, 300),
MaxConnectionsToServer = 10,
ChannelOpenTimeout = new TimeSpan(0, 0, 300),
TransportProperties = new DataCacheTransportProperties() { MaxBufferSize = int.MaxValue, MaxBufferPoolSize = long.MaxValue }
};
DataCacheClientLogManager.ChangeLogLevel(System.Diagnostics.TraceLevel.Off);
var _factory = new DataCacheFactory(configuration);
Cache = _factory.GetCache("MyCache");
}
}
then you can change extension like so:
public static IBindingNamedWithOrOnSyntax<T> InSessionScope<T>(this IBindingInSyntax<T> parent)
{
return parent.InScope(SessionScopeCallback);
}
private const string _sessionKey = "Ninject Session Scope Sync Root";
private static object SessionScopeCallback(IContext context)
{
var cachedItem = AppCache.Cache.Get("MyItem"); // IMPORTANT: For concurrency reason, get the whole item down to method scope.
if (cachedItem == null)
{
cachedItem = new object();
AppCache.Cache.Put("MyItem", cachedItem);
}
return cachedItem;
}
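The binding side does not change. A hedged sketch (the kernel and service types are placeholders) of how the scope extension above is applied:
// Placeholder types; InSessionScope is the extension method defined above.
var kernel = new StandardKernel();
kernel.Bind<IMyService>().To<MyService>().InSessionScope();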
I've found a "Solution" that works so far it's not perfect because I am avoiding the AppFabric Store with an Localstore for the Object Reference.
public static IBindingNamedWithOrOnSyntax<T> InSessionScope<T>(this IBindingInSyntax<T> parent)
{
return parent.InScope(SessionScopeCallback);
}
public static Dictionary<string, object> LocalSessionStore = new Dictionary<string, object>();
private const string _sessionKey = "Ninject Session Scope Sync Root";
private static object SessionScopeCallback(IContext context)
{
var obj = new object();
var key = (string)HttpContext.Current.Session[_sessionKey];
if (string.IsNullOrEmpty(key))
{
var guid = Guid.NewGuid().ToString();
HttpContext.Current.Session[_sessionKey] = guid;
LocalSessionStore.Add(guid, obj);
return obj; // return the scope object itself, not the string key stored in the session
}
else if(!LocalSessionStore.ContainsKey(key))
{
LocalSessionStore.Add(key, obj);
return LocalSessionStore[key];
}
else if (LocalSessionStore.ContainsKey(key))
{
return LocalSessionStore[key];
}
return HttpContext.Current.Session[_sessionKey];
}
}
I'm sure it's very straightforward, but I am struggling to figure out how to write an array to a file using CsvHelper.
I have a class, for example:
public class Test
{
public Test()
{
data = new float[]{0,1,2,3,4};
}
public float[] data{get;set;}
}
I would like the data to be written with each array value in a separate cell. I have a custom converter below, which is instead producing one cell with all the values in it.
What am I doing wrong?
public class DataArrayConverter<T> : ITypeConverter
{
public string ConvertToString(TypeConverterOptions options, object value)
{
var data = (T[])value;
var s = string.Join(",", data);
return s;
}
public object ConvertFromString(TypeConverterOptions options, string text)
{
throw new NotImplementedException();
}
public bool CanConvertFrom(Type type)
{
return type == typeof(string);
}
public bool CanConvertTo(Type type)
{
return type == typeof(string);
}
}
To further detail the answer from Josh Close, here is what you need to do to write any IEnumerable (including arrays and generic lists) with a recent version (anything above 3.0) of CsvHelper.
Here is the class under test:
public class Test
{
public int[] Data { get; set; }
public Test()
{
Data = new int[] { 0, 1, 2, 3, 4 };
}
}
And a method to show how this can be saved:
static void Main()
{
using (var writer = new StreamWriter("db.csv"))
using (var csv = new CsvWriter(writer))
{
var list = new List<Test>
{
new Test()
};
csv.Configuration.HasHeaderRecord = false;
csv.WriteRecords(list);
writer.Flush();
}
}
The important configuration here is csv.Configuration.HasHeaderRecord = false;. Only with this configuration will you be able to see the data in the CSV file.
Further details can be found in the related unit test cases from CsvHelper.
In case you are looking for a solution to store properties of type IEnumerable with varying numbers of elements, the following example might be of help:
using CsvHelper;
using CsvHelper.Configuration;
using System.Collections.Generic;
using System.IO;
using System.Linq;
namespace CsvHelperSpike
{
class Program
{
static void Main(string[] args)
{
using (var writer = new StreamWriter("db.csv"))
using (var csv = new CsvWriter(writer))
{
csv.Configuration.Delimiter = ";";
var list = new List<AnotherTest>
{
new AnotherTest("Before String") { Tags = new List<string> { "One", "Two", "Three" }, After="After String" },
new AnotherTest("This is still before") {After="after again", Tags=new List<string>{ "Six", "seven","eight", "nine"} }
};
csv.Configuration.RegisterClassMap<TestIndexMap>();
csv.WriteRecords(list);
writer.Flush();
}
using(var reader = new StreamReader("db.csv"))
using(var csv = new CsvReader(reader))
{
csv.Configuration.IncludePrivateMembers = true;
csv.Configuration.RegisterClassMap<TestIndexMap>();
var result = csv.GetRecords<AnotherTest>().ToList();
}
}
private class AnotherTest
{
public string Before { get; private set; }
public string After { get; set; }
public List<string> Tags { get; set; }
public AnotherTest() { }
public AnotherTest(string before)
{
this.Before = before;
}
}
private sealed class TestIndexMap : ClassMap<AnotherTest>
{
public TestIndexMap()
{
Map(m => m.Before).Index(0);
Map(m => m.After).Index(1);
Map(m => m.Tags).Index(2);
}
}
}
}
By using the ClassMap it is possible to enable HasHeaderRecord (the default) again. It is important to note that this solution will only work if the collection with a varying number of elements is the last property. Otherwise the collection needs to have a fixed number of elements and the ClassMap needs to be adapted accordingly (see the sketch below).
This example also shows how to handle properties with a private set. For this to work it is important to use the csv.Configuration.IncludePrivateMembers = true; configuration and have a default constructor on your class.
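If the collection cannot be the last property, one option is to reserve a fixed range of columns for it. This is a hedged sketch; it assumes CsvHelper's Index(start, end) overload for collection members:
// Hedged sketch: pin Tags to a fixed block of columns so another property can follow it.
private sealed class FixedWidthTestIndexMap : ClassMap<AnotherTest>
{
    public FixedWidthTestIndexMap()
    {
        Map(m => m.Before).Index(0);
        Map(m => m.Tags).Index(1, 3); // exactly three tag columns
        Map(m => m.After).Index(4);
    }
}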
Unfortunately, it doesn't work like that. Since you are returning a comma-separated string from the converter, the field will be quoted, as the commas are part of a single field.
Currently the only way to accomplish what you want is to write manually, which isn't too horrible.
foreach( var test in list )
{
foreach( var item in test.Data )
{
csvWriter.WriteField( item );
}
csvWriter.NextRecord();
}
Update
Version 3 has support for reading and writing IEnumerable properties.
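With version 3 and later the collection can therefore be mapped directly; a hedged sketch against the Test class from the question:
// Hedged sketch: in CsvHelper 3+, a mapped IEnumerable property is written to consecutive fields.
public sealed class TestMap : ClassMap<Test>
{
    public TestMap()
    {
        Map(m => m.Data).Index(0);
    }
}
// Usage: csv.Configuration.RegisterClassMap<TestMap>(); before calling csv.WriteRecords(list);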
I am trying to get some data for a specified user using eBay's GetFeedback API and ended up with this code.
using System;
using System.Configuration;
// eBay Trading API SDK namespaces
using eBay.Service.Call;
using eBay.Service.Core.Sdk;
using eBay.Service.Core.Soap;
namespace one
{
class Program
{
private static ApiContext apiContext = null;
static void Main(string[] args)
{
ApiContext apiContext = GetApiContext();
GeteBayOfficialTimeCall apiCall = new GeteBayOfficialTimeCall(apiContext);
GetFeedbackCall call = new GetFeedbackCall(apiContext);
call.UserID = "abc";
Console.WriteLine(call.GetFeedback().ToString());
Console.ReadKey();
}
static ApiContext GetApiContext()
{
if (apiContext != null)
{
return apiContext;
}
else
{
apiContext = new ApiContext();
apiContext.SoapApiServerUrl = ConfigurationManager.AppSettings["Environment.ApiServerUrl"];
ApiCredential apiCredential = new ApiCredential();
apiCredential.eBayToken = ConfigurationManager.AppSettings["UserAccount.ApiToken"];
apiContext.ApiCredential = apiCredential;
apiContext.Site = SiteCodeType.US;
return apiContext;
}
}
}
}
It prints the following line in the console:
eBay.Service.Core.Soap.FeedbackDetailTypeCollection
How can I get the original data?
call.GetFeedback() returns a collection of FeedbackDetailType members, so you can use foreach to retrieve information (such as the feedback score and other details) about each individual feedback entry.
See the complete list of FeedbackDetailType members
here.
For example:
foreach (FeedbackDetailType feedback in call.GetFeedback())
{
Console.WriteLine(feedback.CommentText);
//and other stuff
}
Or you can use something like this:
call.GetFeedback();
Console.WriteLine(call.FeedbackScore);