1 out of N threads never joining - C#

I have a thread pool implementation where, whenever I try to stop/join the pool, there is always one random thread in the pool that will not stop (state == Running) when I call Stop() on the pool.
I cannot see why: I only have one lock, and in Stop I notify whoever might be blocked waiting in Dequeue with Monitor.PulseAll. The debugger clearly shows most of them got the message; it is just always 1 out of N that is still running...
Here is a minimal implementation of the pool:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
namespace MultiThreading
{
public class WorkerHub
{
private readonly object _listMutex = new object();
private readonly Queue<TaskWrapper> _taskQueue;
private readonly List<Thread> _threads;
private int _runCondition;
private readonly Dictionary<string, int> _statistics;
public WorkerHub(int count = 4)
{
_statistics = new Dictionary<string, int>();
_taskQueue = new Queue<TaskWrapper>();
_threads = new List<Thread>();
InitializeThreads(count);
}
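// Note: Interlocked.CompareExchange(ref _runCondition, 1, 1) swaps in 1 only when the value is already 1,
// which makes it a no-op that atomically reads _runCondition; the setter swaps 0 <-> 1 atomically.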
private bool ShouldRun
{
get => Interlocked.CompareExchange(ref _runCondition, 1, 1) == 1;
set
{
if (value)
Interlocked.CompareExchange(ref _runCondition, 1, 0);
else
Interlocked.CompareExchange(ref _runCondition, 0, 1);
}
}
private void InitializeThreads(int count)
{
Action threadHandler = () =>
{
while (ShouldRun)
{
var wrapper = Dequeue();
if (wrapper != null)
{
wrapper.FunctionBinding.Invoke();
_statistics[Thread.CurrentThread.Name] += 1;
}
}
};
for (var i = 0; i < count; ++i)
{
var t = new Thread(() => { threadHandler.Invoke(); });
t.Name = $"WorkerHub Thread#{i}";
_statistics[t.Name] = 0;
_threads.Add(t);
}
}
public Task Enqueue(Action work)
{
var tcs = new TaskCompletionSource<bool>();
var wrapper = new TaskWrapper();
Action workInvoker = () =>
{
try
{
work.Invoke();
tcs.TrySetResult(true);
}
catch (Exception e)
{
tcs.TrySetException(e);
}
};
Action workCanceler = () => { tcs.TrySetCanceled(); };
wrapper.FunctionBinding = workInvoker;
wrapper.CancelBinding = workCanceler;
lock (_listMutex)
{
_taskQueue.Enqueue(wrapper);
Monitor.PulseAll(_listMutex);
}
return tcs.Task;
}
private TaskWrapper Dequeue()
{
lock (_listMutex)
{
while (_taskQueue.Count == 0)
{
if (!ShouldRun)
return null;
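// Monitor.Wait atomically releases _listMutex and blocks; it reacquires the lock once pulsed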
Monitor.Wait(_listMutex);
}
_taskQueue.TryDequeue(out var wrapper);
return wrapper;
}
}
public void Stop()
{
ShouldRun = false;
//Wake up whoever is waiting for dequeue
lock (_listMutex)
{
Monitor.PulseAll(_listMutex);
}
foreach (var thread in _threads)
{
thread.Join();
}
var sum = _statistics.Sum(pair => pair.Value) * 1.0;
foreach (var stat in _statistics)
{
Console.WriteLine($"{stat.Key} ran {stat.Value} functions, {stat.Value/sum * 100} percent of the total.");
}
}
public void Start()
{
ShouldRun = true;
foreach (var thread in _threads) thread.Start();
}
}
}
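(TaskWrapper isn't shown in the post; a minimal version consistent with how it is used above would be:)
// Assumed shape of TaskWrapper: just a holder for the two delegates built in Enqueue.
public class TaskWrapper
{
public Action FunctionBinding { get; set; }
public Action CancelBinding { get; set; }
}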
With a test run:
public static async Task Main(string[] args)
{
var hub = new WorkerHub();
var tasks = Enumerable.Range(0, 100).Select(x => hub.Enqueue(() => Sum(x)))
.ToArray();
var sw = new Stopwatch();
sw.Start();
hub.Start();
await Task.WhenAll(tasks);
hub.Stop();
sw.Stop();
Console.WriteLine($"Work took: {sw.ElapsedMilliseconds}ms.");
}
public static int Sum(int n)
{
var sum = 0;
for (var i = 0; i <= n; ++i) sum += i;
Console.WriteLine($"Sum of numbers up to {n} is {sum}");
return sum;
}
Am I missing something fundamental? Please note this is not production code (phew), just stuff I am messing around with, so you might find more than one issue :)

I wasn't able to repro your MCVE at first because I ran it in a non-async Main()...
If you view the 'Threads' debug window at the call to hub.Stop(); you should see that execution has switched to one of your worker threads. By default, TaskCompletionSource continuations run synchronously, so the await Task.WhenAll continuation (the rest of Main) resumes on the worker thread that completed the last task. That thread then calls hub.Stop(), which tries to Join() every worker, including itself, and a thread can never Join itself, so it stays Running. This is why one worker thread does not respond.
I think it's related to the problem described here.
Switching Enqueue(Action work) to use TaskCreationOptions.RunContinuationsAsynchronously should fix it:
var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
[Edit]
Probably a better way to avoid the problem is to swap out the direct thread management for tasks (this isn't a proper drop-in replacement for your current code; I just want to show the idea):
public class TaskWorkerHub
{
ConcurrentQueue<Action> workQueue = new ConcurrentQueue<Action>();
int concurrentTasks;
CancellationTokenSource cancelSource;
List<Task> workers = new List<Task>();
private async Task Worker(CancellationToken cancelToken)
{
while (workQueue.TryDequeue(out var workTuple))
{
await Task.Run(workTuple, cancelToken);
}
}
public TaskWorkerHub(int concurrentTasks = 4)
{
this.concurrentTasks = concurrentTasks;
}
public void Enqueue(Action work) => workQueue.Enqueue(work);
public void Start()
{
cancelSource = new CancellationTokenSource();
for (int i = 0; i < concurrentTasks; i++)
{
workers.Add(Worker(cancelSource.Token));
}
}
public void Stop() => cancelSource.Cancel();
public Task WaitAsync() => Task.WhenAll(workers);
}
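Usage would look roughly like this (note: enqueue before Start, since the workers above exit as soon as the queue is empty):
var hub = new TaskWorkerHub();
for (var i = 0; i < 100; i++)
{
var n = i; // capture a copy for the closure
hub.Enqueue(() => Console.WriteLine(Sum(n)));
}
hub.Start(); // spins up the concurrent workers
await hub.WaitAsync(); // completes once the queue has drained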

Related

C# WebAPI Thread Aborting when Sharing Data Access Logic

I'm getting the following error in my C# Web API: "Exception thrown: 'System.Threading.ThreadAbortException' in System.Data.dll
Thread was being aborted". I have a long-running process on one thread that uses my data access logic class to get and update the records being processed. Meanwhile a user submits another group to process, which also needs the same data access logic class, resulting in the error. Here is a rough sketch of what I'm doing.
WebAPI Class:
public IHttpActionResult OkToProcess(string groupNameToProcess)
{
var logic = GetLogic();
//Gets All Unprocessed Records and Adds them to Blocking Queue
Task.Factory.StartNew(() => logic.LoadAndProcess(groupNameToProcess));
}
public IHttpActionResult AddToProcess(int recordIdToProcess)
{
StaticProcessingFactory.AddToQueue(recordIdToProcess);
}
StaticProcessingFactory
internal static ConcurrentDictionary<ApplicationEnvironment, Logic> correctors = new ConcurrentDictionary<ApplicationEnvironment, Logic>();
internal static BlockingCollection<Message> MessageQueue = new BlockingCollection<Message>(2000);
public void StartService(){
Task.Factory.StartNew(() => LoadService());
}
public void LoadService(){
var logic = GetLogic();
if(isFirstGroupOkToProcessAsPerTextFileLog())
logic.LoadAndProcess("FirstGroup");
if(isSecondGroupOkToProcessAsPerTextFileLog())
logic.LoadAndProcess("SecondGroup");
}
public static Logic GetLogic(){
var sqlConnectionFactory = Tools.GetSqlConnectionFactory();
string environment = ConfigurationManager.AppSettings["DefaultApplicationEnvironment"];
ApplicationEnvironment applicationEnvironment =
ApplicationEnvironmentExtensions.ToApplicationEnvironment(environment);
return correctors.GetOrAdd(applicationEnvironment, new Logic(sqlConnectionFactory));
}
public static void AddToQueue(Message message, bool completeAdding = true)
{
if (MessageQueue.IsAddingCompleted)
MessageQueue = new BlockingCollection<Message>();
if (completeAdding && message.ProcessImmediately)
StartQueue(message);
else
MessageQueue.Add(message);
}
public static void StartQueue(Message message = null)
{
if (message != null)
{
if(!string.IsNullOrEmpty(message.ID))
MessageQueue.Add(message);
Logic logic = GetLogic(message.Environment);
try
{
var messages = MessageQueue.TakeWhile(x => logic.IsPartOfGroup(x.GroupName, message.GroupName));
if (messages.Count() > 0)
MessageQueue.CompleteAdding();
int i = 0;
foreach (var msg in messages)
{
i++;
Process(msg);
}
}
catch (InvalidOperationException) { MessageQueue.CompleteAdding(); }
}
}
public static void Process(Message message)
{
var logic = GetLogic(message.Environment);
var record = logic.GetRecord(message.ID);
record.Status = Status.Processed;
logic.Save(record);
}
Logic Class
private readonly DataAccess DataAccess;
public Logic(SqlConnectionFactory factory)
{
DataAccess = new DataAccess(factory);
}
public void LoadAndProcess(string groupName)
{
var groups = DataAccess.GetGroups();
var records = DataAccess.GetRecordsReadyToProcess(groups);
for(int i = 0; i < records.Count; i++)
{
Message message = new Message();
message.Environment = environment.ToString();
message.ID = records[i].ID;
message.User = user;
message.Group = groupName;
message.ProcessImmediately = true;
StaticProcessingFactory.AddToQueue(message, i + 1 == records.Count);
}
}
Any ideas how I might ensure that all traffic from all threads has access to the data access logic without threads being systematically aborted?

Batch single rest requests to batch rest request

Say I have an API that takes individual GET requests as well as batch requests:
http://myapiendpoint.com/mysuperitems/1234
and
http://myapiendpoint.com/mysuperitems/1234,2345,456,5677
and in my code I have a method for getting singles:
async Task<mysuperitem> GetSingleItem(int x) {
var endpoint = $"http://myapiendpoint.com/mysuperitems/{x}";
//... calls single request endpoint
}
but what I want to do is pool the single calls into batch calls.
async Task<mysuperitem> GetSingleItem(int x) {
//... pool this request in a queue and retrieve it when batch complete
}
async Task<IEnumerable<mysuperitem>> GetMultiItem(IEnumerable<int> ids){
//... gets items and lets single item know it's done
}
How would I batch the calls asynchronously and inform the single calls of completion? I'm thinking something with a ConcurrentQueue and a Timer job.
It seems Task.WhenAll is what you need:
async Task<mysuperitem> GetSingleItem(int x)
{
return await ... // calls single request endpoint
}
async Task<IEnumerable<mysuperitem>> GetMultiItem(IEnumerable<int> ids)
{
return await Task.WhenAll(ids.Select(id => GetSingleItem(id)));
}
Yeah, you can use a System.Timers.Timer with Timer.Interval.
I'd use a plain Dictionary<int, TaskCompletionSource<mysuperitem>> to keep it simple and to easily map ids to tasks; you're most likely batching all requests from that interval anyway, so there's no real need for a queue. Then simply sync GetSingleItem with the GetMultiItem called from the timer, like:
private Dictionary<int, TaskCompletionSource<mysuperitem>> _batchBuffer = new Dictionary<int, TaskCompletionSource<mysuperitem>>();
private object _lock = new object();
Task<mysuperitem> GetSingleItem(int id) {
lock(_lock) {
// hand out a task that the next batch run will complete
var tcs = new TaskCompletionSource<mysuperitem>();
_batchBuffer[id] = tcs;
return tcs.Task;
}
}
async Task GetMultiItem(){
Dictionary<int, TaskCompletionSource<mysuperitem>> temp;
lock(_lock) {
// snapshot and clear so new requests land in the next batch
temp = new Dictionary<int, TaskCompletionSource<mysuperitem>>(_batchBuffer);
_batchBuffer.Clear();
}
var batchResults = ... // do batch request for temp.Keys
foreach(var result in batchResults)
temp[result.id].SetResult(result); // completes the awaiting single-item task
}
This is of course batching to reduce server/network load; if you want to increase client performance, that's something different.
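The timer wiring isn't shown above; a rough sketch of driving GetMultiItem from a System.Timers.Timer (the 100 ms window is an arbitrary choice):
private readonly System.Timers.Timer _batchTimer = new System.Timers.Timer(100);
public void StartBatching()
{
_batchTimer.Elapsed += async (s, e) => await GetMultiItem(); // fire the batch once per interval
_batchTimer.AutoReset = true;
_batchTimer.Start();
}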
This is a first hack attempt:
using System;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Linq;
using timers = System.Timers;
using System.Threading;
using System.Collections.Concurrent;
namespace MyNamespace
{
class BatchPoolConsumer<TReturn, TArgs>
{
class PoolItem
{
private readonly object _itemWriteLock = new object();
public object ItemWriteLock => _itemWriteLock;
public Task BlockingTask { get; set; }
public TReturn ReturnValue { get; set; }
public Guid BatchId { get; set; }
public bool IsRead { get; set; }
public ManualResetEventSlim Slim { get; set; }
}
private readonly timers.Timer _batchTimer;
private readonly timers.Timer _poolCleanerTimer;
private readonly ConcurrentDictionary<TArgs, PoolItem> _taskPool =
new ConcurrentDictionary<TArgs, PoolItem>();
private readonly Func<IEnumerable<TArgs>, Task<IEnumerable<(TArgs, TReturn)>>> _batchProcessor;
private readonly int _consumerMaxBatchConsumption;
public BatchPoolConsumer(Func<IEnumerable<TArgs>, Task<IEnumerable<(TArgs, TReturn)>>> batchProcessor, TimeSpan interval, int consumerMaxBatchConsumption)
{
_batchProcessor = batchProcessor;
_consumerMaxBatchConsumption = consumerMaxBatchConsumption;
_batchTimer = InitTimer(interval, BatchTimerElapsed);
_poolCleanerTimer = InitTimer(interval, PoolCleanerElapsed);
}
private static timers.Timer InitTimer(TimeSpan interval, Action<object, timers.ElapsedEventArgs> callback)
{
var timer = new timers.Timer(interval.TotalMilliseconds);
timer.Elapsed += (s, e) => callback(s, e);
timer.Start();
return timer;
}
private void PoolCleanerElapsed(object sender, timers.ElapsedEventArgs e)
{
var completedKeys = _taskPool
.Where(i => i.Value.IsRead)
.Select(i => i.Key).ToList();
completedKeys.ForEach(k => _taskPool.TryRemove(k, out _));
}
private void BatchTimerElapsed(object sender, timers.ElapsedEventArgs e)
{
_batchTimer.Stop();
var batchId = Guid.NewGuid();
var keys = _taskPool
.Where(i => !i.Value.BlockingTask.IsCompleted && !i.Value.IsRead && i.Value.BatchId == Guid.Empty)
.Take(_consumerMaxBatchConsumption).Select(kvp => kvp.Key);
keys.ToList()
.ForEach(k =>
{
if(_taskPool.TryGetValue(k, out PoolItem item))
{
lock (item.ItemWriteLock)
{
item.BatchId = batchId;
}
}
});
_batchTimer.Start();
if (_taskPool
.Any(pi => pi.Value.BatchId == batchId))
{
Console.WriteLine($"Processing batch {batchId} for {_taskPool.Count(pi => pi.Value.BatchId == batchId)} items");
var results = _batchProcessor(_taskPool
.Where(pi => pi.Value.BatchId == batchId)
.Select(i => i.Key)).Result;
Console.WriteLine($"Completed batch {batchId} for {_taskPool.Count(pi => pi.Value.BatchId == batchId)} items");
results.ToList().ForEach(r =>
{
if(_taskPool.TryGetValue(r.Item1,out PoolItem val))
{
lock (val.ItemWriteLock)
{
val.ReturnValue = r.Item2;
val.Slim.Set();
}
}
});
}
}
public async Task<TReturn> Get(TArgs args)
{
var slim = new ManualResetEventSlim(false);
var task = Task.Run(() =>
{
slim.Wait();
});
var output = new PoolItem
{
BlockingTask = task,
IsRead = false,
Slim = slim
};
_taskPool[args] = output;
await task;
var returnVal = output.ReturnValue;
output.IsRead = true;
return returnVal;
}
}
}
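Usage of the BatchPoolConsumer might look like this (a sketch: the batch call is stubbed, and the id property on mysuperitem is an assumption):
// TReturn = mysuperitem, TArgs = int (the item id)
var consumer = new BatchPoolConsumer<mysuperitem, int>(
async ids =>
{
// stand-in for the real batch endpoint, e.g. GET /mysuperitems/1234,2345
IEnumerable<mysuperitem> items = await GetMultiItem(ids);
return items.Select(item => (item.id, item));
},
TimeSpan.FromMilliseconds(100),
consumerMaxBatchConsumption: 50);
// callers simply await single-item gets; batching happens behind the scenes
var item = await consumer.Get(1234);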

c# lock on Array vs Index of Array

Given this code below
var array = new object[10];
for(int x = 0;x<array.Length;x++)
array[x] = new object();
//Lock on Array
lock(array){
//Do Stuff
}
//Lock on object of array
lock(array[1]){
//Do Stuff
}
//lock on another object
var o = array[1];
lock(o){
//Do Stuff
}
The first lock statement locks on the array object.
But in the second lock statement, is the lock taken on the object at index 1 of the array, or on the array itself? Put another way: do the 2nd and 3rd locks behave the same?
I've never liked working with lock because of the complexity around multithreading. To answer my own question, using the following code, here are the results.
Scenarios 2 and 3 are equivalent: lock acquires the monitor of whatever object instance the expression evaluates to, so lock(array[1]) and lock(o) lock the same object, and those lock statements interfere with each other (which is what one would expect from the code).
Scenario 1 doesn't interfere with 2 or 3, because lock(array) takes the monitor of the array object itself, not of any of its elements.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication1
{
class Program
{
public static object o;
public static object[] array;
static void Main(string[] args)
{
array = new object[10];
for (int x = 0; x < array.Length; x++)
array[x] = new object();
o = array[1];
var tasks = new Task[100];
Task t;
//t = Task.Run(() => lockArray(5000));
t = Task.Run(() => lockArrayIndex(5000));
//t = Task.Run(() => lockObject(5000));
for (int i = 0; i < tasks.Length; i++)
{
//tasks[i] = Task.Run(() => lockArray(1000));
//tasks[i] = Task.Run(() => lockArrayIndex(1000));
tasks[i] = Task.Run(() => lockObject(1000));
}
Task.WaitAll(tasks);
"done".Dump();
Console.ReadKey();
}
private static void lockArray(int input)
{
//Lock on Array
lock (array)
{
System.Threading.Thread.Sleep(input);
"Array".Dump();
}
}
private static void lockArrayIndex(int input)
{
//Lock on object of array
lock (array[1])
{
System.Threading.Thread.Sleep(input);
"Array[1]".Dump();
}
}
private static void lockObject(int input)
{
//lock on another object
lock (o)
{
System.Threading.Thread.Sleep(input);
"o".Dump();
}
}
}
public static class Extensions
{
public static void Dump(this string input)
{
Console.WriteLine(input);
}
}
}

How to write the last file with the remaining messages when the message batch size is less than 240

In the application below,
the Producer method adds messages to a blocking collection.
In the Consumer method, I consume the blocking collection and add the messages to a list; when the list size is >= 240, I write that list to a JSON file.
At some point there are no new messages left in the blocking collection, but the Consumer still holds a list of messages smaller than 240, and the app never writes that remaining data to a new JSON file.
How can I let the Consumer know that no new messages are coming, so that it writes whatever it has left to a new file?
Is this possible? Say the Consumer waits for 1 minute and, if there are no new messages, writes whatever is left to a new file?
Here is the code (I'm adding 11 messages; up to message 9 the batch size reaches 240 and a file is generated, but messages 10 and 11 never get written to a new file):
class Program
{
private static List<Batch> batchList = new List<Batch>();
private static BlockingCollection<Message> messages = new BlockingCollection<Message>();
private static int maxbatchsize = 240;
private static int currentsize = 0;
private static void Producer()
{
int ctr = 1;
while (ctr <= 11)
{
messages.Add(new Message { Id = ctr, Name = $"Name-{ctr}" });
Thread.Sleep(1000);
ctr++;
}
}
private static void Consumer()
{
foreach (var message in messages.GetConsumingEnumerable())
{
var msg = JsonConvert.SerializeObject(message);
Console.WriteLine(msg);
if (currentsize + msg.Length >= maxbatchsize)
{
WriteToFile(batchList);
}
batchList.Add(new Batch { Message = message });
currentsize += msg.Length;
}
}
private static void WriteToFile(List<Batch> batchList)
{
using (StreamWriter outFile = System.IO.File.CreateText(Path.Combine(@"C:\TEMP", $"{DateTime.Now.ToString("yyyyMMddHHmmssfff")}.json")))
{
outFile.Write(JsonConvert.SerializeObject(batchList));
}
batchList.Clear();
currentsize = 0;
}
static void Main(string[] args)
{
var producer = Task.Factory.StartNew(() => Producer());
var consumer = Task.Factory.StartNew(() => Consumer());
Console.Read();
}
}
Supporting classes,
public class Message
{
public int Id { get; set; }
public string Name { get; set; }
}
public class Batch
{
public Message Message { get; set; }
}
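For reference, the usual way to signal "no more messages" is for the producer to call CompleteAdding(); GetConsumingEnumerable() then completes on its own once the queue is drained, so no timeout is needed and the consumer can flush whatever is left after the loop. A minimal sketch against the code above:
private static void Producer()
{
for (var ctr = 1; ctr <= 11; ctr++)
{
messages.Add(new Message { Id = ctr, Name = $"Name-{ctr}" });
Thread.Sleep(1000);
}
messages.CompleteAdding(); // signal: no more messages are coming
}
private static void Consumer()
{
// the loop ends once CompleteAdding has been called and the queue is empty
foreach (var message in messages.GetConsumingEnumerable())
{
var msg = JsonConvert.SerializeObject(message);
if (currentsize + msg.Length >= maxbatchsize)
WriteToFile(batchList); // flush a full batch (WriteToFile clears the list and resets currentsize)
batchList.Add(new Batch { Message = message });
currentsize += msg.Length;
}
if (batchList.Count > 0)
WriteToFile(batchList); // flush the remainder
}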
Update:
class Program
{
private static readonly List<Batch> BatchList = new List<Batch>();
private static readonly BlockingCollection<Message> Messages = new BlockingCollection<Message>();
private const int Maxbatchsize = 240;
private static int _currentsize;
private static void Producer()
{
int ctr = 1;
while (ctr <= 11)
{
Messages.Add(new Message { Id = ctr, Name = $"Name-{ctr}" });
Thread.Sleep(1000);
ctr++;
}
Messages.CompleteAdding();
}
private static void Consumer()
{
foreach (var message in Messages.GetConsumingEnumerable())
{
if (_currentsize >= Maxbatchsize)
{
var listToWrite = new Batch[BatchList.Count];
BatchList.CopyTo(listToWrite);
BatchList.Clear();
_currentsize = 0;
WriteToFile(listToWrite.ToList());
}
else
{
Thread.Sleep(1000);
if (Messages.IsAddingCompleted)
{
var remainSize = Messages.Select(JsonConvert.SerializeObject).Sum(x => x.Length);
if (remainSize == 0)
{
var lastMsg = JsonConvert.SerializeObject(message);
BatchList.Add(new Batch { Message = message });
_currentsize += lastMsg.Length;
Console.WriteLine(lastMsg);
var additionListToWrite = new Batch[BatchList.Count];
BatchList.CopyTo(additionListToWrite);
BatchList.Clear();
_currentsize = 0;
WriteToFile(additionListToWrite.ToList());
break;
}
}
}
var msg = JsonConvert.SerializeObject(message);
BatchList.Add(new Batch { Message = message });
_currentsize += msg.Length;
Console.WriteLine(msg);
}
}
private static void WriteToFile(List<Batch> listToWrite)
{
using (StreamWriter outFile = System.IO.File.CreateText(Path.Combine(@"C:\TEMP", $"{DateTime.Now.ToString("yyyyMMddHHmmssfff")}.json")))
{
outFile.Write(JsonConvert.SerializeObject(listToWrite));
}
}
static void Main(string[] args)
{
var producer = Task.Factory.StartNew(() => Producer());
var consumer = Task.Factory.StartNew(() => Consumer());
Console.Read();
}
}

Azure Redis Cache - pool of ConnectionMultiplexer objects

We are using a C1 Azure Redis Cache in our application. Recently we have been experiencing lots of timeouts on GET operations.
According to this article, one possible solution is to implement a pool of ConnectionMultiplexer objects.
Another possible solution is to use a pool of ConnectionMultiplexer
objects in your client, and choose the “least loaded”
ConnectionMultiplexer when sending a new request. This should prevent
a single timeout from causing other requests to also timeout.
What would an implementation of a pool of ConnectionMultiplexer objects look like in C#?
Edit:
Related question that I asked recently.
You can also accomplish this in an easier way by using StackExchange.Redis.Extensions.
Sample code:
using StackExchange.Redis;
using StackExchange.Redis.Extensions.Core.Abstractions;
using StackExchange.Redis.Extensions.Core.Configuration;
using System;
using System.Collections.Concurrent;
using System.Linq;
namespace Pool.Redis
{
/// <summary>
/// Provides redis pool
/// </summary>
public class RedisConnectionPool : IRedisCacheConnectionPoolManager
{
private static ConcurrentBag<Lazy<ConnectionMultiplexer>> connections;
private readonly RedisConfiguration redisConfiguration;
public RedisConnectionPool(RedisConfiguration redisConfiguration)
{
this.redisConfiguration = redisConfiguration;
Initialize();
}
public IConnectionMultiplexer GetConnection()
{
Lazy<ConnectionMultiplexer> response;
var loadedLazys = connections.Where(lazy => lazy.IsValueCreated);
if (loadedLazys.Count() == connections.Count)
{
response = connections.OrderBy(x => x.Value.GetCounters().TotalOutstanding).First();
}
else
{
response = connections.First(lazy => !lazy.IsValueCreated);
}
return response.Value;
}
private void Initialize()
{
connections = new ConcurrentBag<Lazy<ConnectionMultiplexer>>();
for (int i = 0; i < redisConfiguration.PoolSize; i++)
{
connections.Add(new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(redisConfiguration.ConfigurationOptions)));
}
}
public void Dispose()
{
var activeConnections = connections.Where(lazy => lazy.IsValueCreated).ToList();
activeConnections.ForEach(connection => connection.Value.Dispose());
Initialize();
}
}
}
Where RedisConfiguration is something like this:
return new RedisConfiguration()
{
AbortOnConnectFail = true,
Hosts = new RedisHost[] {
new RedisHost()
{
Host = ConfigurationManager.AppSettings["RedisCacheAddress"].ToString(),
Port = 6380
},
},
ConnectTimeout = Convert.ToInt32(ConfigurationManager.AppSettings["RedisTimeout"].ToString()),
Database = 0,
Ssl = true,
Password = ConfigurationManager.AppSettings["RedisCachePassword"].ToString(),
ServerEnumerationStrategy = new ServerEnumerationStrategy()
{
Mode = ServerEnumerationStrategy.ModeOptions.All,
TargetRole = ServerEnumerationStrategy.TargetRoleOptions.Any,
UnreachableServerAction = ServerEnumerationStrategy.UnreachableServerActionOptions.Throw
},
PoolSize = 50
};
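Wiring it up is then just a matter of constructing the pool once (in real code, register it as a singleton so the connections are reused):
// sketch; GetRedisConfiguration() stands in for building the RedisConfiguration shown above
var pool = new RedisConnectionPool(GetRedisConfiguration());
var redis = pool.GetConnection();
var db = redis.GetDatabase();
db.StringSet("some-key", "some-value");
string value = db.StringGet("some-key");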
If you're using StackExchange.Redis, according to this github issue, you can use the TotalOutstanding property on the connection multiplexer object.
Here is an implementation I came up with that is working correctly:
public static int POOL_SIZE = 100;
private static readonly Object lockPoolRoundRobin = new Object();
private static Lazy<Context>[] lazyConnection = null;
//Static initializer to be executed once on the first call
private static void InitConnectionPool()
{
lock (lockPoolRoundRobin)
{
if (lazyConnection == null) {
lazyConnection = new Lazy<Context>[POOL_SIZE];
}
for (int i = 0; i < POOL_SIZE; i++){
if (lazyConnection[i] == null)
lazyConnection[i] = new Lazy<Context>(() => new Context("YOUR_CONNECTION_STRING", new CachingFramework.Redis.Serializers.JsonSerializer()));
}
}
}
private static Context GetLeastLoadedConnection()
{
//choose the least loaded connection from the pool
/*
var minValue = lazyConnection.Min((lazyCtx) => lazyCtx.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding);
var lazyContext = lazyConnection.Where((lazyCtx) => lazyCtx.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding == minValue).First();
*/
// UPDATE following @Luke Foust's comment below
Lazy<Context> lazyContext;
var loadedLazys = lazyConnection.Where((lazy) => lazy.IsValueCreated);
if(loadedLazys.Count()==lazyConnection.Count()){
var minValue = loadedLazys.Min((lazy) => lazy.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding);
lazyContext = loadedLazys.First((lazy) => lazy.Value.GetConnectionMultiplexer().GetCounters().TotalOutstanding == minValue);
}else{
lazyContext = lazyConnection[loadedLazys.Count()];
}
return lazyContext.Value;
}
private static Context Connection
{
get
{
lock (lockPoolRoundRobin)
{
return GetLeastLoadedConnection();
}
}
}
public RedisCacheService()
{
InitConnectionPool();
}
