StackExchange.Redis Async calls slower than sync calls - c#

I understand that the point of asynchronous methods is not to improve performance, but I am finding that the asynchronous methods in StackExchange.Redis are taking a lot longer than the sync methods.
public static async Task<bool> GetScoresFromSetAsync(int score, string name)
{
string redisConnection = ConfigurationManager.AppSettings["RedisAccount"].ToString();
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(redisConnection);
IDatabase _cache = connection.GetDatabase();
List<string> scores = new List<string>();
var resultAsync = await _cache.SortedSetRangeByScoreAsync(name, score, score);
var result = _cache.SortedSetRangeByScore(name, score, score);
return true;
}
The async call is taking about 5000 ms while the non-async one is taking about 30 ms on average. My Redis is hosted on Azure. Any thoughts?
Edit: I am talking about a single request here. The SortedSetRangeByScore API call returns within 30 ms, while the SortedSetRangeByScoreAsync API call returns within 5000 ms.
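Side note on the snippet above: ConnectionMultiplexer is designed to be created once and shared rather than connected per call. A minimal sketch of that pattern (not part of the original question; the "RedisAccount" key is taken from the snippet above):
using System;
using System.Configuration;
using StackExchange.Redis;
public static class RedisConnection
{
    // Created once per process and reused; ConnectionMultiplexer.Connect() is relatively expensive.
    private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(ConfigurationManager.AppSettings["RedisAccount"]));
    // GetDatabase() is cheap and can be called per operation.
    public static IDatabase Cache => lazyConnection.Value.GetDatabase();
}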

I'm wondering how you are measuring the latency to compare. I tried measuring it with the following code, and the time taken by SE.Redis for async vs. sync came out to be pretty close. I hope this helps.
My client code is running on an Azure IaaS VM and connecting to an Azure Redis Cache in the same region.
Measuring sync vs async for sorted set length 10000, iterations 10000
10000 sync calls completed in average 1.41190622 ms
10000 async calls completed in average 1.43989741 ms
Measuring sync vs async for sorted set length 100000, iterations 1
1 sync calls completed in average 0.9513 ms
1 async calls completed in average 1.1436 ms
using StackExchange.Redis;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
namespace RedisLatency
{
class Program
{
private const string host = "myazurecache.redis.cache.windows.net";
private const string password = "password";
private static int sortedsetlength;
private static int iterations;
private static IDatabase _cache;
static void Main(string[] args)
{
sortedsetlength = Int32.Parse(args[0]);
iterations = Int32.Parse(args[1]);
CreateMultiplexer(host,password);
PopulateTestData();
RunTestSync();
RunTestAsync();
}
private static void CreateMultiplexer(string host, string password)
{
Console.WriteLine("Measuring sync vs async for sorted set length {0}, iteration {1}", sortedsetlength,iterations);
ConfigurationOptions configoptions = new ConfigurationOptions();
configoptions.EndPoints.Add(host);
configoptions.Password = password;
configoptions.Ssl = true;
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(configoptions);
_cache = connection.GetDatabase();
}
private static void PopulateTestData()
{
for (int i = 0; i < sortedsetlength; i++)
{
_cache.SortedSetAdd("testsorted", "user" + i, i);
}
}
static void RunTestSync()
{
for (int warmup = 0; warmup < 100; warmup++)
{
MeasureSync();
}
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations;i++ )
{
MeasureSync();
}
sw.Stop();
Console.WriteLine("{0} sync calls completed in average {1} ms", iterations, sw.Elapsed.TotalMilliseconds/iterations);
}
async static void RunTestAsync()
{
//warm up
for (int warmup = 0; warmup < 100; warmup++)
{
MeasureAsync().Wait();
}
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
MeasureAsync().Wait();
}
sw.Stop();
Console.WriteLine("{0} async calls completed in average {1} ms", iterations, sw.Elapsed.TotalMilliseconds/iterations);
}
static public void MeasureSync()
{
var result = _cache.SortedSetRangeByScore("testsorted", 1.0, sortedsetlength / 1.0);
}
async static public Task MeasureAsync()
{
var result = await _cache.SortedSetRangeByScoreAsync("testsorted", 1.0, sortedsetlength / 1.0);
}
}
}
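(To reproduce numbers like the ones above, the program takes the sorted set length and the number of iterations as its two command-line arguments, e.g. 10000 10000 for the first run, with the host and password constants pointing at your own cache.)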

Related

Event Hub - not getting expected throughput

I have set up an instance of Event Hub with 20 throughput units and 32 partitions on the Standard tier. As per the documentation, every throughput unit equates to 1 MB/second, so ideally I should be getting a throughput of 20 MB/second, or 1.2 GB/minute. The namespace has only one event hub and I am the only user. The event hub is set up in West US, which is the option closest to where the requests are sent from.
However, I see that it takes at least 10 minutes to send 1.77 GB of data. I am using async batch calls and packing each request up to the 1 MB limit. I see a vast variance in the time taken by the SendBatchAsync call - it varies from 0.15 to 25 seconds.
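(Put differently, 1.77 GB in 10 minutes is roughly 1770 MB / 600 s ≈ 3 MB/s, far below the expected 20 MB/s.)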
Here is my code:
(Please note: I am constrained to use .NET Framework 4.5)
static EventHubClient eventHubClient;
static Dictionary<int, List<EventData>> events = new Dictionary<int, List<EventData>>();
static Dictionary<int, long> batchSizes = new Dictionary<int, long>();
static long threshold = (long)(1e6 - 1000);
static SemaphoreSlim concurrencySemaphore;
static int maxConcurrency = 1;
static void Main()
{
eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, eventHubName);
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
using (concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
{
foreach (string record in GetRecords())
{
Tuple<int, EventData> currentEventDetails = GetEventData(record);
int partitionId = currentEventDetails.Item1;
EventData currentEvent = currentEventDetails.Item2;
BatchOrSendAsync(partitionId, currentEvent);
}
SendRemainingAsync();
}
stopWatch.Stop();
Console.WriteLine(string.Format("### total time taken = {0}", stopWatch.Elapsed.TotalSeconds.ToString()));
}
static async void BatchOrSendAsync(int partitionId, EventData currentEvent)
{
long batchSize = 0;
batchSizes.TryGetValue(partitionId, out batchSize);
long currentEventSize = currentEvent.SerializedSizeInBytes;
if( batchSize + currentEventSize > threshold)
{
List<EventData> eventsToSend = events[partitionId];
if (eventsToSend == null || eventsToSend.Count == 0)
{
if (currentEventSize > threshold)
throw new Exception("found event with size above threshold");
return;
}
concurrencySemaphore.Wait();
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
await eventHubClient.SendBatchAsync(eventsToSend);
stopWatch.Stop();
Console.WriteLine(stopWatch.Elapsed.TotalSeconds.ToString());
concurrencySemaphore.Release();
events[partitionId] = new List<EventData> { currentEvent };
batchSizes[partitionId] = currentEventSize;
}
else
{
if (!events.ContainsKey(partitionId))
{
events[partitionId] = new List<EventData>();
batchSizes[partitionId] = 0;
}
events[partitionId].Add(currentEvent);
batchSizes[partitionId] += currentEventSize;
}
}
static async void SendRemainingAsync()
{
foreach(int partitionId in events.Keys)
{
concurrencySemaphore.Wait();
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
await eventHubClient.SendBatchAsync(events[partitionId]);
stopWatch.Stop();
Console.WriteLine(stopWatch.Elapsed.TotalSeconds.ToString());
concurrencySemaphore.Release();
}
}
Note: increasing maxConcurrency for the semaphore only degrades the overall time taken, and the SendBatchAsync call starts erroring out when maxConcurrency is 10.
What should I do to improve throughput?

Task.Delay delays too long

I've created a multi-task program. This program has around 20 main tasks, and each of them calls some sub-tasks to perform file I/O. I wanted each main task to repeat periodically every 500 ms, so I added Task.Delay(500) to the code.
The problem is Task.Delay delays a lot more than 500ms sometimes. There is a case it delays more than 3 seconds.
How can I fix it?
The original program is so big that I created a sample program below.
(1) If Task.Delay is on, over-delay happens.
(2) If Thread.Sleep is on, over-delay doesn't happen.
ThreadPool.SetMinThreads() doesn't seem to resolve it.
Thanks.
class Program
{
const int DELAY_TIME = 500;
const int TASKS = 100;
const int WAITS = 100;
const int WARNING_THRESHOLD = 100;
static void Main(string[] args)
{
//ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);
Console.WriteLine("*** Start...");
Test();
Console.WriteLine("*** Done!");
Console.ReadKey();
}
private static void Test()
{
List<Task> tasks = new List<Task>();
for (int taskId = 0; taskId < TASKS; taskId++)
{
tasks.Add(DelaysAsync(taskId));
}
Task.WaitAll(tasks.ToArray());
}
static async Task DelaysAsync(int taskId)
{
await Task.Yield();
Stopwatch sw = new Stopwatch();
for (int i = 0; i < WAITS; i++)
{
sw.Reset();
sw.Start();
await Task.Delay(DELAY_TIME).ConfigureAwait(false); // (1)
//Thread.Sleep(DELAY_TIME); // (2)
sw.Stop();
Console.Write($"Task({taskId})_iter({i}) Elapsed={sw.ElapsedMilliseconds}");
if (sw.ElapsedMilliseconds > DELAY_TIME + WARNING_THRESHOLD)
{
Console.WriteLine(" *********** Too late!! ************");
}
else
{
Console.WriteLine();
}
}
}
}
I’ve run your test with .NET 4.6.1 and VS 2017. Here, on a Xeon E3-1230 v3 CPU, it never printed “Too late”; the Elapsed value stayed within 498-527 ms.
The Thread.Sleep version performed very similarly, 500-528 ms per sleep, but the total execution time was much longer because the runtime refused to create 100 OS threads (that's far too many), so fewer than 100 DelaysAsync functions ran in parallel. The debugger showed 27 worker threads in the Thread.Sleep version and only 9 worker threads in the Task.Delay version.
I think you have other apps on your PC creating too many threads and consuming too much CPU. Windows tries to load-balance threads evenly, so when the whole system is CPU bound, more native threads means more CPU time and therefore less jitter.
If that’s your case and you want to prioritize your app in the scheduler, raise the priority of your process instead of using Thread.Sleep and more threads.
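For example, a minimal sketch (not from the original answer; the follow-up code below uses RealTime, but High or AboveNormal is usually safer for a user app):
using System.Diagnostics;
// e.g. at the start of Main():
using (Process p = Process.GetCurrentProcess())
{
    // High raises the scheduling priority without the system-starving risks of RealTime.
    p.PriorityClass = ProcessPriorityClass.High;
}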
It seems I have found the answer. I changed the previous sample program as shown below. The main difference is whether Stopwatch or DateTime is used to measure the durations.
In the Stopwatch version, many delays happen.
In the DateTime version, few or no delays happen.
My guess is that the cause is contention on the timer used by both Stopwatch and Task.Delay, so I concluded that I should not use Stopwatch and Task.Delay together.
Thank you.
class Program
{
const int DELAY_TIME = 500;
const int TASKS = 100;
const int WAITS = 100;
const int WARNING_THRESHOLD = 500;
static void Main(string[] args)
{
using (Process p = Process.GetCurrentProcess())
{
p.PriorityClass = ProcessPriorityClass.RealTime;
//ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);
int workerThreads;
int completionPortThreads;
ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
Console.WriteLine($"{workerThreads}, {completionPortThreads}");
Console.WriteLine("*** Start...");
Test();
Console.WriteLine("*** Done!");
Console.ReadKey();
}
}
private static void Test()
{
int totalCount = 0;
List<Task<int>> tasks = new List<Task<int>>();
for (int taskId = 0; taskId < TASKS; taskId++)
{
//tasks.Add(DelaysWithStopWatchAsync(taskId)); // many delays
tasks.Add(DelaysWithDateTimeAsync(taskId)); // no delays
}
Task.WaitAll(tasks.ToArray());
foreach (var task in tasks)
{
totalCount += task.Result;
}
Console.WriteLine($"Total counts of deday = {totalCount}");
}
static async Task<int> DelaysWithStopWatchAsync(int taskId)
{
await Task.Yield();
int count = 0;
Stopwatch sw = new Stopwatch();
for (int i = 0; i < WAITS; i++)
{
sw.Reset();
sw.Start();
await Task.Delay(DELAY_TIME).ConfigureAwait(false); // (1)
//Thread.Sleep(DELAY_TIME); // (2)
sw.Stop();
Console.Write($"task({taskId})_iter({i}) elapsed={sw.ElapsedMilliseconds}");
if (sw.ElapsedMilliseconds > DELAY_TIME + WARNING_THRESHOLD)
{
Console.WriteLine($" *********** Too late!! ************");
count++;
}
else
{
Console.WriteLine();
}
}
return count;
}
static async Task<int> DelaysWithDateTimeAsync(int taskId)
{
await Task.Yield();
int count = 0;
for (int i = 0; i < WAITS; i++)
{
DateTime start = DateTime.Now;
await Task.Delay(DELAY_TIME).ConfigureAwait(false); // (1)
//Thread.Sleep(DELAY_TIME); // (2)
DateTime end = DateTime.Now;
int duration = (end - start).Milliseconds;
Console.Write($"Task({taskId})_iter({i}) Elapsed={duration}");
if (duration > DELAY_TIME + WARNING_THRESHOLD)
{
Console.WriteLine($" *********** Too late!! ************");
count++;
}
else
{
Console.WriteLine();
}
}
return count;
}
}
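(One thing worth double-checking in the DateTime version: (end - start).Milliseconds is only the milliseconds component of the TimeSpan, capped at 999, whereas (end - start).TotalMilliseconds is the full elapsed time, so the two versions are not comparing quite the same measurement.)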

Stopwatch and ReadKey don't work properly

I'm working on a multi-threaded password cracker (numbers only). It must show how much time has passed when the password is found. I used Stopwatch for this, but Stopwatch doesn't work inside the functions. Here is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Diagnostics;
using System.Threading;
namespace ConsoleApplication4
{
class Program
{
static void Main(string[] args)
{
int psw = 14995399;
Stopwatch time = new Stopwatch();
Thread Thread1 = new Thread(islem1);
Thread Thread2 = new Thread(islem2);
Thread Thread3 = new Thread(islem3);
Thread Thread4 = new Thread(islem4);
time.Start();
Thread1.Start();
Thread2.Start();
Thread3.Start();
Thread4.Start();
Thread.Sleep(1000);
time.Stop();
System.Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
static void islem1()
{
for (int i = 00000000; i < 25000000; i++)
{
int psw = 14995399;
if (i == psw)
{
System.Console.WriteLine("Şifre=" + i);
time.Stop();
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
}
}
static void islem2()
{
for (int i = 25000000; i < 50000000; i++)
{
int psw = 14995399;
if (i == psw)
{
System.Console.WriteLine("Şifre=" + i);
time.Stop();
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
}
}
static void islem3()
{
for (int i = 50000000; i < 75000000; i++)
{
int psw = 14995399;
if (i == psw)
{
System.Console.WriteLine("Şifre=" + i);
time.Stop();
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
}
}
static void islem4()
{
for (int i = 75000000; i < 100000000; i++)
{
int psw = 14995399;
if (i == psw)
{
System.Console.WriteLine("Şifre=" + i);
time.Stop();
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
}
}
}
}
It's because your variable
Stopwatch time = new Stopwatch();
is declared inside Main, so its scope of visibility is limited to Main and the other functions cannot see it. You can pass the Stopwatch as a parameter to your functions:
Thread1.Start(time);
Or declare it as a class field:
class Program
{
private static Stopwatch time = new Stopwatch();
...
}
Note that you have just one Stopwatch instance, so if you stop it in one thread it is stopped for the whole application and the elapsed time will not change after that.
You should also delete time.Stop(); from your Main method, because it can skew the result when your threads run longer than 1 second.
There is also no reason to call Thread.Sleep(); just delete that line and your code will continue to work as expected.
Finally, you can delete Console.ReadKey() from your thread functions because your main thread already waits for user input.
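For completeness, a minimal sketch of the parameter approach (only islem1 shown; the thread delegate must take an object when the Stopwatch is passed via Thread.Start(time)):
static void islem1(object state)
{
    Stopwatch time = (Stopwatch)state;
    int psw = 14995399;
    for (int i = 0; i < 25000000; i++)
    {
        if (i == psw)
        {
            Console.WriteLine("Şifre=" + i);
            time.Stop();
            Console.WriteLine("Time elapsed: {0}", time.Elapsed);
        }
    }
}
// started with: new Thread(islem1).Start(time);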
A complete solution with a configurable thread count can show interesting results for different numbers of threads. Try the code below, which illustrates working with thread parameters and reduces the lines of code:
using System;
using System.Diagnostics;
using System.Threading;
namespace ConsoleApplication4
{
internal class Program
{
private class BruteforceParams
{
public int StartNumber { get; set; }
public int EndNumber { get; set; }
}
private const int password = 14995399;
private static readonly Stopwatch time = new Stopwatch();
private static void Main(string[] args)
{
const int maxPassword = 100000000;
Console.WriteLine("Enter number of threads: ");
var threadsCountString = Console.ReadLine();
var threadsCount = int.Parse(threadsCountString);
var threads = new Thread[threadsCount];
for (int i = 0; i < threadsCount; i++)
{
var thread = new Thread(Bruteforce);
threads[i] = thread;
}
time.Start();
for (int i = 0; i < threadsCount; i++)
{
threads[i].Start(new BruteforceParams { StartNumber = i * maxPassword / threadsCount, EndNumber = (i + 1) * maxPassword / threadsCount });
}
Console.ReadKey();
}
private static void Bruteforce(object param)
{
var bp = (BruteforceParams) param;
for (int i = bp.StartNumber; i < bp.EndNumber; i++)
{
if (i == password)
{
Console.WriteLine("Şifre=" + i);
time.Stop();
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
}
}
}
}
}
How do you think time.Stop(); is going to work inside islem1() (or any of the other functions) when you have defined the stopwatch inside the body of Main()? You are bound to get a compilation error saying time does not exist in the current context.
static void islem1()
{
.............
time.Stop(); // this line of code
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
}
}
Rather, you can create a separate watch per method and report that
static void islem1()
{
Stopwatch time = Stopwatch.StartNew();
time.Stop(); // this line of code
Console.WriteLine("Time elapsed: {0}", time.Elapsed);
Console.ReadKey();
}
It's going to be difficult to extract meaningful timings using a single Stopwatch instance.
You might choose to make your timing measurements using a different pattern that creates a new Stopwatch for each measurement.
void Main()
{
var t1 = new Thread(_ => {
var sw = Stopwatch.StartNew();
DoSomething();
Console.WriteLine("took {0}ms", sw.ElapsedMilliseconds);
});
var t2 = new Thread(_ => {
var sw = Stopwatch.StartNew();
DoSomethingElse();
Console.WriteLine("took {0}ms", sw.ElapsedMilliseconds);
});
t1.Start();
t2.Start();
t1.Join();
t2.Join();
Console.ReadKey();
}
void DoSomething()
{
//do something
}
void DoSomethingElse()
{
//do something
}

Restart concurrent tasks as soon as they fail for x number of times

I have a console app that is making HTTP queries and adding/updating products in my database according to the response. Some queries fail and need to be retried a few times.
The way I came up with was to use a dictionary to store the product ID and a Task. Then I can check all the task results and re-run the failed ones.
This is working, but it strikes me as inefficient: tasks are not re-created until all tasks have finished. It would be more efficient if they were restarted immediately, but I can't figure out how to do this. Also, every retry involves a query to the database, as only the ID is stored.
I made a small app that shows how I am currently retrying failed requests.
Can someone suggest a more efficient method for retrying?
class Program
{
private static void Main(string[] args)
{
HttpQuery m = new HttpQuery();
var task = Task.Run(() => m.Start());
Task.WaitAll(task);
Console.WriteLine("Finished");
Console.ReadLine();
}
}
class HttpQuery
{
public async Task Start()
{
// dictionary where the key is a reference to something that needs to be processed and the bool indicates whether it has completed
ConcurrentDictionary<int, Task<bool>> monitor = new ConcurrentDictionary<int, Task<bool>>();
// start async tasks.
Console.WriteLine("starting first try");
for (int i = 0; i < 1000; i++)
{
Console.Write(i+",");
monitor[i] = this.Query(i);
}
// wait for completion
await Task.WhenAll(monitor.Values.ToArray());
Console.WriteLine();
// start retries
// number of retries per query
int retries = 10;
int count = 0;
// check if max retries exceeded or all completed
while (count < retries && monitor.Any(x => x.Value.Result == false))
{
// make list of numbers that failed
List<int> retryList = monitor.Where(x => x.Value.Result == false).Select(x => x.Key).ToList();
Console.WriteLine("starting try number: " + (count+1) + ", Processing: " + retryList.Count);
// create list of tasks to wait for
List<Task<bool>> toWait = new List<Task<bool>>();
foreach (var i in retryList)
{
Console.Write(i + ",");
monitor[i] = this.Query(i);
toWait.Add(monitor[i]);
}
// wait for completion
await Task.WhenAll(toWait.ToArray());
Console.WriteLine();
count++;
}
Console.WriteLine("ended");
Console.ReadLine();
}
public async Task<bool> Query(int i)
{
// simulate a http request that may or may not fail
Random r = new Random();
int delay = i * r.Next(1, 10);
await Task.Delay(delay);
if (r.Next(0,2) == 1)
{
return true;
}
else
{
return false;
}
}
}
You can create another method and wrap all of this ugly retry logic inside it. All of that ugly code goes away :)
public async Task Start()
{
const int MaxNumberOfTries = 10;
List<Task<bool>> tasks = new List<Task<bool>>();
for (int i = 0; i < 1000; i++)
{
tasks.Add(this.QueryWithRetry(i, MaxNumberOfTries));
}
await Task.WhenAll(tasks);
}
public async Task<bool> QueryWithRetry(int i, int numOfTries)
{
int tries = 0;
bool result;
do
{
result = await Query(i);
tries++;
} while (!result && tries < numOfTries);
return result;
}
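If the real HTTP call can also throw, the same wrapper can absorb that; a possible variation (the try/catch and the delay between attempts are assumptions, not part of the original answer):
public async Task<bool> QueryWithRetry(int i, int numOfTries, int delayBetweenTriesMs = 500)
{
    for (int attempt = 1; attempt <= numOfTries; attempt++)
    {
        try
        {
            if (await Query(i))
                return true;
        }
        catch (Exception)
        {
            // treat an exception like a failed attempt; log it here if useful
        }
        if (attempt < numOfTries)
            await Task.Delay(delayBetweenTriesMs);
    }
    return false;
}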

Async Task.Run Not Working

I simply wrote the code below and expected to get 3 text files using the async feature in C#, but I do not see anything:
private async void Form1_Load(object sender, EventArgs e)
{
Task<int> file1 = test();
Task<int> file2 = test();
Task<int> file3 = test();
int output1 = await file1;
int output2 = await file2;
int output3 = await file3;
}
async Task<int> test()
{
return await Task.Run(() =>
{
string content = "";
for (int i = 0; i < 100000; i++)
{
content += i.ToString();
}
System.IO.File.WriteAllText(string.Format(@"c:\test\{0}.txt", new Random().Next(1, 5000)), content);
return 1;
});
}
There are a few potential issues:
Does c:\test\ exist? If not, you'll get an error.
As written, your Random objects might generate the same numbers, since the current system time is used as the seed, and you are doing these at about the same time. You can fix this by making them share a static Random instance. Edit: but you need to synchronize the access to it somehow. I chose a simple lock on the Random instance, which isn't the fastest, but works for this example.
Building a long string that way is very inefficient (e.g. about 43 seconds in Debug mode for me, to do it once). Your tasks might be working just fine, and you don't notice that it's actually doing anything because it takes so long to finish. It can be made much faster by using the StringBuilder class (e.g. about 20 ms).
(This won't affect whether or not it works, but is more of a stylistic thing.) You don't need to use the async and await keywords in your test() method as written; they are redundant, since Task.Run already returns a Task<int>.
This works for me:
private async void Form1_Load(object sender, EventArgs e)
{
Task<int> file1 = test();
Task<int> file2 = test();
Task<int> file3 = test();
int output1 = await file1;
int output2 = await file2;
int output3 = await file3;
}
static Random r = new Random();
Task<int> test()
{
return Task.Run(() =>
{
var content = new StringBuilder();
for (int i = 0; i < 100000; i++)
{
content.Append(i);
}
int n;
lock (r) n = r.Next(1, 5000);
System.IO.File.WriteAllText(string.Format(@"c:\test\{0}.txt", n), content.ToString());
return 1;
});
}
Creating a different Random instance each time can cause the random number generation to produce the same number each time!
Random number generation starts from a seed value; if the same seed is used repeatedly, the same series of numbers is generated.
This is because Random uses the computer's clock as the seed value, and its precision is not fine enough for the computer's processing speed, so instances created in quick succession end up with the same seed.
Use the same Random number generator everywhere, for example:
internal async Task<int> test()
{
return await Task.Run(() =>
{
string content = "";
for (int i = 0; i < 10000; i++)
{
content += i.ToString();
}
System.IO.File.WriteAllText(string.Format(@"c:\test\{0}.txt", MyRandom.Next(1, 5000)), content);
return 1;
});
}
EDIT:
Also, Random is not thread-safe, so you should synchronize access to it:
public static class MyRandom
{
private static Random random = new Random();
public static int Next(int start, int end)
{
lock (random)
{
return random.Next(start,end);
}
}
}
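(On .NET 6 and later there is also the built-in, thread-safe Random.Shared; the lock shown above is the way to go on older frameworks such as the one used here.)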
