It has to be trivial, but I just can't get my head around it.
I need to limit the number of tasks (say connections, emails sent, or button clicks) per amount of time. So e.g. I can send at most 1000 emails per hour.
How can I do that in C#? I don't know and don't care how long each operation takes; I just want to make sure that over the last hour, at most 1000 are executed.
class EventLimiter
{
    private readonly Queue<DateTime> requestTimes;
    private readonly int maxRequests;
    private readonly TimeSpan timeSpan;

    public EventLimiter(int maxRequests, TimeSpan timeSpan)
    {
        this.maxRequests = maxRequests;
        this.timeSpan = timeSpan;
        requestTimes = new Queue<DateTime>(maxRequests);
    }

    // Drop timestamps that have fallen outside the rolling window.
    private void SynchronizeQueue()
    {
        while (requestTimes.Count > 0 && requestTimes.Peek().Add(timeSpan) < DateTime.UtcNow)
            requestTimes.Dequeue();
    }

    public bool CanRequestNow()
    {
        SynchronizeQueue();
        return requestTimes.Count < maxRequests;
    }

    public void EnqueueRequest()
    {
        // Sleep until the oldest request leaves the window.
        while (!CanRequestNow())
            Thread.Sleep(requestTimes.Peek().Add(timeSpan).Subtract(DateTime.UtcNow));
        requestTimes.Enqueue(DateTime.UtcNow);
    }
}
Assuming a rolling hour window:
Maintain a list of when actions were done.
Each time you want to do your action, remove all in the list not within the hour.
If there are fewer than 1000 then do the action and add a record to your list.
Assuming hourly:
Create a proxy method and a counter that is incremented for every action and reset to zero on the hour.
Do your action if the counter is < 1000.
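The hourly (fixed-window) variant described above can be sketched as follows. This is a rough illustration; the class and member names are my own, and a real implementation would need the same thread-safety considerations discussed later in this thread:

```csharp
using System;

class HourlyLimiter
{
    private readonly int _max;
    private readonly object _lock = new object();
    private int _count;
    private DateTime _windowStart = DateTime.UtcNow;

    public HourlyLimiter(int max) { _max = max; }

    public bool TryDoAction()
    {
        lock (_lock)
        {
            var now = DateTime.UtcNow;
            // Reset the counter once we cross into a new window.
            if (now - _windowStart >= TimeSpan.FromHours(1))
            {
                _windowStart = now;
                _count = 0;
            }
            if (_count >= _max) return false; // over the limit
            _count++;
            return true;
        }
    }
}
```

Note that a fixed window allows bursts of up to 2x the limit across a window boundary, which is why the rolling-window queue above is stricter.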
The above solution looked fine. Here is my trimmed down version:
public class EmailRateHelper
{
    private int _requestsPerInterval;
    private Queue<DateTime> _history;
    private TimeSpan _interval;

    public EmailRateHelper()
        : this(30, new TimeSpan(0, 1, 0)) { }

    public EmailRateHelper(int requestsPerInterval, TimeSpan interval)
    {
        _requestsPerInterval = requestsPerInterval;
        _history = new Queue<DateTime>();
        _interval = interval;
    }

    public void SleepAsNeeded()
    {
        // UtcNow avoids jumps when the local clock changes (e.g. daylight saving).
        DateTime now = DateTime.UtcNow;
        _history.Enqueue(now);

        if (_history.Count >= _requestsPerInterval)
        {
            var last = _history.Dequeue();
            TimeSpan difference = now - last;

            if (difference < _interval)
            {
                System.Threading.Thread.Sleep(_interval - difference);
            }
        }
    }
}
You can use Rx extensions (How to use the new BufferWithTimeOrCount in Rx that returns IObservable<IObservable<T>> instead of IObservable<IList<T>>), but I would implement the buffering manually by adding an appropriate proxy object.
You may also consider storing {action, time, user} information in a database and getting the number of actions in the last hour from the DB (or a similar persisted store) if you need to handle application pool restarts or crashes. Otherwise a clever user may circumvent your in-memory protection and overload your server.
You can create a persistent counter for every user. Every time you receive a request (for sending an email) you need to check the value of the counter and the date of the counter creation.
If the count is greater than the limit, you refuse the request.
If the date is older than an hour, you reset the counter and set a new creation date.
If the date is current and the count is under the limit, you increase the counter.
Only in the last two cases is the request executed.
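A minimal sketch of that per-user counter logic, with the persistence abstracted behind a hypothetical ICounterStore interface (all names here are mine, not from any real library):

```csharp
using System;

public interface ICounterStore
{
    // Returns the stored (count, creation date) for the user, or null if none exists.
    (int Count, DateTime CreatedUtc)? Get(string userId);
    void Set(string userId, int count, DateTime createdUtc);
}

public class PersistentRateLimiter
{
    private readonly ICounterStore _store;
    private readonly int _limit;

    public PersistentRateLimiter(ICounterStore store, int limit)
    {
        _store = store;
        _limit = limit;
    }

    public bool TryAcceptRequest(string userId)
    {
        var now = DateTime.UtcNow;
        var entry = _store.Get(userId);

        // No counter yet, or it is older than an hour: start a fresh window.
        if (entry == null || now - entry.Value.CreatedUtc > TimeSpan.FromHours(1))
        {
            _store.Set(userId, 1, now);
            return true;
        }

        // Over the limit: refuse the request.
        if (entry.Value.Count >= _limit)
            return false;

        _store.Set(userId, entry.Value.Count + 1, entry.Value.CreatedUtc);
        return true;
    }
}
```

A real store would be a database table or key-value server, so the count survives process restarts.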
Related
I have an API that people are calling and I have a database containing statistics of the number of requests. All API requests are made by a user in a company. There's a row in the database per user per company per hour. Example:
| CompanyId | UserId | Date             | Requests |
|-----------|--------|------------------|----------|
| 1         | 100    | 2020-01-30 14:00 | 4527     |
| 1         | 100    | 2020-01-30 15:00 | 43       |
| 2         | 201    | 2020-01-30 14:00 | 161      |
To avoid having to make a database call on every request, I've developed a service class in C# maintaining an in-memory representation of the statistics stored in a database:
public class StatisticsService
{
    private readonly IDatabase database;
    private readonly Dictionary<string, CompanyStats> statsByCompany;
    private DateTime lastTick = DateTime.MinValue;

    public StatisticsService(IDatabase database)
    {
        this.database = database;
        this.statsByCompany = new Dictionary<string, CompanyStats>();
    }

    private class CompanyStats
    {
        public CompanyStats(List<UserStats> userStats)
        {
            UserStats = userStats;
        }

        public List<UserStats> UserStats { get; set; }
    }

    private class UserStats
    {
        public UserStats(string userId, int requests, DateTime hour)
        {
            UserId = userId;
            Requests = requests;
            Hour = hour;
            Updated = DateTime.MinValue;
        }

        public string UserId { get; set; }
        public int Requests { get; set; }
        public DateTime Hour { get; set; }
        public DateTime Updated { get; set; }
    }
}
Every time someone calls the API, I'm calling an increment method on the StatisticsService:
public void Increment(string companyId, string userId)
{
    var utcNow = DateTime.UtcNow;
    EnsureCompanyLoaded(companyId, utcNow);

    var currentHour = new DateTime(utcNow.Year, utcNow.Month, utcNow.Day, utcNow.Hour, 0, 0);
    var stats = statsByCompany[companyId];
    var userStats = stats.UserStats.FirstOrDefault(ls => ls.UserId == userId && ls.Hour == currentHour);
    if (userStats == null)
    {
        var userStatsToAdd = new UserStats(userId, 1, currentHour);
        userStatsToAdd.Updated = utcNow;
        stats.UserStats.Add(userStatsToAdd);
    }
    else
    {
        userStats.Requests++;
        userStats.Updated = utcNow;
    }
}
The method loads the company into the cache if it isn't already there (I will publish EnsureCompanyLoaded in a bit). It then checks whether there is a UserStats object for this user, company, and hour. If not, it creates one and sets Requests to 1. If other requests have already been made for this user, company, and the current hour, it increments the number of requests by 1.
EnsureCompanyLoaded as promised:
private void EnsureCompanyLoaded(string companyId, DateTime utcNow)
{
    if (statsByCompany.ContainsKey(companyId)) return;
    var currentHour = new DateTime(utcNow.Year, utcNow.Month, utcNow.Day, utcNow.Hour, 0, 0);
    var userStats = new List<UserStats>();
    userStats.AddRange(database.GetAllFromThisMonth(companyId));
    statsByCompany[companyId] = new CompanyStats(userStats);
}
The details behind loading the data from the database are hidden away behind the GetAllFromThisMonth method and not important to my question.
Finally, I have a timer that stores any updated results to the database every 5 minutes or when the process shuts down:
public void Tick(object state)
{
    var utcNow = DateTime.UtcNow;
    var currentHour = new DateTime(utcNow.Year, utcNow.Month, utcNow.Day, utcNow.Hour, 0, 0);

    foreach (var companyId in statsByCompany.Keys)
    {
        var usersToUpdate = statsByCompany[companyId].UserStats.Where(ls => ls.Updated > lastTick);
        foreach (var userStats in usersToUpdate)
        {
            database.Save(GenerateSomeEntity(userStats.Requests));
            userStats.Updated = DateTime.MinValue;
        }
    }

    // If we moved into a new month since the last tick, clear the entire cache
    if (lastTick.Month != utcNow.Month)
    {
        statsByCompany.Clear();
    }

    lastTick = utcNow;
}
I've done some single-threaded testing of the code and the concept seems to work as expected. Now I want to make this thread-safe, but I can't seem to figure out the best way to implement it. I've looked at ConcurrentDictionary, which might be needed. The main problem isn't the dictionary methods themselves, though. If two threads call Increment simultaneously, they could both end up in the EnsureCompanyLoaded method. I know of the lock concept in C#, but I'm afraid that simply locking on every invocation will slow down performance.
Has anyone needed something similar and got some good pointers on which direction I could go?
When keeping counters in memory like this you have two options:
Keep in memory the actual historic value of the counter
Keep in memory only the differential increment of the counter
I have used both approaches and found the second to be simpler, faster and safer. So my suggestion is to stop loading UserStats from the database, and just increment the in-memory counter starting from 0. Then every 5 minutes call a stored procedure that inserts or updates the related database record accordingly (while zeroing the in-memory value). This way you'll eliminate the race conditions at the loading phase, and you'll ensure that every call to Increment is consistently fast.
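A sketch of that differential approach, keeping only per-hour deltas in a ConcurrentDictionary (the key format and names here are my own):

```csharp
using System;
using System.Collections.Concurrent;

public class DeltaCounter
{
    // Key: "companyId/userId/yyyyMMddHH"; value: increments since the last flush.
    private readonly ConcurrentDictionary<string, int> _deltas =
        new ConcurrentDictionary<string, int>();

    public void Increment(string companyId, string userId, DateTime utcNow)
    {
        string key = $"{companyId}/{userId}/{utcNow:yyyyMMddHH}";
        // AddOrUpdate is atomic per key, so no explicit lock is needed here.
        _deltas.AddOrUpdate(key, 1, (_, current) => current + 1);
    }

    public int PendingFor(string key) =>
        _deltas.TryGetValue(key, out var value) ? value : 0;
}
```

Every 5 minutes the flush job would take these deltas and apply them to the database with an UPSERT-style statement, then zero them out.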
For thread-safety you can use either a normal Dictionary with a lock, or a ConcurrentDictionary without a lock. The first option is more flexible, the second more efficient. If you choose Dictionary+lock, use the lock only for protecting the internal state of the Dictionary. Don't hold the lock while updating the database. Before updating each counter, take the current value from the dictionary and remove the entry in one atomic operation, and then issue the database command while other threads are able to recreate the entry again if needed. The ConcurrentDictionary class contains a TryRemove method that can be used to achieve this goal without locking:
public bool TryRemove (TKey key, out TValue value);
It also contains a ToArray method that returns a snapshot of the entries in the dictionary. At first glance it seems that the ConcurrentDictionary suits your needs, so you could use it as a basis of your implementation and see how it goes.
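A flush along those lines might look like this sketch, where the database write is represented by a caller-supplied delegate (a hypothetical stand-in for the stored procedure call):

```csharp
using System;
using System.Collections.Concurrent;

public static class DeltaFlusher
{
    public static void Flush(ConcurrentDictionary<string, int> deltas, Action<string, int> saveDelta)
    {
        // ToArray gives a consistent snapshot; TryRemove then atomically claims
        // each pending delta, while other threads are free to recreate the entry.
        foreach (var entry in deltas.ToArray())
        {
            if (deltas.TryRemove(entry.Key, out int pending) && pending != 0)
            {
                try
                {
                    saveDelta(entry.Key, pending);
                }
                catch
                {
                    // On failure, add the delta back so no increments are lost.
                    deltas.AddOrUpdate(entry.Key, pending, (_, current) => current + pending);
                }
            }
        }
    }
}
```

Because increments between the snapshot and the TryRemove are included in the removed value, no count is ever dropped or double-flushed.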
To avoid having to make a database call on every request, I've developed a service class in C# maintaining an in-memory representation of the statistics stored in a database:
If you want to avoid update race conditions, you should stop doing exactly that.
Databases, by design and by purpose, prevent simple update race conditions. This is a simple count-up operation: a single DML statement, implicitly protected by transactions, journaling and locks. Indeed, that is why calling them a lot is costly.
By adding that service, you are fighting the concurrency control that is already there. You are also moving a DB job outside of the DB, and moving DB jobs outside of the DB is just going to cause issues.
If your worry is speed:
Please read the Speed Rant.
Maybe a distributed database design is the droid you are looking for? Such designs have had a massive surge in popularity since mobile devices proliferated, for both speed and reliability reasons.
In general, to make your code thread-safe:
Use concurrent collections, such as ConcurrentDictionary
Make sure you understand concepts such as the lock statement, Monitor.Wait and Monitor.PulseAll from tutorials. Locks can be slow when the locked operation involves IO (such as disk reads/writes), but for something in RAM there is no need to worry. If you really have some lengthy operation such as IO or HTTP requests, consider using ConcurrentQueue and learn about the producer-consumer pattern to process work from queues with many workers (example).
You can also try a Redis server to cache the database without needing to design something from zero.
You can also make your service a singleton, and update the database only after a value changes. For reading the value, you already have it stored in your service.
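For the specific worry about two threads racing into EnsureCompanyLoaded: a plain lock around the check-then-load is usually fine, because after the first load the lock is only held for the duration of a dictionary lookup. A self-contained sketch of the pattern (names are mine):

```csharp
using System;
using System.Collections.Generic;

public class CompanyCache
{
    private readonly Dictionary<string, List<int>> _statsByCompany =
        new Dictionary<string, List<int>>();
    private readonly object _sync = new object();

    public int Loads; // counts how many times the expensive load actually ran

    private List<int> LoadFromDatabase(string companyId)
    {
        Loads++; // stand-in for the real database call
        return new List<int>();
    }

    public List<int> GetOrLoad(string companyId)
    {
        lock (_sync)
        {
            // Under the lock, only the first caller performs the load;
            // every concurrent caller sees the cached entry instead.
            if (!_statsByCompany.TryGetValue(companyId, out var stats))
            {
                stats = LoadFromDatabase(companyId);
                _statsByCompany[companyId] = stats;
            }
            return stats;
        }
    }
}
```

Only profile and optimize the lock away if it actually shows up as contention; for an in-memory lookup it rarely does.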
I have a service that handles many (~100K) requests per second. Before each request, it checks (for example) if it's started raining, and if so, the behavior changes:
if (IsRaining())
    return "You probably shouldn't go anywhere today.";
// ... otherwise proceed
IsRaining Version 1 (Slowest)
public bool IsRaining() => ExternalService.IsRaining();
In trying to speed up my service I discovered that checking ExternalService.IsRaining is the performance bottleneck.
I decided I didn't care if the status only just changed to "raining", I could cache the result for a small time. (With a slight exception - if it stops raining, I want to know immediately).
I solved that using the following approach:
IsRaining Version 2 (Faster)
bool isRainingCache;
DateTime lastChecked;

public bool IsRaining()
{
    DateTime now = DateTime.UtcNow;
    // If the last time we checked, it was raining, make sure it still is. OR
    // If it hasn't been raining, only check again if we haven't checked in the past second.
    if (isRainingCache || (now - lastChecked) > TimeSpan.FromSeconds(1))
    {
        isRainingCache = ExternalService.IsRaining();
        lastChecked = now;
    }
    return isRainingCache;
}
This made things a lot faster and worked for a long time. Then my service got even faster: it started being called hundreds of thousands of times per second, and benchmarking informed me that the DateTime.UtcNow call alone makes up 50% of all CPU time.
I know what you're thinking:
Is calling DateTime.Now really your bottleneck?
I'm pretty sure it is. I'm calling it hundreds of thousands of times per second. My real service is just a wrapper for a hash-map lookup - calls are meant to be very fast.
My next thought is that rather than checking how long it's been every single call, some timer could asynchronously expire the cached result after some time:
IsRaining Version 3 (Fastest?)
bool? isRainingCache = null;

public bool IsRaining()
{
    // Only check for rain if the cache is empty, or it was raining last time we checked.
    if (isRainingCache == null || isRainingCache == true)
    {
        isRainingCache = ExternalService.IsRaining();
        // If it's not raining, force us to check again after 1 second
        if (isRainingCache == false)
            Task.Run(() => Task.Delay(1000).ContinueWith(_ => { isRainingCache = null; }));
    }
    return isRainingCache == true;
}
The above (untested) would speed things along, but I feel like this leaves me with several new problems:
It feels abusive to "fire and forget" a Task like this (especially as often as once per second).
If my service is disposed or finalized, I would be leaving queued tasks lying around. I feel like I need to hold on to the task or a cancellation token.
I'm generally inexperienced with TPL, but I feel like it's not appropriate to use Timers or Threads here, which in my experience can lead to a myriad of other shutdown and cleanup issues.
If anyone has any suggestions for a better approach, I would be very appreciative.
I've got several cases like this, so I'm thinking it would be nice to abstract the solution into its own wrapper class, something like:
// Calls the getter at most once per 1000 ms, returns a cached value otherwise.
public Throttled<bool> IsRaining = new Throttled<bool>(() => Service.IsRaining, 1000);
If you change your code to use Environment.TickCount, you should notice a speedup. This is probably going to be the cheapest timer you can check.
@Fabjan's answer may be better, though, if you truly are seeing this method hit 100,000 times a second.
bool isRainingCache;
int lastChecked = Environment.TickCount - 1001;

public bool IsRaining()
{
    int now = Environment.TickCount;
    // If the last time we checked, it was raining, make sure it still is. OR
    // If it hasn't been raining, only check again if we haven't checked in the past second.
    if (isRainingCache || unchecked(now - lastChecked) > 1000)
    {
        isRainingCache = ExternalService.IsRaining();
        lastChecked = now;
    }
    return isRainingCache;
}
A simple rewrite to use Stopwatch instead of DateTime.Now reduces the overhead (for this isolated part) quite significantly.
(Since another answer here posted Environment.TickCount, I added it for completeness; it has the lowest overhead of them all. Note that this value wraps around after roughly 24-25 days of uptime, when it goes negative, so any solution needs to take that into account. The answer by @Cory Nelson does: it uses unchecked subtraction so the comparison keeps working across the wrap.)
void Main()
{
    BenchmarkSwitcher.FromAssembly(GetType().Assembly).RunAll();
}

public class Benchmarks
{
    private DateTime _Last = DateTime.Now;
    private DateTime _Next = DateTime.Now.AddSeconds(1);
    private Stopwatch _Stopwatch = Stopwatch.StartNew();
    private int _NextTick = Environment.TickCount + 1000;

    [Benchmark]
    public void ReadDateTime()
    {
        bool areWeThereYet = DateTime.Now >= _Last.AddSeconds(1);
    }

    [Benchmark]
    public void ReadDateTimeAhead()
    {
        bool areWeThereYet = DateTime.Now >= _Next;
    }

    [Benchmark]
    public void ReadStopwatch()
    {
        bool areWeThereYet = _Stopwatch.ElapsedMilliseconds >= 1000;
    }

    [Benchmark]
    public void ReadEnvironmentTick()
    {
        bool areWeThereYet = Environment.TickCount > _NextTick;
    }
}
Output:
Method | Mean | Error | StdDev |
-------------------- |-----------:|----------:|----------:|
ReadDateTime | 220.958 ns | 4.3334 ns | 4.8166 ns |
ReadDateTimeAhead | 214.025 ns | 0.8364 ns | 0.7414 ns |
ReadStopwatch | 25.365 ns | 0.1805 ns | 0.1689 ns |
ReadEnvironmentTick | 1.832 ns | 0.0163 ns | 0.0153 ns |
So a simple change to this should reduce the overhead for this isolated part of your code:
bool isRainingCache;
Stopwatch stopwatch = Stopwatch.StartNew();

public bool IsRaining()
{
    // If the last time we checked, it was raining, make sure it still is. OR
    // If it hasn't been raining, only check again if we haven't checked in the past second.
    if (isRainingCache || stopwatch.ElapsedMilliseconds > 1000)
    {
        isRainingCache = ExternalService.IsRaining();
        stopwatch.Restart();
    }
    return isRainingCache;
}
The fact that the DateTime.Now call is the bottleneck of the application indicates that something might be wrong with the architecture. What's possibly wrong here is that we're updating the cache inside a method that should only get the latest value and return it. If we split up updating the cache and getting the latest value, we'd get something along these lines:
const int UpdateCacheInterval = 300;
// we use the volatile keyword as we access this variable from different threads
private volatile bool isRainingCache;

private Task UpdateCacheTask { get; set; }
// Used to cancel the background task when required
private CancellationTokenSource CancellationTokenSource = new CancellationTokenSource();

private void InitializeCache()
{
    UpdateCacheTask = Task.Run(async () =>
    {
        while (!CancellationTokenSource.Token.IsCancellationRequested)
        {
            await Task.Delay(UpdateCacheInterval);
            isRainingCache = ExternalService.IsRaining();
        }
    }, CancellationTokenSource.Token);
}

public bool IsRaining()
{
    // UpdateCacheInterval is short enough that the cached value can never
    // be more than a fraction of a second stale
    return isRainingCache;
}

// To stop the task execution
public async Task Stop()
{
    CancellationTokenSource.Cancel();
    await UpdateCacheTask;
}
I'm generally inexperienced with TPL, but I feel like it's not appropriate to use Timers or Threads here, which in my experience can lead to a myriad of other shutdown and cleanup issues
It's perfectly fine to use timers and threads here, because we need some background worker to update the cache.
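For completeness, the same idea with a System.Threading.Timer, where disposing the timer is the whole cleanup story (the interval value here is illustrative):

```csharp
using System;
using System.Threading;

public sealed class RainCache : IDisposable
{
    private volatile bool _isRaining;
    private readonly Timer _timer;

    public RainCache(Func<bool> check, int intervalMs)
    {
        // Refresh the cached value periodically; reads never touch the service.
        _timer = new Timer(_ => _isRaining = check(), null, 0, intervalMs);
    }

    public bool IsRaining => _isRaining;

    public void Dispose() => _timer.Dispose();
}
```

Compared to the Task-based loop above, the timer version has no cancellation token to manage, at the cost of slightly less control over overlapping callbacks.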
Thanks for the different approaches. If anyone is curious, I did end up abstracting this functionality into a re-usable class, so I can go:
private static readonly Throttled<bool> ThrottledIsRaining =
    new Throttled<bool>(ExternalService.IsRaining, 1000);

public static bool IsRaining()
{
    bool cachedIsRaining = ThrottledIsRaining.Value;
    // This extra bit satisfies my special case - bypass the cache while it's raining
    if (!cachedIsRaining) return false;
    return ThrottledIsRaining.ForceGetUpdatedValue();
}
/// <summary>Similar to <see cref="Lazy{T}"/>. Wraps an expensive getter
/// for a value by caching the result and only invoking the supplied getter
/// to update the value if the specified cache expiry time has elapsed.</summary>
/// <typeparam name="T">The type of underlying value.</typeparam>
public class Throttled<T>
{
    #region Private Fields

    /// <summary>The time (in milliseconds) we must cache the value after
    /// it has been retrieved.</summary>
    private readonly int _cacheTime;

    /// <summary>Prevents multiple threads from updating the value simultaneously.</summary>
    private readonly object _updateLock = new object();

    /// <summary>The function used to retrieve the underlying value.</summary>
    private readonly Func<T> _getValue;

    /// <summary>The cached result from the last time the underlying value was retrieved.</summary>
    private T _cachedValue;

    /// <summary>The last time the value was retrieved.</summary>
    private volatile int _lastRetrieved;

    #endregion Private Fields

    /// <summary>Get the underlying value, updating the result if the cache has expired.</summary>
    public T Value
    {
        get
        {
            int now = Environment.TickCount;
            // If the cached value has expired, update it
            if (unchecked(now - _lastRetrieved) > _cacheTime)
            {
                lock (_updateLock)
                {
                    // Upon acquiring the lock, ensure another thread didn't update it first.
                    if (unchecked(now - _lastRetrieved) > _cacheTime)
                        return ForceGetUpdatedValue();
                }
            }
            return _cachedValue;
        }
    }

    /// <summary>Construct a new throttled value getter.</summary>
    /// <param name="getValue">The function used to retrieve the underlying value.</param>
    /// <param name="cacheTime">The time (in milliseconds) we must cache the value after
    /// it has been retrieved.</param>
    public Throttled(Func<T> getValue, int cacheTime)
    {
        _getValue = getValue;
        _cacheTime = cacheTime;
        _lastRetrieved = unchecked(Environment.TickCount - cacheTime);
    }

    /// <summary>Retrieve the current value, regardless of whether
    /// the current cached value has expired.</summary>
    public T ForceGetUpdatedValue()
    {
        _cachedValue = _getValue();
        _lastRetrieved = Environment.TickCount;
        return _cachedValue;
    }

    /// <summary>Allows instances of this class to be cast to the underlying
    /// <typeparamref name="T"/> value.</summary>
    public static explicit operator T(Throttled<T> t) => t.Value;
}
I decided to minimize the expiry-check time using @Cory Nelson's unchecked TickCount method. While an asynchronous expiry mechanism should be faster, I found it not worth the complexity of maintaining additional disposable resources and worrying about additional threading and cleanup issues.
I also took into account @Servy's warning about race conditions that might arise when multiple threads access the same throttled value. The addition of a lock avoids unnecessarily updating the value more than once within the expiry window.
Let me know if you think I missed anything. Thanks everyone.
I'm trying to control access to an object so that it may only be accessed a certain number of times over a given timespan. In one unit test, access is limited to once per second, so 5 accesses should take just over 4 seconds. However, the test is failing on our TFS server, taking only 2 seconds. A stripped-down version of my code is here:
public class RateLimitedSessionStrippedDown<T>
{
    private readonly int _rateLimit;
    private readonly TimeSpan _rateLimitSpan;
    private readonly T _instance;
    private readonly object _lock;
    private DateTime _lastReset;
    private DateTime _lastUse;
    private int _retrievalsSinceLastReset;

    public RateLimitedSessionStrippedDown(int limitAmount, TimeSpan limitSpan, T instance)
    {
        _rateLimit = limitAmount;
        _rateLimitSpan = limitSpan;
        _lastUse = DateTime.UtcNow;
        _instance = instance;
        _lock = new object();
    }

    private void IncreaseRetrievalCount()
    {
        _retrievalsSinceLastReset++;
    }

    public T GetRateLimitedSession()
    {
        lock (_lock)
        {
            _lastUse = DateTime.UtcNow;
            Block();
            IncreaseRetrievalCount();
            return _instance;
        }
    }

    private void Block()
    {
        while (_retrievalsSinceLastReset >= _rateLimit &&
               _lastReset.Add(_rateLimitSpan) > DateTime.UtcNow)
        {
            Thread.Sleep(TimeSpan.FromMilliseconds(10));
        }
        if (DateTime.UtcNow > _lastReset.Add(_rateLimitSpan))
        {
            _lastReset = DateTime.UtcNow;
            _retrievalsSinceLastReset = 0;
        }
    }
}
While running on my computer, in both Debug and Release, it works fine. However, I have a unit test that fails once I commit to our TFS build server. This is the test:
[Test]
public void TestRateLimitOnePerSecond_AssertTakesAtLeastNMinusOneSeconds()
{
    var rateLimiter = new RateLimitedSessionStrippedDown<object>(1, TimeSpan.FromSeconds(1), new object());
    DateTime start = DateTime.UtcNow;
    for (int i = 0; i < 5; i++)
    {
        rateLimiter.GetRateLimitedSession();
    }
    DateTime end = DateTime.UtcNow;
    Assert.GreaterOrEqual(end.Subtract(start), TimeSpan.FromSeconds(4));
}
I wonder if the loop in the test is being optimised in a way that runs each iteration on a separate thread (or something similar), so that the test completes quicker than it should because Thread.Sleep only blocks the thread it is called on?
Your problem is inside the Block method, and now that I look at the comments, it appears that Henk Holterman has already brought this up.
It will only fail when _lastReset.Add(_rateLimitSpan) and DateTime.UtcNow are exactly equal. This doesn't happen very often, hence the intermittent failures. A fix would be to change > to >= on this line:
if (DateTime.UtcNow > _lastReset.Add(_rateLimitSpan))
It's not intuitive why, unless you understand that DateTime.UtcNow doesn't necessarily return a new value on each call.
Even though DateTime.UtcNow is precise to 100 nanoseconds, its precision is not the same as its accuracy. It relies on the machine's timer interval, which ranges from 1 to 15 ms, but is most often set to 15.625 ms unless you're doing something with multimedia.
You can see this in action with this dotnetfiddle. Unless you have a program open that sets the timer to a different value, like 1 ms, you'll notice that the difference between the ticks is about 150000 ticks, i.e. about 15 ms, the normal system timer interval.
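A rough way to observe this locally (not the linked fiddle; note that on newer runtimes DateTime.UtcNow is considerably more precise, so the step you see depends on OS and framework version):

```csharp
using System;

public static class ClockProbe
{
    // Counts how many distinct DateTime.UtcNow values appear over many reads.
    public static int DistinctTimestamps(int iterations)
    {
        long last = DateTime.UtcNow.Ticks;
        int distinct = 1;
        for (int i = 0; i < iterations; i++)
        {
            long ticks = DateTime.UtcNow.Ticks;
            if (ticks != last) { distinct++; last = ticks; }
        }
        return distinct;
    }
}
```

On a .NET Framework machine with the default timer interval, a huge number of consecutive reads typically yields only a handful of distinct values.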
We can also validate this by lifting out the calls to DateTime.UtcNow into temporary variables and comparing them at the end of the method:
private void Block()
{
    var first = DateTime.UtcNow;
    while (_retrievalsSinceLastReset >= _rateLimit &&
           _lastReset.Add(_rateLimitSpan) > first)
    {
        Thread.Sleep(TimeSpan.FromMilliseconds(10));
        first = DateTime.UtcNow;
    }
    var second = DateTime.UtcNow;
    if (second > _lastReset.Add(_rateLimitSpan))
    {
        _lastReset = DateTime.UtcNow;
        _retrievalsSinceLastReset = 0;
    }
    if (first == second)
    {
        Console.WriteLine("DateTime.UtcNow returned same value");
    }
}
On my machine, all five calls to Block printed out DateTime.UtcNow as being equal.
I'm struggling to check whether there is at least a minute between two DateTimes. I created a game in C# and have limited a part of my game to once a minute. Every time this command is executed it runs a method.
The problem is that it triggers even when it hasn't been a minute.
public void _CheckIfBeenAMinute()
{
    string TimeStamp;
    using (IQueryAdapter dbClient = SilverwaveEnvironment.GetDatabaseManager().getQueryreactor())
    {
        dbClient.setQuery("SELECT game_timestamp FROM users WHERE id=" + Session.Id + "");
        TimeStamp = dbClient.getString();
    }
    DateTime TimeStamp_Converted = Convert.ToDateTime(TimeStamp);
    if (TimeStamp_Converted > DateTime.UtcNow.AddMinutes(-1))
    {
        // It has been a minute...
        // But the problem is, even if it hasn't been, it still does this?
        this.SendMessage("You have reached your limit today");
        return;
    }
}
EDIT: I have decided to use TimeSpan. But when I try to get the seconds of the timespan, after it reaches 60 it resets?
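The reset at 60 happens because TimeSpan.Seconds is just the seconds component of the value, while TimeSpan.TotalSeconds (and likewise TotalMinutes) keeps accumulating:

```csharp
using System;

class Program
{
    static void Main()
    {
        var span = TimeSpan.FromSeconds(90);
        Console.WriteLine(span.Seconds);      // 30 (the component: 1 min 30 s)
        Console.WriteLine(span.TotalSeconds); // 90 (total elapsed seconds)
    }
}
```

So for "has at least a minute passed", compare against TotalMinutes or TotalSeconds, never the Seconds component.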
Try
if ((DateTime.UtcNow - TimeStamp_Converted).TotalMinutes >= 1)
It should be:
if (TimeStamp_Converted < DateTime.UtcNow.AddMinutes(-1))
I am trying to display the number of times a method is called from my client Windows Forms application. Below is how the service and client are defined.
In my log file I can see the count incremented per method call, but I am not able to see the total count that I put in the list from my client form.
[ServiceContract]
public interface IOperator
{
    [OperationContract]
    void SendMessage(string strMsgId, string strMessage);

    [OperationContract]
    List<int> GetCount();
}

[ServiceBehavior(Namespace = "http://X.org/MessageService/")]
public class Operator : IOperator
{
    private static readonly object _lock = new object();
    private static int _count;
    private static List<int> _totalCount = new List<int>();

    public static List<int> TotalCount
    {
        get { return _totalCount; }
        set { _totalCount = value; }
    }

    public void SendMessage(string strMsgId, string strMessage)
    {
        if (strMsgId == "02")
        {
            lock (_lock)
            {
                ++_count;
                TotalCount.Add(_count);
            }
            // Write to the log file (inboundMessageLog is a StreamWriter opened elsewhere):
            inboundMessageLog.WriteLine("{0}{1}", "Inbound Message:", strMessage.Substring(549, 27));
            inboundMessageLog.WriteLine("{0}{1}", "count:", _count);
            inboundMessageLog.WriteLine("{0}{1}", "Total Count:", TotalCount.Count);
        }
    }

    public List<int> GetCount()
    {
        return TotalCount;
    }
}
EDIT
I am trying to save that total count somewhere for a given time span and read that count into my text box. I want the total count regardless of the number of clients. TotalCount is static, defined as private static List<int> _totalCount = new List<int>(); with the TotalCount getter.
I didn't explicitly define the InstanceContextMode for the service, and yes, the total count is showing 0.
Client:

var clientA = new SendServiceReference.SendService();
Operator clientB = new Operator();

while ((DateTime.Now - startTime) <= timeoutSpan)
{
    // Send request to the external service; all requests are logged by my service,
    // since I don't have control over the external service.
    sendMessageResult = clientA.SendMessageToExternalService("01", txtRequest.Text);
}

// display the total requests received from client A for the given time span
responseCount.Text = clientB.GetCount().Count.ToString();
You don't indicate what binding you are using or whether you've explicitly defined the InstanceContextMode for the service, but from the behavior you've described it sounds like it's the default, PerSession, which creates a new instance of the service for each client.
What is most likely happening is that you are creating one client to send the messages, which is why you see the counter incremented. You then create a second client (clientB = new Operator();), which creates another instance of the service, which means TotalCount is either 0 or null (since you don't indicate that you get an error, I'm going to guess that TotalCount is 0). In other words, you're no longer accessing/using the instance of the service that incremented the count, but an entirely new instance of the service with its own TotalCount field/property.
There are a few ways to resolve this, depending on what your requirements/needs are.
If you need the total count regardless of the number of clients, you can either make TotalCount static, or you can set the InstanceContextMode to Single. I would discourage using InstanceContextMode.Single as that can lead to scaling problems, and go with a static TotalCount.
If you need the total count by each client, then you will need to use the same client that made the 10 calls in the loop to make the call to GetCount(). For example:
Operator client1 = new Operator();
for (int i = 0; i < 10; i++)
{
    // Send your messages
}
responseCount.Text = client1.GetCount().Count.ToString();
There's an article on CodeProject that has illustrations of the 3 different InstanceContextModes that may be of use for you: Three ways to do WCF instance management
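A minimal sketch of the static-counter option, made thread-safe with Interlocked rather than a lock around a List (names are mine; adapt to the service contract above):

```csharp
using System.Threading;

public static class RequestCounter
{
    private static int _total;

    // Safe to call from any service instance: static state is shared across
    // all instances the host creates, regardless of InstanceContextMode.
    public static int Increment() => Interlocked.Increment(ref _total);

    public static int Total => Volatile.Read(ref _total);
}
```

Because the state lives on the type rather than the instance, every per-session service object reports the same total, which is what the question asks for.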