StackExchange.Redis timeout on only 1 server - c#

When a new box starts up (or presumably when the app pool is recycled), we see a timeout error for every Redis request. What's interesting is the rate: roughly 1 in 30. That is, about 30 boxes will boot up and work fine (the actual call is a Redis lock call) for every 1 box that boots into this faulty state. The error below shows 9k items in the queue. The ConnectionMultiplexer is initialized lazily, per the MS Azure recommendation (though we're not on Azure), and here's the call:
var db = m_dbFactory.GetDatabase();
bool gotLock = db.LockTake(key, value, m_redisLockConfig.RedisLockMaxAgeTimeSpan);
and we're using Ninject to get a singleton of that dbFactory injected in:
kernel.Bind<IRedisDatabaseFactory>().To<RedisDatabaseFactory>().InSingletonScope();
We've had to redeploy the code (recycling the app pool) to fix the issue, or kill the 1 bad box behind the load balancer. Has anyone come across this problem before? Following an Azure troubleshooting link, I see we have 9k items in the queue that haven't been written to the outbound network: https://azure.microsoft.com/en-us/blog/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/
If the connection were not open, however, my Redis db factory would specifically throw an error (which I'm not seeing in our logs). Here's the whole class, showing the ConnectionMultiplexer initialization:
public class RedisDatabaseFactory : IRedisDatabaseFactory
{
private readonly Lazy<IConnectionMultiplexer> m_lazyConnectionMultiplexer;
public RedisDatabaseFactory(IRedisConfig redisConfig)
{
var endPoint = new DnsEndPoint(redisConfig.Host, redisConfig.Port);
var configOptions = new ConfigurationOptions
{
EndPoints = { endPoint },
Password = redisConfig.Password,
ConnectTimeout = 5000,
AbortOnConnectFail = false
};
m_lazyConnectionMultiplexer = new Lazy<IConnectionMultiplexer>(() =>
ConnectionMultiplexer.Connect(configOptions));
}
private IConnectionMultiplexer Connection
{
get { return m_lazyConnectionMultiplexer.Value; }
}
/// <summary>
/// Gets a connected redis database
/// </summary>
/// <exception cref="Exception"></exception>
/// <returns>Connected redis database</returns>
public IDatabase GetDatabase()
{
if (!Connection.IsConnected)
{
throw new Exception("Redis connection failure");
}
return Connection.GetDatabase();
}
}
Here's the stack trace:
System.TimeoutException: Timeout performing SET mykey, inst: 0, mgr: ExecuteSelect, err: never, queue: 9058, qu: 9058, qs: 0, qc: 0, wr: 0, wq: 0, in: 0, ar: 0, IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER: (Busy=1,Free=32766,Min=1,Max=32767), clientName: myclient
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor1 processor, ServerEndPoint server)
at StackExchange.Redis.RedisDatabase.StringSet(RedisKey key, RedisValue value, Nullable1 expiry, When when, CommandFlags flags)
at StackExchange.Redis.RedisDatabase.LockTake(RedisKey key, RedisValue value, TimeSpan expiry, CommandFlags flags)
I changed the name of my key, client name, and removed backticks.

This is really late, but we did eventually make a change that solved the problem. We upgraded to the latest StackExchange.Redis in case the issue was fixed by Marc Gravell and team, but we also made the following change:
m_lazyConnectionMultiplexer = new Lazy<IConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(configOptions), LazyThreadSafetyMode.PublicationOnly);
so that, should the connection multiplexer initialize into a bad state, another would be initialized afterwards. After making those 2 changes, we never saw the issue again. I believe the issue was not actually in the app pool recycle but in our process of regularly tearing down and rebuilding the boxes from an Amazon Machine Image. When they were built back up, occasionally 1 was in a bad state. I wish I had pinpointed the exact fix, but that's what worked for us.

Two things jump out at me from your timeout error message.
Your "qu: 9058" number means that 9058 requests have been queued up locally but haven't yet been sent on the wire. This may mean that your system is taking too long to connect to Redis.
You should probably change your ThreadPool configuration as described here: https://gist.github.com/JonCole/e65411214030f0d823cb. You have a minimum of 1 thread for both the IOCP and WORKER pools, which can cause problems during bursts of traffic, something that is common for many apps during startup.
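For illustration, a minimal sketch of raising the ThreadPool minimums at startup (the values below are placeholders to tune for your workload, not recommendations):
using System;
using System.Threading;
public static class ThreadPoolConfig
{
    // Call once at startup (e.g. in Main or Application_Start).
    public static void RaiseMinThreads()
    {
        int worker, iocp;
        ThreadPool.GetMinThreads(out worker, out iocp);
        // Raise the floor so bursts don't wait on the pool's slow
        // thread-injection rate; never lower an existing minimum.
        ThreadPool.SetMinThreads(Math.Max(worker, 50), Math.Max(iocp, 50));
    }
}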
If that doesn't fix things for you, then you may want to monitor your client side CPU usage. If your client CPU is spiking up around 100%, then your system will just not have enough CPU to keep up with all the work you are trying to give it. Upgrade your client machine to something faster. The default Min Threads in the ThreadPool is 1 in your case, which often indicates that you have only 1 CPU Core, which may not be enough.

Related

C# console app to send email at scheduled times

I've got a C# console app running on Windows Server 2003 whose purpose is to read a table called Notifications and a field called "NotifyDateTime" and send an email when that time is reached. I have it scheduled via Task Scheduler to run hourly, check to see if the NotifyDateTime falls within that hour, and then send the notifications.
It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour.
Is there a lightweight process/console app I could leave running on the server that reads in the day's notifications from the table and issues them exactly when they're due?
I thought of a service, but that seems like overkill.
My suggestion is to write a simple application which uses Quartz.NET.
Create 2 jobs:
The first fires once a day: it reads all awaiting notification times planned for that day from the database and creates triggers based on them.
The second, registered for the triggers prepared by the first job, sends your notifications.
What's more, I strongly advise you to create a Windows service for this purpose, rather than leaving a lone console application constantly running. It can be accidentally terminated by anyone who has access to the server under the same account. Moreover, if the server is restarted, you have to remember to start such an application again manually, whereas a service can be configured to start automatically.
If you're using a web application, you could always host this logic within the IIS application pool process, although that is a bad idea overall: such a process is periodically restarted by default, so you would have to change its default configuration to be sure it still works in the middle of the night when the application is not being used. Otherwise your scheduled tasks will be terminated.
UPDATE (code samples):
Manager class, internal logic for scheduling and unscheduling jobs. For safety reasons implemented as a singleton:
internal class ScheduleManager
{
private static readonly ScheduleManager _instance = new ScheduleManager();
private readonly IScheduler _scheduler;
private ScheduleManager()
{
var properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "notifier";
properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
properties["quartz.threadPool.threadCount"] = "5";
properties["quartz.threadPool.threadPriority"] = "Normal";
var sf = new StdSchedulerFactory(properties);
_scheduler = sf.GetScheduler();
_scheduler.Start();
}
public static ScheduleManager Instance
{
get { return _instance; }
}
public void Schedule(IJobDetail job, ITrigger trigger)
{
_scheduler.ScheduleJob(job, trigger);
}
public void Unschedule(TriggerKey key)
{
_scheduler.UnscheduleJob(key);
}
}
First job, for gathering required information from the database and scheduling notifications (second job):
internal class Setup : IJob
{
public void Execute(IJobExecutionContext context)
{
try
{
foreach (var kvp in DbMock.ScheduleMap)
{
var email = kvp.Value;
var notify = new JobDetailImpl(email, "emailgroup", typeof(Notify))
{
JobDataMap = new JobDataMap {{"email", email}}
};
var time = new DateTimeOffset(DateTime.Parse(kvp.Key).ToUniversalTime());
var trigger = new SimpleTriggerImpl(email, "emailtriggergroup", time);
ScheduleManager.Instance.Schedule(notify, trigger);
}
Console.WriteLine("{0}: all jobs scheduled for today", DateTime.Now);
}
catch (Exception e) { /* log error */ }
}
}
Second job, for sending emails:
internal class Notify: IJob
{
public void Execute(IJobExecutionContext context)
{
try
{
var email = context.MergedJobDataMap.GetString("email");
SendEmail(email);
ScheduleManager.Instance.Unschedule(new TriggerKey(email));
}
catch (Exception e) { /* log error */ }
}
private void SendEmail(string email)
{
Console.WriteLine("{0}: sending email to {1}...", DateTime.Now, email);
}
}
Database mock, just for purposes of this particular example:
internal class DbMock
{
public static IDictionary<string, string> ScheduleMap =
new Dictionary<string, string>
{
{"00:01", "foo#gmail.com"},
{"00:02", "bar#yahoo.com"}
};
}
Main entry of the application:
public class Program
{
public static void Main()
{
FireStarter.Execute();
}
}
public class FireStarter
{
public static void Execute()
{
var setup = new JobDetailImpl("setup", "setupgroup", typeof(Setup));
var midnight = new CronTriggerImpl("setuptrigger", "setuptriggergroup",
"setup", "setupgroup",
DateTime.UtcNow, null, "0 0 0 * * ?");
ScheduleManager.Instance.Schedule(setup, midnight);
}
}
Output: (screenshot of the console output, not reproduced here)
If you're going to use a service, just put this main logic in the OnStart method (I advise starting the actual logic in a separate thread so the service doesn't wait for it to start, and likewise to avoid possible timeouts - not an issue in this particular example obviously, but in general):
protected override void OnStart(string[] args)
{
try
{
var thread = new Thread(x => WatchThread(new ThreadStart(FireStarter.Execute)));
thread.Start();
}
catch (Exception e) { /* log error */ }
}
Then encapsulate the logic in a wrapper, e.g. WatchThread, which will catch any errors from the thread:
private void WatchThread(object pointer)
{
try
{
((Delegate) pointer).DynamicInvoke();
}
catch (Exception e) { /* log error and stop service */ }
}
You are trying to implement a polling approach, where a job monitors a record in the DB for changes.
Here we hit the DB periodically, so if the one-hour delay is later reduced to 1 minute, this solution becomes a performance bottleneck.
Approach 1
For this scenario, use a queue-based approach to avoid issues; you can also scale up the number of instances if you are sending many emails.
I understand there is a program that updates NotifyDateTime in a table; the same program can push a message to the queue indicating that there is a notification to handle.
A Windows service watches this queue for incoming messages; when a message arrives, it performs the required operation (i.e. sending the email).
Approach 2
http://msdn.microsoft.com/en-us/library/vstudio/zxsa8hkf(v=vs.100).aspx
You can also invoke C# code from a SQL Server stored procedure if you are using MS SQL Server. But in that case you are using your SQL Server process to send mail, which is not good practice.
However, you can instead invoke a web service or a WCF service which can send the emails.
But Approach 1 is fault-tolerant, scalable, trackable and asynchronous, and it doesn't trouble your database or app: you have a separate process to send email.
Queues
Use MSMQ, which is part of Windows Server.
You can also try RabbitMQ: https://www.rabbitmq.com/dotnet.html
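As a rough producer-side sketch of Approach 1 using System.Messaging (the queue path and string message body are hypothetical; the consuming Windows service would read from the same queue):
using System.Messaging; // reference System.Messaging.dll
public static class NotificationQueue
{
    // Hypothetical private transactional queue path.
    private const string Path = @".\private$\notifications";
    // Called by the program that updates NotifyDateTime.
    public static void Push(string email)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, true); // true = transactional
        using (var queue = new MessageQueue(Path))
        {
            queue.Send(email, MessageQueueTransactionType.Single);
        }
    }
}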
Pre-scheduled tasks (at undefined times) are generally a pain to handle, as opposed to scheduled tasks where Quartz.NET seems well suited.
Furthermore, another distinction is to be made between fire-and-forget tasks that shouldn't be interrupted or changed (e.g. retries, notifications) and tasks that need to be actively managed (e.g. campaigns or communications).
For the fire-and-forget type of task, a message queue is well suited. If the destination is unreliable, you will have to opt for retry levels (e.g. try to send (max twice), retry after 5 minutes, try to send (max twice), retry after 15 minutes), which at a minimum requires specifying message-specific TTLs with a send queue and a retry queue. Here's an explanation with a link to code to set up a retry-level queue.
The managed pre-scheduled tasks will require a database queue approach (see the CodeProject article on designing a database queue for scheduled tasks). This will allow you to update, remove or reschedule notifications, given that you keep track of ownership identifiers (e.g. specify a user id and you can delete all pending notifications when the user should no longer receive them, such as when deceased or unsubscribed).
Scheduled e-mail tasks (including any communication tasks) require finer grained control (expiration, retry and time-out mechanisms). The best approach to take here is to build a state machine that is able to process the e-mail task through its steps (expiration, pre-validation, pre-mailing steps such as templating, inlining css, making links absolute, adding tracking objects for open tracking, shortening links for click tracking, post-validation and sending and retrying).
Hopefully you are aware that the .NET SmtpClient isn't fully compliant with the MIME specifications and that you should be using a SaaS e-mail provider such as Amazon SES, Mandrill, Mailgun, Customer.io or SendGrid. I'd suggest you look at Mandrill or Mailgun. Also, if you have some time, take a look at MimeKit, which you can use to construct MIME messages for providers that allow sending raw e-mail but don't necessarily support things like attachments/custom headers/DKIM signing.
I hope this sets you on the right path.
Edit
You will have to use a service to poll at specific intervals (e.g. 15 seconds or 1 minute). The database load can be somewhat reduced by checking out a certain number of due tasks at a time and keeping an internal pool of messages due for sending (with a time-out mechanism in place). When no messages are returned, just 'sleep' the polling for a while. I would advise against building such a system against a single table in a database - instead design an independent e-mail scheduling system that you can integrate with.
I would turn it into a service instead.
You can use a System.Threading.Timer with an event handler for each of the scheduled times.
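A minimal sketch of that idea (the names are hypothetical; note the Timer references must be kept alive, or the timers can be garbage collected before they fire):
using System;
using System.Collections.Generic;
using System.Threading;
public class NotificationScheduler
{
    // Hold references so the timers aren't collected before firing.
    private readonly List<Timer> _timers = new List<Timer>();
    public void ScheduleEmail(string email, DateTime dueUtc)
    {
        TimeSpan due = dueUtc - DateTime.UtcNow;
        if (due < TimeSpan.Zero) due = TimeSpan.Zero;
        // A period of -1 ms means "fire exactly once".
        _timers.Add(new Timer(_ => SendEmail(email), null, due, TimeSpan.FromMilliseconds(-1)));
    }
    private void SendEmail(string email) { /* send the notification */ }
}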
Scheduled tasks can be scheduled to run just once at a specific time (as opposed to hourly, daily, etc.), so one option would be to create the scheduled task when the specific field in your database changes.
You don't mention which database you use, but some databases support the notion of a trigger, e.g. in SQL: http://technet.microsoft.com/en-us/library/ms189799.aspx
If you know when the emails need to be sent ahead of time, then I suggest you wait on an event handle with an appropriate timeout. At midnight, look at the table, then wait on an event handle with the timeout set to expire when the next email needs to be sent. After sending the email, wait again with the timeout set based on the next mail that should be sent.
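A sketch of that loop, with hypothetical GetNextDueTimeUtc/SendDueEmails helpers; signalling the event handle (e.g. from the code that inserts notifications) wakes the loop early so a newly added, earlier notification isn't missed:
using System;
using System.Threading;
public class NotifierLoop
{
    private readonly AutoResetEvent _wake = new AutoResetEvent(false);
    // Call from the code that inserts or reschedules notifications.
    public void NudgeSchedule() { _wake.Set(); }
    public void Run()
    {
        while (true)
        {
            TimeSpan wait = GetNextDueTimeUtc() - DateTime.UtcNow; // hypothetical DB read
            if (wait < TimeSpan.Zero) wait = TimeSpan.Zero;
            // WaitOne returns true if signalled early (schedule changed),
            // false on timeout (the next email is due now).
            if (!_wake.WaitOne(wait))
                SendDueEmails(); // hypothetical
        }
    }
    private DateTime GetNextDueTimeUtc() { return DateTime.UtcNow.AddHours(1); }
    private void SendDueEmails() { }
}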
Also, based on your description, this should probably be implemented as a service but it is not required.
I dealt with the same problem about three years ago. I changed the process several times before it was good enough; I'll tell you why:
The first implementation used a special daemon from the web host which called the IIS website. The website checked the caller IP, then checked the database and sent the emails. This worked until one day, when I got a lot of very angry emails from users whose mailboxes I had totally spammed. The drawback of keeping email in the database and sending via SMTP is that there is NOTHING that ensures a DB-to-SMTP transaction. You are never sure whether an email has been successfully sent or not. Sending can succeed, can fail, or can be a false positive or a false negative (the SMTP client tells you the email was not sent, but it was). There was a problem with the SMTP server: it returned false (email not sent), but the email was actually sent. The daemon kept resending the email every hour, all day, before the angry emails appeared.
Second implementation: to prevent spamming, I changed the algorithm so that an email is considered sent even if sending failed (my email notifications were not too important). My first advice is: "Don't launch the daemon too often, because these false-negative SMTP errors make users upset."
After several months there were some changes on the server and the daemon was not working well. I got an idea from Stack Overflow: bind a .NET timer to the web application domain. It wasn't a good idea, because it seems that IIS can restart the application from time to time because of memory leaks, and the timer never fires if the restarts are more frequent than the timer ticks.
The last implementation: the Windows scheduler fires a Python batch script every hour which reads the local website. This fires the ASP.NET code. The advantage is that the Windows scheduler calls the local batch and website reliably. IIS doesn't hang and has restart ability. The timer site is part of my website; it is still one project (you can use a console app instead). Simple is better. It just works!
Your first choice is the correct option in my opinion. Task Scheduler is the MS-recommended way to perform periodic jobs. Moreover, it's flexible, can report failures to ops, and is optimized and amortized amongst all tasks in the system.
Creating any console-type app that runs all the time is fragile. It can be shut down by anyone, needs an open session, and doesn't restart automatically.
The other option is creating some kind of service. It's guaranteed to be running all the time, so that would at least work. But what was your motivation?
"It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour."
Oh yeah, optimization... So you want to add a new permanently running service to your computer so that you can avoid one potentially unneeded SQL query every hour? The cure looks worse than the disease to me.
And I haven't mentioned all the drawbacks of the service yet. On the one hand, your scheduled task uses no resources when it doesn't run; it's very simple and lightweight, and the query is efficient (provided you have the right index).
On the other hand, if your service crashes, it's probably gone for good. It needs a way to be notified of new e-mails that may need to be sent earlier than what's currently scheduled. It permanently uses computer resources, such as memory; worse, it may contain memory leaks.
I think that the cost/benefit ratio is very low for any solution other than the trivial periodic task.

How does PubSub work in BookSleeve/ Redis?

I wonder what the best way is to publish and subscribe to channels using BookSleeve. I currently implement several static methods (see below) that let me publish content to a specific channel, with the newly created channel being stored in private static Dictionary<string, RedisSubscriberConnection> subscribedChannels;.
Is this the right approach, given that I want to publish to channels and subscribe to channels within the same application (note: my wrapper is a static class)? Is it enough to create one channel even if I want to both publish and subscribe? Obviously I would not publish to the same channel that I subscribe to within the same application. But I tested it:
RedisClient.SubscribeToChannel("Test").Wait();
RedisClient.Publish("Test", "Test Message");
and it worked.
Here my questions:
1) Will it be more efficient to setup a dedicated publish channel and a dedicated subscribe channel rather than using one channel for both?
2) What is the difference between "channel" and "PatternSubscription" semantically? My understanding is that I can subscribe to several "topics" through PatternSubscription() on the same channel, correct? But if I want a different callback invoked for each "topic", I would have to set up a channel for each topic, correct? Is that efficient, or would you advise against it?
Here the code snippets.
Thanks!!!
public static Task<long> Publish(string channel, byte[] message)
{
return connection.Publish(channel, message);
}
public static Task SubscribeToChannel(string channelName)
{
string subscriptionString = ChannelSubscriptionString(channelName);
RedisSubscriberConnection channel = connection.GetOpenSubscriberChannel();
subscribedChannels[subscriptionString] = channel;
return channel.PatternSubscribe(subscriptionString, OnSubscribedChannelMessage);
}
public static Task UnsubscribeFromChannel(string channelName)
{
string subscriptionString = ChannelSubscriptionString(channelName);
if (subscribedChannels.Keys.Contains(subscriptionString))
{
RedisSubscriberConnection channel = subscribedChannels[subscriptionString];
Task task = channel.PatternUnsubscribe(subscriptionString);
//remove channel subscription
channel.Close(true);
subscribedChannels.Remove(subscriptionString);
return task;
}
else
{
return null;
}
}
private static string ChannelSubscriptionString(string channelName)
{
return channelName + "*";
}
1: there is only one channel in your example (Test); a channel is just the name used for a particular pub/sub exchange. It is, however, necessary to use 2 connections due to specifics of how the redis API works. A connection that has any subscriptions cannot do anything else except:
listen to messages
manage its own subscriptions (subscribe, psubscribe, unsubscribe, punsubscribe)
However, I don't understand this:
private static Dictionary<string, RedisSubscriberConnection>
You shouldn't need more than one subscriber connection unless you are catering for something specific to you. A single subscriber connection can handle an arbitrary number of subscriptions. A quick check on client list on one of my servers, and I have one connection with (at time of writing) 23,002 subscriptions. Which could probably be reduced, but: it works.
2: pattern subscriptions support wildcards; so rather than subscribing to /topic/1, /topic/2/ etc you could subscribe to /topic/*. The name of the actual channel used by publish is provided to the receiver as part of the callback signature.
Either can work. It should be noted that the performance of publish is impacted by the total number of unique subscriptions - but frankly it is still stupidly fast (as in: 0ms) even if you have many tens of thousands of subscribed channels using subscribe rather than psubscribe.
But note, from the publish documentation:
Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).
I recommend reading the redis documentation of pub/sub.
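As a sketch of the two styles against the subscriber API used in the question (the Action<string, byte[]> handler shape is assumed from the OnSubscribedChannelMessage callback above):
RedisSubscriberConnection sub = connection.GetOpenSubscriberChannel();
// Exact-channel subscription: receives only messages published to "/topic/1".
sub.Subscribe("/topic/1", (channel, payload) =>
    Console.WriteLine("{0} bytes on {1}", payload.Length, channel));
// Pattern subscription: one registration matches "/topic/1", "/topic/2", ...
// The first callback argument is the concrete channel that was published to.
sub.PatternSubscribe("/topic/*", (channel, payload) =>
    Console.WriteLine("pattern hit on {0}", channel));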
Edit for follow on questions:
a) I assume I would have to "publish" synchronously (using Result or Wait()) if I want to guarantee the order of sending items from the same publisher is preserved when receiving items, correct?
that won't make any difference at all; since you mention Result / Wait(), I assume you're talking about BookSleeve - in which case the multiplexer already preserves command order. Redis itself is single threaded, and will always process commands on a single connection in order. However: the callbacks on the subscriber may be executed asynchronously and may be handed (separately) to a worker thread. I am currently investigating whether I can force this to be in-order from RedisSubscriberConnection.
Update: from 1.3.22 onwards you can set the CompletionMode to PreserveOrder - then all callbacks will be completed sequentially rather than concurrently.
b) after making adjustments according to your suggestions I get a great performance when publishing few items regardless of the size of the payload. However, when sending 100,000 or more items by the same publisher performance drops rapidly (down to 7-8 seconds just to send from my machine).
Firstly, that time sounds high - testing locally I get (for 100,000 publications, including waiting for the response for all of them) 1766ms (local) or 1219ms (remote) (that might sound counter-intuitive, but my "local" isn't running the same version of redis; my "remote" is 2.6.12 on CentOS; my "local" is 2.6.8-pre2 on Windows).
I can't make your actual server faster or speed up the network, but: in case this is packet fragmentation, I have added (just for you) a SuspendFlush() / ResumeFlush() pair. This disables eager-flushing (i.e. when the send-queue is empty; other types of flushing still happen); you might find this helps:
conn.SuspendFlush();
try {
// start lots of operations...
} finally {
conn.ResumeFlush();
}
Note that you shouldn't Wait until you have resumed, because until you call ResumeFlush() there could be some operations still in the send-buffer. With that all in place, I get (for 100,000 operations):
local: 1766ms (eager-flush) vs 1554ms (suspend-flush)
remote: 1219ms (eager-flush) vs 796ms (suspend-flush)
As you can see, it helps more with remote servers, as it will be putting fewer packets through the network.
I cannot use transactions because later on the to-be-published items are not all available at once. Is there a way to optimize with that knowledge in mind?
I think that is addressed by the above - but note that recently CreateBatch was added too. A batch operates a lot like a transaction - just: without the transaction. Again, it is another mechanism to reduce packet fragmentation. In your particular case, I suspect the suspend/resume (on flush) is your best bet.
Do you recommend having one general RedisConnection and one RedisSubscriberConnection or any other configuration to have such wrapper perform desired functions?
As long as you're not performing blocking operations (blpop, brpop, brpoplpush etc), or putting oversized BLOBs down the wire (potentially delaying other operations while it clears), then a single connection of each type usually works pretty well. But YMMV depending on your exact usage requirements.

ActiveMQ - multiple connections per session?

Within ActiveMQ I've been told that the most optimal solution for increased throughput is to have multiple connections, each with their own session and consumer.
I've been trying to achieve this with NMS (connecting via C#), but in the "Active Consumers" screen of the MQ web console I see all my connections and consumers listed as I'd expect, yet in the next column they all have a sessionId of "1". I would have expected a separate session ID for each.
Is this right? And if there should be different session IDs for each connection/consumer, how would I go about ensuring these extra sessions are created?
Here's some example code I'm using to start a new connection (based on Remark's ActiveMQ transactional messaging introduction code):
public QueueConnection(IConnectionFactory connectionFactory, string queueName, AcknowledgementMode acknowledgementMode)
{
this.connection = connectionFactory.CreateConnection();
this.connection.Start();
this.session = this.connection.CreateSession(acknowledgementMode);
this.queue = new ActiveMQQueue(queueName);
}
... and this is being done each time for each of the connections I'm opening.
It looks like this piece of code is the root cause of my problem:
public SimpleQueueListener CreateSimpleQueueListener(IMessageProcessor processor)
{
IMessageConsumer consumer = this.session.CreateConsumer(this.queue, "2 > 1");
return new SimpleQueueListener(consumer, processor, this.session);
}
as it uses a common session (this.session) for all the consumers. By creating a new session each time (and holding it in a collection, or by other means), I achieved the goal of operating each listener on its own session within the same connection. E.g.:
public SimpleQueueListener CreateSimpleQueueListener(IMessageProcessor processor)
{
var listenerSession = this.connection.CreateSession();
IMessageConsumer consumer = listenerSession.CreateConsumer(this.queue, "2 > 1");
return new SimpleQueueListener(consumer, processor, listenerSession);
}
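For completeness, if you do want one consumer per connection, as the original advice suggested, here's a rough, untested NMS sketch (the broker URI and queue name are placeholders):
using System.Collections.Generic;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
public class MultiConnectionListeners
{
    private readonly List<IConnection> _connections = new List<IConnection>();
    public void Start(int count)
    {
        var factory = new ConnectionFactory("tcp://localhost:61616");
        for (int i = 0; i < count; i++)
        {
            IConnection connection = factory.CreateConnection();
            connection.Start();
            // Each connection gets its own session and consumer.
            ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
            IMessageConsumer consumer = session.CreateConsumer(session.GetQueue("myQueue"));
            consumer.Listener += message => { /* process message */ };
            _connections.Add(connection); // keep alive for the app's lifetime
        }
    }
}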

MSMQ Receive() method timeout

My original question from a while ago is MSMQ Slow Queue Reading; however, I have advanced from that and now think I understand the problem a bit more clearly.
My code (well actually part of an open source library I am using) looks like this:
queue.Receive(TimeSpan.FromSeconds(10), MessageQueueTransactionType.Automatic);
This uses the System.Messaging.MessageQueue.Receive function, and queue is a MessageQueue. The problem is as follows.
The above line of code will be called with the specified timeout (10 seconds). The Receive(...) function is a blocking function, and is supposed to block until a message arrives in the queue at which time it will return. If no message is received before the timeout is hit, it will return at the timeout. If a message is in the queue when the function is called, it will return that message immediately.
However, what is happening is that Receive(...) is called, sees that there is no message in the queue, and waits for a new message to come in. When a new message comes in (before the timeout), it doesn't detect it and continues waiting. The timeout is eventually hit, at which point the code continues and calls Receive(...) again, where it picks up the message and processes it.
Now, this problem only occurs after a number of days/weeks. I can make it work normally again by deleting and recreating the queue. It happens on different computers and different queues. So it seems like something builds up over time, until at some point it breaks the triggering/notification ability that the Receive(...) function uses.
I've checked a lot of different things, and everything seems normal and no different from a queue that is working. There is plenty of disk space (13 GB free) and RAM (about 350 MB free out of 1 GB, from what I can tell). I have checked registry entries, which all appear the same as for other queues, and the performance monitor doesn't show anything out of the ordinary. I have also run the TMQ tool and can't see anything noticeably wrong from that.
I am using Windows XP on all the machines and they all have Service Pack 3 installed. I am not sending a large volume of messages to the queues; at most it would be 1 every 2 seconds, but generally much less frequent than that. The messages are small, too, and nowhere near the 4 MB limit.
The only thing I have just noticed is that the p0000001.mq and r0000067.mq files in C:\WINDOWS\system32\msmq\storage are both 4,096 KB; however, they are that size on other computers as well, which are not currently experiencing the problem. The problem does not happen to every queue on the computer at once, as I can recreate 1 problem queue on the computer and the other queues will still experience the problem.
I am not very experienced with MSMQ so if you post possible things to check can you please explain how to check them or where I can find more details on what you are talking about.
Currently the situation is:
ComputerA - 4 queues normal
ComputerB - 2 queues experiencing problem, 1 queue normal
ComputerC - 2 queues experiencing problem
ComputerD - 1 queue normal
ComputerE - 2 queues normal
So I have a large number of computers/queues to compare and test against.
Any particular reason you aren't using an event handler to listen to the queues? The System.Messaging library allows you to attach a handler to a queue instead of, if I understand what you are doing correctly, looping Receive every 10 seconds. Try something like this:
class MSMQListener
{
public void StartListening(string queuePath)
{
MessageQueue msQueue = new MessageQueue(queuePath);
msQueue.ReceiveCompleted += QueueMessageReceived;
msQueue.BeginReceive();
}
private void QueueMessageReceived(object source, ReceiveCompletedEventArgs args)
{
MessageQueue msQueue = (MessageQueue)source;
//once a message is received, stop receiving
Message msMessage = null;
msMessage = msQueue.EndReceive(args.AsyncResult);
//do something with the message
//begin receiving again
msQueue.BeginReceive();
}
}
We are also using NServiceBus and had a similar problem inside our network.
Basically, MSMQ uses UDP with two-phase commits. After a message is received, it has to be acknowledged; until it is acknowledged, it cannot be received on the client side, as the receive transaction hasn't been finalized.
This was caused by different things at different times for us:
once, it was the Distributed Transaction Coordinator being unable to communicate between machines due to a firewall misconfiguration
another time, we were using cloned virtual machines without sysprep, which made internal MSMQ ids non-unique and caused a message to be received on one machine and acknowledged on another. Eventually, MSMQ figures things out, but it takes quite a while.
Try this overloaded function:
public Message Receive(TimeSpan timeout, Cursor cursor)
To get a cursor for a MessageQueue, call the CreateCursor method for that queue.
A Cursor is used with such methods as Peek(TimeSpan, Cursor, PeekAction) and Receive(TimeSpan, Cursor) when you need to read messages that are not at the front of the queue. This includes reading messages synchronously or asynchronously. Cursors do not need to be used to read only the first message in a queue.
When reading messages within a transaction, Message Queuing does not roll back cursor movement if the transaction is aborted. For example, suppose there is a queue with two messages, A1 and A2. If you remove message A1 while in a transaction, Message Queuing moves the cursor to message A2. However, if the transaction is aborted for any reason, message A1 is inserted back into the queue but the cursor remains pointing at message A2.
To close the cursor, call Close.
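A minimal sketch of a cursor-based read (the queue path is a placeholder; an IOTimeout error code is the expected result when nothing arrives within the timeout):
using System;
using System.Messaging;
public static class CursorRead
{
    public static Message TryReceive()
    {
        var queue = new MessageQueue(@".\private$\myqueue");
        using (Cursor cursor = queue.CreateCursor())
        {
            try
            {
                // Position at the front (PeekAction.Current), then receive
                // the message the cursor points at.
                queue.Peek(TimeSpan.FromSeconds(10), cursor, PeekAction.Current);
                return queue.Receive(TimeSpan.FromSeconds(10), cursor);
            }
            catch (MessageQueueException e)
            {
                // IOTimeout just means nothing arrived within the timeout.
                if (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
                    return null;
                throw;
            }
        }
    }
}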
If you want something completely synchronous and without events, you can try this method:
public object Receive(string path, int millisecondsTimeout)
{
var mq = new System.Messaging.MessageQueue(path);
var asyncResult = mq.BeginReceive();
var handles = new System.Threading.WaitHandle[] { asyncResult.AsyncWaitHandle };
var index = System.Threading.WaitHandle.WaitAny(handles, millisecondsTimeout);
if (index == System.Threading.WaitHandle.WaitTimeout) // 258 = timed out
{
mq.Close();
return null;
}
var result = mq.EndReceive(asyncResult);
return result;
}

How do I obtain the latency between server and client in C#?

I'm working on a C# Server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model as to prevent cheating and ensure fair game. So far, everything works well:
When the client begins moving, it tells the server and starts rendering locally; the server, then, tells everyone else that client X has began moving, among with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client render tick delay and replies to everyone, so they can update with the correct values.
The thing is, when I use the default 20ms tick delay on server calculations, there's a noticeable tilt forward when the client stops after moving a rather long distance. If I increase the delay slightly, to 22ms, everything runs very smoothly on my local network, but in other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that works quite nicely: delay = 20 + (latency / 10).
So, how would I proceed to obtain the latency between a certain client and the server (I'm using asynchronous sockets)? The CPU effort can't be too high, so as not to make the server run slowly. Also, is this really the best way, or is there a more efficient/easier way to do this?
Sorry that this isn't directly answering your question, but generally speaking you shouldn't rely too heavily on measuring latency because it can be quite variable. Not only that, you don't know if the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the ping time of 20ms is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms - you may be able to ping a certain machine and get a response in 20ms but if you're contacting a server on that machine that only processes network input 50 times a second then your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.
That's not to say latency measurement doesn't have a place in smoothing predictions out, but it's not going to solve your problem, just clean it up a bit.
On the face of it, the problem here seems to be that you're sent information in the first message, which you use to extrapolate data until the last message is received. If all else stays constant, then the movement vector given in the first message multiplied by the time between the messages will give the server the correct end position that the client was in at roughly now-(latency/2). But if the latency changes at all, the time between the messages will grow or shrink: the client may know he's moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.
The general solution to this is to not assume that latency will stay constant but to send periodic position updates, which allow the server to verify and correct the client's position. With just 2 messages as you have now, all the error is found and corrected after the 2nd message. With more messages, the error is spread over many more sample points allowing for smoother and less visible correction.
It can never be perfect though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely since information takes time to travel. (Blame Einstein.)
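To make the periodic-update idea concrete, here is a purely hypothetical server-side reconciliation sketch (the types, the 0.1f blend factor and the tolerance are illustrative inventions, not part of any particular engine):
using System.Numerics;
public class Player
{
    public Vector2 Position, Velocity;
    public double LastUpdateSeconds;
}
public class MovementReconciler
{
    private const float MaxToleratedError = 2.0f; // world units; tune to taste
    public void OnPositionUpdate(Player p, Vector2 reported, double nowSeconds)
    {
        float dt = (float)(nowSeconds - p.LastUpdateSeconds);
        Vector2 predicted = p.Position + p.Velocity * dt;
        Vector2 error = reported - predicted;
        // Blend small errors out over several updates; ignore big ones
        // so the server stays authoritative (possible cheat or lag spike).
        p.Position = error.Length() < MaxToleratedError
            ? predicted + error * 0.1f
            : predicted;
        p.LastUpdateSeconds = nowSeconds;
    }
}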
One thing to keep in mind when using ICMP based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than traffic is actually experiencing and lends itself to being an indicator of problems rather than a measurement tool.
The increasing use of Quality of Service (QoS) in networks only exacerbates this. As a consequence, though ping remains a useful tool, it needs to be understood that it may not be a true reflection of the network latency experienced by real, non-ICMP traffic.
There is a good post at the Itrinegy blog How do you measure Latency (RTT) in a network these days? about this.
You could use the already available Ping Class. Should be preferred over writing your own IMHO.
Have a "ping" command, where you send a message from the server to the client, then time how long it takes to get a response. Barring CPU overload scenarios, it should be pretty reliable. To get the one-way trip time, just divide the time by 2.
We can measure the round-trip time using the Ping class of the .NET Framework.
Instantiate a Ping and subscribe to the PingCompleted event:
Ping pingSender = new Ping();
pingSender.PingCompleted += PingCompletedCallback;
Add code to configure and action the ping.
Our PingCompleted event handler (PingCompletedEventHandler) has a PingCompletedEventArgs argument. The PingCompletedEventArgs.Reply gets us a PingReply object. PingReply.RoundtripTime returns the round trip time (the "number of milliseconds taken to send an Internet Control Message Protocol (ICMP) echo request and receive the corresponding ICMP echo reply message"):
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
...
Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
...
}
Code-dump of a full working example, based on MSDN's example. I have modified it to write the RTT to the console:
public static void Main(string[] args)
{
string who = "www.google.com";
AutoResetEvent waiter = new AutoResetEvent(false);
Ping pingSender = new Ping();
// When the PingCompleted event is raised,
// the PingCompletedCallback method is called.
pingSender.PingCompleted += PingCompletedCallback;
// Create a buffer of 32 bytes of data to be transmitted.
string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
byte[] buffer = Encoding.ASCII.GetBytes(data);
// Wait 12 seconds for a reply.
int timeout = 12000;
// Set options for transmission:
// The data can go through 64 gateways or routers
// before it is destroyed, and the data packet
// cannot be fragmented.
PingOptions options = new PingOptions(64, true);
Console.WriteLine("Time to live: {0}", options.Ttl);
Console.WriteLine("Don't fragment: {0}", options.DontFragment);
// Send the ping asynchronously.
// Use the waiter as the user token.
// When the callback completes, it can wake up this thread.
pingSender.SendAsync(who, timeout, buffer, options, waiter);
// Prevent this example application from ending.
// A real application should do something useful
// when possible.
waiter.WaitOne();
Console.WriteLine("Ping example completed.");
}
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
// If the operation was canceled, display a message to the user.
if (e.Cancelled)
{
Console.WriteLine("Ping canceled.");
// Let the main thread resume.
// UserToken is the AutoResetEvent object that the main thread
// is waiting for.
((AutoResetEvent)e.UserState).Set();
return; // don't fall through: e.Reply is null when cancelled
}
// If an error occurred, display the exception to the user.
if (e.Error != null)
{
Console.WriteLine("Ping failed:");
Console.WriteLine(e.Error.ToString());
// Let the main thread resume.
((AutoResetEvent)e.UserState).Set();
return; // don't fall through: e.Reply may be null on error
}
Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
// Let the main thread resume.
((AutoResetEvent)e.UserState).Set();
}
You might want to perform several pings and then calculate an average, depending on your requirements of course.
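For instance, a small sketch that averages a handful of synchronous Ping.Send calls (the host and sample count are up to you):
using System.Net.NetworkInformation;
public static class LatencySampler
{
    public static double AverageRoundTripMs(string host, int samples)
    {
        using (var ping = new Ping())
        {
            long total = 0;
            int ok = 0;
            for (int i = 0; i < samples; i++)
            {
                PingReply reply = ping.Send(host, 2000); // 2 s timeout per probe
                if (reply != null && reply.Status == IPStatus.Success)
                {
                    total += reply.RoundtripTime;
                    ok++;
                }
            }
            return ok > 0 ? (double)total / ok : -1; // -1: no successful replies
        }
    }
}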
