ArgumentException in asynchronous webservice calls - c#

I'm running into a problem I haven't experienced before. I'm calling a method asynchronously via the BeginXXX/EndXXX pattern, with an extra wrapper for some timing functionality, like so:
void BeginGetStuffTimed(/* arguments */)
{
    var timingStuff = TimingStuff;
    var callback = new AsyncCallback(CallbackMethodTimed);
    service.BeginGetStuff(callback, timingStuff);
}

void CallbackMethodTimed(IAsyncResult result)
{
    SaveTiming(result.AsyncState /* ... */);
    service.EndGetStuff(result);
}
Now every once in a while, an exception gets thrown:
Async End called on wrong channel.
Parameter name: result
Since it doesn't always occur, this kind of puzzles me. I was thinking IIS couldn't keep up with the requests and goes fubar, so I added a pause in my calling code, and this does seem to work. The longer the pause, the less frequent the exception.
Of course this is no real solution, so I'm looking for some insights into this matter.
Edit: upon further inspection this seems to be unrelated to IIS; IIS can process up to 5,000 simultaneous threads per CPU, and I'm pushing nowhere near that limit.
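For anyone hitting the same message: this ArgumentException typically means the IAsyncResult handed to the End method came from a different proxy/channel instance than the one End is being called on, which can happen when concurrent calls share state. A minimal sketch of the wrapper that pins each Begin/End pair to one instance by carrying the proxy through the async state (TimedCallState and StuffService are illustrative names, not from the original code):

// Illustrative state holder: carries the proxy that started the call
// along with the timing data, so End runs on the same channel.
class TimedCallState
{
    public StuffService Service;   // hypothetical proxy type
    public object TimingStuff;
}

void BeginGetStuffTimed(/* arguments */)
{
    var state = new TimedCallState { Service = service, TimingStuff = TimingStuff };
    state.Service.BeginGetStuff(CallbackMethodTimed, state);
}

void CallbackMethodTimed(IAsyncResult result)
{
    var state = (TimedCallState)result.AsyncState;
    SaveTiming(state.TimingStuff);
    state.Service.EndGetStuff(result); // same instance that called Begin
}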


What does the FabricNotReadableException mean? And how should we respond to it?

We are using the following method in a Stateful Service on Service Fabric. The service has partitions. Sometimes we get a FabricNotReadableException from this piece of code.
public async Task HandleEvent(EventHandlerMessage message)
{
    var queue = await StateManager.GetOrAddAsync<IReliableQueue<EventHandlerMessage>>(EventHandlerServiceConstants.EventHandlerQueueName);
    using (ITransaction tx = StateManager.CreateTransaction())
    {
        await queue.EnqueueAsync(tx, message);
        await tx.CommitAsync();
    }
}
Does that mean that the partition is down and is being moved? Or that we hit a secondary replica? Because there is also a FabricNotPrimaryException that is raised in some cases.
I have seen the MSDN link (https://msdn.microsoft.com/en-us/library/azure/system.fabric.fabricnotreadableexception.aspx). But what does
Represents an exception that is thrown when a partition cannot accept reads.
mean? What has happened when a partition cannot accept a read?
Under the covers Service Fabric has several states that can impact whether a given replica can safely serve reads and writes. They are:
Granted (you can think of this as normal operation)
Not Primary
No Write Quorum (again mainly impacting writes)
Reconfiguration Pending
FabricNotPrimaryException, which you mention, can be thrown whenever a write is attempted on a replica which is not currently the Primary, and maps to the Not Primary state.
FabricNotReadableException maps to the other states (you don't really need to worry about or differentiate between them) and can happen in a variety of cases. One example is if the replica you are trying to perform the read on is a "Standby" replica (a replica which was down and has been recovered, but there are already enough active replicas in the replica set). Another example is if the replica is a Primary but is being closed (say, due to an upgrade or because it reported a fault), or if it is currently undergoing a reconfiguration (say, for example, that another replica is being added). All of these conditions will result in the replica not being able to satisfy reads and writes for a small amount of time, due to certain safety checks and atomic changes that Service Fabric needs to handle under the hood.
You can consider FabricNotReadableException retriable. If you see it, just try the call again and eventually it will resolve into either Not Primary or Granted. If you get FabricNotPrimaryException, this should generally be thrown back to the client (or the client in some way notified) that it needs to re-resolve in order to find the current Primary (the default communication stacks that Service Fabric ships with take care of watching for non-retriable exceptions and re-resolving on your behalf).
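A minimal retry sketch along those lines, wrapping the question's HandleEvent (the retry cap and delays are arbitrary choices; FabricNotPrimaryException is deliberately left to propagate so the caller can re-resolve the Primary):

// using System; using System.Fabric; using System.Threading.Tasks;
public async Task HandleEventWithRetry(EventHandlerMessage message)
{
    const int maxAttempts = 5; // arbitrary cap
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            await HandleEvent(message);
            return;
        }
        catch (FabricNotReadableException) when (attempt < maxAttempts)
        {
            // Transient: back off briefly and try again.
            await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
        }
    }
}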
There are two current known issues with FabricNotReadableException.
FabricNotReadableException should have two variants. The first should be explicitly retriable (FabricTransientNotReadableException) and the second should remain FabricNotReadableException. The first (transient) version is the most common and is probably what you are running into; it is certainly what you would run into in the majority of cases. The second (non-transient) version would be returned in the case where you end up talking to a Standby replica. Talking to a Standby won't happen with the out-of-the-box transports and retry logic, but if you have your own, it is possible to run into it.
The other issue is that today FabricNotReadableException does not derive from FabricTransientException, though it should, which would make it easier to determine what the correct behavior is.
Posted as an answer (to asnider's comment - Mar 16 at 17:42) because it was too long for comments! :)
I am also stuck in this catch-22. My service starts and immediately receives messages. I want to encapsulate the service startup in OpenAsync, set up some ReliableDictionary values, and then start receiving messages. However, at this point the Fabric is not readable, so I need to split this "startup" between OpenAsync and RunAsync :(
RunAsync in my service and OpenAsync in my client also seem to have different CancellationTokens, so I need to work out how to deal with this too. It just all feels a bit messy. I have a number of ideas on how to tidy this up in my code, but has anyone come up with an elegant solution?
It would be nice if ICommunicationClient had a RunAsync interface that was called when the Fabric becomes ready/readable and cancelled when the Fabric shuts down the replica - this would seriously simplify my life. :)
I was running into the same problem. My listener was starting up before the main thread of the service. I queued the list of listeners needing to be started, and then activated them all early on in the main thread. As a result, all incoming messages could be handled and placed into the appropriate reliable storage. My simple solution (this is a service bus listener):
public Task<string> OpenAsync(CancellationToken cancellationToken)
{
    string uri;
    Start();
    uri = "<your endpoint here>";
    return Task.FromResult(uri);
}

public static object lockOperations = new object();
public static bool operationsStarted = false;
public static List<ClientAuthorizationBusCommunicationListener> pendingStarts = new List<ClientAuthorizationBusCommunicationListener>();

public static void StartOperations()
{
    lock (lockOperations)
    {
        if (!operationsStarted)
        {
            foreach (ClientAuthorizationBusCommunicationListener listener in pendingStarts)
            {
                listener.DoStart();
            }
            operationsStarted = true;
        }
    }
}

private static void QueueStart(ClientAuthorizationBusCommunicationListener listener)
{
    lock (lockOperations)
    {
        if (operationsStarted)
        {
            listener.DoStart();
        }
        else
        {
            pendingStarts.Add(listener);
        }
    }
}

private void Start()
{
    QueueStart(this);
}

private void DoStart()
{
    ServiceBus.WatchStatusChanges(HandleStatusMessage,
                                  this.clientId,
                                  out this.subscription);
}
In the main thread, you call the function to start listener operations:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    ClientAuthorizationBusCommunicationListener.StartOperations();
    ...
This problem likely manifested itself here because the bus in question already had messages and started firing the second the listener was created. Trying to access anything in the state manager was throwing the exception you were asking about.

That async-ing feeling - httpclient and mvc thread blocking

Dilemma, dilemma...
I've been working up a solution to a problem that uses async calls to the HttpClient library (GetAsync => ConfigureAwait(false), etc.). In a console app, my dll is very responsive, and the mixture of async/await calls and Parallel.ForEach really makes me glow.
Now for the issue. After moving from this test harness to the target app, things have become problematic. I'm using ASP.NET MVC 4 and have hit a few issues. The main issue is that calling my process in a controller action actually blocks the main thread until the async actions are complete. I've tried using an async controller pattern, I've tried using Task.Factory, I've tried using new Threads. You name it, I've tried all the flavours - and then some!
Now, I appreciate that HTTP is not designed to facilitate long processes like this, and there are a number of articles here on SO that say don't do it. However, there are mitigating reasons why I NEED to use this approach. The main reason I need to run this in MVC is that I actually update the live data cache (on the MVC app) in real time by raising an event in my dll's code. This means that fragments of the 50-60 data feeds can be pushed out live before the entire async action is complete. Therefore, client apps can receive partial updates within seconds of the async action being instigated. If I were to delegate the process out to a console app that ran the entire process in the background, I'd no longer be able to harness those partial updates, and this is the raison d'être behind the entire choice of this architecture.
Can anyone shed light on a solution that would allow me to mitigate the blocking of the thread, whilst at the same time allowing each async fragment to be consumed by my object model and fed out to the client apps (I'm using SignalR to make these client updates)? A kind of nirvana would be a scenario where an out-of-process cache object could be shared between numerous processes - the cache update could then be triggered and consumed by my MVC process (aka http://devproconnections.com/aspnet-mvc/out-process-caching-aspnet). And so back to reality...
I have also considered using a secondary webservice to achieve this, but would welcome other options before once again over-engineering my solution (there are already many moving parts and a multitude of async Actions going on).
Sorry not to have added any code; I'm hoping for practical philosophy/insights rather than code help on this, though I would of course welcome coded examples that illustrate a solution to my problem.
I'll update the question as we move in time, as my thinking process is still maturing on this.
[edit] - for the sake of clarity, the snippet below is my Brothers Grimm code collision (extracted from a larger body of work):
Parallel.ForEach(scrapeDataBases, new ParallelOptions()
{
    MaxDegreeOfParallelism = Environment.ProcessorCount * 15
},
async dataBase =>
{
    await dataBase.ScrapeUrlAsync().ConfigureAwait(false);
    await UpdateData(dataType, (DataCheckerScrape)dataBase);
});
async and Parallel.ForEach do not mix naturally, so I'm not sure what your console solution looks like. Furthermore, Parallel should almost never be used on ASP.NET at all.
It sounds like what you would want is to just use Task.WhenAll.
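For instance, a minimal sketch reusing the identifiers from the question's snippet (their exact types are assumed here):

// Inside an async method: start all scrapes concurrently and await
// them together, rather than handing async lambdas to Parallel.ForEach
// (which cannot await them).
var tasks = scrapeDataBases.Select(async dataBase =>
{
    await dataBase.ScrapeUrlAsync().ConfigureAwait(false);
    await UpdateData(dataType, (DataCheckerScrape)dataBase);
}).ToList();

await Task.WhenAll(tasks);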
On a side note, I think your reasoning around background processing on ASP.NET is incorrect. It is perfectly possible to have a separate process that updates the clients via SignalR.
Your question is pretty high level without a lot of code, but you could try Reactive Extensions (Rx).
Something like
private IEnumerable<Task<Scraper>> ScrappedUrls()
{
    // Return the 50 to 60 tasks, one for each website, here.
    // I assume they all return the same type.
    // return .ScrapeUrlAsync().ConfigureAwait(false);
    throw new NotImplementedException();
}

public async Task<IEnumerable<ScrapeOdds>> GetOdds()
{
    var results = new Collection<ScrapeOdds>();
    var urlRequest = ScrappedUrls();
    var observerableUrls = urlRequest.Select(u => u.ToObservable()).Merge();
    var publisher = observerableUrls.Publish();
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<OddsHub>();
    publisher.Subscribe(scraper =>
    {
        // Convert to the result set however you need
        var scrapedOdds = scraper.GetOdds();
        results.Add(scrapedOdds);
        // Update anything else you want when it arrives,
        // e.g. SignalR here
        hubContext.Clients.All.UpdatedOdds(scrapedOdds);
    });
    // Awaiting the publisher fires off the subscriptions and does not
    // continue until they are done.
    await publisher;
    return results;
}
The Merge operator will process the results as they come in. You can then update the SignalR hubs, plus whatever else you need to update, as they arrive. The controller action will have to wait for them all to come in; that's why there is an await on the publisher.
I don't really know whether HttpClient is going to like having 50-60 web calls in flight at once. If it doesn't, you can turn the IEnumerable into an array and break it down into smaller chunks. There should also be some error checking in there. With Rx you can also tell it to SubscribeOn and ObserveOn different threads, but with everything being pretty much async already, I don't think that would be necessary.
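If chunking does turn out to be necessary, a rough sketch (the batch size of 10 is an arbitrary choice, and ScrappedUrls() is the method from the answer above; this assumes it yields its tasks lazily):

// Inside an async method: await the scrape tasks in batches instead of
// all at once. This only limits concurrency if the tasks are created
// lazily as the enumerable is walked.
const int chunkSize = 10; // arbitrary batch size
var batch = new List<Task<Scraper>>(chunkSize);

foreach (var task in ScrappedUrls())
{
    batch.Add(task);
    if (batch.Count == chunkSize)
    {
        await Task.WhenAll(batch);
        batch.Clear();
    }
}

if (batch.Count > 0)
{
    await Task.WhenAll(batch); // final partial batch
}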

Documents List API client for C# timing out

I'm working on a simple wrapper for the Google Docs API in C#. The problem I'm running into is that my tests are timing out. Sometimes. When I run all of my tests (only 12 of them), it usually hangs on the 8th one, which tests the delete function. After about 6.5 minutes it continues, but every test after it also times out, at 6.5 minutes per test. If I run the tests individually, they work fine every time.
Here is the first method that times out:
Updated to show exception handling
[TestMethod]
public void CanDeleteFile()
{
    var api = CreateApi();
    api.UploadFile("pic.jpg", "..\\..\\..\\pic.jpg", "image/jpeg");
    try
    {
        var files = api.GetDocuments();
        api.DeleteFile("pic.jpg");
        var lessFiles = api.GetDocuments();
        Assert.AreEqual(files.Count - 1, lessFiles.Count);
    }
    catch (Google.GData.Client.GDataRequestException ex)
    {
        using (StreamWriter writer = new StreamWriter("..\\..\\..\\errors.log", true))
        {
            string time = DateTime.Now.ToString();
            writer.WriteLine(time + ":\r\n\t" + ex.ResponseString);
        }
        throw; // rethrow without resetting the stack trace
    }
}
It times out on var lessFiles = api.GetDocuments(); - the second call to that method. I have other methods that call that method twice and they don't time out, but this one does.
The method that all the test methods use that times out:
public AtomEntryCollection GetDocuments(DocumentType type = DocumentType.All, bool includeFolders = false)
{
    checkToken();
    DocumentsListQuery query = getQueryByType(type);
    query.ShowFolders = includeFolders;
    DocumentsFeed feed = service.Query(query);
    return feed.Entries;
}
It times out on this line: DocumentsFeed feed = service.Query(query);. This would be closer to acceptable if I were requesting insane numbers of files. I'm not: I'm requesting 5-6, depending on which test I'm running.
Things I've tried:
Deleting all files from my google docs account, leaving only 1-2 files depending on the test for it to retrieve.
Running the tests individually (they all pass and nothing times out, but I shouldn't have to do this)
Checking my network speed to make sure it's not horribly slow (15 Mbps down, 4.5 Mbps up)
I'm out of ideas. Does anyone know why it might start timing out on me? Any suggestions are welcome.
edit
As @gowansg suggested, I implemented exponential backoff in my code. It started failing at the same point with the same exception. I then wrote a test to send 10,000 requests for a complete list of all documents in my drive. It passed without any issues, without using exponential backoff. Next I modified my test class to keep track of how many requests were sent. My tests crash on request 11.
The complete exception:
Google.GData.Client.GDataRequestException was unhandled by user code
Message=Execution of request failed: https://docs.google.com/feeds/default/private/full
Source=GoogleDrive
StackTrace:
at GoogleDrive.GoogleDriveApi.GetDocuments(DocumentType type, Boolean includeFolders) in C:\Users\nlong\Desktop\projects\GoogleDrive\GoogleDrive\GoogleDriveApi.cs:line 105
at GoogleDrive.Test.GoogleDriveTests.CanDeleteFile() in C:\Users\nlong\Desktop\projects\GoogleDrive\GoogleDrive.Test\GoogleDriveTests.cs:line 77
InnerException: System.Net.WebException
Message=The operation has timed out
Source=System
StackTrace:
at System.Net.HttpWebRequest.GetResponse()
at Google.GData.Client.GDataRequest.Execute()
InnerException:
another edit
It seems that I only crash when requesting the list of documents after the second upload. I'm not sure why that is, but I'm definitely going to look into my upload method.
If your tests pass when run individually but not when run consecutively, you may be hitting a request rate limit. I noticed in your comment on JotaBe's answer that you mentioned you were getting request timeout exceptions. You should take a look at the HTTP status code to figure out what to do. In the case of a 503, you should implement exponential backoff.
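For instance, a minimal exponential backoff sketch around the failing call (GetDocuments and GoogleDriveApi are from your code; the retry cap and delays are arbitrary, and a real implementation would inspect the status code before retrying):

// Retry with exponentially growing delays: 1s, 2s, 4s, 8s, 16s.
AtomEntryCollection GetDocumentsWithBackoff(GoogleDriveApi api)
{
    const int maxRetries = 5; // arbitrary cap
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return api.GetDocuments();
        }
        catch (Google.GData.Client.GDataRequestException)
        {
            if (attempt >= maxRetries)
                throw;
            // Wait 2^attempt seconds before the next attempt.
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}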
Updated Suggestion
Place a try-catch around the line that is throwing the exception and catch the Google.GData.Client.GDataRequestException. According to the source there are two properties that may be of value to you:
/// <summary>
/// this is the error message returned by the server
/// </summary>
public string ResponseString
{ ... }
and
//////////////////////////////////////////////////////////////////////
/// <summary>Read only accessor for response</summary>
//////////////////////////////////////////////////////////////////////
public WebResponse Response
{ ... }
Hopefully these contain some useful information for you, e.g., an HTTP Status Code.
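For example (a sketch; the cast assumes the underlying response is an HttpWebResponse, which is typical for these requests):

try
{
    var files = api.GetDocuments();
}
catch (Google.GData.Client.GDataRequestException ex)
{
    // The server's error body, if any.
    Console.WriteLine(ex.ResponseString);

    // The HTTP status code; e.g. 503 would suggest backing off.
    var httpResponse = ex.Response as System.Net.HttpWebResponse;
    if (httpResponse != null)
    {
        Console.WriteLine(httpResponse.StatusCode);
    }
}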
You can explicitly set the time-outs for your unit tests. Here is extensive information on it:
How to: Set Time Limits for Running Tests
You can include Thread.Sleep(milliseconds) in your unit tests before the offending methods. Probably your requests are being rejected by Google Docs for being too many in too short a time.

Calling a webservice async

Long post.. sorry
I've been reading up on this and trying different solutions back and forth for a couple of days now, but I can't find the most obvious choice for my predicament.
About my situation: I am presenting the user a page that will contain a couple of different repeaters showing some info based on the results from a couple of webservice calls. I'd like to have the data brought in with an UpdatePanel (which would query the result table once every two or three seconds until it found results), so I'd actually like to render the page first and then show the data once it's "ready".
The page asks a controller for the info to render, and the controller checks a result table to see if there's anything to be found. If the specific data is not found, it calls a method GetData() in WebServiceName.cs. GetData does not return anything but is supposed to start an async operation that gets the data from the webservice. The controller returns null and the UpdatePanel waits for the next query.
When that operation is complete, it'll store the data in its relevant place in the db, where the controller will find it the next time the page asks for it.
The solution I have in place now is to fire up another thread. I will host the page on a shared webserver, and I don't know if this will cause any problems.
So, the current code, which resides in page.aspx:
// (inside a page event handler)
Thread t = new Thread(new ThreadStart(CreateService));
t.Start();

void CreateService()
{
    ServiceName serviceName = new ServiceName(user, "12345", "MOVING", "Apartment", "5100", "0", "72", "Bill", "rate_total", "1", "103", "serviceHost", "password");
}
At first I thought the solution was to use Begin[Method] and End[Method], but these don't seem to have been generated. I thought this seemed like a good solution, so I was a little frustrated when they didn't show up. Is there a chance I might have missed a checkbox or something when adding the web references?
I do not want to use [Method]Async since, from what I've understood, this stops the page from rendering until [Method]AsyncCompleted gets called.
The call I'm going to make is not CPU-intensive; I'm just waiting on a webservice sitting on a slow server. So, from what I understood from this article (http://msdn.microsoft.com/en-us/magazine/cc164128.aspx), making the threadpool bigger is not an option, as this will actually impair performance instead (since I can't throw in a mountain of hardware).
What do you think is the best solution for my current situation? I don't really like the current one (only by gut feeling, but anyway).
Thanks for reading this awfully long post.
Interesting. Until your question, I wasn't aware that VS changed from using Begin/End to Async/Completed when adding web references. I assumed that they would also include Begin/End, but apparently they did not.
You state "GetData does not return anything but is supposed to start an async operation that gets the data from the webservice," so I'm assuming that GetData actually blocks until the "async operation" completes. Otherwise, you could just call it synchronously.
Anyway, there are easy ways to get this working (asynchronous delegates, etc), but they consume a thread for each async operation, which doesn't scale.
You are correct that Async/Completed will block an asynchronous page. (side note: I believe that they will not block a synchronous page - but I've never tried that - so if you're using a non-async page, then you could try that). The method by which they "block" the asynchronous page is wrapped up in SynchronizationContext; in particular, each asynchronous page has a pending operation count which is incremented by Async and decremented after Completed.
You should be able to fake out this count (note: I haven't tried this either ;) ). Just substitute the default SynchronizationContext, which ignores the count:
var oldSyncContext = SynchronizationContext.Current;
try
{
    SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
    var serviceName = new ServiceName(..);
    // Note: MyMethodCompleted will be invoked in a ThreadPool thread
    // but WITHOUT an associated ASP.NET page, so some global state
    // might be missing. Be careful with what code goes in there...
    serviceName.MethodCompleted += MyMethodCompleted;
    serviceName.MethodAsync(..);
}
finally
{
    SynchronizationContext.SetSynchronizationContext(oldSyncContext);
}
I wrote a class that handles the temporary replacement of SynchronizationContext.Current as part of the Nito.Async library. Using that class simplifies the code to:
using (new ScopedSynchronizationContext(new SynchronizationContext()))
{
    var serviceName = new ServiceName(..);
    // Note: MyMethodCompleted will be invoked in a ThreadPool thread
    // but WITHOUT an associated ASP.NET page, so some global state
    // might be missing. Be careful with what code goes in there...
    serviceName.MethodCompleted += MyMethodCompleted;
    serviceName.MethodAsync(..);
}
This solution does not consume a thread that just waits for the operation to complete. It just registers a callback and keeps the connection open until the response arrives.
You can do this:
var action = new Action(CreateService);
action.BeginInvoke(action.EndInvoke, action);
or use ThreadPool.QueueUserWorkItem.
If using a Thread, make sure to set IsBackground=true.
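A quick sketch of both options (CreateService is the method from the question):

// Option 1: queue the work to the thread pool.
System.Threading.ThreadPool.QueueUserWorkItem(_ => CreateService());

// Option 2: a dedicated thread; IsBackground = true means it won't
// keep the process alive during shutdown.
var thread = new System.Threading.Thread(CreateService) { IsBackground = true };
thread.Start();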
There's a great post about fire and forget threads at http://consultingblogs.emc.com/jonathangeorge/archive/2009/09/10/make-methods-fire-and-forget-with-postsharp.aspx
Try using the following settings in your web service:
[WebMethod]
[SoapDocumentMethod(OneWay = true)]
public void MyAsyncMethod(/* parameters */)
{
}
But be careful if you use impersonation; we had problems with it on our side.
I'd encourage a different approach - one that doesn't use update panels. Update panels require an entire page to be loaded and transferred over the wire, when you only want the contents of a single control.
Consider a slightly more customized and optimized approach using the MVC platform. Your data flow could look like this:
Have the original request to your web page spawn a thread that goes out and warms your data.
Have a "skeleton" page returned to your client
In said page, have a JavaScript timer that calls your server asking for the data.
Using MVC, have a controller action that returns a partial view, which is limited to just the control you're interested in.
This will reduce your server load (you can add a backoff algorithm), reduce the amount of info sent over the wire, and still give the client a great experience.
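A rough sketch of step 4 (the controller, cache helper, and view names are all illustrative):

using System.Web.Mvc;

public class FeedController : Controller
{
    // Polled by the client-side timer. Returns the rendered fragment once
    // the background thread has warmed the data; otherwise returns an
    // empty result so the client keeps polling.
    public ActionResult FeedFragment()
    {
        var data = ResultCache.TryGetData(); // hypothetical cache lookup
        if (data == null)
        {
            return new EmptyResult(); // not ready yet
        }
        return PartialView("_FeedFragment", data);
    }
}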

MSMQ Receive() method timeout

My original question from a while ago is MSMQ Slow Queue Reading; however, I have advanced from that and now think I understand the problem a bit more clearly.
My code (well actually part of an open source library I am using) looks like this:
queue.Receive(TimeSpan.FromSeconds(10), MessageQueueTransactionType.Automatic);
This uses the System.Messaging.MessageQueue.Receive function, and queue is a MessageQueue. The problem is as follows.
The above line of code will be called with the specified timeout (10 seconds). The Receive(...) function is a blocking function and is supposed to block until a message arrives in the queue, at which time it will return. If no message is received before the timeout is hit, it will return at the timeout. If a message is in the queue when the function is called, it will return that message immediately.
However, what is happening is that Receive(...) is called, sees that there is no message in the queue, and hence waits for a new message to come in. When a new message comes in (before the timeout), it isn't detected and Receive continues waiting. The timeout is eventually hit, at which point the code continues and calls Receive(...) again, where it picks up the message and processes it.
Now, this problem only occurs after a number of days/weeks. I can make it work normally again by deleting and recreating the queue. It happens on different computers and different queues. So it seems like something builds up until, at some point, it breaks the triggering/notification mechanism that the Receive(...) function relies on.
I've checked a lot of different things, and everything seems normal; nothing differs from a queue that is working normally. There is plenty of disk space (13 GB free) and RAM (about 350 MB free out of 1 GB, from what I can tell). I have checked registry entries, which all appear the same as for other queues, and the performance monitor doesn't show anything out of the ordinary. I have also run the TMQ tool and can't see anything noticeably wrong from that.
I am using Windows XP on all the machines, and they all have Service Pack 3 installed. I am not sending a large number of messages to the queues; at most it would be one every two seconds, but generally much less frequent than that. The messages are small, too, and nowhere near the 4 MB limit.
The only thing I have just noticed is that the p0000001.mq and r0000067.mq files in C:\WINDOWS\system32\msmq\storage are both 4,096 KB; however, they are that size on other computers as well, which are not currently experiencing the problem. The problem does not happen to every queue on the computer at once, as I can recreate one problem queue on a computer and the other queues still experience the problem.
I am not very experienced with MSMQ, so if you post possible things to check, can you please explain how to check them, or where I can find more details on what you are talking about?
Currently the situation is:
ComputerA - 4 queues normal
ComputerB - 2 queues experiencing problem, 1 queue normal
ComputerC - 2 queues experiencing problem
ComputerD - 1 queue normal
ComputerE - 2 queues normal
So I have a large number of computers/queues to compare and test against.
Is there any particular reason you aren't using an event handler to listen to the queue? The System.Messaging library allows you to attach a handler to a queue instead of, if I understand what you are doing correctly, calling Receive in a loop every 10 seconds. Try something like this:
class MSMQListener
{
    public void StartListening(string queuePath)
    {
        MessageQueue msQueue = new MessageQueue(queuePath);
        msQueue.ReceiveCompleted += QueueMessageReceived;
        msQueue.BeginReceive();
    }

    private void QueueMessageReceived(object source, ReceiveCompletedEventArgs args)
    {
        MessageQueue msQueue = (MessageQueue)source;
        // complete the pending receive to get the message
        Message msMessage = msQueue.EndReceive(args.AsyncResult);
        // do something with the message
        // then begin receiving again
        msQueue.BeginReceive();
    }
}
We are also using NServiceBus and had a similar problem inside our network.
Basically, MSMQ is using UDP with two-phase commits. After a message is received, it has to be acknowledged; until it is acknowledged, it cannot be received on the client side, as the receive transaction hasn't been finalized.
This was caused by different things at different times for us:
Once, it was due to the Distributed Transaction Coordinator being unable to communicate between machines because of a firewall misconfiguration.
Another time, we were using cloned virtual machines without sysprep, which made the internal MSMQ IDs non-unique and caused a message to be received on one machine and acknowledged on another. Eventually MSMQ figures things out, but it takes quite a while.
Try this overloaded function:
public Message Receive(TimeSpan timeout, Cursor cursor)
To get a cursor for a MessageQueue, call the CreateCursor method for that queue.
A Cursor is used with such methods as Peek(TimeSpan, Cursor, PeekAction) and Receive(TimeSpan, Cursor) when you need to read messages that are not at the front of the queue. This includes reading messages synchronously or asynchronously. Cursors do not need to be used to read only the first message in a queue.
When reading messages within a transaction, Message Queuing does not roll back cursor movement if the transaction is aborted. For example, suppose there is a queue with two messages, A1 and A2. If you remove message A1 while in a transaction, Message Queuing moves the cursor to message A2. However, if the transaction is aborted for any reason, message A1 is inserted back into the queue but the cursor remains pointing at message A2.
To close the cursor, call Close.
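A brief usage sketch (the queue path and timeouts are illustrative):

using System;
using System.Messaging;

class CursorReceiveExample
{
    static void Main()
    {
        var queue = new MessageQueue(@".\private$\myQueue"); // illustrative path
        using (Cursor cursor = queue.CreateCursor())
        {
            // Peek at the message under the cursor first...
            Message peeked = queue.Peek(TimeSpan.FromSeconds(10), cursor, PeekAction.Current);
            // ...then receive it through the same cursor.
            Message received = queue.Receive(TimeSpan.FromSeconds(10), cursor);
            Console.WriteLine(received.Label);
        }
    }
}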
If you want something completely synchronous and without events, you can try this method:
public object Receive(string path, int millisecondsTimeout)
{
    var mq = new System.Messaging.MessageQueue(path);
    var asyncResult = mq.BeginReceive();
    var handles = new System.Threading.WaitHandle[] { asyncResult.AsyncWaitHandle };
    var index = System.Threading.WaitHandle.WaitAny(handles, millisecondsTimeout);
    if (index == System.Threading.WaitHandle.WaitTimeout) // timed out (WaitTimeout == 258)
    {
        mq.Close();
        return null;
    }
    var result = mq.EndReceive(asyncResult);
    return result;
}
