When invoking a WCF service asynchronously, there seem to be two ways it can be done.
1.
WcfClient _client = new WcfClient();
public void One()
{
_client.BegindoSearch("input", ResultOne, null);
}
private void ResultOne(IAsyncResult ar)
{
string data = _client.EnddoSearch(ar);
}
2.
public void Two()
{
WcfClient client = new WcfClient();
client.doSearchCompleted += TwoCompleted;
client.doSearchAsync("input");
}
void TwoCompleted(object sender, doSearchCompletedEventArgs e)
{
string data = e.Result;
}
And with the new Task<T> class we have an easy third way by wrapping the synchronous operation in a task.
3.
public void Three()
{
WcfClient client = new WcfClient();
var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
string data = task.Result;
}
They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control on what you execute before or after the result is retrieved.
Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is more preferable?
I would not use the final version because it will run the operation on a worker thread instead of an I/O thread. This is especially bad if you're doing it inside ASP.NET, where the worker threads are needed to serve requests. Not to mention, you're still blocking on the main thread waiting for the task to finish when you check its Result, so technically you're wasting two worker threads, or one worker and the UI.
The BeginXyz and XyzAsync methods for WCF clients work essentially the same way - you should choose the appropriate version based on the use case you want to support (either APM or event-driven, respectively). For example, the BeginXyz version would (perhaps counterintuitively) be easier to use within an ASP.NET (or MVC) async page, whereas the XyzAsync version would be easier to use in a Windows Forms application.
One caution about your first example: make sure you call EndDoSearch on the same instance that started the call. Either keep the original instance around in a field (as you do with _client) or pass it as the state parameter; don't create a new WcfClient just to end the operation.
But in general, I prefer option #1 because it makes it very easy to use an anonymous method to handle the result.
var client = new WcfClient();
client.BeginDoSearch("input", ar => {
var result = client.EndDoSearch(ar);
// blah blah
}, null);
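If you would rather keep a named callback instead of a closure, the state parameter mentioned above works too. A minimal sketch against the same hypothetical WcfClient proxy:
WcfClient _client = new WcfClient();
public void One()
{
    // Pass the client itself as the state object so the callback can retrieve it.
    _client.BeginDoSearch("input", ResultOne, _client);
}
private void ResultOne(IAsyncResult ar)
{
    // End the operation on the same instance that started it.
    var client = (WcfClient)ar.AsyncState;
    string data = client.EndDoSearch(ar);
}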
I have a 3rd party component that is "expensive" to spin up. This component is not thread safe. Said component is hosted inside of a WCF service (for now), so... every time a call comes into the service I have to new up the component.
What I'd like to do instead is have a pool of say 16 threads that each spin up their own copy of the component and have a mechanism to call the method and have it distributed to one of the 16 threads and have the value returned.
So something simple like:
var response = threadPool.CallMethod(param1, param2);
It's fine for the call to block until it gets a response, as I need the response to proceed.
Any suggestions? Maybe I'm overthinking it and a ConcurrentQueue serviced by 16 threads would do the job, but I'm not sure how the method's return value would get back to the caller.
WCF already uses the thread pool to manage its resources, so if you add a layer of thread management on top of that it is only going to end badly. Avoid doing that if possible, as you will get contention on your service calls.
What I would do in your situation is just use a ThreadLocal or thread static field that gets initialized with your expensive object once. Thereafter it is available to that thread pool thread.
That is assuming your object is fine on an MTA thread; I'm guessing it is from your post, since it sounds like things are currently working, just slowly.
There is a concern that too many objects get created and you use too much memory as the pool grows too large. However, see if this is the case in practice before doing anything else. This is a very simple strategy to implement, so it's easy to trial. Only get more complex if you really need to.
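A minimal sketch of that idea - MyService, MyComponent and its Process method are placeholders for your own service and component types:
public class MyService
{
    // One instance per thread pool thread, created lazily on first use and then reused
    // for every subsequent call that lands on that thread.
    private static readonly ThreadLocal<MyComponent> _component =
        new ThreadLocal<MyComponent>(() => new MyComponent());

    public string DoWork(string input)
    {
        return _component.Value.Process(input);
    }
}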
First and foremost, I agree with @briantyler: ThreadLocal<T> or thread static fields are probably what you want. Go with that as a starting point and consider other options if it doesn't meet your needs.
A complicated but flexible alternative is a singleton object pool. In its most simple form your pool type will look like this:
public sealed class ObjectPool<T>
{
private readonly ConcurrentQueue<T> __objects = new ConcurrentQueue<T>();
private readonly Func<T> __factory;
public ObjectPool(Func<T> factory)
{
__factory = factory;
}
public T Get()
{
T obj;
return __objects.TryDequeue(out obj) ? obj : __factory();
}
public void Return(T obj)
{
__objects.Enqueue(obj);
}
}
This doesn't seem awfully useful if you're thinking of type T in terms of primitive classes or structs (i.e. ObjectPool<MyComponent>), as the pool does not have any threading controls built in. But you can substitute your type T for a Lazy<T> or Task<T> monad, and get exactly what you want.
Pool initialisation:
Func<Task<MyComponent>> factory = () => Task.Run(() => new MyComponent());
ObjectPool<Task<MyComponent>> pool = new ObjectPool<Task<MyComponent>>(factory);
// "Pre-warm up" the pool with 16 concurrent tasks.
// This starts the tasks on the thread pool and
// returns immediately without blocking.
for (int i = 0; i < 16; i++) {
pool.Return(pool.Get());
}
Usage:
// Get a pooled task or create a new one. The task may
// have already completed, in which case Result will
// be available immediately. If the task is still
// in flight, accessing its Result will block.
Task<MyComponent> task = pool.Get();
try
{
MyComponent component = task.Result; // Alternatively you can "await task"
// Do something with component.
}
finally
{
pool.Return(task);
}
This method is more complex than maintaining your component in a ThreadLocal or thread static field, but if you need to do something fancy like limiting the number of pooled instances, the pool abstraction can be quite useful.
EDIT
Basic "fixed set of X instances" pool implementation with a Get which blocks once the pool has been drained:
public sealed class ObjectPool<T>
{
private readonly Queue<T> __objects;
public ObjectPool(IEnumerable<T> items)
{
__objects = new Queue<T>(items);
}
public T Get()
{
lock (__objects)
{
while (__objects.Count == 0) {
Monitor.Wait(__objects);
}
return __objects.Dequeue();
}
}
public void Return(T obj)
{
lock (__objects)
{
__objects.Enqueue(obj);
Monitor.Pulse(__objects);
}
}
}
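Hypothetical usage of that fixed pool, pre-creating 16 instances of the expensive component up front (MyComponent is again a placeholder, and System.Linq is assumed for Enumerable.Range):
var pool = new ObjectPool<MyComponent>(
    Enumerable.Range(0, 16).Select(_ => new MyComponent()).ToList());

MyComponent component = pool.Get(); // blocks if all 16 instances are checked out
try
{
    // Do something with component.
}
finally
{
    pool.Return(component);
}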
I have 30 sub-companies and every one has implemented its own web service (with different technologies).
I need to implement a web service to aggregate them. For example, all the sub-company web services have a web method named GetUserPoint(int nationalCode), and I need to implement my web service to call all of them and collect the responses (for example, the sum of the points).
This is my base class:
public abstract class BaseClass
{ // all same attributes and methods
public abstract long GetPoint(int nationalCode);
}
For each sub-company's web service, I implement a class that inherits this base class and defines its own GetPoint method.
public class Company1 : BaseClass
{
//implement own GetPoint method (call a web service).
}
to
public class CompanyN : BaseClass
{
//implement own GetPoint method (call a web service).
}
so, this is my web method:
[WebMethod]
public long MyCollector(int nationalCode)
{
BaseClass[] Clients = new BaseClass[] { new Company1(), /* ... */ new CompanyN() };
long Result = 0;
foreach (var item in Clients)
{
long ResultTemp = item.GetPoint(nationalCode);
Result += ResultTemp;
}
return Result;
}
OK, it works, but it's so slow because every sub-company's web service is hosted on a different server (on the internet).
I can use parallel programming like this (is this even called parallel programming?):
foreach (var item in Clients)
{
Tasks.Add(Task.Run(() =>
{
Result.AddRange(item.GetPoint(MasterLogId, mobileNumber));
}));
}
I think parallel programming (and threading) isn't a good fit here, because my solution is I/O-bound (not CPU intensive).
Calling every external web service is slow, am I right? Many threads would just sit there waiting for a response!
I think async programming is the best way, but I am new to async and parallel programming.
What is the best way? (Parallel.ForEach - async TAP - async APM - async EAP - threading)
Please show me an example.
It's refreshing to see someone who has done their homework.
First things first, as of .NET 4 (and this is still very much the case today) TAP is the preferred technology for async workflow in .NET. Tasks are easily composable, and for you to parallelise your web service calls is a breeze if they provide true Task<T>-returning APIs. For now you have "faked" it with Task.Run, and for the time being this may very well suffice for your purposes. Sure, your thread pool threads will spend a lot of time blocking, but if the server load isn't very high you could very well get away with it even if it's not the ideal thing to do.
You just need to fix a potential race condition in your code (more on that towards the end).
If you want to follow the best practices though, you go with true TAP. If your APIs provide Task-returning methods out of the box, that's easy. If not, it's not game over as APM and EAP can easily be converted to TAP. MSDN reference: https://msdn.microsoft.com/en-us/library/hh873178(v=vs.110).aspx
I'll also include some conversion examples here.
APM (taken from another SO question):
MessageQueue does not provide a ReceiveAsync method, but we can get it to play ball via Task.Factory.FromAsync:
public static Task<Message> ReceiveAsync(this MessageQueue messageQueue)
{
return Task.Factory.FromAsync(messageQueue.BeginReceive(), messageQueue.EndReceive);
}
...
Message message = await messageQueue.ReceiveAsync().ConfigureAwait(false);
If your web service proxies have BeginXXX/EndXXX methods, this is the way to go.
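For example, assuming the generated proxy exposes BeginGetPoint/EndGetPoint (the proxy and method names here are illustrative), the wrapper could look like this:
// Extension method (must live in a static class)
public static Task<long> GetPointAsync(this PointWebServiceClient client, int nationalCode)
{
    return Task<long>.Factory.FromAsync(client.BeginGetPoint, client.EndGetPoint, nationalCode, null);
}
...
long point = await client.GetPointAsync(123).ConfigureAwait(false);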
EAP
Assume you have an old web service proxy derived from SoapHttpClientProtocol, with only event-based async methods. You can convert them to TAP as follows:
public static Task<long> GetPointAsyncTask(this PointWebService webService, int nationalCode)
{
TaskCompletionSource<long> tcs = new TaskCompletionSource<long>();
webService.GetPointCompleted += (s, e) =>
{
if (e.Cancelled)
{
tcs.SetCanceled();
}
else if (e.Error != null)
{
tcs.SetException(e.Error);
}
else
{
tcs.SetResult(e.Result);
}
};
webService.GetPointAsync(nationalCode);
return tcs.Task;
}
...
using (PointWebService service = new PointWebService())
{
long point = await service.GetPointAsyncTask(123).ConfigureAwait(false);
}
Avoiding races when aggregating results
With regards to aggregating parallel results, your TAP loop code is almost right, but you need to avoid mutating shared state inside your Task bodies as they will likely execute in parallel. Shared state being Result in your case - which is some kind of collection. If this collection is not thread-safe (i.e. if it's a simple List<long>), then you have a race condition and you may get exceptions and/or dropped results on Add (I'm assuming AddRange in your code was a typo, but if not - the above still applies).
A simple async-friendly rewrite that fixes your race would look like this:
List<Task<long>> tasks = new List<Task<long>>();
foreach (BaseClass item in Clients) {
tasks.Add(item.GetPointAsync(MasterLogId, mobileNumber));
}
long[] results = await Task.WhenAll(tasks).ConfigureAwait(false);
If you decide to be lazy and stick with the Task.Run solution for now, the corrected version will look like this:
List<Task<long>> tasks = new List<Task<long>>();
foreach (BaseClass item in Clients)
{
Task<long> dodgyThreadPoolTask = Task.Run(
() => item.GetPoint(MasterLogId, mobileNumber)
);
tasks.Add(dodgyThreadPoolTask);
}
long[] results = await Task.WhenAll(tasks).ConfigureAwait(false);
You can create an async version of GetPoint:
public abstract class BaseClass
{ // all same attributes and methods
public abstract long GetPoint(int nationalCode);
public async Task<long> GetPointAsync(int nationalCode)
{
return await Task.Run(() => GetPoint(nationalCode)); // offload the blocking call so the calls can run concurrently
}
}
Then, collect the tasks for each client call. After that, execute all tasks using Task.WhenAll. This will execute them all in parallel. Also, as pointed out by Kirill, you can await the results of all the tasks:
var tasks = Clients.Select(x => x.GetPointAsync(nationalCode));
long[] results = await Task.WhenAll(tasks);
If you do not want to make the aggregating method async, you can collect the results by calling .Result instead of awaiting, like so:
long[] results = Task.WhenAll(tasks).Result;
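Either way, once you have the results array, the aggregation the question asked for is just a sum (using System.Linq):
long total = results.Sum();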
I'm calling a service over HTTP (ultimately using the HttpClient.SendAsync method) from within my code. This code is then called into from a WebAPI controller action. Mostly, it works fine (tests pass) but then when I deploy on say IIS, I experience a deadlock because caller of the async method call has been blocked and the continuation cannot proceed on that thread until it finishes (which it won't).
While I could make most of my methods async, I don't feel as if I have a basic understanding of when I must do this.
For example, let's say I did make most of my methods async (since they ultimately call other async service methods) how would I then invoke the first async method of my program if I built say a message loop where I want some control of the degree of parallelism?
Since HttpClient doesn't have any synchronous methods, what can I safely presume to do if I have an abstraction that isn't async-aware? I've read about ConfigureAwait(false) but I don't really understand what it does. It's strange to me that it's set after the async invocation. To me that feels like a race waiting to happen... however unlikely...
WebAPI example:
public HttpResponseMessage Get()
{
var userContext = contextService.GetUserContext(); // <-- synchronous
return ...
}
// Some IUserContextService implementation
public IUserContext GetUserContext()
{
var httpClient = new HttpClient();
var result = httpClient.GetAsync(...).Result; // <-- I really don't care if this is asynchronous or not
return new HttpUserContext(result);
}
Message loop example:
var mq = new MessageQueue();
// we then run say 8 tasks that do this
for (;;)
{
var m = mq.Get();
var c = GetCommand(m);
c.InvokeAsync().Wait();
m.Delete();
}
When you have a message loop that allows things to happen in parallel and you have asynchronous methods, there's an opportunity to minimize latency. Basically, what I want to accomplish in this instance is to minimize latency and idle time. Though I'm actually unsure how to invoke the command that's associated with the message that arrives off the queue.
To be more specific, if the command invocation needs to do service requests there's going to be latency in the invocation that could be used to get the next message. Stuff like that. I can totally do this simply by wrapping up things in queues and coordinating this myself but I'd like to see this work with just some async/await stuff.
While I could make most of my methods async, I don't feel as if I have a basic understanding of when I must do this.
Start at the lowest level. It sounds like you've already got a start, but if you're looking for more at the lowest level, then the rule of thumb is anything I/O-based should be made async (e.g., HttpClient).
Then it's a matter of repeating the async infection. You want to use async methods, so you call them with await. So that method must be async. So all of its callers must use await, so they must also be async, etc.
how would I then invoke the first async method of my program if I built say a message loop where I want some control of the degree of parallelism?
It's easiest to put the framework in charge of this. E.g., you can just return a Task<T> from a WebAPI action, and the framework understands that. Similarly, UI applications have a message loop built-in that async will work naturally with.
If you have a situation where the framework doesn't understand Task or have a built-in message loop (usually a Console application or a Win32 service), you can use the AsyncContext type in my AsyncEx library. AsyncContext just installs a "main loop" (that is compatible with async) onto the current thread.
Since the HttpClient doesn't have any synchronous methods, what can I safely presume to do if I have an abstraction that isn't async aware?
The correct approach is to change the abstraction. Do not attempt to block on asynchronous code; I describe that common deadlock scenario in detail on my blog.
You change the abstraction by making it async-friendly. For example, change IUserContext IUserContextService.GetUserContext() to Task<IUserContext> IUserContextService.GetUserContextAsync().
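A sketch of what the reshaped implementation might look like - the URL is a placeholder and HttpUserContext comes from the question's own code:
public class UserContextService : IUserContextService
{
    // Reuse a single HttpClient instead of creating one per call.
    private static readonly HttpClient httpClient = new HttpClient();

    public async Task<IUserContext> GetUserContextAsync()
    {
        // await instead of .Result: no thread is blocked, so the deadlock disappears.
        var result = await httpClient.GetAsync("http://example.org/usercontext").ConfigureAwait(false);
        return new HttpUserContext(result);
    }
}
The WebAPI action then becomes async Task<HttpResponseMessage> and awaits GetUserContextAsync().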
I've read about the ConfigureAwait(false) but I don't really understand what it does. It's strange to me that it's set after the async invocation.
You may find my async intro helpful. I won't say much more about ConfigureAwait in this answer because I think it's not directly applicable to a good solution for this question (but I'm not saying it's bad; it actually should be used unless you can't use it).
Just bear in mind that await is an operator with precedence rules and all that. It feels magical at first, but it's really not so much. This code:
var result = await httpClient.GetAsync(url).ConfigureAwait(false);
is exactly the same as this code:
var asyncOperation = httpClient.GetAsync(url).ConfigureAwait(false);
var result = await asyncOperation;
There are usually no race conditions in async code because - even though the method is asynchronous - it is also sequential. The method can be paused at an await, and it will not be resumed until that await completes.
When you have a message loop that allows things to happen in parallel and you have asynchronous methods, there's an opportunity to minimize latency.
This is the second time you've mentioned a "message loop" "in parallel", but I think what you actually want is to have multiple (asynchronous) consumers working off the same queue, correct? That's easy enough to do with async (note that there is just a single message loop on a single thread in this example; when everything is async, that's usually all you need):
await Task.WhenAll(ConsumerAsync(), ConsumerAsync(), ConsumerAsync());
async Task ConsumerAsync()
{
for (;;) // TODO: consider a CancellationToken for orderly shutdown
{
var m = await mq.ReceiveAsync();
var c = GetCommand(m);
await c.InvokeAsync();
m.Delete();
}
}
// Extension method
public static Task<Message> ReceiveAsync(this MessageQueue mq)
{
return Task<Message>.Factory.FromAsync(mq.BeginReceive, mq.EndReceive, null);
}
You'd probably also be interested in TPL Dataflow. Dataflow is a library that understands and works well with async code, and has nice parallel options built-in.
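A rough Dataflow sketch of the same consumer idea, assuming the System.Threading.Tasks.Dataflow package is referenced (shuttingDown is a made-up flag):
var worker = new ActionBlock<Message>(
    async m =>
    {
        var c = GetCommand(m);
        await c.InvokeAsync();
        m.Delete();
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 8 });

// A single reader feeds the block; the block fans the work out to at most 8 concurrent consumers.
while (!shuttingDown)
{
    worker.Post(await mq.ReceiveAsync());
}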
While I appreciate the insight from community members, it's always difficult to express the intent of what I'm trying to do; still, it's tremendously helpful to get advice about the circumstances surrounding the problem. With that, I eventually arrived at the following code.
public class AsyncOperatingContext
{
struct Continuation
{
private readonly SendOrPostCallback d;
private readonly object state;
public Continuation(SendOrPostCallback d, object state)
{
this.d = d;
this.state = state;
}
public void Run()
{
d(state);
}
}
class BlockingSynchronizationContext : SynchronizationContext
{
readonly BlockingCollection<Continuation> _workQueue;
public BlockingSynchronizationContext(BlockingCollection<Continuation> workQueue)
{
_workQueue = workQueue;
}
public override void Post(SendOrPostCallback d, object state)
{
_workQueue.TryAdd(new Continuation(d, state));
}
}
/// <summary>
/// Gets the recommended max degree of parallelism. (Your main program message loop could use this value.)
/// </summary>
public static int MaxDegreeOfParallelism { get { return Environment.ProcessorCount; } }
#region Helper methods
/// <summary>
/// Run an async task. This method will block execution (and use the calling thread as a worker thread) until the async task has completed.
/// </summary>
public static T Run<T>(Func<Task<T>> main, int degreeOfParallelism = 1)
{
var asyncOperatingContext = new AsyncOperatingContext();
asyncOperatingContext.DegreeOfParallelism = degreeOfParallelism;
return asyncOperatingContext.RunMain(main);
}
/// <summary>
/// Run an async task. This method will block execution (and use the calling thread as a worker thread) until the async task has completed.
/// </summary>
public static void Run(Func<Task> main, int degreeOfParallelism = 1)
{
var asyncOperatingContext = new AsyncOperatingContext();
asyncOperatingContext.DegreeOfParallelism = degreeOfParallelism;
asyncOperatingContext.RunMain(main);
}
#endregion
private readonly BlockingCollection<Continuation> _workQueue;
public int DegreeOfParallelism { get; set; }
public AsyncOperatingContext()
{
_workQueue = new BlockingCollection<Continuation>();
}
/// <summary>
/// Initialize the current thread's SynchronizationContext so that work is scheduled to run through this AsyncOperatingContext.
/// </summary>
protected void InitializeSynchronizationContext()
{
SynchronizationContext.SetSynchronizationContext(new BlockingSynchronizationContext(_workQueue));
}
protected void RunMessageLoop()
{
while (!_workQueue.IsCompleted)
{
Continuation continuation;
if (_workQueue.TryTake(out continuation, Timeout.Infinite))
{
continuation.Run();
}
}
}
protected T RunMain<T>(Func<Task<T>> main)
{
var degreeOfParallelism = DegreeOfParallelism;
if (!(1 <= degreeOfParallelism && degreeOfParallelism <= 5000)) // sanity check
{
throw new ArgumentOutOfRangeException("DegreeOfParallelism", "DegreeOfParallelism must be between 1 and 5000.");
}
var currentSynchronizationContext = SynchronizationContext.Current;
InitializeSynchronizationContext(); // must set SynchronizationContext before main() task is scheduled
var mainTask = main(); // schedule "main" task
mainTask.ContinueWith(task => _workQueue.CompleteAdding());
// for single threading we don't need worker threads so we don't use any
// otherwise (for increased parallelism) we simply launch X worker threads
if (degreeOfParallelism > 1)
{
for (int i = 1; i < degreeOfParallelism; i++)
{
ThreadPool.QueueUserWorkItem(_ => {
// do we really need to restore the SynchronizationContext here as well?
InitializeSynchronizationContext();
RunMessageLoop();
});
}
}
RunMessageLoop();
SynchronizationContext.SetSynchronizationContext(currentSynchronizationContext); // restore
return mainTask.Result;
}
protected void RunMain(Func<Task> main)
{
// The return value doesn't matter here
RunMain(async () => { await main(); return 0; });
}
}
This class is complete, and it does a couple of things that I initially found difficult to grasp.
As general advice, you should allow the TAP (task-based asynchronous pattern) to propagate through your code. This may imply quite a bit of refactoring (or redesign). Ideally you should be able to break this up into pieces and make progress as you work towards the overall goal of making your program more asynchronous.
Something that's inherently dangerous to do is to call asynchronous code carelessly in a synchronous fashion. By this I mean calling Wait or reading Result. These can lead to deadlocks. One way to work around something like that is to use the AsyncOperatingContext.Run method. It will use the current thread to run a message loop until the asynchronous call is complete. It will temporarily swap out whatever SynchronizationContext is associated with the current thread to do so.
Note: I don't know if this is enough, or whether you are allowed to swap back the SynchronizationContext this way; assuming that you can, this should work. I've already been bitten by the ASP.NET deadlock issue, and this could possibly function as a workaround.
Lastly, I found myself asking the question, what is the corresponding equivalent of Main(string[]) in an async context? Turns out that's the message loop.
What I've found is that there are two things that make up this async machinery: SynchronizationContext.Post and the message loop. In my AsyncOperatingContext I provide a very simple message loop:
protected void RunMessageLoop()
{
while (!_workQueue.IsCompleted)
{
Continuation continuation;
if (_workQueue.TryTake(out continuation, Timeout.Infinite))
{
continuation.Run();
}
}
}
My SynchronizationContext.Post thus becomes:
public override void Post(SendOrPostCallback d, object state)
{
_workQueue.TryAdd(new Continuation(d, state));
}
And our entry point, basically the equivalent of an async main from synchronous context (simplified version from original source):
SynchronizationContext.SetSynchronizationContext(new BlockingSynchronizationContext(_workQueue));
var mainTask = main(); // schedule "main" task
mainTask.ContinueWith(task => _workQueue.CompleteAdding());
RunMessageLoop();
return mainTask.Result;
All of this is costly and we can't just go and replace every call to an async method with this, but it does allow us to quickly create the facilities required to keep writing async code where needed without having to deal with the whole program at once. It's also very clear from this implementation where the worker threads go and how they impact the concurrency of your program.
I look at this and think to myself, yep, that's how Node.js does it - though JavaScript doesn't have the nice async/await language support that C# currently does.
As an added bonus, I have complete control over the degree of parallelism, and if I want, I can run my async tasks completely single-threaded. Though if I do so and call Wait or Result on any task, it will deadlock the program, because it will block the only message loop available.
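For completeness, this is roughly how a synchronous Main would call into the class above (the body of the lambda is made up for illustration):
static void Main(string[] args)
{
    int result = AsyncOperatingContext.Run(
        async () =>
        {
            // From here on we are in the "async main"; awaits are serviced by the message loop(s).
            await Task.Delay(100);
            return 42;
        },
        degreeOfParallelism: AsyncOperatingContext.MaxDegreeOfParallelism);

    Console.WriteLine(result);
}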
I am writing a simple Silverlight application and WCF Service.
I want to create a synchronous method that returns a value.
The method itself calls an asynchronous method from the WCF service. After calling the asynchronous method, I want to get its value and return it to the caller.
I hear that Rx can solve this kind of problem.
This is my code :
private void btnCreate_Click(object sender, RoutedEventArgs e)
{
string myResult = getMyBook(txtBookName.Text);
MessageBox.Show("Result\n" + myResult);
// myResult will be use for another purpose here..
}
// I want this method to be callable from anywhere, as long as the caller is in the same namespace.
public string getMyBook(string bookName)
{
Servo.ServoClient svc = new ServoClient();
string returnValue = "";
var o = Observable.FromEventPattern<GetBookCompletedEventArgs>(svc, "GetBookCompleted");
o.Subscribe(
b => returnValue = b.EventArgs.Result
);
svc.GetBookAsync(bookName);
return returnValue;
}
When I click btnCreate, the myResult variable is still empty. Is there something wrong with my code? Or maybe I just don't understand the Rx concept? I am new to Rx.
My goal is: I need to get the result (the myResult variable) from the asynchronous method and then use it in later code.
This is more suited to the async/await keywords than Rx. Rx is primarily for managing streams of data, whereas in this case all you want to do is manage an asynchronous call synchronously. You could try using Rx like so:
public string getMyBook(string bookName)
{
Servo.ServoClient svc = new ServoClient();
svc.GetBookAsync(bookName);
var o = Observable.FromEventPattern<GetBookCompletedEventArgs>(svc, "GetBookCompleted");
return o.First().EventArgs.Result;
}
However, if GetBookAsync raises the event before you subscribe, the thread will block forever. You could mess around with .Replay() and .Connect() but you should probably just use async/await!
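If you do go the async/await route, the event-based proxy can be bridged with a TaskCompletionSource; GetBookCompleted and ServoClient come from the question, the extension method itself is a sketch:
// Extension method (must live in a static class)
public static Task<string> GetBookTaskAsync(this Servo.ServoClient svc, string bookName)
{
    var tcs = new TaskCompletionSource<string>();
    svc.GetBookCompleted += (s, e) =>
    {
        if (e.Error != null) tcs.TrySetException(e.Error);
        else if (e.Cancelled) tcs.TrySetCanceled();
        else tcs.TrySetResult(e.Result);
    };
    svc.GetBookAsync(bookName);
    return tcs.Task;
}
...
string myResult = await svc.GetBookTaskAsync(txtBookName.Text);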
Remember that GetBookAsync returns immediately, so getMyBook returns whatever is stored in returnValue at that moment (still an empty string). By the time the data arrives, returnValue is out of scope, and btnCreate_Click will have long finished.
You could use await on GetBookAsync so that it waits for the data to arrive before continuing. Don't forget that this also means the method needs the async modifier.
Not a great example or use of either Rx or await, but trying is how we learn!
This is the model for a 3-layer application with asynchronous responses that I am trying to use:
GUI
Back-end
Remote Server
GUI:
private async void readFloatButton_Click(object sender, RoutedEventArgs e)
{
FloatValueLabel.Content = (await srvProtocol.ReadFloat("ReadFloatX")).ToString();
}
Back-end (uses internal variables _binReader and connection)
public Task<float> ReadFloat(string cmd)
{
var tcs1 = new TaskCompletionSource<float>();
var floatTask = tcs1.Task;
RegisterResponse(cmd,
() => tcs1.SetResult(_binReader.ReadSingle())
);
// We are ready: now send request to server(assuming that this is a quick operation)
connection.Send(cmd);
return floatTask;
}
Here RegisterResponse adds a <Key, Action> pair to some thread-safe dictionary.
Another worker thread reads message headers from the network stream and invokes the registered action matching the string cmd found in the message header.
Now the question:
is it possible to reach the same level of conciseness when I need to subscribe GUI elements to periodic updates (not a single callback)?
Well, you could just take what you have and put it into a loop, e.g.:
private async void StartReading()
{
while (true)
{
FloatValueLabel.Content = (await srvProtocol.ReadFloat("ReadFloatX")).ToString();
await Task.Delay(1000);
}
}
Though I personally am not fond of the "infinite async" loop. The only way to stop the loop is to throw an exception (error or cancellation).
The Rx library is more suited for subscriptions. It allows a high level of "conciseness", but has a steeper learning curve.
You can also wrap all the subscription logic into a Model/ViewModel. This particularly makes sense when dealing with "updating" the same conceptual value. In that case, StartReading would be a method on your Model/ViewModel. It would use async (for the infinite async loop) or ObserveOn (for Rx) to bring the updates onto the UI thread, and then use the standard INotifyPropertyChanged / ObservableCollection to update the UI indirectly.