I'm trying to develop an extension for Microsoft Edge based on native messaging, and the official guide provides an example. In it, access to the dictionaries of AppServiceConnections and their Deferrals is synchronized in the OnBackgroundActivated method, but there is no such synchronization in the other event handling methods...
So my question is about UWP App Service threading model. Is it guaranteed that only one event handling method can be performed at a time? Or should I provide a correct synchronization of access to my data?
Is AppServiceConnection thread safe? Can I use SendMessageAsync from different threads at the same time? Or should I synchronize its usage?
I guess your issue is that you didn't see the lock keyword (which is used for thread synchronization) inside event handlers like OnAppServiceRequestReceived, OnAppServicesCanceled and so on, and you're not sure whether you should add it yourself.
I think the answer is no, you don't need to. The lock inside OnBackgroundActivated ensures that the correct desktopBridgeConnectionIndex or connectionIndex is set. The absence of lock inside those event handlers does not mean that a handler can only be triggered one at a time. For one app service, if client A is connected to the app service and, at the same time, another client B asks for the same app service, the app service will spin up another instance of the same background task. So client A's app service connection has no side effect on client app B. In other words, each app service connection has its own instance, and messages sent over one app service connection have no influence on the others. You may reference this video for more details about app services; the app service part starts around the 25th minute.
If you check the code snippet inside the event handler, you will see lines that determine which app service connection the request came from, for example this.desktopBridgeConnection = desktopBridgeConnections[this.currentConnectionIndex]. You will send messages to the correct AppServiceConnection, and this should be thread safe. If you run into an actual thread safety issue when doing this, you could post a question with the testing details.
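For illustration, here is a trimmed sketch of the kind of bookkeeping done in OnBackgroundActivated (the field and index names here are illustrative, not necessarily the sample's exact ones). The lock only serializes the assignment of a new connection index and the storage of the connection and deferral; each connection's own handlers then work against their own entry:

using System.Collections.Generic;
using Windows.ApplicationModel.Activation;
using Windows.ApplicationModel.AppService;
using Windows.ApplicationModel.Background;

private static readonly object thisLock = new object();
private static readonly Dictionary<int, AppServiceConnection> connections = new Dictionary<int, AppServiceConnection>();
private static readonly Dictionary<int, BackgroundTaskDeferral> deferrals = new Dictionary<int, BackgroundTaskDeferral>();
private static int nextConnectionIndex;

protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
{
    var details = args.TaskInstance.TriggerDetails as AppServiceTriggerDetails;
    if (details == null) return;

    // Several clients can activate the service at the same time, so index
    // assignment and the dictionary updates are serialized with a lock.
    lock (thisLock)
    {
        int index = nextConnectionIndex++;
        connections[index] = details.AppServiceConnection;
        deferrals[index] = args.TaskInstance.GetDeferral();
    }

    // Each connection raises its own events against its own dictionary entry,
    // which is why the handlers themselves don't need a lock.
    details.AppServiceConnection.RequestReceived += OnAppServiceRequestReceived;
}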
I have a question about running synchronous operations in an API design. I fully understand that running asynchronously is the correct approach in scenarios like this; this is more about conceptual understanding. Suppose I have an Angular front end where the user can call an API endpoint with Subscribe, passing a list of CustomerIds. In the API, a synchronous ActionResult is called, which iterates through these customer ids and makes a synchronous database call for each one that can take a good amount of time to complete.
The question I have is: does that actually lock up the service? If another user takes an action in the same UI against the backend service, will the service not respond until the other user's action is complete? I am just trying to wrap my head around the concept: does the service lock up in that case, leaving the UI unresponsive? Multithreaded programming is of course the way to handle this; I'm just asking conceptually.
Thanks for input.
What you are missing is:
Angular runs on the client side, which means every user has their own instance running on their own system; if that local instance fails, only that particular client's instance fails.
If the backend is asynchronous, it won't stop either; it may fail that particular request and respond accordingly (if all errors have been handled). In the worst-case scenario, even if the server process dies, it gets restarted, since it is usually monitored by a process manager like pm2, which takes almost no time to restart it.
So basically nothing gets blocked unless there is an issue on your server and the server stops responding.
I have a Work Tracker WPF application deployed on Windows Server 2008, and this Tracker application communicates with a (Tracker) Windows service via a WCF service.
A user can create, edit, add, delete or cancel any work entry from the Work Tracker GUI application. Internally it sends a request to the Windows service. The Windows service receives the work request and processes it using multithreading. Each work request entry actually creates n work files (based on work priority) in an output folder location.
So each work request takes some time to complete the work addition process.
Now my question is: if I cancel the work entry that is currently being created, I want to stop the Windows service's current work at RUNTIME. The thread that is currently creating output files for that work item should be STOPPED, all the threads should be killed, and all the thread resources should be released as soon as the user requests a CANCEL.
My workaround:
I use the Windows Service OnCustomCommand method to send custom values to the Windows service at runtime. What happens currently is that the service finishes processing the current work item on the current thread (i.e. creating the output files for the work item received) and only then reaches the custom command handling for cancelling the request.
Is there any way to make the work item request stop as soon as we receive the custom command?
Any work around is much appreciated.
Summary
You are essentially talking about running a task host for long running tasks, and being able to cancel those tasks. Your specific question seems to want to know the best way to implement this in .NET. Your architecture is good, although you are brave to roll your own rather than using existing frameworks, and you haven't mentioned scaling your architecture later.
My preference is for using the TPL Task object. It supports cancellation, and is easy to poll for progress, etc. You can only use this in .NET 4 onwards.
It is hard to provide code without basically designing a whole job hosting engine for you and knowing your .NET version. I have described the steps in detail below, with references to example code.
Your approach of using the Windows Service OnCustomCommand is fine, you could also use a messaging service (see below) if you have that option for client-service comms. This would be more appropriate for a scenario where you have many clients talking to a central job service, and the job service is not on the same machine as the client.
Running and cancelling tasks on threads
Before we look at your exact context, it would be good to review MSDN - Asynchronous Programming Patterns. There are three main .NET patterns to run and cancel jobs on threads, and I list them in order of preference for use:
TAP: Task-based Asynchronous Pattern
Based on Task, which has been available only since .NET 4
The preferred way to run and control any thread-based activity from .NET 4 onwards
Much simpler to implement than EAP
EAP: Event-based Asynchronous Pattern
Your only option if you don't have .NET 4 or later.
Hard to implement, but once you have understood it you can roll it out and it is very reliable to use
APM: Asynchronous Programming Model
No longer relevant unless you maintain legacy code or use old APIs.
Even with .NET 1.1 you can implement a version of EAP, so I will not cover this as you say you are implementing your own solution
The architecture
Imagine this like a REST based service.
The client submits a job, and gets returned an identifier for the job
A job engine then picks up the job when it is ready, and starts running it
If the client doesn't want the job any more, then they delete the job using its identifier
This way the client is completely isolated from the workings of the job engine, and the job engine can be improved over time.
The job engine
The approach is as follows:
For a submitted task, generate a universal identifier (UID) so that you can:
Identify a running task
Poll for results
Cancel the task if required
return that UID to the client
queue the job using that identifier
when you have resources, run the job by creating a Task
store the Task in a dictionary against the UID as a key
When the client wants results, they send the request with the UID and you return progress by checking against the Task that you retrieve from the dictionary. If the task is complete they can then send a request for the completed data, or in your case just go and read the completed files.
When they want to cancel they send the request with the UID, and you cancel the Task by finding it in the dictionary and telling it to cancel.
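As a rough, non-definitive sketch of that bookkeeping (the JobEngine class and its members are mine, not from any framework), assuming .NET 4 and the TAP approach:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class JobEngine
{
    private class JobEntry
    {
        public Task Task;
        public CancellationTokenSource Cancellation;
    }

    private readonly ConcurrentDictionary<Guid, JobEntry> jobs =
        new ConcurrentDictionary<Guid, JobEntry>();

    // Submit: generate the UID, start the Task, remember both against the UID.
    public Guid Submit(Action<CancellationToken> work)
    {
        var id = Guid.NewGuid();
        var cts = new CancellationTokenSource();
        var task = Task.Factory.StartNew(() => work(cts.Token), cts.Token);
        jobs[id] = new JobEntry { Task = task, Cancellation = cts };
        return id; // hand this back to the client
    }

    // Poll: the client asks for progress/completion by UID.
    public TaskStatus? GetStatus(Guid id)
    {
        JobEntry entry;
        return jobs.TryGetValue(id, out entry) ? entry.Task.Status : (TaskStatus?)null;
    }

    // Cancel: find the entry by UID and signal its token.
    public void Cancel(Guid id)
    {
        JobEntry entry;
        if (jobs.TryGetValue(id, out entry))
            entry.Cancellation.Cancel();
    }
}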
Cancelling inside a job
Inside your code you will need to regularly check your cancellation token to see if you should stop running code (see How do I abort/cancel TPL Tasks? if you are using the TAP pattern, or Albahari if you are using EAP). At that point you will exit your job processing, and your code, if designed well, should dispose of IDisposables where required, remove big strings from memory, etc.
The basic premise of cancellation is that you check your cancellation token:
After a block of work that takes a long time (e.g. a call to an external API)
Inside a loop (for, foreach, do or while) that you control, you check on each iteration
Within a long block of sequential code, that might take "some time", you insert points to check on a regular basis
You need to define how quickly you need to react to a cancellation - for a windows service it should be within milliseconds, preferably, to make sure that windows doesn't have problems restarting or stopping the service.
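A minimal sketch of what that checking might look like inside the file-producing loop - the WorkRequest, WriteOutputFile and CleanUpPartialFile names are placeholders for your own code:

using System.Threading;

private void CreateOutputFiles(WorkRequest request, CancellationToken token)
{
    foreach (var item in request.Items)          // check on every iteration of the loop
    {
        token.ThrowIfCancellationRequested();    // throws OperationCanceledException and moves the Task to Canceled

        WriteOutputFile(item);                   // one "block of work that takes a long time"

        if (token.IsCancellationRequested)       // an extra check after the slow block
        {
            CleanUpPartialFile(item);            // dispose/remove whatever is half-done
            token.ThrowIfCancellationRequested();
        }
    }
}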
Some people do this whole process with threads, and by terminating the thread - this is ugly and not recommended any more.
Reliability
You need to ask: what happens if your server restarts, the windows service crashes, or any other exception happens causing you to lose incomplete jobs? In this case you may want a queue architecture that is reliable in order to be able to restart jobs, or rebuild the queue of jobs you haven't started yet.
If you don't want to scale, this is simple - use a local database that the windows service stores job information in.
On submission of a job, record its details in the database
When you start a job, record that against the job record in the database
When the client collects the job, mark it for delayed garbage collection in the database, and then delete it after a set amount of time (1 hour, 1 day ...)
If your service restarts and there are "in progress jobs" then requeue them and then start your job engine again.
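For example, the restart path could look roughly like this, inside your ServiceBase.OnStart override - jobStore and jobEngine here are hypothetical members of your service, not an existing API:

protected override void OnStart(string[] args)
{
    // Anything the database recorded as "in progress" was cut off by the previous
    // shutdown or crash, so put it back on the queue before the engine starts.
    foreach (var job in jobStore.GetJobsByState(JobState.InProgress))
    {
        jobStore.SetState(job.Id, JobState.Queued);
    }

    // Delayed garbage collection of jobs the client has already collected.
    jobStore.DeleteCollectedJobsOlderThan(TimeSpan.FromDays(1));

    jobEngine.Start();
}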
If you do want to scale, or your clients are on many computers, and you have a job engine "farm" of 1 or more servers, then look at using a message queue instead of directly communicating using OnCustomCommand.
Message queues have multiple benefits. They allow you to reliably submit jobs to a central queue that many workers can then pick up and process, and they decouple your clients and servers so you can scale out your job-running services. They ensure jobs are reliably submitted and processed in a highly decoupled fashion, and this can work locally or globally, but always reliably. You can even combine this with running your windows service on cloud workers, which you can scale dynamically.
Examples of technologies are MSMQ (if you want to maintain your own, or must stay inside your own firewall) or Windows Azure Service Bus (WASB), which is cheap and already done for you. In either case you will want to use Patterns and Best Practices for Enterprise Integration. In the case of WASB there are many developer resources (MSDN, MSDN samples for BrokeredMessaging etc., the new Task-based API) and NuGet packages for you to use.
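As an illustration of the MSMQ option (the queue path is a placeholder, and the service-side read would normally live inside your job host):

using System.Messaging;   // reference System.Messaging.dll

static class WorkRequestQueue
{
    private const string QueuePath = @".\Private$\WorkRequests";   // placeholder queue name

    // Client side: submit a work request instead of calling OnCustomCommand.
    public static void Submit(string serializedRequest)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.Send(serializedRequest, "New work request");
        }
    }

    // Service side: block until a request arrives and return its body.
    public static string ReceiveNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive();   // or BeginReceive for asynchronous reads
            return (string)message.Body;
        }
    }
}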
I've got a few instances of the same class. During the class's lifetime, every method call on a given instance should be executed on the same thread, but each instance needs its own thread.
I thought about the ThreadPool, but it seems that it gives me too little control.
How can I reuse a thread without using the ThreadPool?
Thank you! Martin
Edit (why I need this):
I have to use a Win32 DLL to access the business logic of a third-party product. This DLL is not designed for a multi-threaded environment like a web application. When I run my ASP.NET MVC application in ASP Classic Mode (STA thread), everything works fine so far. But the problem is that all users block each other. The component also maintains some state: as soon as a different thread accesses it, it no longer recognizes the connection handle I have to pass in for each method call (I got the connection handle after a logon procedure). I want to put my web application back into MTA mode and use a worker concept, assigning about 10 users to a worker (so at most 10 users block each other). One worker should always use the same thread to execute the API calls so the component doesn't stumble.
I'm not happy with this situation, but I have to find an acceptable solution.
Update - Found a Solution:
Thanks to the "Smart Thread Pool" from Ami Bar I could accomplish the behavior I was looking for (easily). For each worker, I have now my own thread pool instance with a max and min number of one thread. Well, it's not the idea of a thread pool, but it makes it very easy to handle the work-items and it also has some nice other featrues. The web application is running on MTA now.
I'm going to prepare some load tests to see if its stable over hours.
see here: http://www.codeproject.com/Articles/7933/Smart-Thread-Pool
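For reference, the same "one dedicated thread per worker" behavior can also be sketched with only BCL types and no third-party pool (requires .NET 4 for BlockingCollection; the class and member names here are mine, not from any library):

using System;
using System.Collections.Concurrent;
using System.Threading;

// One long-lived thread per worker; every queued action runs on that same thread.
public sealed class SingleThreadWorker : IDisposable
{
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
    private readonly Thread thread;

    public SingleThreadWorker()
    {
        thread = new Thread(Run) { IsBackground = true };
        thread.SetApartmentState(ApartmentState.STA); // the legacy Win32/COM component wants an STA thread
        thread.Start();
    }

    public void Enqueue(Action work)
    {
        queue.Add(work);
    }

    private void Run()
    {
        foreach (var work in queue.GetConsumingEnumerable())
        {
            work(); // always executed on this one thread, in submission order
        }
    }

    public void Dispose()
    {
        queue.CompleteAdding();   // lets Run() drain the queue and exit
        thread.Join();
        queue.Dispose();
    }
}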
I've made a project that uses TCP socket connectivity (our own closed protocol) and added background connectivity with the Network Trigger API as described here (starting from page 17): a StreamSocket control channel registration block and an IBackgroundTask class that should be fired each time the socket receives something.
I have tried everything to debug the code in the background task, to no avail:
closing the visible app with a gesture
locking the screen
loading some other heavy application to make Windows suspend my app
None of these helped me make the background task run (so I could debug it) when a socket message arrives. What am I doing wrong? Do I have to get a separate suspendable device, like a WinRT tablet, to get this working?
By default, referenced projects are not added to the main one. This is not as obvious as it may seem, and that's why I spent almost a week figuring it out. So the clue is: check that your referenced projects are accessible.
Upd:
There are some more things to deal with, as I've found out during development. Some of them are not as clear as they need to. Here is a list of what I did:
Add background project to main project's references (right click on references node in solution browser).
Check that the main project's manifest contains the right declaration (background task with control channel, the right background entry point name with the full package, $targetnametoken$.exe as the executable)
A thing that follows from #1: all the entities you plan to use from the background task should be put into a separate project in the solution. This project is then referenced by both the main and the background projects.
Make sure BackgroundExecutionManager.RequestAccessAsync() is called before registering the ControlChannelTrigger (a sketch of the overall order follows this list)
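Here is a rough sketch of the order of operations from the list above - the task name, entry point, channel id, host and port are placeholders, not the sample's values:

using System.Threading.Tasks;
using Windows.ApplicationModel.Background;
using Windows.Networking;
using Windows.Networking.Sockets;

private async Task SetUpControlChannelAsync()
{
    // Item 4 from the list: request background access before registering anything.
    await BackgroundExecutionManager.RequestAccessAsync();

    // Server keep-alive interval in minutes; "channelOne" is a placeholder id.
    var channel = new ControlChannelTrigger("channelOne", 15);

    // Item 2: Name/TaskEntryPoint must match the manifest declaration.
    var builder = new BackgroundTaskBuilder
    {
        Name = "SocketBackgroundTask",
        TaskEntryPoint = "MyApp.Background.SocketTask"
    };
    builder.SetTrigger(channel.PushNotificationTrigger);
    builder.Register();

    // Attach the transport to the trigger before connecting.
    var socket = new StreamSocket();
    channel.UsingTransport(socket);

    await socket.ConnectAsync(new HostName("example.com"), "4530");

    // Completes the registration so the background task can fire on incoming data.
    channel.WaitForPushEnabled();
}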
A key thing I've found just in one small comment in a sample project:
// IMPORTANT: When using winRT based transports such as StreamWebSocket with the ControlChannelTrigger,
// we have to use the raw async pattern for handling reads instead of the await model.
// Using the raw async pattern allows Windows to synchronize the PushNotification task's
// IBackgroundTask::Run method with the return of the receive completion callback.
// The Run method is invoked after the completion callback returns. This ensures that the app has
// received the data/errors before the Run method is invoked.
// It is important to note that the app has to post another read before it returns control from the completion callback.
// It is also important to note that the DataReader is not directly used with the
// StreamWebSocket transport since that breaks the synchronization described above.
// It is not supported to use DataReader's LoadAsync method directly on top of the transport. Instead,
// the IBuffer returned by the transport's ReadAsync method can be later passed to DataReader::FromBuffer()
// for further processing.
More info here - http://code.msdn.microsoft.com/windowsapps/ControlChannelTrigger-91f6bed8/sourcecode?fileId=57961&pathId=2085431229
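In practice that comment boils down to something like the following sketch - the socket parameter and HandleIncomingData are placeholders, and error handling is omitted:

using Windows.Foundation;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

// Post a read using the raw async pattern (Completed callback instead of await),
// and post the next read before returning from the completion callback.
private void PostSocketRead(StreamWebSocket socket, uint maxBufferSize)
{
    var buffer = new Windows.Storage.Streams.Buffer(maxBufferSize);

    IAsyncOperationWithProgress<IBuffer, uint> readOp =
        socket.InputStream.ReadAsync(buffer, maxBufferSize, InputStreamOptions.Partial);

    readOp.Completed = (asyncInfo, status) =>
    {
        if (status == AsyncStatus.Completed)
        {
            IBuffer result = asyncInfo.GetResults();

            // Post the next read before doing anything else, as the comment requires.
            PostSocketRead(socket, maxBufferSize);

            // Hand the buffer to DataReader.FromBuffer(result) later for parsing;
            // don't call DataReader.LoadAsync directly on the transport.
            HandleIncomingData(result);   // placeholder for your own parsing
        }
    };
}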
If you did everything properly, debugging background tasks is straightforward. Just put a breakpoint and go; it doesn't matter whether the main project is running or suspended.
PS: if the project is suspended, be careful about calls that touch the UI thread (especially awaited ones) - they won't run until the app is running again, and will just wait.
I’m looking for the best way of using threads considering scalability and performance.
In my site I have two scenarios that need threading:
UI trigger: for example, the user clicks a button and the server should read data from the DB and send some emails. Those actions take time and I don't want the user's request to be delayed. This scenario happens very frequently.
Background service: when the app starts, it triggers a thread that runs every 10 min, reads from the DB and sends emails.
The solutions I found:
A. Use thread pool - BeginInvoke:
This is what I use today for both scenarios.
It works fine, but it uses the same threads that serve the pages, so I think I may run into scalability issues - can this become a problem?
B. No use of the pool – ThreadStart:
I know starting a new thread takes more resources than using a thread pool.
Can this approach work better for my scenarios?
What is the best way to reuse the opened threads?
C. Custom thread pool:
Because my scenarios occur frequently, maybe the best way is to start a new thread pool?
Thanks.
I would personally put this into a different service. Make your UI action write to the database, and have a separate service which either polls the database or reacts to a trigger, and sends the emails at that point.
By separating it into a different service, you don't need to worry about AppDomain recycling etc - and you can put it on an entire different server if and when you want to. I think it'll give you a more flexible solution.
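A minimal sketch of that separate service - EmailQueueRepository and SendEmail are placeholders for your own data access and mail code:

using System;
using System.ServiceProcess;
using System.Timers;

// The UI action only writes a row; this service polls the table and does the slow work.
public class EmailDispatchService : ServiceBase
{
    private readonly Timer timer = new Timer(TimeSpan.FromMinutes(10).TotalMilliseconds);

    protected override void OnStart(string[] args)
    {
        timer.Elapsed += (sender, e) => ProcessPendingEmails();
        timer.Start();
    }

    protected override void OnStop()
    {
        timer.Stop();
    }

    private void ProcessPendingEmails()
    {
        foreach (var pending in EmailQueueRepository.GetUnsent())   // rows written by the UI action
        {
            SendEmail(pending);                                     // the slow part, off the web server entirely
            EmailQueueRepository.MarkSent(pending.Id);
        }
    }
}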
I do this kind of thing by calling a webservice, which then calls a method using a delegate asynchronously. The original webservice call returns a Guid to allow tracking of the processing.
For the first scenario use ASP.NET Asynchronous Pages. Async pages are a very good choice when it comes to scalability, because during async execution the HTTP request thread is released and can be re-used.
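For instance, an asynchronous page built with RegisterAsyncTask releases the request thread while the slow call is in flight. This is only a sketch: the page needs Async="true" in its @ Page directive, and the downstream URL stands in for the slow DB/email work:

using System;
using System.Net;
using System.Web.UI;

public class SendEmailsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, TimeoutWork, null));
    }

    private IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        // The request thread goes back to the pool while this I/O is outstanding.
        var request = (HttpWebRequest)WebRequest.Create("http://backend.example/send");
        return request.BeginGetResponse(cb, request);
    }

    private void EndWork(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        using (request.EndGetResponse(ar))
        {
            // Read or report the result here.
        }
    }

    private void TimeoutWork(IAsyncResult ar)
    {
        // Called if the work exceeds the page's AsyncTimeout.
    }
}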
I agree with Jon Skeet that for the second scenario you should use a separate service - a Windows service is a good choice here.
Out of your three solutions, don't use BeginInvoke. As you said, it will have a negative impact on scalability.
Between the other two, if the tasks are truly background and the user isn't waiting for a response, then a single, permanent thread should do the job. A thread pool makes more sense when you have multiple tasks that should be executing in parallel.
However, keep in mind that web servers sometimes crash, AppPools recycle, etc. So if any of the queued work needs to be reliably executed, then moving it out of process is probably a better idea (such as into a Windows Service). One way of doing that, which preserves the order of requests and maintains persistence, is to use Service Broker. You write the request to a Service Broker queue from your web tier (with an async request), and then read those messages from a service running on the same machine or a different one. You can also scale nicely that way by simply adding more instances of the service (or more threads in it).
In case it helps, I walk through using both a background thread and Service Broker in detail in my book, including code examples: Ultra-Fast ASP.NET.