I have a 4-tier .NET application which consists of:
a Silverlight 5 client
an MVC4 Web API controller (supplying data to the SL5 client)
a Windows service, responsible for the majority of the data processing
Oracle DB storage
The workflow is simple: the SL5 client sends a request to the REST service, and the REST service simply stores it in the DB.
The Windows service, which periodically polls the DB for new records, detects the new records and attempts to process them accordingly. Once finished, it updates the records and their status in the DB.
In the meantime, the SL5 client also periodically polls the DB to see whether the records have been processed. When they are, the result is retrieved and rendered on the screen.
So the question here is the following:
Is there a difference between spawning the same processing code (currently in the Windows service) in a new discrete process, right out of the Web API controller, versus keeping it as is in the Windows service?
Aside from removing the constant DB polling that the Windows service does, it simplifies processing greatly, because the work can be done per request as requests arrive from the client. But are there any other drawbacks? Perhaps server load or other issues with IIS?
Yes there is a difference.
Windows services are the right tool for asynchronous processing. Operations can take a long time without producing strange side effects; after all, it is a continuously running service.
IIS, on the other hand, processes requests using a thread pool. Long-running tasks have the potential to exhaust that thread pool, which may cause problems depending on the number of background tasks you start. Also, IIS makes no guarantee to keep long-running tasks alive. If the web site is recycled, which happens regularly in a default IIS installation, your background task may die.
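If you do decide to run background work inside IIS anyway, here is a minimal sketch of one mitigation, assuming .NET 4.5.2+ is available: HostingEnvironment.QueueBackgroundWorkItem registers the work with ASP.NET so the runtime signals it and waits briefly (roughly 30 seconds) before recycling, instead of killing it silently. The controller and DTO names are hypothetical.

```csharp
using System.Threading;
using System.Web.Hosting;
using System.Web.Http;

public class RequestsController : ApiController
{
    // ProcessingRequest is a hypothetical DTO carrying the client's job details.
    public void Post(ProcessingRequest request)
    {
        // Registering the work item tells ASP.NET about it; on shutdown the
        // runtime signals the CancellationToken and delays recycling briefly.
        HostingEnvironment.QueueBackgroundWorkItem(token => ProcessRecord(request, token));
    }

    private static void ProcessRecord(ProcessingRequest request, CancellationToken token)
    {
        // ... long-running processing, checking token.IsCancellationRequested ...
    }
}

public class ProcessingRequest { /* job details */ }
```

This only softens the problem; a recycle can still cut the work short, which is why the Windows service remains the safer home for it.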
Related
This question relates to an ASP.NET 4.0, IIS-based Azure cloud service:
I need to know the right number of IOCP threads to set for a production web service where we make 10-20K remote calls/sec.
I also need to know the right number of worker threads to set for the production web service, especially to handle 10-20K API calls/sec, particularly in bursts.
Basically, I am facing an issue where each of my cloud service VMs should handle 10-20K requests/sec, but cannot do so due to an ASP.NET thread pool problem.
My production service does nothing but fetch data from Redis and return it.
Assuming the code is efficient and there is enough hardware, i.e. there are no issues related to memory, CPU, or network:
1. You should try to keep the IOCP threads to a minimal number (50-100).
2. You should keep the worker (CPU) threads high, to handle bursts of requests.
I am not sure it's a good idea to keep 2-5K active threads to serve 10-20K requests/sec.
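A minimal sketch of how those floors can be set in code at startup; the numbers are illustrative, not a recommendation:

```csharp
using System;
using System.Threading;

class ThreadPoolConfig
{
    static void Main()
    {
        // Raise the worker-thread floor so bursts don't stall on the pool's
        // slow thread-injection rate; keep IOCP threads modest (50-100).
        ThreadPool.SetMinThreads(workerThreads: 500, completionPortThreads: 100);

        // Verify what the pool actually reports.
        int worker, iocp;
        ThreadPool.GetMinThreads(out worker, out iocp);
        Console.WriteLine("Min worker: {0}, min IOCP: {1}", worker, iocp);
    }
}
```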
I'm developing a client/server application (C#, WinForms for the GUI).
We have a module that performs tasks to import/export data from the database to other external sources. The activities are managed by users from any client station. The next step will be to allow tasks to be scheduled for automatic execution (e.g., a start time X with repetition every hour, day, week, or month, and so on).
Each task imports or exports a large amount of data to or from any data source (Excel, Access, or a DBMS), so they are long-running activities.
Currently, the DLL that implements this logic is distributed to each client station. This is not a good solution, because we have to install all the potential requirements on each client (for example, ADO/OLE DB/ODBC drivers for every supported DBMS).
I have to move this logic to the server station. From each client I want to see task progress, stop or start any task, or change the schedule table and restart the process.
I'm considering what the best solution is: a Web API or WCF. Probably WCF, because it is service-oriented, but I've seen projects and articles combining Web API with libraries like Quartz or Hangfire.
I'm also considering whether it is better to use a Windows service and host the WCF service inside it.
What is the best solution? Or are there any other solutions I'm not considering?
Thank you
EDIT:
From any client workstation the user can schedule all tasks to be executed according to the applied settings (frequency, repeating each day/week/month). I should probably use a Windows service, because when the server machine is switched on this service must start automatically and check whether there are tasks to run. At the same time, the user can decide to run any task manually, without scheduling it; in that case it will be queued and processed when its turn comes.
Now I'm thinking of hosting a WCF service inside a Windows service on the server machine. The Windows service will automatically start a background worker that checks for scheduled tasks to run. In addition, all clients can invoke a method to start one or more tasks. To notify progress to all clients I'll use a duplex contract, as sketched below.
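A minimal sketch of what that duplex contract could look like; all names here are hypothetical:

```csharp
using System.ServiceModel;

// The callback contract lets the service push progress to every connected
// WinForms client instead of the clients polling for status.
[ServiceContract(CallbackContract = typeof(ITaskCallback))]
public interface ITaskService
{
    [OperationContract]
    void StartTask(int taskId);

    [OperationContract]
    void StopTask(int taskId);
}

public interface ITaskCallback
{
    [OperationContract(IsOneWay = true)]
    void OnProgress(int taskId, int percentComplete);
}
```

On the service side, each client's channel is obtained via OperationContext.Current.GetCallbackChannel<ITaskCallback>() and kept in a list for broadcasting progress.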
You will need to compare WCF and Web API and choose the technology according to your requirements.
If you only need HTTP as the transport protocol and lightweight web-hosted services, go with Web API.
I would recommend Hangfire, as it has many features a plain Windows service lacks: it is distributed, it is persistent, and it ships with an out-of-the-box dashboard that shows all your scheduled, processing, succeeded, and failed jobs.
Check also this article about Running Background Tasks in ASP.NET.
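To make the Hangfire suggestion concrete, here is a small sketch, assuming Hangfire is already wired to its storage; the task class and id are hypothetical:

```csharp
using Hangfire;

public class ImportExportTask // hypothetical long-running task
{
    public void Run(int taskId) { /* import/export work */ }
}

public class JobSetup
{
    public void Configure()
    {
        // Fire-and-forget: persisted to storage, retried on failure,
        // and visible in the Hangfire dashboard.
        BackgroundJob.Enqueue<ImportExportTask>(t => t.Run(42));

        // Recurring: cron-style scheduling replaces a hand-rolled timer.
        RecurringJob.AddOrUpdate<ImportExportTask>(
            "hourly-import",   // stable job identifier
            t => t.Run(42),
            Cron.Hourly());
    }
}
```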
If this is an internal application and the clients are using WinForms, you can make GETs/POSTs to Web API endpoints behind the scenes; this lets users retrieve/export data without having to install database drivers.
I'd go Web API driven, in my opinion. I'm not very familiar with Windows services, but one benefit I do see there is that the service can still be running after a reboot.
Feel free to reach out to me directly.
I was wondering if it's possible to do a 'soft shutdown' or 'soft reboot' of a cloud service. In other words, the server would refuse new incoming HTTP requests (which come in through ASP.NET controller actions) but would finish all existing requests that are in progress. After that, the server would shut down or stop as normal.
Server Version
Azure OS Family 3 Release
Windows Server 2012
.NET 4.5
IIS 8.0
ASP.NET 4.0
Usage Scenario
I need to ensure that any actions responding to remote HTTP requests currently in progress finish before a server begins shutting down or becomes unresponsive because of a staging-to-production swap.
I've done some research, but don't know if this is possible.
A hacky workaround might be to use a CloudConfigurationManager setting to indicate that a 503 error should be returned for any incoming HTTP actions, but then I'd have to sit around and wait for a while without any way to verify that condition. At that point I could stop the service or perform the swap.
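For what it's worth, a sketch of that hacky idea, assuming an MVC global action filter and a config setting named DrainMode (both hypothetical):

```csharp
using System.Web.Mvc;
using Microsoft.WindowsAzure; // CloudConfigurationManager (namespace varies by package version)

// Registered via GlobalFilters.Filters.Add(new DrainModeAttribute());
public class DrainModeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        bool draining;
        if (bool.TryParse(CloudConfigurationManager.GetSetting("DrainMode"), out draining)
            && draining)
        {
            // Refuse new work; requests already executing are unaffected.
            filterContext.Result = new HttpStatusCodeResult(503, "Service shutting down");
        }
    }
}
```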
See http://azure.microsoft.com/blog/2013/01/14/the-right-way-to-handle-azure-onstop-events/ for information on how to drain HTTP requests when a role is stopping (the post attaches the code as an image; I don't know why the source uses an image instead of text...):
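Since the image doesn't carry over, here is a reconstruction of the draining pattern that post describes; this is a sketch from memory, and the post's exact code may differ:

```csharp
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void OnStop()
    {
        // By the time OnStop runs, the instance has already been pulled out
        // of the load balancer rotation, so no new requests arrive; block
        // here until the in-flight requests finish.
        Trace.TraceInformation("OnStop called from WebRole");
        var requestCounter = new PerformanceCounter("ASP.NET", "Requests Current", "");
        while (requestCounter.NextValue() > 0)
        {
            Trace.TraceInformation("Waiting for requests to drain...");
            Thread.Sleep(1000);
        }
    }
}
```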
Also note that doing a VIP swap won't affect the role instances themselves or any TCP connections to the instances, so nothing should become unresponsive just because you do a VIP swap. Once you begin shutting down the staging deployment after a VIP swap, that is when the code above will help drain the requests before the instance actually shuts down.
I have some experience with WCF service development, but for this requirement I want to get some help/suggestions from the experienced developers here. Here is my scenario:
I will have a REST service (let's call it Service 1) which will receive requests with some parameters from a different service (let's call it Service Main). I am planning to save these parameters in a database so that I can track the status of the progress in future steps. Then, from Service 1, I have to start a process on the server which will run for an indeterminate time (based on the parameters); let's call this Process A. When Process A finishes its task with good results, I have to start a different process, Process B, which will use files generated by Process A. When Process B finishes and sends an acknowledgement to Service 1, I have to send the information back to Service Main.
For the database, I am planning to use a NoSQL database, since there are no relationships involved and it is more like a cache. I am having a hard time working out how to architect this entire process so that all of these steps/tasks run asynchronously and it can scale to handle lots of requests.
Approach 1: My initial idea was to have a WCF or ASP.NET Web API (REST) service use the TPL to launch Process A, wait for it to complete via an async callback, and then launch Process B on a new task. But I am not sure whether that is a good solution, or even possible. A sketch of the idea follows.
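A rough sketch of Approach 1, wrapping each external process in a Task so they can be chained; ProcessA.exe and ProcessB.exe are hypothetical names, and the IIS-recycling caveat discussed earlier still applies to anything launched from a web host:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

static class Pipeline
{
    // Wrap a console process in a Task so it can be awaited.
    static Task<int> RunProcessAsync(string fileName, string arguments)
    {
        var tcs = new TaskCompletionSource<int>();
        var process = new Process
        {
            StartInfo = new ProcessStartInfo(fileName, arguments) { UseShellExecute = false },
            EnableRaisingEvents = true // required for the Exited event to fire
        };
        process.Exited += (sender, e) =>
        {
            tcs.TrySetResult(process.ExitCode);
            process.Dispose();
        };
        process.Start();
        return tcs.Task;
    }

    // Chain: launch Process B only after Process A exits successfully.
    public static async Task RunAsync(string parameters)
    {
        if (await RunProcessAsync("ProcessA.exe", parameters) == 0)
        {
            await RunProcessAsync("ProcessB.exe", parameters);
        }
    }
}
```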
Approach 2: After a lot of reading, I thought it might be better to have a Windows service on the host server launch Process A and Process B; the WCF service would talk to the Windows service to start the processing.
Hopefully I explained the problem clearly, and I am waiting to hear some advice.
I have an ASP.NET website which processes requests using a 3rd-party exe. Currently my workflow is:
User accesses website using any browser and fills out a form with job details
Website calls a self-hosted WCF Windows service which is listening on a port
Windows service launches the 3rd-party exe to process the job and returns the result to the website
Website displays the returned result to the user
The above website was a prototype which now needs to be turned into a production-ready deployment. I realize that the above architecture has many points that could break. For example, if the machine is powered off, or if the Windows service crashes and is no longer listening on the port, all current requests stop processing. To make the architecture more robust, I am considering the following:
User accesses website using any browser and fills out a form with job details
Website writes out the job details to a database
Windows service, which is polling the database every 10 seconds for new jobs, picks up the job and executes it using the 3rd-party application. The results are written back to the database.
Website, which has now started polling the database, picks up the results and displays them to the user.
The second architecture gives me more logging capability, and jobs can be restarted if they are still in the queue. However, it involves a large amount of polling, which may not be scalable. Can anyone recommend a better architecture?
Instead of polling, I would go with MSMQ or RabbitMQ.
That way you can offload your processing to multiple consumers of the queue (possibly on servers separate from the web server) and process more requests in parallel.
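A minimal sketch with RabbitMQ, assuming the RabbitMQ.Client NuGet package; the queue name is hypothetical:

```csharp
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class JobQueue
{
    const string QueueName = "jobs"; // hypothetical queue name

    // Website side: enqueue the job details instead of writing-and-polling.
    public static void Publish(string jobJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(QueueName, durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            var props = channel.CreateBasicProperties();
            props.Persistent = true; // message survives a broker restart
            channel.BasicPublish(exchange: "", routingKey: QueueName,
                                 basicProperties: props,
                                 body: Encoding.UTF8.GetBytes(jobJson));
        }
    }

    // Consumer side (the Windows service, possibly on several machines):
    // jobs are pushed to the consumer, so there is no polling at all.
    public static void Consume(IModel channel, Action<string> handleJob)
    {
        channel.QueueDeclare(QueueName, durable: true, exclusive: false,
                             autoDelete: false, arguments: null);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            handleJob(Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only on success
        };
        channel.BasicConsume(QueueName, autoAck: false, consumer: consumer);
    }
}
```

Running several consumers against the same durable queue gives you parallelism and survives a crashed worker, since unacknowledged messages are redelivered.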
I have implemented the same architecture in one of my applications, where users make multiple processing requests. So I have:
Users go to the website, select parameters, etc., and submit the request
The request is stored in a database table with all the details, the user name, etc.
A service watches the database table and picks up requests in FIFO order
After a request is processed, its status is updated to Failed or Completed in the database table against that requestId, which users can see on the website
The service picks up the next request if there is one; otherwise it stops
The service runs every 30 minutes; a sketch of that loop follows.
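A sketch of that service loop, with the timer interval matching the 30-minute schedule; the data-access stubs are hypothetical placeholders for the real table access:

```csharp
using System;
using System.Timers;

public class PendingRequest { public int Id; public string Parameters; }

// Hypothetical stand-in for the real database table access.
public static class RequestTable
{
    public static PendingRequest GetOldestPending() { /* SELECT ... ORDER BY CreatedAt */ return null; }
    public static void SetStatus(int id, string status) { /* UPDATE ... WHERE Id = @id */ }
}

public class RequestProcessorService
{
    // Fires every 30 minutes, matching the schedule described above.
    private readonly Timer _timer = new Timer(TimeSpan.FromMinutes(30).TotalMilliseconds);

    public void Start()
    {
        _timer.Elapsed += (sender, e) => ProcessPending();
        _timer.Start();
    }

    private void ProcessPending()
    {
        PendingRequest request;
        // FIFO: keep taking the oldest pending row until none remain, then stop.
        while ((request = RequestTable.GetOldestPending()) != null)
        {
            try
            {
                // ... run the actual processing for this request here ...
                RequestTable.SetStatus(request.Id, "Completed");
            }
            catch (Exception ex)
            {
                RequestTable.SetStatus(request.Id, "Failed: " + ex.Message);
            }
        }
    }
}
```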