I need to make a thread inside my web service to check on some data in my database at a specific interval.
I would not do that. The thread would die with the application pool. Create a Windows service that checks the database using a Thread or a Timer.
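A minimal sketch of that approach, assuming System.Timers.Timer, a placeholder connection string, and a made-up PendingItems table:

using System;
using System.Data.SqlClient;
using System.ServiceProcess;
using System.Timers;

public class DbPollingService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Poll every 60 seconds; adjust the interval to your needs.
        _timer = new Timer(60000);
        _timer.Elapsed += (sender, e) => CheckDatabase();
        _timer.Start();
    }

    private void CheckDatabase()
    {
        // Placeholder query and connection string -- replace with your own.
        using (var conn = new SqlConnection("your-connection-string"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.PendingItems", conn))
        {
            conn.Open();
            var pending = (int)cmd.ExecuteScalar();
            // React to the result here.
        }
    }

    protected override void OnStop()
    {
        _timer?.Stop();
        _timer?.Dispose();
    }
}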
Why would you want to do that?
If your service is per-call (i.e. not singleton), all the resources involved in servicing a request are supposed to be released right after the call. If you spawn a thread, the request will be kept alive until your thread completes.
Also, checking on data in a database periodically does not really make sense in a web service call, which should complete within a very short time anyway; otherwise you kill scalability.
You're probably referring to a singleton web service, in which one single server object services all requests. In this case, you'll need to create the singleton object first -- most likely in a Windows service that is started automatically.
Your database polling is most likely used to cache certain popular values so that servicing a request does not need to hit the database itself. In this case, your service is actually a middle-tier layer. Unless you know that the data in the database changes very frequently, consider replacing the database polling with triggers in the database that call the web service to push updated data.
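If you do cache in the middle tier, a rough sketch of such a cache using System.Runtime.Caching (the key name, loader delegate, and time-to-live are placeholders):

using System;
using System.Runtime.Caching;

public static class PopularValuesCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached value if present; otherwise loads it (e.g. from the
    // database), caches it for the given time-to-live, and returns it.
    public static T GetOrLoad<T>(string key, Func<T> load, TimeSpan ttl)
    {
        var cached = Cache.Get(key);
        if (cached != null)
            return (T)cached;

        var value = load();
        Cache.Set(key, value, DateTimeOffset.Now.Add(ttl));
        return value;
    }
}

// Usage inside a service call (LoadTopProductsFromDb is hypothetical):
// var topProducts = PopularValuesCache.GetOrLoad(
//     "top-products", () => LoadTopProductsFromDb(), TimeSpan.FromMinutes(5));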
In brief, what is the best way to create Azure resources (VMs, resource groups, etc.) that are defined programmatically, without locking the web app's interface because of the long time that some of these operations take?
More detailed:
I have a .NET Core web application where customers are added manually. Once a customer is added, the app automatically creates some resources in Azure. However, I noticed that my interface is 'locked' during these operations. What is a relatively simple way of detaching these operations from the web application? I had in mind sending a trigger using a Service Bus or Azure Relay and triggering an Azure Function. However, it seems to me that all these resources return something back, and my web app is waiting for that. I need a 'send and forget' method for this: just send out the trigger to create these resources, don't bother with the return values for now, and continue with the app.
If a 'send and return' method also works within my web app, that is also fine.
Any suggestions are welcome!
You need to queue the work to run in the background and then return from the action immediately. The easiest way of doing this is to create a hosted service. There are a couple of different ways to do this:
Use a queued background service and actually queue the work to be done in your action (a sketch follows after this list).
Just write the required info to a database table, redis store, etc. and use a timed service to perform the work on a schedule.
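A minimal sketch of the queued-background-service option, assuming .NET Core's generic host; the queue class, CreateAzureResourcesAsync, and the other names are made up for illustration:

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// A simple in-memory work queue backed by a Channel.
public class BackgroundTaskQueue
{
    private readonly Channel<Func<CancellationToken, Task>> _queue =
        Channel.CreateUnbounded<Func<CancellationToken, Task>>();

    public ValueTask QueueAsync(Func<CancellationToken, Task> workItem) =>
        _queue.Writer.WriteAsync(workItem);

    public ValueTask<Func<CancellationToken, Task>> DequeueAsync(CancellationToken ct) =>
        _queue.Reader.ReadAsync(ct);
}

// Hosted service that drains the queue in the background.
public class QueuedHostedService : BackgroundService
{
    private readonly BackgroundTaskQueue _queue;

    public QueuedHostedService(BackgroundTaskQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            Func<CancellationToken, Task> workItem;
            try
            {
                workItem = await _queue.DequeueAsync(stoppingToken);
            }
            catch (OperationCanceledException)
            {
                break;   // host is shutting down
            }

            try
            {
                await workItem(stoppingToken);
            }
            catch (Exception)
            {
                // Log and keep the loop alive so one failure doesn't stop the service.
            }
        }
    }
}

// Registration (Startup.ConfigureServices) and usage in a controller action:
// services.AddSingleton<BackgroundTaskQueue>();
// services.AddHostedService<QueuedHostedService>();
//
// await _queue.QueueAsync(ct => CreateAzureResourcesAsync(customer, ct));
// return Accepted();   // return immediately; the work continues in the background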
In either case, you may also consider splitting this off into a worker service (essentially, a separate app composed of just the hosted service, instead of running it in the same instance as your web app). This allows you to scale the service independently and also insulates your web app from problems that service might encounter.
Once you've set up your service and scheduled the work, you just need some way to let the user know when the work is complete. A typical approach is to use SignalR to allow the server to notify the client with progress updates or success notifications. However, you can also just do something simple like email the user when everything is ready.
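If you go the SignalR route, a rough sketch of the server-side notification (the hub, method name, and user-id plumbing are just examples):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class JobStatusHub : Hub { }

// Injected wherever the background work finishes, e.g. in the hosted service above.
public class JobNotifier
{
    private readonly IHubContext<JobStatusHub> _hub;

    public JobNotifier(IHubContext<JobStatusHub> hub) => _hub = hub;

    public Task NotifyCompletedAsync(string userId, string jobId) =>
        _hub.Clients.User(userId).SendAsync("jobCompleted", jobId);
}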
I am developing WCF application under Windows Service which is exposing one endpoint. There can be about 40 remote clients who will connect to this endpoint over local area network at the same time. My question is whether WCF can handle multiple calls to the same endpoint by queuing them? No request from any client can be lost. Is there anything special I have to consider when developing application to handle simultaneous calls?
You can choose whether the requests should be handled asynchronously or synchronously one after another.
You can set this behavior via the InstanceContextMode setting. With InstanceContextMode.PerCall, one instance of your service is created for each incoming request, which allows you to handle multiple requests in parallel.
Alternatively, you can configure your service to spin up only one instance, which ensures each request is handled one after the other. This is effectively the "queuing" you mentioned. You can set this behavior via InstanceContextMode.Single. By choosing this mode, your service becomes a singleton, so there is only one instance of your service, which may come in handy in some cases. The framework handles the queuing.
Additionally you could set ConcurrencyMode.Multiple which allows your single instance to process multiple requests in parallel (see Andrew's comment).
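As a hedged example, a singleton that still processes calls in parallel would look roughly like this (the contract and names are placeholders, and you then have to make any shared state thread-safe yourself):

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder(string orderXml);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderService : IOrderService
{
    public void SubmitOrder(string orderXml)
    {
        // One instance serves all clients in parallel; guard any shared state.
    }
}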
However, be aware that the queued requests aren't persisted in any way. So if your service gets restarted, any requests that haven't finished yet are lost.
I'd definitely recommend avoiding any kind of singleton if possible.
Is there anything that prevents you from choosing the parallel PerCall mode?
For more details have a look at this: http://www.codeproject.com/Articles/86007/ways-to-do-WCF-instance-management-Per-call-Per
Here are some useful links:
https://msdn.microsoft.com/en-us/library/ms752260(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/hh556230(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.servicemodel.servicebehaviorattribute(v=vs.110).aspx
To answer your question, no calls will be lost whichever mode you choose. But if you need to process them in order, you should probably use this setup for your service:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, EnsureOrderedDispatch = true )]
I'm writing a webservice that drops off a long-running bulk insert command to a sql db through a stored proc. I don't want the webservice hung up while waiting for a response from the db, so I'd like to just return an http response that lets the client know the request has been sent to the db after I start the task. But as soon as I return the response, the task will lose context and get trashed, right? How should I keep this alive?
In general, it's not a good idea to spin off something to do work from IIS. What happens if the AppPool restarts? What happens if there is an exception?
Instead, I would recommend writing a Windows Service and have it responsible for the work.
Based on your comments, I would see if you can ask for the following requirements (theoretically):
All external calls are done through the web service. The web service uses a separate assembly for the actual data access.
A separate windows service is used for long running processes, which would also use the same data access assembly the web service uses.
That is really the best way to go (but not necessarily doable based on requirements).
I think this is more of an architecture question than just a matter of maintaining the 'context'. And talking about architecture, I think WCF web services would help in your scenario.
What you need is a service with a callback contract, where the service takes a request, returns an ack, stores the client context (for the callback), and kicks off the long-running database task in the background. When the task completes, it reads the client context and calls the callback handler with the result.
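A rough sketch of such a callback contract (the contract and method names are invented, and a duplex-capable binding such as netTcpBinding or wsDualHttpBinding is assumed):

using System.ServiceModel;
using System.Threading.Tasks;

public interface IBulkInsertCallback
{
    [OperationContract(IsOneWay = true)]
    void OnCompleted(string jobId, bool success);
}

[ServiceContract(CallbackContract = typeof(IBulkInsertCallback))]
public interface IBulkInsertService
{
    [OperationContract(IsOneWay = true)]
    void StartBulkInsert(string jobId);
}

public class BulkInsertService : IBulkInsertService
{
    public void StartBulkInsert(string jobId)
    {
        // Capture the client's callback channel before returning.
        var callback = OperationContext.Current.GetCallbackChannel<IBulkInsertCallback>();

        Task.Run(() =>
        {
            // Run the long stored procedure here...
            callback.OnCompleted(jobId, true);
        });
    }
}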
This article on MSDN shows how to implement a callback contract in a web service.
Hope this helps!
I'm stuck on what I'm sure is a fundamental and easy-to-solve problem in WCF; I just need to be pointed in the right direction.
I have a large object (actually a trained text classifier) that I need to expose through a web service in C#/.NET. The classifier can be loaded from disk when the service first starts, but I don't want to keep loading it from disk for every request (it currently occupies about 6 GB in memory, and reloading it takes a while), so instead I want to keep that object in memory across all requests to the web service. The object should be loaded only when the service starts, not when the first web request triggers it.
How would I go about doing that?
Thanks for any help!
Probably the easiest way is to create your service as a singleton. This involves specifying InstanceContextMode = InstanceContextMode.Single in a ServiceBehavior attribute on your service class definition.
However, it is very questionable whether sending a 6 GB object over the wire using WCF is advisable. You can run into all sorts of service availability issues with this approach.
Additionally, singletons do not scale within a host (there can be only one instance per host), although you can host multiple singleton services and then load-balance the requests.
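If you do go the singleton route, a hedged sketch in which only the classification result crosses the wire (TextClassifier, its methods, and the model path are hypothetical):

using System.ServiceModel;

[ServiceContract]
public interface IClassifierService
{
    [OperationContract]
    string Classify(string text);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ClassifierService : IClassifierService
{
    // Hypothetical classifier type loaded from disk.
    private readonly TextClassifier _classifier;

    public ClassifierService()
    {
        // Loaded once when the singleton instance is created, not per request.
        _classifier = TextClassifier.LoadFromDisk(@"C:\models\classifier.bin");
    }

    public string Classify(string text)
    {
        // TextClassifier is assumed to be safe for concurrent reads.
        return _classifier.Predict(text);
    }
}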
The way I've done this in past projects with the same problem is to self-host the WCF service inside a Windows Service.
I then set the data storage object up inside the service as a singleton that persists for the life of the service. Each WCF service call then gets the singleton whenever it needs to do something with the data.
I would avoid running in IIS simply because you don't have direct control over the service's lifetime, and therefore don't have enough control over when things are disposed and instantiated.
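A rough sketch of that self-hosting setup, reusing the ClassifierService above; passing the pre-built instance to ServiceHost means the heavy object is loaded in OnStart rather than on the first request (endpoints are assumed to come from app.config):

using System.ServiceModel;
using System.ServiceProcess;

public class ClassifierWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Construct the singleton up front so the model loads at service start,
        // then hand that instance to the ServiceHost.
        var singleton = new ClassifierService();
        _host = new ServiceHost(singleton);
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }
}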
Can you please answer the following questions to enlighten me about web services?
What is the lifecycle of a web service? When does the class that represents my web service get instantiated, and when does it start running (executing)?
Is a new instance created for every web method call? And what happens if there are multiple simultaneous requests for the same or different web methods?
When should I open a connection to a remote resource so that the connection is ready before any requests arrive? This connection must persist through the whole lifetime of the web service.
Thank you in advance for all answers.
Web services are essentially ASP.NET pages communicating over the SOAP protocol (XML over HTTP). Each method call has its own round trip (like a page, so new instances are created by default). The ASP.NET thread pool is used to run a web service. As a web programmer you don't have a lot of control over how the thread pool is used, since it depends on many external factors (system resources, concurrent page requests...).
If by 'opening connections to remote resources' you mean database connections, those connections are pooled by ADO.NET's connection pool and managed automatically. If your external resources are heavy, use the Singleton web service model and load the external resources in the constructor. Don't use the singleton pattern for a database connection (it has its own pooling mechanism). If you choose the Singleton pattern, you have to take care of concurrency issues for your static variables.
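One hedged way to handle that static-variable concurrency concern is a lazily initialized holder (ExpensiveResource and its Load method are placeholders; again, don't do this for database connections):

using System;

public static class HeavyResourceHolder
{
    // Lazy<T> gives thread-safe, one-time initialization shared across requests.
    private static readonly Lazy<ExpensiveResource> LazyResource =
        new Lazy<ExpensiveResource>(() => ExpensiveResource.Load());

    public static ExpensiveResource Instance
    {
        get { return LazyResource.Value; }
    }
}

// Inside a web method:
// var result = HeavyResourceHolder.Instance.DoSomething(input);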
In the end I should say that living in the managed world of programming is easier than ever; most of the time, somebody else is already taking care of these concerns for us.
That depends; you have two instancing models.
"Single Call" (an instance is created for each call made to the service)
"Singleton" (an instance is created on the first call and reused as long as the process remains alive).
See answer 1; to elaborate: yes, each call gets its own instance.
I would separate that away from the actual web service class. You can use another singleton approach to achieve this functionality.
Hope this helps,