Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I need a counter on the server that holds the number of HTTP requests it has received. Since the server can handle multiple requests asynchronously at any point in time (let's say the same controller is being called by every user), where can I place the counter so that it is shared across every request made?
Yes, the controller is instantiated and disposed with each request. Yes, each request gets its own thread, though that thread may be exchanged (in the case of async work) or may serve multiple requests over its lifetime.
Parallelism is a complex topic and depends on numerous different factors at any given time. Simplistically, though, yes, threads will run in parallel. However, they do not share resources with each other (for the most part). Ultimately, there is some resource sharing via the parent process, but for practical purposes, you should consider them isolated from one another.
Based on all that and your final question: if you have designs on implementing such a counter in your code, don't. It won't work and never will. Even if you manage to coordinate some sort of process-bound, thread-safe counter, it won't work across multiple worker processes, and it will be wiped every time the App Pool recycles, crashes or otherwise restarts for any reason.
A counter should be implemented in a database or other persistent data store. Full-stop. Even then, you'll need to be extremely careful with concurrency, and unless you devote a ton of time to excluding bots, repeat page loads by the same users, etc., your count will be off no matter what.
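To make the warning concrete, here is a minimal sketch of the process-bound approach, assuming a hypothetical `RequestCounter` class: the counter is correctly thread-safe under concurrent requests thanks to `Interlocked`, but the value lives in this process's memory, so it resets on every App Pool recycle and is not shared across worker processes.

```csharp
using System.Threading;

// Process-bound request counter: safe under concurrency via Interlocked,
// but the count lives in this process's memory, so it is lost whenever the
// App Pool recycles and is not shared between worker processes.
public static class RequestCounter
{
    private static long _count;

    // Atomically adds one and returns the new value.
    public static long Increment() => Interlocked.Increment(ref _count);

    // Atomically reads the current value.
    public static long Current => Interlocked.Read(ref _count);
}
```

The persistent alternative the answer recommends pushes the concurrency problem onto the database, e.g. a single atomic statement such as `UPDATE Counters SET Hits = Hits + 1 WHERE Name = 'requests'` (table and column names illustrative), which survives restarts and works across processes.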
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I lack the knowledge to even know where to start researching this topic (such as which keywords to use), so I'm hoping somebody can help point me in the right direction.
I have a .NET MVC application. In this application, the user uploads a zip file containing a data layer, and that data layer is used as input to a Python script the application calls. Due to the nature of the database cursors (it's an ArcGIS Enterprise geodatabase, if anyone is familiar), only one person can run the Python script at a time, because the cursors take exclusive locks on the database (there is no way around this). In the extremely rare case that two people are trying to use this web application at the same time, I need to put them in a queue so that the Python script completes for the first person and then starts on the next person's dataset. Where do I get started with this, or what words can you give me to start formulating some search queries on this topic?
Maybe more technology-specific solutions will come along, but at first glance, you can use priority-queue-style processing.
It can be as simple as creating a table, enqueuing (inserting) new requests there, and processing them as they come (dequeuing). Or use a higher-level framework like RabbitMQ.
So, every time a new zip file arrives:
Upload it to the repository
Insert a new record into the queue
Get the first item from the queue
Process it
If a second person uploads a new file, you check the queue table; there will be a flag showing that the last item is being processed, and you can then let the client know the script is queued and will be processed.
Obviously it is worth generating a unique id per client, so that you do not mix up priorities and scripts if there are multiple uploads at the same time.
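The enqueue/dequeue flow above can be sketched in-process; `ScriptQueue` is an illustrative name, and in production the in-memory collection would be replaced by the queue table (or RabbitMQ) for durability. The key property is a single consumer, so jobs run strictly one at a time, which is what the exclusive database lock requires.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Illustrative job queue: uploads are enqueued as they arrive and a single
// worker drains them in FIFO order, one at a time.
public class ScriptQueue
{
    private readonly BlockingCollection<string> _jobs = new BlockingCollection<string>();
    public List<string> Processed { get; } = new List<string>();

    public void Enqueue(string uploadId) => _jobs.Add(uploadId);

    // True when another upload is already ahead of the newest one,
    // i.e. the client should be told "your script is queued".
    public bool IsQueuedBehindOthers() => _jobs.Count > 1;

    // Single consumer: this is where the Python script would be invoked,
    // strictly one job at a time.
    public void RunWorker()
    {
        foreach (var id in _jobs.GetConsumingEnumerable())
            Processed.Add(id);
    }

    public void Complete() => _jobs.CompleteAdding();
}
```

With a durable table instead of `BlockingCollection`, the same shape holds: insert a row on upload, and have one background worker select and process the oldest unprocessed row.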
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I have a C# ASP.NET MVC REST web service.
The web service has two major paths/routes: one for admin, one for users.
There are typically 1-2 admin users, and all other users are normal.
When there is a lot of traffic, the server becomes slow to respond. Currently, this means that the admin users' requests are slow just like the regular users'.
I want the admin users' requests (which use a particular route) to have top priority, such that the admin requests are fast, as if there were no load on the server. A way to think about this is that I want to create VIP access for admins.
One option I thought of would be just to create another server. However, there are some dependencies between actions on the admin route and actions on the user route, so additional code would need to be written to facilitate inter-server communication, which in turn may create a new bottleneck.
I think this could be done via code, perhaps a custom request queue that implements priorities by creating a separate worker thread pool and giving it high priority. However, would this actually work? That is, would a pool of 5 worker threads given highest priority actually take priority over the main ASP.NET worker thread pool, which runs at default priority?
Ideally, I am looking for a solution that requires only configuration, a small amount of code, and no new hardware. Is this possible?
OS: Windows Server 2012 R2 Datacenter (IIS 8).
Regarding existing bottlenecks:
This is mostly a CPU-bound issue; I've looked at the performance counters while running load tests. It's really just an overloaded-server situation, where I want to protect the performance of the admin route, and the users can temporarily suffer until the load lessens (meaning I don't want to invest in adding more capacity).
I am not familiar with prioritizing HTTP requests on an IIS server.
As you mentioned, the quickest solution is to use another server. If you use the same code base and the same database, you do not have to add even a line of code.
But maybe you should investigate your problem a bit more and find your bottleneck: CPU, memory, DB, HTTP requests...
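On the custom in-code queue the question floats: raising thread priorities is unlikely to help, since the ASP.NET thread pool does not reliably honor priority for request threads. A hedged alternative is to cap concurrency on the user route so the CPU always keeps headroom for admin requests. A minimal sketch, where `RouteThrottle` and its limit are hypothetical names, not an existing API:

```csharp
using System.Threading;

// Illustrative throttle for the user route: admit at most N user requests at
// once and reject (or queue) the rest, so admin requests never compete with
// an unbounded crowd for CPU.
public class RouteThrottle
{
    private readonly SemaphoreSlim _slots;

    public RouteThrottle(int maxConcurrentUserRequests)
        => _slots = new SemaphoreSlim(maxConcurrentUserRequests);

    // Non-blocking acquire: false means the route is saturated and the
    // caller should return HTTP 503 (or a retry-later page).
    public bool TryEnter() => _slots.Wait(0);

    // Must be called when the request finishes, e.g. in a finally block.
    public void Exit() => _slots.Release();
}
```

In an MVC app this would typically be wired up in an action filter on the user route only; the admin route stays unthrottled. It is a stopgap, not a substitute for finding the actual bottleneck.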
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
General question
From my backend code I want to trigger events that will wait somewhere around 30 minutes and then run a bit of code.
Given that it's a bad idea to spawn threads or tasks in MVC (as the pool can be killed and you don't really know whether things are going to work or not), what is the best way to do this?
My options as I see them are:
Create a thread or task from the code: a bad idea, as mentioned above.
Create a scheduled task (batch/PowerShell) on the server that calls a service every 30 minutes to do the emailing as needed. This seems messy to me, as I now depend on that task working.
Create an SSIS package on the SQL server to do pretty much the same as the scheduled task, but perhaps more reliably. Probably the most dependable solution, but also the most pain-in-the-### one...
What would you guys do?
Real world example
User "A" writes a comment on the website. Users "B" and "C" both comment on this post within 5 minutes of each other. I want to send an email to user "A" about the new comments from "B" and "C", but I don't want him to get one email for every comment. There could be hundreds, and no one wants 100 emails about one comment each.
So in my case I want to trigger an event that waits 30 minutes and then groups all new comments into one notification email.
There is no correct answer to this question, it is primarily opinion-based.
Personally I like #2, it doesn't seem messy to me. You could do something like a WorkerRole or WebJob. Cloud computing is as much about timed events as web requests (ok maybe not as much, but it still plays a meaningful role in many applications).
I also like #2 because it seems more unit testable to me, but maybe that's because I don't know how to write unit tests against an SSIS package.
A web server is not the right tool for scheduled tasks. The server goes to sleep after a period of time if no requests are coming in. I know there are some hacks you can do to make it work... but hacks are hacks; I always prefer to do things the right way. To do so, you would want to write a C# application and run it as a Windows service, or use SQL Server Agent.
In my opinion there's also another option. You could expose an app on the server with a web interface so you can enqueue a task. The internal logic of that app could then send your e-mails or do anything you want. It won't be the MVC web application but a separate service, so it won't be at risk of being killed by application pool recycling or anything else.
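Whichever host runs it (scheduled task, Windows service, WebJob, or a separate queue service), the batching step from the comment example is small. A sketch of the grouping logic such a 30-minute job might run, with `CommentDigest` as a hypothetical helper: it collapses all comments collected since the last run into one notification per post author.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the batching step a scheduled job would run every 30 minutes:
// take all comments gathered since the last run and build one digest message
// per post author, instead of sending one email per comment.
public static class CommentDigest
{
    public static Dictionary<string, string> Build(
        IEnumerable<(string PostAuthor, string Commenter)> newComments)
    {
        return newComments
            .GroupBy(c => c.PostAuthor)
            .ToDictionary(
                g => g.Key,
                g => $"{g.Count()} new comment(s) from " +
                     string.Join(", ", g.Select(c => c.Commenter).Distinct()));
    }
}
```

The job would then send each value as a single email to its key (the post author) and record the run's timestamp so the next run only picks up newer comments.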
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
My question is about the best way to handle long running tasks inside MVC(5) while using SignalR. My application has some long running tasks, some compute bound and some that wait on services, that I run from MVC and then use SignalR to handle progress messages and cancellation.
My current implementation, which was started before async/await was available, registers the class/method in a concurrent dictionary with an id. MVC then returns the id to an Ajax call and exits the view. The JavaScript sends a 'Start' message with the id to SignalR, which then recovers the class from the dictionary and calls the long-running method, i.e. blocks the Hub.
I did this on the grounds that, to be honest, it was easier that way, as tasking is hard work in ASP.NET. It also means that progress reporting, which includes fairly detailed text messages, can use the existing Hub instance. The downside is that, I assume, SignalR keeps a thread open the whole time, which isn't good.
I am now revisiting this design in light of async/await. I could change the design so that the SignalR Hub awaits a task, freeing up the thread. Is that the best way? I assume I will then take a hit creating SignalR Hubs to send my messages, so overall it might take more processing power; however, it would scale better.
Does anyone have experience with this? It must be a fairly standard use of SignalR in MVC. All thoughts/experiences welcome.
There's no point in making a CPU-bound background task be asynchronous, but you could do that with your I/O-bound background tasks.
If you use async/await, the hub is still there; I don't see why that would require additional hubs. SignalR understands async.
On a side note, you do want to make sure your background tasks are reliable, as #usr noted. I wrote a blog post last weekend summarizing various ways to (semi-safely) perform background work on ASP.NET.
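To make the awaited-task shape concrete, here is a minimal sketch under some assumptions: `LongRunner` is a hypothetical class, `Task.Delay` stands in for genuinely awaitable I/O-bound work, and the `report` callback stands in for pushing SignalR messages to the caller (in a real hub, something like `Clients.Caller`), so the example stays self-contained.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical shape of the redesigned hub method: the long-running work is
// awaited, so no thread is held inside the hub while it runs, and progress
// messages are still pushed to the caller at each step.
public static class LongRunner
{
    public static async Task<int> RunAsync(int steps, Action<string> report)
    {
        for (int i = 1; i <= steps; i++)
        {
            await Task.Delay(10);              // stands in for awaited I/O-bound work
            report($"step {i} of {steps}");    // stands in for a SignalR progress push
        }
        return steps;
    }
}
```

As the answer notes, this only pays off for I/O-bound work; a CPU-bound task still occupies a thread somewhere no matter how it is awaited.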
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am currently working on a social networking application that needs to be highly scalable.
I have been reading about the publish/subscribe pattern (message bus) and am struggling to understand proper use-case scenarios: when would this be appropriate, and when would it be overkill?
For example:
There are several areas of the site where users enter information on a form that needs to be saved to the database;
When the database save occurs, an email notification (or several) must be sent to one or more users.
Also, for the save scenarios, if I were to go with a pub/sub approach, I would like to give the user a friendly message on the form letting them know their data is saved, once the saving process completes.
How would I return success/fail messages back to the UI after a specific task completed?
Which scenarios are ideal candidates for pub/sub pattern? It seems to be overkill for basic form database saving.
From your two scenarios, the latter is a possible candidate for being implemented with a bus. The rule is: the more complex the processing is and the longer it takes, the higher the probability it won't scale when processed synchronously. Sometimes it is not even a matter of the number of concurrent requests but of the amount of memory each request consumes.
Suppose your server has 8 GB of memory and you have 10 concurrent users, each taking 50 megabytes of RAM. Your server handles this easily. However, when more users come, the processing time suddenly stops scaling linearly. This is because concurrent requests start hitting virtual memory, which is a hell of a lot slower than physical memory.
And this is where the bus comes into play. A bus lets you throttle concurrent requests by queuing them. Your subscribers take requests and handle them one by one, and because the number of subscribers is fixed, you have control over resource usage.
Sending emails, what else? Well, for example, we queue all requests that involve reporting / document generation. We have observed that some specific documents are generated in short, specific time spans (for example, accounting reports at the end of each month), and because a lot of data is processed, we used to suffer a complete paralysis of our servers.
Instead, having a queue only means that users have to wait a little longer for their documents, but the responsiveness of the server farm stays under control.
Answering your second question: because of the asynchronous, detached nature of processes implemented with message buses, you usually make the UI actively ask whether or not the processing is done. It is not the server that pushes the processing status to the UI; rather, the UI asks, and asks, and asks, and suddenly it learns that the processing is complete. This scales well, whereas maintaining a two-way connection to push the notification back to each client can be expensive with a large number of users.
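The polling side can be as small as a shared status map sitting behind a hypothetical `/status/{jobId}` endpoint (names illustrative). The bus worker writes progress into it, and the UI polls until it sees a terminal state instead of holding a two-way connection open per user:

```csharp
using System.Collections.Concurrent;

// Minimal status store behind a hypothetical /status/{jobId} endpoint: the
// bus worker calls Set(...) as the job progresses, and the polling UI calls
// Get(...) until it sees a terminal state such as "Done".
public static class JobStatusStore
{
    private static readonly ConcurrentDictionary<string, string> _status =
        new ConcurrentDictionary<string, string>();

    public static void Set(string jobId, string status) => _status[jobId] = status;

    public static string Get(string jobId) =>
        _status.TryGetValue(jobId, out var s) ? s : "Unknown";
}
```

In a multi-server farm the dictionary would live in a shared store (a database table or cache) rather than process memory, but the ask-until-done contract between worker and UI is the same.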
I suppose there is no definite answer to your question. IMHO, nobody can evaluate the performance of a design pattern. Perhaps someone could consider comparing it with another design pattern, but even then the comparison would be unreliable at best. Performance has to do with the actual implementation, which can vary between different implementations of the same design pattern. If you want to evaluate the performance of a software module, you have to build it and then profile it. As Steve McConnell suggests in his legendary book, make no decisions regarding performance without profiling.
Regarding the specific pattern and your scenarios, I would suggest avoiding it. The publish/subscribe pattern is typically used when the subscribers do not want to receive all published messages, but rather only those satisfying some specific criteria (e.g. belonging to a specific kind of message). Therefore, I would not suggest using it for your scenarios.
I would also suggest looking at the Observer pattern. I suppose you could find many more references online.
Hope I helped!