I have an issue with one of our applications, which is called by a 3rd party.
The 3rd party sends data (contained in a query string) to a URL on our website, and they must receive an OK response from the page within 5 seconds to verify that the call was received.
The page itself does a lot of processing of the data sent by the 3rd party in the Page_Load function. It's possible that this is taking more than 5 seconds, so the page does not render until the processing is complete, which in turn causes the 3rd party to keep sending the data to us multiple times because their system assumes we have not received it.
What I would like to know is: what is the best way to offload the processing of the data so that I can render the page almost as soon as the 3rd party calls the URL?
Note that there are no controls on the page; it's purely a blank page with code behind it.
Am I right in assuming the 3rd party is just calling the page to send up the data, i.e. they don't care about the result?
There are a couple of approaches that come to mind. A simple approach would be to dispatch the work onto a thread as it comes in and return "OK" immediately, leaving the thread to carry on working.
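A minimal sketch of that first approach in the receiving page's code-behind; ProcessData is a stand-in for your existing processing logic:

protected void Page_Load(object sender, EventArgs e)
{
    // Capture the incoming data first; the HttpContext is not safe
    // to touch from the background thread.
    string payload = Request.QueryString.ToString();

    // Hand the heavy work off so it doesn't hold up the response.
    System.Threading.ThreadPool.QueueUserWorkItem(_ => ProcessData(payload));

    // Respond immediately so the 3rd party gets its OK within 5 seconds.
    Response.Write("OK");
}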
The second approach would be to write the incoming query-string data to a file or database table and let an external process periodically pick it up and process them in batches.
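A rough sketch of the second approach; the IncomingRequests table, its columns, and connectionString are illustrative, not an existing schema:

using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
using (var cmd = new System.Data.SqlClient.SqlCommand(
    "INSERT INTO IncomingRequests (ReceivedAt, QueryData) VALUES (@now, @data)", conn))
{
    cmd.Parameters.AddWithValue("@now", DateTime.UtcNow);
    cmd.Parameters.AddWithValue("@data", Request.QueryString.ToString());
    conn.Open();
    cmd.ExecuteNonQuery();
}
Response.Write("OK"); // the external process does the heavy lifting later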
Use JavaScript to retrieve the data after the page has loaded.
I have a requirement to produce a report screen for different financial periods. As this is quite a large data set with a lot of rules, the process could take a long time to run (well over an hour for some of the reports to return).
What is the best way of handling this scenario inside MVC?
I am concerned about:
screen locking
performance
usability
the request timing out
Those are indeed valid concerns.
As some of the commenters have already pointed out: if the reports do not depend on input from the user, then you might want to generate the reports beforehand, say, on a nightly basis.
On the other hand, if the reports do depend on input from the user, you can circumvent your concerns in a number of ways, but you should at least split the operation into multiple steps:
Have a request from the browser kick off the process of generating the report. You could start a new thread and tell it to generate the report, or you could put a "Create report" message on a queue and have a service consume messages and generate reports. Whatever you do, make sure this first request finishes quickly. It should return some kind of identifier for the task just started (see the sketch after these steps). At this point, you can inform the user that the system is processing the request.
Use Ajax to repeatedly poll the server for completion of the report using the given identifier. Preferably, the process generating the report should report its progress, and this information should be surfaced to the user via the Ajax polling. If you want to get fancy, you could use SignalR to notify the browser of progress.
Once the report is ready, return a link to the user where he/she can access the report.
Depending on how you implement this, the user may be able to close the browser, have a sip of coffee and come back to a completed report.
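A minimal sketch of that kick-off/poll shape in MVC; ReportTracker, ReportRequest, and GenerateReport are hypothetical helpers, not part of any framework:

using System;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    [HttpPost]
    public ActionResult Start(ReportRequest request)
    {
        Guid taskId = Guid.NewGuid();
        ReportTracker.SetProgress(taskId, 0); // hypothetical progress store

        // Fire-and-forget only to keep the sketch short; a message queue
        // consumed by a separate service is the more robust option.
        Task.Factory.StartNew(() => GenerateReport(taskId, request));

        // Finishes quickly, handing the browser an identifier to poll with.
        return Json(new { taskId = taskId });
    }

    [HttpGet]
    public ActionResult Progress(Guid taskId)
    {
        // Returns e.g. a percentage that the Ajax poll can display.
        return Json(ReportTracker.GetProgress(taskId), JsonRequestBehavior.AllowGet);
    }
}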
If your app is running on Windows Server with IIS, your ASP.NET code can create a record in a db table indicating that a report should be created.
Then you can use a Windows Service or console app, running on the same server, that constantly checks whether there are any new rows in the table. This service would create the report and, during creation, update a table field to indicate progress (a sketch of the loop follows below).
Your ASP.NET page might display a progress bar, getting the progress indication from the db via Ajax requests, or simply refresh the page every few seconds.
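A sketch of the polling loop inside such a service or console app; PendingReports, UpdateProgress, and CreateReport are illustrative stand-ins for your own data access:

while (running)
{
    foreach (var job in PendingReports()) // rows flagged as "requested"
    {
        UpdateProgress(job.Id, 0);
        CreateReport(job, percent => UpdateProgress(job.Id, percent));
        UpdateProgress(job.Id, 100); // the page's Ajax poll picks this up
    }
    System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // poll interval
}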
If you are running in the Windows Azure cloud, you might use a Worker Role instead of a Windows Service.
For screen locking on your page, you can use the jQuery BlockUI library.
Recently, in C# 4.0, the task-based approach for async programming was unveiled, so we have been reworking some of our functions that previously used callbacks.
The problem we are facing is with implementing multiple responses for these functions using tasks. E.g. we have a function which fetches some data from a third-party API. But before fetching the data from the API, we first check whether we already have it in our in-memory cache or in the DB; only then do we go to the API. The main client application sends a request for a list of symbols for which to fetch data. If we find data for some symbols in the cache or in the DB, we send it immediately via the callback. For the remaining symbols we request the API.
This gives a feeling of real-time processing on the client application for some symbols, and for the other symbols the user gets to know that it will take time. If I do not send responses to the client instantly, and instead collect all the data first and send one response for the whole list, then the user is stuck waiting on 99 symbols even if only 1 symbol has to be fetched from the API.
How can I send multiple responses using the task-based approach?
It seems like you want to have an async method that returns more than once. The answer is you can't.
What you can do is:
Call 2 different methods with the same "symbols": the first only checks the cache and DB and returns what it can, and the second only calls the remote API. This way you get what you can quickly from the cache and the rest more slowly (sketched after this list).
Keep using callbacks as a "mini producer/consumer" design so you can call it as many times as you like.
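A minimal sketch of the first option; SymbolData, LookupCacheAndDb, CallThirdPartyApi, and Display are illustrative stand-ins:

using System.Collections.Generic;
using System.Threading.Tasks;

// Fast path: returns whatever is already in the cache or DB.
Task<IList<SymbolData>> GetCachedAsync(IList<string> symbols)
{
    return Task.Factory.StartNew<IList<SymbolData>>(
        () => LookupCacheAndDb(symbols));
}

// Slow path: only calls the remote API.
Task<IList<SymbolData>> GetRemoteAsync(IList<string> symbols)
{
    return Task.Factory.StartNew<IList<SymbolData>>(
        () => CallThirdPartyApi(symbols));
}

// The caller handles each task's result as soon as it completes, so
// cached symbols are displayed immediately while API symbols trickle in.
void LoadSymbols(IList<string> symbols)
{
    GetCachedAsync(symbols).ContinueWith(t => Display(t.Result));
    GetRemoteAsync(symbols).ContinueWith(t => Display(t.Result));
}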
I could try for a more concrete answer if you post the code you're using.
I would like to be able to load data into a DataGrid in Silverlight as it becomes available. Here is the scenario:
My Silverlight client fires a WCF call to the server.
The server takes about 1 to 2 seconds to respond.
The response is between 1 MB and 4 MB (quite large).
This data is loaded into a DataGrid.
Although the server responds quickly, the data is not seen by the user until all 1 MB to 4 MB has been downloaded.
What is the best (or most efficient) way to load this data into the DataGrid as it is being downloaded by the client, as opposed to waiting for the download to complete?
One way of handling this is to implement custom virtualization (a service sketch follows the steps):
Add a webservice method that returns only ids
Add a webservice method that returns a single object by id
Retrieve the ids and load only the visible objects (and perhaps a few more to allow scrolling)
Retrieve more objects when needed for scrolling.
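A sketch of what the two-method service contract could look like in WCF; the names IRecordService and Record are illustrative:

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IRecordService
{
    [OperationContract]
    List<int> GetRecordIds();      // small, fast payload: just the keys

    [OperationContract]
    Record GetRecordById(int id);  // one full object per call
}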
The problem is part of what I was trying to get at with my comment (you still didn't really specify the data type of the return), and what Erno gave you a workable solution for. The web service serializes whatever return type you are sending and will not give you partial results. It's not a question of how you interface with the grid; it's a question of when the web service call on the client says "OK, I received the data you need, now continue processing." For instance, if you are gathering up a data table on the server side with 4 MB worth of records, and in your service do a:
return MyMassiveDatatable;
Then you are going to have to wait for the entire data table to be serialized and pumped across the wire.
His solution was to break the transfer up into atomic units: i.e. query first for the ids of the records in one web service call, then iterate through those ids and request the record for each id one at a time. As you receive each record back, add it to your client-side table so that your display shows each record as you get it.
Context: We have a very large ASP.NET Web Forms application, and some pages/modules are very 'heavy', so it takes a very long time to process their requests. The problem is that when you misclick on the wrong menu and load such a page, you have to wait a long time for nothing. A workaround is to refresh the browser (F5, or change the URL), but am I right that the server keeps processing the request for some time afterwards?
The bigger problem with F5 in our application is that we don't preserve all state (lots of iframes etc.), so the user needs to start over again.
My ideal situation would be: get a list of all current/running requests and abort the one you want directly, stopping it from processing. (Is this even possible? What if it's in the middle of a code block; can it terminate a thread or something? Do I need IIS?)
I've tried something with a custom HttpHandler that holds a list of all HttpContext objects. That works, but it probably isn't the best solution. And for stopping a request I've tried Response.End(), but then I get a completely white screen. Are there better ways to selectively abort a request?
You can check HttpResponse.IsClientConnected and end the processing if it is false; check the conditions under which the flag is true to make sure it will work for you: http://msdn.microsoft.com/en-us/library/system.web.httpresponse.isclientconnected.aspx
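A minimal sketch, assuming the long-running work can be broken into chunks; GetWorkChunks and Process are illustrative stand-ins:

protected void Page_Load(object sender, EventArgs e)
{
    foreach (var chunk in GetWorkChunks())
    {
        // Client refreshed or navigated away; abandon the rest of the work.
        if (!Response.IsClientConnected)
            return;

        Process(chunk);
    }
}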
I'm working with a rather large .net web application.
Users want to be able to export reports to PDF. Since the reports are based on the aggregation of many layers of data, the best way to get an accurate snapshot is to actually take a snapshot of the UI. I can take the HTML of the UI and parse that into a PDF file.
Since the UI may take up to 30 seconds to load but the results never change, I want to cache a PDF, generated in a background thread, as soon as an item gets saved.
My main concern with this method is that if I go through the UI, I have to worry about timeouts. While background threads and the like can last as long as they want, ASPX pages only live so long before they are terminated.
I have two ideas how to take care of this. The first idea is to create an aspx page that loads the UI, overrides render, and stores the rendered data to the database. A background thread would make a WebRequest to that page internally and then grab the results from the database. This obviously has to take security into consideration and also needs to worry about timeouts if the UI takes too long to generate.
The other idea is to create a page object and populate it manually in code, call the relevant methods by hand, and then grab the data from that. The problems with that method, aside from my having no idea how to do it, are that I'm afraid I may forget to call a method, or that something may not work correctly because it's not actually associated with a real session or web server.
What is the best way to simulate the UI of a page in a background thread?
I know of 3 possible solutions:
IHttpHandler
This question has the full answer. The general gist is that you capture the Response.Filter output by implementing your own readable stream and a custom IHttpHandler.
This doesn't let you capture a page's output remotely, however; it only allows you to capture the HTML that would be sent to the client beforehand, and the page has to be called. So if you use a separate page for PDF generation, something will have to call that page (a rough sketch of the filter shape follows).
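A rough sketch of that filter shape, based on the linked answer's approach; CaptureFilter is an illustrative name, and the linked question has the full version:

using System.IO;
using System.Text;

public class CaptureFilter : MemoryStream
{
    private readonly Stream _inner;
    public CaptureFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        base.Write(buffer, offset, count);   // keep a copy for PDF generation
        _inner.Write(buffer, offset, count); // still send it to the client
    }

    public string GetHtml()
    {
        return Encoding.UTF8.GetString(ToArray());
    }
}

// Wired up in the page or handler, before rendering:
// Response.Filter = new CaptureFilter(Response.Filter);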
WebClient
The only alternative I can see for doing that with ASP.NET is to use a blocking WebClient to request the page that is generating the HTML. Take that output and then turn it into a PDF. Before you do all this, you can obviously check your cache to see if it's in there already.
using System.Net;

// Blocking call: downloads the fully rendered HTML of the page.
WebClient client = new WebClient();
string result = client.DownloadString("http://localhost/yoursite");
WatiN (or other browser automation packages)
One other possible solution is WatiN, which gives you a lot of flexibility with capturing a browser's HTML. The drawback is that it needs to interact with the desktop. Here's their example:
using (IE ie = new IE("http://www.google.com"))
{
ie.TextField(Find.ByName("q")).TypeText("WatiN");
ie.Button(Find.ByName("btnG")).Click();
Assert.IsTrue(ie.ContainsText("WatiN"));
}
If "the best way to get an accurate snapshot is to actually take a snapshot of the UI" is actually true, then you need to refactor your code.
Build a data provider that provides your aggregated data to both the UI and the PDF generator. Layer your system.
Then, when it's time to build the PDFs, you have only a single location to call, and no hacky UI interception/multiple-thread issues to deal with.
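A minimal sketch of that layering, assuming illustrative names (IReportDataProvider, ReportData, RenderPdf):

public interface IReportDataProvider
{
    ReportData GetAggregatedData(int reportId); // single source of truth
}

public class PdfReportGenerator
{
    private readonly IReportDataProvider _provider;

    public PdfReportGenerator(IReportDataProvider provider)
    {
        _provider = provider;
    }

    public byte[] Generate(int reportId)
    {
        // Consumes the same data the UI binds to; no HTML interception needed.
        ReportData data = _provider.GetAggregatedData(reportId);
        return RenderPdf(data); // stand-in for your PDF library call
    }
}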