Which event of FiddlerApplication indicates the moment when a page is fully loaded in the web browser? The AfterSessionComplete event fires before all page items are loaded...
I am using an external DLL (FiddlerCore).
Fiddler.FiddlerApplication.AfterSessionComplete += delegate(Fiddler.Session session)
{
    Console.WriteLine("End time:\t" + session.fullUrl + ", " + session.Timers.ClientDoneResponse.ToString("dd-MM-yyyy HH:mm:ss"));
};
This question was answered an hour before you posted here in the forum where you originally asked it.
A web proxy (e.g. Fiddler) cannot possibly know when a web browser is
finished loading a page. Even if you were running code in the web
browser itself, it's a non-trivial exercise.
About the closest you could come is to use the proxy to inject JavaScript that then emits the timing information to the network for the proxy to catch, but even doing this from JavaScript isn't necessarily precise.
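A rough sketch of that injection idea follows. Everything here is an assumption for illustration - the beacon URL and the helper name are made up, and none of this is FiddlerCore functionality; it is just the kind of snippet a proxy could splice into a response before `</body>`:

```javascript
// Hypothetical snippet a proxy could inject into a page. When the
// browser fires its 'load' event, we issue a request the proxy can
// intercept, carrying the elapsed load time.
// buildBeaconUrl is a pure helper; the window-dependent part below
// only runs in a real browser.
function buildBeaconUrl(base, navStartMs, loadEndMs) {
  return base + '?loadMs=' + (loadEndMs - navStartMs);
}

if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    var start = performance.timing.navigationStart;
    // Fire-and-forget image request; the proxy watches for this URL.
    new Image().src = buildBeaconUrl('/__proxy_timing__', start, Date.now());
  });
}
```

Even this only captures the `load` event; content loaded afterward by script (as discussed below for Yelp-style pages) would still be missed.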
How to display an alert box in UI from code behind through for loop?
I have a function that processes data from a CSV file and writes logs to the database. I also want to show a notification or alert in the UI so that I can know instantly what's happening in the system. I tried a toast notification and an alert, but they only display once, after the loop has finished.
Here is my code:
AddLog(finalJobId, taskId, "Sending batch to Payment Gateway # of Records:" + JobCount + " from" + txtCSVFile.Text);
for (int i = 0; i < JobCount; i++)
{
    var items = csvLists[i];
    Page.ClientScript.RegisterStartupScript(this.GetType(), "ShowMessage", string.Format("<script type='text/javascript'>alert('{0}')</script>", "The Payment is being proceed for : " + items["First Name"] + " " + items["Last Name"]));
    var paymentId = items["Payment Id"];
    var clientId = items["Client Id"];
    var client = items["Client Name"];
    var amount = items["Total Payment Amount"];
    var method = items["Payment Method"];
    AddLog(finalJobId, taskId, "Sending payment #" + i + " - Client: " + client + " - PaymentId: " + paymentId + " - Amount: " + amount + " - Payment Method: " + method);
    // Process actual payment with details
    var task = ProcessPayment.Process(int.Parse(clientId), int.Parse(paymentId), serverValue);
    AddLog(finalJobId, taskId, "Received Response #" + i + " - Client: " + client + " - PaymentId: " + paymentId + " - Amount: " + amount + " - Payment Method: " + method + " - Result: " + task.Result.Response);
}
AddLog(finalJobId, taskId, "Completed batch for Payment Request # of Records:" + JobCount);
I usually have more than 200 records in the CSV file, so I have to wait for a while after each payment is processed. I don't know the best way to show a notification so that users understand what is happening at the moment.
OK, the easiest way to handle this?
Well, when you click on a button, the page (post back) is sent to the server.
Your code runs - and all code-behind is BLOCKED until it is done. (You can NOT make additional calls from that web page; the page is sitting "stuck" in server-side land, waiting for the code to finish.)
When code does finally finish, the page is sent BACK down to the client side.
So, obviously this line of code can't work:
Page.ClientScript.RegisterStartupScript(this.GetType(), "ShowMessage" bla bla
Since, after the above, the long-delay code runs. ONLY AFTER that long-running code is finished does the code stub finish, and THEN the WHOLE PAGE is sent back down to the client (including the above idea of wanting to run some js - that will only start running after the page is sent BACK down to the client side).
So, there are as many ways to do this as there are flavors of ice cream, but the MOST simple?
Well, whatever button they click on that starts your above code simply has to show your toast message client side BEFORE the post back.
Thus, you pop up the toast, and THEN your code-behind runs. We don't even have to write code to dismiss/hide/close/remove the message, since when the page comes rolling back down the pipe to the browser?
Well, the whole page is re-loaded for you in the client, and thus your toast message will go away.
This is thus the "most" easy approach. But it does mean the web page is quite blocked from interaction, and we ARE waiting for a whole copy of that page to come zooming down from the server WHEN that long-running code finally finishes.
So, super simple:
We assume you have an asp.net button on the form.
We assume the user is going to click on that button, and then your code runs.
Note that the web page travels back up to the server and THEN your code runs.
The web page is not coming back to client side until that code runs and is done.
As noted, we can use this effect to our advantage.
So, your asp button click can use OnClientClick(). This means that the client-side code (js) will run BEFORE your server-side code-behind runs on that button click.
And it also runs before the post back to server occurs. (web page travel to server).
So, we use this to display a "div", or fill out a text box, or in your case some toast message.
So, button code will be:
<asp:Button ID="Button2" runat="server" Text="Confirm my order"
OnClientClick="GoGo2()" />
So the above is our lovable standard asp button. But, note the "OnClientClick".
So, it will run both the client-side and server-side events. But it WILL FIRST run the client-side code we call (the client-side routine GoGo2()).
That client side routine can thus display the message.
Above calls our client side code stub, say this:
<script>
function GoGo2() {
$.toast({
heading: "Working",
text: 'Processing order - - please wait',
hideAfter: false,
position: 'top-right',
icon: 'info'
})
}
</script>
Now, of course you could get/grab/pull/use a value of a text box or other controls (client side) to create a "better" message.
So the message could say:
heading: "Order #" + $('#txtBoxOrderNum').val(),
text: 'Processing order please wait',
Or whatever else you have on that web page. Do note that we can only get/grab/use current values in the web page (no server side values from code behind).
So, now we have this:
User clicks on button.
Above toast message displays (our client side js).
Heck, you can display (shove) info in a text box, or even hide/show a "div" and NOT even use toast. But some kind of toast plug-in is just fine.
Now that routine runs, displays your message client side.
THEN the web page makes the trip up to the server. Then your code-behind for that button click starts running. When the long-running code is done, the page THEN makes the trip back to the browser.
Because we're getting a whole new page back down browser side?
Well, then the toast message will be blown away (disappear), and thus we don't need code to update or hide the message, since we just received a fresh new copy of the web page anyway!
So above is the most easy.
If the process is LONG(er) running, say more than a few seconds - say 25 seconds? Well, you don't want to block up a whole web page for that long (all other server-side code and any buttons would be frozen).
In that case? Well, then we need to use a thread object and thread out the call to a separate thread. (It is easy to do.)
Now the button click and browser round trip will occur very fast.
However, for status updates, we would have to START that running code AFTER the page comes back client side (you can try to inject script as you attempted for this idea).
However, it's usually better NOT to do a post back if you're going to status-update the page in the FIRST place.
jQuery AJAX calls are VERY easy, and you can call web methods on the existing page (no need to set up a separate web service page (asmx) for this).
In effect, if you REALLY want a progress update (say % done), or to show several messages as the long processing is occurring? We will want to avoid a post back, and avoid post-back blockage. One way is to use an update panel. But your main page will have to thread out the long-running process regardless and NOT BE BLOCKED (ever!!!).
I could add an edit to this post to include a jQuery AJAX call, but the above is oh so simple, and I think it should suffice for this example.
And this approach is super simple - no need to introduce new technology, ajax calls, or signalR concepts.
EVEN if you did/do introduce signalR, you STILL will have to thread out the long process to ensure that the browser round trip is not blocked.
As noted, in these cases (wanting % status updates etc.), I REALLY (no, but REALLY REALLY) would not do a post back. So one would then be wise to start the whole process and the status updates using an AJAX call.
And since we assume it is most practical to AVOID a post back WHEN we want a status-update system (% done etc.), a few simple ajax calls are less work and easier than signalR setup anyway.
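A bare-bones sketch of that polling idea. The endpoint, the status shape, and the callback names are all assumptions for illustration, not anything from the answer; in a real page, fetchStatus would be a jQuery AJAX call to a page [WebMethod]:

```javascript
// Poll a status source until it reports done, without ever blocking
// the page. fetchStatus(cb) stands in for an AJAX call that invokes
// cb with a status object like { pct: 40, done: false }.
function pollStatus(fetchStatus, onUpdate, onDone, intervalMs) {
  fetchStatus(function (status) {
    onUpdate(status);              // e.g. update a progress bar / toast
    if (status.done) {
      onDone(status);              // long-running job finished
    } else {
      // Ask again after a short pause; the UI stays responsive.
      setTimeout(function () {
        pollStatus(fetchStatus, onUpdate, onDone, intervalMs);
      }, intervalMs);
    }
  });
}
```

The server side would still thread out the long-running work, exactly as described above; this loop only reads the progress the worker records.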
Alrighty, guys. If you'd like to pull your hair out, then I've got a great problem for you. This problem seems very rare, but it affects my program on a few different sites that have pages that load content twice.
For instance: http://www.yelp.com/search?find_desc=donuts&find_loc=78664&ns=1#start=20
If you visit this site, you'll notice that it loads, then reloads with different data. That's because there is a parameter in the URL that says start=20, so the results should start at #20 instead of #10. No matter what that is set to, Yelp first loads the first 10 results. Not sure why they do this, but this is a prime example of what absolutely breaks my program. :(
Basically, whenever my program has a page that loads, it copies the source code to a string so it can display it somewhere else. That's not really important - what is important is that the string needs to contain the last thing that is loaded in the page. Whenever a page loads, then loads again, I am not sure how to catch it, and it ruins the program by exiting the while loop and copying the source code into the string called Source.
Here is a snippet of some code that I reproduced the problem with. When I attempt to use this in a new program, it will copy the source code for the first page's results instead of what it is changed to.
GetSite = "http://www.yelp.com/search?find_desc=donuts&find_loc=78664&ns=1#start=20";
webBrowser9.Navigate(GetSite);
while (webBrowser9.ReadyState != WebBrowserReadyState.Complete)
{
    p++;
    if (p == 1000000)
    {
        MessageBox.Show("Timeout error. Click OK to skip." + Environment.NewLine + "This could crash the program, but maybe not.");
        label15.Text = "Error Code: Timeout";
        break;
    }
    Application.DoEvents();
}
mshtml.HTMLDocument objHtmlDoc = (mshtml.HTMLDocument)webBrowser9.Document.DomDocument;
Source = objHtmlDoc.documentElement.innerHTML;
Why do you wait in a while loop for the browser to finish loading the data?
Use the DocumentCompleted event, and you can remember the document's URL from there.
I am using a WebBrowser control for web scraping pages on Yahoo News. I need to use a WebBrowser rather than HtmlAgilityPack to accommodate JavaScript and the like.
Application Type: WinForm
.NET Framework: 4.5.1
VS: 2013 Ultimate
OS: Windows 7 Professional 64-bit
I am able to scrape the required text, but I am unable to return control of the application to the calling function or any other function when scraping is complete. I also cannot verify that scraping is complete.
I need to
1. Verify that all page loads and scraping have completed.
2. Perform actions on a list of the results, as by alphabetizing them.
3. Do something with the data, such as displaying text contents in a Text box or writing them to SQL.
I declare new class variables for the WebBrowser, a list of URLs, and an object with a property that contains a list of news articles.
public partial class Form1 : Form
{
public WebBrowser w = new WebBrowser(); //WebBrowser
public List<String> lststrURLs = new List<string>(); //URLs
public ProcessYahooNews pyn = new ProcessYahooNews(); //Contains articles
...
lststrURLs.Add("http://news.yahoo.com/sample01");
lststrURLs.Add("http://news.yahoo.com/sample02");
lststrURLs.Add("http://news.yahoo.com/sample03");
Pressing a button, whose handler is the calling function, runs this code.
w.Navigate(strBaseURL + lststrTickers[0]); //invokes w_Loaded
foreach (YahooNewArticle article in pyn.articles)
{
textBox1.Text += article.strHeadline + "\r\n";
textBox1.Text += article.strByline + "\r\n";
textBox1.Text += article.strContent + "\r\n";
textBox1.Text += article.dtDate.ToString("yyyymmdd") + "\r\n\r\n";
}
The first problem I have is that program control appears to skip over w.Navigate and pass directly to the foreach block, which does nothing since articles has not been populated yet. Only then is w.Navigate executed.
If I could get the foreach block to wait until after w.Navigate did its work, then many of my problems would be solved. Absent that, w.Navigate will do its work, but then I need control passed back to the calling function.
I have worked on a partial work-around.
w.Navigate loads a page into the WebBrowser. When it is done loading, the event w.DocumentCompleted fires. I am handling the event with w_Loaded, which uses a class with logic to perform the web scraping.
// Sets up the class
pyn.ProcessYahooNews_Setup(w, e);
// Perform the scraping
pyn.ProcessLoad();
The result of the scraping is that pyn.articles is populated. The next page is loaded only when criteria are met, such as pyn.articles.Count > 0.
if (pyn.articles.Count > 0)
{
//Navigate to the next page
i++;
w.Navigate(lststrURLs[i]);
}
More pages are scraped, and articles.Count grows. However, I cannot determine that scraping is done - that there will not be more page loads resulting in more articles.
Supposing I am confident that the scraping is done, I need to make articles available for further handling: sorting it as a list, removing certain elements, and displaying its textual content in a TextBox.
That takes me back to the foreach block that was called too early. Now I need it, but I have no way to get articles into the foreach. I don't think I can have some other function called from w_Loaded do the handling for me, because it would be called for each page load, and I need to call the function once after all page loads.
It occurs to me that some threaded architecture might help, but I could use some help on figuring out what the architecture would look like.
I am attempting maintenance on a system I did not write (and aren't we all?). It is written in C# and JavaScript, with Telerik reports.
It has the following code included in JavaScript that runs when the user clicks a button to display a report in a separate window:
var oIframe = $("<iframe id='idReportFrame' style='display:none' name='idReportFrame' src=''>");
oIframe.load(function() { parent.ViewReports(); });
oIframe.appendTo('body');
try
{
$('#idReportForm').attr('target', 'idReportFrame');
$('#idReportForm').submit();
}
catch (err) { // I did NOT write this
}
Then the load function:
function ViewReports()
{
var rptName = $("#ReportNameField").val();
if (rptName == '') { return false; }
var winOption = "fullscreen=no,height=" + $(window).height() + ",left=0,directories=yes,titlebar=yes,toolbar=yes,location=yes,status=no,menubar=yes,scrollbars=no,resizable=no,top=0,width=" + $(window).width();
var win = window.open('#Url.Action("ReportView", "MyController")?pReportName=' + rptName, 'Report', winOption);
win.focus();
return false;
}
When I execute this (in Chrome, at least), it does pop up the window and put the report in it. However, breakpoints in the C# code indicate that it is getting called 2 or 3 times. Breakpoints in the JavaScript, and examination of the console in Chrome's JavaScript debugger, show that the call to win.focus() fails once or twice before succeeding. It returns an undefined value, and then it appears that the first routine above is executed again.
I am inclined to think it's some kind of timing issue, except that the window.open() call is supposed to be synchronous as far as I can tell, and I don't know why it would succeed sometimes and not others. There is a routine that gets executed on load of the window; perhaps that's somehow screwing up the return of the value from open().
I am not much of a JavaScript person, as those of you who are can likely tell by this time. If there is something in the code I've put here that you can tell me is incorrect, that's great; what I'm more hopeful for is someone who can explain how the popup-report-in-frame is supposed to work. Hopefully I can do it without having to replace too much of the code I've got, as it is brittle and was not, shall we say, written with refactoring in mind.
From what I could find, window.open will return null when it fails to open. Something may be keeping the browser from opening additional windows those couple of times; maybe it is a popup blocker.
The actual loading of the url and creation of the window are done asynchronously.
https://developer.mozilla.org/en-US/docs/Web/API/Window.open
Popup blocking
In the past, evil sites abused popups a lot. A bad page could open
tons of popup windows with ads. So now most browsers try to block
popups and protect the user.
Most browsers block popups if they are called outside of
user-triggered event handlers like onclick.
For example:
// popup blocked
window.open('https://javascript.info');
// popup allowed
button.onclick = () => {
window.open('https://javascript.info');
};
Source: https://javascript.info/popup-windows
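Given that null return, a defensive wrapper is worth sketching. This is an illustration, not code from the question; the openerFn callback exists only so the guard logic can be exercised outside a browser:

```javascript
// window.open can return null when a popup blocker steps in, so guard
// before calling focus() on the result.
function focusPopup(openerFn) {
  var win = openerFn();            // e.g. function () { return window.open(url, 'Report', opts); }
  if (win && typeof win.focus === 'function') {
    win.focus();
    return true;                   // popup opened and focused
  }
  return false;                    // blocked: caller can fall back, e.g. same-tab navigation
}
```

In the question's ViewReports(), the unconditional win.focus() is exactly the call that throws when open() fails; a check like this would at least fail gracefully.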
I just ran into this, and it seems to be because I had a breakpoint on the line that calls window.open and was stepping through the code in Chrome dev tools. This was extremely hit-and-miss and seemed to fail (return null and not open a window, whether one already existed or not) more times than it succeeded.
I read @Joshua's comment that the creation is done asynchronously, so I figured that forcing the code to 'stop' each time I step might be screwing things up somehow (though on a single line like var w = window.open(...) it doesn't seem like this could happen).
So, I took out my breakpoint.. and everything started working perfectly!
I also took note of https://developer.mozilla.org/en-US/docs/Web/API/Window/open, where they specify that if you are re-using a window variable and name (the second argument to window.open), then a certain pattern of code is recommended. In my case, I want to write HTML content to it, rather than give it a URL and let it async-load the content over the network, and I may call the whole function repeatedly without regard for the user closing the window that pops up. So now I have something like this:
var win; // initialises to undefined
function openWindow() {
var head = '<html><head>...blahblah..</head>';
var content = '<h1>Amazing content</h1><p>Isn\'t it, though?</p>';
var footer = '</body></html>';
if (!win || win.closed) {
// window either never opened, or was open and has been closed.
win = window.open('about:blank', 'MyWindowName', 'width=100,height=100');
win.document.write(head + content + footer);
} else {
// window still exists from last time and has not been closed.
win.document.body.innerHTML = content;
}
}
I'm not convinced the write call should be given the full <html> header, but this seems to work 100% for me.
[edit] I found that a Code Snippet on Stack Overflow has some kind of security feature that prevents window.open, but this jsfiddle shows the code above working, with a tweak to show an incrementing counter to prove the content update is working as intended. https://jsfiddle.net/neekfenwick/h8em5kn6/3/
A bit late, but I think it's due to the window not actually being closed in JS, or maybe the memory pointer not being dereferenced.
I was having the same problem, and I solved it by enclosing the call in a try/finally block.
try {
if (!winRef || winRef.closed) {
winRef = window.open('', '', 'left=0,top=0,width=300,height=400,toolbar=0,scrollbars=0,status=0,dir=ltr');
} else {
winRef.focus();
}
winRef.document.open();
winRef.document.write(`
<html>
<head>
<link rel="stylesheet" href="/lib/bootstrap/dist/css/bootstrap.min.css">
</head>
<body>
${$(id).remove('.print-exclude').html()}
</body>
</html>
`);
winRef.document.close();
winRef.focus();
winRef.print();
} catch { }
finally {
if (winRef && !winRef.closed) winRef.close();
}
Long post.. sorry
I've been reading up on this and tried back and forth with different solutions for a couple of days now but I can't find the most obvious choice for my predicament.
About my situation: I am presenting the user a page that will contain a couple of different repeaters showing some info based on the results of a couple of web service calls. I'd like to have the data brought in with an UpdatePanel (which would query the result table once every two or three seconds until it found results), so I'd actually like to render the page first and then show the data when it's "ready".
The page asks a controller for the info to render, and the controller checks a result table to see if there's anything to be found. If the specific data is not found, it calls a method GetData() in WebServiceName.cs. GetData does not return anything, but is supposed to start an async operation that gets the data from the web service. The controller returns null, and the UpdatePanel waits for the next query.
When that operation is complete, it'll store the data in its relevant place in the db, where the controller will find it the next time the page asks for it.
The solution I have in place now is to fire up another thread. I will host the page on a shared web server, and I don't know if this will cause any problems.
So the current code which resides on page.aspx:
Thread t = new Thread(new ThreadStart(CreateService));
t.Start();
}
void CreateService()
{
ServiceName serviceName = new ServiceName(user, "12345", "MOVING", "Apartment", "5100", "0", "72", "Bill", "rate_total", "1", "103", "serviceHost", "password");
}
At first I thought the solution was to use Begin[Method] and End[Method], but these don't seem to have been generated. I thought this seemed like a good solution, so I was a little frustrated when they didn't show up. Is there a chance I might have missed a checkbox or something when adding the web references?
I do not want to use [Method]Async, since this stops the page from rendering until [Method]AsyncCompleted gets called, from what I've understood.
The call I'm going to make is not CPU-intensive; I'm just waiting on a web service sitting on a slow server. So, from what I understood from this article: http://msdn.microsoft.com/en-us/magazine/cc164128.aspx, making the thread pool bigger is not an option, as this will actually impair performance instead (since I can't throw in a mountain of hardware).
What do you think is the best solution for my current situation? I don't really like the current one (only by gut feeling, but anyway).
Thanks for reading this awfully long post..
Interesting. Until your question, I wasn't aware that VS changed from using Begin/End to Async/Completed when adding web references. I assumed that they would also include Begin/End, but apparently they did not.
You state "GetData does not return anything but is supposed to start an async operation that gets the data from the webservice," so I'm assuming that GetData actually blocks until the "async operation" completes. Otherwise, you could just call it synchronously.
Anyway, there are easy ways to get this working (asynchronous delegates, etc), but they consume a thread for each async operation, which doesn't scale.
You are correct that Async/Completed will block an asynchronous page. (side note: I believe that they will not block a synchronous page - but I've never tried that - so if you're using a non-async page, then you could try that). The method by which they "block" the asynchronous page is wrapped up in SynchronizationContext; in particular, each asynchronous page has a pending operation count which is incremented by Async and decremented after Completed.
You should be able to fake out this count (note: I haven't tried this either ;) ). Just substitute the default SynchronizationContext, which ignores the count:
var oldSyncContext = SynchronizationContext.Current;
try
{
SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
var serviceName = new ServiceName(..);
// Note: MyMethodCompleted will be invoked in a ThreadPool thread
// but WITHOUT an associated ASP.NET page, so some global state
// might be missing. Be careful with what code goes in there...
serviceName.MethodCompleted += MyMethodCompleted;
serviceName.MethodAsync(..);
}
finally
{
SynchronizationContext.SetSynchronizationContext(oldSyncContext);
}
I wrote a class that handles the temporary replacement of SynchronizationContext.Current as part of the Nito.Async library. Using that class simplifies the code to:
using (new ScopedSynchronizationContext(new SynchronizationContext()))
{
var serviceName = new ServiceName(..);
// Note: MyMethodCompleted will be invoked in a ThreadPool thread
// but WITHOUT an associated ASP.NET page, so some global state
// might be missing. Be careful with what code goes in there...
serviceName.MethodCompleted += MyMethodCompleted;
serviceName.MethodAsync(..);
}
This solution does not consume a thread that just waits for the operation to complete. It just registers a callback and keeps the connection open until the response arrives.
You can do this:
var action = new Action(CreateService);
action.BeginInvoke(action.EndInvoke, action);
or use ThreadPool.QueueUserWorkItem.
If using a Thread, make sure to set IsBackground=true.
There's a great post about fire and forget threads at http://consultingblogs.emc.com/jonathangeorge/archive/2009/09/10/make-methods-fire-and-forget-with-postsharp.aspx
Try using the settings below
[WebMethod]
[SoapDocumentMethod(OneWay = true)]
void MyAsyncMethod(parameters)
{
}
in your web service
But be careful if you use impersonation; we had problems with it on our side.
I'd encourage a different approach - one that doesn't use update panels. Update panels require an entire page to be loaded and transferred over the wire, and you only want the contents of a single control.
Consider doing a slightly more customized & optimized approach, using the MVC platform. Your data flow could look like:
Have the original request to your web page spawn a thread that goes out and warms your data.
Have a "skeleton" page returned to your client
In said page, have a javascript thread that calls your server asking for the data.
Using MVC, have a controller action that returns a partial view, which is limited to just the control you're interested in.
This will reduce your server load (can have a backoff algorithm), reduce the amount of info sent over the wire, and still give a great experience to the client.
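The backoff mentioned above can be sketched in a few lines. The doubling factor, base delay, and cap are tuning knobs I've assumed, not values from the answer:

```javascript
// Exponential backoff for the client-side polling loop: each successive
// poll waits twice as long as the previous one, capped so the page
// still refreshes within a reasonable bound once data is warm.
function nextDelay(attempt, baseMs, maxMs) {
  return Math.min(baseMs * Math.pow(2, attempt), maxMs);
}
```

So with a 500 ms base and an 8 s cap, the waits run 500, 1000, 2000, 4000, then hold at 8000 ms - early polls catch fast results quickly, while a slow data warm-up doesn't hammer the controller action.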