RIA Framework DomainDataSource MoveToNextPage, MoveToPage, MoveToFirstPage doesn't move pages - C#

I'm attempting to write a save-results extension for the DomainDataSourceView.
I can successfully write the contents of the current page of results, but when I call MoveToNextPage(), the PageIndex stays the same. The MSDN docs don't provide any details beyond the fact that MoveToNextPage returns a bool indicating whether it successfully moved to the next page.
The following sample code results in an infinite loop, and the current page never changes.
private string WriteResults(DomainDataSourceView resultsview)
{
    StringBuilder csvdata = new StringBuilder();
    // ... do work on the current page ...
    if (resultsview.CanChangePage && resultsview.MoveToNextPage())
    {
        csvdata.Append(WriteResults(resultsview));
    }
    return csvdata.ToString();
}
Do I need to listen for the PageChanged event to continue saving results?
Do I need to call Load on the DomainDataSource for each page?
The MSDN docs on DomainDataSourceView don't go into much detail on this subject.
[Edit]
After playing around some more, I was able to determine that the Move...Page commands do call the DomainDataSource Load operation. However, it's another async call, so any subsequent work that needs to be done on the loaded pages should be handled accordingly.
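For example, here is a minimal sketch of chaining the per-page work through the PageChanged event. The WriteCurrentPage helper and the csvdata field are hypothetical names added for illustration, and depending on exactly when PageChanged is raised relative to the asynchronous load completing, hooking the DomainDataSource.LoadedData event instead may be the more reliable choice:

private StringBuilder csvdata = new StringBuilder();

private void StartExport(DomainDataSourceView resultsview)
{
    resultsview.PageChanged += OnPageChanged;
    WriteCurrentPage(resultsview);      // write the page that is already loaded
    if (resultsview.CanChangePage)
    {
        resultsview.MoveToNextPage();   // async; work continues in OnPageChanged
    }
}

private void OnPageChanged(object sender, EventArgs e)
{
    var view = (DomainDataSourceView)sender;
    WriteCurrentPage(view);             // the newly loaded page
    if (!view.MoveToNextPage())         // false when there is no next page
    {
        view.PageChanged -= OnPageChanged;
        // csvdata now holds every page; save it here
    }
}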

Related

How to execute code after a Blazor page has loaded

So I am trying to execute a method when the page has loaded. The OnAfterRender() override method is too early in my case. The method I am trying to call is in the @code{} block of the Razor page.
I basically want to execute getAvailablePrinters when the page is loaded.
As requested, my code is below:
@code {
    private List<Layout> Layouts;
    private List<string> Printers;
    private List<string> LayoutTypes;

    private void sendPrint() { /* ... */ }

    private async Task getAvailablePrinters()
    {
        // get layouts
        Layouts = new List<Layout>();
        AvailablePrintersRepository availablePrintersRepository = new AvailablePrintersRepository();
        try
        {
            Layouts = await availablePrintersRepository.getAvailablePrintersAsync();
        }
        catch (Exception e)
        {
            // show message
        }

        // sort printers & layouts
        Printers = new List<string>();
        LayoutTypes = new List<string>();
        foreach (Layout layout in Layouts)
        {
            foreach (string printer in layout.Printers)
            {
                if (!Printers.Contains(printer))
                {
                    Printers.Add(printer);
                }
            }
            if (!LayoutTypes.Contains(layout.Type))
            {
                LayoutTypes.Add(layout.Type);
            }
        }
    }
}
@AccessDenied I want to send a request to another API to get back the data I need to display to the user. I currently have a button to do it, but I want to get the data after the page has loaded so the user doesn't have to press the button each time.
--
Because the method isn't finished when the page has loaded; that is why I want to do it after the page has loaded.
So you believe that OnAfterRenderAsync and OnAfterRender are called too early in the pipeline, and are thus not fit for the Web API call you want to make in order to retrieve data, right?
On the contrary, in my opinion they are too late for this purpose. You should use the OnInitializedAsync life cycle method to execute the HTTP request.
Please see the VS template for how a Web API call is made to populate the Forecast objects in the FetchData page.
You should try code in various situations to understand how the initialization process works, and see that your ideas or perceptions are wrong. Understand this: you should retrieve your data before your page is rendered, not after it has been rendered. OnAfterRender(Async) may be used to execute code that cannot run any earlier; it is most often used to initialize JS objects.
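For example, a minimal sketch of what this suggests, reusing the getAvailablePrinters method from the question:

protected override async Task OnInitializedAsync()
{
    // Runs before the first render, so the data is ready when the page is displayed.
    await getAvailablePrinters();
}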
Hope this helps...

Proper usage for MvxFileDownloadCache.Clear

I'm trying to clean up unwanted HTTP image data that I have loaded via MvxImageViewLoader.
I've found the Clear function in the FileDownloadCache, which seems to do what I need.
var downloadCache = Mvx.Resolve<IMvxFileDownloadCache>();
downloadCache.Clear(_imageChart1ViewLoader.ImageUrl);
Periodically (once a second, by the looks of it) the cache calls a function which deletes the files in a private list:
private readonly List<string> _toDeleteFiles = new List<string>();
Clear adds the image, by URL, to that list.
Except once I call this function I'm still able to see the image, i.e. it stays in memory.
So really I need to know where a good place is to call Clear, and whether I am using it in the correct way. Currently I call it every time I exit my DetailView, which downloads an image from a URL.
MvvmCross v4.2.2 (latest)

WebBrowser Control Loading Twice

Alrighty, guys. If you'd like to pull your hair out, then I've got a great problem for you. This problem seems very rare, but it affects my program on a few different sites that have pages that load content twice.
For instance: http://www.yelp.com/search?find_desc=donuts&find_loc=78664&ns=1#start=20
If you visit this site, you'll notice that it loads, then reloads with different data. That's because there is a parameter in the URL that says start=20, so the results should start at #20 instead of #10. No matter what that is set to, Yelp loads the first 10 results first. Not sure why they do this, but this is a prime example of what absolutely breaks my program. :(
Basically, whenever a page loads in my program, it copies the source code to a string so it can be displayed somewhere else. It's not really important; what is important is that the string needs to contain the last thing that is loaded in the page. Whenever a page loads, then loads again, I am not sure how to catch it, and it ruins the program by exiting the while loop and copying the source code into the string called Source.
Here is a snippet of some code that reproduces the problem. When I attempt to use this in a new program, it copies the source code for the first page's results instead of what the page changes to.
GetSite = "http://www.yelp.com/search?find_desc=donuts&find_loc=78664&ns=1#start=20";
webBrowser9.Navigate(GetSite);
while (webBrowser9.ReadyState != WebBrowserReadyState.Complete)
{
    p++;
    if (p == 1000000)
    {
        MessageBox.Show("Timeout error. Click OK to skip." + Environment.NewLine + "This could crash the program, but maybe not.");
        label15.Text = "Error Code: Timeout";
        break;
    }
    Application.DoEvents();
}
mshtml.HTMLDocument objHtmlDoc = (mshtml.HTMLDocument)webBrowser9.Document.DomDocument;
Source = objHtmlDoc.documentElement.innerHTML;
Why do you wait in a while loop for the browser to finish loading data?
Use the DocumentCompleted event, and you can check the document's URL from there.
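A minimal sketch of that suggestion, reusing the names from the question; the handler name and the URL check are illustrative, not part of the original answer:

// Subscribe once, then navigate; do the scraping in the handler instead of polling.
webBrowser9.DocumentCompleted += webBrowser9_DocumentCompleted;
webBrowser9.Navigate(GetSite);

private void webBrowser9_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // Fires for each completed document (including frames); e.Url tells you which one.
    if (e.Url != null && e.Url.ToString() == GetSite)
    {
        var objHtmlDoc = (mshtml.HTMLDocument)webBrowser9.Document.DomDocument;
        Source = objHtmlDoc.documentElement.innerHTML;
    }
}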

WebBrowser Scraping - Return Control to Calling Function or Another Function C#

I am using a WebBrowser control to scrape pages on Yahoo News. I need to use a WebBrowser rather than HtmlAgilityPack to accommodate JavaScript and the like.
Application Type: WinForm
.NET Framework: 4.5.1
VS: 2013 Ultimate
OS: Windows 7 Professional 64-bit
I am able to scrape the required text, but I am unable to return control of the application to the calling function or any other function when scraping is complete. I also cannot verify that scraping is complete.
I need to:
1. Verify that all page loads and scraping have completed.
2. Perform actions on a list of the results, such as alphabetizing them.
3. Do something with the data, such as displaying the text contents in a text box or writing them to SQL.
I declare new class variables for the WebBrowser, a list of URLs, and an object with a property that contains a list of news articles.
public partial class Form1 : Form
{
    public WebBrowser w = new WebBrowser();               // WebBrowser
    public List<String> lststrURLs = new List<string>();  // URLs
    public ProcessYahooNews pyn = new ProcessYahooNews(); // Contains articles
    ...
    lststrURLs.Add("http://news.yahoo.com/sample01");
    lststrURLs.Add("http://news.yahoo.com/sample02");
    lststrURLs.Add("http://news.yahoo.com/sample03");
Pressing a button, whose handler is the calling function, invokes this code.
w.Navigate(strBaseURL + lststrTickers[0]); // invokes w_Loaded
foreach (YahooNewArticle article in pyn.articles)
{
    textBox1.Text += article.strHeadline + "\r\n";
    textBox1.Text += article.strByline + "\r\n";
    textBox1.Text += article.strContent + "\r\n";
    textBox1.Text += article.dtDate.ToString("yyyymmdd") + "\r\n\r\n";
}
The first problem I have is that program control appears to skip over w.Navigate and pass directly to the foreach block, which does nothing since articles has not been populated yet. Only then is w.Navigate executed.
If I could get the foreach block to wait until after w.Navigate did its work, then many of my problems would be solved. Absent that, w.Navigate will do its work, but then I need control passed back to the calling function.
I have worked on a partial work-around.
w.Navigate loads a page into the WebBrowser. When it is done loading, the event w.DocumentCompleted fires. I am handling the event with w_Loaded, which uses a class with logic to perform the web scraping.
// Sets up the class
pyn.ProcessYahooNews_Setup(w, e);
// Perform the scraping
pyn.ProcessLoad();
The result of the scraping is that pyn.articles is populated. The next page is loaded only when a criterion is met, such as pyn.articles.Count > 0.
if (pyn.articles.Count > 0)
{
    // Navigate to the next page
    i++;
    w.Navigate(lststrURLs[i]);
}
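For context, the pieces above might fit together in the DocumentCompleted handler roughly like this. This is only a sketch of the flow described in the question; the completion check against lststrURLs.Count and the OnScrapingFinished call are hypothetical names added for illustration:

private void w_Loaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // Set up the scraping class and scrape the page that just finished loading.
    pyn.ProcessYahooNews_Setup(w, e);
    pyn.ProcessLoad();

    if (pyn.articles.Count > 0 && i < lststrURLs.Count - 1)
    {
        // Navigate to the next page; this handler fires again when it completes.
        i++;
        w.Navigate(lststrURLs[i]);
    }
    else
    {
        // Last URL processed: hand the accumulated articles back for sorting/display.
        OnScrapingFinished(pyn.articles);
    }
}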
More pages are scraped, and articles.Count grows. However, I cannot determine when scraping is done - that is, that there will not be more page loads resulting in more articles.
Supposing I am confident that the scraping is done, I need to make articles available for further handling, such as sorting it as a list, removing certain elements, and displaying its textual content in a TextBox.
That takes me back to the foreach block that was called too early. Now I need it, but I have no way to get articles into the foreach. I don't think I can call some other function from w_Loaded to do the handling for me, because it would be called for each page load, and I need to call the function once, after all page loads.
It occurs to me that some threaded architecture might help, but I could use some help figuring out what that architecture would look like.

Calling a webservice async

Long post... sorry.
I've been reading up on this and have gone back and forth between different solutions for a couple of days now, but I can't find the most obvious choice for my predicament.
About my situation: I am presenting the user with a page that will contain a couple of different repeaters showing some info based on the results of a couple of web service calls. I'd like to have the data brought in with an UpdatePanel (which would query the result table once every two or three seconds until it found results), so I'd actually like to render the page and then show the data when it is ready.
The page asks a controller for the info to render, and the controller checks a result table to see if there's anything to be found. If the specific data is not found, it calls a method GetData() in WebServiceName.cs. GetData does not return anything but is supposed to start an async operation that gets the data from the web service. The controller returns null, and the UpdatePanel waits for the next query.
When that operation is complete, it'll store the data in its relevant place in the db, where the controller will find it the next time the page asks for it.
The solution I have in place now is to fire up another thread. I will host the page on a shared web server, and I don't know if this will cause any problems.
So, the current code, which resides in page.aspx:
Thread t = new Thread(new ThreadStart(CreateService));
t.Start();
}

void CreateService()
{
    ServiceName serviceName = new ServiceName(user, "12345", "MOVING", "Apartment", "5100", "0", "72", "Bill", "rate_total", "1", "103", "serviceHost", "password");
}
At first I thought the solution was to use Begin[Method] and End[Method], but these don't seem to have been generated. I thought this seemed like a good solution, so I was a little frustrated when they didn't show up. Is there a chance I might have missed a checkbox or something when adding the web references?
I do not want to use [Method]Async, since from what I've understood this stops the page from rendering until [Method]AsyncCompleted gets called.
The call I'm going to make is not CPU-intensive; I'm just waiting on a web service sitting on a slow server. So, from what I understood from this article: http://msdn.microsoft.com/en-us/magazine/cc164128.aspx, making the thread pool bigger is not an option, as this will actually impair performance instead (since I can't throw in a mountain of hardware).
What do you think is the best solution for my current situation? I don't really like the current one (only by gut feeling, but anyway).
Thanks for reading this awfully long post...
Interesting. Until your question, I wasn't aware that VS changed from using Begin/End to Async/Completed when adding web references. I assumed that they would also include Begin/End, but apparently they did not.
You state "GetData does not return anything but is supposed to start an async operation that gets the data from the webservice," so I'm assuming that GetData actually blocks until the "async operation" completes. Otherwise, you could just call it synchronously.
Anyway, there are easy ways to get this working (asynchronous delegates, etc), but they consume a thread for each async operation, which doesn't scale.
You are correct that Async/Completed will block an asynchronous page. (side note: I believe that they will not block a synchronous page - but I've never tried that - so if you're using a non-async page, then you could try that). The method by which they "block" the asynchronous page is wrapped up in SynchronizationContext; in particular, each asynchronous page has a pending operation count which is incremented by Async and decremented after Completed.
You should be able to fake out this count (note: I haven't tried this either ;) ). Just substitute the default SynchronizationContext, which ignores the count:
var oldSyncContext = SynchronizationContext.Current;
try
{
    SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
    var serviceName = new ServiceName(..);
    // Note: MyMethodCompleted will be invoked in a ThreadPool thread
    // but WITHOUT an associated ASP.NET page, so some global state
    // might be missing. Be careful with what code goes in there...
    serviceName.MethodCompleted += MyMethodCompleted;
    serviceName.MethodAsync(..);
}
finally
{
    SynchronizationContext.SetSynchronizationContext(oldSyncContext);
}
I wrote a class that handles the temporary replacement of SynchronizationContext.Current as part of the Nito.Async library. Using that class simplifies the code to:
using (new ScopedSynchronizationContext(new SynchronizationContext()))
{
    var serviceName = new ServiceName(..);
    // Note: MyMethodCompleted will be invoked in a ThreadPool thread
    // but WITHOUT an associated ASP.NET page, so some global state
    // might be missing. Be careful with what code goes in there...
    serviceName.MethodCompleted += MyMethodCompleted;
    serviceName.MethodAsync(..);
}
This solution does not consume a thread that just waits for the operation to complete. It just registers a callback and keeps the connection open until the response arrives.
You can do this:
var action = new Action(CreateService);
action.BeginInvoke(action.EndInvoke, action);
or use ThreadPool.QueueUserWorkItem.
If using a Thread, make sure to set IsBackground=true.
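For completeness, minimal sketches of those two options, reusing the CreateService method from the question:

// Thread pool option: queue the work and return immediately.
ThreadPool.QueueUserWorkItem(_ => CreateService());

// Dedicated thread option: mark it as a background thread so it
// doesn't keep the process alive on shutdown.
var t = new Thread(CreateService) { IsBackground = true };
t.Start();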
There's a great post about fire and forget threads at http://consultingblogs.emc.com/jonathangeorge/archive/2009/09/10/make-methods-fire-and-forget-with-postsharp.aspx
Try using the settings below
[WebMethod]
[SoapDocumentMethod(OneWay = true)]
public void MyAsyncMethod(/* parameters */)
{
}
in your web service, but be careful if you use impersonation; we had problems with it on our side.
I'd encourage a different approach - one that doesn't use update panels. Update panels require an entire page to be loaded and transferred over the wire, when you only want the contents of a single control.
Consider a slightly more customized and optimized approach using the MVC platform. Your data flow could look like this:
1. Have the original request to your web page spawn a thread that goes out and warms your data.
2. Have a "skeleton" page returned to your client.
3. In that page, have a piece of JavaScript that periodically calls your server asking for the data.
4. Using MVC, have a controller action that returns a partial view, limited to just the control you're interested in (see the sketch below).
This will reduce your server load (you can add a back-off algorithm), reduce the amount of info sent over the wire, and still give a great experience to the client.
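A minimal sketch of step 4, assuming ASP.NET MVC; the DataController name, the ResultRepository.TryGetResults helper, and the "_DataPanel" partial view are hypothetical names added for illustration:

public class DataController : Controller
{
    public ActionResult Panel()
    {
        // Look for the warmed data; return nothing until it has arrived.
        var model = ResultRepository.TryGetResults();
        if (model == null)
        {
            return new EmptyResult();
        }
        return PartialView("_DataPanel", model);
    }
}

The "skeleton" page would then poll this action with a small piece of JavaScript (for example, a timer that fetches /Data/Panel, swaps the returned markup into a placeholder div, and stops once content arrives).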
