What I'm trying to do is create an ASP.NET page that runs a random number generator, displays the random number, and writes it to a text file. That part is no problem; the issue is that I want the number generation and file writing to continue while the page is live - i.e. even if no one is actually viewing the page and it's just sitting on the server, the process should continue.
Is this possible?
EDIT: Foolishly overlooked using a web service to generate the number - I've knocked up a basic service that generates a number and writes it to a text file. I can't work out how to schedule/automate it - could I set up a timer with a given interval and then use timer_Tick?
Scheduling is new to me, any advice is appreciated.
You can use a Windows Service to do the work in the background; please see the links below:
http://www.codeproject.com/KB/dotnet/simplewindowsservice.aspx
http://www.codeguru.com/columns/dotnet/article.php/c6919
Have you considered the use of scheduled tasks? That way, rather than the page calling the updates, the scheduled task does the work, and the page viewer just sees the "latest results" at any specific point. Of course, that may not be feasible, but by the sounds of it, you're after a constantly running service/task with an ability to view the latest number, a little like an RSA token which shows new numbers even if you don't need one.
Not sure if this is what you want, but if you are interested in using a scheduler for this task, you can try Quartz.NET. It is a very popular, full-featured and open source scheduling system.
Please describe what you are trying to achieve. There might be a better way than writing random numbers to a file.
I would not use a service (web service or Windows service) for this. There is no benefit in using a web service, since it would just do exactly the same thing as your web application does. A Windows service will continue to run independently of your web application, but you would need to create some kind of IPC and keep track of several timers/files.
The easiest way to do this is to use a System.Threading.Timer and keep it in a session variable. Note that you need to dispose of it when the user session expires.
You should also be aware that one timer will be created per user who uses the page.
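Purely as a sketch of that per-session idea (the file path, interval, and session key are placeholders, and Session_End only fires with in-process session state):

// In the page's code-behind.
protected void Page_Load(object sender, EventArgs e)
{
    // Create one timer per session and keep it alive in session state.
    if (Session["NumberTimer"] == null)
    {
        var timer = new System.Threading.Timer(_ =>
        {
            // A new Random per tick is fine at a 10-second interval.
            int number = new Random().Next();
            System.IO.File.AppendAllText(@"C:\temp\numbers.txt", number + Environment.NewLine);
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(10));

        Session["NumberTimer"] = timer;
    }
}

// In Global.asax: dispose of the timer when the session ends.
void Session_End(object sender, EventArgs e)
{
    var timer = Session["NumberTimer"] as System.Threading.Timer;
    if (timer != null)
    {
        timer.Dispose();
    }
}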
Update
Create a Windows Service application and add a System.Threading.Timer to it. Write to the file in the timer callback.
Then open the text file in your web app (using FileShare.ReadWrite + FileMode.Read).
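A rough sketch of both halves under those assumptions (the shared file path is a placeholder, and both fragments assume using System.IO):

// In the Windows Service: write the latest number in the timer callback.
private static readonly Random _random = new Random();

private void WriteNumber(object state)
{
    using (var stream = new FileStream(@"C:\temp\numbers.txt", FileMode.Append, FileAccess.Write, FileShare.Read))
    using (var writer = new StreamWriter(stream))
    {
        writer.WriteLine(_random.Next());
    }
}

// In the web app: read the file without blocking the writer.
private string ReadNumbers()
{
    using (var stream = new FileStream(@"C:\temp\numbers.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var reader = new StreamReader(stream))
    {
        return reader.ReadToEnd();
    }
}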
I am working on a cloud storage system in ASP.NET MVC 5, in which I made a file manager that handles cut, copy, multi-file download, edit and preview of files. Now I want to edit documents such as Word files in real time (collaborative editing). Is there any API that can help with this?
Thank you in advance.
You should use SignalR for real-time applications. It may be possible with the help of an existing API, but it's better to write your own code so it works the way you want.
http://signalr.net/
DevExpress and Syncfusion may also offer a solution; try those.
This is turning into a huge comment, so I'll just explain my point of view in an answer. I'll remove it if I see an actual answer appear.
I am suggesting you start writing your own code for collaborative editing and the reason is quite simple. You need at least slightly different processing for almost each file type, which suggests there will never be a single API to support collaborative editing for all file types, unless somebody makes it their goal to maintain it and keep up with every one created.
Start simple: text (or hex) editing. Define how changes are made and implemented on other clients, and then work your way up to as many file types (and the methods that go with them) as you need.
You could use the source code of one of the open source collaborative text editors (you'll have to find the download / GitHub links on their websites) to get a general idea of how to do it, but you will still have to put in some work and won't get far without writing your own code.
Collaborative editing requires the client of user 1 (who just started editing) to send either of these:
Data describing the changes made to the file
The full file, in which case user 2's client (or a central "server") must be able to calculate the changes from it and apply them.
One of the problems is to overwrite only the portion of the file the changes were made to (and avoid overwriting the other user's work).
And the biggest problem (the reason you can't have one method/API for everything) is that each file type has its own structure, meaning that different file types will represent changes differently. If you try to write raw data it might work, but you'd still need to calculate and lock away specific portions of the file that contain general information rather than your document's data.
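As a concrete illustration of the "data pointing to changes" option for plain text, something as small as this hypothetical structure (the class name and fields are made up) is enough to start with:

// A minimal "data pointing to changes" representation for plain text.
public class TextChange
{
    public int Offset;          // where the edit starts
    public int RemovedLength;   // how many characters were removed
    public string InsertedText; // what was inserted in their place

    // Apply this change to another client's copy of the document.
    public string Apply(string document)
    {
        return document.Substring(0, Offset)
             + InsertedText
             + document.Substring(Offset + RemovedLength);
    }
}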
Today I used Selenium to parse data from a website. Here is my code:
public ActionResult ParseData()
{
    IWebDriver driver = new FirefoxDriver();
    driver.Navigate().GoToUrl(myURL);
    IList<IWebElement> nameList = driver.FindElements(By.XPath(myXPath));
    return View(nameList);
}
The problem is that whenever it runs, it opens a new window at the myURL location, gets the data, and leaves that window open.
I don't want Selenium to open any new window here - just run in the background and give me the parsed data. How can I achieve that? Please help me. Thanks a lot.
Generally I agree with andrei: why use Selenium if you are not planning to interact with the browser window?
Having said that, the simplest thing to do to prevent Selenium from leaving the window open is to close it before returning from the function:
driver.Quit();
Another option, if the page doesn't have to be loaded in Firefox, is to use the HtmlUnit driver instead (it has no UI).
Well, it seems that on each web request you are creating (though not closing/disposing) a Selenium driver object. As I said in the comment, there may be better solutions for your problem...
As you want to fetch a web page and extract some data from it, feel free to use one of these (a small sketch follows the list):
WebClient
WebRequest
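For instance, a minimal sketch of the same controller action using WebClient (the URL and the regex pattern are placeholders; for anything non-trivial an HTML parser such as HtmlAgilityPack is usually a better choice than a regex):

// requires: using System.Linq; using System.Net; using System.Text.RegularExpressions;
public ActionResult ParseData()
{
    string html;
    using (var client = new WebClient())
    {
        // Download the raw HTML without spinning up a browser.
        html = client.DownloadString("http://example.com/page-to-parse");
    }

    // Extract the pieces you need from the markup.
    var names = Regex.Matches(html, "<span class=\"name\">(.*?)</span>")
                     .Cast<Match>()
                     .Select(m => m.Groups[1].Value)
                     .ToList();

    return View(names);
}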
A web application is not a very hospitable environment for a Selenium driver instance, IMHO. Though, if you still want to play a bit with it, make the Selenium instance static and reuse it among requests. Still, if it is used from concurrent requests (multiple threads running at the same time), a crash is very probable :) You have the option to protect the instance (locks, critical sections, etc.), but then you will have zero scalability.
Short answer: fetch the data in another way; Selenium is really meant for automated exploratory tests, as far as I know...
But...
If you really have to explore that website - the source of your data - with Selenium... then fetch the data with Selenium in advance - speculatively, in another process (a console application that runs in the background) - and store it in some files or in a database. Then, from the web application, read the data and return it to your clients :)
If you do not yet have the data the client asked for, respond with some error - "please try again in 5 minutes" - and tell the console application running in the background to fetch that data (there are various ways of communicating across process boundaries - the web app and the console app in our case - but you can use a simple file / DB for queuing "data requests" - whatever)...
I need to process a complex calculation to generate a report and display as a webpage. It has to be run periodically to recalculate the formula based on new input.
I have a few ideas:
1. Create a web service to process and cache the content and then create a web application to request the content via HTTP periodically.
2. Create a service to output a file periodically and then create a web application to read the file.
3. Create a web application which has a task in there running periodically to generate the output and then create a webpage to display it.
I have read some of the old threads, but I want to know which is the better approach, the pros and cons, and whether there is a newer way of implementing this.
JiTE!
I wrote a program that is similar to yours. The task is to generate a report when a user asks the application to. Usually it is a daily report, but the calculation takes several minutes, because there are too many records and the formula is complex, too.
We created a periodic thread to check whether it is time to calculate. The thread then calculates and stores the conditions and results in SQL Server. When users click the button to view the daily report, the report is already in the database and the application just needs to read it out and show it on the screen.
Let's discuss your solutions: solutions 1 and 2 seem good, but solution 3 does not fit common design patterns, because all of the tasks are put into a single application.
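A rough sketch of that pattern inside the background service class (the table name, connection string, interval, and the RunComplexCalculation helper are all placeholders; assumes using System.Data.SqlClient and System.Threading):

// Background timer that recalculates the report periodically and stores it in SQL Server.
private static readonly Timer _reportTimer =
    new Timer(RecalculateReport, null, TimeSpan.Zero, TimeSpan.FromHours(1));

private static void RecalculateReport(object state)
{
    string report = RunComplexCalculation();

    using (var connection = new SqlConnection("your-connection-string"))
    using (var command = new SqlCommand(
        "INSERT INTO DailyReport (GeneratedAt, Content) VALUES (@at, @content)", connection))
    {
        command.Parameters.AddWithValue("@at", DateTime.UtcNow);
        command.Parameters.AddWithValue("@content", report);
        connection.Open();
        command.ExecuteNonQuery();
    }
}

private static string RunComplexCalculation()
{
    // Placeholder for your own formula; the web app only ever reads the stored result.
    return "report body";
}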
Hope everything goes well!
David Liu
I need to build a windows forms application to measure the time it takes to fully load a web page, what's the best approach to do that?
The purpose of this small app is to monitor some pages in a website, in a predetermined interval, in order to be able to know beforehand if something is going wrong with the webserver or the database server.
Additional info:
I can't use a commercial app; I need to develop this in order to be able to save the results to a database and create a series of reports based on this info.
The WebRequest solution seems to be the approach I'm going to use; however, it would be nice to be able to measure the time it takes to fully load the page (images, CSS, JavaScript, etc.). Any idea how that could be done?
If you just want to record how long it takes to get the basic page source, you can time an HttpWebRequest with a Stopwatch. E.g.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(address);
System.Diagnostics.Stopwatch timer = new Stopwatch();
timer.Start();
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
timer.Stop();
TimeSpan timeTaken = timer.Elapsed;
However, this will not take into account time to download extra content, such as images.
[edit] As an alternative to this, you may be able to use the WebBrowser control and measure the time between calling .Navigate() and the DocumentCompleted event firing. I think this will also include the download and rendering time of extra content. However, I haven't used the WebBrowser control a huge amount, so I don't know if you have to clear out a cache if you are repeatedly requesting the same page.
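Purely as a sketch of that alternative, assuming a WinForms form with a WebBrowser control named webBrowser1 (requires using System.Diagnostics inside the form class):

private readonly Stopwatch _pageTimer = new Stopwatch();

private void MeasurePage(string url)
{
    webBrowser1.DocumentCompleted += OnDocumentCompleted;
    _pageTimer.Reset();
    _pageTimer.Start();
    webBrowser1.Navigate(url);
}

private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // DocumentCompleted can also fire for frames; only stop on the top-level document.
    if (e.Url == webBrowser1.Url)
    {
        _pageTimer.Stop();
        Console.WriteLine("Loaded in {0} ms", _pageTimer.ElapsedMilliseconds);
        webBrowser1.DocumentCompleted -= OnDocumentCompleted;
    }
}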
Depending on how frequently you need to do it, maybe you can try using Selenium (an automated testing tool for web applications); since it internally uses a web browser, you will get a pretty close measure. I think it would not be too difficult to use the Selenium API from a .NET application (you can even use Selenium in unit tests).
Measuring this kind of thing is tricky because web browsers have some particularities when they download all the web page's elements (JS, CSS, images, iframes, etc.) - these particularities are explained in this excellent book (http://www.amazon.com/High-Performance-Web-Sites-Essential/dp/0596529309/).
A homemade solution would probably be too complex to code, or would fail to account for some of those particularities (measuring only the time spent downloading the HTML is not good enough).
One thing you need to take account of is the cache. Make sure you are measuring the time to download from the server and not from the cache. You will need to ensure that you have turned off client-side caching.
Also be mindful of server-side caching. Suppose you download the page at 9:00 AM and it takes 15 seconds, then you download it at 9:05 and it takes 3 seconds, and finally at 10:00 it takes 15 seconds again.
What might be happening is that at 9 the server had to fully render the page since there was nothing in the cache. At 9:05 the page was in the cache, so it did not need to render it again. Finally by 10 the cache had been cleared so the page needed to be rendered by the server again.
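One way to rule out client-side caching with HttpWebRequest is to set an explicit no-cache policy; appending a throwaway query string may also help get past some server-side caches. A sketch, with a placeholder URL:

using System;
using System.Net;
using System.Net.Cache;

// Bypass the local cache for this request; the unique query string defeats
// caches that key on the exact URL.
var request = (HttpWebRequest)WebRequest.Create(
    "http://www.example.com/page?nocache=" + DateTime.Now.Ticks);
request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);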
I highly recommend that you check out the YSlow add-on for Firefox, which will give you a detailed analysis of the times taken to download each of the items on the page.
Something like this would probably work fine:
System.Diagnostics.Stopwatch sw = new Stopwatch();
System.Net.HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("http://www.example.com");
// other request details, credentials, proxy settings, etc...
sw.Start();
System.Net.HttpWebResponse res = (HttpWebResponse)req.GetResponse();
sw.Stop();
TimeSpan timeToLoad = sw.Elapsed;
I once wrote an experimental program which downloads an HTML page and the objects it references (images, iframes, etc.).
It's more complicated than it seems, because there is HTTP content negotiation: some web clients will get the SVG version of an image and some the PNG one, which differ widely in size. The same goes for <object>.
I'm often confronted with a quite similar problem. However, I take a slightly different approach: first of all, why should I care about static content at all? I mean, of course it's important for the user whether it takes 2 minutes or 2 seconds for an image, but that's not my problem AFTER I have fully developed the page. Those things are problems while developing; after deployment it's not the static content but the dynamic stuff that normally slows things down (like you said in your last paragraph). The next thing is, why do you trust that so many things stay constant? If someone on your network fires up a P2P program, the routing goes wrong, or your ISP has some issues, your server stats will certainly go down. And what does your benchmark say for a user living across the globe or just using a different ISP? All I'm saying is that you are benchmarking YOUR point of view, but that doesn't say much about the server's performance, does it?
Why not let the site/server itself determine how long it took to load? Here is a small example written in PHP:
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

function benchmark($finish)
{
    if ($finish == FALSE) { /* benchmark start */
        $GLOBALS["time_start"] = microtime_float();
    } else { /* benchmark end */
        $time = microtime_float() - $GLOBALS["time_start"];
        echo '<div id="performance"><p>'.$time.'</p></div>';
    }
}
It adds at the end of the page the time it took to build (hidden with CSS). Every couple of minutes I grep for this with a regular expression and parse it. If this time goes up I know that something is wrong (including with the static content!), and I get informed via an RSS feed and can act.
With Firebug we know the "normal" performance of a site loading all content (development phase). With the benchmark we get the current server situation (even on our cell phone). OK, what next? We have to make certain that all/most visitors are getting a good connection. I find this part really difficult and am open to suggestions. However, I try to take the log files and ping a couple of IPs to see how long it takes to reach those networks. Additionally, before I decide on a specific ISP, I try to read about its connectivity and user opinions...
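Since the monitoring app in this question is C#, the polling side could look roughly like this (the URL is a placeholder and the regex mirrors the markup emitted by the PHP snippet above):

using System;
using System.Net;
using System.Text.RegularExpressions;

// Download the page and pull out the server-side build time emitted by benchmark().
string html;
using (var client = new WebClient())
{
    html = client.DownloadString("http://www.example.com/");
}

Match match = Regex.Match(html, "<div id=\"performance\"><p>(.*?)</p></div>");
if (match.Success)
{
    double buildSeconds = double.Parse(
        match.Groups[1].Value, System.Globalization.CultureInfo.InvariantCulture);
    Console.WriteLine("Server build time: {0:F3}s", buildSeconds);
}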
You can use software like these:
http://www.cyscape.com/products/bhawk/page-load-time.aspx
http://www.trafficflowseo.com/2008/10/website-load-timer-great-to-monitor.html
Google will be helpful to find the one best suited for your needs.
http://www.httpwatch.com/
Firebug NET tab
If you're using Firefox, install the Firebug extension found at http://getfirebug.com. From there, choose the Net tab, and it'll show you the load/response time for everything on the page.
tl;dr
Use a headless browser to measure the loading times. One example of doing so is Website Loading Time.
Long version
I ran into the same challenges you're running into, so I created a side-project to measure actual loading times. It uses Node and Nightmare to manipulate a headless ("invisible") web browser. Once all of the resources are loaded, it reports the number of milliseconds it took to fully load the page.
One nice feature that would be useful for you is that it loads the webpage repeatedly and can feed the results to a third-party service. I feed the values into NIXStats for reporting; you should be able to adapt the code to feed the values into your own database.
Here's a screenshot of the resulting values for our backend once fed into NIXStats:
Example usage:
website-loading-time rinogo$ node website-loading-time.js https://google.com
1657
967
1179
1005
1084
1076
...
Also, if the main bulk of your code must be in C#, you can still take advantage of this script/library. Since it is a command-line tool, you can call it from your C# code and process the result.
https://github.com/rinogo/website-loading-time
Disclosure: I am the author of this project.
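If you call it from C#, the integration might look roughly like this (the script location is a placeholder; treat it as a sketch rather than a tested integration):

using System;
using System.Diagnostics;

var startInfo = new ProcessStartInfo
{
    FileName = "node",
    Arguments = "website-loading-time.js https://google.com",
    WorkingDirectory = @"C:\tools\website-loading-time", // wherever you cloned the repo
    RedirectStandardOutput = true,
    UseShellExecute = false
};

using (var process = Process.Start(startInfo))
{
    // Each line of output is one measurement in milliseconds (see the example usage above).
    string line;
    while ((line = process.StandardOutput.ReadLine()) != null)
    {
        Console.WriteLine("Page load: {0} ms", line);
        // ...insert into your own database here...
    }
    process.WaitForExit();
}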
I am working on a project right now that involves receiving a message from another application, formatting the contents of that message, and sending it to a printer. The technology of choice is a C# Windows service. The output could be called a report, I suppose, but a reporting engine is not necessary. A simple templating engine, like StringTemplate, or even XSLT outputting HTML, would be fine. The problem I'm having is finding a free way to print this kind of output from a service. Since it seems that it will work, I'm working on a prototype using Microsoft's RDLC, populating a local report and then rendering it as an image to a memory stream, which I will then print. The issues with that are:
Multi-page printing will be a big headache.
Still have to use PrintDocument to print the memory stream, which is unsupported in a Windows Service (though it may work - haven't gotten that far with the prototype yet)
If the data coming across changes, I have to change the dataset and the class that the data is being deserialized into. bad bad bad.
Has anyone had to do anything remotely like this? Any advice? I already posted a question about printing HTML without user input, and after wasting about 3 days on that, I have come to the conclusion that it cannot be done, at least not with any freely available tool.
All help is appreciated.
EDIT: We are on version 2.0 of the .NET framework.
Trust me, you will spend more money trying to search for or develop a solution for this than you would spend buying a third-party component. Do not reinvent the wheel; go for the paid solution.
Printing is a complex problem and I would love to see the day when better framework support is added for this.
Printing from a Windows service is really painful. It seems to work... sometimes... but eventually it crashes or throws an exception from time to time, without any clear reason. It's really hopeless. Officially, it's not even supported, without any explanation or any proposal for an alternative solution.
Recently, I was confronted with the problem, and after several unsuccessful trials and experiments, I finally came up with two viable solutions:
Write your own printing DLL using the Win32 API (in C/C++, for instance), then use it from your service with P/Invoke (works fine).
Write your own printing COM+ component, then use it from your service. I chose this solution with success recently (though it was a third-party COM+ component, not one I wrote). It works absolutely fine too.
I've done it. It's a pain in the a*s. The problem is that printing requires the GDI engine to be in place, which normally means you have to have the desktop, which only loads when you're logged in. If you're attempting to do this from a service on a server, then you normally aren't logged in.
So first, you can't run as the normal service user, but instead as a real user that has interactive login rights. Then you have to tweak the service registry entries (I forget how at the moment; I'd have to find the code, which I can do tonight if you're really interested). Finally, you have to pray.
Your biggest long term headache will be with print drivers. If you are running as a service without a logged in user, some print drivers like to pop up dialogs from time to time. What happens when your printer is out of toner? Or out of paper? The driver may pop up a dialog that will never be seen, and hold up the printer queue because nobody is logged in!
To answer your first question, this can be fairly straightforward depending on the data. We have a variety of service-based applications that do exactly what you are asking. Typically, we parse the incoming file and wrap our own PostScript or PCL around it. If your layout is fairly simple, then there are some very basic PCL codes you can wrap it with to provide the font/print layout you want (I'd be more than happy to give you some guidance here offline).
Once you have a print-ready file, you can send it to a shared UNC printer, directly to a locally installed printer, or even to the IP of the device (RAW or LPR type data).
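For the "send to the IP of the device" case, a rough sketch of pushing a print-ready PCL/PostScript file to the standard RAW port 9100 (the printer address and file path are placeholders):

using System.IO;
using System.Net.Sockets;

// Send a print-ready file straight to the printer's RAW (JetDirect) port.
byte[] data = File.ReadAllBytes(@"C:\temp\report.pcl");

using (var client = new TcpClient("192.168.1.50", 9100))
using (NetworkStream stream = client.GetStream())
{
    stream.Write(data, 0, data.Length);
}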
If, however, you are going down the PDF path, the simplest method is to send the PDF output to a printer that supports direct PDF printing (many do now). In this case you just send the PDF to the device and away it prints.
The other option is to launch Ghostscript, which should be free for your needs (check the licensing, as they have a few different versions, some GNU, some GPL, etc.) and either use its built-in print function or simply convert to PostScript and send it to the device. I've used Ghostscript many times in service apps but am not a huge fan, as you will basically be shelling out and executing a command-line app to do the conversion. That being said, it's a stable app that does tend to fail gracefully.
Printing from a service is a bad idea. Network printers are connected "per-user". You can mark the service to be run as a particular user, but I'd consider that a bad security practice. You might be able to connect to a local printer, but I'd still hesitate before going this route.
The best option is to have the service store the data and have a user-launched application do the printing by asking the service for the data. Or a common location that the data is stored, like a database.
If you need to have the data printed at regular intervals, set up a task through the Task Scheduler. Launching a process from a service would require knowing the user name and password, which again is bad security practice.
As for the printing itself, using a third-party tool to generate the report will be the easiest option.
This may not be what you're looking for, but if I needed to do this quick&dirty, I would:
Create a separate WPF application (so I could use the built-in document handling)
Give the service the ability to interact with the desktop (note that you don't actually have to show anything on the desktop, or be logged in for this to work)
Have the service run the application, and give it the data to print.
You could probably also jigger this to print from a web browser that you run from the service (though I'd recommend building your own IE shell, rather than using a full browser).
For a more detailed (also free) solution, your best bet is probably to manually format the document yourself (using GDI+ to do the layout for you). This is tedious, error prone, time consuming, and wastes a lot of paper during development, but also gives you the most control over what's going to the printer.
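If you do go the manual GDI+ route, the skeleton looks something like this (System.Drawing.Printing; the printer name and text are placeholders, and as noted above this is officially unsupported from inside a service):

using System.Drawing;
using System.Drawing.Printing;

// Manually lay out and print a simple text document with GDI+.
var document = new PrintDocument();
document.PrinterSettings.PrinterName = @"\\printserver\accounting"; // placeholder

document.PrintPage += (sender, e) =>
{
    using (var font = new Font("Arial", 10))
    {
        // Draw inside the printable area; add your own pagination for long content.
        e.Graphics.DrawString("Hello from the service", font, Brushes.Black, e.MarginBounds.Location);
    }
    e.HasMorePages = false;
};

document.Print();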
If you can output to PostScript, some printers will print anything that gets FTPed to a certain directory on them.
We used this to get past the print credits that our university imposed on us, but if your service outputs a PS file, you can just FTP the PS file to the printer.
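A quick sketch of that upload with FtpWebRequest (the printer address, credentials, and target path are guesses; check what your device actually expects):

using System.IO;
using System.Net;

// Upload a PostScript file to the printer's FTP interface.
byte[] fileBytes = File.ReadAllBytes(@"C:\temp\report.ps");

var request = (FtpWebRequest)WebRequest.Create("ftp://192.168.1.50/report.ps");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("anonymous", "");

using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(fileBytes, 0, fileBytes.Length);
}

using (var response = (FtpWebResponse)request.GetResponse())
{
    // response.StatusDescription describes the upload result.
}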
We are using DevExpress' XtraReports to print from a service without any problems. Their report model is similar to that of Windows Forms, so you could dynamically insert text elements and then issue the print command.
I think we are going to go the third-party route. I like the XSL -> HTML -> PDF -> printer flow... Winnovative's HTML to PDF looks good for the first part, but I'm running into a block finding a good PDF printing solution... any suggestions? Ideally the license would be per developer, not per deployed runtime.
In answer to your question about PDF printing, I have not found an elegant solution. I was "shell"ing out to Adobe Reader, which was unreliable and required a user to be logged in at all times. To fix this specific problem, I requested that the files we process (invoices) be formatted as multi-page TIFF files instead, which can be split apart and printed using native .NET printing functions. Adobe's position seems to be "get the user to view the file in Adobe Reader and they can click print". Useless.
I am still keen to find a good way of producing quality reports which can be output from the web server...
Printing using System.Drawing.Printing is not supported by MS, as per Yann Trevin's response. However, you might be able to use the newer, WPF-based System.Printing namespace (I think).