I am developing a website using VS 2008 (C#). My current mission is to develop a module that should perform the following tasks:
Every 15 minutes, a process needs to communicate with the database to find out whether a new user has been added to the "User" table through registration.
If it finds a new entry, it should add that entry to an XML file (say NewUsers18Jan2009.xml).
To achieve this, which of the following is most appropriate?
Threads
Windows Service
Other
Are there any samples available to demonstrate this?
Separate this task from your website. Everything a website does goes through the web server. Put the logic into a class library (so you can reuse it later if you need to add on-demand checking), and use this class from a console application. Then use the Windows "Scheduled Tasks" feature to run the console app every 15 minutes. This is a far better solution than running a scheduled task via IIS.
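A minimal sketch of that console app, assuming a Users table with Id, Name and CreatedDate columns and a stamp file to remember the last run; all names and the connection string are illustrative, not part of the original question:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Hypothetical connection string and file names.
        string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
        string stampFile = "lastExport.txt";
        string xmlFile = "NewUsers" + DateTime.Today.ToString("ddMMMyyyy") + ".xml";

        // Read the time of the previous export (DateTime.MinValue on first run).
        DateTime lastRun = File.Exists(stampFile)
            ? DateTime.Parse(File.ReadAllText(stampFile))
            : DateTime.MinValue;

        // Load today's file if it already exists, otherwise start a new document.
        XElement root = File.Exists(xmlFile) ? XElement.Load(xmlFile) : new XElement("NewUsers");

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Name, CreatedDate FROM Users WHERE CreatedDate > @lastRun", conn))
        {
            cmd.Parameters.AddWithValue("@lastRun", lastRun);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    root.Add(new XElement("User",
                        new XAttribute("id", reader.GetInt32(0)),
                        new XElement("Name", reader.GetString(1)),
                        new XElement("Created", reader.GetDateTime(2))));
                }
            }
        }

        root.Save(xmlFile);
        File.WriteAllText(stampFile, DateTime.Now.ToString("o"));
    }
}
```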
It doesn't sound like there's any UI part to your task. If that's the case, use either a Windows service or a scheduled application. I would go with a service, because it's easier to control remotely. I fail to see a connection to a web site here...
Why in the world would an admin need to get pinged every 15 minutes? Poor admin!
I would just put a time stamp on each entry in your users table and create a quick report to allow the admin to query the data whenever they need it.
I think your approach is wrong. You should do one of two things:
Add a user to the XML file as the last step in creating the user.
Generate the XML file on demand when it is requested. This will give you real-time information with very little overhead.
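For illustration, a minimal sketch of the second option as an ASP.NET handler (.ashx); the Users schema and connection string are assumptions. Because the XML is built when requested, it is always current:

```csharp
using System;
using System.Data.SqlClient;
using System.Web;
using System.Xml.Linq;

public class NewUsersHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        XElement root = new XElement("NewUsers");
        using (SqlConnection conn = new SqlConnection("your connection string"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Name FROM Users WHERE CreatedDate >= @today", conn))
        {
            cmd.Parameters.AddWithValue("@today", DateTime.Today);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
                while (reader.Read())
                    root.Add(new XElement("User",
                        new XAttribute("id", reader.GetInt32(0)),
                        reader.GetString(1)));
        }
        context.Response.ContentType = "text/xml";
        context.Response.Write(root.ToString());
    }

    public bool IsReusable { get { return true; } }
}
```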
A Windows service would give you a good solution; or, if you're using SQL Server, you could fire this kind of processing from a SQL Agent job.
If you want the code to be 'part' of your web application, you could always fire the logic from a heartbeat page that runs your task(s) whenever the URL is called; you can then poll the URL from a service or agent job.
The simplest approach is to create a System.Threading.Timer in your app in Application_Start and keep it in a static field so it does not get garbage collected. That way your web app can poll the database without needing an external process. Of course, if your app goes down, so does the timer.
For the polling logic, just keep the last userId (if you have an auto-increment policy) and check for newly added users by filtering WHERE id > lastId.
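A minimal sketch of that, assuming a Global.asax code-behind and a hypothetical Users table; the static field is what keeps the timer from being collected:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    // Static reference keeps the timer from being garbage collected.
    private static Timer pollTimer;
    private static int lastId;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Fire immediately, then every 15 minutes.
        pollTimer = new Timer(CheckForNewUsers, null,
            TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    private static void CheckForNewUsers(object state)
    {
        using (SqlConnection conn = new SqlConnection("your connection string"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Name FROM Users WHERE Id > @lastId ORDER BY Id", conn))
        {
            cmd.Parameters.AddWithValue("@lastId", lastId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    lastId = reader.GetInt32(0);
                    // Append reader.GetString(1) to the day's XML file here.
                }
            }
        }
    }
}
```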
Create a new table (or text file) that stores the last time you did a "new user export", or alternatively just look at the modified/creation date of the last export file. When your script hits the DB, run a SQL command to get all users created after the last export time, and spit out new XML for each user.
Set this script to run as a Windows scheduled task/cron job, or maybe even fire it from a database trigger.
Since you seem to be adding all users for a given day to a single XML file (as per your post), why do you need to do this every 15 minutes? Wouldn't it be sufficient to do this once after midnight?
When you do that, preferably in a Windows service (if it has to run every 15 minutes) or in a command-line app scheduled to run once at e.g. 0:15 hours, you just need to check each user's sign-up date; if any users signed up during the past day, add them to your list and export that list to the XML file once you've finished processing the table.
Marc
While I also voted for creating a Windows service to perform this function for you, the simplest way I could think of would be to put a trigger on your "Users" table that creates the XML file for you when a user is inserted.
I second Chuck's answer - you could pull this data from the database using an XML query and send it directly; no need to muck around creating files on the system.
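A hedged sketch of that idea using SQL Server's FOR XML and SqlCommand.ExecuteXmlReader; the table and column names are assumptions:

```csharp
using System.Data.SqlClient;
using System.Xml;

static class UserXml
{
    // Returns the new-user data as an XML string built entirely by SQL Server,
    // with no file handling on the web server.
    public static string GetNewUsersXml(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Name, CreatedDate FROM Users FOR XML AUTO, ROOT('NewUsers')", conn))
        {
            conn.Open();
            using (XmlReader reader = cmd.ExecuteXmlReader())
            {
                XmlDocument doc = new XmlDocument();
                doc.Load(reader);
                return doc.OuterXml;
            }
        }
    }
}
```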
Related
My client wanted me to write a simple desktop app such that:
The user selects a date. The program then traverses all files on the Windows PC and creates an Excel report of all files created before the selected date.
It seems like a really simple application. However, my client told me he has petabytes of data, which made me think the app could run for hours, even days, so I need a really smart solution. For instance, to guard against a crash or unexpected error, I am planning to divide the report into parts and create a new Excel file for every 50,000 records. In addition, the application should continue from where it left off; in other words, if the application is closed, it should not start scanning all over again.
What is the most logical way for the program to work efficiently and continue from where it left off? What comes to mind is to put a tick on folders so that folders already examined are not looked at again. What is the best way to implement this logic in C#? Is it efficient to keep a separate list of visited folders and first check whether a folder is already in that list? Which data structures would you use for this problem, and how would you implement it?
Thanks.
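A minimal sketch of the checkpoint idea the question describes: persist each fully scanned folder to a plain text file and load it into a HashSet (O(1) lookups) on startup. The checkpoint file name and batching details are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ResumableScanner
{
    private readonly HashSet<string> done;   // folders whose files are already reported
    private readonly string checkpointFile;

    public ResumableScanner(string checkpointFile)
    {
        this.checkpointFile = checkpointFile;
        done = File.Exists(checkpointFile)
            ? new HashSet<string>(File.ReadAllLines(checkpointFile),
                                  StringComparer.OrdinalIgnoreCase)
            : new HashSet<string>(StringComparer.OrdinalIgnoreCase);
    }

    public void Scan(string root, DateTime cutoff, Action<string> report)
    {
        // Explicit stack instead of recursion: very large trees can be deep.
        Stack<string> stack = new Stack<string>();
        stack.Push(root);
        while (stack.Count > 0)
        {
            string dir = stack.Pop();
            try
            {
                // Always descend, so children of a finished folder still get visited.
                foreach (string sub in Directory.GetDirectories(dir))
                    stack.Push(sub);

                if (done.Contains(dir))
                    continue;   // per-file work for this folder was completed earlier

                foreach (string file in Directory.GetFiles(dir))
                    if (File.GetCreationTime(file) < cutoff)
                        report(file);   // caller batches these into Excel files

                // Checkpoint only after the folder's files are fully reported.
                done.Add(dir);
                File.AppendAllText(checkpointFile, dir + Environment.NewLine);
            }
            catch (UnauthorizedAccessException) { /* skip unreadable folders */ }
        }
    }
}
```

On restart the scanner still walks the directory tree, but skips the per-file work for checkpointed folders, which is where the time goes.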
Background
We have a legacy MS Access application that uses MS SQL tables as the backend, and prints a report (product label) on demand based on a user action. This works as expected, and we'd prefer not to change this part.
I have a new standalone (C# Windows Forms) application running on a different PC that performs some mostly unrelated actions, but requires the user to print the above mentioned label towards the end of the process.
Question
Instead of having the user switch to the legacy Access app on a different PC (which may be at a different location), how can I trigger the Access VBA that runs the report when data is updated from the standalone C# application?
One approach
I was thinking I could create a table RequestedLabels with columns LabelId and PrinterId. The standalone app would insert a row to request a label, and the MS Access app would query the table at short intervals to see if there are any new labels to be printed for this instance (see the sketch after the list below).
However, there are several things I don't like about this approach:
The label should be printed immediately after the request so my query interval would have to be quite small (1-2s max). Even at 2s the delay would be a nuisance. At higher rates I suspect this could interfere with the MS Access application's responsiveness.
This would be running on 10-20 machines at a time, so the number of queries to the SQL server would be around 20/sec or more.
The user is only printing labels every few minutes, so most of the processing/network capacity required to do this is wasted.
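For concreteness, the request side of this scheme from the standalone C# app might look like the sketch below; RequestedLabels and its columns are the hypothetical table described above:

```csharp
using System.Data.SqlClient;

static class LabelRequests
{
    // Standalone-app side: queue a label for the Access app to pick up.
    public static void RequestLabel(string connStr, int labelId, int printerId)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO RequestedLabels (LabelId, PrinterId) VALUES (@label, @printer)",
            conn))
        {
            cmd.Parameters.AddWithValue("@label", labelId);
            cmd.Parameters.AddWithValue("@printer", printerId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```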
A better approach
I was hoping for some way to have changes to the underlying data trigger a Form Event in MS Access, or trigger some VBA function that could print the labels on demand.
Caveats
The computers are across two subnets separated by a strict firewall and run mixed OSs both on and off the domain. RPC between PCs has proven to be tricky in the past due to the non-uniform setup.
I would prefer a solution that relies on updating the data on the SQL server, or some other MS SQL feature that both the C# and MS Access applications can take advantage of.
UPDATE:
Since there have been two votes to close, I have refined the wording to focus more clearly on triggering the VBA when the underlying data changes.
You can build a macro in MS Access to send the report to the printer. The report should already have the data you want to print.
Let's say your macro's name is mcrPrintReport
Then you can execute MS Access from the command line to open the Access DB file and run the macro like this:
PathToAccessExe PathToAccessFile /x MacroName
"C:\Program Files\Microsoft Office\Office16\MSACCESS.EXE" C:\AccessDB.accdb /x mcrPrintReport
(Quote the executable path, since it contains spaces; otherwise the command won't run.)
You can then schedule this to run on an interval, maybe every 5 minutes, using the Task Scheduler on a dedicated PC.
See this related question also: Running Macro from .bat with another Access db open
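If you'd rather launch that same command line from code than from Task Scheduler, a minimal C# sketch using the example paths above (which would differ per machine and Office version):

```csharp
using System.Diagnostics;

static class LabelPrinter
{
    // Launch Access to run the macro; paths are the examples above.
    public static void PrintViaAccess()
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = @"C:\Program Files\Microsoft Office\Office16\MSACCESS.EXE";
        psi.Arguments = @"C:\AccessDB.accdb /x mcrPrintReport";

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();   // avoid overlapping Access instances
        }
    }
}
```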
Let's say we have a log file (preferably a plain .txt file) available on the web server. How can we put it online and update it constantly, say every minute? Also, how can we create a script that fetches the log file every minute, analyzes its contents, and performs some tasks?
It should be enough to publish it in a web server folder.
By this I mean it is possible to write this file and publish it either to an FTP location or on the web server itself, if you have access to it.
I'll be happy to provide more details if you tell me which way you prefer to go :)
The best approach here is to write a bash script, cron job, or similar with the following logic: copy the file to the public web folder every minute.
For the client analyzing the log file, one way is to also create a cron job with this logic: fetch the new log file, then perform the analysis (for example, if it finds the word "stopped", take some action such as restarting the process/application). Repeat this every minute.
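Since the rest of this thread is C#, here is a minimal sketch of that analysis job as a console program scheduled to run every minute (cron or Task Scheduler); the URL, keyword, and restart script are assumptions:

```csharp
using System;
using System.Diagnostics;
using System.Net;

class LogWatcher
{
    static void Main()
    {
        // Hypothetical published log URL; swap in your own.
        string logUrl = "http://example.com/logs/app.log";

        using (WebClient client = new WebClient())
        {
            string contents = client.DownloadString(logUrl);

            // Example rule from the answer above: if the log contains
            // "stopped", restart the application via a hypothetical script.
            if (contents.Contains("stopped"))
            {
                Process.Start("restart-app.cmd");
                Console.WriteLine("Restart triggered at {0}", DateTime.Now);
            }
        }
    }
}
```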
I am developing an application in C# ASP.NET to allow users to customise a number of graphics templates with their company logo.
Typically,
User uploads their company logo.
User selects a number of templates (through an HTML form) of different file types for customising.
My web application will accept the user's request, go through each individual item requested and determine its file type. It will then call the appropriate module depending on the file type (e.g. to customise a PDF) and finally insert the logo in the template. These steps are repeated for each file requested.
Once all requested templates for a user are generated, they are grouped together in a zip file and a link for downloading is sent via email.
I would like some advice on how best to accept the user's requests and process the files in ASP.NET.
One way of doing this is to keep the user waiting until all files are generated, i.e. until the form-handling script has completed its execution. I reckon this is likely to trigger script timeout errors for requests that take a long time to process (a large number of templates requested, or a sizable number of concurrent users), and as such is not a very efficient solution.
Another option would be to register the user's request, redirect him/her to another page immediately after (explaining that an email will be sent shortly with a download link), and then proceed to process the files on the server using some background job or similar without the risk of script timeouts. An email is sent when all files for that user are generated.
I am familiar with web application development but this is one of my first forays into .NET development so your help is greatly appreciated.
How can I implement the second option in C#?
It sounds like what you want is a web site which accepts the user input and then passes control to an offline process (such as a Windows Service) to perform the more intensive tasks asynchronously from the site, allowing the user to continue using the site (or go do something else) while processing takes place. Something like this:
User "initiates" a "batch" on the website. Whether it's setting up some values, uploading some kind of batch of things to process, etc. is entirely up to what you're doing. The main point is that the user starts it off.
The necessary information for the process to do its magic is persisted to the database and the user is told that the process has been queued for processing. The user can now go about doing other things.
The back-end application (or multiple applications) polls the database periodically (every minute, every 5 minutes, etc.) or, in the case of an uploaded "batch" file could use something like a FileSystemWatcher, to look for new things to do and does them. Use multi-threading or whatever you need to make that happen, but the main point is that it's an "offline" process that the website isn't waiting on.
When the process completes, some flag is set in the database (on the record being processed, or maybe a "message" record in the user's "inbox" or whatever) indicating that the process is done.
The website UI has some indicator which, any time it's loaded, checks for the aforementioned flag(s) to indicate to the user that a queued process has been completed and is ready to be viewed.
So, essentially, you have a single database accessed by two applications: your web application and your Windows service (or a console app run by a task scheduler, etc.).
Is that basically what you're looking for? I feel like I could be more specific, do you have any specific concerns about the setup?
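To make the back-end side concrete, a minimal sketch of the poller from step 3 and the completion flag from step 4; the Batches table and Status column are assumptions standing in for whatever you persist in step 2:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class BatchWorker
{
    static void Main()
    {
        while (true)
        {
            ProcessPendingBatches("your connection string");
            Thread.Sleep(TimeSpan.FromMinutes(1));  // poll interval
        }
    }

    static void ProcessPendingBatches(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // Claim one queued batch at a time.
            using (SqlCommand cmd = new SqlCommand(
                "SELECT TOP 1 Id FROM Batches WHERE Status = 'Queued'", conn))
            {
                object id = cmd.ExecuteScalar();
                if (id == null) return;             // nothing to do this cycle

                // ... generate the files, zip them, send the email ...

                using (SqlCommand done = new SqlCommand(
                    "UPDATE Batches SET Status = 'Done' WHERE Id = @id", conn))
                {
                    done.Parameters.AddWithValue("@id", id);
                    done.ExecuteNonQuery();         // step 4: flag completion
                }
            }
        }
    }
}
```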
I have a windows mobile 5.0 application (smartphone) that contains a flat data file (CSV), which contains a header row (two doubles), and a list of entries (two doubles, DateTime, and a string).
Occasionally, I need to "sync" the mobile application with a desktop application. The desktop application should read the CSV from the mobile device, and replace it with a new CSV file, based on the contents of the old one.
This seems pretty easy via RAPI (I'm guessing), but I need to ensure that the mobile application is not running. Is there a way to do this?
Mutex? Remote Process Viewer like stuff? File locking?
Thanks for any help you have
Mike
Just use a simple file locking mechanism for the file being read/updated.
Either rename the file before use, or create a second 'lock' file whose existence you can check for.
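A minimal sketch of the lock-file variant; the path is illustrative, and on the desktop the existence check would actually go through RAPI's file functions, with File.Exists standing in for that call here:

```csharp
using System.IO;

class CsvLock
{
    // Mobile side: create the lock file at startup and keep it open with
    // FileShare.None; delete it on clean shutdown. (A crash can leave a
    // stale lock file behind, the known weakness of this scheme.)
    public static FileStream Acquire(string path)
    {
        return new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None);
    }

    public static void Release(FileStream lockStream, string path)
    {
        lockStream.Close();
        File.Delete(path);
    }

    // Desktop side: if the lock file exists on the device, assume the
    // mobile app is still running and postpone the sync.
    public static bool AppAppearsRunning(string path)
    {
        return File.Exists(path);
    }
}
```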
For whatever reason, the built-in RAPI functions don't have anything for checking running processes like the ToolHelp APIs. In C you could create a set of custom functions in a device-side library that call the ToolHelp APIs and are in turn called through CeRapiInvoke (which is a generic catch-all entry point for custom RAPI functions). Unfortunately there's no simple mechanism to do this in managed code.
Just thought of an easy way.
While the program is running, update a registry key with the current datetime every 3 seconds.
When I want to sync, check that registry value on the device, then check it again 5 seconds later. If the value has changed, the program is still running.
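A minimal sketch of the device-side heartbeat; the registry key is an assumption, and the desktop would read the value twice (e.g. via RAPI's registry functions) and compare:

```csharp
using System;
using System.Threading;
using Microsoft.Win32;

class Heartbeat
{
    private static Timer timer;   // static so it isn't collected

    // Call once at app startup on the device (.NET CF 2.0+).
    public static void Start()
    {
        timer = new Timer(delegate(object state)
        {
            // Hypothetical key; the desktop polls this same value over RAPI.
            RegistryKey key = Registry.LocalMachine.CreateSubKey(@"Software\MyApp");
            key.SetValue("LastAlive", DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
            key.Close();
        }, null, 0, 3000);   // every 3 seconds
    }
}
```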