I'm developing an automatic software update service in which clients get updated versions of the software over the internet. We are considering downloads from a remote server over HTTP, because HTTP traffic typically passes through firewalls without trouble.
The update server must be able to authenticate the request, check the software license, and provide the client with the correct update files (there may be several versions of the software that need updating).
An ASP.NET web application might be able to do the job; however, I'm trying to avoid a web application because it would need to be installed in IIS. I am considering a WCF service library with basicHttpBinding and a Streamed transferMode, hosted in a Windows service, but I've read articles that say it's not good practice to transfer files with WCF. I wonder why WCF is not a technology for file transfers. What are the restrictions and alternatives?
Do you suggest a WCF windows service for this job?
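For reference, the kind of streamed contract I'm considering looks roughly like this (all names, the endpoint URL, and the path layout are placeholders, not a settled design):

```csharp
using System.IO;
using System.ServiceModel;

// Sketch only - IUpdateService and everything below are placeholder names.
[ServiceContract]
public interface IUpdateService
{
    // Request parameters are buffered; only the response Stream is streamed,
    // so a large update file never has to sit fully in server memory.
    [OperationContract]
    Stream GetUpdateFile(string licenseKey, string productVersion);
}

public class UpdateService : IUpdateService
{
    public Stream GetUpdateFile(string licenseKey, string productVersion)
    {
        // Authentication and license checks would go here.
        return File.OpenRead(Path.Combine(@"C:\Updates", productVersion, "update.zip"));
    }
}

// Hosted from the Windows service's OnStart:
public static class UpdateHost
{
    public static ServiceHost Open()
    {
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.StreamedResponse,
            MaxReceivedMessageSize = int.MaxValue
        };
        var host = new ServiceHost(typeof(UpdateService));
        host.AddServiceEndpoint(typeof(IUpdateService), binding, "http://localhost:8080/updates");
        host.Open();
        return host;
    }
}
```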
This doesn't exactly answer your question regarding WCF and transferring files, but I built an auto-updater not too long ago that had a small client front-end to a WinForms app. It would get a list of local files, generate an MD5 hash of each one, and send them up to a web service for comparison against the list of files on the web server. The web service method returned the list of files that had changed.
The client would then loop through that list, call another web service that returned a byte array for each file, and write each one out to the local hard drive.
When this was done, it would do a Process.Start on the exe of the updated app.
It's been running in production with a couple hundred active users for over a year with no issues.
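For anyone curious, a stripped-down sketch of the hashing step described above might look like this (the class and folder names are illustrative, not the production code):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Illustrative sketch only - UpdateClient is a made-up name.
public static class UpdateClient
{
    // Build a path -> MD5 map for everything under the application folder;
    // this is what gets sent to the web service for comparison.
    public static Dictionary<string, string> HashLocalFiles(string appDir)
    {
        var hashes = new Dictionary<string, string>();
        using (var md5 = MD5.Create())
        {
            foreach (var path in Directory.GetFiles(appDir, "*", SearchOption.AllDirectories))
            {
                using (var stream = File.OpenRead(path))
                {
                    hashes[path] = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
                }
            }
        }
        return hashes;
    }
}
```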
My company has several desktop applications that need to be launched from an ASP.NET Core web application. The applications also need to be updated when a newer version is available. The web application will run offline (accessible only on a specific LAN), and all the applications and clients will be on the same network. So basically, I am trying to create a launcher/updater web application.
The problem here is that web browsers cannot directly manipulate (install, launch, or update) applications on the client machine. We have already looked at Microsoft's ClickOnce solution for updating applications, but there are some reasons why we do not want to use it.
My question is: is there any way to read, write, and edit client-side data from a web application, with or without an extra client-side application?
I had a similar problem. There is no way to update the client without using an additional program (at least in my experience). You won't be able to delete and replace the libraries and the executable while the files are in use (that is, while the software is running).
I wrote a small application to handle restart, update, and monitoring of the main application.
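In outline, it did something like the following (simplified; the names and paths are illustrative):

```csharp
using System.Diagnostics;
using System.IO;

// Simplified outline; process and path names are illustrative.
public static class Updater
{
    public static void ApplyUpdate(string appExePath, string stagingDir)
    {
        // 1. Wait for the main application to exit so its files are unlocked.
        foreach (var p in Process.GetProcessesByName(
                     Path.GetFileNameWithoutExtension(appExePath)))
        {
            p.WaitForExit();
        }

        // 2. Replace the old binaries with the freshly downloaded ones.
        var targetDir = Path.GetDirectoryName(appExePath);
        foreach (var file in Directory.GetFiles(stagingDir))
        {
            File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
        }

        // 3. Restart the main application.
        Process.Start(appExePath);
    }
}
```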
If you need more info about the approach I used, let me know.
I'm looking for a way to establish simple communication between a C# web application and the client operating system.
Since I'm working in Silverlight, I have everything I need to create files in any folder on the C: drive. The problem is that we're going to migrate from Silverlight to HTML5 / C#.
So I need a way to create files from any browser on any OS: Windows, Mac, Linux...
I thought about using Microsoft ActiveX, but that's not cross-platform.
I'm simply looking for a technology/plugin/software or anything that would allow me to do that; the less client interaction required, the better.
I think your need conflicts with basic common sense about security. If there were a simple way to create any file on any computer that loads your web app, just imagine how quickly all sorts of malware would spread.
But going back to your question - I think it will not be simple (by the way, was it really that simple in Silverlight?). What I can imagine is some kind of service running on the client PC (the user would have to install it, or it could be pushed by corporate policy if your web app targets corporate environments). The service would listen on some TCP port, and your web app could send requests to that port asking it to create a particular file with particular content. All the security concerns would then be implemented in that service so that it cannot be abused by hostile web apps.
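A bare-bones sketch of such a local service, here using HttpListener as one way to listen on a port (the port, URL format, and target folder are all arbitrary assumptions):

```csharp
using System.IO;
using System.Net;
using System.Text;

// Bare-bones local agent; the port and request format are arbitrary choices.
public static class LocalFileAgent
{
    public static void Run()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:9000/create/");
        listener.Start();

        while (true)
        {
            var ctx = listener.GetContext();

            // File name comes from the query string, content from the body.
            // A real agent must validate both, restrict the target folder,
            // and verify the request's origin before touching the disk.
            var fileName = ctx.Request.QueryString["name"];
            using (var reader = new StreamReader(ctx.Request.InputStream))
            {
                File.WriteAllText(Path.Combine(@"C:\AgentDrop", fileName), reader.ReadToEnd());
            }

            var ok = Encoding.UTF8.GetBytes("created");
            ctx.Response.OutputStream.Write(ok, 0, ok.Length);
            ctx.Response.Close();
        }
    }
}
```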
I need some architectural advice on how to build a background service application.
Background:
I have two websites, and I need to transfer some data from website A to website B. The service would run in the background (as a Windows service) and would connect directly (every 5 minutes) to website A's database (MSSQL), grab some data, and insert it through website B's API (built with ASP.NET Web API). Both websites are hosted on the same virtual machine (Windows Server 2008 R2 Datacenter), but this might change (website B could be moved to another virtual server or to cloud hosting such as Windows Azure or Amazon AWS).
Question:
What do you suggest (best practices), and what guidelines can you give me? I want this to be as scalable and fast as possible, since the service will receive multiple requests.
Thank you,
Jani
If it is important to know what data was transferred, then:
Add logs - log4net for instance
Issue tickets if the process stops, and close the ticket when it restarts; this way you will know when a process fails. Depending on the amount of data, you could use Redis/Riak for this.
Put monitoring on both websites A and B, and you might also consider restarting the service automatically if it goes down.
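A skeletal version of the polling loop itself might look like this (the connection string, query, and API endpoint are placeholders):

```csharp
using System;
using System.Data.SqlClient;
using System.Net.Http;
using System.Text;
using System.Threading;

// Skeleton only; connection string, SQL, and API URL are placeholders.
public static class TransferWorker
{
    private static readonly HttpClient Client = new HttpClient();

    // Called from the Windows service's worker thread.
    public static void Run()
    {
        while (true)
        {
            try
            {
                TransferOnce();
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex); // log4net logging would go here
            }
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }

    private static void TransferOnce()
    {
        using (var conn = new SqlConnection("Server=.;Database=SiteA;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT Payload FROM dbo.Outbox", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var json = reader.GetString(0);
                    var response = Client.PostAsync("http://site-b/api/import",
                        new StringContent(json, Encoding.UTF8, "application/json")).Result;
                    response.EnsureSuccessStatusCode();
                }
            }
        }
    }
}
```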
Here's the scenario.
I have to access a web service on the local LAN to obtain a list of files which I then must retrieve from the machine running the web service. The question has arisen whether to use a mapped drive or just retrieve the files via HTTP from the web service (or web server if the service is self-hosting).
All machines are running Windows XP or later.
I am leaning towards the web server approach, because it has the fewest unknowns as far as having the necessary permissions to access the files.
So basically the question is which is the better approach - web server or network share?
I would go the web service route because it reduces the number of variables in the equation. Based on your current setup, you already need a web service in order to get the list of files to download. At this point you know access to the web service isn't a problem, so putting the files there removes a lot of unknowns.
If you put the files onto another machine, you run the risk of hitting at least the following problems that do not exist with the web service (since you already know you have access):
Permission issues
Firewall issues
I would think it depends on various factors you haven't mentioned: will lots of clients be trying to access these files at a given time? Will the app be distributed across multiple servers in the future? Might you need to implement a caching system in the future?
If the answer is no to all of these, then you should probably pick what's easiest.
I would lean towards plain old HTTP. Doing it via the web service would probably involve marshalling the file as a byte array, for example, which makes the transfer larger. A file share means needing to worry about permissions.
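For comparison, the plain-HTTP path is about as simple as it gets (the URL and destination are made-up examples):

```csharp
using System.Net;

public static class FileFetcher
{
    public static void Fetch()
    {
        using (var client = new WebClient())
        {
            // Streams the raw bytes straight to disk - no marshalling step.
            client.DownloadFile("http://fileserver/files/report.pdf", @"C:\Temp\report.pdf");
        }
    }
}
```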
I'm working on a graduation project for one of my university courses, and I need to find some place to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any host allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler is a simulation of a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned by the server (so there is no internal access to server code). Because of that, any site can be crawled.
Be aware of web crawler ethics guidelines, though. There are pages you shouldn't index and links you shouldn't follow. Web developers publish files and instructions for crawlers - most notably robots.txt - saying what you may index or follow.
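As an illustration, here is a crude robots.txt check (a real crawler needs a proper parser that honors User-agent sections; this only scans Disallow lines):

```csharp
using System;
using System.Net;

// Crude illustration only - ignores User-agent sections entirely.
public static class RobotsCheck
{
    public static bool MayFetch(Uri url)
    {
        string robots;
        using (var client = new WebClient())
        {
            try
            {
                robots = client.DownloadString(new Uri(url, "/robots.txt"));
            }
            catch (WebException)
            {
                return true; // no robots.txt; assume crawling is allowed
            }
        }

        foreach (var line in robots.Split('\n'))
        {
            var trimmed = line.Trim();
            if (trimmed.StartsWith("Disallow:", StringComparison.OrdinalIgnoreCase))
            {
                var path = trimmed.Substring("Disallow:".Length).Trim();
                if (path.Length > 0 && url.AbsolutePath.StartsWith(path))
                    return false;
            }
        }
        return true;
    }
}
```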
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web hosts don't allow this due to the potential security implications, since several other people will be running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as not to cause problems for your neighbors; your primary concern will be not using too much CPU or bandwidth. This isn't just politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as starving the other users of your hardware by consuming all its resources. You can usually burst to higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
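Self-throttling can be as simple as forcing a pause between requests (the one-second delay is an arbitrary example):

```csharp
using System.Net;
using System.Threading;

// Simplest possible politeness throttle; the delay value is arbitrary.
public static class PoliteFetcher
{
    public static string Fetch(string url)
    {
        Thread.Sleep(1000); // wait one second between requests
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }
}
```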
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot could be arranged in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some hosts will offer an MS SQL Server database you can use alongside the VPS account (on another machine). Installing SQL Server on the VPS itself can be a problem license-wise.
Make sure you check the terms of use as well as the (virtual) system specs before you open an account. Also check whether there is some kind of minimum contract period; sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (Virtual Private Server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a website to be a crawler, it is not practical, because the web page would have to be accessed for your crawler to work. You will have to read the host's ToS (Terms of Service) to see what the terms of usage are. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network," if you try to use too much bandwidth, even though they have given you plenty to use.
A VPS runs around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just get a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.