Where to store log information from load-balanced servers - C#

We have two ASP.NET web servers behind a load balancer that are accessible externally. Previously, all of our applications logged to a database. Now we have one more app that doesn't use the database; it is used for message transfer. In the TEST environment it logs to files in a local folder.
If we deploy it as-is to PROD we will have two separate log files, which is not a good idea. Connecting to the database just for logging doesn't seem reasonable either.
A possible solution could be to store the log file in a shared folder on another server, but I'm not sure that is the best approach.
Please advise.

In the past, I have just stored the web logs on each machine in the farm. We then used an off-the-shelf piece of software (I don't recall which one it was) to periodically download the files from each machine in the farm.
This solution worked well for us - YMMV.
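The same idea can be sketched in a few lines of C#. This is only a sketch; the folder path and naming convention below are assumptions, but embedding the machine name in the file name makes the per-server files easy to merge once they have been collected:

```csharp
using System;
using System.IO;

// Minimal sketch (not the off-the-shelf tool mentioned above): write each log
// entry to a per-server file so the files pulled from every machine in the
// farm can be merged later. The folder path is a placeholder.
static class FarmLogger
{
    private static readonly object Sync = new object();

    public static void Write(string message)
    {
        // Embed the machine name and date in the file name, e.g. "WEB01-2024-01-31.log".
        string directory = @"D:\Logs\MessageTransfer";
        string fileName = string.Format("{0}-{1:yyyy-MM-dd}.log", Environment.MachineName, DateTime.UtcNow);
        string path = Path.Combine(directory, fileName);

        Directory.CreateDirectory(directory);

        lock (Sync)
        {
            File.AppendAllText(path, DateTime.UtcNow.ToString("O") + " " + message + Environment.NewLine);
        }
    }
}
```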

A quirk of our "load balancer" is that most of the time only one server is active (we use a very cheap solution that doesn't really balance under low load; it only provides availability). So for now I will store log information in a log folder on each server's own disk.

Related

Solution for a no-server multi-user application/database?

I am at a dead end and I could really use some help.
I intern for a huge company. My project involves creating an application to automate/simplify the work of a retiring employee.
The problem lies in the strict company policies. I am a developer stuck at the business end of the company, so IT gives me nothing:
I don't have a server (neither web nor database)
I can't create a server, because no PC will be left running and we can't keep them logged in due to single sign-on with company cards
I can't install anything on the PCs in the network
I can access a shared file server that is backed up every day
The libraries involved have to be free
A central database has to be accessed by a dozen users (at once)
The database will receive new data every day and will grow accordingly
The users will both read from and write to the database
Preferably a C#.NET or WPF solution
The application needs to open files stored on the shared drive. (Only once; the important information will be extracted and stored in the database, and the file will then be removed.)
My initial idea was to use Silverlight (which runs standalone) in combination with SQLite. I ran a test, and Silverlight files stored on the shared drive work (Silverlight is installed on every PC in the network), so this is my preferred front end. However (correct me if I'm wrong), when I tried SQLite-net I needed to add sqlite3.dll to my Windows/System32 folder, but on the network PCs I don't have access to the Windows folder, so this cannot be done.
I also read that SQLite databases, or files in general, can become corrupted when accessed by multiple users at once, so I thought some form of locking might be needed.
What solutions are there to my problem?
I worked for a company for several years writing software for police departments to manage traffic collision reports. Police stations usually have little-to-no IT support, so we faced many similar limitations. The company actually did pretty well using Microsoft Access databases, with the setup looking something like this:
The shared drive had an Access database file (.mdb or .accdb) which was the actual "database".
Client computers (at the officers' desks) had Access applications with local "utility" tables for temporary storage, UI defined in Forms, and logic defined in Modules. Each of the client machines was connected to the repository on the shared drive by using linked tables. Local client configuration was stored either in a config table in the Access application or in a text file on the machine.
It's not the cleanest solution, but it would allow you to create and maintain a unified solution using files that don't need to be installed and don't require any funny permissions, as long as everyone has read/write access to the shared drive.
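If you drove this from C# (e.g. a WPF front end) rather than Access forms, the shared .accdb could be opened through the ACE OLE DB provider. A minimal sketch, assuming a hypothetical UNC path and table name, and that the provider is present on the client PCs (it ships with Office or the free Access runtime):

```csharp
using System;
using System.Data.OleDb;

// Sketch: read from an Access database sitting on the shared drive.
// The UNC path and table name are placeholders.
class SharedAccessDbExample
{
    static void Main()
    {
        const string connectionString =
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\fileserver\TeamShare\reports.accdb;";

        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand("SELECT COUNT(*) FROM Reports", connection))
        {
            connection.Open();
            Console.WriteLine("Rows: {0}", command.ExecuteScalar());
        }
    }
}
```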
Create a website. Today you can host ASP web apps in a stand-alone .exe. By doing so you can make sure that the shared files are only accessed by one process. You can also limit access to SQLite.
It also means that you do not have to distribute anything. Simply start your application and tell your users which URL and port they have to browse to.
As for permissions, only the account running your web host requires access to the shared files etc.
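As a rough illustration (not a full ASP host), even the built-in HttpListener can serve a site from a stand-alone .exe. The port and response below are placeholders, and reserving the URL prefix may require admin rights or a one-time `netsh http add urlacl` on the machine running it:

```csharp
using System;
using System.Net;
using System.Text;

// Sketch of a self-hosted web server: one process handles every request,
// so it alone touches the shared data file (e.g. the SQLite database).
class SelfHostedServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");   // users browse to http://<machine-name>:8080/
        listener.Start();
        Console.WriteLine("Listening on port 8080...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();   // blocks until a request arrives
            byte[] body = Encoding.UTF8.GetBytes("<html><body>Hello from the shared-data host</body></html>");
            context.Response.ContentType = "text/html";
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }
}
```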
You should take a look at ScimoreDB. It's an embedded database that supports multi-process read/write access. If needed it can also act as a client/server database; even as a distributed database with multiple nodes.
It's free to use and deploy. It has support for C++ and .NET. Its only disadvantage is that it works only on Windows.

Organization of a multi-user application

I have a multi-user WPF C# application which interacts with a SQL Server Express database. Currently I am facing the following issue:
How should I organize the application and the database so that several users on different stations can work with it? Maybe I should put the database file on a server and make my application on all other stations refer to that server when interacting with the database? If so, how can I secure the database file?
Is there any scenario in which I could install my application on one machine, designate it as the server, and, while installing on the other machines, point them to that server?
Any advice on general strategies in such cases would be appreciated.
Thanks in advance!
If all the users are concurrent then you're going to need to place the SQL instance on a server that they all have access to.
You're also going to need to look at quite a few things, such as how you're going to manage your transactions and how your persistence layer is going to function in general.
Each of those topics is probably going to breed many more SO questions :)
This could provide some inspiration on how you're going to structure the persistence layer:
http://msdn.microsoft.com/en-us/magazine/dd569757.aspx
For a multi-user application, you should definitely put the database onto a server. And because the application is for multiple users, the first screen shown when a user opens the application is the login screen (just like a web application).
Security isn't really a problem: once the database is on the server's filesystem, only users on that computer can access the file directly. And of course, the computer which hosts the database should have only administrators as users. Another point is that Windows may have IIS running; don't put the database files under the public root of IIS, so that outsiders can't download them over HTTP.
Let's say the users are working in the same office. You can assign any computer in the LAN as the server and install the database on it. Any computer in the LAN has a LAN IP (e.g. 192.168.1.100), and your application can connect to this IP for database operations.
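For example, once the Express instance on that machine has TCP/IP and remote connections enabled, the clients only need a connection string pointing at it. A minimal sketch; the IP, instance name, database, table and credentials below are all placeholders:

```csharp
using System;
using System.Data.SqlClient;

// Sketch: a WPF client connecting to a SQL Server Express instance
// hosted on another machine in the LAN.
class RemoteSqlExample
{
    static void Main()
    {
        const string connectionString =
            @"Server=192.168.1.100\SQLEXPRESS;Database=MyAppDb;User Id=app_user;Password=app_password;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            Console.WriteLine("Customers: {0}", command.ExecuteScalar());
        }
    }
}
```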

How to calculate bandwidth consumption for a hosting account using C# in an ASP.NET application?

Hi all,
I am working on SaaS hosting software. A large number of sites are hosted on the server. I am trying to calculate bandwidth consumption (bytes transferred in and out) using C#, as described here using the MS Log Parser.
In that case, if the log files are deleted by the user, or even by an administrator, the bandwidth calculation will not be possible.
Q1: What is the standard way to measure the Bandwidth for various Hosting accounts (of websites) on a single server?
Q2: If Log parser mechanism (as described above) is used, then how to take care of the security issue? Is there some system directory or event viewer logs or something which cannot be deleted except by the System account and contains bandwidth data?
Please point me in the right direction.
Thanks
The logs you're talking about can be deleted by an administrator, but so can the entire site. You should probably talk to them about your need to access and use these files. You could also change IIS to log to a database rather than a file, so you can keep the data in your own repository. In addition to getting information directly from your logs, administrators may have other tools to monitor and report bandwidth (firewalls, routers, etc.). You should probably be working together with them to develop your solution.
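If you do keep parsing the raw W3C logs (with Log Parser or by hand), the calculation itself is just a sum over two columns. A rough C# sketch, assuming the sc-bytes and cs-bytes fields are enabled in the site's logging configuration and using a placeholder log path:

```csharp
using System;
using System.IO;

// Sketch: sum bytes sent (sc-bytes) and received (cs-bytes) from one IIS W3C log file.
class BandwidthFromW3CLog
{
    static void Main()
    {
        string logFile = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex240101.log";
        int scIndex = -1, csIndex = -1;          // column positions of sc-bytes / cs-bytes
        long bytesSent = 0, bytesReceived = 0;

        foreach (string line in File.ReadLines(logFile))
        {
            if (line.StartsWith("#Fields:"))
            {
                // The header names the columns, e.g. "#Fields: date time cs-method ... sc-bytes cs-bytes".
                string[] fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                scIndex = Array.IndexOf(fields, "sc-bytes");
                csIndex = Array.IndexOf(fields, "cs-bytes");
                continue;
            }
            if (line.StartsWith("#")) continue;  // skip other comment lines

            string[] values = line.Split(' ');
            if (scIndex >= 0 && scIndex < values.Length) bytesSent += long.Parse(values[scIndex]);
            if (csIndex >= 0 && csIndex < values.Length) bytesReceived += long.Parse(values[csIndex]);
        }

        Console.WriteLine("Sent (out): {0} bytes, Received (in): {1} bytes", bytesSent, bytesReceived);
    }
}
```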

Obtaining Files From A Local Web Server or Network Share - Which Is Better

Here's the scenario.
I have to access a web service on the local LAN to obtain a list of files which I then must retrieve from the machine running the web service. The question has arisen whether to use a mapped drive or just retrieve the files via HTTP from the web service (or web server if the service is self-hosting).
All machines are running Windows XP or later.
I am leaning towards the web server approach - because it has the fewest unknowns as far as having the necessary permissions to access the files.
So basically the question is which is the better approach - web server or network share?
I would go the web service route because it reduces the number of variables in the equation. Based on your current setup you already need a web service in order to get a list of files to download. At this point you know access to the web service isn't a problem, so putting the files there removes a lot of unknowns.
If you put files onto another machine, you run the risk of hitting at least the following problems that do not exist with the web service (since you already know you have access):
Permission Issues
Firewall issues
I would think it depends on various factors you haven't mentioned: will lots of clients be trying to access these files at a given time? Will the app be distributed across multiple servers in the future? Might you need to implement a caching system in the future?
If the answer is no to all of these, then you should probably pick what's easiest.
I would lean towards plain old HTTP. Doing it via the web service would probably involve marshalling the file as an array, for example, which makes it larger. A file share means needing to worry about permissions.
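Either way, the HTTP side is only a few lines. A minimal sketch of downloading the listed files over plain HTTP; the service URL, listing format (one relative name per line), and target folder are assumptions:

```csharp
using System;
using System.IO;
using System.Net;

// Sketch: ask the web service for the list of files, then pull each one over HTTP.
class HttpFileFetcher
{
    static void Main()
    {
        const string baseUrl = "http://fileserver01:8080/";
        const string targetFolder = @"C:\Downloads";
        Directory.CreateDirectory(targetFolder);

        using (var client = new WebClient())
        {
            string listing = client.DownloadString(baseUrl + "files/list");

            foreach (string line in listing.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string name = line.Trim();
                client.DownloadFile(baseUrl + "files/" + name, Path.Combine(targetFolder, name));
                Console.WriteLine("Fetched {0}", name);
            }
        }
    }
}
```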

What sort of web host lets you run crawlers on it?

I'm working on a graduation project for one of my university courses, and I need to find some place to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any site allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler simulates a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned from the server, so there is no internal access to server code. Because of that, any site can be crawled.
Be aware of web crawler ethics guidelines. There are pages you shouldn't index or whose links you shouldn't follow, and web developers publish files and instructions for crawlers (such as robots.txt) saying what you may index or follow.
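To make that concrete, a crawler is really just an HTTP client with some manners. An illustrative sketch only (the URLs are placeholders; a real crawler would actually parse robots.txt, obey it, and handle the case where it doesn't exist):

```csharp
using System;
using System.Net;
using System.Threading;

// Sketch of a "polite" crawler: fetch pages the same way a browser would,
// identify itself, and wait between requests.
class PoliteCrawler
{
    static void Main()
    {
        string[] pages = { "http://example.com/", "http://example.com/about" };

        using (var client = new WebClient())
        {
            // Check the site's crawler instructions first.
            Console.WriteLine(Download(client, "http://example.com/robots.txt"));

            foreach (string url in pages)
            {
                string html = Download(client, url);          // the same HTML a browser would receive
                Console.WriteLine("{0}: {1} characters", url, html.Length);
                Thread.Sleep(TimeSpan.FromSeconds(2));        // throttle so the target site isn't hammered
            }
        }
    }

    static string Download(WebClient client, string url)
    {
        // WebClient clears custom headers after each request, so identify the bot every time.
        client.Headers[HttpRequestHeader.UserAgent] = "UniversityProjectBot/1.0";
        return client.DownloadString(url);
    }
}
```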
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web servers don't do this due to the potential security implications, since there will be several other people running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as to not cause problems for your neighbors; your primary issue will be not using too much CPU or bandwidth. This isn't just for politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as denying the other users of the hardware you're on resources by consuming them all yourself. You can usually burst higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some will offer an MS SQL Server database you can use next to the VPS account (on another machine). Installing SQL Server on the VPS itself can be a problem license-wise.
Make sure you check the terms of usage before you open an account, as well as the (virtual) system specs though. Also check if there is some kind of minimum contract period. Sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (Virtual Private Server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a website to be a crawler, it is not practical, because the web page would have to be accessed for your crawler to work. You will have to read the ToS (Terms of Service) for the host to see what the terms are for usage. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network", if you try to use too much bandwidth, even though they have given you plenty to use.
VPSs run around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just ask for a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.