I'm trying to ensure that the server I'm currently running a piece of code on is a web front end server. I thought it might be as simple as this:
SPServer.Local.Role.ToString().Equals("WebFrontEnd")
However, if you are running your WFE alongside app servers, etc. on the same box, this returns "Application" and fails to correctly identify it as a web front end.
My idea is to determine whether the Microsoft SharePoint Foundation Web Application service is started and running on the server. This can be checked manually by going to Central Admin > System Settings > Manage Services on Server.
I need to do this programmatically in C#. I'm fairly sure these services and their statuses can also be obtained via PowerShell, which would be a viable solution too, but I'm not sure how to do it either way.
EDIT -- I'm aware of a way to loop through "services" using the following code:
SPServiceCollection services = SPFarm.Local.Services;
foreach (SPService service in services) {
    // each SPService here is a farm-wide service, not a per-server instance
}
However, this includes items that look suspiciously similar to the list under "Services on Server", but they are all listed with a status of "Online" and don't seem to include the service I'm after.
I'm not on a machine to check, but I've a feeling you'll have more luck with SPServer.Local.ServiceInstances - that sounds like it should give the services on the server in particular rather than the farm in general.
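If that pans out, a check along these lines might work (untested; it assumes the SPWebServiceInstance type is what backs the "Microsoft SharePoint Foundation Web Application" entry under Services on Server):

using Microsoft.SharePoint.Administration;

static class WfeCheck
{
    public static bool IsWebFrontEnd()
    {
        foreach (SPServiceInstance instance in SPServer.Local.ServiceInstances)
        {
            // SPWebServiceInstance backs the "Microsoft SharePoint Foundation Web Application"
            // entry under Services on Server.
            if (instance is SPWebServiceInstance && instance.Status == SPObjectStatus.Online)
            {
                return true;
            }
        }
        return false;
    }
}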
Unfortunately not even that is reliable, as the Foundation Web Application service can be running on servers that are not actually front end servers (i.e. the load balancer never directs traffic to them). Ultimately it is the list of servers that the load balancing mechanism has that determines which servers are true front ends, and because there are various types of load balancers and no single interface to all of them, I don't believe there is any one guaranteed method of determining the number of true web front ends in a farm.
I have a Windows Service written in C#. I have recently added CassiniDev to it to allow remote web administration and monitoring of the service. The integration went really well except for my inability to interact with the data layer of my Windows Service from the hosted ASP.NET pages.
I have tried putting everything of interest into a common assembly but the debugger shows there are two loaded assemblies with the same name but from different paths. Cassini runs ASP.NET off some temp folder so the assembly I am using is really "a different instance" in the address space of the same process.
I am not sure what is going on here. Probably some "application domain" separation stuff that I do not understand at this time.
So with the Windows Service and the web server running in the same process, how can I make them interact? Say I have some status in the service part that I want to report in the ASP.NET part. Any ideas how I could make this happen? Shared memory or TCP comes to mind, but that sounds like overkill for purely intra-process communication.
If security isn't an immediate concern, i.e. the data isn't highly sensitive and you're in a controlled environment, then you could have success using named pipes. A managed API for pipes (System.IO.Pipes) has been part of the framework since .NET 3.5, so you don't need to drop down to native calls.
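As a minimal sketch (assuming .NET 3.5+; "MyServiceStatus" is just an example pipe name), the service side could answer each connection with its current status and the ASP.NET side could connect and read it on demand:

using System;
using System.IO;
using System.IO.Pipes;

static class StatusPipe
{
    // Service side: run this on a background thread; it answers each connection
    // with the current status string.
    public static void ServeStatus(Func<string> getStatus)
    {
        while (true)
        {
            using (var server = new NamedPipeServerStream("MyServiceStatus"))
            {
                server.WaitForConnection();
                using (var writer = new StreamWriter(server))
                {
                    writer.WriteLine(getStatus());
                }
            }
        }
    }

    // ASP.NET side: connect to the pipe and read the status.
    public static string ReadStatus()
    {
        using (var client = new NamedPipeClientStream(".", "MyServiceStatus", PipeDirection.In))
        {
            client.Connect(1000); // milliseconds
            using (var reader = new StreamReader(client))
            {
                return reader.ReadLine();
            }
        }
    }
}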
I am developing a project for college and I need some suggestions on the development. It's a website which shows information from other websites, such as links, images, etc.
I have prepared the model given below for the website.
A Home.aspx page which shows data from tables (SQL Server).
I have coded a crawler (in C#) which can crawl (fetch) the required website data.
I want some way to run the crawler in the background at a set time interval so it can insert updates into the tables. That way the database stays current and Home.aspx always shows updated info. (It's like a smaller version of the Google News website.)
I want to host the website in a shared hosting environment (i.e. a 3rd-party hosting provider, which may use the IIS platform).
I posted a similar question to different .NET forums and communities and they suggested a lot of different things, such as:
Create a web service (is it really necessary?)
Use WCF
Create a console application and run it with the Windows Task Scheduler (is that okay with an ASP.NET website and in shared hosting?)
Run the crawler on a local machine and update the database accordingly (no, I want everything online), etc.
Please suggest a clear way forward so that I can complete the task, and please elaborate on the technologies and methods that suit my project.
Waiting...
Thanks...
Your shared-hosting constraint really restricts your technology choices.
In theory, the best way to host your crawler would be a Windows service, since you can take advantage of the Windows service infrastructure: a service is always up, can be started automatically at boot, writes errors to the event log, can be restarted automatically after a failure...
Then your Home.aspx would just be a regular website in IIS.
If you really do stay on a shared host (where you cannot set up a service), I would make the crawler a module that is run at application startup.
The problem is, an IIS application pool doesn't live forever if your website is not in use, and when it is recycled it may stop the crawler. This is configurable, but I don't know how much of it you can change on a shared host.
With IIS 7.5, think about starting your module with the application warm-up feature.
Finally, if you need to run the crawler at set intervals (like every day at midnight) and your shared host does not let you set up task scheduling, think about Quartz.NET, which allows you to perform task scheduling inside your application (without any involvement from the OS).
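For example, you could schedule the crawl from Application_Start in Global.asax. This is only a rough sketch, assuming Quartz.NET 2.x; CrawlJob, the identity name and the 30-minute interval are placeholders:

using System;
using Quartz;
using Quartz.Impl;

public class CrawlJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Run the crawler here and write the results into the SQL Server tables.
    }
}

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CrawlJob>().WithIdentity("crawlJob").Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(30).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}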
Integrate your crawler code into an .aspx page.
Set up a task scheduler on your host to call that page every X minutes.
When the page is called, check that the request came from localhost.
If localhost called it, run the crawl routine.
If localhost didn't call it, throw a 404 error.
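A rough sketch of that page-level check (assuming Web Forms; the page name is a placeholder, and you would call your existing crawl routine where indicated):

using System;
using System.Web;
using System.Web.UI;

public partial class RunCrawler : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Only requests made from the server itself (the scheduled task) may trigger a crawl.
        if (!Request.IsLocal)
        {
            throw new HttpException(404, "Not Found");
        }

        // Call your existing crawl routine here, e.g. Crawler.Run();
    }
}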
As the title suggests, I'd like to create a single Web Site in IIS that creates multiple instances of an ASP.NET application based on the requested host.
All instances would run the same codebase, but each instance would have its own Application object, Sessions collection, etc.
For example:
host1.domain.tld/default.aspx -> this.Application["foo"] = "host1"
host2.domain.tld/default.aspx -> this.Application["foo"] = "host2"
host3.domain.tld/default.aspx -> this.Application["foo"] = "host3"
I know I can configure IIS to listen on a specific IP address, set the DNS for host(1|2|3).domain.tld to point at that IP address, and then use Global.asax to check the requested host and set up host-specific settings. But the application would still be running as a single instance on the server.
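That single-instance approach would look something like this (just a sketch, which also shows why it falls short: one Application object is shared by every host):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string host = Request.Url.Host;   // e.g. "host1.domain.tld"

        // One HttpApplicationState is shared by all hosts, so this value is
        // overwritten on every request rather than being per-host.
        Application["foo"] = host;
    }
}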
I'd rather have a separate instance of the application running for each host so that their runtimes are fully separated. It would also be nice if I could have them in separate application pools too, but that's not so important.
Of course, I could add the sites individually in IIS on the servers, but there are some 1600 instances that will need to be configured, and this will be very time-consuming to do and difficult to manage.
Ideally I'd be able to set up a single instance on a number of servers, then control the load balancing via DNS configuration or filtering on the firewalls, both of which can easily be controlled programmatically.
FYI, the ASP.NET version in use is 4.0 and IIS is running on Windows Server 2008.
Any suggestions would be great.
Many thanks
The simplest and most robust way to do this would be to set up individual IIS sites. I know you don't want to do this because it would be very time-consuming and definitely difficult to manage.
However, you've already created a website so now perhaps it's time to create a management tool for instances of that website. As there are 1600 instances that you want to configure, there's a fairly good chance that you already have details of those 1600 instances stored somewhere, such as a database or a spreadsheet.
So:
Get the data about the 1600 instances into a usable format; a SQL Server database (Express, or paid for!) would probably be ideal.
Investigate the IIS7 provisioning APIs (Microsoft.Web.Administration); there's a sketch after this list.
Put together a tool that allows you to create all 1600 instances from the data you have about them, automatically / in batches, via the IIS7 API.
Maintain the tool and expand it, ready for the inevitable changes that will come when you need to add or remove instances.
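A rough sketch of the provisioning step with Microsoft.Web.Administration (the host names, physical path, and pool settings are only examples; in practice you would read the 1600 hosts from your database):

using Microsoft.Web.Administration;

class Provisioner
{
    static void Main()
    {
        // Example data; replace with the host names from your database or spreadsheet.
        string[] hosts = { "host1.domain.tld", "host2.domain.tld" };

        using (ServerManager manager = new ServerManager())
        {
            foreach (string host in hosts)
            {
                // One application pool per instance keeps the runtimes fully separated.
                ApplicationPool pool = manager.ApplicationPools.Add(host);
                pool.ManagedRuntimeVersion = "v4.0";

                // Every site points at the same codebase but binds to its own host header.
                Site site = manager.Sites.Add(host, "http", "*:80:" + host, @"C:\inetpub\sharedcodebase");
                site.Applications["/"].ApplicationPoolName = pool.Name;
            }
            manager.CommitChanges();
        }
    }
}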
Don't forget that putting your own tool together for a task such as this gives you a lot of flexibility, although there may be tools out there for this purpose that are worthy of investigation. For that (i.e. a non-programmatic solution), I'd suggest asking at http://www.serverfault.com
Here's the scenario.
I have to access a web service on the local LAN to obtain a list of files which I then must retrieve from the machine running the web service. The question has arisen whether to use a mapped drive or just retrieve the files via HTTP from the web service (or web server if the service is self-hosting).
All machines are running Windows XP or later.
I am leaning towards the web server approach, because it has the fewest unknowns around having the necessary permissions to access the files.
So basically the question is which is the better approach - web server or network share?
I would go the web service route because it reduces the number of variables in the equation. With your current setup you already need a web service in order to get the list of files to download. At that point you know access to the web service isn't a problem, so putting the files there removes a lot of unknowns.
If you retrieve the files from a network share instead, you run the risk of hitting at least the following problems that do not exist with the web service approach (since you already know you have access):
Permission issues
Firewall issues
I would think it depends on various factors you haven't mentioned: will lots of clients be trying to access these files at a given time? Will the app be distributed across multiple servers in the future? Might you need to implement a caching system in the future?
If the answer is no to all of these, then you should probably pick what's easiest.
I would lean towards plain old HTTP. Returning files via the web service would probably involve marshalling each file as a byte array, for example, which makes the payload larger. A file share means needing to worry about permissions.
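Retrieving over plain HTTP can be as simple as this (a sketch; the URL layout and local folder are assumptions, not your actual service):

using System.Net;

static class FileFetcher
{
    // Fetch each file that the web service listed, straight over HTTP.
    public static void DownloadAll(string[] fileNames)
    {
        using (var client = new WebClient())
        {
            foreach (string name in fileNames)
            {
                // Hypothetical URL layout and destination folder.
                client.DownloadFile("http://fileserver/files/" + name, @"C:\incoming\" + name);
            }
        }
    }
}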
I'm working on a graduation project for one of my university courses, and I need to find somewhere to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any host allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler is a simulation of a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned from the server, so it has no internal access to server code. Given that, any site can be crawled.
Be aware of web crawler ethics guidelines. There are pages you shouldn't index or whose links you shouldn't follow, and web developers publish files and instructions for crawlers (such as robots.txt) saying what you may index or follow.
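As a tiny illustration of honouring those instructions (deliberately naive, not a real robots.txt parser; it only checks for a blanket disallow):

using System;
using System.Net;

static class CrawlerEtiquette
{
    // Deliberately simplified: only honours a blanket "Disallow: /" in robots.txt.
    public static bool SiteAllowsCrawling(Uri site)
    {
        using (var client = new WebClient())
        {
            try
            {
                string robots = client.DownloadString(new Uri(site, "/robots.txt"));
                foreach (string line in robots.Split('\n'))
                {
                    if (line.Trim().Equals("Disallow: /", StringComparison.OrdinalIgnoreCase))
                        return false;
                }
                return true;
            }
            catch (WebException)
            {
                return true; // no robots.txt, so nothing is explicitly disallowed
            }
        }
    }
}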
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web hosts don't allow this due to the potential security implications, since several other customers will be running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as to not cause problems for your neighbors; your primary issue will be not using too much CPU or bandwidth. This isn't just for politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as denying the other users of the hardware you're on resources by consuming them all yourself. You can usually burst higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some providers will offer an MS SQL Server database you can use alongside the VPS account (on another machine); installing SQL Server on the VPS itself can be a problem license-wise.
Make sure you check the terms of usage before you open an account, as well as the (virtual) system specs though. Also check if there is some kind of minimum contract period. Sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (virtual private server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a web page to act as the crawler, it is not practical because the page would have to be requested for your crawler to work. You will have to read the ToS (terms of service) for the host to see what the terms of usage are. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network", if you try to use too much bandwidth, even though they have given you plenty to use.
VPSes run around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just get a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.