We have a web farm of four SharePoint 2010 servers (web1, web2, web3, web4) whose AAM is http://xxx.sharepoint.com. I am building a web part where some data is cached via HttpContext.Current.Cache. As we know, the ASP.NET cache is limited to server memory and is not shared between the other servers in the web farm, so my plan is to go to each server and put a semicolon-separated list of the farm's servers in web.config; whenever anything is added to the cache I will send the data to the other servers to keep their caches in sync, and do the same when cache items are removed.
My problem now is: how can I call each server in the SharePoint farm individually by its own name?
For example: http://web1/
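To make the approach concrete, here is a minimal sketch of what I have in mind; the "CacheSyncServers" appSettings key and the /CacheSync.ashx endpoint are hypothetical names I would have to implement myself:

using System;
using System.Configuration;
using System.Net;
using System.Web;

public static class CacheSync
{
    public static void AddAndNotify(string key, object value)
    {
        // Add to the local ASP.NET cache first.
        HttpContext.Current.Cache.Insert(key, value);

        // Then notify every other server listed in web.config, e.g.
        // <add key="CacheSyncServers" value="web1;web2;web3;web4" />
        string servers = ConfigurationManager.AppSettings["CacheSyncServers"] ?? "";
        foreach (string server in servers.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            using (var client = new WebClient())
            {
                // Hypothetical endpoint on each peer that re-inserts (or removes) the item.
                client.UploadString("http://" + server + "/CacheSync.ashx?op=add&key=" + Uri.EscapeDataString(key), string.Empty);
            }
        }
    }
}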
Not a direct answer, but you can use timer jobs in SharePoint to run something on all servers. Note that the SharePoint Timer Service must be running on all servers, and it doesn't happen in real time (it doesn't take long either).
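A rough sketch of such a timer job (the class and job names are made up); with SPJobLockType.None the job runs on every server where the owning service is provisioned:

using System;
using Microsoft.SharePoint.Administration;

public class CacheSyncJob : SPJobDefinition
{
    public CacheSyncJob() : base() { }

    public CacheSyncJob(string jobName, SPService service)
        : base(jobName, service, null, SPJobLockType.None) { }

    public override void Execute(Guid targetInstanceId)
    {
        // Runs on each server: refresh or invalidate the local cache here.
    }
}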
I ended up using memcached, which is working well, as expected. :)
Microsoft Velocity is a perfect fit for your scenario: it gives you a distributed, in-memory cache.
Link: Build Better Data-Driven Apps With Distributed Caching
I'm trying to ensure that the server I'm currently running a piece of code on is a web front end server. I thought it might be as simple as this:
SPServer.Local.Role.ToString().Equals("WebFrontEnd")
However, if you are running your WFE in addition to app servers, etc., on the same box, this will return "Application" and fail to correctly identify it as a web front end.
My idea is to determine whether the Microsoft SharePoint Foundation Web Application service is started and running on the server. Manually, this can be checked by going to Central Admin > System Settings > Manage Services on Server.
I need to do this programmatically in C#. I'm fairly sure these services and their statuses can be obtained via PowerShell, which would be a viable solution, but I'm not sure how to do it either way.
EDIT -- I'm aware of a way to loop through "services" using the following code:
SPServiceCollection services = SPFarm.Local.Services;
foreach (SPService service in services) {
    // inspect service.TypeName and service.Status here
}
However, this includes some items that look suspiciously similar to the list under "Services on Server", but they are all listed with a status of "Online" and don't seem to include this service.
I'm not on a machine to check, but I've a feeling you'll have more luck with SPServer.Local.ServiceInstances - that sounds like it should give the services on the server in particular rather than the farm in general.
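Something along these lines might work (untested; checking for the SPWebServiceInstance type is my assumption about how to spot the Web Application service on the local box):

using System.Linq;
using Microsoft.SharePoint.Administration;

public static class WfeCheck
{
    public static bool LocalServerHostsWebApplicationService()
    {
        // True if the Foundation Web Application service instance is online on this server.
        return SPServer.Local.ServiceInstances
            .OfType<SPWebServiceInstance>()
            .Any(instance => instance.Status == SPObjectStatus.Online);
    }
}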
Unfortunately not even that is reliable, as the Foundation Web Application service can be running on servers that are not actually front end servers (i.e. the load balancer never directs traffic to them). Ultimately it is the list of servers held by the load balancing mechanism that determines which servers are true front ends, and because there are various types of load balancers and no single interface to all of them, I don't believe there is any one guaranteed method of determining the number of true web front ends in a farm.
As the title suggests, I'd like to create a single Web Site in IIS that creates multiple instances of an ASP.NET application based on the requested Host.
So all instances run the same codebase, but each instance has its own Application object, Session collection, etc.
For example:
host1.domain.tld/default.aspx -> this.Application["foo"] = "host1"
host2.domain.tld/default.aspx -> this.Application["foo"] = "host2"
host3.domain.tld/default.aspx -> this.Application["foo"] = "host3"
I know I can configure IIS to listen on a specific IP address and set the DNS for host(1|2|3).domain.tld to point at that IP address, then use Global.asax to check the requested host and set up host-specific settings. But the application would still be running as a single instance on the server.
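For reference, the single-instance Global.asax approach I'm describing looks roughly like this (the "HostKey" item name is just a placeholder):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Key per-host behaviour off the requested host header.
        string host = Context.Request.Url.Host;   // e.g. "host1.domain.tld"
        Context.Items["HostKey"] = host;          // downstream code reads this to pick its settings
    }
}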
I'd rather have multiple instances of the application running, one per host, so that their runtimes are fully separated. It would also be nice to have them in separate Application Pools, but that's not as important.
Of course, I could add the sites individually to IIS on the servers, but there are some 1600 instances to configure, which would be very time consuming to do and difficult to manage.
I'd like to be able to set up a single instance on a number of servers and then control the load balancing via DNS configuration or filtering on the firewalls, both of which can easily be controlled programmatically.
FYI - the ASP.NET version in use is 4.0 and IIS is running on Windows Server 2008.
Any suggestions would be great.
Many thanks
The simplest and most robust way to do this would be to set up individual IIS sites. I know you don't want to do that because it will be very time consuming and definitely difficult to manage.
However, you've already created a website so now perhaps it's time to create a management tool for instances of that website. As there are 1600 instances that you want to configure, there's a fairly good chance that you already have details of those 1600 instances stored somewhere, such as a database or a spreadsheet.
So:
Get the data about the 1600 instances into a usable format; a SQL Server database (Express, or paid for!) would probably be ideal.
Investigate the IIS7 provisioning APIs
Put together a tool that allows you to create all 1600 instances from the data you have about them, automatically / in batches, via the IIS7 API (a rough sketch follows this list).
Maintain the tool and expand it, ready for the inevitable changes that will be required when you need to add or remove instances.
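As mentioned in step 3, here is a rough sketch of provisioning one instance through the IIS7 API (Microsoft.Web.Administration); the host name, pool name and physical path are placeholders, and all sites point at the same codebase folder so each gets its own Application/Session state:

using Microsoft.Web.Administration;

public static class InstanceProvisioner
{
    public static void ProvisionInstance(string host)
    {
        using (var manager = new ServerManager())
        {
            string physicalPath = @"C:\inetpub\shared-codebase";   // shared codebase for every instance

            // Optional: one application pool per host for full isolation.
            if (manager.ApplicationPools[host] == null)
                manager.ApplicationPools.Add(host);

            // Bind the new site to this specific host header on port 80.
            Site site = manager.Sites.Add(host, "http", "*:80:" + host, physicalPath);
            site.Applications["/"].ApplicationPoolName = host;

            manager.CommitChanges();
        }
    }
}

Looping this over the 1600 hosts from your database gives you the batch-creation tool described above.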
Don't forget that putting your own tool together for a task such as this gives you a lot of flexibility, although there may be tools out there for this purpose that are worthy of investigation. For that (i.e. a non-programmatic solution), I'd suggest asking at http://www.serverfault.com
Here's the scenario.
I have to access a web service on the local LAN to obtain a list of files which I then must retrieve from the machine running the web service. The question has arisen whether to use a mapped drive or just retrieve the files via HTTP from the web service (or web server if the service is self-hosting).
All machines are running Windows XP or later.
I am leaning towards the web server approach because it has the fewest unknowns around having the necessary permissions to access the files.
So basically the question is which is the better approach - web server or network share?
I would go the web service route because it reduces the number of variables in the equation. Based on your current setup, you already need a web service in order to get the list of files to download. At that point you know access to the web service isn't a problem, so serving the files from there removes a lot of unknowns.
If you put the files onto another machine, you run the risk of hitting at least the following problems that do not exist with the web service (since you already know you have access):
Permission Issues
Firewall issues
I would think it depends on various factors you haven't mentioned: will lots of clients be trying to access these files at a given time? Will the app be distributed across multiple servers in the future? Might you need to implement a caching system in the future?
If the answer is no to all of these, then you should probably pick what's easiest.
I would lean towards plain old HTTP. Doing it via the web service would probably involve marshalling the file as an array, for example, which makes it larger. A file share means needing to worry about permissions.
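For what it's worth, the plain-HTTP retrieval I have in mind is just something like this (URL and local path are placeholders):

using System.Net;

public static class FileFetcher
{
    public static void Download()
    {
        using (var client = new WebClient())
        {
            // Fetch the file straight off the web server; no share permissions involved.
            client.DownloadFile("http://fileserver/files/report.pdf", @"C:\temp\report.pdf");
        }
    }
}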
I'm working on a graduation project for one of my university courses, and I need to find somewhere to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any host allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler simulates a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned by the server, so there is no internal access to server code. Given that, any site can be crawled.
Be aware of web crawler ethics guidelines. There are pages you shouldn't index and links you shouldn't follow, and web developers publish files and instructions for crawlers (such as robots.txt) saying what you may index or follow.
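If it helps, here is a deliberately naive sketch of checking a site's robots.txt before crawling; a real parser would also handle user-agent sections, wildcards and crawl delays:

using System;
using System.Net;

public static class RobotsCheck
{
    // Returns false only when robots.txt disallows everything ("Disallow: /").
    public static bool MayCrawl(string siteRoot)
    {
        using (var client = new WebClient())
        {
            try
            {
                string robots = client.DownloadString(new Uri(new Uri(siteRoot), "/robots.txt"));
                foreach (string line in robots.Split('\n'))
                {
                    if (line.Trim().Equals("Disallow: /", StringComparison.OrdinalIgnoreCase))
                        return false;   // the site asks crawlers to stay out entirely
                }
            }
            catch (WebException)
            {
                // No robots.txt (or it is unreachable) is usually treated as "allowed".
            }
        }
        return true;
    }
}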
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web hosts don't allow this due to the potential security implications, since several other people will be running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as not to cause problems for your neighbors; your primary concern will be not using too much CPU or bandwidth. This isn't just politeness: most web hosts will suspend your hosting if you're causing problems on their network, such as starving the other users of the hardware you're on by consuming all of its resources yourself. You can usually burst to higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
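One simple way to throttle, assuming your crawler runs a sequential loop over a URL list you supply (the two-second delay is arbitrary and should be tuned to your host's limits):

using System.Collections.Generic;
using System.Net;
using System.Threading;

public static class PoliteCrawler
{
    public static void Crawl(IEnumerable<string> urls)
    {
        foreach (string url in urls)
        {
            using (var client = new WebClient())
            {
                string html = client.DownloadString(url);
                // ... parse "html" and write the results to your database ...
            }
            Thread.Sleep(2000);   // roughly one request every two seconds
        }
    }
}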
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some hosts will offer an MS SQL Server database you can use alongside the VPS account (on another machine); installing SQL Server on the VPS itself can be a problem license-wise.
Make sure you check the terms of use before you open an account, as well as the (virtual) system specs. Also check whether there is some kind of minimum contract period; sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (Virtual Private Server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a web site to act as a crawler, it isn't practical because the web page would have to be requested for your crawler to do any work. You will have to read the host's ToS (Terms of Service) to see what the usage terms are. Some of the lower-priced hosts will cut your connection, citing "negatively impacting the network", if you try to use too much bandwidth, even though they have given you plenty to use.
VPSes run around $30-80 for a Linux server and $60+ for a Windows server.
Dedicated servers run $100+ for both Linux and Windows.
You don't need any web hosting to run your spider. Just ask for a PC with an internet connection that can act as a dedicated server, configure the database, and run the crawler from there.
I am working on a small SharePoint application.
There are two ways in which we can access SharePoint data:
1. By using Microsoft.SharePoint.dll. In this case you need to do your coding on the same machine (the Windows server running SharePoint).
2. By using the SharePoint web services. This allows the developer to do development work on a different machine.
But which way should I prefer?
Regards,
Jene
That all depends on what you want to do and where you want to run it. The SharePoint object model (Microsoft.SharePoint.dll) is substantially faster than the web services, but as you said, it only runs on a SharePoint machine. So if you are on the SharePoint machine, definitely use the object model; otherwise, use the web services.
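For illustration, a small object-model example; it only runs on a machine joined to the SharePoint farm, and the site URL and list name are placeholders:

using System;
using Microsoft.SharePoint;

public static class ListDump
{
    public static void Run()
    {
        using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            // Enumerate a document library directly against the content database.
            SPList list = web.Lists["Shared Documents"];
            foreach (SPListItem item in list.Items)
            {
                Console.WriteLine(item.Title);
            }
        }
    }
}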
You don't have to develop on the same machine, but you do need to develop on server with SharePoint installed. I can tell you from experience that the web services are not the nicest to work with...I would use the SharePoint object model.
Just saying "development on my production server" makes me cringe. To that end, you may want to look into using some kind of VMWare to do your development on. You can install Sharepoint and Visual studio on the virtual machine to do your development. The only issues with you doing it this way is that you are not going to have the same content (the lists and other user created libraries/sites), but you can easily make your own to resemble your production environment (as is usually the case with most dev environments).
One thing to consider is database connections. If you create a separate application using the SharePoint DLL, then the DB connections will be managed by your application, so it will have to run under an account that has read/write privileges on the SharePoint database.
If you use the web services then this is not the case since the database connections will be handled inside the IIS Application Pool (like a normal SharePoint web site).
Unless you are unable to deploy solutions to the SharePoint server due to corporate security policies, shared hosting or similar, you will want to use the object model. Get a SharePoint VM (one of the evaluation virtual pc downloads from Microsoft is a good starting point if you haven't set one up before) and WSPBuilder, and it's easy enough to make packages to be deployed to the server.