Using Amazon EC2 to host an ASP.NET application - C#

I’m currently developing an application that will be heavy on images, which I hope to host “in the cloud”.
It’s a C# / ASP.NET application.
So I'm considering using Amazon S3 for storing the images.
That bit’s fine.
However, I'm also considering using EC2 to host the application on.
The application uses SQL Server (only at a fairly basic level).
I'm wondering how to set up my hosting solution.
Would it be advisable to:
Have one small instance dedicated to SQL Server (I would use the Express edition to start with)
Have one small instance dedicated to running IIS (and hosting the application), with the SQL connection string pointed at the above-mentioned SQL instance
Use Elastic Block Store to store the SQL data, ASPX pages, compiled assemblies, etc.
Any other ideas?
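For reference, I imagine the connection string on the IIS box would point at the SQL instance something like this (the internal host name and credentials here are made up):

using System.Data.SqlClient;

class ConnectionExample
{
    static void Main()
    {
        // Hypothetical private DNS name of the SQL Server instance inside EC2.
        using (var conn = new SqlConnection(
            "Server=ip-10-0-0-12.ec2.internal;Database=AppDb;" +
            "User Id=appuser;Password=secret;"))
        {
            conn.Open();
        }
    }
}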

Keep them all on the same instance for now, don't prematurely optimise/scale. You might find simply upgrading to a medium-cpu instance (36c/hr instead of 12c/hr) will be enough to keep you running for months without any kind of scaling headaches.
In the future, if you outgrow your single-server setup, then you can move your DB onto a separate instance, initially a small-cpu, upgrading to a medium later.
One thing worth noting: you can't upgrade from a medium-CPU to a high-CPU instance, because the 32-bit OS images won't run on the larger instances and 64-bit images won't run on the smaller ones.
Pick 32-bit Windows (because EC2 uses this for the small and medium instances), run on a single small instance, and then scale up when you need to.
Regarding EBS - I'd recommend creating a healthy-sized volume that'll keep you going for a while and configuring SQL Server to store its data there.
You could store your ASP.NET app on an EBS volume as well, but the instance's 10GB OS drive might be fine; I don't think there's much difference here.
I'd strongly recommend using an Elastic IP rather than the temporary IP EC2 assigns you on launching an instance. Create an Elastic IP, update your DNS to point to it and associate it with your instance.
After getting your image configured how you want it, shut it down, bundle the instance and then register a new AMI for it (privately). It'll take about 40 minutes. This means if something horrible happens to your instance, you can recover within 15 minutes by following these steps (a scripted sketch follows the list):
Detach your EBS volume
Disassociate your Elastic IP
Terminate your faulty instance
Launch an instance of your AMI
Attach your EBS volume to the new instance
Associate your Elastic IP with the new instance
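Those recovery steps can be scripted as well. A rough sketch using the AWS SDK for .NET - all IDs, addresses, the device name and the region are placeholders, and you'd want to poll until the new instance is running before the attach/associate calls:

using System.Collections.Generic;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class InstanceRecovery
{
    static void Main()
    {
        var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

        // Steps 1-3: free the EBS volume and Elastic IP, kill the faulty instance.
        ec2.DetachVolume(new DetachVolumeRequest { VolumeId = "vol-12345678" });
        ec2.DisassociateAddress(new DisassociateAddressRequest { PublicIp = "203.0.113.10" });
        ec2.TerminateInstances(new TerminateInstancesRequest
        {
            InstanceIds = new List<string> { "i-0badf00d" }
        });

        // Step 4: launch a replacement from the privately registered AMI.
        var run = ec2.RunInstances(new RunInstancesRequest
        {
            ImageId = "ami-12345678",
            MinCount = 1,
            MaxCount = 1,
            InstanceType = InstanceType.M1Small
        });
        string newId = run.Reservation.Instances[0].InstanceId;

        // (Poll DescribeInstances here until the new instance is running.)

        // Steps 5-6: reattach the data volume and move the Elastic IP across.
        ec2.AttachVolume(new AttachVolumeRequest
        {
            VolumeId = "vol-12345678", InstanceId = newId, Device = "xvdf"
        });
        ec2.AssociateAddress(new AssociateAddressRequest
        {
            PublicIp = "203.0.113.10", InstanceId = newId
        });
    }
}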

Related

How to create system environment variables on remote servers

We have a number of web servers and app servers set up that all connect to some databases on our network. We're trying to make our code more secure by moving the database connection strings out of the code. I have set up some system environment variables that hold the connection strings and can read them within the app, so that works fine. However, thinking through making this a production solution for security, I need a way to register all these variables on all of our servers, and that could be a bit of a maintenance nightmare down the road.
So I am wondering if anyone has any ideas on how to set up a central distribution app that could register all the variables across a list of servers whenever they need updating? I'm working in a Windows .NET environment. Or is there a better solution for storing this information outside of the code base?
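For reference, this is roughly how I'm reading them in the app (the variable name here is made up):

using System;

class ReadConnectionString
{
    static void Main()
    {
        // Machine-level ("system") environment variable; the name is a placeholder.
        string connStr = Environment.GetEnvironmentVariable(
            "APP_DB_CONNECTION", EnvironmentVariableTarget.Machine);
        Console.WriteLine(connStr);
    }
}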
If you are targeting MS SQL Server, I'd recommend using Windows Authentication and Integrated Security, so you just need to provide host and database names in your connection.
The rest of the connection string is usually best put into your respective Web.config/App.config. If you insist on distributing environment variables, use the Windows Registry instead - you can access it easily via Remote PowerShell or .NET.
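If you go the registry route, here's a minimal sketch of pushing a value out from .NET (the server names and key path are made up; the Remote Registry service must be running on each target and the caller needs admin rights):

using System;
using Microsoft.Win32;

class ConnectionStringDistributor
{
    // Hypothetical server list and registry key path.
    static readonly string[] Servers = { "WEB01", "WEB02", "APP01" };
    const string KeyPath = @"SOFTWARE\MyCompany\MyApp";

    static void Main()
    {
        string connectionString =
            "Server=db01;Database=AppDb;Integrated Security=SSPI;";

        foreach (string server in Servers)
        {
            using (RegistryKey hklm = RegistryKey.OpenRemoteBaseKey(
                       RegistryHive.LocalMachine, server))
            using (RegistryKey key = hklm.CreateSubKey(KeyPath))
            {
                key.SetValue("ConnectionString", connectionString);
                Console.WriteLine("Updated " + server);
            }
        }
    }
}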

What happens to open sockets when scaling down Azure web app?

We have a scenario where we use multiple web apps in Azure. When scaling up, I understand Azure simply starts more web processes, allowing connections to multiple servers; there's a broadcast system in place for synchronization. The issue is: what happens to an open socket if we manually or automatically scale down? Say we have 5 servers, each with an open web socket, and we scale down to 1 - what will happen to the 4 sockets that were connected to the servers being removed?
As a side note, if they stick and stay open until the client disconnects, will Azure bill me for that time?
If they don't stick, it's only a matter of making sure the client reconnects properly.
From what I've seen so far, they seem to stick, but that might just be a grace period while it's scaling down, so I'd rather be on the safe side here with an answer from someone who actually knows.
From another thread a few years ago, it was the newest instance that gets removed (most of the time), but I cannot find anything about it waiting for the connections to drop:
Which instances are stopped when I scale my Azure role down?
There is, however, a management API that you can access to scale down (delete) specific cloud service role instances:
The Delete Role Instances operation deletes multiple role instances from a deployment in a cloud service.
POST Request
https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roleinstances/
Using this you can monitor which instances you want to remove and send the delete command programmatically. That way you could wait for the users to cleanly disconnect from the instance before deleting it.
Reference to the Microsoft API doc for this:
https://msdn.microsoft.com/library/azure/dn469418.aspx
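A rough sketch of calling that operation from C# - the subscription/service/deployment values, certificate path and instance name are all placeholders, and the exact URL query and body schema are as documented at the MSDN link above:

using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class RoleInstanceDeleter
{
    static void Main()
    {
        // Placeholders - substitute your own identifiers.
        string subscriptionId = "subscription-id";
        string service = "cloudservice-name";
        string deployment = "deployment-name";
        string url = "https://management.core.windows.net/" + subscriptionId +
                     "/services/hostedservices/" + service +
                     "/deployments/" + deployment + "/roleinstances/?comp=delete";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2013-08-01");

        // The service management API authenticates with a management certificate.
        request.ClientCertificates.Add(
            new X509Certificate2(@"C:\certs\management.pfx", "pfx-password"));

        // One <Name> element per instance you want removed (name is made up).
        string body =
            "<RoleInstances xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Name>WebRole1_IN_4</Name></RoleInstances>";

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (var stream = request.GetRequestStream())
            stream.Write(bytes, 0, bytes.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine(response.StatusCode); // expect Accepted (202)
    }
}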
So, after quite a bit of development and testing, there's an answer. We're deploying using Kudu, so Azure builds and publishes the web app. The IIS instances that have open websockets will run their Application_End cycle and shut down the TCP connections.
As far as I can tell so far, this happens before the new site is spun up and accepts connections. Thus, there is no fear of being billed for additional hours. This also seems to occur for all web apps (sites) within the plan when scaling the web app plan (server farm), no matter whether it's up or down.
This might be an inconvenience for our users, but with proper shutdown on the server side and re-connection from the client side it should work out just fine.
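For illustration, the server-side shutdown amounts to something like this in Global.asax (the socket registry is hypothetical - it stands in for however your app tracks its open websockets):

using System;
using System.Collections.Concurrent;
using System.Net.WebSockets;
using System.Threading;

// Hypothetical app-level registry of the websockets this instance has open.
public static class SocketRegistry
{
    public static readonly ConcurrentBag<WebSocket> All =
        new ConcurrentBag<WebSocket>();
}

public class Global : System.Web.HttpApplication
{
    protected void Application_End()
    {
        foreach (WebSocket socket in SocketRegistry.All)
        {
            // Tell clients we're going away so they can reconnect elsewhere.
            socket.CloseAsync(WebSocketCloseStatus.EndpointUnavailable,
                "Instance shutting down", CancellationToken.None).Wait();
        }
    }
}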
There's only one way to find out: test it. (Or ask an Azure engineer, but that might take ages...)
I would assume that it would not scale down a machine while someone is connected. Imagine watching a stream and having it randomly stop so you can connect to another server - I wouldn't think Microsoft would design it to drop connections.
The first paragraph mentions web roles:
On the Scale page of the Azure Management Portal, you can manually scale your application or you can set parameters to automatically scale it. You can scale applications that are running Web Roles, Worker Roles, or Virtual Machines. To scale an application that is running instances of Web Roles or Worker Roles, you add or remove role instances to accommodate the work load.
Web apps or web roles require the use of a VM. This is detailed in the first bullet point listed:
You should consider the following information before you configure scaling for your application:
• You must add Virtual Machines that you create to an availability set to scale an application that uses them. The Virtual Machines that you add can be initially turned on or turned off, but they will be turned on in a scale-up action and turned off in a scale-down action. For more information about Virtual Machines and availability sets, see Manage the Availability of Virtual Machines.
The information that follows the bullet points details the scaling process.
For additional information, this link also mentions the use of VMs for web apps. The verbiage below can be found in the section titled Web App Concepts:
Auto Scaling - Web Apps enables you to quickly scale-up or out to handle any incoming customer load. Manually select the number and size of VMs or set up auto-scaling to scale your servers based on load or schedule.
https://azure.microsoft.com/en-us/documentation/articles/app-service-web-overview/

Can I run my C# .exe on a rented / cloud computer?

I have built a working C# application, a multi-threaded application that crunches numbers for research.
I can only go so far with my own PC, so I'm wondering if there exists a service where I can rent a high powered computer that can run my C# code. Something with a lot of processor cores, so I could run... say, 64 threads concurrently.
Windows Azure can do that. Just spin up a VM and make the machine as big as you need.
Troy Hunt wrote a good blog post about how he used a cloud SQL Server for that kind of job.
In Windows Azure, when you create your service model, you can specify the size at which to deploy an instance of your role, depending on its resource requirements.
Amazon EC2 also provides a number of options for choosing an instance to match your requirements.
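Whichever provider you pick, it helps if the code scales to however many cores the rented machine has instead of hard-coding 64 threads. A minimal sketch (the per-item work method is made up):

using System;
using System.Threading.Tasks;

class Cruncher
{
    static void Main()
    {
        // 64 on a big cloud VM, fewer on your own PC - no code change needed.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.For(0, 1000000, options, i =>
        {
            Crunch(i); // hypothetical per-item research workload
        });
    }

    static void Crunch(int i) { /* number crunching here */ }
}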

Multiple instances of an ASP.NET application from a single web site configuration in IIS

As the title suggests, I'd like to create a single web site in IIS that creates multiple instances of an ASP.NET application based on the requested host,
so that all instances are running the same codebase, but each instance has its own Application object, Session collection, etc.
For example :
host1.domain.tld/default.aspx -> this.Application["foo"] = "host1"
host2.domain.tld/default.aspx -> this.Application["foo"] = "host2"
host3.domain.tld/default.aspx -> this.Application["foo"] = "host3"
I know I can configure IIS to listen on a specific IP address, and set the DNS for host(1|2|3).domain.tld to point at this IP address, then use Global.asax to check the requested host and set up host-specific settings. But the application will still be running as a single instance on the server.
I'd rather have multiple instances of the application running for each host so that their runtimes are fully separated. It would also be nice if I could have them in separate application pools too, but that's not so important.
Of course, I could add the sites individually in IIS on the servers, but there are some 1600 instances that will need to be configured, and this would be very time-consuming to do and difficult to manage.
Ideally I'd be able to set up a single instance on a number of servers, then control the load balancing via DNS configuration or filtering on the firewalls, both of which can easily be controlled programmatically.
FYI - the ASP.NET version in use is 4.0 and IIS is running on Windows Server 2008.
Any suggestions would be great.
Many thanks
The simplest and most robust way to do this would be to set up individual IIS sites. I know you don't want to do this because it would be very time-consuming and definitely difficult to manage.
However, you've already created a website, so now perhaps it's time to create a management tool for instances of that website. As there are 1600 instances that you want to configure, there's a fairly good chance that you already have the details of those 1600 instances stored somewhere, such as a database or a spreadsheet.
So:
Get the data about the 1600 instances into a usable format; a SQL Server database (Express, or paid for!) would probably be ideal.
Investigate the IIS7 provisioning APIs.
Put together a tool that allows you to create all 1600 instances from the data you have about them, automatically / in batches, via the IIS7 API (see the sketch below).
Maintain the tool and expand it, ready for the inevitable changes that will be required when you need to add or remove instances.
Don't forget that putting your own tool together for a task such as this gives you a lot of flexibility, although there may be tools out there for this purpose that are worthy of investigation. For that (i.e. a non-programmatic solution), I'd suggest asking at http://www.serverfault.com
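For step 3, a minimal sketch of batch provisioning with the IIS7 managed API (Microsoft.Web.Administration) - the host names, shared path and runtime version are placeholders:

using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class SiteProvisioner
{
    static void Main()
    {
        // Hypothetical instance data - in practice, read it from your database.
        string[] hosts = { "host1.domain.tld", "host2.domain.tld" };

        using (var manager = new ServerManager())
        {
            foreach (string host in hosts)
            {
                // One application pool per instance keeps runtimes fully separated.
                ApplicationPool pool = manager.ApplicationPools.Add(host);
                pool.ManagedRuntimeVersion = "v4.0";

                // All sites point at the same codebase on disk.
                Site site = manager.Sites.Add(
                    host, "http", "*:80:" + host, @"C:\inetpub\sharedapp");
                site.Applications["/"].ApplicationPoolName = host;
            }

            manager.CommitChanges();
        }
    }
}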

What sort of web host lets you run crawlers on it?

I'm working on a graduation project for one of my university courses, and I need to find somewhere to run several crawlers I wrote in C#. With no web hosting experience, I'm a bit lost. Is this something that any site allows? Do I need a special host that gives more access to the server? The crawler is a simple app that does its work, then periodically writes information to a remote database.
A web crawler is a simulation of a normal user. It accesses sites the way browsers do, getting the HTML (JavaScript, etc.) returned from the server - so no internal access to server code. Given that, any site can be crawled.
Be aware of web crawler ethics guidelines, though. There are pages you shouldn't index or whose links you shouldn't follow. Web developers also publish files and instructions for crawlers - robots.txt being the standard - saying what you may index or follow.
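For illustration, here's a naive robots.txt check in C# (real parsers also handle per-agent groups and wildcards; this sketch treats every Disallow rule as applying to you):

using System;
using System.Net;

class RobotsCheck
{
    static void Main()
    {
        Console.WriteLine(IsAllowed(new Uri("http://example.com/some/page")));
    }

    static bool IsAllowed(Uri target)
    {
        var robotsUrl = new Uri(target, "/robots.txt");
        using (var client = new WebClient())
        {
            string robots;
            try { robots = client.DownloadString(robotsUrl); }
            catch (WebException) { return true; } // no robots.txt: assume allowed

            foreach (string line in robots.Split('\n'))
            {
                string trimmed = line.Trim();
                if (!trimmed.StartsWith("Disallow:", StringComparison.OrdinalIgnoreCase))
                    continue;
                string path = trimmed.Substring("Disallow:".Length).Trim();
                if (path.Length > 0 && target.AbsolutePath.StartsWith(path))
                    return false;
            }
        }
        return true;
    }
}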
If you can't run it off your desktop for some reason, you'll need a host that lets you execute arbitrary C# code. Most cheap web servers don't do this due to the potential security implications, since there will be several other people running on the same server.
This means you'll need to be on a server where you have your own OS. Either a VPS - Virtual Private Server, where virtualization is used to give you your own OS but share the hardware - or your own dedicated server, where you have both the hardware and software to yourself.
Note that if you're running on a server that's shared in any way, you'll need to make sure to throttle yourself so as to not cause problems for your neighbors; your primary issue will be not using too much CPU or bandwidth. This isn't just for politeness - most web hosts will suspend your hosting if you're causing problems on their network, such as denying the other users of the hardware you're on resources by consuming them all yourself. You can usually burst higher usage levels, but they'll cut you off if you sustain them for a significant period of time.
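A crude way to pace a simple fetch loop so you stay under those limits (the URLs and the delay here are arbitrary):

using System;
using System.Net;
using System.Threading;

class ThrottledFetcher
{
    static void Main()
    {
        // Hypothetical URL list; the pause keeps CPU and bandwidth use modest
        // so you don't starve your neighbours on a shared host.
        string[] urls = { "http://example.com/a", "http://example.com/b" };

        using (var client = new WebClient())
        {
            foreach (string url in urls)
            {
                string html = client.DownloadString(url);
                Console.WriteLine("{0}: {1} bytes", url, html.Length);
                Thread.Sleep(TimeSpan.FromSeconds(2)); // crude rate limit
            }
        }
    }
}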
This doesn't seem to have anything to do with web hosting. You just need a machine with an internet connection and a database server.
I'd check with your university if I were you. At least in my time, a lot was possible to arrange in-house when it came to graduation projects.
Failing that, you could look into a simple VPS (Virtual Private Server) account. Unless you are sure your app runs under Mono, you will need a Windows one. The resource limits are usually a lot lower than you'd get from a dedicated server, but they're relatively affordable. Some will offer a MS SQL Server database you can use next to the VPS account (on another machine). Installing SQL Server on the VPS itself can be a problem license wise.
Make sure you check the terms of usage before you open an account, as well as the (virtual) system specs though. Also check if there is some kind of minimum contract period. Sometimes this can be longer than a single month, especially if there is no setup fee.
If at all possible, find a host that's geographically close to you. A server on the other side of the world can get a little annoying to access remotely using Remote Desktop.
80legs lets you use their crawlers to process millions of web pages with your own program.
The rates are:
$2.00 per million pages
$0.03 per CPU-hour
They claim to crawl 2 billion web pages a day.
You will need a VPS (virtual private server) or a full-on dedicated server. Crawlers are nothing more than applications that "crawl" the internet. While you could set up a web site to be a crawler, it is not practical, because the web page would have to be accessed for your crawler to work. You will have to read the ToS (terms of service) for the host to see what the terms of usage are. Some of the lower-priced hosts will cut your connection with a reason of "negatively impacting the network" if you try to use too much bandwidth, even though they have given you plenty to use.
VPS are around $30-80 for a linux server and $60+ for a windows server.
Dedicated services run $100+ for both linux and windows servers.
You don't need any web hosting to run your spider. Just ask for a PC with a web connection that can act as a dedicated server, configure the database, and run the crawler from there.
