I was trying to set up ElastiCache (memcached engine) and use it in my .NET application through the memcached C# client API "Enyim".
I'm new to AWS and facing some problems. I have a few questions:
Question 1: Can I access my cache cluster nodes from my local machine?
Question 2: What is the process for setting up a complete AWS ElastiCache instance? Correct me if I'm wrong:
Set up a VPC (the default one)
Set up a security group in EC2 (the default one)
Set up the cache cluster in the same VPC.
Now how can I use this cache cluster?
I have set up the memcached engine locally and the same code through Enyim runs there, but I was not able to run the same (get/set) code against the ElastiCache node instances.
It is not possible to access ElastiCache directly from outside AWS; however, it can be done using an SSH tunnel through an EC2 instance in the same VPC.
Here is how to set up the SSH tunnel:
http://howto.ccs.neu.edu/howto/windows/ssh-port-tunneling-with-putty/
Here is a full C# example of how to use ElastiCache:
http://www.omidmufeed.com/how-to-use-elasticache-memcached-or-runtime-caching-in-c/
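For reference, here is a minimal sketch of that usage, assuming the classic Enyim.Caching client and an SSH tunnel (as above) forwarding local port 11211 to the cache node; the host and port are placeholders:

```csharp
// Minimal sketch, assuming the classic Enyim.Caching client and an SSH tunnel
// forwarding localhost:11211 to the ElastiCache node (host/port are placeholders).
using System;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class Program
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();
        // Inside AWS you would add the cache node endpoint directly;
        // from a local machine, point at the tunnelled local port instead.
        config.AddServer("127.0.0.1:11211");

        using (var client = new MemcachedClient(config))
        {
            client.Store(StoreMode.Set, "greeting", "hello from Enyim");
            Console.WriteLine(client.Get<string>("greeting"));
        }
    }
}
```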
As far as Question #1 goes, I am certain that when using the Redis flavor of ElastiCache you cannot (and, according to AWS, never will be able to) access the cache from anywhere except within AWS.
For debugging purposes it would be nice to be able to, but in production, accessing the cache from outside AWS would introduce enough latency to defeat any benefit you might get from using the cache in the first place.
Quick version of the question:
How do I configure a .NET Core Windows service to target the correct database in a multi-tenant environment where each tenant has their own database and they all run on the same self-hosted server?
Some background info:
I am working on a new Windows service and, since it is completely new, we are going to use .NET Core. I have read this page and it does talk about how to set the environment variable in an IIS app, in Azure, globally, and per command window, but it does not really mention a Windows service, Azure DevOps, or how to handle a multi-tenant environment.
The best I can figure is that you are supposed to set the environment variable in the start parameters for the Windows service once it is created, but that seems very fragile. This becomes more of a problem when you are looking at 25 services with the potential for 100 or more (we are a small, growing company). The thought of having to go back and set all of these variables manually if we decide to migrate services to another server is not pleasant.
So the longer version of the question is: am I on the correct track, or is there some better way to set this up so that it does not become such a manual process? Perhaps setting a variable when the service is deployed to the server would do the trick? How would I do that with Azure DevOps, though?
Edit 1
Here is a representation of the environment we are running these services in.
Databases (separate machine):
Shared1
Db1
Db2
Db3
Machines:
Server1
Windows Services:
Service1
Service2
Service3
The databases are running on the db server. There is one database per service, and there is a shared database which stores some common information (I don't think this is relevant to the question). There is one server. On that one server there are copies of the code that needs to run as a Windows service. The number of copies of the service corresponds to the number of databases. So for the above scenario: Service1 connects to Db1, Service2 connects to Db2, Service3 connects to Db3, etc.
Since I only have one machine, if I set an environment variable to something like ASPNETCORE_DB1 then all three services will read that variable and connect to Db1, which is not what needs to happen.
If I set multiple environment variables, ASPNETCORE_Db1, ASPNETCORE_Db2, ASPNETCORE_Db3, how does each of the services know which environment variable to read?
I have a feeling that you are mixing a few things here.
If you want to have different settings per environment, you just need to use the CreateHostBuilder method. You can read about this here. Then you need to set the ASPNETCORE_ENVIRONMENT environment variable on the machine where you host the app. In this approach the application selects the proper configuration at runtime. However, please do not keep sensitive data in your config files. In this approach you have one compiled build for all tenants.
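As a rough sketch of that first approach (the worker registration is just a placeholder), a generic-host Windows service picks up the environment-specific settings file automatically once that variable is set:

```csharp
// Sketch only: CreateDefaultBuilder loads appsettings.json plus
// appsettings.{Environment}.json, where the environment name comes from the
// DOTNET_ENVIRONMENT / ASPNETCORE_ENVIRONMENT variable set on the host machine.
// Requires Microsoft.Extensions.Hosting.WindowsServices for UseWindowsService().
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService()
            .ConfigureServices((context, services) =>
            {
                // Hypothetical worker; its connection string would come from the
                // environment-specific appsettings file via context.Configuration.
                // services.AddHostedService<Worker>();
            });
}
```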
You can pass configuration in when you build the app. To do that you should define variable groups in Azure DevOps (one per tenant) and flush your secrets into the config file. In addition, you can use runtime parameters to define the target tenant. This is still not secure enough, as the secrets can be found in the artifact (the compiled code). In this approach you have one compiled build per tenant.
If you can use Azure Key Vault, I would recommend that approach. You can create one Key Vault per tenant and then, in CreateHostBuilder, load secrets from the proper Key Vault (if you name the Key Vault after the tenant, it will simplify selecting the proper one). You can read about this here.
From your question I assumed that your tenant = environment. If not, then you can have one Key Vault per environment, such as Dev, QA and Prod. Then you should name your secrets with the following pattern: {tenant_name}_{secret_name}. You should load all the secrets when starting the app and then select the proper secret at runtime (based on the tenant id).
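Here is a rough sketch of the per-tenant Key Vault idea, assuming the Azure.Extensions.AspNetCore.Configuration.Secrets and Azure.Identity packages; the TENANT_NAME variable and the vault naming convention are only illustrative:

```csharp
// Sketch only: pick the tenant's Key Vault by name inside CreateHostBuilder.
// TENANT_NAME and the "{tenant}.vault.azure.net" convention are assumptions.
using System;
using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public static class TenantHost
{
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                var tenant = Environment.GetEnvironmentVariable("TENANT_NAME"); // e.g. "tenant1"
                config.AddAzureKeyVault(
                    new Uri($"https://{tenant}.vault.azure.net/"),
                    new DefaultAzureCredential());
            });
}
```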
I hope this helps you make a good decision.
Getting the corresponding database connection string from an environment variable is the better way. Simple steps:
Define the same environment variable with a different value (the connection string) in each environment/machine manually.
Get that environment variable's value in your Windows service app and use it as the database connection string if it is not null; otherwise use the default connection string.
Edit:
For multiple Windows services on the same machine, I suggest that you specify the connection string per the current service name:
How can a Windows Service determine its ServiceName?
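Putting those two pieces together, here is a rough sketch (the CONNSTR_* variable names are made up for illustration) that resolves the current service name via WMI, as the linked question describes, and then reads the matching environment variable:

```csharp
// Hypothetical sketch: find the current Windows service name via WMI, then pick
// the matching connection-string environment variable (naming convention assumed).
// Requires the System.Management NuGet package (Windows only).
using System;
using System.Diagnostics;
using System.Management;

static class TenantConnectionString
{
    public static string Resolve()
    {
        int pid = Process.GetCurrentProcess().Id;
        using var searcher = new ManagementObjectSearcher(
            $"SELECT Name FROM Win32_Service WHERE ProcessId = {pid}");

        foreach (ManagementObject service in searcher.Get())
        {
            string serviceName = (string)service["Name"]; // e.g. "Service2"
            // Convention (assumption): one variable per service, e.g. "CONNSTR_Service2".
            string conn = Environment.GetEnvironmentVariable($"CONNSTR_{serviceName}");
            if (!string.IsNullOrEmpty(conn))
                return conn;
        }

        // Fall back to a default connection string if nothing matched.
        return Environment.GetEnvironmentVariable("CONNSTR_Default");
    }
}
```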
I am trying to implement Azure Redis Cache in my app. The documents say I have to create a cache in the Azure portal first. I am wondering: is there a way to skip that step and use Redis for development without using the actual hosted service?
You can install Redis locally and use localhost. That might be one of your options even though I don't think it's faster.
You can download it and install it from here.
You can run a Redis server locally and start experimenting. But if you have decided to use Azure Redis, you should develop against the real one as early as possible, for several reasons:
Azure Redis supports SSL, and the SSL port is the default. You should use this.
Azure Redis has high-availability support through master/slave replication.
Azure Redis provides cluster support.
It might experience an unexpected patching process causing temporary data loss.
These things are not easy to set up and test locally.
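For what it's worth, here is a minimal sketch, assuming the StackExchange.Redis client, showing that only the connection string really changes between a local server and Azure Redis (the cache name and access key are placeholders):

```csharp
// Sketch only: local vs Azure Redis differ mainly in the connection string
// (Azure uses SSL on port 6380). Cache name and access key are placeholders.
using System;
using StackExchange.Redis;

class RedisDemo
{
    static void Main()
    {
        // Local development server:
        // var mux = ConnectionMultiplexer.Connect("localhost:6379");

        // Azure Redis Cache over SSL:
        var mux = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

        IDatabase db = mux.GetDatabase();
        db.StringSet("greeting", "hello");
        Console.WriteLine(db.StringGet("greeting"));
    }
}
```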
I am using the MongoDB worker role project to use MongoDB on Azure. I have two separate cloud services; in one of them everything works fine, however in the other the MongoDB worker role is stuck in a Busy (Waiting for role to start... Calling OnRoleStart) state.
I connected to one of the MongoDB worker roles and accessed the MongoDB log file and found the following error:
[rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
There are threads on how to fix this normally, but not with Windows Azure. I did not configure anything for the MongoDB worker role (apart from the Azure storage connection strings), and it works in the other service, so I don't know why it isn't working for this service. Any ideas?
Some time ago I was trying to host RavenDB in Azure as a worker role and had lots of issues with it as well.
Today, I believe it's better to run the database the "suggested" way on the target platform, which is as a Windows service according to the "Install MongoDB on Windows" guide. This way you won't have to deal with Azure-specific issues. To achieve this you can:
Use the Azure CmdLets along with CsPack.exe to create the package for MongoDB.
Use a solution similar to the RavenDB master-slave reads on Azure which I posted on GitHub.
Sign up for Virtual Machine (beta) on Azure, kick off a machine and install MongoDB manually there.
But I guess the most important question when hosting DB is: where do you plan to store the actual DB?
Azure's CloudDrive, which is a VHD stored in cloud storage, has the worst IO performance possible. Not sufficient for normal DB usage, I'd say.
Ephemeral storage, a cloud service's local disk space, has perfect IO, but you lose all the data once the VM is deleted. This means you usually want to make continuous, or at least regular, backups to cloud storage, maybe through CloudDrive.
An Azure VM attached disk has better IO than CloudDrive, but still not as good as ephemeral storage.
As for the actual troubleshooting of your problem: I'd suggest wrapping OnRoleStart with a try-catch, writing the exception to the log, enabling RDP on the box, and then connecting and looking into the actual issue right in place. Another alternative is using IntelliTrace, but you need VS Ultimate for that. Also, don't forget that Azure requires the use of Local Resources if your app needs to write to the disk.
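For the try-catch suggestion, here is a minimal sketch, assuming the classic Microsoft.WindowsAzure.ServiceRuntime worker role model; the class name is made up and the original MongoDB startup call is left as a placeholder:

```csharp
// Minimal sketch, assuming the classic worker role model: wrap OnStart so any
// startup exception (e.g. the mongod launch failing) ends up in the trace log
// instead of leaving the role stuck in "Busy".
using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class MongoWorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // ... original MongoDB startup logic goes here ...
            return base.OnStart();
        }
        catch (Exception ex)
        {
            // With Azure Diagnostics configured, this ends up in the WADLogsTable.
            Trace.TraceError("OnStart failed: {0}", ex);
            throw; // rethrow so the role status still reflects the failure
        }
    }
}
```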
I have a problem with a new host. So far I have been using a Fluent NHibernate approach to access data in a remote database. Due to certain circumstances I had to change to another host which doesn't allow external database access. End users use internet connections without a static IP (it is public for most of them, but it changes every 24-48 h). What can I do in my situation to keep the changes to my application at a minimum?
Data is transferred in both directions.
My ideas:
Use the new host's FTP to upload files for processing with PHP. Lots of work.
Design some kind of web-access service. Same as above.
Out of the above comes a second question:
How is access to the database provided in big systems where one can't limit connections only to known and safe sources?
A DMZ?
If you do not have external access to a database (which is pretty common, if not the default), you could use a VPN or SSH tunnel to connect to the external server and access the database as if it were a local one.
I'm currently developing an application that will be heavy on images, which I hope to host "in the cloud".
It's a C# / ASP.NET application.
So I'm considering using Amazon S3 for storing the images.
That bit's fine.
However, I'm considering using EC2 to host the application on.
The application uses SQL Server (only on a fairly basic level).
I'm wondering how to set up my hosting solution.
Would it be advisable to:
Have 1 small instance dedicated to SQL Server (would use the Express edition to start with)
Have 1 small instance dedicated to running IIS (and hosting the application); point the SQL connection string to the above-mentioned SQL instance
Use Elastic Block Store to store the SQL data, ASPX pages, compiled assemblies, etc.
Any other ideas??
Keep them all on the same instance for now, don't prematurely optimise/scale. You might find simply upgrading to a medium-cpu instance (36c/hr instead of 12c/hr) will be enough to keep you running for months without any kind of scaling headaches.
In the future, if you outgrow your single-server setup, then you can move your DB onto a separate instance, initially a small-cpu, upgrading to a medium later.
One thing that's worth noting: you can't upgrade from medium-CPU to high-CPU instances, because the 32-bit OS images won't run on the larger instances, and 64-bit images won't run on the smaller ones.
Pick 32-bit Windows (because EC2 uses this for smaller and medium instances), run on a smaller, single instance and then scale up when you need to.
Regarding EBS - I'd recommend creating a healthily sized volume that'll keep you going for a while and configuring SQL Server to store its data there.
You could store your ASP.NET app on an EBS volume as well, but the instance's 10 GB OS drive might be fine; I don't think there's much difference here.
I'd strongly recommend using an Elastic IP rather than the temporary IP EC2 assigns you on launching an instance. Create an Elastic IP, update your DNS to point to it, and associate it with your instance.
After getting your image configured how you want it, shut it down, bundle the instance, and then register a new AMI from it (privately). It'll take about 40 minutes. This means that if something horrible happens to your instance, you can recover within 15 minutes by following these steps:
Detach your EBS volume
Disassociate your Elastic IP
Terminate your faulty instance
Launch an instance of your AMI
Attach your EBS volume to the new instance
Associate your Elastic IP with the new instance