deploying asp.net application using public ip - c#

I want to deploy our official ASP.NET web application (an Enterprise Resource Planning system), which mostly contains critical data. Currently it is deployed on an internal machine on our intranet; now we want to open it to the public, i.e. on the web.
What would be the best deployment strategy so that the application remains fast, the data stays secure, and unauthorized access is prevented?
We also have a public IP and a server machine.

It's not a deployment strategy you need, but really a check that your application has been properly architected and written with security in mind.
What sort of authentication do you use, how is your database secured, have you surveyed your code for possible security issues?
Microsoft has some good material on this - there's a lot to it and it depends on what domain you work in: http://msdn.microsoft.com/en-us/library/ff649874.aspx

Paddy is correct in that you need to make sure your application was securely written rather than relying on anything in your deployment strategy.
I did a quick search and found these sites which would be a good place to start:
http://www.symantec.com/connect/articles/five-common-web-application-vulnerabilities
http://www.owasp.org/index.php/Top_10_2007
Personally the most common things I see are:
SQL Injection
Cross-Site Scripting (XSS)
I would make sure you don't have any of these vulnerabilities first.
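For the SQL injection point in particular, the standard fix is to never concatenate user input into SQL text and to use parameterized queries instead. A minimal sketch; the table, column, and method names are made up for illustration:

using System.Data.SqlClient;

// Unsafe: concatenating user input straight into the SQL text allows injection:
//   var cmd = new SqlCommand("... WHERE CustomerId = '" + customerId + "'", conn);

// Safer: the value travels as a parameter and is never parsed as SQL.
public static int CountOrdersForCustomer(string connectionString, int customerId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", conn))
    {
        cmd.Parameters.AddWithValue("@customerId", customerId);
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}

For XSS, the equivalent habit is to HTML-encode anything user-supplied before rendering it, e.g. with HttpUtility.HtmlEncode.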

Hosting a RESTful API

I'm a C# ASP.NET junior dev and have worked with Code First C# databases, RESTful APIs, MVC & Vue (a frontend framework sort of like React) to create websites.
Now at work and during my education, I've never handled deployment.
At this time I have a personal project. I have successfully hosted my relational MySQL database (managed via phpMyAdmin) and can update it from my local desktop.
My hosting site let me know they do not host C# or anything of the sort.
I found some posts suggesting Azure, AWS, others, but for every post I find I find equal people protesting those.
What is a good site to host my first REST API? I'm looking for something that can go beyond Minimum Viable Product and I'd like to host my website under the hosting service I'm currently using (so not paired hosting with the API).
What would the cost look like for an API that's deployed and being used by clients?
I realize this cost depends on the amount of traffic, but assume a basic API used for, let's say, posting orders in an online shop (though website/app/w.e, it all would communicate through the API).
Any tips are welcome as I feel I'm swimming in the dark researching this.
Thank you
Any hosting service that grants you real access to a machine will be able to run your API, as will some hosts that specialize in the .NET/Core ecosystem.
I suppose you know about PHP, based on the phpMyAdmin service; the ecosystem of hosts that support PHP, although cheaper, do not exactly give you access to the machine and probably will not support .NET/Core, as with numerous other tech stacks.
As a junior developer I believe you should get a little practice in several deployment ecosystems, so I encourage you to try most of the big clouds (Azure, GCP, AWS), but also some smaller hosts, to gain experience and understand a bit more about the differences in deployment and ecosystem.
Azure will be really easy: you can create an account and publish your API at no cost using a free Web App, and Visual Studio even has publishing tools that will handle 90% of the job. GCP will be a little trickier and will require you to know a bit about containers and clusters. If you go for a non-specialized host like DigitalOcean, you will need to understand more about the operating system and the associated servers/controllers to deploy and publish.
The cost part is a lot more difficult. It will depend on the host you are using and the load (processing, memory, size, and throughput of data). In my experience I have had some very small-scale APIs that required more processing or memory to accomplish tasks like PDF generation than a medium-scale API that only handled JSON data transactions.

What is the ideal method for creating a Windows application and service package?

I have a project I am working on where I need to create an app and service package for Windows. I would like the service process to run as SYSTEM or LOCALSYSTEM so that credentials are irrelevant. The application frontend will be installed and executable by any user on the machine. Data from the frontend application will be passed to the service - most likely paths to directories selected by users. Once started the service will listen for a command to do some action while accepting the aforementioned paths.
I'm using C# on the .NET platform and I've looked into creating a standalone service and a standalone application separately as well as creating a WCF service library and host application - that's as far as I've gotten.
All of these methods seem overly complex for what I am trying to achieve. What is modern convention when attempting something like this? I'm willing and able to learn the best method for moving forward.
Edit: This was flagged duplicate. I'm not looking for information on HOW to communicate with a Windows service. That's remedial and not at all what I'm asking. I'm looking for validation that I'm on the right track and if I'm not, I'm looking for suggestions. I've been told that I'm on the right track and pointed towards named pipe binding.
A Windows Service is certainly an option for hosting WCF, although it can be a bit of a deployment nightmare. It really depends on your environment and the capability and support of your system admins; I've had many clients where deploying a Windows service was simply not practical, since you need admin rights to install and update it.
Console applications may sound like a terrible idea, but the practicality of being able to drop them on a share and run a PowerShell script to start them is very compelling.
But frankly, IIS hosting has the most advantages in my mind, as the product is designed for ease of deployment and uptime. And you can use any transport binding in IIS that you can use in a Windows Service or a console application.
As for the binding itself, named pipes are not really a popular option in many enterprise scenarios, as they are incompatible with anything but .NET - although the same can be said for binary, which is one of the more performant bindings. The WSHttpBinding is probably the most popular binding in scenarios that involve unknown callers. WebHttpBinding is an interesting option as it's HTTP/REST based, although it requires further decoration of your operations, and honestly, if you're going that route you should really be using Web API.
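If you do go the named pipe route for the frontend-to-service channel, the basic shape is a small WCF contract plus a ServiceHost opened when the Windows service starts. A minimal sketch; the contract, class, and endpoint names are invented for illustration:

using System.ServiceModel;
using System.ServiceProcess;

// Hypothetical contract: the frontend passes user-selected directory paths.
[ServiceContract]
public interface IPathService
{
    [OperationContract]
    void QueuePath(string directoryPath);
}

public class PathService : IPathService
{
    public void QueuePath(string directoryPath)
    {
        // Store the path so the SYSTEM service can act on it when commanded.
    }
}

public class PackageService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        // Host the contract over a named pipe, only reachable on this machine.
        host = new ServiceHost(typeof(PathService));
        host.AddServiceEndpoint(typeof(IPathService),
            new NetNamedPipeBinding(),
            "net.pipe://localhost/PathService");
        host.Open();
    }

    protected override void OnStop()
    {
        if (host != null) host.Close();
    }
}

// In the frontend application, connect over the same binding and address:
//   var factory = new ChannelFactory<IPathService>(
//       new NetNamedPipeBinding(),
//       new EndpointAddress("net.pipe://localhost/PathService"));
//   IPathService proxy = factory.CreateChannel();
//   proxy.QueuePath(@"C:\Some\UserSelected\Folder");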

What is a good place to store production logs of an ASP.NET application?

Environment: Windows instance hosted in AWS. ASP.NET application logs events and errors using log4net to an SQL Server database.
Now I'm looking to offload that logging to another server with the final goal of 1) reducing the load on the SQL Server and the server itself and 2) have a better way to search and analyze those logs, which are currently being stored in a non-indexed table.
I’m looking for something faster yet simple to install, configure and use. I've already investigated Seq, Logstash, and others, but none seem to fit my requirements very well. Can you suggest a solution?
The requirements:
Free (and Open Source if possible)
Runs on Windows
Can be configured with log4net
Can run on a separate server (or has a minimal memory footprint)
Fast
The good to haves:
Minimal security to protect the server if it's public on the Internet.
Supports event forwarding, since the production server and the log server will be on loosely connected machines.
UI with basic search & analytics
One option is Elasticsearch and Kibana. Use Google, but a good starting point for you could be http://www.ben-morris.com/using-logstash-elasticsearch-and-log4net-for-centralized-logging-in-windows/
The idea behind putting the logs in a search engine instead of a database is to make it easier to search your logs. You get indexing on the fields you choose and full-text search for free, and Kibana provides an excellent user interface for exploring and aggregating logs.
My personal experience is that this makes for a very good logging setup that will make it easier for you to spot errors and negative trends faster.
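One way to wire log4net into that stack is to forward events over UDP to Logstash, which then indexes them into Elasticsearch. A minimal sketch using log4net's built-in UdpAppender; the address, port, and layout here are assumptions and must match the udp input configured on your Logstash server:

<log4net>
  <appender name="LogstashAppender" type="log4net.Appender.UdpAppender">
    <!-- Placeholder: the host and port of your Logstash udp input -->
    <remoteAddress value="10.0.0.5" />
    <remotePort value="7071" />
    <layout type="log4net.Layout.XmlLayoutSchemaLog4j" />
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="LogstashAppender" />
  </root>
</log4net>

Because the appender is fire-and-forget over UDP, it adds very little load to the production server, which fits the requirement of offloading work from the SQL Server box.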

Best Practice for Connecting ASP.NET to SQL Server

We have an ASP.NET 4.0 Web application that connects to a SQL Server on a separate machine across a LAN. I use a ConnectionString (with SQL Server authentication) stored in my Web.config to do this. Basically, it's a fairly traditional Web-Server-to-SQL strategy.
However, one of our clients is arguing that this strategy is not secure. This client says that we should only connect to the SQL Server through a separate Web Services layer.
I really don't want to rewrite this app just to satisfy this client. What should I tell him? Does anyone know how I might best refute this?
Thanks in advance...
Security is always a trade-off. What is the client really afraid of?
Having database credentials "in the clear"? I have seen auditors point this out as a potential vulnerability, but really, if someone has compromised your web server they can run arbitrary code against the database, so encrypting database credentials doesn't really buy you much.
Your web app should be using a minimal-rights user to connect to the database, so compromising the web server should only give you the rights to read & update data. How would that change if everything went through a web services layer? Again, there is a very real cost - in complexity, and in performance - by going to a web services layer. Only the client can answer whether or not that cost is worth it.
If this is a web project, you need to change the user that IIS runs the application as to a domain user and give that user permission on the SQL Server.
Then you can use SSPI in your connection string, as below.
This way, you don't need to keep your username or password in clear text in web.config.
<configuration>
  <system.web>
    <identity impersonate="true"/>
  </system.web>
</configuration>
and your connectionString
"Integrated Security=SSPI;Initial Catalog=TestDb;Data Source=10.10.10.10"
There are many customers who argue with the work of an IT professional, just like there are many people who visit the doctor asking for the medicine instead of asking what disease they have, because they already know the answer since they read about it on the internet.
I mean, they ask you to build the application, and you as an IT professional should know best whether your application works as expected. You as a professional should have the nerve to tell your customer that if he thinks he can get something better elsewhere, he should go there or perhaps build the application himself; that's what I have done in the past, with positive results :)
Regarding security: perhaps for their confidence you can encrypt the web.config and show them, but actually it means little; if someone can access the server, they can decrypt it. On the other hand, someone who wants to break into your database has to pass through a lot of barriers; it's hard, perhaps practically impossible, to break in. Another option is simply blocking connections from outside the network, or from outside a given IP range. I don't think this should be something to worry about.
There are many more, and more realistic, concerns to worry about, such as preventing cross-site scripting and similar common threats.
The client is wrong; introducing another tier would not automatically improve security.
In a nutshell, use SQL Server roles for data access; for example, the built-in db_datareader and db_datawriter roles are a good place to start. Always use the most appropriate least-privilege account for the application. If you only need to read data, use an account that only has access to read.
Use Windows authentication where possible; if this isn't possible, then at least encrypt the connection string.
More information on how to do what I've described can be found at http://msdn.microsoft.com/en-us/library/ff650037.aspx#pagpractices0001_dataaccess
One possibility is to encrypt the connection string section in the web.config. Then only a user who can access the web server directly can decrypt that section.
Here is how this works with the help of the aspnet_regiis tool:
http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
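For example, encrypting (and later decrypting) the connectionStrings section of a site's web.config looks roughly like this; the virtual path is just a placeholder for your application:

aspnet_regiis.exe -pe "connectionStrings" -app "/MyApplication"
aspnet_regiis.exe -pd "connectionStrings" -app "/MyApplication"

The section is decrypted transparently at runtime, so the application code that reads the connection string does not change.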
You could enable encrypted connections in your database and tell the client that the connections are encrypted, and therefore secure.
To make it more secure, and to satisfy your customer, you can use tunneling between your computers.
You set up a server tunneling program where the database lives, and client tunneling programs on the client computers. You connect one computer to the other via the tunnel, and the database connection happens over the tunnel.
Everything is then exchanged securely (and compressed, if the tunnel supports it).
http://en.wikipedia.org/wiki/Tunneling_protocol
http://en.wikipedia.org/wiki/HTTP_tunnel
P.S. I connect to my server only via tunneling for anything that I do.
I never liked using the web config. The registry is more secure.
Best practice is to:
Hide the important items of the connection string in the registry
Encrypt the important items of the connection string, like user names, passwords and the server name, in the registry
Access the registry through a class (see the sketch below)
Build the connection string on the fly, and only when needed, per page
Error-handle every page so the connection string won't show in case of an error
Always close connections once done, to avoid memory leaks
Always close and reset DataReaders
For additional security:
You can build a separate program to create the connection string and reference that project as a library inside the solution
If you want to be really secure, your client is correct: send the information to a DLL that will communicate with the DB. This is a lot of work.
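A minimal sketch of the "access the registry through a class" idea; the key path, value names, and the Decrypt routine are all placeholders, and the registry values are assumed to have been written (encrypted) by an installer or admin tool beforehand:

using Microsoft.Win32;

public static class ConnectionStringProvider
{
    private const string KeyPath = @"SOFTWARE\MyCompany\MyApp";

    public static string Build()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(KeyPath))
        {
            string server   = Decrypt((string)key.GetValue("Server"));
            string database = Decrypt((string)key.GetValue("Database"));
            string user     = Decrypt((string)key.GetValue("User"));
            string password = Decrypt((string)key.GetValue("Password"));

            // Assemble the connection string only when it is needed.
            return string.Format(
                "Data Source={0};Initial Catalog={1};User ID={2};Password={3}",
                server, database, user, password);
        }
    }

    private static string Decrypt(string value)
    {
        // Placeholder: apply whatever decryption matches how the values were
        // stored, e.g. DPAPI via System.Security.Cryptography.ProtectedData.
        return value;
    }
}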
Sources:
From ScottGu: http://msdn.microsoft.com/en-us/library/Aa302406
Scalability:
Regarding scalability: creating/editing registry keys across a web farm can be accomplished very easily with custom admin software.
http://weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx
On a final note, to all the people who voted me down: security out of the box is just not enough. Security is an art, not a science.
Hackers know where the password is stored by default.
ASP.NET 4.0 Fans
Microsoft makes it easy for ASP.NET 4.0 web sites to deploy registry settings:
http://msdn.microsoft.com/en-us/library/dd394698.aspx

Shaky connectivity - favor web or desktop app?

I'm a desktop application developer who is temporarily working in the web. I'm working with a client that wants me to build an app for use by locations all over the state; however, these locations have very shaky connectivity.
They really want a centralized web app and are suggesting I build a "lean" web app. I don't know what a "lean web app" means: small HTTP requests but lots of them, or large HTTP requests with few of them? I tend to favor chunky over chatty, but I've never had to worry about connectivity before.
Do I suggest a desktop app that replicates data when connectivity exists? If not, what's the best way to approach a web app when connectivity is shaky?
EDIT:
I must qualify my question with further information. Assuming the web option, they've disallowed the use of browser runtime technologies and anything that requires installation. Thus Silverlight is out, Flash is out, Gears is out - only ASP.NET and JavaScript are available to me. Having stated this, part of my question was whether to use a desktop app; I suppose that can be extended to "thicker technologies".
EDIT #2: Network is homogeneous - every node is Windows. This won't be changing.
You should get a definition of what the client means by "lean" so that you don't have confusion surrounding it. Maybe present them with several options of lean that you think they might mean. One thing I've found is it's no good at all to guess about client requirements. Just get clarification before you waste a bunch of time.
Shaky connectivity definitely favors a desktop application. Web apps are great for users that have always-on Internet connections, and that might be using a variety of different browsers and operating systems.
Your client probably has locations that are all using Windows, so a desktop application is an appropriate choice. One other advantage of web applications is that they make the deployment issue easy to deal with. Auto-update technologies like ClickOnce make the deployment and update of desktop applications almost as easy.
And not to knock Google Gears, but it's relatively new and would have to be considered more risky than a tried-and-true desktop application.
Update: and if you're limited to just javascript on the client side, you definitely do not want to make this a web app. Your application simply will not be available whenever the Internet connection is down. There are ways to save stuff locally in javascript using cookies and user stores and whatnot, but you just don't want to do this.
If connectivity is so bad, I would suggest that you write a WinForm app that downloads information, locally edits it and then uploads it. This way, if your connection goes down, all you have to do is retry until it works.
They seem to be suggesting a plain vanilla web app that doesn't use AJAX or rely on .NET postbacks or do anything that might make it break down horribly if your connection goes away for a bit. Instead, it should be designed so that you can hit Refresh until it works. In other words, they seem to want the closest thing to a WinForm app, only uglier.
You may consider using a framework like Google Gears to help provide functionality during network down time. This allows users to connect to the web page once (with a functioning connection) and then be able to use the web app from then on, even without a connection.
When the network is restored, the framework can sync changes back with the central database.
There is even a tutorial for using Google Gears with the .Net Framework.
Gears with other languages
You mention that connectivity is shaky at these locations, but that the app needs to be centralized. One thing you might consider is using multiple decentralized read database servers and a single centralized write server. MySQL makes this possible and affordable if your app is small.
Have the main database server at the datacenter/central office. Put up small web/db servers at each location, with your app installed. You can even run them off a user's computer if the remote location is not too big. Make the local database servers connect to the centralized database server as replication slaves. As changes come in to the centralized database, the slave servers will pull the data down and make it available locally. When the connection is unavailable, your app's data is still at least available, if not up to date. When the connection is available, the database handles replicating all relevant data down.
Now all you have to do is make your app use two separate database handles: when reading data it uses the local database, and when writing data it uses the central database.
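A minimal sketch of that two-handle approach, assuming the MySQL Connector/NET (MySql.Data) driver and placeholder connection strings for the local replica and the central master:

using MySql.Data.MySqlClient;

public class OrderRepository
{
    // Placeholder connection strings: reads go to the local replica at each
    // location, writes go to the central master at the datacenter.
    private const string ReadConnectionString =
        "Server=localhost;Database=shop;Uid=app;Pwd=secret;";
    private const string WriteConnectionString =
        "Server=central.example.com;Database=shop;Uid=app;Pwd=secret;";

    public long CountOrders()
    {
        using (var conn = new MySqlConnection(ReadConnectionString))
        using (var cmd = new MySqlCommand("SELECT COUNT(*) FROM orders", conn))
        {
            conn.Open();
            return (long)cmd.ExecuteScalar();
        }
    }

    public void CreateOrder(int customerId)
    {
        using (var conn = new MySqlConnection(WriteConnectionString))
        using (var cmd = new MySqlCommand(
            "INSERT INTO orders (customer_id) VALUES (@customerId)", conn))
        {
            cmd.Parameters.AddWithValue("@customerId", customerId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Reads keep working from the local replica when the WAN link is down; writes will fail until the central server is reachable again, so you may want to queue them locally and retry.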
