I have an issue where an application I am writing (in .NET using C#) has started to be blocked by the firewall when making SQL connections. The reason seems to be that the client-side inbound port falls in a massive range that isn't allowed by the firewall (around ports 50,000 - 60,000).
Is there a way to make SQL connections use a specific client-side inbound port range, so that a smaller range can be added as an exception to the clients' firewall? The server uses Microsoft SQL Server 2008.
I know one solution is to add the application itself to the firewall, but the deployment method used at the company is ClickOnce, and each update changes the directory of the installed program, meaning an admin has to update the firewall for each client every time an update is made.
As I pointed out in my comment (quoting myself here), you might want to consider moving the queries into a web service and letting it provide the data (it will be deployed on a specific, well-known port, so you'll be fine).
More on the subject: I strongly recommend you never allow the outside world direct access to your corporate database; it's a big security risk.
Related
I am working on a web application in C#, ASP.NET, and .NET Framework 4.5 that uses WebSockets. To plan for future scalability, the application pool has web gardens enabled to simulate multiple web servers on my single development machine.
The issue I am having is how to handle re-connects on the websocket side. When a new websocket session is initially created, the client browser can indirectly lock records in a SQL database. But when the connection is lost, my boss would like the browser to attempt to re-connect to the same instance of the websocket server session so it doesn't need to re-lock anything.
I don't know if something like this is possible because on re-connect the load balancer will "randomly" select which web server to handle the new connection. I was thinking of some hack to work around this but it isn't very clean:
Client opens initial websocket connection on Server A and locks a record.
Client temporarily loses internet connection and the websocket closes. (It is important to note that the server side will wait up to 60 seconds before it "disposes" itself; therefore, the SQL record will remain locked until the 60 seconds has elapsed).
Client internet connection is restored and reconnects to the website but this time on Server B.
Server B sees that this context was initially connected on Server A; therefore, transfers the session to Server A.
Server A checks the process id to see if it is running in the correct worker process (in the case of a web garden).
Server A has found the initial instance and handles the connection.
I tried Googling this question, but it doesn't seem to be a very common issue; I don't think most websocket web apps keep records locked for as long as my application does (which could be up to an hour).
Thanks in advance for all of your help!
Update 3/15/2016
I was hoping that Server.TransferRequest would help, but it doesn't seem to work for web sockets. Does anyone know of a good way to transfer a websocket context from one process to another?
First, you might want to re-examine why you're locking records for a long time and requiring a client to come back to the same server every time. That is not the usual pattern for high-scale web architecture. Perhaps you're only creating this need to reconnect to the identical server because of that requirement, and you should rethink the design so that your application works just fine no matter which host a user connects to.
Removing that requirement would certainly simplify scaling to large numbers of users and servers. You can always implement local caching and semi-sticky connections later as a performance enhancement, but only after you drop the requirement to connect to the same host 100% of the time.
If you're going to stick with that requirement to always connect to the same host, then you will ultimately need some sort of sticky load balancing. There are a lot of different schemes. Some are driven by the networking infrastructure in front of your server, some are driven by your server and some are even client driven. They all have different tradeoffs. Here's a brief run-down of some of the schemes:
Hardware networking load balancer. Here you have a fairly transparent mechanism by which a hardware load balancer (which is really just software running on a custom piece of hardware) sits in front of your web server farm and uses various techniques to make sure that whatever server a given user originally connected to, they will be reconnected to on subsequent connections. This can be based on various schemes (IP address, cookie value, etc.) as the key to identifying a particular user, and it typically has a number of possible configurations for how it can work.
Proxy load balancer. This is essentially an all software version of the hardware load balancer. Here a proxy sits in front of your server farm and directs connections to a particular server based on some algorithm (IP address, cookie value, etc...).
Server Redirect. Here an incoming connection is randomly assigned to a server. Upon connection the server figures out where the connection is supposed to go and returns a 302 redirect to the actual host, causing the client to reconnect to the proper server. This involves one less layer of infrastructure (no physical load balancers), but exposes the different server endpoints to the outside world, which the first two options do not.
Client Selection Algorithm. Here the client is given knowledge of the various server endpoints and is coded with an algorithm for consistently selecting one for this user. It could be a hash of a userID that is then mapped into the pool of server buckets, with the end result that the client ends up choosing a particular DNS name such as cl003.myserver.com, which it then connects to. This approach requires the least work server-side, so it can be simpler to implement, but it requires changing the client code in order to modify the algorithm.
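To make that last option concrete, here is a minimal sketch of a client selection algorithm in C#. The bucket count and the cl###.myserver.com host-name pattern are assumptions for illustration, not something prescribed above:

    using System;

    static class ServerSelector
    {
        // Number of websocket hosts in the pool (assumed value for illustration).
        private const int ServerCount = 10;

        // Deterministically map a user id to one of the named hosts so the
        // same user always picks the same server across reconnects.
        public static string SelectHost(string userId)
        {
            // Stable FNV-1a hash; string.GetHashCode() is not guaranteed to be
            // stable across processes, so it is avoided here.
            uint hash = 2166136261;
            foreach (char c in userId)
            {
                hash ^= (uint)c;
                hash *= 16777619u;
            }

            int bucket = (int)(hash % ServerCount);
            return string.Format("cl{0:D3}.myserver.com", bucket);
        }
    }

    // Example usage:
    // string host = ServerSelector.SelectHost("user-42"); // e.g. "cl003.myserver.com"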
For an article on sticky load balancing for Amazon Web Services to give you an idea on how one mechanism works, you can read this: Elastic Load Balancing: Configure Sticky Sessions for Your Load Balancer.
Here's another article on how the nginx proxy is configured for sticky load balancing.
You can find lots of other articles with a Google search for "sticky load balancing".
A discussion of the pros/cons of the various schemes is the subject of a much longer discussion and some of it involves knowledge of more specific requirements and specific capabilities of your infrastructure.
We have the following set up:
Windows Mobile Device with GPRS connection
Windows Server PC with SQL Server 2012
VPN network containing both devices (the cell carrier routes certain IPs inside the VPN)
Status:
With the above setup I can ping the Windows server's internal IP directly from the mobile device via GPRS.
Question:
Can I create a connection to SQL Server from my mobile device using the server's internal IP?
My con string is:
"Data Source =xxxxxxxx,1433;Initial Catalog=xxxxx;Integrated Security=SSPI;User id=xxxxx;Password=xxxxx;Connect Timeout=15"
EDIT:
More Questions:
If yes, how can I implement it?
What are the pros and cons, with regard to David's comment?
If you have a VPN and can ping the internal server, then you can connect directly to SQL Server using the normal data access libraries available in the .NET Framework. Having said that, I would strongly advise against it. It's much preferable to have a middle-tier service that interfaces between the mobile device and the database. Here are some reasons (off the top of my head) why this is better (a minimal sketch of such a service follows the list):
Mobile connections are inherently unstable and SQL connections are not great at handling that.
Having a service means you don't even need a VPN as it can be public facing (with relevant security of course).
If in the future you decide to move from SQL Server to DocumentDB/Azure/carrier pigeon, then you need to update every single mobile device to cope with the change. If you have an intermediate server, you can just update that.
If the database schema changes, you may break all of your client applications in one go.
Your middle tier can do other useful things like caching, logging etc.
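To make the middle-tier idea concrete, here is a minimal sketch of an ASP.NET Web API controller that the mobile device would call over HTTP instead of opening a SqlConnection itself. The route, table, and connection string names are placeholders made up for illustration; the actual connection string lives only on the server:

    using System.Collections.Generic;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.Http;

    public class OrdersController : ApiController
    {
        // GET api/orders — the mobile client calls this endpoint instead of SQL Server directly.
        public IEnumerable<string> Get()
        {
            var results = new List<string>();

            // Connection string stays server-side, in Web.config.
            string connStr = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT Name FROM dbo.Orders", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        results.Add(reader.GetString(0));
                }
            }
            return results;
        }
    }

The mobile app then only needs plain HTTP(S) calls, which tolerate flaky connections far better than a long-lived SQL connection.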
My system has a server and multiple clients. The server has a service for the clients, and each client has a service too, for talking to other clients.
I forward the server's service port manually on the router, but a future client cannot do this by itself after installation.
Is there a way to automatically forward ports in code on the client side during installation?
My main question is: is this approach wise? Should the system be built differently?
Project details:
C# - WCF, Communication - NetTcpBinding.
The server is on my computer (Home network). Server's service port : 8080.
The clients can be installed everywhere. Client's service port: 8081.
*I'm not familiar with IIS; can it help in this scenario?
The model you're describing sounds like a mesh network; generally you do not want clients to forward ports, whether automatically or not.
If it's absolutely necessary you could implement UPnP; there is an elaborate article here describing how to do so in .NET with a library. Note that you will have to select a different port.
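If you do go down the UPnP route, a rough sketch of what the port-mapping call looks like is below. This assumes the Open.NAT NuGet package (one of several .NET UPnP libraries, not necessarily the one the linked article uses), and it only works if the client's router has UPnP enabled:

    using System.Threading;
    using System.Threading.Tasks;
    using Open.Nat;  // assumed: the Open.NAT NuGet package

    class PortForwarder
    {
        public static async Task MapClientPortAsync()
        {
            var discoverer = new NatDiscoverer();

            // Give up after 10 seconds if no UPnP-capable router responds.
            var cts = new CancellationTokenSource(10000);
            NatDevice device = await discoverer.DiscoverDeviceAsync(PortMapper.Upnp, cts);

            // Ask the router to forward external TCP port 8081 to this machine's port 8081.
            await device.CreatePortMapAsync(
                new Mapping(Protocol.Tcp, 8081, 8081, "WCF client service"));
        }
    }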
I would strongly recommend going for a different option though; having the server manage connections between clients is more manageable and safer. There are very few valid arguments in favor of a model where a server is present but clients bypass it at times:
Bandwidth: the server might not be able to handle all the data with reasonable throughput (i.e. torrents)
Security: the server might only be there for client updates (i.e. a P2P chat client with an updater)
From the sound of it, your project does not apply to either.
EDIT: Because you have indicated the project is basically a torrent client, I would recommend reading up on the UPnP article.
Please actually read my post before placing it on hold!!
Let me start by saying I've been searching for a solution all afternoon and so far I have seen plenty of examples for WCF but none that would do what I need.
I have developed an application in C# that will be installed on customer servers and accesses a SQL Server on the customer's local network. The application also has the ability to control network relays on the customer's local network and records the status of these in SQL.

I am trying to figure out a way to have the customer's server establish a connection to our datacenter and be able to issue commands back to the customer's server (retrieve datasets from SQL, control the network relays, etc.). I have found plenty of ways to have a client call classes on a server, but have so far been unsuccessful in finding the reverse. One consideration was writing a web service as part of the application on the customer's server, but I need a way to establish this connection for customers with dynamic IP addresses and without having to publish through firewalls, etc.
Have you considered using
VPN - Virtual private network
or
Configuring a Port Forwarding redirect on the ADSL modem, and using a solution like www.noip.com ?
If I understand correctly, you want to get information from the customer's database, which is behind a firewall and has no known static IP. In addition, there might be several hundred customers, so a dedicated VPN to each customer is not viable.
First of all: you should not contact the customer database directly. Databases are not designed for this scenario and would probably be left open to attack if exposed directly to the internet.
So you need a service on top of the database. There are two main options you can use for this service:
Polling service
The service is actually a client calling some web service on your network and asking for instructions.
Benefits: easy to implement and deploy.
Downsides: With polling there is always the cost-benefit trade-off of scalability/bandwidth use vs. speed of service. There are also some considerations in choosing the polling schedule, to prevent all the clients from polling at the same time.
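A minimal sketch of the polling idea in C# is below; the endpoint URL, the poll interval, and the random jitter used to stagger clients are all illustrative assumptions, not part of this answer:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class InstructionPoller
    {
        private static readonly HttpClient Http = new HttpClient();
        private static readonly Random Jitter = new Random();

        // Runs on the customer's server: periodically asks the datacenter for work.
        public static async Task RunAsync()
        {
            while (true)
            {
                try
                {
                    // Hypothetical endpoint on your datacenter that returns pending commands.
                    string commands = await Http.GetStringAsync(
                        "https://datacenter.example.com/api/commands?site=customer-001");

                    // TODO: parse the commands, execute them against the local SQL
                    // database or network relays, then post the results back.
                    Console.WriteLine("Received: " + commands);
                }
                catch (HttpRequestException ex)
                {
                    Console.WriteLine("Poll failed, will retry: " + ex.Message);
                }

                // Base interval of 60s plus random jitter so all customers
                // don't hit the datacenter at exactly the same moment.
                await Task.Delay(TimeSpan.FromSeconds(60 + Jitter.Next(0, 30)));
            }
        }
    }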
The service is a TCP server
This can be a normal web service (or RESTful service) or some other service. The only difference is that it needs to advertise itself. For that you need a known directory server. When the service starts, it connects to the directory service and tells it the port it can be contacted on (the directory knows the IP from the connection). It then needs to periodically contact the directory to let it know it is still alive, so that any change in IP is detected (see the sketch below).
A client on your network would now query the directory to find the address of the client and connect directly to it to issue commands.
Benefit: Scalable and bandwidth efficient.
Downside: More difficult to implement. Requires firewall traversal solutions (UPnP or firewall exceptions).
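For the directory-server option, a rough sketch of the registration heartbeat is below. The directory URL and the JSON shape are hypothetical; the key point is just that the customer's service periodically reports the port it is listening on so the directory can record its current public IP:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class DirectoryRegistration
    {
        private static readonly HttpClient Http = new HttpClient();

        // Runs on the customer's server alongside the service it advertises.
        public static async Task HeartbeatAsync(string siteId, int listeningPort)
        {
            while (true)
            {
                // Hypothetical directory endpoint; it reads the caller's public IP
                // from the connection itself, so only the port needs to be sent.
                var body = new StringContent(
                    "{\"siteId\":\"" + siteId + "\",\"port\":" + listeningPort + "}",
                    Encoding.UTF8, "application/json");

                try
                {
                    await Http.PostAsync("https://directory.example.com/api/register", body);
                }
                catch (HttpRequestException ex)
                {
                    Console.WriteLine("Registration failed, will retry: " + ex.Message);
                }

                // Re-register every few minutes so IP changes are picked up
                // and stale entries can be expired by the directory.
                await Task.Delay(TimeSpan.FromMinutes(5));
            }
        }
    }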
I have a little .NET 2.0 systray app (C#) that checks for network connectivity periodically. It does this by attempting to open a connection to a SQL Server instance on another computer (and selecting a row from a table). The application saves documents created by another process to a database when it finds a connection. It is going to be used in environments with potentially dicey wireless networks.
In testing, our QA team is using ipconfig /release on the DB server (hosting a SQL Server 2005 DB). What we found was that the application continued to claim it was connected to the network, because it kept right on successfully opening connections to SQL Server. I found the systray app's behavior erratic in my own testing using ipconfig /release.
At the suggestion of our (not currently present) network guy, I changed my own internal testing (app hosted on a VM, connecting to a DB on my workstation) to instead turn off the VM's network connection. This produces the expected behavior (the systray app can't find a network connection). The QA guys are a little leery about my suggestion that they do the same, and I need to put them at ease.
It was suggested to me that SQL Server was using named pipes to accept incoming connections. If both Named Pipes and TCP/IP are enabled, does this invalidate the ipconfig /release test?
I don't really know much about networking. Named pipes, from what I have read, sound like they are designed for use between applications on the same server. But can they be used for intranet communication?
Is there something else going on here that I am unaware of? Something about how ipconfig /release works?
For your application, I would only use the TCP/IP communication protocol. Although SQL Server supports other protocols such as named pipes, I would disable them on your server so it only accepts TCP/IP connections. This has the least overhead and should perform best regardless of connection speed.
Named pipes is a different protocol than TCP/IP, so releasing the IP address may not affect named pipe communication at all (which sounds like what is happening).
On the SQL Server machine, put TCP/IP as the number 1 protocol and disable named pipes. Then have QA re-run the test. I have included a configuration screenshot for reference.
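As an extra check on the client side, you can also force the protocol in the connection string so the app cannot silently fall back to named pipes even if they remain enabled; the tcp: prefix on the Data Source does this for SqlClient (the server name and catalog below are placeholders):

    using System.Data.SqlClient;

    class ConnectivityCheck
    {
        public static bool CanReachDatabase()
        {
            // "tcp:" forces the TCP/IP protocol; "np:" would force named pipes.
            const string connStr =
                "Data Source=tcp:DBSERVER,1433;Initial Catalog=MyDb;" +
                "Integrated Security=SSPI;Connect Timeout=5";

            try
            {
                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
                {
                    conn.Open();
                    cmd.ExecuteScalar();
                    return true;
                }
            }
            catch (SqlException)
            {
                return false;
            }
        }
    }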