I hope you can help me, or at least point me in the right direction.
Everything has been programmed in C#, but client applications can be written in any language; they just need to know the TCP protocol used for communicating with the servers.
I made a server application which includes a lobby and a game server. You can mount as many game servers as you need (I currently have three, just for testing, each one hosting a different kind of game).
The client application connects to the lobby for authentication and other small tasks, and requests a game; after this it is redirected to the appropriate game server.
All the information processed during the game (statistics, chats, and so on) is saved in a PostgreSQL database (this can be configured to use MySQL or MS SQL instead).
Now the players would like to run some queries to get past information about themselves, stats, and so on. My question is:
Should I let players query the database directly, i.e. obtain the results from the database server (sending the respective stored procedure command to get the results)?
Or, since they already keep an active socket connection to the lobby server (and another to the game server), should I receive the request and send back the results of each query via the lobby server, using that permanent connection?
Which option performs better, and which is more secure? Or would you recommend a different approach?
It's worth mentioning that 5,000 - 10,000 concurrent connections, or even more, are expected across all game servers.
Edit: I forgot to mention that some queries can have large results, around 500-2,000 records (rows) with several columns.
Edit: I also forgot to say that I was thinking of sending the queries via socket, processing and packaging the query on the server, and sending the result to the player(s) zipped (compressed).
Thanks in advance
E/R
You should absolutely not have the players' clients directly submitting queries to your database servers; your game servers should have their own protocol for the queries that clients submit, and the 'game query servers' should sanity-check those requests before building the actual SQL.
Under no circumstances should you be building queries directly from user-submitted info, though.
The most secure option is obviously the one where the user never talks to the SQL server at all, that is, where the lobby server handles all connections and communication (assuming no exploitable bugs on either server, of course). You also have more control over the implementation of security: a single account (the lobby server), most likely in the same domain, gets read and execute permissions for exactly the databases it needs and nothing more. Performance < security, so the performance question really doesn't matter much; besides, performance is mostly a matter of your implementation anyway.
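To make that concrete, here is a minimal sketch of how the lobby server might answer a stats request: the client sends only an opcode and a player id over the socket, the server maps it to a parameterized stored-procedure call (never raw SQL from the client), and the result set is gzipped before going back, as the question proposes. The message handling, the get_player_stats function, and the row serialization are all hypothetical; the data access assumes the Npgsql driver since the database is PostgreSQL.

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;
using Npgsql; // PostgreSQL driver; assumed since the question uses PostgreSQL

static class StatsHandler
{
    // Hypothetical handler: the client only ever sends an opcode plus a
    // player id over the socket; it never sends SQL. The server owns the query.
    public static byte[] HandleStatsRequest(int playerId, string connectionString)
    {
        using (var conn = new NpgsqlConnection(connectionString))
        {
            conn.Open();
            // Parameterized call to a (hypothetical) stored function --
            // the client-supplied value is bound, never concatenated.
            using (var cmd = new NpgsqlCommand(
                "SELECT * FROM get_player_stats(@pid)", conn))
            {
                cmd.Parameters.AddWithValue("pid", playerId);
                using (var reader = cmd.ExecuteReader())
                {
                    var sb = new StringBuilder();
                    while (reader.Read())
                        sb.Append(reader[0]).Append('\t').Append(reader[1]).AppendLine();
                    return Compress(Encoding.UTF8.GetBytes(sb.ToString()));
                }
            }
        }
    }

    // Gzip the serialized result set before writing it to the socket,
    // which keeps the 500-2,000 row responses mentioned in the question small.
    private static byte[] Compress(byte[] payload)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(payload, 0, payload.Length);
            return output.ToArray();
        }
    }
}
```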
I am interested in writing a C# client-server application, and these are my requirements:
1. The client app must be a Windows service.
2. The client side stores some files, zips them, and sends them by HTTP (not TCP, UDP, or raw sockets) to the server.
3. The client side must check the server; if the server is offline, the zip files must be stored somewhere on the local machine, and the client must check the server on a schedule to start transferring them.
4. The server sends commands to the client (or the client polls the server for commands), and the commands need to be executed on the client side.
5. The client needs to check the server for some parameters, or fall back to defaults (if the server is offline or does not supply any parameters).
I need some ideas about how to implement this. Does anyone have any suggestions?
I would start by thinking about a WCF service in your server application, then have the clients communicate with the server. If you're asking for all the specifics of how to go about this challenge, I suggest you start by researching WCF and seeing if you can get a basic client/server up and running first, then ask about specific issues.
Then, progress to a WCF host in the client, so the server can communicate back.
Then you can tackle such things as file transfers, offline availability etc.
I've not done this, but this is how I would figure out how to start tackling this.
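In the meantime, the zip-and-upload-with-offline-fallback part (requirements 2 and 3) can be sketched with plain HTTP, independent of WCF. The endpoint URL and folder paths below are made up, so treat this as a starting point only:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Net;

static class Uploader
{
    public static void SendOrQueue(string filePath)
    {
        // Hypothetical endpoint and queue folder -- adjust to your environment.
        const string uploadUrl = "http://myserver/upload";
        const string pendingDir = @"C:\AppData\PendingZips";

        // Zip the file first (requirement 2).
        string zipPath = filePath + ".zip";
        using (var zip = ZipFile.Open(zipPath, ZipArchiveMode.Create))
            zip.CreateEntryFromFile(filePath, Path.GetFileName(filePath));

        try
        {
            // Send over HTTP as the question requires (POST by default).
            using (var client = new WebClient())
                client.UploadFile(uploadUrl, zipPath);
            File.Delete(zipPath);
        }
        catch (WebException)
        {
            // Server offline: keep the zip locally (requirement 3).
            // A scheduled timer can later retry everything in pendingDir.
            Directory.CreateDirectory(pendingDir);
            File.Move(zipPath, Path.Combine(pendingDir, Path.GetFileName(zipPath)));
        }
    }
}
```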
I used C# in Visual Studio 2008 and some SQL Server databases in my program,
but while the program runs well here on my computer, on other PCs it just won't run :(
The error is "The application failed to initialize properly (0xc0000135). Click on OK to terminate the application".
Why does this error happen?
Without more details this question is virtually impossible to answer. Here are a few things to check/try:
Run the program on your PC outside of visual studio
Make your database available on the same network your client PCs are on
If this is indeed a networked installation, make sure you can ping the server from your client PCs
Make sure you've enabled remote connections to your database
Make sure your firewall isn't getting in the way of database connections
If you're still experiencing problems you should consider adding logging of some kind to your application (this is a good thing to do regardless of any problems you're experiencing) so you can find out at what point your application is failing. If you are getting error messages, posting those messages here will help us figure out what the problem is much more quickly. Also, if you can, put some code into your question so we can see what it is you're trying to achieve
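As a minimal example of the kind of logging suggested above (the file name and format are arbitrary):

```csharp
using System;
using System.IO;

static class Log
{
    // Append a timestamped line to a file next to the executable, so you
    // can see how far the application gets before it fails.
    public static void Write(string message)
    {
        File.AppendAllText("app.log",
            DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss") + "  " + message
            + Environment.NewLine);
    }
}

// Usage at startup, for example:
//   Log.Write("Starting up");
//   Log.Write("Connecting to database...");
```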
Make sure your database is available on the network, and change to use SQL Server authentication.
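For instance, a connection opened with SQL Server authentication might look like the following; the server, database, and credentials are placeholders:

```csharp
using System.Data.SqlClient;

// SQL Server authentication: the credentials travel with the connection
// string, so client PCs don't need domain accounts. All values below are
// placeholders -- substitute your own server, database, and login.
var connectionString =
    "Data Source=MYSERVER\\SQLEXPRESS;Initial Catalog=MyDatabase;" +
    "User ID=appuser;Password=apppassword;";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // throws if the server is unreachable or the login fails
}
```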
You could go to SQL Server Management Studio on your local PC, create a public user account under Security, and add public and connect permissions to that user.
If you are on a domain, and want to use Windows Authentication add the user accounts for the Windows Users to that Database Security. Either option will work.
Are you using WinForms or a web app? For web apps, verify your settings in web.config and make sure your connection string is legit. Make sure you change your authentication to Windows or Forms, based on what you decide to do.
I want to write a warehouse program which will have around 80 clients. The program has to produce some reports, control the entrance and exit of commodities in the warehouse, and store data in a SQL Server database. I need to know which one is better: a Windows application, which gives me a lot of features, or a website, which reduces my ability to use objects? I should mention that the number of clients will not increase.
Why don't you create a web service which does all the actual work? That way it's quite easy to build either a Windows app, a web app, or both.
It's really easy if you use WCF.
Why do you say that it reduces your ability to use objects?
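As a rough illustration of the web-service idea: a single WCF contract could back both a Windows client and a website, so you don't have to choose up front. The operations below are invented for the example:

```csharp
using System.ServiceModel;

// One service contract that a WinForms client and an ASP.NET site could
// both consume, keeping the warehouse logic in a single place.
// The operations below are invented for illustration.
[ServiceContract]
public interface IWarehouseService
{
    [OperationContract]
    void RegisterEntry(string commodityCode, int quantity);

    [OperationContract]
    void RegisterExit(string commodityCode, int quantity);

    [OperationContract]
    string[] GetStockReport();
}
```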
I would always go with a web application and not a CSA (Client Server Application).
Reasons (Source: http://www.eforcesoftware.com/brochures/Web%20Benefits%20over%20Client%20Server%20Technology.pdf)
- Web applications are easier to deploy (install on the server only). They give workers secure, easy, and instant access to enterprise applications, information, processes, and people, no matter where they are located, from anywhere, at any time, using any device, over any connection.
  o CSAs must have software installed on both the client and the server. You must have people update both sides for software updates and upgrades.
- Web applications give easier access (all you need is a secure internet connection and a browser).
  o CSAs need to have the client piece installed and are more difficult to share outside a firewall.
  o CSAs run slowly when you are not on the same network.
- Web applications are centrally administered. This enables IT staff to manage applications centrally, simplifying their deployment, monitoring, and measurement.
  o CSAs are more complex to administer centrally.
- Web applications require less processing power on the client.
  o CSAs require more processing power on clients and are more expensive.
- Web applications are more flexible (they can be tailored to your needs more easily). They can easily be integrated with other agency systems and databases to provide a seamless agency-wide system.
  o CSAs are delivered as binary applications and are harder to customize.
- Web applications don't require much training (everyone knows how to use a web browser and surf the Internet).
  o CSAs require more detailed training, and it takes more time for users to get comfortable with and then adopt the new system.
Have you thought about Silverlight?
It's basically a browser-plugin, but the Silverlight apps are almost as good as "real" Windows apps (Silverlight is a stripped-down WPF, basically).
It's a bit of the best of both worlds: a rich and powerful UI, and basically installed/deployed on a web server.
How does a website reduce your ability to use objects? The server-side programming is still C# if you use ASP.NET. Plus, ASP.NET gives you a lot more flexibility in handling more user connections via IIS.
I was just trying to post something to a website via my localhost to retrieve some data, and suddenly this idea came to my mind: what happens if I create a POST request, put it inside a for loop that runs a million times, and send the request to a specific URL a million times? I haven't tried it, to avoid doing any harm, but I do wonder. And if this could cause harm, how can I avoid such an attack?
This kind of thing actually happens a lot. Some of it is intentional and some is not; take, for example, the Slashdot effect: http://en.wikipedia.org/wiki/Slashdot_effect
Other times it is intentional, and it's called a DoS (Denial of Service) attack. A lot of websites are taken down with these attacks, and they don't always involve an actual connection; it may suffice to saturate the listen backlog of the underlying OS.
How do you avoid it? You can't, basically. You can make a best effort at it, but you will never be able to avoid it entirely. After all, your website is there to be accessed, right?
You could add a rule in your firewall to block a specific IP address if that were to happen. If it is a sophisticated denial of service, I'm sure the IP address is spoofed and will be random. But for normal web sites, you won't need to worry about this.
Well, the server will get progressively bogged down as it tries to handle all 1,000,000 of those requests. Odds are, unless you have legendary hardware, it will become unresponsive and next to useless, creating a great disruption for everyone wanting to access it. This is called a Denial of Service attack, or DoS.
There's a few things you can do to prevent this:
Require users to verify that they are human before the server will process their request. This is usually done with Captchas.
Use an intelligent firewall to drop the packets or figure out how to have the server ignore requests from IP addresses that have been sending too many.
Make sure everybody loves your site so much that they wouldn't even think of doing anything to hurt it.
1 is probably the most effective and simplest to implement, and 3 is impossible. I can't offer a lot of advice about 2 due to lack of experience; it's probably fairly difficult to get right and easy enough to circumvent.
Short Story: Go with a Captcha. ;)
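For what it's worth, option 2 can be approximated in application code with a naive in-memory counter per IP; real deployments would do this in the firewall or a reverse proxy, and the window and threshold below are arbitrary:

```csharp
using System;
using System.Collections.Concurrent;

// Naive in-memory per-IP throttle: allow at most MaxRequests per window.
// In production this belongs in the firewall or a reverse proxy, not here.
static class Throttle
{
    private const int MaxRequests = 100;                       // arbitrary
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
    private static readonly ConcurrentDictionary<string, Tuple<DateTime, int>> Hits =
        new ConcurrentDictionary<string, Tuple<DateTime, int>>();

    public static bool Allow(string ip)
    {
        DateTime now = DateTime.UtcNow;
        Tuple<DateTime, int> entry = Hits.AddOrUpdate(
            ip,
            _ => Tuple.Create(now, 1),
            (_, e) => now - e.Item1 > Window
                ? Tuple.Create(now, 1)                 // window expired: reset
                : Tuple.Create(e.Item1, e.Item2 + 1)); // same window: count up
        return entry.Item2 <= MaxRequests;
    }
}
```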
I'm planning a project which will require different websites to pass non-personal data to my server for processing/storing in a database. My idea is to set up a very basic threaded TCP server (in C#, running in the background on a server running IIS) and just have it wait for incoming connections. When a connection is made and the data is passed, I will have the web server determine whether or not to save it to the database. I plan on saving static javascript files to my server (one for each website) and having the different websites reference them. These js files will handle passing data from the websites to my server.
I've never done anything like this, so I'd like to know if there are any major safeguards I need to implement while developing this. The data is not sensitive, so I don't need to encrypt it.
More specific questions:
I plan on verifying parameters once they reach the server (injection safeguards and the like), but is it typical to also confirm the origin of the request, or its authenticity?
Should I make any attempt to hide where the javascript files are sending the data? Again, it's not sensitive data; I'm just concerned about exploits I'm not familiar with when dealing with javascript -> server.
Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?
(3). Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?
Yes, yes there is. Under no circumstance would I consider taking the route you've outlined. It adds a tremendous amount of code for exactly zero benefit.
Instead do this:
Have your primary server be a regular web server. Let it serve the javascript files directly to the clients.
The other web servers should simply include a regular script reference to your server that is hosting the javascript. This is no different than how google tracking works or even how jQuery is often delivered.
Your primary server should also have a set of web services exposed. These will accept the information for processing. Whether you use generic handlers (.ashx), web services (.asmx), or Windows Communication Foundation (.svc) is immaterial.
Under no circumstances should you bother writing TCP-level code in javascript. You will pull your hair out, and the people who end up maintaining it will at some point in the very near future delete it all anyway.
Use the traditional means. There is zero reason to do things the way you've identified.
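As a sketch of the generic-handler option mentioned above (the handler name and parameter names are invented):

```csharp
using System.Web;

// A minimal .ashx generic handler -- the "traditional means" referred to
// above. The websites' script includes can send data to this URL with a
// plain GET or POST. Parameter names are invented.
public class CollectHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Validate everything server-side; never trust the javascript.
        string site = context.Request["site"];
        string payload = context.Request["data"];

        if (string.IsNullOrEmpty(site) || string.IsNullOrEmpty(payload))
        {
            context.Response.StatusCode = 400;
            return;
        }

        // Persist via a parameterized query here (not shown) to avoid injection.
        context.Response.ContentType = "text/plain";
        context.Response.Write("ok");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```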
Beyond this, read #Frug's answer. You can't trust the javascript once it reaches the browser anyway. It will be read, opened, and potentially modified. The only thing you can do is use SSL to ensure that the traffic isn't captured or modified between you and the end client browser.
There is nothing specific you need to watch out for, because there is nothing you can do to ensure that the javascript is secure from modification... so operate under the assumption that users can override your javascript. That means you need to be sure to sanitize inputs on the server side against SQL injection.
If it's not sensitive and there's not a whole lot of data, you can even pass it by having your javascript shove it in the URL as a querystring (like a get method form). It can be really easy to debug stuff sent that way.