Websites passing to remote server [closed] - c#

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I'm planning a project which will require different websites to pass non-personal data to my server for processing/storing in a database. My idea is to set up a
very basic threaded TCP server (in C#, running in the background on the server alongside IIS) and just have it wait for incoming connections. When a connection is made and the data is passed, I will have the server determine whether or not to save it to the database. I plan on hosting static JavaScript files on my server (one for each website) and having the different websites reference them. These JS files will handle passing the data from the websites to my server.
I've never done anything like this, so I'd like to know if there are any major safeguards I need to implement while developing this. The data is not sensitive, so I don't need to encrypt it.
More specific questions:
I plan on verifying parameters once they reach the server (injection safeguards and the like), but is it typical to also confirm the location or authenticity of the request?
Should I make any attempt to hide where the JavaScript files are sending the data? Again, it's not sensitive data; I'm just concerned about exploits I'm not familiar with when passing data from JavaScript to a server.
Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?

(3). Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?
Yes, yes there is. Under no circumstance would I consider taking the route you've outlined. It adds a tremendous amount of code for exactly zero benefit.
Instead do this:
Have your primary server be a regular web server. Let it serve the javascript files directly to the clients.
The other web servers should simply include a regular script reference to your server that is hosting the JavaScript. This is no different from how Google tracking works, or even how jQuery is often delivered.
Your primary server should also have a set of web services exposed. These will accept the information for processing. Whether you use generic handlers (.ashx), web services (.asmx), or Windows Communication Foundation (.svc) is immaterial.
Under no circumstances should you bother writing TCP-level code between JavaScript and your server. You will pull your hair out, and the people who end up maintaining it will at some point in the very near future delete it all anyway.
Use the traditional means. There is zero reason to do things the way you've identified.
Beyond this, read @Frug's answer. You can't trust the JavaScript once it reaches the browser anyway. It will be read, opened, and potentially modified. The only thing you can do is use SSL to ensure that the traffic isn't captured or modified between you and the end client's browser.

There is nothing specific you need to watch out for, because there is nothing you can do to ensure that the JavaScript is secured from being modified... so operate under the assumption that they can override your JavaScript. That means you need to sanitize the inputs on the server side against SQL injection.
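To make "sanitize the inputs on the server side" concrete: whitelist the shapes you expect before the data goes anywhere near a query, and then bind values as parameters (SqlParameter in C#) instead of concatenating them into the SQL string. A minimal sketch with invented field names:

```javascript
// Whitelist, don't blacklist: accept only the exact shapes we expect.
function sanitizeScore(input) {
  const player = String(input.player || '');
  const score = Number(input.score);
  if (!/^[A-Za-z0-9_]{1,32}$/.test(player)) return null;
  if (!Number.isInteger(score) || score < 0 || score > 1000000000) return null;
  return { player: player, score: score };
}
// The query layer then binds values rather than splicing strings:
//   "INSERT INTO scores (player, score) VALUES (@player, @score)"
```

Even with validation in place, parameterized queries remain the actual injection defense; the whitelist just rejects junk early.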
If it's not sensitive and there's not a whole lot of data, you can even pass it by having your JavaScript put it in the URL as a query string (like a GET-method form). It can be really easy to debug stuff sent that way.
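This is the classic tracking-beacon pattern (it's how analytics scripts typically report home). A sketch of the URL-building half, with a made-up endpoint:

```javascript
// Encode a small payload into a query string for a GET-style beacon.
function buildBeaconUrl(base, data) {
  const pairs = Object.keys(data).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(data[k]);
  });
  return base + '?' + pairs.join('&');
}
// In the browser you would then fire it with something like:
//   new Image().src = buildBeaconUrl('https://example.com/collect', { page: location.pathname });
```

Keep in mind URLs have practical length limits (a couple of kilobytes), so this only suits small payloads.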

Related

creating demo version of c# windows application [closed]

Closed 9 years ago.
I have written a Windows application in C#.
I wish to create a demo version of it. How can I use a timer so that the installed software runs for a definite period of time?
Also, is it possible that even if the user's machine is formatted, the application won't install after my preset time is exhausted?
How can I use a timer so that the installed software runs for a definite period of time?
You create your own hashing function which generates a license key, where one of the components of the key is the expiry date. You save that key into a file, and on every start-up check the value inside it against the actual date on the PC.
Pro: Easy to implement and put into production.
Con: Easy to hack. It's enough to change the DateTime settings of the OS.
Another option: use some licensing software.
Look here for possible options: Licensing System for .NET
Yet another: a web service where you check the date (this avoids client-side date cheating).
Yet another: limit the demo version not by time but by functionality (only limited features are available, you can save data to disk a limited number of times, you can run the application a limited number of times, and so on).
Also, is it possible that even if the user's machine is formatted, the application won't install after my pre-set time is exhausted?
No, it's not possible, as formatting means the data is completely erased.
If you're not going to connect the application to a server-side registration mechanism (which I would recommend, see below), you can place a value in the Registry that records when the trial started. Subtract that value from the current date and time when the application loads and you'll be able to determine when it should stop. However, this is not safe from a format or from hacking.
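The Registry access itself is C#-specific (Microsoft.Win32.Registry), but the "subtract the stored date from the current date" arithmetic is trivial to sketch:

```javascript
// Days left in the trial, given the stored first-run date (ISO strings).
function trialDaysLeft(firstRunIso, nowIso, trialDays) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const used = Math.floor((Date.parse(nowIso) - Date.parse(firstRunIso)) / msPerDay);
  return Math.max(0, trialDays - used);
}
```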
My recommended solution is to use server-side registration of the trial software during the first load. This ensures that even if they reformat the drive, they can't get past the registration. It still isn't foolproof, because they could register under many aliases, but it's at least a lot more trouble. One remaining issue with this idea is what happens if they aren't connected to the internet? Are you going to stop them from using the application? You could couple this idea with the first one and fall back to the Registry when they don't have an internet connection.
Either way, preventing people from hacking your registration process is difficult at best. Microsoft has struggled with it since their inception.

Client in C++, Server in C# [closed]

Closed 10 years ago.
I'm creating an MMO game. I will be using TCP client/server communication. At first my approach was to write both the client and the server in C++, but now I'm starting to wonder if it wouldn't be easier to write the server in C#.
The client is going to be written in C++ because it needs to be cross-platform, but the server will always be on a Windows system. I'm considering C# because it has an easier way of handling threads, a built-in XML parser, etc.
My question is whether this would be a good solution. Performance is still important to me, so if choosing C# over C++ would have a drastic influence on performance, I'd stick with C++.
Also, if you think it's a good idea, do you know of any tutorials that present communication between a C# server and a C++ client?
Thanks in advance for answers.
The performance difference between C++ and C# is not as large as you might think.
For the communications, if you're bothered about performance, use sockets and something like Google Protocol Buffers or Thrift. If you're less bothered about performance, use HTTP and JSON or XML.
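One practical point whichever serializer you choose: TCP delivers a byte stream, not discrete messages, so the protocol needs framing on top of the serialization. The usual scheme is a length prefix, sketched here:

```javascript
// Length-prefixed framing: 4-byte big-endian length, then the payload.
function frame(payload) {
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Split a received byte stream back into complete messages; whatever
// bytes remain belong to a message that hasn't fully arrived yet.
function deframe(stream) {
  const messages = [];
  let offset = 0;
  while (offset + 4 <= stream.length) {
    const len = stream.readUInt32BE(offset);
    if (offset + 4 + len > stream.length) break; // incomplete tail
    messages.push(stream.slice(offset + 4, offset + 4 + len));
    offset += 4 + len;
  }
  return { messages: messages, rest: stream.slice(offset) };
}
```

The same framing is straightforward to implement on both the C# (BinaryReader/NetworkStream) and C++ sides, which is exactly why it's a common choice for cross-language protocols.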
Using different languages for client and server forces you to write quite a few things twice, in separate languages, that I would personally want to keep in sync:
Serialization and deserialization, although you could of course use a library for that. Google Protocol Buffers comes to mind, as it is explicitly designed to save bandwidth and work cross-language.
Almost every entity in your game will have a more or less common set of properties. With different languages, you would have to keep these in sync manually and probably write them twice. This is especially annoying with more complex setters ;)
You will want some shared logic to "emulate" server answers on the client. It's important to predict on the client side what the server will do, to get "snappy" behaviour. And what could emulate that behaviour better than using the same codebase for server and client?
I wouldn't see a great problem with performance when using C# on the server, though. That shouldn't be an aspect that strongly influences your decision.

Get Database Results from a Multiplayer game [closed]

Closed 11 years ago.
I hope you can help me or at least point me to the direction.
Everything has been programmed in C#, but client applications can be in whatever language; they just need to know the TCP protocol used to communicate with the servers.
I made a server application which includes a lobby and a game server. You can mount as many game servers as you need (actually, I have three, just for testing, each one with a different kind of game).
The client application connects to the lobby for authentication and other little things, and requests a game... after this it's redirected to the appropriate game server.
All the information processed during the game (statistics, chats, ...) is saved in a PostgreSQL database (you can configure it to use MySQL or MS SQL instead).
Now, the players would like to run some queries to get past info about themselves, or stats, or whatever... my question is:
Should I let players query the database directly, i.e. obtain the results from the database server (sending the respective stored-procedure command to get the results)?
Or (since they keep an active connection with the lobby server by socket, and another with the game server)
should I receive the request and send the results of each query via the lobby server, using the permanently active socket connection?
Which one performs better, or is more secure? Or would you recommend another approach?
It's worth mentioning that 5,000-10,000 concurrent connections, or even more, are expected across all game servers.
I forgot to mention that some queries can have large results, say 500-2000 records (rows) with several columns.
I also forgot to say that I was thinking of running the queries via socket, processing and packaging the query on the server and sending the result to the player(s) zipped (compressed).
Thanks in advance
E/R
You should absolutely not have the players' clients directly submitting queries to your database servers; your game servers should have their own protocol for the queries the clients submit, and the 'game query servers' should sanity-check those queries before building the actual SQL.
Under no circumstances should you be building queries directly from user-submitted info, though.
The most secure option is obviously the one where the user never talks to the SQL server directly, that is, where the lobby server handles all connections and communication (assuming no exploitable bugs on either server, of course). You also have more control over the implementation of security: one user (the lobby server), most likely in the same domain, gets read and execute permissions for exactly and only the databases it needs. Performance < security, so the performance question really does not matter much; besides, performance is really rather a matter of your implementation anyway.
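One way to sketch the lobby-server approach: clients send a query *name* plus arguments, never SQL, and the server maps names onto the stored procedures it alone is permitted to execute. The names below are invented for illustration:

```javascript
// The only queries a client can ever trigger are the ones listed here.
const allowedQueries = {
  playerStats:  { proc: 'sp_get_player_stats',  args: ['playerId'] },
  matchHistory: { proc: 'sp_get_match_history', args: ['playerId', 'limit'] }
};

// Turn a client request into a stored-procedure call, or reject it.
function resolveQuery(request) {
  const spec = allowedQueries[request.name];
  if (!spec) return null; // unknown query name: reject outright
  const args = spec.args.map(function (a) {
    return request.args ? request.args[a] : undefined;
  });
  if (args.some(function (v) { return v === undefined; })) return null;
  return { proc: spec.proc, args: args };
}
```

Since the client never supplies SQL, injection reduces to validating the argument values before binding them as parameters.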

Windows application or website [closed]

Closed 11 years ago.
I want to write a warehouse program which will have about 80 clients. The program has to produce some reports, control the entrance and exit of commodities in the warehouse, and store data in a SQL Server database. I need to know which is better: a Windows application, which gives me a lot of features, or a website, which reduces my ability to use objects? I should mention that the number of clients will not increase.
Why don't you create a web service which does all the actual work? That way it's quite easy to build either a Windows app, a web app, or both.
It's really easy if you use WCF.
Why do you say that a website reduces your ability to use objects?
I would always go with a web application and not a CSA (Client-Server Application).
Reasons (source: http://www.eforcesoftware.com/brochures/Web%20Benefits%20over%20Client%20Server%20Technology.pdf):
- Web applications are easier to deploy (install on the server only). They give workers secure, easy and instant access to enterprise applications, information, processes and people, no matter where they are located, from anywhere, at any time, using any device, over any connection.
  - CSAs must have software installed on both the client and the server. You must have people update both sides for software updates and upgrades.
- Web applications give easier access (all you need is a secure internet connection and a browser).
  - CSAs need to have the client piece installed and are more difficult to share outside a firewall.
  - CSAs run slowly when you are not on the same network.
- Web applications are centrally administered. This enables IT staff to manage applications centrally, simplifying their deployment, monitoring and measurement.
  - CSAs are more complex to administer centrally.
- Web applications require less processing power on the client.
  - CSAs require more processing power on clients and are more expensive.
- Web applications are more flexible (they can be tailored to your needs more easily). They can easily be integrated with other agency systems and databases to provide a seamless agency-wide system.
  - CSAs are delivered as binary applications and are harder to customize.
- Web applications don't require much training (everyone knows how to use a web browser and surf the Internet).
  - CSAs require more detailed training and take more time for users to get comfortable with and then adopt.
Have you thought about Silverlight?
It's basically a browser-plugin, but the Silverlight apps are almost as good as "real" Windows apps (Silverlight is a stripped-down WPF, basically).
It's a bit the best of both worlds - rich and powerful UI, and basically installed/deployed on a web server.
How does a website reduce your ability to use objects? The server side is still programmed in C# if you use ASP.NET. Plus, ASP.NET gives you a lot more flexibility in handling more user connections via IIS.

What happens if I post to a site a million times repeatedly? [closed]

Closed 11 years ago.
I was just trying to post something to a website from my localhost to retrieve some data, and suddenly this idea came to my mind: what happens if I create a POST request, put it into a for loop, and send it to a specific URL a million times? I didn't actually try it, to avoid doing any harm, but I wonder. And if this could cause harm, how can I avoid such an attack?
This kind of thing actually happens a lot. Some of it is intentional and some is not; take for example: http://en.wikipedia.org/wiki/Slashdot_effect
Other times it is intentional, and it's called a DoS (Denial of Service). A lot of websites are taken down with these attacks, and they don't always involve an actual connection; it may suffice to saturate the listen backlog of the underlying OS.
How to avoid it? You can't, basically. You can make a best effort, but you will never be able to avoid it entirely. After all, your website is there to be accessed, right?
You could add a rule to your firewall to block a specific IP address if that were to happen. If it's a sophisticated denial of service, the IP addresses are probably spoofed and random. But for a normal web site, you won't need to worry about this.
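For the naive single-source flood in the question, an application-level per-IP throttle is only a few lines (the limit and window values here are arbitrary):

```javascript
// Allow at most `limit` requests per IP within a sliding `windowMs` window.
function makeThrottle(limit, windowMs) {
  const hits = new Map(); // ip -> timestamps of recent requests
  return function allow(ip, now) {
    const recent = (hits.get(ip) || []).filter(function (t) {
      return now - t < windowMs;
    });
    if (recent.length >= limit) { hits.set(ip, recent); return false; }
    recent.push(now);
    hits.set(ip, recent);
    return true;
  };
}
```

As noted above, this only helps against a single real source; spoofed or distributed traffic has to be handled at the network level.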
Well, the server will get progressively bogged down as it works through all 1,000,000 of those requests. Odds are, unless you have legendary hardware, it will become unresponsive and next to useless, creating a great disruption for everyone wanting to access it. This is called a Denial of Service attack, or DoS.
There are a few things you can do to prevent this:
Require users to verify that they are human before the server will process their request. This is usually done with Captchas.
Use an intelligent firewall to drop the packets or figure out how to have the server ignore requests from IP addresses that have been sending too many.
Make sure everybody loves your site so much that they wouldn't even think of doing anything to hurt it.
1 is probably the most effective and simplest to do, and 3 is impossible. I can't offer a lot of advice about 2 due to lack of experience; it's probably fairly difficult to get right and still easy enough to circumvent.
Short Story: Go with a Captcha. ;)
