Windows application or website [closed] - c#

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I want to write a warehouse program that has about 80 clients. The program has to produce some reports, control the entrance and exit of commodities in the warehouse, and store data in a SQL Server database. I need to know which is better: a Windows application, which gives me a lot of features, or a website, which reduces my ability to use objects? I should mention that the number of clients will not increase.

Why don't you create a web service which does all the actual work? That way it's quite easy to build either a Windows app, a web app, or both.
It's really easy if you use WCF.
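As a minimal sketch of that split (the interface and names here are made up for illustration; in real WCF you would decorate the interface with [ServiceContract] and each method with [OperationContract]):

```csharp
using System.Collections.Generic;

// Hypothetical warehouse service contract. Both a Windows app and a
// web app could call the same service, so the choice of front end
// stops being an architectural commitment.
public interface IWarehouseService
{
    void RegisterEntry(string commodityCode, int quantity); // commodity enters the warehouse
    void RegisterExit(string commodityCode, int quantity);  // commodity leaves the warehouse
    int GetStockLevel(string commodityCode);
}

// In a real implementation these methods would read and write SQL Server;
// an in-memory dictionary stands in here just to show the shape.
public class WarehouseService : IWarehouseService
{
    private readonly Dictionary<string, int> stock = new Dictionary<string, int>();

    public void RegisterEntry(string commodityCode, int quantity)
    {
        stock[commodityCode] = GetStockLevel(commodityCode) + quantity;
    }

    public void RegisterExit(string commodityCode, int quantity)
    {
        stock[commodityCode] = GetStockLevel(commodityCode) - quantity;
    }

    public int GetStockLevel(string commodityCode)
    {
        int level;
        return stock.TryGetValue(commodityCode, out level) ? level : 0;
    }
}
```

The reporting code would sit behind the same contract, so whichever client you pick (or both) shares one implementation.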

Why do you say that it reduces your ability to use objects ?
I would always go with a web application and not a CSA (Client Server Application).
Reasons (Source: http://www.eforcesoftware.com/brochures/Web%20Benefits%20over%20Client%20Server%20Technology.pdf)
- Web applications are easier to deploy (install on the server only). They give workers secure, easy and instant access to enterprise applications, information, processes and people, no matter where they are located, from anywhere, at any time, using any device, over any connection.
  - CSAs must have software installed on both the client and the server. You must have people update both sides for software updates and upgrades.
- Web applications give easier access (all you need is a secure internet connection and a browser).
  - CSAs need to have the client piece installed and are more difficult to share outside a firewall.
  - CSAs run slowly when you are not on the same network.
- Web applications are centrally administered, letting IT staff manage applications in one place and simplifying their deployment, monitoring and measurement.
  - CSAs are more complex to administer centrally.
- Web applications require less processing power on the client.
  - CSAs require more processing power on clients and are more expensive.
- Web applications are more flexible (they can be tailored to your needs more easily) and can be integrated with other agency systems and databases to provide a seamless agency-wide system.
  - CSAs are delivered as binary applications and are harder to customize.
- Web applications don't require much training (everyone knows how to use a web browser and surf the Internet).
  - CSAs require more detailed training, and it takes users more time to get comfortable with and adopt the new system.

Have you thought about Silverlight?
It's basically a browser-plugin, but the Silverlight apps are almost as good as "real" Windows apps (Silverlight is a stripped-down WPF, basically).
It's a bit of the best of both worlds: a rich and powerful UI, but installed/deployed on a web server.

How does a website reduce your ability to use objects? The server side is still programmed in C# if you use ASP.NET. Plus, ASP.NET gives you a lot more flexibility in handling more user connections via IIS.

Related

Highly available windows service [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am developing a Windows service (.NET, C#). One of the non-functional requirements is to ensure high availability of this service. I understand that installing the service on a failover cluster will make it highly available. To install it on a cluster, is there any specific code I have to write within the service? I have heard about cluster-aware services, but I have not come across any article that explains how to develop a cluster-aware Windows service. Is anything special really required in a Windows service to install it on a cluster?
Microsoft Failover Cluster Services allows for this high availability without the need for you to code complicated heartbeats into your service. The Failover Cluster Service manages this for you.
For more information, see Microsoft's documentation on failover clusters.
For Windows services, there generally isn't much you need to do beyond writing your service. That said, you may need to use the Failover Cluster API in your code if you need to know about your environment from inside the service, or if you use local resources such as IP addresses or disks. However, most simple services I've seen don't require calls to the clustering APIs and can be installed directly into a properly configured failover cluster with the proper resource groups defined.
See Microsoft's guidance on writing cluster-aware applications.
See Creating a Failover Cluster for help; for services, the clustered role type should be "Generic Service".
Cheers,
cbuzzsaw
First of all this question is EXTREMELY broad, but here are my two cents.
It depends on the service.
If executing multiple instances of your service simultaneously doesn't break its purpose, then you don't need to do anything. If only one instance may be running at a time, then you must coordinate the instances (UDP broadcast messages, maybe?) so that only one is active, and start another one if the active instance stops.
A cluster is just a bunch of machines with the same purpose (yes, yes, there is a lot more to it, but for this case that comparison is enough), so think of it as if you were running that service on multiple machines on a local network.
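For the single-active-instance case on one machine, a named mutex is a common trick. This is only a sketch (the mutex name is made up), and it coordinates processes on a single host; across cluster nodes you would need a shared resource instead, such as a database lock or the cluster service itself:

```csharp
using System.Threading;

// Only the first instance to create the named mutex becomes active;
// later instances see createdNew == false and should stay passive
// (or wait, ready to take over if the active instance dies).
static class SingleInstanceGuard
{
    public static bool TryBecomeActive(string mutexName, out Mutex mutex)
    {
        bool createdNew;
        mutex = new Mutex(true, mutexName, out createdNew);
        return createdNew; // true only for the first (active) instance
    }
}
```

Keep the returned Mutex alive for the lifetime of the active instance; if it is garbage-collected or the process exits, another instance can take over.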

What is a native client? [closed]

Closed 9 years ago.
What is a native client?
Is Native client same as thick client?
Can anybody explain it to me?
Native client for me traditionally means not interpreted by a virtual execution environment or sandbox but executed by the CPU and bound to the operating system (think Win32). I'd contrast native with HTML, JVM, CLR, etc.
Thick client for me traditionally means some business logic executing on the client, (think WPF, WinForms) as opposed to web/browser or other lightweight presentation container where most business logic is executing on the (web) server and minimal logic is executing on the client.
Traditionally, the two distinctions are unrelated, with "native clients" often being "thick". However, with the introduction of devices this distinction has become skewed, since it's not crystal clear anymore if a native app on a little device can still be considered thick. Many people avoid saying "thick" and refer to "rich" instead.
Nishakant, since you asked this in the context of my tweet, let me explain what I meant by it. Native in that context meant a native Windows 8 application: an application that conforms to the new modern UI guidelines, runs on WinRT, is downloaded and installed from the Windows 8 Store, and runs locally on the Windows 8 machine. It isn't a web application, but locally installed. You could correlate it with thick-client applications in the regular desktop world.
Additionally, this particular application is built by Twitter itself and hence another subtle meaning to the word native
While a native client may be just about anything (for example, a Native American paying you to write software for him), I'd say that in terms of software, a native client is a piece of software compiled to native machine code, as opposed to software compiled to intermediate bytecode that an execution environment (Java, .NET, etc.) compiles to machine code when run.
I'm pretty sure that at present, the term Native Client is only used to refer to Google Native Client (NaCl), which is a tool for running native code from within a browser, and yes in this case, Google definitely can explain it to you.
I can only guess, since there is a lack of context. I guess the Native Client you referred to is related to Google Chromium, is that right?
Chromium OS is an OS based on a web browser, which means developers should NOT be able to go deeper than the browser: no directly manipulating the hardware, no optimizing your code at the CPU level, things like that.
However, the need is there, so Native Client is a technology that provides a sandbox to run native code (not truly native, just code written in a typical native language) inside the web browser.
You can see it's not the same as a thick client.

.Net Web Application Redesign - Recommendations? [closed]

Closed 10 years ago.
My employer has a 9-year-old proprietary .NET web application that needs to be redesigned. Its creation was originally outsourced to another company, and its architecture and coding were done very poorly. It was written in ASP.NET 1.1, VB.NET, and C#. We continued building on the poor architecture, improving it when and where possible. We have upgraded to ASP.NET 4.0 and VS 2010.
The system is only used internally, and nearly all business processes and workflows for about 140 users are handled through it. However, the system has grown so big that it has become difficult to maintain, and its architecture makes it difficult to include new functionality in line with the latest technology.
If you could design a .Net system from scratch, keeping mobile technology and Windows 8 in mind, how would you do it? Our SQL 2008 database is sound and we do not intend to change anything there.
.Net Webforms or MVC?
What architecture would you use? Examples? We would need to allow for a customizable business logic layer in the event that an external company with different business rules uses the system.
Any recommended books on designing from the ground floor up?
Other considerations?
This will be a limited answer, but based on my experience I would say the following... If you're going for a rewrite:
Use MVC 4x.
Web Forms are outdated now. If you look at their original purpose you will notice that they were built with existing Windows Forms developers in mind (button events, etc.).
MVC is more of an evolution of the target audience: web developers have matured to the point where they realise how important standards are, and hence need an environment that lets them produce the kind of results modern platforms require. It is also far stronger in the separation-of-concerns arena, with things like unit testing.
For the business layer I would seriously consider Web Services, or in your case WCF. This will allow you to deal with a varied number of situations in the future including business to business scenarios where other companies might want to integrate directly with your internal system.
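On the "customizable business logic layer" requirement, one common approach (a sketch only; the rule interface and names are invented) is to put each business rule behind an interface, so an external company with different rules can ship its own implementation:

```csharp
// Hypothetical rule interface: the core system codes against this
// abstraction, never against a concrete policy.
public interface IDiscountRule
{
    decimal Apply(decimal orderTotal);
}

// Default policy for illustration: 5% off orders of 1000 or more.
// Another company would supply its own IDiscountRule instead.
public class DefaultDiscountRule : IDiscountRule
{
    public decimal Apply(decimal orderTotal)
    {
        return orderTotal >= 1000m ? orderTotal * 0.95m : orderTotal;
    }
}
```

Which implementation gets used could then be chosen from configuration (reflection or an IoC container), so swapping business rules needs no recompilation of the core system.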
Other Advice?
We've been involved with quite a few systems over the years where we've adopted someone else's system. Whether badly written/designed, or just outdated, starting from scratch is always difficult and you should pay serious attention to the following:
Budget - How much is the company really prepared to fork out
User Temperament: If you are working with users who don't like change, then this is an area where you should pay special attention. Get them involved in the process so that they feel that the outcome was as much their doing as yours.
Be careful of loss of functionality. Because of time/money constraints you may redevelop something but end up without the time or money to implement what you considered a minor feature. On deployment your users become agitated because it was in fact a pretty serious piece of functionality. The net result is that they start to lose confidence in the system, and you'll pay dearly for that.
Processes: if you can, consider Agile as a development strategy. It doesn't lend itself well to smaller projects, but it makes a big difference in bigger, long-term projects if implemented correctly.
Skills/Resources: be sure that your team is utilized effectively and has the right skills for the right job.
Wow I could say so much more, but it would take a day or two.
Hope that helps a small amount.
Regards,
Jacques

Get Database Results from a Multiplayer game [closed]

Closed 11 years ago.
I hope you can help me, or at least point me in the right direction.
Everything has been programmed in C#, but client applications can be in any language; they just need to know the TCP protocol used for communicating with the servers.
I made a server application, which includes a lobby and a game server. You can mount as many game servers as you need (I currently have three, just for testing, each one with a different kind of game).
The client application connects to the lobby for authentication and other little things, and requests a game... after this it is redirected to the appropriate game server.
All the information processed during the game (statistics, chats, ...) is saved in a PostgreSQL database (you can configure this to use MySQL or MS SQL instead).
Now, the players would like to make some queries to get past information about themselves, or stats, or whatever... my question is:
Should I let players query the database directly, i.e. obtain the results from the database server (sending the respective stored procedure command to get the results)?
Or, since they keep an active connection with the lobby server by socket (and another with the game server),
should I receive the request and send the results of each query via the lobby server, using the permanent active socket connection?
Which one performs better, or is more secure? Or would you recommend another approach?
I should mention that 5,000-10,000 concurrent connections, or even more, are expected across all game servers.
I forgot to mention that some queries can have large results, say 500-2,000 records (rows) with several columns.
I also forgot to say that I was thinking of running the queries via socket, processing and packaging the query on the server and sending the result to the player(s) compressed (zipped).
Thanks in advance
E/R
You should absolutely not have the players' clients directly submitting queries to your database servers; your game servers should have their own protocol for the queries that clients submit, and the 'game query servers' should sanity-check those queries before building the actual SQL.
Under no circumstances should you be building queries directly from user-submitted info, though.
The most secure option is obviously the one where the user does not know the SQL server at all, i.e. where the lobby server handles all connections and communication (assuming no exploitable bugs on either server, of course). You also have more control over the implementation of security: one user (the lobby server), most likely in the same domain, gets read and execute permissions for exactly the databases it needs them for, and nothing more. Performance < security, so the performance question really does not matter much; besides, performance is mostly a matter of your implementation anyway.
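A minimal sketch of the sanity-checking idea above (query and stored procedure names are invented): clients send a query name over the socket protocol, never SQL, and the lobby server maps it to a stored procedure it trusts.

```csharp
using System.Collections.Generic;

// Clients request queries by name; only whitelisted names resolve to a
// stored procedure. Any client-supplied values (player id, date range)
// would then be bound as parameters on the command, never concatenated
// into the SQL text.
static class QueryGateway
{
    static readonly Dictionary<string, string> queryToProc =
        new Dictionary<string, string>
        {
            { "stats",   "sp_get_player_stats"  },
            { "history", "sp_get_match_history" },
        };

    // Returns the stored procedure for a query name, or null if the
    // name is not on the whitelist.
    public static string Resolve(string queryName)
    {
        string proc;
        return queryToProc.TryGetValue(queryName, out proc) ? proc : null;
    }
}
```

Anything that resolves to null is simply rejected, so malformed or hostile requests never reach the database.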

Websites passing to remote server [closed]

Closed 11 years ago.
I'm planning a project which will require different websites to pass non-personal data to a server for processing and storage in a database. My idea is to set up a very basic threaded TCP server (in C#, running in the background on a server alongside IIS) and just have it wait for incoming connections. When a connection is made and the data is passed, the server will determine whether or not to save it to the database. I plan on hosting static JavaScript files on my server (one for each website) and having the different websites reference them. These JS files will handle passing data from the websites to my server.
I've never done anything like this, so I'd like to know if there are any major safeguards I need to implement while developing this. The data is not sensitive, so I don't need to encrypt it.
More specific questions:
I plan on verifying parameters once they reach the server (injection safeguards and the like), but is it typical to also confirm the location or authenticity of the request?
Should I make any attempt to hide where the JavaScript files are sending the data? Again, it's not sensitive data; I'm just concerned about exploits I'm not familiar with when dealing with JavaScript-to-server communication.
Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?
(3). Is there a different way I should be going about this, given the basics of what I'm trying to accomplish?
Yes, yes there is. Under no circumstance would I consider taking the route you've outlined. It adds a tremendous amount of code for exactly zero benefit.
Instead do this:
Have your primary server be a regular web server. Let it serve the javascript files directly to the clients.
The other web servers should simply include a regular script reference to your server that is hosting the javascript. This is no different than how google tracking works or even how jQuery is often delivered.
Your primary server should also have a set of web services exposed. These will accept the information for processing. Whether you use generic handlers (.ashx), web services (.asmx), or Windows Communication Foundation (WCF) services is immaterial.
Under no circumstance should you bother writing TCP-level code for JavaScript clients to talk to. You will pull your hair out, and the people who end up maintaining it will delete it all in the very near future anyway.
Use the traditional means. There is zero reason to do things the way you've identified.
Beyond this, read @Frug's answer. You can't trust the JavaScript once it reaches the browser anyway; it will be read, opened, and potentially modified. The only thing you can do is use SSL to ensure that the traffic isn't captured or modified between you and the end client's browser.
There is nothing specific you need to watch out for, because there is nothing you can do to stop your JavaScript from being modified... so operate under the assumption that users can override it. That means you need to sanitize the inputs on the server side against SQL injection.
If it's not sensitive and there's not a whole lot of data, you can even pass it by having your JavaScript shove it into the URL as a query string (like a GET-method form). It can be really easy to debug stuff sent that way.
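A sketch of that GET-style approach on the server side (the host and parameter names are invented): the page's script builds a URL carrying the data in the query string, and the server reads it back out of the request.

```csharp
using System;

// Building the collection URL the way a tracking script would.
// Values are escaped so they survive the round trip through the
// query string intact.
static class Collector
{
    public static string BuildUrl(string siteId, string payload)
    {
        return "https://collector.example.com/t?site=" +
               Uri.EscapeDataString(siteId) +
               "&data=" + Uri.EscapeDataString(payload);
    }
}
```

On the server, the handler reads `site` and `data` from the query string and validates both before touching the database, exactly as with any other user input.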
