I'm implementing a SOAP web service consumer for sending thousands of emails and storing thousands of XML response records in a local database (C#/.NET, Visual Studio 2012).
I would like to make my service consumer as fast and lightweight as possible.
I need to know some of the considerations; I always have the feeling that my code should run faster than it does.
For example:
I've read that using DataSets increases overhead, so should I use lists of objects instead?
Does using an ORM slow my code down?
Is a console application faster than a WinForms app? The user has no GUI to deal with; some parameters are simply passed to the app, which invoke a few methods.
What are the most efficient ways to deal with a SOAP web service?
Make it work, then worry about making it fast. If you try to guess where the bottlenecks will be, you will probably guess wrong. The best way to optimize something is to measure real code before and after.
DataSets, ORMs, WinForms apps, and console apps can all run plenty fast. Use the technologies that suit you, then tune for speed if you actually need it.
Finally, if you do have a performance problem, changing your algorithms to better suit the problem will likely have a far greater impact than changing any of the technologies you mentioned.
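As a rough illustration of that measure-first advice, here is a minimal sketch that wraps the work in a Stopwatch so you can compare timings before and after a change; SendAndStoreBatch and its batch size are hypothetical stand-ins for your real send-and-store loop, not anything from your code:

    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            // Time the real workload, change one thing, then time it again and compare.
            var stopwatch = Stopwatch.StartNew();

            SendAndStoreBatch(batchSize: 1000);

            stopwatch.Stop();
            Console.WriteLine("Processed batch in {0} ms", stopwatch.ElapsedMilliseconds);
        }

        // Hypothetical stand-in for the actual SOAP calls and local database writes.
        static void SendAndStoreBatch(int batchSize)
        {
            for (int i = 0; i < batchSize; i++)
            {
                // call the web service, then store the XML response locally
            }
        }
    }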
Based on my personal experience with SOAP, in this scenario your main concern should be how you retrieve this information from your database (procedures, views, triggers, indexes, etc.).
The difference between a console app, a WinForms app, and a web app isn't that relevant.
After the app is done, run a thorough stress test on it to see where your performance problem lies, if it exists at all.
I am an entry-level programmer with only a few months of experience.
Yesterday I was discussing with a colleague how we could improve the performance of a project we are working on together.
The project is built with C#, Ext.NET, and JavaScript.
The plan was to move as many things as possible to client-side JavaScript instead of interacting with the server all the time.
I thought this was a good idea, but I couldn't help wondering whether there is a point where moving everything to the client side starts making the web application slower. I understand that interacting with the server and reloading unnecessary data all the time is a waste in most cases, but I've also seen websites loaded with so much JS that the browser actually lags and browsing the web application is just a pain.
Is there a sweet spot? Are there certain 'rules'? How do you achieve maximum performance? Take Google's cloud apps, such as Docs, for example: they're pretty fast for what they do, and they're web applications. That is some very good performance.
JavaScript is incredibly fast on the client side. I assume Ext.NET works something like AJAX? If not, you can use AJAX to communicate with the server from JavaScript, and configured like that it will be pretty fast. However, the coding style will change drastically if you're currently using .NET controls in the DOM with click events.
My 2 cents: use lazy loading of xtypes on the client whenever possible (i.e., you can define an xtype, but it is only instantiated when it is needed), especially if those xtypes make AJAX calls!
I work at a small startup as a Data Scientist, and I'm looking for ways to make my analysis a bit more visible/useful to the organization. I'd like to be able to put up a simple web service that allows internal users to run my scripts remotely. They should be able to input a few parameters via a very simple UI, and they should have the option to have the results appear in the browser window (after a possibly long wait) or have them emailed. Results may be a few PDF figures or Excel spreadsheets (maybe something more exotic in the future, but that's it for now).
The scripts, which will handle the analysis, are all going to be in Python.
So, I'd like to know the pros and cons of using C#/WCF vs. something like Django for Python. I have significant experience with C# from working in the client-side code base here, but much less experience with WCF. All of my analysis work is done in Python (and R, to a lesser extent). The main goal is not to spend all of my time building a fancy web service/UI; the front end just has to be friendly enough not to intimidate the marketing people. I don't have to worry about encryption, since the server will be behind our firewall. I'm pretty platform-agnostic, but I think the servers are all Windows-based, if that helps.
For extra credit, how does your answer change if some of my scripts are in F#?
You might consider using the Django web framework. You could set up a small app with your Python scripts as different views: https://www.djangoproject.com/
And if you don't want to put much effort into creating a friendly UI, you could use Twitter Bootstrap: http://twitter.github.com/bootstrap/
Then just run the app internally to gather and display data, either via HTTP GETs or via e-mail.
Edit: I'm sorry, I did not read carefully enough: "pros and cons are of using C#/WCF vs. something like Django". I recently made a Django app and it was fairly straightforward.
We are about to implement a small automated securities trader. The trader will be built on top of the excellent QuickFIX FIX engine.
After due thought, we narrowed our options down to implementing it in either C# or Python. Please specify the pros and cons of each language for this task, in terms of:
Performance (The fact that Python uses a GIL troubles me in terms of thread concurrency)
Productivity
Scalability (We may need to scale this trader to a fully-sized platform)
EDIT
I've rephrased the question to make it less "C# vs. Python" (which I find irrelevant; both languages have their merits); I'm simply trying to draw up a comparison table before I make the decision.
I like both languages and I think either would be a good choice. The GIL might really be the most important difference, but I'm not sure it's a problem in your case. The GIL only affects code running in pure Python, and I assume your tool depends more on I/O than on raw number crunching. If your I/O libraries handle the GIL correctly, they can execute concurrent code without problems. And even for number crunching, you still have numpy.
My choice would depend on your existing knowledge. If you have experienced C# developers at hand, I would go for C#. If you are starting absolutely from scratch and it's really 50:50, then I would go for Python: it's easier to learn, free, and in many cases more productive.
And just to mention it: you might also have a look at IronPython. ;-)
For points "Performance" and "Scalability" I would suggest C# (although a large part of performance depends on your algorithms). Productivity is much of a subjective thing, but now C# has all cool features like lambda, anonymous method, classes etc which makes it much more productive.
Last week I interviewed for a position at a triple-A MMORPG game company here in NE. I didn't get the job, but one of the areas that came up during the interview was the scalability of the code you write and how it should be considered early on in the design of your architecture and classes.
Sad to say, I've never thought very much about the scalability of the .NET code I've written (I work on single-user desktop and mobile applications, and our major concerns are usually device memory and data transmission rates). I'm interested in learning more about writing code that scales up well, so it can handle a wide range of remote users in a client-server environment, specifically MMORPGs.
Are there any books, web sites, best practices, etc. that could get me started researching this topic?
Here are some places to start:
http://highscalability.com/blog/2010/2/8/how-farmville-scales-to-harvest-75-million-players-a-month.html
http://www.cs.cornell.edu/people/~wmwhite/papers/2009-ICDE-Virtual-Worlds.pdf
In particular, http://highscalability.com is full of articles about huge websites that scale and how they do it (Digg, Flickr, Facebook, YouTube, ...).
Just one point I'd like to highlight here: cache your reads. Work out a proper caching policy where you determine which objects can be cached and for what periods. Having a distributed caching farm will take load off your DB servers, which will greatly benefit performance.
Even caching some pieces of data for just a few seconds will, in a very high-load multi-user scenario, provide you with substantial benefit.
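To make the read-caching idea concrete, here is a minimal read-through cache sketch using the in-process MemoryCache from System.Runtime.Caching; PlayerProfile and LoadPlayerProfileFromDb are hypothetical placeholders, and a real MMO backend would more likely sit behind a distributed cache, but the pattern is the same:

    using System;
    using System.Runtime.Caching;   // reference the System.Runtime.Caching assembly

    class ProfileCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static PlayerProfile GetProfile(int playerId)
        {
            string key = "profile:" + playerId;

            // Serve from cache when we can.
            var cached = Cache.Get(key) as PlayerProfile;
            if (cached != null)
                return cached;

            // Cache miss: hit the database, then keep the result for a few seconds.
            var profile = LoadPlayerProfileFromDb(playerId);
            Cache.Set(key, profile, DateTimeOffset.UtcNow.AddSeconds(5));
            return profile;
        }

        // Hypothetical placeholder for the real data-access call.
        private static PlayerProfile LoadPlayerProfileFromDb(int playerId)
        {
            return new PlayerProfile { Id = playerId };
        }
    }

    class PlayerProfile
    {
        public int Id { get; set; }
    }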
If you are looking for concrete validation, what I usually find helps is doing some prototyping. That usually gives you a good idea of any unforeseen problems in your design and of just how easy it is to build on top of it. I would try to apply whatever design patterns are applicable to allow for future scalability; Elements of Reusable Object-Oriented Software is a great reference for that. There are good examples around that show before-and-after code using design patterns, which can help you visualize how they could make your code more scalable as well, and there is an SO post about specific design patterns for software scalability.
I'm doing a little research into developing a simple socket server that will run as a Windows service. I'm going to write it in C#, but I've been made aware that .NET sockets are slow. For the initial rollout, using the .NET networking classes will be fine, but I'm wondering if anyone has experience with a high-performance (and hopefully free) socket library. I'm thinking probably something written in C++ that I can use as a COM object in .NET.
I've used Indy Sockets before, but it doesn't look like there is any active development going on with the project anymore. I've done some Googling and found a few libraries, but I was hoping to get feedback from someone who has actually used a socket library with good success.
I would revisit your initial assumption; I don't believe it's accurate.
In my experience, the overhead of using .NET's framework socket libraries is not high; they perform quite well. The main cause of very slow socket code that I've seen is people trying to port non-C# code into C# directly, in particular synchronous C++ socket code. The sockets in .NET's BCL are all designed to be used asynchronously; if you try to force them into a synchronous model, you end up adding quite a bit of blocking, which definitely causes very slow code.
Try using the socket classes the way they were designed to be used; I think you'll be very happy with the performance as well as the usability.
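To show what using them as designed looks like, here is a minimal sketch of an asynchronous echo server built on the BCL's Begin/End socket pattern; the port number and buffer size are arbitrary choices for the example:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class AsyncEchoServer
    {
        private readonly Socket _listener =
            new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start(int port)
        {
            _listener.Bind(new IPEndPoint(IPAddress.Any, port));
            _listener.Listen(100);
            _listener.BeginAccept(OnAccept, null);           // non-blocking accept
        }

        private void OnAccept(IAsyncResult ar)
        {
            Socket client = _listener.EndAccept(ar);
            _listener.BeginAccept(OnAccept, null);           // keep accepting more clients

            var state = new ReceiveState { Client = client };
            client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                SocketFlags.None, OnReceive, state);
        }

        private void OnReceive(IAsyncResult ar)
        {
            var state = (ReceiveState)ar.AsyncState;
            int read = state.Client.EndReceive(ar);
            if (read == 0) { state.Client.Close(); return; } // client disconnected

            // Echo the bytes back, then wait for the next chunk.
            state.Client.Send(state.Buffer, 0, read, SocketFlags.None);
            state.Client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                      SocketFlags.None, OnReceive, state);
        }

        private class ReceiveState
        {
            public Socket Client;
            public readonly byte[] Buffer = new byte[4096];
        }

        static void Main()
        {
            new AsyncEchoServer().Start(9000);
            Console.WriteLine("Echo server listening on port 9000; press Enter to quit.");
            Console.ReadLine();
        }
    }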
It sounds like you are optimizing before you need to. I would go ahead and build it with .NET and see whether you have a performance problem before trying something that could turn out to be slower; COM interop has a lot of overhead.