SignalR fails under high load [closed] - c#

I run a website with very high load, and I keep my test app in a hidden iframe to verify that the target framework is a good choice for my use case. I first tried a SignalR test app and then PokeIn under the same server configuration. We currently use a Flash remoting solution, but we plan to replace it soon.
I spent some time getting my SignalR-based test application to handle concurrent client updates under the high load of my website. It worked well in the light scenario (some of the clients requesting messages), but when most of the connected clients requested messages at the same time, it failed dramatically (I had to remove it from the iframe call). I suspected my server configuration was the problem, but the same scenario works under the paid solution PokeIn without any issue.
Is there a trick I'm missing?
Feb.10.2012 Update:
Although we decided to implement PokeIn in our solution, I tried the latest SignalR code from GitHub (this might be helpful for others), and the result was the same.
March.13.2012 Update:
Scenario: (One more time)
- Try to send a message to thousands of connected clients at a given interval, say 1 second. It isn't hard to test and see the result. I feel like I'm the only person around stressing these libraries with this very common usage.
Details (how to reproduce; tested with 0.5 from GitHub)
- Server 2008 R2 32GB DDR3, i7-2600 3.4Ghz, 2x256 GB Crucial M4
- ASP.NET 3.5
- A single-page app updates the time on the client side from the server every second.
- This page is embedded in a hidden iframe loaded by several websites in order to make a real-life load test.
Issues
- The system locks up at some point (approx. 800 users) and most of the clients don't get the updated time from the server.
- Once the system locks up, that single app page stops responding.
- I also tried increasing the interval to 5 seconds. This time the system was more responsive (approx. 950 users) but the result was the same. I tried this on .NET 2 and .NET 4 application pools.
I hope these details are enough. Repeating this test is quite easy for me, and as soon as I find some free time I will repeat it with a future version.
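For reference, the broadcast loop under test looks roughly like the sketch below. Note this uses the later Microsoft.AspNet.SignalR 2.x hub API rather than the 0.5 build I tested, and TimeHub / updateTime are illustrative names, not my exact code:

    using System;
    using System.Threading;
    using Microsoft.AspNet.SignalR;

    // Empty hub; clients connect to it, and the server pushes through its context.
    public class TimeHub : Hub { }

    public static class TimeBroadcaster
    {
        // Kept in a static field so the timer is not garbage collected.
        private static Timer _timer;

        public static void Start()
        {
            IHubContext context =
                GlobalHost.ConnectionManager.GetHubContext<TimeHub>();

            // Push the current server time to every connected client once per second.
            _timer = new Timer(
                _ => context.Clients.All.updateTime(DateTime.UtcNow.ToString("o")),
                null,
                TimeSpan.Zero,
                TimeSpan.FromSeconds(1));
        }
    }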

Related

Server-side vs Client-side web application Performance [closed]

I am an entry-level programmer with only a few months of experience.
Yesterday I was discussing with a colleague how we can improve the performance of a project we are working on together.
The project is built with C# + Ext.NET + JS
The plan was to move as many things as possible to client-side JavaScript instead of interacting with the server all the time.
I thought this was a good idea, but couldn't help wondering if there is a point where bringing everything to the client side starts making the web application slower. I understand that interacting with the server and reloading unnecessary data all the time is a waste in most cases, but I've also seen websites loaded with so much JS that the browser actually lags and browsing the web application is just a pain.
Is there a golden point? Are there certain 'rules'? How do you achieve maximum performance? Take Google's cloud apps, such as Docs, for example: they're pretty fast for what they do, and they're web applications. That is some very good performance.
JavaScript is incredibly fast on the client side. I assume Ext.NET is like AJAX? If not, you can use AJAX to communicate with the server from JavaScript; it will be pretty fast configured like that. However, the style of coding will change drastically if you're currently using .NET controls on the DOM with click events.
My 2 cents: use lazy loading of xtypes whenever possible on the client (i.e. you can define an xtype, but it is only instantiated when it is needed), especially if those xtypes make AJAX calls!

Very simple web service: take inputs, email results [closed]

I work at a small startup as a Data Scientist, and I'm looking for ways to make my analysis a bit more visible/useful to the organization. I'd like to be able to put up a simple web service which allows internal users to run my scripts remotely. They should be able to input a few parameters via a very simple UI, and they should have the option to have the results appear in the browser window (after a possibly long wait), or have them emailed. Results may be a few pdf figures, and they may be Excel spreadsheets (maybe more exotic in the future, but this is it for now).
The scripts will all be in Python, which handles the analysis.
So, I'd like to know the pros and cons of using C#/WCF vs. something like Django or plain Python. I have significant experience with C# from working in the client-side code base here, but much less experience with WCF. All of my analysis work is done in Python (and R, to a lesser extent). The main goal is not to spend all of my time building a fancy web service/UI; the front end just has to be friendly enough not to intimidate the marketing people. I don't have to worry about encryption, since the server will be behind our firewall. I'm pretty platform agnostic, but I think the servers are all Windows based, if that helps.
Thanks in advance.
For extra credit, how does your answer change if some of my scripts are in F#?
You might consider using the Django web framework. You could set up a small app with your Python scripts as different views. https://www.djangoproject.com/
And if you don't want to put much effort into creating a friendly UI, you could use Twitter Bootstrap. http://twitter.github.com/bootstrap/
Then just run the app internally to gather and display data, either via HTTP GETs or via e-mail.
Edit: I'm sorry, I did not read carefully: "pros and cons are of using C#/WCF vs. something like Django". I recently made a Django app and it was fairly straightforward.

Consuming a SOAP Web Service with the lowest overhead [closed]

I'm implementing a SOAP web service for sending thousands of emails and storing thousands of XML response records in a local database (C#/.NET, Visual Studio 2012).
I would like to make my service consumer as fast and lightweight as possible.
I need to know some of the considerations. I always have a feeling that my code should run faster than it does.
E.g.
I've read that using DataSets increases overhead. Should I use lists of objects instead?
Does using an ORM introduce slowness into my code?
Is a console application faster than a WinForms app? The user needs no GUI to deal with; some parameters are simply sent to the app, and they invoke some methods.
What are the most efficient ways to deal with a SOAP Web Service?
Make it work, then worry about making it fast. If you try to guess where the bottlenecks will be, you will probably guess wrong. The best way to optimize something is to measure real code before and after.
DataSets, ORMs, WinForms apps, and console apps can all run plenty fast. Use the technologies that suit you, then tune for speed if you actually need it.
Finally, if you do have a performance problem, changing your choice of algorithms to better suit your problem will likely yield a much greater performance impact than changing any of the technologies you mentioned.
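As a minimal sketch of "measure before and after" (the work inside the timed region is just a placeholder for your real service call):

    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();

            // Placeholder for the real work: call the SOAP service and
            // store the response records here.
            System.Threading.Thread.Sleep(100);

            sw.Stop();
            Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
        }
    }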
Based on my personal experience with SOAP, in this scenario I would say your main concern should be how you retrieve this information from your database (procedures, views, triggers, indexes, etc.).
The difference between a console app, a WinForms app, and a web app isn't that relevant.
After the app is done, you should run a heavy stress test on it to see where your performance problem lies, if it exists.

Moving reporting logic into .NET code: Releasing fixes? [closed]

At the moment, all of our reports are written as stored procedures. We want to move away from the SQL platform and instead put most of the logic in .NET code. Reasons include the use of our ORM entities, ease of debugging, parallel processing, more unification with business logic, etc.
The problem we face is that by moving our report logic into .NET code, we cannot deploy support fixes as easily as running a script on our production environment. Releasing binaries means that the whole business has to stop using our application, which is almost impossible during office hours.
One solution is to separate each report into a new project and release just that DLL. The problem with this is that we have over 500 reports. Maintaining that will be a nightmare.
Has anyone experienced something similar or have any other solutions to this problem?
Thanks,
Dave
Why not use Reporting Services? It is made for this! You can even train people in the business (non-devs) to create reports for you and publish them. There are so many features that users can just leverage: authentication/authorization, subscriptions, export (PDF, Excel, Word), etc.
I wouldn't invest in rebuilding Reporting Services if I were you. I always steer away from writing reports as part of your application or in 'code'.
If you really, really want to do this (which I totally think you shouldn't) then I would develop a separate service that generates reports (on a separate endpoint) that you can call from your main application. Put a queue in the middle that stores 'report requests' while you deploy an update; the queued requests can be served after a restart (see the sketch below).
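A minimal sketch of the queue idea, assuming an in-process queue and a hypothetical ReportRequest type (a durable queue such as MSMQ would survive restarts of the hosting process better):

    using System;
    using System.Collections.Concurrent;

    public class ReportRequest
    {
        public string ReportName { get; set; }
        public DateTime RequestedAt { get; set; }
    }

    public class ReportQueue
    {
        private readonly ConcurrentQueue<ReportRequest> _pending =
            new ConcurrentQueue<ReportRequest>();

        // The main application enqueues requests; nothing is lost while
        // the report service is being redeployed.
        public void Enqueue(ReportRequest request)
        {
            _pending.Enqueue(request);
        }

        // The report service drains the queue once it is back up.
        public bool TryDequeue(out ReportRequest request)
        {
            return _pending.TryDequeue(out request);
        }
    }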
Another option would be to load assemblies dynamically. Let a file watcher watch a folder and, as soon as a new DLL appears, load it dynamically. Unloading is more difficult: you could restart the service when it is not so busy to remove the old reports from memory, or you would need to create separate AppDomains that you can unload. A rough sketch follows.
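Roughly like this (IReport and the folder path are hypothetical names; note that without separate AppDomains the loaded assemblies cannot be unloaded until the process recycles):

    using System;
    using System.IO;
    using System.Reflection;

    public interface IReport
    {
        string Render();
    }

    public class ReportLoader
    {
        private readonly FileSystemWatcher _watcher;

        public ReportLoader(string reportFolder)
        {
            _watcher = new FileSystemWatcher(reportFolder, "*.dll");
            _watcher.Created += OnNewAssembly;
            _watcher.EnableRaisingEvents = true;
        }

        private void OnNewAssembly(object sender, FileSystemEventArgs e)
        {
            // Load the new report assembly into the current AppDomain.
            // It stays in memory until the process (or AppDomain) is recycled.
            Assembly assembly = Assembly.LoadFrom(e.FullPath);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IReport).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    IReport report = (IReport)Activator.CreateInstance(type);
                    Console.WriteLine("Loaded report: " + type.Name);
                }
            }
        }
    }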
A lot of options, but again, you will be wasting time by building and testing a custom report framework. I would go for plain SQL; even if you really like C#, this way you can hire a BI person who can just create reports, instead of a dev.
I agree with a lot of #bart's answer. My first reaction is that this seems like unneeded reinvention.
However, if you do need .NET code and the convenience of declarative code, then why not use a DLR-based language like IronPython?
We've stored IronPython scripts in the DB and loaded them on demand. Once JITted, they're no different from any non-DLR-based code, and deploying fixes was a dream.
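For illustration, hosting IronPython from C# looks roughly like this (IronPython.Hosting is the real hosting API; fetching the script text from a database, and the variable names, are assumptions here):

    using System;
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    class ReportRunner
    {
        static void Main()
        {
            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("title", "Monthly Sales");

            // In practice the script text is fetched from the database.
            string script = "result = 'Report: ' + title";
            engine.Execute(script, scope);

            Console.WriteLine(scope.GetVariable<string>("result"));
        }
    }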

Run web service API on same or separate servers? [closed]

I have a web portal, and the web portal has a web services API.
Which solution would be best and why?
Should I....
1) Run the web portal and the web portal API on the same server or
2) Run the web portal and the web portal API on separate servers
It's all a matter of trading off different forces; there just can't be one answer for everybody.
Here are a few things to consider:
Having the UI (portal) and its dependent services on the same box makes for a very clear set of dependencies; when diagnosing problems, you've got just one place to look. You can scale by adding more such boxes, each being self-contained. Clarity has a lot of operational value.
But it's likely that the portal and the services will have different resource requirements, so you end up scaling (say) the portal when the services are not using much resource. Hence you have more copies of the portal or the services than you strictly need. This may have considerable costs. Examples:
- Licence costs. Suppose you have 10 copies of the portal but really only needed 5; that's 5 licences wasted.
- Memory consumption. Suppose there's a fixed overhead in getting the services (or portal) up irrespective of load (think caching or database connections); you pay that cost for each unneeded instance.
- Back-end costs. Your services may connect to enterprise systems, e.g. a database. Each connection costs resources on the back end. If you have unneeded instances, you pay needless costs.
- Platform tuning. You may need to tune the platform differently for the portal and the services. This issue is more noticeable when considering whether to co-locate the database too.
