Polling servers seem to be a recurring theme for certain types of enterprise applications, particularly social-networking web applications.
From what I have read, people on the Unix side use memcached, which can serve a few thousand requests per second.
What options do Windows developers have, since we don't have a memcached alternative? (I don't have the time to learn Unix at that level.)
I can't promise that memcached is actually suitable for your problem, but it shouldn't be too difficult to get it working on a Windows server. Check out memcached for Win32 for a good starting point. If your problem is big enough to need memcached, then the effort to port it or get a port working probably shouldn't daunt you.
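If you do go the memcached-on-Win32 route, talking to it from C# is straightforward with one of the existing client libraries. A minimal sketch using the Enyim.Caching client (the server list is assumed to come from app.config; consult the client's docs for configuring it in code):

```csharp
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

class CacheDemo
{
    static void Main()
    {
        // The parameterless constructor reads the server list from the
        // "enyim.com/memcached" section of app.config.
        using (var client = new MemcachedClient())
        {
            client.Store(StoreMode.Set, "user:42:name", "Alice");
            string name = client.Get<string>("user:42:name");
            Console.WriteLine(name ?? "(cache miss)");
        }
    }
}
```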
My company needs to test infrastructure internally. Our organization is rapidly moving from a very small development environment to a larger, more scalable production environment. Specifically, we want to flood some of our new request routers from the inside. We need to write the tool ourselves, and it has to be automatable. We can't use third-party tools because, to be frank, they're written by less-than-scrupulous hats, and virtually all of them are chock full of malware.
To be more specific, the tool is to be written in C#, and the first obstacle we have to get past is overcoming the kernel-imposed restriction on half-open outbound connections (we're clearly a Windows shop... I wouldn't be posting if we were all Linux). We would be dealing with Vista.
Any tips on anything are greatly appreciated. Where to get started, open source tools (so they can be verified to not be malware), anything. Thanks in advance.
And no, I'm not a hat or a script kiddie.
EDIT: The main goal here is not the actual generation of the traffic (I can do that), but rather programmatically overcoming the OS-imposed "10 half-open outbound connections" restriction. Theoretically a single system should be able to spawn 255 of these half-open connections at a time, but the OS artificially locks this down.
Upgrade to Vista SP2 (you say you're specifically dealing with Vista, although the title suggests otherwise) - the connection limit is removed in Vista SP2.
http://technet.microsoft.com/en-us/library/dd335036(WS.10).aspx?ppud=4
Notably, it's the last bullet point under Operating System Experience updates
And for added value...
How to turn it on again if you wanted...
http://support.microsoft.com/kb/969710
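If I read KB969710 correctly, the switch is a registry value under the Tcpip parameters key; the value name below is taken from memory of that article, so treat it as an assumption and verify against the KB before relying on it. A hedged C# sketch (requires admin rights):

```csharp
using Microsoft.Win32;

class HalfOpenLimitToggle
{
    static void Main()
    {
        // Value name assumed from KB969710 -- verify against the article.
        // 1 = re-enable the half-open connection limit, 0 = leave it off.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters", true))
        {
            if (key != null)
                key.SetValue("EnableConnectionRateLimiting", 0, RegistryValueKind.DWord);
        }
        // A reboot is typically required before Tcpip parameters take effect.
    }
}
```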
Your best bet is going to be the WinPcap driver. Note that many of these "kernel-imposed restrictions" are also imposed in hardware, in which case you are S.O.L. until you buy new hardware.
Is running Mono on Linux a possibility? You might need to write or use some third-party unmanaged code, I would imagine, but you would not have that restriction to deal with.
I want it to work on Windows servers.
- It will be a cloud-type server: it'll consist of modules/parts running on different machines all over the world, using HTTP/TCP + UPnP to connect to each other.
- There are going to be controlling/monitoring/observing modules on each machine to provide stats on performance.
- This net is going to be working with a large amount of live VIDEO/AUDIO streaming/broadcasting data.
- It is going to use FFmpeg for re-encoding, and OpenGL, OpenCV and such for filtering (.NET wrappers exist and work, BTW).
- It will not use any WCF or IIS.
- I want to develop it with a team of 2-4 developers, smart students.
So is it OK to create this in C#/.NET, or should I not waste my time on the ease it promises the developer and go with C/C++ instead?
In other words: is it reasonable to write a server application in C# in my case?
Off-topic: why not WCF
Warning: it gets way too subjective in here.
WCF is great when you have a big corporation with relatively small data exchange per service session.
When you have video, LIVE video, it all gets complicated: large amounts of data, and lots of users streaming in and out of your service at the same time.
Try to do live video streaming over the HTTP binding, then try it with the other bindings, and you'll see why I don't like the idea of live streaming with WCF: it is slow and carries way too much overhead that live streaming doesn't need. And after all, have you ever seen a live video streaming app built on WCF? No, you haven't. Maybe you have seen more-or-less-live video on the Silverlight + IIS pair, which I don't like because it is a Silverlight/Windows Media Player-only streaming solution, while I want more than that.
I love to have cross-platform clients with rich UIs, and I don't like (this is all my personal opinion, so it is subjective) the Silverlight + IIS + WCF combination. So what shall I do? Right: go to sockets and streams, in an old and simple format like FLV, with Flash as the client front end. That's simpler to develop in some parts, and a more conservative way of doing live video over the web than the one you get from MS today.
I love Flash FLV live streaming because you just open a socket and start sending live FLV video data onto it (for each user: the FLV header, and then FLV "tags", one by one - video tag, audio tag, video tag, audio tag, etc.) and Flash plays it! With no special or unusual code. It is fast, easy to support, and doesn't require the client to install anything new or unusual. And on the server side you can make great use of that tag-based representation of the video/audio data.
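For the curious, that "header, then tags" wire format is simple enough to sketch. A minimal server-side illustration in C#; GetLiveTags() is a placeholder standing in for whatever FFmpeg-based pipeline produces complete FLV tags (each with its trailing PreviousTagSize field), and the port is arbitrary:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class FlvRelay
{
    // The 9-byte FLV header plus the 4-byte PreviousTagSize0 field.
    static readonly byte[] FlvHeader =
    {
        0x46, 0x4C, 0x56,       // 'F', 'L', 'V'
        0x01,                   // version 1
        0x05,                   // flags: audio + video
        0x00, 0x00, 0x00, 0x09, // header size (9, big-endian)
        0x00, 0x00, 0x00, 0x00  // PreviousTagSize0
    };

    // Placeholder: your encoder pipeline yields complete FLV tags here.
    static IEnumerable<byte[]> GetLiveTags() { yield break; }

    static void Serve(TcpClient client, IEnumerable<byte[]> liveTags)
    {
        NetworkStream stream = client.GetStream();
        stream.Write(FlvHeader, 0, FlvHeader.Length);
        foreach (byte[] tag in liveTags)        // video tag, audio tag, ...
            stream.Write(tag, 0, tag.Length);
    }

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8081); // arbitrary port
        listener.Start();
        // One client at a time for brevity; real code would spawn a
        // thread (or use async I/O) per accepted client.
        while (true)
            Serve(listener.AcceptTcpClient(), GetLiveTags());
    }
}
```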
So that, in short, is why I just don't want to use WCF: it's hard to get live video playing out of it on the client side, and it offers no general benefits for a live video server.
And when most of the live data goes through sockets anyway, why bother with WCF for service management?
During the second half of 2009 and the first half of 2010 I was getting into WCF, live video streaming, Silverlight and Flash, comparing the client/server creation process and reading different formats with a team of very interesting developers. By the end of the project we had lots of mini servers streaming live data and lots of different clients receiving it. Comparing all we'd done, we came to conclusions close to the ones I present to you here.
That is why I don't want to use WCF in my upcoming project: I don't want to think about how to deliver the media data, I want to focus on filtering/editing it.
Why the question appeared
We started playing with FFmpeg/OpenCV in C, and it is pretty simple to manipulate data using them... in C... on Linux...
But when we started to play with their .NET bindings (we are now playing with Tao.FFmpeg), we found that in most cases we end up wrestling with the C# Marshal class a lot, keeping two variables for each C analog (the problem of pointers), and so on. I hope we will not see such problems with Emgu CV, but still, it makes me a little afraid...
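To illustrate the "two variables for one C value" pattern that marshaling forces on you, here is a generic sketch; the native function, library name, and struct are hypothetical stand-ins, not the actual Tao.FFmpeg signatures:

```csharp
using System;
using System.Runtime.InteropServices;

class NativeInterop
{
    // Hypothetical C API for illustration only:
    //   AVFrameLite* get_frame(void);  void free_frame(AVFrameLite*);
    [StructLayout(LayoutKind.Sequential)]
    struct AVFrameLite
    {
        public int Width;
        public int Height;
        public IntPtr Data; // pointer into native memory, not a managed array
    }

    [DllImport("avutil")] static extern IntPtr get_frame();
    [DllImport("avutil")] static extern void free_frame(IntPtr frame);

    static void Main()
    {
        // The pointer problem: one variable for the native pointer,
        // plus a managed copy of the struct it points to.
        IntPtr framePtr = get_frame();
        var frame = (AVFrameLite)Marshal.PtrToStructure(framePtr, typeof(AVFrameLite));
        Console.WriteLine(frame.Width + "x" + frame.Height);
        free_frame(framePtr); // and you must remember to free the native side
    }
}
```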
I think it's entirely reasonable. The benefits of C# with regard to ease of development will greatly outweigh any performance drawbacks of not using C++.
C# is generally more cross-platform than C++. True, C++ is a cross-platform language, but there are large differences between the APIs that C++ programs use to interact with the system. C# and .NET/Mono have a much more standardized interface to the socket layer.
Finally, with ambitious projects like this, getting the project into a usable form is a much more important goal than getting the highest performance possible. Performance only matters if the project is complete. Write it in C# because that will give you the greatest odds of completion. Then worry about performance.
I'm not exactly sure why people have brought up cross-platform concerns, as the OP has clearly stated the app will run on Windows.
As to the actual questions:
- Can you build a server application in C# that communicates via TCP/HTTP and does not have to run in IIS? -> Yes. (A minimal sketch follows this list.)
- Can you build a server application that is performant and scales in C#? -> Yes.
- Can you do so with students? -> Maybe. Depends on the students... ;) But that is irrespective of the language in use.
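Since "no IIS" is the hard requirement, here is a minimal sketch of a self-hosted HTTP server using the stock HttpListener class from the BCL; the URL prefix and port are arbitrary choices for illustration:

```csharp
using System;
using System.Net;
using System.Text;

class SelfHostedServer
{
    static void Main()
    {
        var listener = new HttpListener();
        // Listening on all hostnames usually needs admin rights or a
        // 'netsh http add urlacl' reservation; the port is arbitrary.
        listener.Prefixes.Add("http://+:8080/");
        listener.Start();
        Console.WriteLine("Listening without IIS...");

        while (true)
        {
            // GetContext blocks; a production server would use the async
            // Begin/EndGetContext pair to scale to many connections.
            HttpListenerContext ctx = listener.GetContext();
            byte[] body = Encoding.UTF8.GetBytes("hello from a non-IIS C# server");
            ctx.Response.ContentLength64 = body.Length;
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```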
Is this something I would do? Yes. We've done it. We have a C# app running on approximately 20,000 machines right now, communicating effectively over TCP. We aren't using WCF, but we did decide to use RESTful-style services over HTTP for the data transfer.
Our biggest issue was simply tuning the app to transfer the "right" amount of data over the wire at a time. This network is for data collection and storage, and it averages around 200 GB of data collected a day.
UPDATE
I wanted to clarify a bit about the above app. The 20,000 machines at the above installation are clients (XP, Vista, 7, 2003 Server, and 2008 Servers). There's only one data collection point server in the mix. The clients post data to the server, when connected to a network, once every 45 seconds. Roughly 97% of the machines stay connected in this manner, the rest connect a couple times a week.
This works out to the server processing about 37 million requests a day (20,000 machines x 97% connected x 86,400 seconds / 45 seconds between posts ≈ 37 million).
Now, to be sure, each request is relatively small, at around 5 KB to 6 KB each. However, the sheer number of requests shows that a C# application can handle managing those connections, which is the bigger part of the OP's problem.
Because the OP's files are large (video), the real issue is simply data transfer, which will be limited more by hard drive speed, network speed, and latency. Those issues are independent of which language you are working in, and will cap the number of connections per server based on available bandwidth.
To work this out, let's limit it to one server as an example. If you have a video bitrate of 400 kbit/s and a 25 Mbit/s connection to the internet, then that box could physically only handle around 62 simultaneous streams (25,000 / 400 = 62.5). That is so FAR below the number of connections our app is handling as to be a rounding error.
Assuming perfect network conditions (which don't exist), pumping that internet connection up to 100 Mbit/s (which can be expensive) means a 4x increase in simultaneous streams, to around 250; still completely manageable.
However, the network is only one side of the equation. Drive speed on the servers matters a lot. You had better have a good disk array capable of continuously delivering that amount of data. I know drive interfaces claim 3 Gbit/s transfer rates, but a drive that can actually saturate the channel has never been built. That means serious planning and money in the server setup.
The point of all of this is to say that the language doesn't matter one bit in your situation. You have other much larger contention issues. With that being the case, go with the language that will help you get the project done faster.
Why stop at C#? If you (possibly) want cross-platform, write it in Python or similar; you'll find that the networking aspects of a scripting language are far better than C#'s (as that's pretty much the role scripting languages are put to nowadays: running web-based servers).
You'll find developer productivity is much improved over C# (just as C#'s productivity is improved over C++), and there are lots of people who know and want to work on these systems. It sounds like the performance of the servers themselves matters less than the networking, so script would appear to be your best choice. Plus, the ffmpeg libraries are more tightly integrated with Python, via pyffmpeg, than with C# (well, mostly).
And it'd be a lot cooler, more fun, and very much cross-platform!
If you want C# and also cross-platform ability, your development will have to target the Mono platform (or another cross-platform .NET runtime, if you can find one). You might have to give up Visual Studio, and maybe some Microsoft-specific libraries and tools, but you can still have C# on multiple platforms. Just make sure you start the multi-platform building and testing EARLY in the process, or it will be hell to change things later.
If the application is only ever going to run on Windows platforms, I am completely sure I would write it in C#. Many applications like that may be running right now without us even knowing it.
If the target is to run on multiple platforms, you should first encapsulate all the areas where a non-Windows platform could bring problems to your application.
Why would you have to write it in C++ if, in this case, C# is capable of doing everything that C++ does? I would use C++ to program hardware-level things, like a robot or something similar. To write a server application, C# will fit what you want very well; it was designed for these things.
And C# is cross-platform; you just need the right tool to make it work on a specific platform.
I'm interested in programming a project which distributes a certain computation on large files across several computers. The need for distributed computing arises from the crashy and unstable nature of the software I'm using to do the actual computing, so it might crash on some computers, but others will surely do the job.
The ideas I have so far include:
- Using several servers, each pulling a task from a master server whenever possible
- Using VMware virtual machines
- Using a load-balancing cluster
What is more suited for the job? Any other ideas I should be aware of?
Also, If you can recommend any reliable distributed computing C# framework, it will be helpful.
I haven't used any of these myself (yet), but I bookmarked this question a little while ago. Some good suggestions there.
Have you looked at Hadoop MapReduce? It's an open-source implementation of Google's MapReduce framework. Though it's Java and not C#, it sounds like it could be perfectly suited to your scenario; the master server automatically handles load balancing and fault-tolerance in a distributed environment.
I would check out Appistry CloudIQ Platform. It links together multiple machines into a single computing framework identified by a unified address. Your client simply submits jobs to the unified address, and the framework distributes jobs to individual machines. It also monitors task execution, and can automatically restart failed jobs. So if your application is prone to crashing, this framework could be ideal. Rather than submitting the same job to multiple machines (and wasting CPU) to cover the failure case, just submit it once, and let the framework handle restarting the jobs that actually fail. I would consider it ideal for your reliability concerns.
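The core of that pull-and-retry model is easy to prototype if you want to evaluate the idea before committing to a framework. A naive in-process sketch (a real deployment would put the queue behind a master service that workers on other machines poll):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class PullWorkerDemo
{
    // Workers pull tasks from a shared queue; a task that throws is simply
    // re-queued instead of being submitted to several machines up front.
    static readonly BlockingCollection<string> Tasks = new BlockingCollection<string>();

    static void Worker()
    {
        foreach (string task in Tasks.GetConsumingEnumerable())
        {
            try { Process(task); }                  // the crash-prone computation
            catch (Exception) { Tasks.Add(task); }  // retry on another worker later
        }
    }

    static void Process(string task)
    {
        Console.WriteLine("processing " + task);
    }

    static void Main()
    {
        for (int i = 0; i < 4; i++)
            new Thread(Worker) { IsBackground = true }.Start();

        Tasks.Add("large-file-chunk-001");
        Tasks.Add("large-file-chunk-002");
        Thread.Sleep(1000); // let the demo drain; a real master runs as a service
    }
}
```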
Can anyone help me with a question about web services and scalability? I have written a web service as a facade onto our document management system and need to think about scalability issues. What areas should I be looking at to ensure performance and availability?
Thanks in advance
Performance is separate from scalability. Scalability means that you can add more servers to linearly increase system throughput (i.e. more client connections). The best way to start is with stateless web services; that way any client can call any of the n web-service instances on n different machines. If there is a shared database at the end for persistence, that will ultimately be your bottleneck. There are ways to reduce that with data partitioning and sharding, but only when you get to that point.
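When you do reach the sharding stage, the basic idea is just a stable mapping from key to database. A minimal sketch (FNV-1a is used here because string.GetHashCode is not guaranteed stable across processes; a production system would more likely use consistent hashing so that adding shards doesn't remap most keys):

```csharp
using System;

static class Sharding
{
    // Map a document key to one of N database shards.
    public static int ShardFor(string documentId, int shardCount)
    {
        uint hash = 2166136261;          // FNV-1a offset basis
        foreach (char c in documentId)
        {
            hash ^= c;
            hash *= 16777619;            // FNV-1a prime
        }
        return (int)(hash % (uint)shardCount);
    }
}

// Usage: pick connectionStrings[Sharding.ShardFor("doc-8812", 4)]
```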
First of all, decide what acceptable behaviour for your web service looks like. What should it be able to cope with - 1,000 connections per second? What response time should each connection have?
Then you need to automate the usage of your web service so you can stress test the system.
What happens when you have 100 requests per second? 1000? 10000?
Then you can decide whether performance is OK, whether the acceptable behaviour was too strict, or whether you need to do heavy performance tuning based on actual profiling data.
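A quick-and-dirty way to automate that usage from C#, using WebClient and threads rather than a commercial load tool (the URL and the client/request counts are placeholders to adjust):

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Threading;

class StressTest
{
    static int _completed;

    static void Main()
    {
        const int clients = 50, requestsPerClient = 200;
        const string url = "http://your-service/endpoint"; // placeholder

        // .NET throttles to 2 concurrent connections per host by default.
        ServicePointManager.DefaultConnectionLimit = clients;

        var sw = Stopwatch.StartNew();
        var threads = new Thread[clients];
        for (int i = 0; i < clients; i++)
        {
            threads[i] = new Thread(() =>
            {
                using (var web = new WebClient())
                    for (int r = 0; r < requestsPerClient; r++)
                    {
                        web.DownloadString(url);
                        Interlocked.Increment(ref _completed);
                    }
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        Console.WriteLine("{0} requests in {1:F1}s = {2:F0} req/s",
            _completed, sw.Elapsed.TotalSeconds,
            _completed / sw.Elapsed.TotalSeconds);
    }
}
```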
You should be looking to host your WCF service in IIS. IIS has a lot of performance, scalability, security etc. mechanisms built in and is the best starting point to save you reinventing the wheel.
Some of the performance is certainly down to your own code, but let's assume that it's already optimized. At that point, the additional performance-scaling issues involve the service host (e.g. IIS), the machines that host it, and their network (inter/intranet) connection speeds. You'll need to do some speed tests to be sure of things.
Well it really depends on what you're doing in your web service, but the only way you're going to find out is by simulating lots of users and measuring it.
Take a look at my answer to this question: Measuring performance
When we tested our code in this manner (with the web services hosted in Windows service(s)), we found that the bottleneck was authenticating each user in the facade service. In particular, the Windows component LSASS was using most of the CPU.
Luckily we were able to add new machines, each with its own facade service, which then called through to our main set of web services. This enabled us to scale up to a large number of users (in the region of 100,000 users using our software normally).
I currently have an application that sends XML over TCP sockets from a Windows client to a Windows server.
We are rewriting the architecture, and our servers are going to be in Java. One architecture we are looking at is REST over HTTP, so the C# WinForms clients would send their data that way. We are looking for high throughput and low latency.
Does anyone have any performance metrics on this approach versus other C#-client-to-Java-server communication options?
This isn't well-enough defined to make any statements about metrics: how big are the messages, how often would you be hitting the REST service, and is it straight HTTP or do you need to secure it with SSL? In other words, what can you tell us about the workload parameters?
(I say this over and over again on performance questions: unless you can tell me something about the workload, I can't -- nobody really can -- tell you what will give better performance. That's why they used to say you couldn't consider performance until you had an implementation: it's not that you can't think about performance, it's that people often couldn't or at least wouldn't think about workload.)
That said, though, you can make some good estimates simply by looking at how many messages you want to exchange, because TCP/IP setup time often dominates a REST exchange. REST offers two advantages here: first, while the TCP/IP time often does dominate the message transmission, that path is pretty well optimized in production web servers like Apache or lighttpd; second, a RESTful architecture enhances scalability by eliminating session state. That means you can scale freely using just a simple TCP/IP load balancer.
I would set up a test to try it and see. I understand that the only part of your application you're changing is the client/server communication. So analyse what you're sending now, and put together a test client/server setup sending messages which are representative of what you think your final solution is going to be doing (perhaps representative only in terms of size/throughput).
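For the client side of such a test, a plain HttpWebRequest is enough to POST a representative XML message and time the round trip; the URL and payload below are placeholders:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Text;

class RepresentativeClient
{
    static void Main()
    {
        byte[] payload = Encoding.UTF8.GetBytes(
            "<order id=\"1\"><item sku=\"abc\" qty=\"2\"/></order>"); // stand-in

        var sw = Stopwatch.StartNew();
        var request = (HttpWebRequest)WebRequest.Create("http://java-server/orders"); // placeholder
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.ContentLength = payload.Length;
        using (Stream body = request.GetRequestStream())
            body.Write(payload, 0, payload.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("{0} in {1} ms",
                response.StatusCode, sw.ElapsedMilliseconds);
    }
}
```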
As noted in the previous post, there's not enough detail to really judge what the performance is going to be like. For example:
- Is your message structure/format going to be the same, merely sent over HTTP rather than raw sockets?
- Are you going to be sending subsets of the XML data? Processing large quantities of XML can be memory-intensive (e.g. if you're using a DOM-based approach).
- What overhead is your chosen REST framework going to introduce (hopefully very little, but at the moment we don't know)?
The best solution is to set something up using (say) Jersey and spend some time testing various scenarios. If you're re-architecting a solution, it's worth a few days investigating performance (let alone functionality, ease of development, etc.).
It's going to be plenty fast, unless you have a very, very large number of concurrent clients hitting those servers. The XML shredding keeps getting faster in both Java and .NET. If you are on CLR2 and Java 5 or above, you will be fine. But of course you still need to do the tests to verify.
We've tested REST and SOAP transactions in our lab, and they are faster than you might think: tens of thousands of messages per second. A small number of modern CPUs generating XML messages can easily saturate a gigabit network. In other words, the network (transmission of data) is the bottleneck, not the CPU (serializing and de-serializing XML).
And if you do your software design properly, then in the very unlikely situation where REST is not sufficient, swapping out the message-format layer (REST => protobufs) will get you better transmission perf with minimal disruption.
But before you need to go there, you will be able to send some money to Cisco and get lots more headroom.