I'm looking to interrupt network connection and or cause lag to the point where nothing will go through, basically clogging the line.
I'm making a "lag-switch" you could say.
I've tried using Windows Firewall API and blocking an application but that causes an instant disconnect rather than the lag I was hoping to achieve.
Original idea was to only block one application but I'm not sure how doable that is unless I can limit the bandwidth usage of said application so the latency would skyrocket. Blocking the entire local connection would be plan B.
Anything that immediately closes the connection will disconnect instantly.
I haven't tried anything other than blocking applications via the firewall.
If someone could give me a push in the right direction, it would be really helpful!
I've found that sending tons of UDP packets to any address (say, 127.0.0.1) in a while loop with no delay causes immense network lag, but I haven't tested this in Windows.
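A minimal sketch of that approach in C# (untested on Windows, as noted; the target address and port here are arbitrary choices, not from the original poster's code):

```csharp
using System.Net.Sockets;

class UdpFlood
{
    static void Main()
    {
        // Saturate the local network stack by sending datagrams as fast as possible.
        // 127.0.0.1:9 (the discard port) is an arbitrary target; the point is to
        // consume bandwidth/CPU on the sending side, not to reach anything.
        using (var client = new UdpClient())
        {
            byte[] payload = new byte[1024]; // junk data
            while (true)
            {
                client.Send(payload, payload.Length, "127.0.0.1", 9);
            }
        }
    }
}
```

Note that flooding the loopback address mostly loads the local stack; whether it produces the desired lag on a real interface under Windows would need testing.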
Ok this might be a vague question, but I'll try to keep it specific!
In C#, I'm running one server that listens on a UDP socket. The server runs a Timer object to track each client's last-packet time, so it can decide to forget a client after 15 seconds of silence.
Each of the UDP clients launched (in its own process / terminal) ALSO runs its own Timer object to send a packet every now and then (more or less as a keep-alive).
Now, I ran a stress test and could "only" reach about 104 simultaneous UDP client connections in their own terminal / command-line windows. After that, it just gives up and shows me this:
So, being new to network programming and all, I don't have a clue which specific resource it ran out of, or what limit it reached, to make it hang at that particular nth process. It makes it difficult to understand how some frameworks claim to reach >100,000 concurrent connections!
Now, if I were to take a few guesses, I'm thinking I've run into a limit of:
UDP sockets?
Used up as many concurrent ports as the OS allows?
Timers the OS can process? (I sure hope not!)
Too many terminal windows open, especially from the same process *.exe?
Thread-Locking the wrong parts / at the wrong time?
Maybe my tests are just wrong / poorly conducted? (Maybe instead of testing individual UDP clients each in their own process, I should batch a few into the same terminal?)
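If the bottleneck turns out to be per-process or per-window overhead rather than the sockets themselves, one test variation is to host many clients in a single process. A rough sketch of that idea (the names, counts, and the keep-alive target are illustrative, not from the original code):

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

class BatchedClients
{
    static void Main()
    {
        const int clientCount = 500;
        var clients = new UdpClient[clientCount];
        byte[] keepAlive = { 0x01 };

        for (int i = 0; i < clientCount; i++)
            clients[i] = new UdpClient(); // each socket gets its own ephemeral port

        // One shared timer sends a keep-alive for every client, instead of
        // one Timer (and one process / terminal window) per client.
        var timer = new Timer(_ =>
        {
            foreach (var c in clients)
                c.Send(keepAlive, keepAlive.Length, "127.0.0.1", 11000);
        }, null, 0, 5000);

        Console.ReadLine(); // keep the process alive
    }
}
```

If a single process comfortably holds hundreds of sockets this way, the 104-client ceiling was likely a process/window limit, not a socket or port limit.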
For the more visual, here's basically what's happening:
I'm not sure how to approach this. I'm hesitant to show my code because it's a university assignment; I just need somewhere to start.
I'm making a TCP card game with four players and a server. Every 100ms, a player asks for an update from the server using a background worker. The server accepts a client connection and reads in an Enumeration value (sent as an Int32) that tells it which action the client wants the server to perform (update, card, player, etc.), followed by a value whose type depends on the Enumeration (receiving an Update Enumeration means it needs to read an Int32 next). The server sends back a response based on the Enumeration read in.
Here's where the problem occurs. I have a custom-built computer (2500K processor, Win8 x64), and when I execute the program on it, it loops forever, accepting client requests and sending the appropriate responses back. Exactly as expected! However, on my laptop (Lenovo YogaPad, Win8 x64) the back-and-forth exchange lasts for around 30-50 requests and then deadlocks. It's always at the same spot: the server has read in the Enumeration and is waiting for the second value, while the client is past sending the enum and value and is waiting for the result. It is always stable on my desktop and always deadlocks on my laptop. I even slowed the program down to update every second, and it still deadlocks. I'm not sure what to do.
I've built the program on each computer. I've also built it on my desktop and run it on my laptop, and it still deadlocks. Does anyone have any suggestions?
Many thanks!
You are lucky that the code hangs on one of your machines before you send the assignment in and it hangs on your teacher's machine. You are also lucky that the problem is reproducible, so you can find out where exactly it hangs. Without access to the code, I have the following wild guesses:
you forgot proper error handling somewhere, and it now busy-loops because of an unexpected error
it hangs inside a read where you try to read N bytes but the peer sends only K<N bytes
These are wild guesses, but without access to even the basic structure of your program, you probably cannot expect anything more.
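The second guess — a short read treated as a full read — is a classic TCP pitfall: Stream.Read may legitimately return fewer bytes than requested, and code that assumes one Read equals one message works on fast local loopback but stalls on slower hardware. A hedged sketch of the usual fix, a loop that reads exactly N bytes (the helper name is mine, not from the question):

```csharp
using System.IO;

static class StreamExtensions
{
    // Reads exactly count bytes into buffer, looping until the buffer
    // is full or the peer closes the connection.
    public static void ReadExactly(Stream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Peer closed the connection.");
            offset += read;
        }
    }
}
```

If the card-game server reads its Int32 enum and value with bare Read calls, wrapping them in a loop like this would be the first thing to try.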
I have some C# code that uploads files to my Apache server via HttpWebRequest. While an upload is in progress, I can use ls -la to watch the file size grow.
Now, if I, for example, pull my computer's network cable, the partial upload remains on the server.
However, if I simply close my c# app, the partial file is deleted!
I assume this is caused by my streams being closed gracefully. How can I prevent this behavior? I want my partial uploads to remain regardless of how the uploading app behaves.
I have attempted to use a destructor to abort my request stream, as well as call System.Environment.Exit(1), neither of which had any effect.
Pulling the network cable will never be equivalent to aborting the stream or closing the socket, because it is a failure at a lower OSI layer.
Whenever the application is closed, the networking session is aborted and any pending operation cancelled. I don't think there's any workaround, unless you programmatically split the file transfer into smaller chunks and save them as you go along (this way you'd have a manual incremental transfer, but it requires some code server-side).
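A sketch of the chunked idea on the client side (the URL, query parameter, and chunk size are placeholders; the server would need a handler that appends or stores each chunk):

```csharp
using System.IO;
using System.Net;

class ChunkedUploader
{
    const int ChunkSize = 256 * 1024; // 256 KB per request (arbitrary)

    static void Upload(string path)
    {
        byte[] buffer = new byte[ChunkSize];
        using (var file = File.OpenRead(path))
        {
            int read;
            int index = 0;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // One independent HTTP request per chunk: if the app dies,
                // every previously completed chunk survives on the server.
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://example.com/upload?chunk=" + index++);
                request.Method = "POST";
                request.ContentLength = read;
                using (var body = request.GetRequestStream())
                    body.Write(buffer, 0, read);
                using (request.GetResponse()) { } // confirm the chunk was accepted
            }
        }
    }
}
```

The trade-off is more requests and some server-side reassembly logic, but an interrupted transfer then loses at most one chunk instead of the whole file.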
Write a very simple HTTP proxy that keeps accepting connections but never closes a connection to your server
Even simpler, using netcat 1.10 (though this will accept just one connection)
nc -q $FOREVER -l -p 12345 -c 'nc $YOUR_SERVER 80'
Then connect your C# client to localhost:12345
This might be a silly suggestion, but what if you call Process.GetCurrentProcess().Kill(); while the application is being closed?
Before looking at processing partial uploads, start by testing whether turning on keepalives in the Apache configuration solves your problem of receiving partial uploads.
This may result in fewer disconnects, and thus less need to process partial data. Such disconnects may be due to the client or the server, but often they are due to an intermediate node such as a firewall. The keepalive option maintains a steady trickle of "dummy" traffic (0-byte data payloads), advertising to all parties that the connection is still alive.
For a large site with heavy concurrent load, keepalives are a bad thing which is why they are off by default. The option makes connection management for Apache much more complicated, preventing optimized connection reuse, and there is also a little bit of extra network traffic. But maybe you have a specialized use case where this is not a concern.
Keepalives will not help you at all if your clients simply tend to crash too soon (that is, if you see steady progress on the uploads at all times). They may help considerably if the issue is network-related.
They will help you tremendously if your clients generate the data gradually, with long delays in between uploaded chunks.
Have you checked whether your application steps into
void FinishUpload(IAsyncResult result) {…}
(line 240) when aborting/killing the app? If so, you might consider not entering the callback. This is a bit dirty, but it may give you a place to start digging.
Does Apache support the SendChunked property of HttpWebRequest? If so, it is worth trying out.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.sendchunked.aspx
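For reference, enabling it is only a couple of lines (the URL is a placeholder):

```csharp
using System.Net;

class ChunkedRequest
{
    static HttpWebRequest Create()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
        request.Method = "POST";
        // Chunked transfer encoding: no Content-Length is sent up front,
        // so the server can process data as it arrives.
        request.SendChunked = true;
        request.AllowWriteStreamBuffering = false; // stream rather than buffer in memory
        return request;
    }
}
```

Whether Apache then keeps the partially received body after an abort still depends on the server-side handler, so this would need testing against the actual setup.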
My program starts with Windows startup, and a background worker is supposed to run immediately after the program opens. But it starts with a delay, and at first it even returns false results (it reports whether a site is up). Only after about 15 seconds does the background worker start working normally, and the program too. I think this is because the .NET Framework is still loading, or the internet connection is not up yet, or something else that hasn't loaded yet (Windows startup).
What can solve this, and what is the probable cause? (WinForm C#)
Edit:
Here is something I thought of, though I don't think it's good practice. Is there a better way?
(Load method):
while (!netConnection())
{
    // busy-wait: spins the UI thread at 100% CPU until the network is up
}
bwCheck.RunWorkerAsync();
I think this is because of .net framework trying to load
Nope. If that were the case your program wouldn't run.
or internet connection that is not up yet, or
Yup. The network card/interface/connection/whatever is not initialized and connected to the internet yet. You can't expect a PC to be connected to the internet immediately at startup. Even more: what if your customer is on a domain using network authentication? What if they delay network communications until some task is complete? (This was actually the problem in my case below. Seriously.)
It may take even longer to get it up and running in that case (read: don't add a Thread.Sleep() in a vain attempt to 'fix' the issue).
I had to fix a problem like this once in a systems design where we communicated to a motion control board via the ethernet bus in a PC. I ended up adding some code to monitor the status of the network connection and, only when it was established, started talking to the device via the network card.
EDIT: As SLaks pointed out in the comments, this is pretty simple in C#: The NetworkAvailabilityChanged event for your programming pleasure.
It is absolutely because everything is still starting up. Services can still be coming online long after you log in; the quick login dialog you see was an optimization in Windows to let you log in while everything else is still starting up.
Take note of
How to detect working internet connection in C#?
specifically a technique that avoids the loopback adapter:
System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable()
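Combining both suggestions, here is a sketch that starts the worker as soon as the network comes up, instead of busy-waiting (bwCheck is the BackgroundWorker from the question; this would go in the form's Load handler):

```csharp
using System.Net.NetworkInformation;

// Check once at startup; if the network is already up, start immediately.
if (NetworkInterface.GetIsNetworkAvailable())
{
    bwCheck.RunWorkerAsync();
}
else
{
    // Otherwise, wait for the OS to report that an interface came up.
    // Note: this event fires on a thread-pool thread, not the UI thread.
    NetworkChange.NetworkAvailabilityChanged += (s, e) =>
    {
        if (e.IsAvailable)
            bwCheck.RunWorkerAsync();
    };
}
```

This replaces the spinning while loop entirely: the program stays responsive and the worker starts the moment connectivity is reported.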
The requirements of the TCP server:
receive from each client and send the result back to the same client (that is all the server does)
cater for 100 clients
speed is an important factor, i.e. even at 100 client connections, it should not be laggy.
For now I have been using the C# async method, but I find that I always encounter lag at around 20 connections. By laggy I mean it takes around 15-20 seconds to get the result. At around 5-10 connections, the result comes back almost immediately.
Actually, when the TCP server gets a message, it interacts with a DLL that does some processing to return a result. I'm not exactly sure about the workflow behind it, but at small scale you don't see any problem, so I thought the problem might be with my TCP server.
Right now, I'm thinking of using a sync method: I would have a while loop blocking on the accept method and spawn a new thread for each client after accept. But at 100 connections, that seems like overkill.
I chanced upon IOCP; I'm not exactly sure about it, but it seems to be something like a connection pool, since the way it handles TCP looks quite like the normal way.
For these TCP methods, I'm also not sure whether it is better to open and close the connection each time a message needs to be passed. On average, each client sends a message at around a 5-10 minute interval.
Another alternative might be to use the web (I'm looking at a generic handler) to form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I need advice, especially from those who have done TCP at large scale. I don't have 100 PCs to test with, so it's quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but I will consider porting to C++ for speed.
You must be doing it wrong. I personally wrote C# based servers that could handle 1000+ connections, sending more than 1 message per second, with <10ms response time, on commodity hardware.
If you have such high response times it must be your server process that is causing blocking. Perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread pool exhaustion. Unfortunately, there are plenty of ways to screw this up, and only few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, going over the SocketAsyncEventArgs example which covers the most performant way of writing socket apps in .Net since the advent of Socket Performance Enhancements in Version 3.5 and so on and so forth.
If you find yourself lost at the task ahead (as it seems you happen to be) I would urge you to embrace an established communication framework, perhaps WCF with a net binding, and use the declarative service model programming of WCF. This way you'll piggyback on the WCF performance. While this may not be enough for some, it will get you far enough, much further than you are right now for sure, with regard to performance.
I don't see why C# should be any worse than C++ in this situation - chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming that workload for each thread is more I/O bound than CPU intensive. Whether you spawn off a thread per connection or use a thread pool to manage a number of threads is another matter - and something to determine through experimentation and also whilst considering whether 100 clients is your maximum!
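A minimal thread-per-client sketch of the kind suggested above (the port and the echo handler are placeholders for the real request/response logic, e.g. the DLL call). At 100 mostly idle connections, 100 blocked threads is well within what Windows handles comfortably, so this is a reasonable baseline before reaching for IOCP or SocketAsyncEventArgs:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ThreadPerClientServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 12345);
        listener.Start();
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient(); // blocks until a client connects
            new Thread(() => Handle(client)) { IsBackground = true }.Start();
        }
    }

    static void Handle(TcpClient client)
    {
        using (client)
        using (var stream = client.GetStream())
        {
            var buffer = new byte[4096];
            int read;
            // Echo back whatever the client sends — stand-in for the
            // real "receive, process via DLL, reply" workflow.
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                stream.Write(buffer, 0, read);
        }
    }
}
```

If latency still climbs with this design, the bottleneck is almost certainly in the per-request processing (e.g. a lock around the DLL call), not in the socket layer.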