Writing To A File With Multiple Streams C#

I am trying to download a large file (>1 GB) from one server to another over HTTP. To do this I am making HTTP range requests in parallel, so that different parts of the file download simultaneously.
When saving to disk I take each response stream, open the same file as a file stream, seek to the range I want and then write.
However, I find that all but one of my response streams time out. It looks like the disk I/O cannot keep up with the network I/O. However, if I do the same thing but have each thread write to a separate file, it works fine.
For reference, here is my code writing to the same file:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//List populated here
Parallel.For(0, numberOfStreams, (index, state) =>
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
        using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
        {
            using (FileStream fileStream = File.Open(fileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write))
            {
                fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
                byte[] buffer = new byte[64 * 1024];
                int bytesRead;
                while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (state.IsStopped)
                    {
                        return;
                    }
                    fileStream.Write(buffer, 0, bytesRead);
                }
            }
        }
    }
    catch (Exception e)
    {
        exception = e;
        state.Stop();
    }
});
And here is the code writing to multiple files:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//List populated here
Parallel.For(0, numberOfStreams, (index, state) =>
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
        using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
        {
            using (FileStream fileStream = File.Open(fileName + "." + index + ".tmp", FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write))
            {
                fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
                byte[] buffer = new byte[64 * 1024];
                int bytesRead;
                while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (state.IsStopped)
                    {
                        return;
                    }
                    fileStream.Write(buffer, 0, bytesRead);
                }
            }
        }
    }
    catch (Exception e)
    {
        exception = e;
        state.Stop();
    }
});
My question is this: are there additional checks or actions that C#/Windows performs when writing to a single file from multiple threads that would make the file I/O slower than writing to multiple files? All disk operations should be bound by the disk speed, right? Can anyone explain this behavior?
Thanks in advance!
UPDATE: Here is the error the source server is throwing:
"Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
[System.IO.IOException]: "Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
InnerException: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"
Message: "Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
StackTrace: " at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)\r\n at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)\r\n

Unless you're writing to a striped RAID, you're unlikely to see a performance benefit from writing to the file from multiple threads concurrently. In fact, it's more likely to be the opposite: the concurrent writes get interleaved and cause random access, incurring disk seek latencies that make them orders of magnitude slower than large sequential writes.
To get a sense of perspective, look at some latency comparisons. A sequential 1 MB read from disk takes about 20 ms, and writes take approximately the same time. Each disk seek, on the other hand, takes around 10 ms. If your writes are interleaved in 4 KB chunks, then your 1 MB write requires an additional 2560 ms of seek time (256 seeks at 10 ms each), making it over 100 times slower than a sequential write.
I would suggest only allowing one thread to write to the file at any time, and using parallelism just for the network transfer. You can use a producer-consumer pattern where downloaded chunks are written to a bounded concurrent collection (such as BlockingCollection<T>), and then picked up and written to disk by a dedicated writer thread.
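A minimal sketch of that producer-consumer arrangement, reusing fileName, ranges and numberOfStreams from the question (it needs the System.Collections.Concurrent, System.IO, System.Net and System.Threading.Tasks namespaces); the bound of 8 pending chunks, the 64 KB read buffer and the AddRange call are assumptions rather than anything from the original code:
// Downloader threads push (offset, data) pairs into a bounded collection;
// a single writer thread seeks and writes them, so the file is only open once.
var chunks = new BlockingCollection<Tuple<long, byte[]>>(boundedCapacity: 8);

Task writer = Task.Run(() =>
{
    using (var fs = new FileStream(fileName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        // optionally pre-size the file here with fs.SetLength(totalSize)
        foreach (var chunk in chunks.GetConsumingEnumerable())
        {
            fs.Seek(chunk.Item1, SeekOrigin.Begin);
            fs.Write(chunk.Item2, 0, chunk.Item2.Length);
        }
    }
});

Parallel.For(0, numberOfStreams, index =>
{
    var request = (HttpWebRequest)WebRequest.Create("Some URL");
    request.AddRange(ranges[index].Item1, ranges[index].Item2);
    using (var responseStream = request.GetResponse().GetResponseStream())
    {
        long offset = ranges[index].Item1;
        var buffer = new byte[64 * 1024];
        int bytesRead;
        while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            var copy = new byte[bytesRead];
            Buffer.BlockCopy(buffer, 0, copy, 0, bytesRead);
            chunks.Add(Tuple.Create(offset, copy));   // blocks when the bound is reached
            offset += bytesRead;
        }
    }
});

chunks.CompleteAdding();   // tell the writer no more chunks are coming
writer.Wait();
The bounded capacity is what provides back-pressure: if the disk falls behind, the download threads block on Add() instead of piling data up in memory.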

fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
That Seek() call is a problem: you seek to a part of the file that is very far beyond the current end-of-file. Your next fileStream.Write() call forces the file system to extend the file on disk, filling the unwritten parts of it with zeros.
This can take a while, and your thread is blocked until the file system is done extending the file. That may well be long enough to trigger a timeout, and you would see it go wrong right at the start of the transfer.
A workaround is to create and fill the entire file before you start writing the real data. This is otherwise a very common strategy used by downloaders; you might have seen .part files before. Another nice benefit is that you get a decent guarantee that the transfer cannot fail because the disk ran out of space. Beware that filling a file with zeros is only cheap when the machine has enough RAM; 1 GB should not be a problem on modern machines.
Repro code:
using System;
using System.IO;
using System.Diagnostics;
class Program {
    static void Main(string[] args) {
        string path = @"c:\temp\test.bin";
        var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Write);
        fs.Seek(1024L * 1024 * 1024, SeekOrigin.Begin);
        var buf = new byte[4096];
        var sw = Stopwatch.StartNew();
        fs.Write(buf, 0, buf.Length);
        sw.Stop();
        Console.WriteLine("Writing 4096 bytes took {0} milliseconds", sw.ElapsedMilliseconds);
        Console.ReadKey();
        fs.Close();
        File.Delete(path);
    }
}
Output:
Writing 4096 bytes took 1491 milliseconds
That was on a fast SSD; a spindle drive is going to take much longer.
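A sketch of that pre-fill workaround, assuming the total size is already known (for example from a HEAD request or the Content-Length header); the .part suffix and the 1 MB zero block are arbitrary choices, not part of the original answer:
// Create the ".part" file and fill it with zeros up front, so the expensive
// extension of the file happens once instead of on a download thread.
static void PreAllocate(string partFileName, long fileSize)
{
    using (var fs = new FileStream(partFileName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        var zeros = new byte[1024 * 1024];              // 1 MB of zeros, written repeatedly
        long remaining = fileSize;
        while (remaining > 0)
        {
            int toWrite = (int)Math.Min(zeros.Length, remaining);
            fs.Write(zeros, 0, toWrite);
            remaining -= toWrite;
        }
    }
}
// The download threads can then open the .part file and Seek()/Write() into it
// without ever extending it; rename it to the final name once they all finish.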

Here's my guess from the information given so far:
On Windows, when you write to a position that extends the file size, Windows needs to zero-initialize everything that comes before it. This prevents old disk data from leaking, which would be a security problem.
Probably, all but your first thread need to zero-initialize so much data that the download times out. This is not really streaming anymore, because the first write takes ages.
If you have the LPIM privilege you can avoid the zero initialization; otherwise you cannot, for security reasons. Free Download Manager, for example, shows a message that it is zero-initializing at the start of each download.

So after trying all the suggestions, I ended up using a MemoryMappedFile and opening a stream to write to the MemoryMappedFile on each thread:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//Ranges list populated here
using (MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(fileName, FileMode.OpenOrCreate, null, fileSize.Value, MemoryMappedFileAccess.ReadWrite))
{
    Parallel.For(0, numberOfStreams, index =>
    {
        try
        {
            HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
            using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
            {
                using (MemoryMappedViewStream fileStream = mmf.CreateViewStream(ranges[index].Item1, ranges[index].Item2 - ranges[index].Item1 + 1, MemoryMappedFileAccess.Write))
                {
                    responseStream.CopyTo(fileStream);
                }
            }
        }
        catch (Exception e)
        {
            exception = e;
        }
    });
}

System.Net.Sockets.NetworkStream.Write
The stack trace shows that the error happens while writing to the server. It is a timeout. This can happen because of
network failure/overloading
an unresponsive server.
This is not an issue with writing to a file. Analyze the network and the server. Maybe the server is not ready for concurrent usage.
Prove this theory by disabling writing to the file. The error should remain.
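One quick way to do that, as a sketch: keep the request and the read exactly as they are, but send the bytes to Stream.Null instead of the FileStream, so the network side is unchanged while the disk is taken out of the picture.
using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
{
    responseStream.CopyTo(Stream.Null);   // reads at full network speed, writes nowhere
}
If the timeout still occurs with this version, the problem is on the network or server side rather than in the file I/O.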

Related

WebRequest fails to download large files (~ 1 GB) properly

I am attempting to download a large file from a public URL. It seemed to work fine at first, but 1 in 10 computers seems to time out. My initial attempt was to use WebClient.DownloadFileAsync, but because it would never complete I fell back to using WebRequest.Create and reading the response streams directly.
My first version using WebRequest.Create hit the same problem as WebClient.DownloadFileAsync: the operation times out and the file does not complete.
My next version added retries if the download times out. Here is where it gets weird. The download does eventually finish, with 1 retry to finish up the last 7092 bytes. So the file is downloaded with exactly the same size, BUT the file is corrupt and differs from the source file. Now I would expect the corruption to be in the last 7092 bytes, but this is not the case.
Using BeyondCompare I have found that there are 2 chunks of bytes missing from the corrupt file, totalling up to the missing 7092 bytes! These missing bytes are at 1CA49FF0 and 1E31F380, way, way before the download times out and is restarted.
What could possibly be going on here? Any hints on how to track down this problem further?
Here is the code in question.
public void DownloadFile(string sourceUri, string destinationPath)
{
    //roughly based on: http://stackoverflow.com/questions/2269607/how-to-programmatically-download-a-large-file-in-c-sharp
    //not using WebClient.DownloadFileAsync as it seems to stall out on large files rarely for unknown reasons.
    using (var fileStream = File.Open(destinationPath, FileMode.Create, FileAccess.Write, FileShare.Read))
    {
        long totalBytesToReceive = 0;
        long totalBytesReceived = 0;
        int attemptCount = 0;
        bool isFinished = false;
        while (!isFinished)
        {
            attemptCount += 1;
            if (attemptCount > 10)
            {
                throw new InvalidOperationException("Too many attempts to download. Aborting.");
            }
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(sourceUri);
                request.Proxy = null; //http://stackoverflow.com/questions/754333/why-is-this-webrequest-code-slow/935728#935728
                _log.AddInformation("Request #{0}.", attemptCount);
                //continue downloading from last attempt.
                if (totalBytesReceived != 0)
                {
                    _log.AddInformation("Request resuming with range: {0} , {1}", totalBytesReceived, totalBytesToReceive);
                    request.AddRange(totalBytesReceived, totalBytesToReceive);
                }
                using (var response = request.GetResponse())
                {
                    _log.AddInformation("Received response. ContentLength={0} , ContentType={1}", response.ContentLength, response.ContentType);
                    if (totalBytesToReceive == 0)
                    {
                        totalBytesToReceive = response.ContentLength;
                    }
                    using (var responseStream = response.GetResponseStream())
                    {
                        _log.AddInformation("Beginning read of response stream.");
                        var buffer = new byte[4096];
                        int bytesRead = responseStream.Read(buffer, 0, buffer.Length);
                        while (bytesRead > 0)
                        {
                            fileStream.Write(buffer, 0, bytesRead);
                            totalBytesReceived += bytesRead;
                            bytesRead = responseStream.Read(buffer, 0, buffer.Length);
                        }
                        _log.AddInformation("Finished read of response stream.");
                    }
                }
                _log.AddInformation("Finished downloading file.");
                isFinished = true;
            }
            catch (Exception ex)
            {
                _log.AddInformation("Response raised exception ({0}). {1}", ex.GetType(), ex.Message);
            }
        }
    }
}
Here is the log output from the corrupt download:
Request #1.
Received response. ContentLength=939302925 , ContentType=application/zip
Beginning read of response stream.
Response raised exception (System.Net.WebException). The operation has timed out.
Request #2.
Request resuming with range: 939295833 , 939302925
Received response. ContentLength=7092 , ContentType=application/zip
Beginning read of response stream.
Finished read of response stream.
Finished downloading file.
This is the method I usually use; it hasn't failed me so far for the same kind of loading you need. Try using my code to change yours up a bit and see if that helps.
if (!Directory.Exists(localFolder))
{
    Directory.CreateDirectory(localFolder);
}
try
{
    HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(Path.Combine(uri, filename));
    httpRequest.Method = "GET";
    // if the URI doesn't exist, exception gets thrown here...
    using (HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse())
    {
        using (Stream responseStream = httpResponse.GetResponseStream())
        {
            using (FileStream localFileStream =
                new FileStream(Path.Combine(localFolder, filename), FileMode.Create))
            {
                var buffer = new byte[4096];
                long totalBytesRead = 0;
                int bytesRead;
                while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytesRead += bytesRead;
                    localFileStream.Write(buffer, 0, bytesRead);
                }
            }
        }
    }
}
catch (Exception ex)
{
    throw;
}
You should change the timeout settings. There seem to be two possible timeout issues:
Client-side timeout - try changing the timeouts on the WebClient/HttpWebRequest. I find that for large file downloads I sometimes need to do that.
Server-side timeout - try changing the timeout on the server. You can verify that this is the problem using another client, e.g. Postman.
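For the HttpWebRequest code in the question, the relevant client-side knobs are, as far as I know, Timeout and ReadWriteTimeout (the 30-minute values below are arbitrary). WebClient itself exposes no public timeout property, so the usual approach there is to subclass it and set these on the request it creates:
var request = (HttpWebRequest)WebRequest.Create(sourceUri);
request.Timeout = 30 * 60 * 1000;            // time allowed to connect and receive the response headers
request.ReadWriteTimeout = 30 * 60 * 1000;   // time allowed between reads on the response stream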
To me, the way you read the file into the buffer looks odd.
Maybe the problem is that you do
while (bytesRead > 0)
What if, for some reason, the stream doesn't return any bytes at some point even though the download hasn't finished yet? Then the loop would exit and never come back. You should get the Content-Length, and increment a variable totalBytesReceived by bytesRead. Finally you change the loop to
while(totalBytesReceived < ContentLength)
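A sketch of what that loop could look like, reusing the question's variable names; treating a zero-byte read before the full Content-Length has arrived as a broken connection is an assumption here, so that the outer retry loop can resume with a range request instead of spinning:
// Read until the byte count matches Content-Length rather than until Read() returns 0.
while (totalBytesReceived < totalBytesToReceive)
{
    int bytesRead = responseStream.Read(buffer, 0, buffer.Length);
    if (bytesRead == 0)
        throw new IOException("Connection closed before the full content arrived.");
    fileStream.Write(buffer, 0, bytesRead);
    totalBytesReceived += bytesRead;
}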
Also note what happens if you allocate a buffer bigger than the expected file size, instead of a fixed one like
byte[] byteBuffer = new byte[65536];
If the file is 1 GiB in size, you allocate a 1 GiB buffer and then try to fill the whole buffer in one call. That fill may return fewer bytes, but you have still allocated the whole buffer. Note that the maximum length of a single array in .NET is limited by a 32-bit index, so this breaks down for very large files even if you recompile your program for 64-bit and actually have enough memory available.

File Chunking Performance in C#

I am trying to empower users to upload large files. Before I upload a file, I want to chunk it up. Each chunk needs to be a C# object. The reason is logging; it's a long story, but I need to create actual C# objects that represent each file chunk. Regardless, I'm trying the following approach:
public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    List<FileChunk> chunks = new List<FileChunk>();
    if (fileBytes.Length > 0)
    {
        FileChunk chunk = new FileChunk();
        for (int i = 0; i < (fileBytes.Length / 512); i++)
        {
            chunk.Number = (i + 1);
            chunk.Offset = (i * 512);
            chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
            chunks.Add(chunk);
            chunk = new FileChunk();
        }
    }
    return chunks;
}
Unfortunately, this approach seems to be incredibly slow. Does anyone know how I can improve the performance while still creating objects for each chunk?
thank you
I suspect this is going to hurt a little:
chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
Try this instead:
byte[] buffer = new byte[512];
Buffer.BlockCopy(fileBytes, chunk.Offset, buffer, 0, 512);
chunk.Bytes = buffer;
(Code not tested)
And the reason this code is likely slow is that Skip doesn't do anything special for arrays (though it could). This means that every pass through your loop iterates over the first 512*n items in the array, which results in O(n^2) performance where you should just be seeing O(n).
Try something like this (untested code):
public static List<FileChunk> GetAllForFile(string fileName)
{
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        int i = 0;
        while (stream.Position < stream.Length)
        {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = i * 512;
            var buffer = new byte[512];
            int bytesRead = stream.Read(buffer, 0, buffer.Length);
            if (bytesRead < buffer.Length)
            {
                Array.Resize(ref buffer, bytesRead);   // last chunk of a file that isn't a multiple of 512
            }
            chunk.Bytes = buffer;
            chunks.Add(chunk);
            i++;
        }
    }
    return chunks;
}
The above code skips several steps in your process, preferring to read the bytes from the file directly.
Note that, if the file is not an even multiple of 512, the last chunk will contain less than 512 bytes.
Same as Robert Harvey's answer, but using a BinaryReader so that I don't need to specify an offset. If you use a BinaryWriter on the other end to reassemble the file, you won't need the Offset member of FileChunk.
public static List<FileChunk> GetAllForFile(string fileName) {
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open))
    using (BinaryReader reader = new BinaryReader(stream)) {
        int i = 0;
        bool eof = false;
        while (!eof) {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = reader.ReadBytes(512);
            chunks.Add(chunk);
            i++;
            if (chunk.Bytes.Length < 512) { eof = true; }
        }
    }
    return chunks;
}
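For completeness, a possible reassembly counterpart on the receiving end (a sketch, not from the original answer); it assumes the chunks can be ordered by Number and needs a using System.Linq directive for OrderBy:
// Write the chunks back out in Number order to rebuild the original file.
public static void WriteAllToFile(List<FileChunk> chunks, string fileName)
{
    using (var stream = new FileStream(fileName, FileMode.Create))
    using (var writer = new BinaryWriter(stream))
    {
        foreach (var chunk in chunks.OrderBy(c => c.Number))
        {
            writer.Write(chunk.Bytes);   // BinaryWriter.Write(byte[]) writes the raw bytes, no length prefix
        }
    }
}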
Have you thought about what you're going to do to compensate for packet loss and data corruption?
Since you mentioned that the load is taking a long time, I would use asynchronous file reading to speed up the loading process. The hard disk is the slowest component of a computer. Google does asynchronous reads and writes in Google Chrome to improve its load times. I had to do something like this in C# in a previous job.
The idea would be to spawn several asynchronous requests over different parts of the file. Then, when a request completes, take the byte array and create your FileChunk objects, taking 512 bytes at a time. There are several benefits to this:
If you run this in a separate thread, then you won't have the whole program waiting to load the large file you have.
You can process a byte array, creating FileChunk objects, while the hard disk is still trying to fulfill read requests on other parts of the file.
You will save on RAM if you limit the number of pending read requests you can have. This causes fewer page faults to the hard disk and uses the RAM and CPU cache more efficiently, which speeds up processing further.
You would want to use the following methods in the FileStream class.
[HostProtectionAttribute(SecurityAction.LinkDemand, ExternalThreading = true)]
public virtual IAsyncResult BeginRead(
    byte[] buffer,
    int offset,
    int count,
    AsyncCallback callback,
    Object state
)

public virtual int EndRead(
    IAsyncResult asyncResult
)
Also this is what you will get in the asyncResult:
// Extract the FileStream (state) out of the IAsyncResult object
FileStream fs = (FileStream) ar.AsyncState;
// Get the result
Int32 bytesRead = fs.EndRead(ar);
Here is some reference material for you to read.
This is a code sample of working with Asynchronous File I/O Models.
This is a MS documentation reference for Asynchronous File I/O.
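To make the call shape concrete, here is a minimal sketch of the BeginRead/EndRead pattern described above, reading one 512-byte chunk at a time. The file path is a placeholder, and unlike the suggestion of several overlapping requests, this version keeps only one read in flight so the pattern stays easy to follow:
// Each completed read hands its bytes to the callback, where FileChunk objects
// could be built while the next read is being queued.
var fs = new FileStream(@"C:\path\to\file.bin", FileMode.Open, FileAccess.Read,
                        FileShare.Read, bufferSize: 4096, useAsync: true);
var buffer = new byte[512];

AsyncCallback onRead = null;
onRead = ar =>
{
    int bytesRead = fs.EndRead(ar);
    if (bytesRead == 0) { fs.Dispose(); return; }   // end of file

    // ... build a FileChunk from buffer[0..bytesRead) here ...

    fs.BeginRead(buffer, 0, buffer.Length, onRead, null);   // queue the next read
};

fs.BeginRead(buffer, 0, buffer.Length, onRead, null);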

Antivirus significantly slowing down writing to disk, is there a workaround that prevents storing data in memory?

Let's say I'm receiving a file over a socket stream, 1024 bytes at a time. Each time I write to the hard disk, my antivirus scans the entire file. The bigger the file gets, the longer it takes to write the next 1024 bytes. Not to mention the "file is in use by another process" errors.
My workaround at the moment is to store the bytes in a byte array in memory, up to X megabytes (user defined); the byte array is appended to the file on the hard disk every time it fills up.
byte[] filebytearray = new byte[filesize]; //Store entire file in this byte array.
do
{
    serverStream = clientSocket.GetStream();
    bytesRead = serverStream.Read(inStream, 0, buffSize); //How many bytes did we just read from the stream?
    recstrbytes = new byte[bytesRead]; //Final byte array this loop
    Array.Copy(inStream, recstrbytes, bytesRead); //Copy from inStream to the final byte array this loop
    Array.Copy(recstrbytes, 0, filebytearray, received, bytesRead); //Copy the data from the final byte array this loop to filebytearray
    received += recstrbytes.Length; //Increment bytes received
} while (received < filesize);
addToBinary(filebytearray, @"C:\test\test.exe"); //Append filebytearray to binary
(In this simplified example it just stores the entire file in memory before unloading it to the hard disk.)
But I absolutely hate this method because it significantly increases the memory my program uses.
How do other programmers tackle this issue? When I download with Firefox, for example, it just downloads at full speed, my AV doesn't seem to pick the file up until it's done, and it barely increases the process's memory usage. What's the big secret here?
Append to binary function I am using (WIP):
private bool addToBinary(byte[] msg, string filepath)
{
    Console.WriteLine("Appending " + msg.Length + " bytes of data.");
    bool succ = false;
    do
    {
        try
        {
            using (Stream fileStream = new FileStream(filepath, FileMode.Append, FileAccess.Write, FileShare.None))
            {
                fileStream.Write(msg, 0, msg.Length);
                fileStream.Flush();
                fileStream.Close();
            }
            succ = true;
        }
        catch (IOException ex) { /*Console.WriteLine("Write Exception (addToBinary) : " + ex.Message);*/ }
        catch (Exception ex) { Console.WriteLine("Some Exception occured (addToBinary) : " + ex.Message); return false; }
    } while (!succ);
    return true;
}
I see that you reopen the file every time you write data. Why not keep the file stream open? Every time you close it, the antivirus scans it, because it was modified.
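As a sketch of that suggestion, using the same variables as the snippet above: open the destination once before the receive loop and write each buffer straight through, so the file is only closed (and rescanned) once at the end.
using (var fileStream = new FileStream(@"C:\test\test.exe", FileMode.Create, FileAccess.Write, FileShare.None))
{
    do
    {
        bytesRead = serverStream.Read(inStream, 0, buffSize);
        fileStream.Write(inStream, 0, bytesRead);   // no large in-memory copy of the whole file
        received += bytesRead;
    } while (received < filesize);
}   // the stream is closed, and the file scanned, only once, here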
And one suggestion: the WriteLine function supports format strings, much like printf in C, so instead of doing:
Console.WriteLine("Appending "+msg.Length+" bytes of data.");
you could do:
Console.WriteLine("Appending {0} bytes of data.", msg.Length);
This can really save you time sometimes.
First, you can use a memory stream.
Second, you have to write to disk at some point; just do it in the background so the user won't notice.
Make a concurrent queue of memory streams, and create a handler that tries to empty the queue.
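One possible shape for that idea, sketched with a BlockingCollection<byte[]> standing in for the concurrent queue; the background task drains it and appends to the file, so the socket loop never waits on the disk:
var pending = new BlockingCollection<byte[]>();

Task flusher = Task.Run(() =>
{
    using (var fs = new FileStream(@"C:\test\test.exe", FileMode.Append, FileAccess.Write, FileShare.None))
    {
        foreach (var block in pending.GetConsumingEnumerable())
            fs.Write(block, 0, block.Length);
    }
});

// in the receive loop:      pending.Add(recstrbytes);
// when the transfer ends:   pending.CompleteAdding(); flusher.Wait();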
You could add exclusions to your antivirus to stop it interfering. If you want the data to be scanned, then download it to an excluded folder and then move it to a folder (that will be scanned) when the file is complete.
Other approaches would be to buffer the data so you are not writing in tiny 1 KB increments, and to hold the file open until you have finished writing.
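A sketch of that second approach: hold one FileStream open for the whole transfer and wrap it in a BufferedStream so the disk sees large writes instead of 1 KB ones (the 1 MB buffer size is an arbitrary choice):
using (var file = new FileStream(@"C:\test\test.exe", FileMode.Create, FileAccess.Write, FileShare.None))
using (var buffered = new BufferedStream(file, 1024 * 1024))   // 1 MB write buffer
{
    int bytesRead;
    while ((bytesRead = serverStream.Read(inStream, 0, buffSize)) > 0)
        buffered.Write(inStream, 0, bytesRead);
}   // BufferedStream flushes and the file closes once, at the end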

Insufficient system resources exist to complete the requested service

I have a Web Application hosted in IIS 6 on a Windows Server 2003 box and have to handle 2 large PDF files of around 7-8 MB each. These files are read by the website from a network share and the bytes are passed to a WCF service for saving elsewhere.
here is the code I use to read the Files:
public static byte[] ReadFile(string filePath)
{
    int count;
    int sum = 0;
    byte[] buffer;
    FileStream stream = new FileStream(filePath, FileMode.Open, FileAccess.Read);
    try
    {
        int length = (int)stream.Length;
        buffer = new byte[length];
        while ((count = stream.Read(buffer, sum, length - sum)) > 0)
            sum += count;
        return buffer;
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        stream.Close();
        stream.Dispose();
    }
}
An error is thrown on the stream.Read() and the error is:
Insufficient system resources exist to complete the requested service
This code works in my dev environment but as soon as I post to our production environment we get this error message.
I have seen that this error has surfaced a few times when searching around, and the workaround for it is to use File.Move(), but we cannot do this as the file needs to be passed to a WCF service method.
Is there something in IIS 6 that needs to be changed to allow holding 15-20 MB in memory when reading a file, or is there something else that needs to be configured?
Any ideas?
See this:
Why I need to read file piece by piece to buffer?
It seems you are reading the whole file into memory, without buffering:
buffer = new byte[length];
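A sketch of what reading piece by piece could look like here; destination stands in for wherever the bytes go next, and streaming them into the WCF call would additionally require a streamed binding and a Stream-typed contract, which is an assumption about that service:
// Copy the file through a small buffer instead of materialising it as one byte[].
public static void CopyFileInChunks(string filePath, Stream destination)
{
    using (var stream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        var buffer = new byte[64 * 1024];
        int count;
        while ((count = stream.Read(buffer, 0, buffer.Length)) > 0)
            destination.Write(buffer, 0, count);
    }
}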
Best regards.

Under what conditions does a NetworkStream not read in all the data at once?

In the callback for NetworkStream.BeginRead I seem to notice that all bytes are always read. I see many tutorials check to see if the BytesRead is less than the total bytes and if so, read again, but this never seems to be the case.
The condition if (bytesRead < totalBytes) never fires, even if a lot of data is sent at once (thousands of characters) and even if the buffer size is set to a very small value (16 or so).
I have not tested this with the 'old-fashioned way' as I am using Task.Factory.FromAsync instead of calling NetworkStream.BeginRead and providing a callback where I call EndRead. Perhaps Tasks automatically include this functionality of not returning until all data is read? I'm not sure.
Either way, I am still curious as to when all data would not be read at once. Is it even required to check if not all data was read, and if so, read again? I cannot seem to get the conditional to ever run.
Thanks.
Try sending megabytes of data over a slow link. Why would the stream want to wait until it was all there before giving the caller any of it? What if the other side hadn't closed the connection - there is no concept of "all the data" at that point.
Suppose you open a connection to another server and call BeginRead (or Read) with a large buffer, but it only sends 100 bytes, then waits for your reply - what would you expect NetworkStream to do? Never give you the data, because you gave it too big a buffer? That would be highly counterproductive.
You should absolutely not assume that any stream (with the arguable exception of MemoryStream) will fill the buffer you give it. It's possible that FileStream always will for local files, but I'd expect it not to for shared files.
EDIT: Sample code which shows the buffer not being filled - making an HTTP 1.1 request (fairly badly :)
// Please note: this isn't nice code, and it's not meant to be. It's just quick
// and dirty to demonstrate the point.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
class Test
{
    static byte[] buffer;

    static void Main(string[] arg)
    {
        TcpClient client = new TcpClient("www.yoda.arachsys.com", 80);
        NetworkStream stream = client.GetStream();
        string text = "GET / HTTP/1.1\r\nHost: yoda.arachsys.com:80\r\n" +
            "Content-Length: 0\r\n\r\n";
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();

        buffer = new byte[1024 * 1024];
        stream.BeginRead(buffer, 0, buffer.Length, ReadCallback, stream);
        Console.ReadLine();
    }

    static void ReadCallback(IAsyncResult ar)
    {
        Stream stream = (Stream) ar.AsyncState;
        int bytesRead = stream.EndRead(ar);
        Console.WriteLine(bytesRead);
        Console.WriteLine("Asynchronous read:");
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, bytesRead));

        string text = "Bad request\r\n";
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();

        Console.WriteLine();
        Console.WriteLine("Synchronous:");
        StreamReader reader = new StreamReader(stream);
        Console.WriteLine(reader.ReadToEnd());
    }
}
