I am planning to pass a MemoryStream via WCF streaming, but it does not seem to work; when I slightly change the code to pass a FileStream instead, it works. My actual goal is to pass a large collection of serializable business objects. I am using basicHttpBinding. Your suggestions would be much appreciated!
Edited:
The symptom of the issue is that the incoming stream is empty. There is no error or exception.
You're not providing many details; however, I'm almost certain I know what the issue is, as I've seen it happen a lot.
If you write something to a MemoryStream in order to return it as the result of a WCF service operation, you need to manually reset the stream to its beginning before returning it. WCF only reads the stream from its current position, and hence returns an empty stream if that position hasn't been reset.
That would at least explain the problem you're describing. Hope this helps.
Here some sample code:
[OperationContract]
public Stream GetSomeData()
{
    var stream = new MemoryStream();
    using (var file = File.OpenRead("path"))
    {
        // write something to the stream:
        file.CopyTo(stream);
        // here, the MemoryStream is positioned at its end
    }
    // This is the crucial part:
    stream.Position = 0L;
    return stream;
}
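Since your actual goal is to return a large collection of serializable business objects, here is a minimal sketch of the same idea (assuming a DataContract-friendly Order type and a hypothetical LoadOrders() helper):

[OperationContract]
public Stream GetOrders()
{
    var stream = new MemoryStream();
    var serializer = new DataContractSerializer(typeof(List<Order>));
    serializer.WriteObject(stream, LoadOrders()); // leaves the stream positioned at its end
    stream.Position = 0L; // rewind, or WCF will transfer an empty stream
    return stream;
}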
I was using BinaryFormatter to serialize an object and send it over a NetworkStream.
The code looks like this:
// OpenConnection ...
TcpClient client = server.AcceptTcpClient();
Message message = new Message("bla bla"); // this is the serializable class
NetworkStream stream = client.GetStream(); // get the stream
BinaryFormatter bf = new BinaryFormatter();
bf.Serialize(stream, message);
stream.Flush();
stream.Close(); // close the connection
And in the client code, we just need to read from the stream:
bf.Deserialize(stream) as Message
to get the object we just sent from the server.
But there is a problem here: if I delete the line stream.Close();, the client cannot read the object. (I can also change it to stream.Dispose(); instead.) However, I want to use this stream again to send another Message - how can I do that? Please help, this is giving me a headache.
UPDATE:
I found the reason for this issue: I was running both the client and the server on one machine. It works fine on two different machines. Can someone tell me why? This has been a big problem for me for a couple of days.
Sending multiple separate messages involves "framing" - splitting the single channel into separate chunks that don't ever require the client to "read to end". Oddly, though, I was under the impression that BinaryFormatter already implemented basic framing - but: I could be wrong. In the general case, when working with a binary protocol, the most common approach is to prefix each message with the length of the payload, i.e.
using (var ms = new MemoryStream())
{
    while (...)
    {
        // not shown: serialize to ms
        var len = BitConverter.GetBytes((int)ms.Length);
        output.Write(len, 0, 4);
        output.Write(ms.GetBuffer(), 0, (int)ms.Length);
        ms.SetLength(0); // ready for next cycle
    }
}
the caller has to (see the sketch after this list):
- read exactly 4 bytes (at least, for the above), or detect EOF
- determine the length
- read exactly that many bytes
- deserialize
- repeat
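A rough sketch of that read loop, reusing the BinaryFormatter and Message from earlier (hedged: input is assumed to be the incoming NetworkStream, and error handling is minimal):

var bf = new BinaryFormatter();
var lenBuf = new byte[4];
while (true)
{
    // read exactly 4 bytes for the length prefix, or detect a clean EOF between messages
    int read = input.Read(lenBuf, 0, 4);
    if (read == 0) break; // no more messages
    while (read < 4)
    {
        int n = input.Read(lenBuf, read, 4 - read);
        if (n == 0) throw new EndOfStreamException();
        read += n;
    }
    int len = BitConverter.ToInt32(lenBuf, 0);

    // read exactly len bytes of payload
    var payload = new byte[len];
    int offset = 0;
    while (offset < len)
    {
        int n = input.Read(payload, offset, len - offset);
        if (n == 0) throw new EndOfStreamException();
        offset += n;
    }

    // deserialize one framed message, then repeat
    var msg = (Message)bf.Deserialize(new MemoryStream(payload));
}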
If that sounds like a lot of work, maybe just use a serializer that does all this for you; for example, with protobuf-net, this would be:
while (...) { // each item
    Serializer.SerializeWithLengthPrefix(output, msg, PrefixStyle.Base128, 1); // msg: the item being written
}
and the reader would be:
foreach (var msg in Serializer.DeserializeItems<Message>(input, PrefixStyle.Base128, 1))
{
    // ...
}
(note: this does not use the same format / rules as BinaryFormatter)
In an attempt to create a non-buffered file upload I have extended System.Web.Http.WebHost.WebHostBufferPolicySelector, overriding function UseBufferedInputStream() as described in this article: http://www.strathweb.com/2012/09/dealing-with-large-files-in-asp-net-web-api/. When a file is POSTed to my controller, I can see in trace output that the overridden function UseBufferedInputStream() is definitely returning FALSE as expected. However, using diagnostic tools I can see the memory growing as the file is being uploaded.
The heavy memory usage appears to be occurring in my custom MediaTypeFormatter (something like the FileMediaFormatter here: http://lonetechie.com/). It is in this formatter that I would like to incrementally write the incoming file to disk, but I also need to parse JSON and do some other operations with the Content-Type:multipart/form-data upload. Therefore I'm using the HttpContent method ReadAsMultipartAsync(), which appears to be the source of the memory growth. I have placed trace output before/after the "await", and it appears that while the task is blocking, the memory usage increases fairly rapidly.
Once I find the file content in the parts returned by ReadAsMultipartAsync(), I am using Stream.CopyTo() in order to write the file contents to disk. This writes to disk as expected, but unfortunately the source file is already in memory by this point.
Does anyone have any thoughts about what might be going wrong? It seems that ReadAsMultipartAsync() is buffering the whole post data; if that is true, why do we need var fileStream = await fileContent.ReadAsStreamAsync() to get the file contents? Is there another way to accomplish the splitting of the parts without reading them into memory? The code in my MediaTypeFormatter looks something like this:
// save the stream so we can seek/read again later
Stream stream = await content.ReadAsStreamAsync();
var parts = await content.ReadAsMultipartAsync(); // <- memory usage grows rapidly
if (!content.IsMimeMultipartContent())
{
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}

//
// pull data out of parts.Contents, process json, etc.
//

// find the file data in the multipart contents
var fileContent = parts.Contents.FirstOrDefault(
    x => x.Headers.ContentDisposition.DispositionType.ToLower().Trim() == "form-data" &&
         x.Headers.ContentDisposition.Name.ToLower().Trim() == "\"" + DATA_CONTENT_DISPOSITION_NAME_FILE_CONTENTS + "\"");

// write the file to disk
using (var fileStream = await fileContent.ReadAsStreamAsync())
using (FileStream toDisk = File.OpenWrite("myUploadedFile.bin"))
{
    fileStream.CopyTo(toDisk);
}
WebHostBufferPolicySelector only specifies if the underlying request is bufferless. This is what Web API will do under the hood:
IHostBufferPolicySelector policySelector = _bufferPolicySelector.Value;
bool isInputBuffered = policySelector == null ? true : policySelector.UseBufferedInputStream(httpContextBase);
Stream inputStream = isInputBuffered
    ? requestBase.InputStream
    : httpContextBase.ApplicationInstance.Request.GetBufferlessInputStream();
So if your implementation returns false, then the request is bufferless.
However, ReadAsMultipartAsync() loads everything into a MemoryStream - because if you don't specify a provider, it defaults to MultipartMemoryStreamProvider.
To get the files to save automatically to disk as each part is processed, use MultipartFormDataStreamProvider (if you deal with files and form data) or MultipartFileStreamProvider (if you deal with just files).
There is an example on asp.net or here. In those examples everything happens in controllers, but there is no reason why you couldn't use the same approach in, say, a formatter.
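For instance, a hedged sketch of that inside a formatter's ReadFromStreamAsync (the App_Data root folder is an assumption):

// parts are flushed to disk as they are read, so nothing large is buffered in memory
string root = HttpContext.Current.Server.MapPath("~/App_Data"); // assumed upload folder
var provider = new MultipartFormDataStreamProvider(root);
await content.ReadAsMultipartAsync(provider);

// uploaded files are now temp files on disk
foreach (MultipartFileData fileData in provider.FileData)
{
    Trace.WriteLine(fileData.LocalFileName);
}
// ordinary form fields (e.g. your json) are in provider.FormData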
Another option, if you really want to play with streams, is to implement a custom class inheriting from MultipartStreamProvider that fires whatever processing you want as soon as it grabs part of the stream. The usage would be similar to the aforementioned providers - you'd need to pass it to the ReadAsMultipartAsync(provider) method.
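A minimal sketch of such a provider (the class name and the write-straight-to-disk behavior are mine, not a library feature):

public class DirectToDiskStreamProvider : MultipartStreamProvider
{
    private readonly string _root;

    public DirectToDiskStreamProvider(string root)
    {
        _root = root;
    }

    // called once per MIME part; the returned stream receives that part's bytes as they arrive
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        return File.Create(Path.Combine(_root, Guid.NewGuid().ToString("N")));
    }
}

// usage: await content.ReadAsMultipartAsync(new DirectToDiskStreamProvider(root));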
Finally - if you are feeling suicidal - since the underlying request stream is bufferless, you could theoretically use something like this in your controller or formatter:
Stream stream = HttpContext.Current.Request.GetBufferlessInputStream();
byte[] b = new byte[32 * 1024];
int n;
while ((n = stream.Read(b, 0, b.Length)) > 0)
{
    // do stuff with this chunk of the stream
}
But of course that's very, for lack of a better word, "ghetto."
I'm trying to fix a bug where the following code results in a 0 byte file on S3, and no error message.
This code feeds a Stream (from the poorly-named FileUpload4) containing an image, along with the desired image path (from a database wrapper object), into Amazon's S3, but the file itself is never uploaded.
CloudUtils.UploadAssetToCloud(FileUpload4.FileContent, ((ImageContent)auxSRC.Content).PhysicalLocationUrl);
ContentWrapper.SaveOrUpdateAuxiliarySalesRoleContent(auxSRC);
The second line simply saves the database object, which stores information about the (supposedly) uploaded picture. That save goes through, demonstrating that the line above it runs without error.
The first line calls into this method, after retrieving an appropriate bucket name:
public static bool UploadAssetToCloud(Stream asset, string path, string bucketName, AssetSecurity security = AssetSecurity.PublicRead)
{
    TransferUtility txferUtil;
    S3CannedACL ACL = GetS3ACL(security);
    using (txferUtil = new Amazon.S3.Transfer.TransferUtility(AWSKEY, AWSSECRETKEY))
    {
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()
            .WithBucketName(bucketName)
            .WithTimeout(TWO_MINUTES)
            .WithCannedACL(ACL)
            .WithKey(path);
        request.InputStream = asset;
        txferUtil.Upload(request);
    }
    return true;
}
I have made sure that the stream is a good stream - I can save it anywhere else I have permissions for - and the bucket exists and the path is fine (the file is created at the destination on S3, it just doesn't get populated with the content of the stream). I'm close to my wits' end here - what am I missing?
EDIT: One of my coworkers pointed out that it would be better to use the FileUpload's PostedFile property. I'm now pulling the stream off of that instead. It still isn't working.
Is the stream positioned correctly? Check asset.Position to make sure the position is set to the beginning of the stream.
asset.Seek(0, SeekOrigin.Begin);
Edit
OK, more guesses (I'm down to guesses, though):
(all of this is assuming that you can still read from your incoming stream just fine "by hand")
- Just for testing, try one of the simpler Upload methods on the TransferUtility - maybe one that just takes a file path string (see the sketch after this list). If that works, then maybe there are additional properties to set on the UploadRequest object.
- If you hook the UploadProgressEvent on the UploadRequest object, do you get any additional clues to what's going wrong?
- I noticed that the UploadRequest's API includes both an InputStream property and a WithInputStream fluent API. Maybe there's a bug with setting InputStream? Try using the .WithInputStream API instead.
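For the first bullet, a minimal sketch (the local test file path is made up; AWSKEY/AWSSECRETKEY as in your code):

// take your stream out of the equation: if this works, bucket, key and credentials are fine
using (var txferUtil = new Amazon.S3.Transfer.TransferUtility(AWSKEY, AWSSECRETKEY))
{
    txferUtil.Upload(@"C:\temp\test.bin", bucketName); // assumed local test file
}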
Which Stream are you using? Does it support repositioning (in Java terms, the mark() and reset() methods)?
It may be that the upload method first calculates the MD5 for the given stream and then uploads it. If your stream doesn't support repositioning, then by the time the MD5 calculation finishes the stream is at EOF, and it cannot be rewound to actually upload the object.
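In .NET terms the equivalent check is Stream.CanSeek; a hedged sketch of working around a non-seekable stream before uploading:

if (asset.CanSeek)
{
    // rewind so the upload starts from the beginning after any MD5 pass
    asset.Seek(0, SeekOrigin.Begin);
}
else
{
    // buffer a non-seekable stream into memory first (fine for small uploads only)
    var buffered = new MemoryStream();
    asset.CopyTo(buffered);
    buffered.Position = 0;
    asset = buffered;
}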
I need to read a stream two times, from start to end.
But the following code throws an ObjectDisposedException ("Cannot access a closed file"):
string fileToReadPath = @"<path here>";
using (FileStream fs = new FileStream(fileToReadPath, FileMode.Open))
{
    using (StreamReader reader = new StreamReader(fs))
    {
        string text = reader.ReadToEnd();
        Console.WriteLine(text);
    }
    fs.Seek(0, SeekOrigin.Begin); // ObjectDisposedException thrown.
    using (StreamReader reader = new StreamReader(fs))
    {
        string text = reader.ReadToEnd();
        Console.WriteLine(text);
    }
}
Why is it happening? What is really disposed? And why manipulating StreamReader affects the associated stream in this way? Isn't it logical to expect that a seekable stream can be read several times, including by several StreamReaders?
This happens because the StreamReader takes over 'ownership' of the stream. In other words, it makes itself responsible for closing the source stream. As soon as your program calls Dispose or Close (leaving the using statement scope, in your case), it disposes the source stream as well, calling fs.Dispose(). So the file stream is dead after leaving the first using block. This is consistent behavior; all stream classes in .NET that wrap another stream behave this way.
There is a constructor for StreamReader that allows you to say that it doesn't own the source stream. In older versions of .NET that constructor was internal and not accessible from a .NET program; .NET 4.5, however, added a public overload with a leaveOpen parameter.
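With that overload, the first using block from the question leaves the file stream alive:

using (StreamReader reader = new StreamReader(fs, Encoding.UTF8,
       detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
{
    string text = reader.ReadToEnd();
    Console.WriteLine(text);
}
// fs is still open here, so fs.Seek(0, SeekOrigin.Begin) no longer throws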
In this particular case, you'd solve the problem by not using the using-statement for the StreamReader. That's however a fairly hairy implementation detail. There's surely a better solution available to you but the code is too synthetic to propose a real one.
The purpose of Dispose() is to clean up resources when you're finished with the stream. The reason the reader impacts the stream is because the reader is just filtering the stream, and so disposing the reader has no meaning except in the context of it chaining the call to the source stream as well.
To fix your code, just use one reader the entire time:
using (FileStream fs = new FileStream(fileToReadPath, FileMode.Open))
using (StreamReader reader = new StreamReader(fs))
{
string text = reader.ReadToEnd();
Console.WriteLine(text);
fs.Seek(0, SeekOrigin.Begin); // ObjectDisposedException not thrown now
text = reader.ReadToEnd();
Console.WriteLine(text);
}
Edited to address comments below:
In most situations, you do not need to access the underlying stream as you do in your code (fs.Seek). In those cases, the fact that StreamReader chains its call to the underlying stream allows you to economize on code by not using a using statement for the stream at all. For example, the code would look like:
using (StreamReader reader = new StreamReader(new FileStream(fileToReadPath, FileMode.Open)))
{
...
}
Using defines a scope, outside of which an object will be disposed, thus the ObjectDisposedException. You can't access the StreamReader's contents outside of this block.
I agree with your question. The biggest issue with this intentional side effect is when developers don't know about it and blindly follow the "best practice" of wrapping a StreamReader in a using. It can cause some really hard-to-track-down bugs when the stream is a long-lived object's property; the best (worst?) example I've seen is:
using (var sr = new StreamReader(HttpContext.Current.Request.InputStream))
{
    body = sr.ReadToEnd();
}
The developer had no idea the InputStream is now hosed for any future place that expects it to be there.
Obviously, once you know the internals, you know to avoid the using and just read and reset the position. But I thought a core principle of API design was to avoid side effects, especially not destroying the data you are acting upon. Nothing about a class that is supposedly a "reader" suggests it should close the data source it reads when you are done "using" it. Disposing of the reader should release any references to the Stream, not close the stream itself.
The only rationale I can think of is that the reader alters other internal state of the Stream, like the position of the seek pointer, so the designers assumed that if you wrap a using around it you intend to be done with everything. On the other hand, just like in your example, if you create the Stream yourself, the stream itself will be in a using; but if you are reading a Stream that was created outside of your immediate method, it is presumptuous of the code to close it.
What I do and tell our developers to do on Stream instances that the reading code doesn't explicitly create is...
// save the position before reading
long position = theStream.Position;
theStream.Seek(0, SeekOrigin.Begin);

// DO NOT put this StreamReader in a using: StreamReader.Dispose() closes the underlying stream
StreamReader sr = new StreamReader(theStream);
string content = sr.ReadToEnd();

theStream.Seek(position, SeekOrigin.Begin);
(Sorry, I added this as an answer because it wouldn't fit in a comment; I would love more discussion about this design decision of the framework.)
Dispose() on the parent will Dispose() all owned streams. Unfortunately, Stream doesn't have a Detach() method, so you have to create some workaround here.
I don't know why there is no such method, but you can simply leave your StreamReader undisposed. That way the underlying stream won't be disposed, even once the StreamReader has been collected.
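One common workaround is a wrapper that swallows Dispose, so the reader can be disposed without closing the source (a sketch; only the members a reader needs are wired up):

public class NonClosingStreamWrapper : Stream
{
    private readonly Stream _inner;
    public NonClosingStreamWrapper(Stream inner) { _inner = inner; }

    // deliberately do nothing: disposing the wrapper must not close the wrapped stream
    protected override void Dispose(bool disposing) { }

    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => _inner.CanSeek;
    public override bool CanWrite => _inner.CanWrite;
    public override long Length => _inner.Length;
    public override long Position
    {
        get => _inner.Position;
        set => _inner.Position = value;
    }
    public override void Flush() => _inner.Flush();
    public override int Read(byte[] buffer, int offset, int count) => _inner.Read(buffer, offset, count);
    public override long Seek(long offset, SeekOrigin origin) => _inner.Seek(offset, origin);
    public override void SetLength(long value) => _inner.SetLength(value);
    public override void Write(byte[] buffer, int offset, int count) => _inner.Write(buffer, offset, count);
}

With this, using (var reader = new StreamReader(new NonClosingStreamWrapper(theStream))) { ... } leaves theStream open.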
I was recently using HttpWebResponse to return XML data from an HttpWebRequest, and I noticed that the stream returned a null-terminated string to me.
I assume it's because the underlying library has to be compatible with C++, but I wasn't able to find a resource to provide further illumination.
Mostly I'm wondering if there is an easy way to disable this behavior so I don't have to sanitize strings I'm passing into my XML reader.
Edit: here is a sample of the relevant code:
httpResponse.GetResponseStream().Read(serverBuffer, 0, BUFFER_SIZE);
output = processResponse(System.Text.UTF8Encoding.UTF8.GetString(serverBuffer))
where processResponse looks like:
processResponse(string xmlResponse)
{
    var Parser = new XmlDocument();
    xmlResponse = xmlResponse.Replace('\0', ' '); // fix for httpwebrequest null-terminating strings
    Parser.LoadXml(xmlResponse);
This definitely isn't normal behaviour. Two options:
- You made a mistake in the reading code (e.g. creating a buffer and then calling Read on a stream, expecting it to fill the buffer)
- The web server actually returned a null-terminated response
You should be able to tell the difference using Wireshark if nothing else.
Could it be that you are setting the wrong size on the buffer you are loading into?
You can use a StreamReader to avoid the temp buffer if you don't need it.
using(var stream = new StreamReader(httpResponse.GetResponseStream()))
{
string output = stream.ReadToEnd();
//...
}
Hmm... I doubt it returns a null-terminated string, since there is simply no such concept in C#. At best you could have a string with a '\0' (U+0000) character at the end, but in that case it would mean that the server's response contains such a character, and HttpWebRequest is simply doing its duty and returning whatever the server returned.
Update
after reading your code, the mistake is pretty obvious: you are Read()-ing from a stream into a byte[] without taking notice of how much you actually read:
int responseLength = httpResponse.GetResponseStream().Read(
serverBuffer, 0, BUFFER_SIZE);
output = processResponse(System.Text.UTF8Encoding.UTF8.GetString(
serverBuffer, 0, responseLength));
this would fix the immediate problem, leaving only the other bugs in your code to deal with, like the fact that you cannot correctly handle a response larger than BUFFER_SIZE... I would suggest you open an XML document reader on the returned stream instead of manipulating the stream via an (unnecessary) byte[] copy operation:
Parser.Load(httpResponse.GetResponseStream());