Response.BinaryWrite creating a .partial file? - c#

I am attempting to write a .wav audio file held in a MemoryStream out to the response so the client can download it. On the client side, when trying to open the file, it has a ".partial" extension. It is almost as if the file is not getting released to the client.
My code is below. Writing the bytes directly to a file on the local machine works fine (you will see that code commented out).
// Initialize a new instance of the speech synthesizer.
using (SpeechSynthesizer synth = new SpeechSynthesizer())
using (MemoryStream stream = new MemoryStream())
{
    // Create a memory stream to hold the output audio.
    MemoryStream streamAudio = new MemoryStream();

    // Configure the synthesizer to output to the audio stream.
    synth.SetOutputToWaveStream(streamAudio);
    synth.Speak("This is sample text-to-speech output. How did I do?");
    streamAudio.Position = 0;

    // Set the synthesizer output to null to release the stream.
    synth.SetOutputToNull();

    // Insert code to persist or process the stream contents here.

    // THIS IS NOT WORKING WHEN WRITING TO THE RESPONSE, .PARTIAL FILE CREATED
    Response.Clear();
    Response.ContentType = "audio/wav";
    Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
    Response.BinaryWrite(streamAudio.GetBuffer());
    Response.Flush();

    // THIS WORKS WRITING TO A FILE
    //System.IO.File.WriteAllBytes("c:\\temp\\als1.wav", streamAudio.GetBuffer());
}

MemoryStream.GetBuffer is not the correct method to call:
Note that the buffer contains allocated bytes which might be unused.
For example, if the string "test" is written into the MemoryStream
object, the length of the buffer returned from GetBuffer is 256, not
4, with 252 bytes unused. To obtain only the data in the buffer, use
the ToArray method; however, ToArray creates a copy of the data in
memory.
so use MemoryStream.ToArray instead:
Response.BinaryWrite(streamAudio.ToArray());
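A quick illustration of the difference (the capacity value is whatever the runtime happens to allocate, shown only as an example):
var ms = new MemoryStream();
ms.Write(new byte[] { 1, 2, 3, 4 }, 0, 4);
Console.WriteLine(ms.GetBuffer().Length); // allocated capacity, e.g. 256
Console.WriteLine(ms.ToArray().Length);   // 4 - only the bytes actually written
Those unused trailing bytes are sent to the client when GetBuffer is written to the response.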

It looks like the issue was that the Speak method needs to be run on its own thread. The following post provides the solution for getting back the byte array properly so it can then be written to the response.
C# SpeechSynthesizer makes service unresponsive
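A minimal sketch of that approach, assuming a helper method (the name SynthesizeToWavBytes and the threading details are mine, not from the linked post):
// Sketch: run synthesis on its own thread so the ASP.NET request thread is not blocked.
// Namespaces assumed: System.IO, System.Speech.Synthesis, System.Threading.
private static byte[] SynthesizeToWavBytes(string text)
{
    byte[] wavBytes = null;
    var worker = new Thread(() =>
    {
        using (var synth = new SpeechSynthesizer())
        using (var audio = new MemoryStream())
        {
            synth.SetOutputToWaveStream(audio);
            synth.Speak(text);
            synth.SetOutputToNull();
            wavBytes = audio.ToArray(); // ToArray, not GetBuffer
        }
    });
    worker.Start();
    worker.Join();
    return wavBytes;
}

// Then, in the page handler:
byte[] wav = SynthesizeToWavBytes("This is sample text-to-speech output. How did I do?");
Response.Clear();
Response.ContentType = "audio/wav";
Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
Response.BinaryWrite(wav);
Response.End();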

Related

Audio file is not working via FTP upload programmatically

I am uploading an .mp3 file via FTP code using C#. The file is uploaded successfully to the server, but when I bind it to a simple audio control or view it directly in the browser, it does not work as expected, whereas when I upload it manually to the server it works perfectly.
Code:
var inputStream = FileUpload1.PostedFile.InputStream;
byte[] fileBytes = new byte[inputStream.Length];
inputStream.Read(fileBytes, 0, fileBytes.Length);
Note: When I view the file in Firefox, it says the MIME type is not supported.
Thanks!
You're reading the file as a string then using UTF8 encoding to turn it into bytes. If you do that, and the file contains any binary sequence that doesn't code to a valid UTF8 value, parts of the data stream will simply get discarded.
Instead, read it directly as bytes. Don't bother with the StreamReader. Call the Read() method on the underlying stream. Example:
var inputStream = FileUpload1.PostedFile.InputStream;
byte[] fileBytes = new byte[inputStream.Length];
inputStream.Read(fileBytes, 0, fileBytes.Length);
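If those bytes are then pushed to the server over FTP, a minimal upload sketch might look like the following; the URI, credentials, and file name are placeholders, and UseBinary keeps the transfer in binary mode so the audio data is not altered in transit:
// Sketch only: upload the bytes over FTP in binary mode (namespace assumed: System.Net).
var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/audio/upload.mp3");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("user", "password");
request.UseBinary = true; // binary mode, so the bytes are not translated

using (var requestStream = request.GetRequestStream())
{
    requestStream.Write(fileBytes, 0, fileBytes.Length);
}

using (var response = (FtpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusDescription);
}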

C# Convert FileStream.WriteLine to go to a MemoryStream

I wrote some code in a console program and tested it with files.
Now I want to port it to a BizTalk Pipeline Component that implements a specific interface. I wasn't aware that writing to a file and writing to a MemoryStream were so different; I thought I would just be able to swap my objects. MemoryStream has no WriteLine method, and its Write method requires additional parameters (a byte array, offset, and count).
So now, what is the best way to change my tested code to write to the MemoryStream, given that I have a lot of .WriteLine statements? I could write to a StringBuilder first, but I think that would defeat the point of streaming (i.e. it would hold the whole document in memory at one time).
// This is how I used the streams in the Console program
//FileStream originalStream = File.Open(inFilename, FileMode.Open);
//StreamWriter streamToReturn = new StreamWriter(outFilename);

// This is how to get the input stream in the BizTalk Pipeline Component
System.IO.Stream originalStream = pInMsg.BodyPart.GetOriginalDataStream();
MemoryStream streamToReturn = new MemoryStream();

// This does not compile: MemoryStream has no WriteLine method.
streamToReturn.WriteLine("<" + schemaStructure.rootElement + ">");
There's a lot more code not shown here. Above is just to set the stage for what I did.
Wrap the MemoryStream in a StreamWriter, which you can then use to call WriteLine.
MemoryStream streamToReturn = new MemoryStream();
var writer = new StreamWriter(streamToReturn);
writer.WriteLine("<" + schemaStructure.rootElement + ">");
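One detail the answer doesn't show: StreamWriter buffers its output, so flush it and rewind the MemoryStream before handing the stream on to the next component, e.g.:
writer.Flush();               // push the buffered text into the MemoryStream
streamToReturn.Position = 0;  // rewind so the next component reads from the start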

Generate zip file with xml content on the fly [duplicate]

I want to write a String to a Stream (a MemoryStream in this case) and read the bytes one by one.
stringAsStream = new MemoryStream();
UnicodeEncoding uniEncoding = new UnicodeEncoding();
String message = "Message";
stringAsStream.Write(uniEncoding.GetBytes(message), 0, message.Length);
Console.WriteLine("This:\t\t" + (char)uniEncoding.GetBytes(message)[0]);
Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte());
The (undesired) result I get is:
This: M
Differs from: ?
It looks like it's not being read correctly, as the first char of "Message" is 'M', which works when getting the bytes from the UnicodeEncoding instance but not when reading them back from the stream.
What am I doing wrong?
The bigger picture: I have an algorithm that will work on the bytes of a Stream, and I'd like to be as general as possible and work with any Stream. I'd like to convert an ASCII string into a MemoryStream, or maybe use another method to be able to work on the string as a Stream. The algorithm in question will work on the bytes of the Stream.
After you write to the MemoryStream and before you read it back, you need to Seek back to the beginning of the MemoryStream so you're not reading from the end.
UPDATE
After seeing your update, I think there's a more reliable way to build the stream:
UnicodeEncoding uniEncoding = new UnicodeEncoding();
String message = "Message";

// You might not want to use the outer using statement that I have
// I wasn't sure how long you would need the MemoryStream object
using (MemoryStream ms = new MemoryStream())
{
    var sw = new StreamWriter(ms, uniEncoding);
    try
    {
        sw.Write(message);
        sw.Flush(); // otherwise you are risking an empty stream
        ms.Seek(0, SeekOrigin.Begin);

        // Test and work with the stream here.
        // If you need to start back at the beginning, be sure to Seek again.
    }
    finally
    {
        sw.Dispose();
    }
}
As you can see, this code uses a StreamWriter to write the entire string (with proper encoding) out to the MemoryStream. This takes the hassle out of ensuring the entire byte array for the string is written.
Update: I ran into the empty-stream issue several times. It's enough to call Flush right after you've finished writing.
Try this "one-liner" from Delta's Blog, String To MemoryStream (C#).
MemoryStream stringInMemoryStream =
new MemoryStream(ASCIIEncoding.Default.GetBytes("Your string here"));
The string will be loaded into the MemoryStream, and you can read from it. See Encoding.GetBytes(...), which has also been implemented for a few other encodings.
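For example, reading the text back out of that stream with the same encoding:
using (var reader = new StreamReader(stringInMemoryStream, ASCIIEncoding.Default))
{
    Console.WriteLine(reader.ReadToEnd()); // "Your string here"
}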
You're using message.Length, which returns the number of characters in the string, but you should be using the number of bytes to write. You should use something like:
byte[] messageBytes = uniEncoding.GetBytes(message);
stringAsStream.Write(messageBytes, 0, messageBytes.Length);
You're then reading a single byte and expecting to get a character from it just by casting to char. UnicodeEncoding will use two bytes per character.
As Justin says you're also not seeking back to the beginning of the stream.
Basically I'm afraid pretty much everything is wrong here. Please give us the bigger picture and we can help you work out what you should really be doing. Using a StreamWriter to write and then a StreamReader to read is quite possibly what you want, but we can't really tell from just the brief bit of code you've shown.
I think it would be a lot more productive to use a TextWriter, in this case a StreamWriter, to write to the MemoryStream. After that, as others have said, you need to "rewind" the MemoryStream using something like stringAsStream.Position = 0L;.
stringAsStream = new MemoryStream();

// Create a stream writer with UTF-16 (Unicode) encoding to write to the memory stream.
// leaveOpen: true keeps the MemoryStream usable after the writer is disposed.
using (StreamWriter sWriter = new StreamWriter(stringAsStream, UnicodeEncoding.Unicode, 1024, leaveOpen: true))
{
    sWriter.Write("Lorem ipsum.");
}

stringAsStream.Position = 0L; // rewind
Note that:
StreamWriter defaults to using an instance of UTF8Encoding unless specified otherwise. This instance of UTF8Encoding is constructed without a byte order mark (BOM)
Also, you don't have to create a new UnicodeEncoding() usually, since there's already one as a static member of the class for you to use in convenient utf-8, utf-16, and utf-32 flavors.
And then, finally (as others have said) you're trying to convert the bytes directly to chars, which they are not. If I had a memory stream and knew it was a string, I'd use a TextReader to get the string back from the bytes. It seems "dangerous" to me to mess around with the raw bytes.
You need to reset the stream to the beginning:
stringAsStream.Seek(0, SeekOrigin.Begin);
Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte());
This can also be done by setting the Position property to 0:
stringAsStream.Position = 0;

How to write audio stream out to response using SpeechSynthesizer C#

I am using the SpeechSynthesizer class in a C# asp.net app to attempt to convert some text to speech and then write that audio memory stream out to the response so it can be downloaded as a .wav by the end user.
I can save the MemoryStream to a FileStream and the .wav plays fine (tested this in a separate function). However, when trying to send the MemoryStream to the response (code below), IE says the .wav file is being downloaded, but it is almost as if something is hanging on to the object and won't let the download complete. As a result, the .wav cannot be opened to play the sound.
Any thoughts on how to properly do this? Here is my code below.
SpeechSynthesizer synth = new SpeechSynthesizer();
MemoryStream streamAudio = new MemoryStream();
// Configure the synthesizer to output to an audio stream.
synth.SetOutputToWaveStream(streamAudio);
// Speak a phrase.
synth.Speak("This is sample text-to-speech output.");
// Set audio stream to beginning
streamAudio.Position = 0;
// Send the memory stream bytes out to the response.
Response.Clear();
Response.ContentType = "audio/wav";
Response.AppendHeader("Content-Disposition", "attachment; filename=mergedoutput.wav");
streamAudio.CopyTo(Response.OutputStream);
Response.Flush();
Response.End();
Found the solution... The Speak method needs to run on its own thread and then return the byte array, which can then be written to the response.
Found the solution in this post
C# SpeechSynthesizer makes service unresponsive

Playing wav with HTML5

I am trying to output a text-to-speech wav file and play it with the HTML5 <audio> tag. The text-to-speech method is outputting the bytes, but the HTML5 control isn't playing them.
If, instead of streaming the bytes directly to the control, I save the output as a file first, then convert the file to bytes with a FileStream and output those, it starts to play, but I don't want to have to save a file every time. I'm using MVC 4.
// in a class library
public byte[] GenerateAudio(string randomText)
{
    MemoryStream wavAudioStream = new MemoryStream();
    SpeechSynthesizer speechEngine = new SpeechSynthesizer();
    speechEngine.SetOutputToWaveStream(wavAudioStream);
    speechEngine.Speak(randomText);
    wavAudioStream.Flush();
    Byte[] wavBytes = wavAudioStream.GetBuffer();
    return wavBytes;
}

// in my controller
public ActionResult Listen()
{
    return new FileContentResult(c.GenerateAudio(Session["RandomText"].ToString()), "audio/wav");
}

// in my view
<audio controls autoplay>
    <source src="@Url.Content("~/Captcha/Listen")" type="audio/wav" />
    Your browser does not support the <audio> element.
</audio>
I am also playing back a wav file to an audio element, and your code has the same logic as mine. I just noticed that you are flushing your stream before you return the byte array, which will then appear to be empty.
Also, you can use File as the return type and pass the byte array to it; the content type is the same as in your code. I would also mention (maybe it helps as well) that I used two streams: an outer-scope stream and the actual stream the data is written to. After populating the actual stream, I copied its contents to the outer stream using Stream.CopyTo(), and that outer stream instance is the one I used in my return statement. This avoids the "Cannot access a closed Stream" error (not the exact error message).
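A minimal sketch of the controller combining those suggestions (ToArray instead of GetBuffer, File as the return type); the action name and session key come from the question, everything else is assumed:
public ActionResult Listen()
{
    byte[] wavBytes;
    using (var speechEngine = new SpeechSynthesizer())
    using (var wavAudioStream = new MemoryStream())
    {
        speechEngine.SetOutputToWaveStream(wavAudioStream);
        speechEngine.Speak(Session["RandomText"].ToString());
        wavBytes = wavAudioStream.ToArray(); // only the bytes actually written, no trailing padding
    }
    return File(wavBytes, "audio/wav"); // Controller.File wraps the bytes in a FileContentResult
}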
