Save WPF BitmapSources as H.264 encoded video directly from memory - C#

Currently I am utilizing a List variable to store BitmapSources provided by a camera and saving them as an AVI file with the help of SharpAvi. In a second step I then encode the saved file via NReco's FFMpeg wrapper to decrease the file size. Finally I delete the original AVI file and only keep the encoded one.
To me this seems poorly designed and might cause harmful write cycles on the SSD the application is running on (I'll probably create up to a TB a day of unencoded video), which is why I want to change it to a more integrated solution utilizing the PC's RAM.
However, SharpAvi as well as NReco rely on creating and reading actual files.
NReco does have the ConvertLiveMedia method, which accepts a stream - however, in my experiments it simply did not create a file and gave no errors or warnings.

OK, I think I solved it:
I changed the SharpAvi code so that the AviWriter has an additional constructor accepting a MemoryStream instead of just a string. This MemoryStream is then passed to the BinaryWriter and no FileStream is created.
The Close method also had to be changed so that the MemoryStream stays alive even when the BinaryWriter is closed. fileWriter.Close is replaced with:
if (fileWriter.BaseStream is MemoryStream)
    fileWriter.Flush();
else
    fileWriter.Close();
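Put together, the change amounts to roughly the following sketch; the member names mirror the description above and are assumptions, not SharpAvi's actual internals:
using System.IO;

public class AviWriter
{
    private readonly BinaryWriter fileWriter;

    // Original behaviour: write the AVI data to a file on disk.
    public AviWriter(string fileName)
    {
        fileWriter = new BinaryWriter(new FileStream(fileName, FileMode.Create));
    }

    // Additional constructor: write the AVI data into a caller-supplied
    // MemoryStream, so no FileStream is ever created.
    public AviWriter(MemoryStream stream)
    {
        fileWriter = new BinaryWriter(stream);
    }

    public void Close()
    {
        // Keep a caller-supplied MemoryStream alive: closing the BinaryWriter
        // would dispose the underlying stream, so only flush in that case.
        if (fileWriter.BaseStream is MemoryStream)
            fileWriter.Flush();
        else
            fileWriter.Close();
    }
}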
This leaves me with a usable MemoryStream in my main application.
memstream.Position = 0; //crucial or otherwise ffmpeg will not work
var task = nrecoconverter.ConvertLiveMedia(memstream, "avi", filepath, "avi", settings);
task.Start();
task.Wait();
Edit: SharpAvi has since been officially changed on GitHub to allow the use of Streams.

Related

Is there any difference between using File.Copy to move a file and writing a stream to the location?

I am refactoring some code and have a question that I could use a few comments on.
The original code downloads a file to a stream. Then it writes the stream to a file in a temp directory before using File.Copy to overwrite an existing file in the production directory.
Are there any benefits to writing it to the temp dir first and using File.Copy, in contrast to just writing the stream to the production directory right away?
One reason could be that File.Copy is faster than writing a stream, reducing the chance that someone is reading the file while it's being written. But can that even happen? What else should I have in mind? I am considering factoring out the temp directory.
MemoryStream stream = new MemoryStream();
....Download and validate stream....
using (Stream sourceFileStream = stream)
{
    using (FileStream targetFileStream = new FileStream(tempPath, FileMode.CreateNew))
    {
        const int bufferSize = 8192;
        byte[] buffer = new byte[bufferSize];
        while (true)
        {
            int read = sourceFileStream.Read(buffer, 0, bufferSize);
            targetFileStream.Write(buffer, 0, read);
            if (read == 0)
                break;
        }
    }
}
File.Copy(tempPath, destination, true);
in contrast to just writing the stream to the destination.
This is just the code I had; I would probably use something like sourceFileStream.CopyToAsync(targetFileStream);
Well, think about what happens when you start the download and overwrite the existing file, and then for some reason the download gets aborted - you'd be left with a broken file.
Downloading it to another location first and copying it to the target directory, however, factors that problem out.
EDIT: Okay, seeing the code now: if the file is already in the MemoryStream, there's really no reason to write it to a temp location and copy it over. You could just do File.WriteAllBytes(destination, stream.ToArray());
File.Copy simply encapsulates the usage of streams, etc. No difference at all.
It is better practice to assemble the file's stream of bytes in an isolated location and only copy it to the production area once it is fully assembled, for the following reasons:
1. Assume a power outage during the assembly phase. When it happens in an isolated folder such as 'temp', you just end up with a partial file which you can recheck and ignore later on. However, if you assemble the file directly in production and a power outage occurs, the next time you start your application you will need to check the integrity of every file that is not 'static' to your app.
2. If the file is in use in production and the new file being assembled is long, you make your user wait for the assembly to complete. When the buffers are assembled first, as in your example, simply copying the ready file means a much shorter wait for your user.
3. Assume the disk space is full... same problem as in #1.
4. Assume another process in your application has a memory leak and the application crashes before completing the assembly.
5. Assume [fill in your disaster case]...
So, yes, it is better practice to do as in your example, but the real question is: how important is this file to your application? Is it just another data file, like a 'save-game' file, or is it something that can crash your application if invalid?
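For reference, here is a condensed sketch of the pattern discussed above - assemble the stream in a temp file first, then copy it over the production file - using CopyTo in place of the manual buffer loop (the class/method names and the missing validation step are placeholders):
using System.IO;

static class ProductionWriter
{
    public static void Save(Stream downloaded, string destination)
    {
        // Assemble the file in an isolated location first.
        string tempPath = Path.GetTempFileName();
        using (var target = new FileStream(tempPath, FileMode.Create))
        {
            downloaded.CopyTo(target); // same work as the manual buffer loop above
        }

        // Touch the production file only after the temp copy is complete, so an
        // aborted download never leaves a partial file in production.
        File.Copy(tempPath, destination, true);
        File.Delete(tempPath);
    }
}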

Get image from video stream in C#

I am trying to get an image from a stream (a MemoryStream, to be more precise). I cannot find anything from Microsoft that solves my problem.
I am getting my streams from SQL, so if there is some way to get an image from there, that would be OK.
I have checked FFmpeg, and the problem is that I would need to save the video files. The files can reach up to 2 GB, so a way to avoid writing to disk would be helpful. If there is a way to read only the first 10 MB (or some other limited size) and get the image from that, that could also be a solution.
A video feed might be anything from raw uncompressed video frames stored back to back, to a more complex multiplexed chunk of data in a container file format, e.g. an .MP4 file. While the former case might be pretty simple, the latter requires you to demultiplex the file, seek within the stream, start decoding, possibly skip a few frames, and then grab the frame of interest. The point is that it might not be as simple as it seems.
The video processing APIs on Windows are DirectShow and Media Foundation. With DirectShow it is possible to create a custom data source on top of a SQL-backed data stream, fetch DB data on demand, and use the API's components (stock and third party) to do the rest of the task.
It is possible to capture frames with the free VideoConverter for .NET, which is actually a wrapper around the FFMpeg tool. The idea is to use VideoConverter's live streaming capabilities (to a C# Stream) with the special FFMpeg format "rawvideo", which is really a stream of bitmaps that can be processed by a C# program, something like this:
var videoConv = new FFMpegConverter();
var ffMpegTask = videoConv.ConvertLiveMedia(
    "input.mp4",
    null,                  // autodetect live stream format
    rawBmpOutputStream,    // this is your special stream that will capture bitmaps
    "rawvideo",
    new ConvertSettings()
    {
        VideoFrameSize = "320x200",            // let's resize to an exact frame size
        CustomOutputArgs = " -pix_fmt bgr24 ", // Windows bitmap pixel format
        VideoFrameRate = 5,                    // let's consume 5 frames per second
        MaxDuration = 5                        // let's consume the live stream for the first 5 seconds
    });
VideoConverter can also read live streams from another .NET Stream (if the input format can be used with live stream conversion).
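The "rawvideo" output with -pix_fmt bgr24 is just a flat sequence of frames of Width x Height x 3 bytes each, so it can be sliced into Bitmaps without any decoder. A minimal sketch, assuming the 320x200 frame size from the settings above and that the raw output stream is readable from your code (the class and method names are illustrative):
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Runtime.InteropServices;

static class RawVideoFrames
{
    // Reads consecutive bgr24 frames from a stream and yields them as Bitmaps.
    public static IEnumerable<Bitmap> Read(Stream source, int width, int height)
    {
        int frameBytes = width * height * 3; // 3 bytes per pixel for bgr24
        var buffer = new byte[frameBytes];

        while (true)
        {
            // Fill the buffer with exactly one frame (Read may return partial chunks).
            int offset = 0;
            while (offset < frameBytes)
            {
                int read = source.Read(buffer, offset, frameBytes - offset);
                if (read == 0)
                    yield break; // end of stream
                offset += read;
            }

            var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
            var data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                    ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
            try
            {
                // Copy row by row in case the bitmap stride is padded.
                for (int y = 0; y < height; y++)
                    Marshal.Copy(buffer, y * width * 3,
                                 IntPtr.Add(data.Scan0, y * data.Stride), width * 3);
            }
            finally
            {
                bmp.UnlockBits(data);
            }
            yield return bmp;
        }
    }
}
Usage would then be something like foreach (var frame in RawVideoFrames.Read(rawBmpOutputStream, 320, 200)) { ... } while the conversion task is running.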

C# - saving multiple PNG files from a MemoryStream

I am getting multiple PNGs from another process via its standard output as a stream. I want to take this memory stream and save it as multiple PNG files. I have looked at PngBitmapEncoder/PngBitmapDecoder, but I can't seem to get multiple pages out of it (whenever I create a decoder using PngBitmapDecoder.Create, decoder.Frames.Count is always 1). Here is how I create the decoder:
BitmapDecoder decoder = PngBitmapDecoder.Create(memStream,
    BitmapCreateOptions.PreservePixelFormat,
    BitmapCacheOption.Default);
Am I doing something wrong?
There is no such thing as a multi-page PNG.
A PNG decoder will never return more than one frame.
You need to read each image separately.
There is a sample here on MSDN:
http://msdn.microsoft.com/fr-fr/library/system.windows.media.imaging.bitmapdecoder.aspx
"I am getting multiple PNGs from another process from its standard output as a stream"
It's not clear what this means. PNG does not support multiple images or pages in one file. Are you receiving several PNG files concatenated as a single stream? If that is the case (which would be rather strange), you don't really need to decode the PNGs, just split the stream and write each one (blindly) to a different file. A quick and dirty approach (not totally foolproof) is to scan the stream for the PNG signature (8 bytes) to detect the start of a new image.
If you would rather decode the successive streams (which seems overkill), you can use the pngcs library, instantiating a PngReader for each image; just be sure to call PngReader.ShouldCloseStream(false) so that the stream is not closed when each image ends.
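For completeness, a minimal sketch of the signature-scanning split described above; as noted, it is not foolproof (the signature bytes could in principle occur inside image data), and the class and output naming are illustrative:
using System.Collections.Generic;
using System.IO;

static class PngSplitter
{
    // 8-byte signature that starts every PNG file.
    static readonly byte[] Signature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };

    public static int Split(Stream source, string outputPrefix)
    {
        byte[] data;
        using (var ms = new MemoryStream())
        {
            source.CopyTo(ms);
            data = ms.ToArray();
        }

        // Find the offset of every signature occurrence.
        var starts = new List<int>();
        for (int i = 0; i <= data.Length - Signature.Length; i++)
        {
            bool match = true;
            for (int j = 0; j < Signature.Length && match; j++)
                match = data[i + j] == Signature[j];
            if (match)
                starts.Add(i);
        }

        // Each image runs from one signature to the next (or to the end of the data).
        for (int k = 0; k < starts.Count; k++)
        {
            int begin = starts[k];
            int end = k + 1 < starts.Count ? starts[k + 1] : data.Length;
            var image = new byte[end - begin];
            System.Array.Copy(data, begin, image, 0, image.Length);
            File.WriteAllBytes(outputPrefix + k + ".png", image);
        }
        return starts.Count;
    }
}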
Yes, there is such a thing as a multi-page PNG. It's called MNG (Multiple-image Network Graphics). It's almost as old as PNG (check libpng.org for the MNG format).
And there is a C# library that can help you with that:
http://www.codeproject.com/Articles/35289/NET-MNG-Viewer
In the last 4 years a format called APNG (Animated Portable Network Graphics) has started being accepted and used by browsers like Firefox. There is a wrapper for C#:
https://code.google.com/p/sharpapng/
Saving multiple PNGs using one file only will be much faster than using multiple files.

Open file from byte array

I am storing attachments in my applications.
These get stored in SQL as varbinary types.
I then read them into a byte[] object.
I now need to open these files, but I don't want to first write the files to disk and then open them using Process.Start().
I would like to open them using in-memory streams. Is there a way to do this in .NET? Please note these files can be of any type.
You can write all the bytes to a file without using streams:
System.IO.File.WriteAllBytes(path, bytes);
And then just use
Process.Start(path);
Trying to open the file from memory isn't worth the effort. Really, you don't want to do it.
MemoryStream has a constructor that takes a Byte array.
So:
var bytes = GetBytesFromDatabase(); // assuming you can do that yourself
var stream = new MemoryStream(bytes);
// use the stream just like a FileStream
That should pretty much do the trick.
Edit: Aw, crap, I totally missed the Process.Start part. I'm rewriting...
Edit 2:
You cannot do what you want to do. You must execute a process from a file. You'll have to write to disk; alternatively, the answer to this question has a very complex suggestion that might work, but would probably not be worth the effort.
MemoryMappedFile?
http://msdn.microsoft.com/en-us/library/system.io.memorymappedfiles.memorymappedfile.aspx
My only issue with this was that I will have to make sure the user has write access to the path where I will place the file...
You should be able to guarantee that the return of Path.GetTempFileName is something to which your user has access.
...and I am also not sure how I will detect that the user has closed the file so that I can delete it from disk.
If you start the process with Process.Start(...), shouldn't you be able to monitor for when the process terminates?
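A rough sketch combining the suggestions from this exchange: write the bytes to a temp path (keeping the original extension so the shell picks the right application), open it, and delete the file when the started process exits. As pointed out further below, the started process is not always the one that actually holds the document open, so the cleanup is only best-effort; the names here are illustrative:
using System;
using System.Diagnostics;
using System.IO;

static class AttachmentOpener
{
    public static void Open(byte[] bytes, string fileName)
    {
        // Path.GetTempPath() is writable for the current user; keep the original
        // extension so Process.Start can find the associated application.
        string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + "_" + fileName);
        File.WriteAllBytes(path, bytes);

        var process = new Process
        {
            StartInfo = new ProcessStartInfo(path) { UseShellExecute = true },
            EnableRaisingEvents = true
        };
        process.Exited += (s, e) =>
        {
            // Best-effort cleanup once the viewer terminates.
            try { File.Delete(path); } catch (IOException) { }
        };
        process.Start();
    }
}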
If you absolutely don't want to write to disk yourself, you can implement a local HTTP server and serve attachments over HTTP (like http://localhost:3456/myrecord123/attachment1234.pdf).
Also, I'm not sure you get enough benefit from doing such non-trivial work. You'll open files from the local security zone, which is slightly better than opening from disk... and there is no need to write to disk yourself. And you'll likely get a somewhat reasonable warning if you have an .exe file as an attachment.
On tracking "the process is done with the attachment" you are more or less out of luck: only in some cases is the process that started opening the file the one that is actually using it. E.g. Office applications are usually single-instance applications, and as a result the document will be opened in the first instance of the application, not the one you've started.

How to read a sound stream sent by Flash using C++ (C#)?

I need to read a sound stream sent by Flash audio in my C++ application (C++ is not a real limitation; it may be C# or any other desktop language).
Right now the Flash app sends audio to another Flash app, but I need to receive the same audio in a desktop application.
So, is there a standard or best way how to do it?
Thank you for your answers.
How is the sound actually sent? Via the network?
Edit: You'd be either capturing the audio from an HTTP stream, or an RTMP stream. Run Wireshark to find out, but I suspect you're doing something slightly shady...
You could try using the sound system from the Gnash project.
So basically you want to connect to an RTMP sound stream from Flash Media Server from an arbitrary non-Flash application? Have you taken a look at http://en.wikipedia.org/wiki/Real_Time_Messaging_Protocol ?
Unfortunately, Adobe IS relatively proprietary (hence the Apple-Adobe wars happening lately), but for several languages there are projects to help out with RTMP.
WebOrb is commercial, for .NET, Java, PHP:
http://www.themidnightcoders.com/products.html
FluorineFX is open source for .NET only:
http://www.fluorinefx.com/
I haven't used either myself for RTMP, but I have used FluorineFX to connect to a flash remoting (AMF) gateway. I imagine it may do what you need for receiving the audio stream from a .NET-enabled client.

Getting the frames, frame rate and other attributes of a video clip

If you have experience writing applications with Microsoft DirectShow Editing Services (codename Dexter), this will sound very familiar to you. In the Windows environment, capturing still frames has traditionally been done using C++ and the Dexter Type Library to access DirectShow COM objects. To do this in the .NET Framework, you can make an Interop assembly of DexterLib, which is listed under COM References in VS 2005. However, it takes a good amount of work to figure out how to convert your code from C++ to C# .NET. The problem occurs when you need to pass a pointer reference as an argument to a native function: the CLR does not directly support pointers, since the memory position can change after each garbage collection cycle.
You can find many articles on how to use DirectShow on CodeProject or elsewhere, and we try to keep it simple. Here our goal is to convert a video file into an array of Bitmaps, and I tried to keep this as short as possible; of course, you can write your own code to get the Bitmaps out of a live stream and buffer them shortly before you send them.
Basically we have two options for using DirectShow to convert our video file to frames in .NET:
1. Edit the Interop assembly and change the type references from pointers to C# .NET types.
2. Use pointers with the unsafe keyword.
We chose the unsafe (read: fast) method. It means that we extract our frames outside of the .NET managed scope. It is important to mention that managed does not always mean better, and unsafe does not really mean unsafe!
MediaDetClass mediaClass = new MediaDetClass();
_AMMediaType mediaType;
... // load the video file
int outputStreams = mediaClass.OutputStreams;
outFrameRate = 0.0;
for (int i = 0; i < outputStreams; i++)
{
    mediaClass.CurrentStream = i;
    try
    {
        // If it can get the framerate, that's enough -
        // we accept the video file; otherwise it throws an exception here
        outFrameRate = mediaClass.FrameRate;
        .......
        // get the attributes here
        .....
    }
    catch
    {
        // Not a valid media type? Go to the next output stream
    }
}
// No frame rate?
if (outFrameRate == 0.0)
    throw new NotSupportedException(" The program is unable" +
        " to read the video file.");
// We have a framerate? Move on...
...
// Create an array to hold Bitmaps and initialize
// other objects to store information...
unsafe
{
    ...
    // create a byte pointer to store the BitmapBits
    ...
    while (currentStreamPos < endPosition)
    {
        mediaClass.GetBitmapBits(currentStreamPos, ref bufferSize,
            ref *ptrRefFramesBuffer,
            outClipSize.Width, outClipSize.Height);
        ...
        // add the frame Bitmap to the frameArray
        ...
    }
}
...
Transfer extracted data over HTTP
So far we have converted our video to an array of Bitmap frames. The next step is to transfer our frames over HTTP all the way to the client's browser. It would be nice if we could just send our Bitmap bits down to the client, but we cannot. HTTP is designed to transport text characters, which means your browser only reads characters that are defined in the HTML page's character set. Anything outside this encoding cannot be displayed directly.
To accomplish this step, we use Base64 encoding to convert our Bitmaps to ASCII characters. Traditionally, Base64 encoding has been used to embed objects in emails. Almost all modern browsers, including Gecko browsers, Opera, Safari, and KDE (not IE!), support the data: URI scheme standard to display Base64-encoded images. Great! Now we have our frames ready to be transferred over HTTP.
System.IO.MemoryStream memory = new System.IO.MemoryStream();
while (currentStreamPos < endPosition)
{
    ...
    // Save the Bitmaps somewhere in (managed) memory
    vdeoBitmaps.Save(memory, System.Drawing.Imaging.ImageFormat.Jpeg);
    // Convert it to Base64
    strFrameArray[frameCount] = System.Convert.ToBase64String(memory.ToArray());
    // Get ready for the next one (SetLength(0) clears the buffer; seeking back to 0
    // alone would leave stale bytes behind when a later frame is smaller)
    memory.SetLength(0);
}
memory.Close();
...
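As a side note, a frame encoded this way can be dropped straight into an img tag via the data: URI scheme mentioned earlier; strFrameArray here is the array filled in the snippet above:
// Hypothetical usage: build a data: URI for one Base64-encoded JPEG frame.
string imgTag = "<img src=\"data:image/jpeg;base64," + strFrameArray[0] + "\" />";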
But we cannot just send out the encoded frames as a giant string. We create an XML document that holds our frames and other information about the video and then send it to the client. This way the browser can receive our frames as a DOM XML object and easily navigate through them. Just imagine how easy it is to edit a video that is stored in XML format:
[Example XML document containing the frame rate (14.9850224700412), the frame size ({Width=160, Height=120}), the duration (6.4731334), and the Base64-encoded frames (/9j/4AAQSkZJRgABAQEAYAB....)]
This format also has its own drawbacks. Videos that are converted to Base64-encoded XML files are somewhere between 10% (mostly AVI files) and 300% or more (some WMV files) bigger than their binary equivalent.
If you are using an XML file, you don't even need a web server; you can open the HTML from a local directory and it should work! I included an executable in the article's download file that can convert your video file to an XML document, which can later be shown in the browser. However, using big files and high-resolution videos is not a good idea!
OK, now we can send out our 'Base64 encoded video' XML document as we would any other type of XML file. Who says XML files always have to be boring record sets anyway?
