Implementing IRandomAccessStream without copying buffers - C#

I'm a bit confused about what I'm supposed to do with targetBuffer in a ReadAsync() implementation (Universal store application for Windows 8.1).
public IAsyncOperationWithProgress<IBuffer, uint> ReadAsync(IBuffer targetBuffer, uint count, InputStreamOptions options)
The problem is, I can't find a way to write to targetBuffer and to change its Length given my specific implementation requirements.
What I have inside is an encrypted stream with some block cipher. I want to wrap it with IRandomAccessStream so it can be used with XAML framework components (such as passing encrypted images/video to Image or MediaElement objects). Inside the class I have an array of bytes which I reuse for every block, passing it to the encryption library, which fills it and reports the chunk size.
So, when IRandomAccessStream.ReadAsync() is called, I need to somehow get my bytes into targetBuffer and set its Length to the proper value... which I can't seem to manage.
I tried this:
var stream = targetBuffer.AsStream();
while(count > 0) {
/* doing something to get next chunk of data decrypted */
// byte[] chunk is the array used to hold decrypted data
// int chunkLength is the length of data (<= chunk.Length)
count -= chunkLength;
await stream.WriteAsync(chunk, 0, chunkLength);
}
return targetBuffer;
And targetBuffer.Length remains zero, yet if I try to print its content, the data is there!
Debug.WriteLine(targetBuffer.GetByte(0..N));
I now have a naïve implementation that uses a memory stream (in addition to the byte array buffer), collects the data there, and reads it back from there into targetBuffer. This works, but it looks bad: managed streams write to byte[] and WinRT streams write to IBuffer, and I just can't find a way around copying between them, so I waste memory and performance.
I'd appreciate any ideas.
This is what it looks like now. I end up using a byte array as a decryption buffer and a resizeable memory stream as a proxy.
public IAsyncOperationWithProgress<IBuffer, uint> ReadAsync(IBuffer targetBuffer, uint count, InputStreamOptions options)
{
return AsyncInfo.Run<IBuffer, uint>(async (token, progress) => {
Transport.Seek(0); // Transport is InMemoryRandomAccessStream
var remaining = count;
while(remaining > 0) {
/*
ReadAsync() overload reads & decrypts data,
result length is <= remaining bytes,
deals with block cipher alignment and the like
*/
IBuffer chunk = await ReadAsync(remaining);
await Transport.WriteAsync(chunk);
remaining -= chunk.Length;
}
Transport.Seek(0);
// copy resulting bytes to target buffer
await Transport.ReadAsync(targetBuffer, count, InputStreamOptions.None);
return targetBuffer;
});
}
UPDATE: I've tested the solution above with an encrypted image of 7.9 MB. I fed it to an Image instance like this:
var image = new BitmapImage();
await image.SetSourceAsync(myCustomStream);
Img.Source = image; // Img is <Image> in xaml
All is OK until execution reaches await Transport.ReadAsync(targetBuffer, count, InputStreamOptions.None);: there memory consumption skyrockets (from around 33 MB to 300+ MB), which effectively crashes the phone emulator (the desktop version shows the image alright, though memory is consumed just the same). What the hell is going on there?!
SOLVED in March 2017
First, I somehow did not realize I could just set the Length property directly after writing data to the buffer. Second, if you do just about anything wrong in my case (a custom IRandomAccessStream implementation is the source for a XAML Image element), the app crashes without leaving any logs or showing any errors, so it's really hard to figure out what has gone awry.
This is what the code looks like now:
public IAsyncOperationWithProgress<IBuffer, uint> ReadAsync(IBuffer targetBuffer, uint count, InputStreamOptions options)
{
    return AsyncInfo.Run<IBuffer, uint>(async (token, progress) => {
        var output = targetBuffer.AsStream();
        while (count > 0) {
            //
            // do all the decryption stuff and get decrypted data
            // to a reusable buffer byte array
            //
            int bytes = Math.Min((int) count, BufferLength - BufferPosition);
            output.Write(decrypted, BufferPosition, bytes);
            targetBuffer.Length += (uint)bytes;
            BufferPosition += bytes;
            progress.Report((uint)bytes);
            count -= (uint)bytes;
        }
        return targetBuffer;
    });
}

With using System.Runtime.InteropServices.WindowsRuntime; in scope you can write:
(your byte array).CopyTo(targetBuffer);
The Length property on IBuffer has a setter, so the following code is perfectly valid:
targetBuffer.Length = (your integer here);
You also have more variants of CopyTo to choose from. Have a look at this one:
public static void CopyTo(this byte[] source, int sourceIndex, IBuffer destination, uint destinationIndex, int count);
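Putting those two pieces together, the ReadAsync implementation can skip the proxy stream entirely. This is only a sketch of the idea: DecryptNextChunk and the reusable decrypted array are placeholders for the asker's decryption code, not real API, while AsyncInfo and the CopyTo extension come from System.Runtime.InteropServices.WindowsRuntime.
public IAsyncOperationWithProgress<IBuffer, uint> ReadAsync(IBuffer targetBuffer, uint count, InputStreamOptions options)
{
    return AsyncInfo.Run<IBuffer, uint>(async (token, progress) =>
    {
        targetBuffer.Length = 0;
        while (targetBuffer.Length < count)
        {
            // Placeholder: decrypt at most the remaining number of requested bytes
            // into the reusable byte array and return how many bytes it now holds.
            int chunkLength = await DecryptNextChunk(decrypted, (int)(count - targetBuffer.Length), token);
            if (chunkLength == 0) break; // underlying stream exhausted

            // Copy the managed bytes straight into the WinRT buffer and grow Length.
            decrypted.CopyTo(0, targetBuffer, targetBuffer.Length, chunkLength);
            targetBuffer.Length += (uint)chunkLength;
            progress.Report(targetBuffer.Length);
        }
        return targetBuffer;
    });
}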

Related

.NET Native incredibly slower than Debug build with ReadAsync calls

So I just found a really weird issue in my app, and it turns out it was caused by the .NET Native compiler for some reason.
I have a method that compares the content of two files, and it works fine. With two 400 KB files, it takes like 0.4 seconds to run on my Lumia 930 in Debug mode. But in Release mode, it takes up to 17 seconds for no apparent reason. Here's the code:
// Compares the content of the two streams
private static async Task<bool> ContentEquals(ulong size, [NotNull] Stream fileStream, [NotNull] Stream testStream)
{
// Initialization
const int bytes = 8;
int iterations = (int)Math.Ceiling((double)size / bytes);
byte[] one = new byte[bytes];
byte[] two = new byte[bytes];
// Read all the bytes and compare them 8 at a time
for (int i = 0; i < iterations; i++)
{
await fileStream.ReadAsync(one, 0, bytes);
await testStream.ReadAsync(two, 0, bytes);
if (BitConverter.ToUInt64(one, 0) != BitConverter.ToUInt64(two, 0)) return false;
}
return true;
}
/// <summary>
/// Checks if the content of two files is the same
/// </summary>
/// <param name="file">The source file</param>
/// <param name="test">The file to test</param>
public static async Task<bool> ContentEquals([NotNull] this StorageFile file, [NotNull] StorageFile test)
{
// If the two files have a different size, just stop here
ulong size = await file.GetFileSizeAsync();
if (size != await test.GetFileSizeAsync()) return false;
// Open the two files to read them
try
{
// Direct streams
using (Stream fileStream = await file.OpenStreamForReadAsync())
using (Stream testStream = await test.OpenStreamForReadAsync())
{
return await ContentEquals(size, fileStream, testStream);
}
}
catch (UnauthorizedAccessException)
{
// Copy streams
StorageFile fileCopy = await file.CreateCopyAsync(ApplicationData.Current.TemporaryFolder);
StorageFile testCopy = await file.CreateCopyAsync(ApplicationData.Current.TemporaryFolder);
using (Stream fileStream = await fileCopy.OpenStreamForReadAsync())
using (Stream testStream = await testCopy.OpenStreamForReadAsync())
{
// Compare the files
bool result = await ContentEquals(size, fileStream, testStream);
// Delete the temp files at the end of the operation
Task.Run(() =>
{
fileCopy.DeleteAsync(StorageDeleteOption.PermanentDelete).Forget();
testCopy.DeleteAsync(StorageDeleteOption.PermanentDelete).Forget();
}).Forget();
return result;
}
}
}
Now, I have absolutely no idea why this same exact method goes from 0.4 seconds all the way up to more than 15 seconds when compiled with the .NET Native toolchain.
I fixed this issue by using a single ReadAsync call to read each file in its entirety, then generating two MD5 hashes from the results and comparing them. This approach runs in around 0.4 seconds on my Lumia 930, even in Release mode.
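For reference, a hash-based comparison along those lines might look roughly like this (only a sketch: the helper name is made up, and it uses the WinRT HashAlgorithmProvider API rather than whatever was actually used):
// Sketch: read each file in one go, hash it, and compare the digests.
// Assumes Windows.Storage, Windows.Storage.Streams and
// Windows.Security.Cryptography(.Core) are available (UWP).
private static async Task<bool> HashEquals(StorageFile file, StorageFile test)
{
    IBuffer first = await FileIO.ReadBufferAsync(file);
    IBuffer second = await FileIO.ReadBufferAsync(test);
    HashAlgorithmProvider md5 = HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Md5);
    // Compare the two 16-byte MD5 digests.
    return CryptographicBuffer.Compare(md5.HashData(first), md5.HashData(second));
}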
Still, I'm curious about this issue and I'd like to know why it was happening.
Thank you in advance for your help!
EDIT: I've tweaked my method to reduce the number of actual I/O operations; this is the result, and it looks like it's working fine so far.
private static async Task<bool> ContentEquals(ulong size, [NotNull] Stream fileStream, [NotNull] Stream testStream)
{
// Initialization
const int bytes = 102400;
int iterations = (int)Math.Ceiling((double)size / bytes);
byte[] first = new byte[bytes], second = new byte[bytes];
// Read the streams in large chunks and compare the chunks
for (int i = 0; i < iterations; i++)
{
// Read the next data chunk
int[] counts = await Task.WhenAll(fileStream.ReadAsync(first, 0, bytes), testStream.ReadAsync(second, 0, bytes));
if (counts[0] != counts[1]) return false;
int target = counts[0];
// Compare the first bytes 8 at a time
int j;
for (j = 0; j + 8 <= target; j += 8)
{
if (BitConverter.ToUInt64(first, j) != BitConverter.ToUInt64(second, j)) return false;
}
// Compare the bytes in the last chunk if necessary
while (j < target)
{
if (first[j] != second[j]) return false;
j++;
}
}
return true;
}
Reading eight bytes at a time from an I/O device is a performance disaster. That's why we use buffered reading (and writing) in the first place: it takes time for an I/O request to be submitted, processed, executed and finally returned.
OpenStreamForReadAsync appears not to be using a buffered stream, so your 8-byte requests are actually requesting 8 bytes at a time from the device. Even with a solid-state drive, this is very slow.
You don't need to read the whole file at once, though. The usual approach is to pick a reasonable buffer size to pre-read; reading something like 1 KiB at a time should fix the whole issue without requiring you to load the whole file into memory at once. You can put a BufferedStream between the file and your reading code to handle this for you. And if you're feeling adventurous, you could issue the next read request before the CPU processing is done, though it's likely that this won't help your performance much, given how much of the work is just I/O.
It also seems that .NET Native has much higher overhead than managed .NET for asynchronous I/O in the first place, which makes those tiny asynchronous calls all the more of a problem. Fewer requests for larger chunks of data will help.
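For example, wrapping the raw streams before handing them to the comparison method could look roughly like this (a sketch reusing the question's ContentEquals; the 128 KiB buffer size is an arbitrary choice, and it assumes BufferedStream is available on the target platform):
using (Stream fileStream = new BufferedStream(await file.OpenStreamForReadAsync(), 128 * 1024))
using (Stream testStream = new BufferedStream(await test.OpenStreamForReadAsync(), 128 * 1024))
{
    // The tiny 8-byte ReadAsync calls now hit the in-memory buffer instead of the device.
    return await ContentEquals(size, fileStream, testStream);
}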

Reading String from Stream

I am encrypting data to a stream. If, for example, my data is of type Int32, I will use BitConverter.GetBytes(myInt) to get the bytes and then write those bytes to the stream.
To read the data back, I read sizeof(Int32) to determine the number of bytes to read, read those bytes, and then use BitConverter.ToInt32(byteArray, 0) to convert the bytes back to an Int32.
So how would I do this with a string? Writing the string is no problem. But the trick when reading the string is knowing how many bytes to read before I can then convert it back to a string.
I have found similar questions, but they seem to assume the string occupies the entire stream and just read to the end of the stream. But here, I can have any number of other items before and after the string.
Note that StringReader is not an option here since I want the option of handling file data that may be larger than I want to load into memory.
You would normally send a content length header, and then read the length determined by that information.
Here is some sample code:
public async Task ContinuouslyReadFromStream(NetworkStream sourceStream, CancellationToken ct)
{
    while (!ct.IsCancellationRequested && sourceStream.CanRead)
    {
        while (sourceStream.CanRead && !sourceStream.DataAvailable)
        {
            // Avoid potential high CPU usage from calling stream.ReadAsync
            // in a tight loop while waiting for data
            await Task.Delay(10, ct);
        }
        var lengthOfMessage = BitConverter.ToInt32(await ReadExactBytesAsync(sourceStream, 4, ct), 0);
        var content = await ReadExactBytesAsync(sourceStream, lengthOfMessage, ct);
        // Assuming you use UTF8 encoding
        var stringContent = Encoding.UTF8.GetString(content);
    }
}
protected static async Task<byte[]> ReadExactBytesAsync(Stream stream, int count, CancellationToken ct)
{
    var buffer = new byte[count];
    var totalBytesRemaining = count;
    var totalBytesRead = 0;
    while (totalBytesRemaining > 0)
    {
        var bytesRead = await stream.ReadAsync(buffer, totalBytesRead, totalBytesRemaining, ct);
        if (bytesRead == 0) throw new EndOfStreamException(); // remote side closed the connection
        ct.ThrowIfCancellationRequested();
        totalBytesRead += bytesRead;
        totalBytesRemaining -= bytesRead;
    }
    return buffer;
}
The solutions that come to mind are either to provide a predetermined sentinel value to signal the end of the string (ASM uses a 0 byte for this, for example), or to provide a fixed-length block of metadata ahead of each new datatype. That block of metadata would contain the type and the length, plus whatever other information you find useful to include.
For compactness I would use the sentinel value if it will work in your system.
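A minimal sketch of the length-prefix idea for a single string field might look like this (synchronous for brevity; UTF-8 and a 4-byte length are assumptions, and ReadExact is a made-up helper):
// Writes a 4-byte length prefix followed by the UTF-8 bytes of the string.
static void WriteString(Stream stream, string value)
{
    byte[] payload = Encoding.UTF8.GetBytes(value);
    byte[] length = BitConverter.GetBytes(payload.Length);
    stream.Write(length, 0, length.Length);
    stream.Write(payload, 0, payload.Length);
}

// Reads the 4-byte length, then exactly that many bytes, and decodes them.
static string ReadString(Stream stream)
{
    byte[] length = new byte[sizeof(int)];
    ReadExact(stream, length);
    byte[] payload = new byte[BitConverter.ToInt32(length, 0)];
    ReadExact(stream, payload);
    return Encoding.UTF8.GetString(payload);
}

// Loops until the buffer is full, since Stream.Read may return fewer bytes.
static void ReadExact(Stream stream, byte[] buffer)
{
    int offset = 0;
    while (offset < buffer.Length)
    {
        int read = stream.Read(buffer, offset, buffer.Length - offset);
        if (read == 0) throw new EndOfStreamException();
        offset += read;
    }
}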

Send PC information over TCP

I am trying to send various bits of PC information, such as free HDD space and total RAM, to a Windows service over TCP. I have the following code, which basically creates a string of information split by a |, ready for processing within the Windows service TCP server and insertion into a SQL table.
Is it best to do this as I have done or is there a better way?
public static void Main(string[] args)
{
Program stc = new Program(clientType.TCP);
stc.tcpClient(serverAddress, Environment.MachineName.ToString() + "|" + FormatBytes(GetTotalFreeSpace("C:\\")).ToString());
Console.WriteLine("The TCP server is disconnected.");
}
public void tcpClient(String serverName, String whatEver)
{
try
{
//Create an instance of TcpClient.
TcpClient tcpClient = new TcpClient(serverName, tcpPort);
//Create a NetworkStream for this tcpClient instance.
//This is only required for TCP stream.
NetworkStream tcpStream = tcpClient.GetStream();
if (tcpStream.CanWrite)
{
Byte[] inputToBeSent = System.Text.Encoding.ASCII.GetBytes(whatEver.ToCharArray());
tcpStream.Write(inputToBeSent, 0, inputToBeSent.Length);
tcpStream.Flush();
}
while (tcpStream.CanRead && !DONE)
{
//We need the DONE condition here because there is possibility that
//the stream is ready to be read while there is nothing to be read.
if (tcpStream.DataAvailable)
{
Byte[] received = new Byte[512];
int nBytesReceived = tcpStream.Read(received, 0, received.Length);
String dataReceived = System.Text.Encoding.ASCII.GetString(received);
Console.WriteLine(dataReceived);
DONE = true;
}
}
}
catch (Exception e)
{
Console.WriteLine("An Exception has occurred.");
Console.WriteLine(e.ToString());
}
}
Thanks
Because TCP is stream-based, it is important to have some indicator in the message to signal the other end when it has read the complete message. There are two traditional ways of doing this. First, you could have some special byte pattern at the end of each message. When the other end reads the data, it knows that it has read a full message when that special byte pattern is seen. Using this mechanism requires a byte pattern that is not likely to be included in the actual message. The other way is to include the length of the data at the beginning of the message. This is the way I do it. All my TCP messages include a short header structured like this:
class MsgHeader
{
short syncPattern; // e.g., 0xFDFD
short msgType; // useful if you have different messages
int msgLength; // length of the message minus header
}
When the other side starts receiving data, it reads the first 8 bytes, verifies the sync pattern (for the sake of sanity), and then uses the message length to read the actual message. Once the message has been read, it processes the message based on the message type.
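The receive side might look roughly like this (only a sketch; ReadExact stands in for a loop that keeps reading until the requested number of bytes has arrived, and the 0xFDFD value just mirrors the example above):
// Read the fixed 8-byte header first: 2-byte sync pattern, 2-byte type, 4-byte length.
byte[] header = new byte[8];
ReadExact(stream, header);

short syncPattern = BitConverter.ToInt16(header, 0);
short msgType = BitConverter.ToInt16(header, 2);
int msgLength = BitConverter.ToInt32(header, 4);

if (syncPattern != unchecked((short)0xFDFD))
    throw new InvalidDataException("Lost framing: unexpected sync pattern");

// Read exactly msgLength bytes of payload, then dispatch on msgType.
byte[] body = new byte[msgLength];
ReadExact(stream, body);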
I'd suggest creating a class that gathers the system information you're interested in and is capable of encoding/decoding it, something like:
using System;
using System.Text;
class SystemInfo
{
private string machineName;
private int freeSpace;
private int processorCount;
// Private so no one can create it directly.
private SystemInfo()
{
}
// This is a static method now. Call SystemInfo.Encode() to use it.
public static byte[] Encode()
{
// Convert the machine name to an ASCII-based byte array.
var machineNameAsByteArray = Encoding.ASCII.GetBytes(Environment.MachineName);
// *THIS IS IMPORTANT* The easiest way to encode a string value so that it
// can be easily decoded is to prepend the length of the string. Otherwise,
// you're left guessing on the decode side about how long the string is.
// Calculate the message length. This does *NOT* include the size of
// the message length itself.
// NOTE: As new fields are added to the message, account for their
// respective size here and encode them below.
var messageLength = sizeof(int) + // length of machine name string
machineNameAsByteArray.Length + // the machine name value
sizeof(int) + // free space
sizeof(int); // processor count
// Calculate the required size of the byte array. This *DOES* include
// the size of the message length.
var byteArraySize = messageLength + // message itself
sizeof(int); // 4-byte message length field
// Allocate the byte array.
var bytes = new byte[byteArraySize];
// The offset is used to keep track of where the next field should be
// placed in the byte array.
var offset = 0;
// Encode the message length (a very simple header).
Buffer.BlockCopy(BitConverter.GetBytes(messageLength), 0, bytes, offset, sizeof(int));
// Increment offset by the number of bytes added to the byte array.
// Note that the increment is equal to the value of the last parameter
// in the preceding BlockCopy call.
offset += sizeof(int);
// Encode the length of machine name to make it easier to decode.
Buffer.BlockCopy(BitConverter.GetBytes(machineNameAsByteArray.Length), 0, bytes, offset, sizeof(int));
// Increment the offset by the number of bytes added.
offset += sizeof(int);
// Encode the machine name as an ASCII-based byte array.
Buffer.BlockCopy(machineNameAsByteArray, 0, bytes, offset, machineNameAsByteArray.Length);
// Increment the offset. See the pattern?
offset += machineNameAsByteArray.Length;
// Encode the free space.
Buffer.BlockCopy(BitConverter.GetBytes(GetTotalFreeSpace("C:\\")), 0, bytes, offset, sizeof(int));
// Increment the offset.
offset += sizeof(int);
// Encode the processor count.
Buffer.BlockCopy(BitConverter.GetBytes(Environment.ProcessorCount), 0, bytes, offset, sizeof(int));
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return bytes;
}
// Static method. Call it as SystemInfo.Decode(myReceivedByteArray);
public static SystemInfo Decode(byte[] message)
{
// When decoding, the presumption is that your socket code read the first
// four bytes from the socket to determine the length of the message. It
// then allocated a byte array of that size and read the message into that
// byte array. So the byte array passed into this function does *NOT* have
// the 4-byte message length field at the front of it. It makes no sense
// in this class anyway.
// Create the SystemInfo object to be populated and returned.
var si = new SystemInfo();
// Use the offset to navigate through the byte array.
var offset = 0;
// Extract the length of the machine name string since that is the first
// field encoded in the message.
var machineNameLength = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the machine name now that we know its length.
si.machineName = Encoding.ASCII.GetString(message, offset, machineNameLength);
// Increment the offset.
offset += machineNameLength;
// Extract the free space.
si.freeSpace = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the processor count.
si.processorCount = BitConverter.ToInt32(message, offset);
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return si;
}
}
To encode the data, call the Encode method like this:
byte[] msg = SystemInfo.Encode();
To decode the data once it's been read from the socket, call the Decode method like this:
SystemInfo si = SystemInfo.Decode(msg);
As to your actual code, I'm not sure why you're reading from the socket after writing to it unless you're expecting a return value.
A few things to consider. Hope this helps.
EDIT
First of all, use the MsgHeader if you feel you need it. The example above simply uses the message length as the header, i.e., it does not include a sync pattern or a message type. Whether you need to use this additional information is up to you.
For every new field you add to the SystemInfo class, the overall size of the message will increase, obviously. Thus, the messageLength value needs to be adjusted accordingly. For example, if you add an int to include the number of processors, messageLength will increase by sizeof(int). Then, to add it to the byte array, simply use the same System.Buffer.BlockCopy call. I've adjusted the example to show this in a little more detail, including making the methods static.

Understanding the NetworkStream.EndRead()-example from MSDN

I tried to understand the MSDN example for NetworkStream.EndRead(). There are some parts that i do not understand.
So here is the example (copied from MSDN):
// Example of EndRead, DataAvailable and BeginRead.
public static void myReadCallBack(IAsyncResult ar ){
NetworkStream myNetworkStream = (NetworkStream)ar.AsyncState;
byte[] myReadBuffer = new byte[1024];
String myCompleteMessage = "";
int numberOfBytesRead;
numberOfBytesRead = myNetworkStream.EndRead(ar);
myCompleteMessage =
String.Concat(myCompleteMessage, Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
// message received may be larger than buffer size so loop through until you have it all.
while(myNetworkStream.DataAvailable){
myNetworkStream.BeginRead(myReadBuffer, 0, myReadBuffer.Length,
new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack),
myNetworkStream);
}
// Print out the received message to the console.
Console.WriteLine("You received the following message : " +
myCompleteMessage);
}
It uses BeginRead() and EndRead() to read asynchronously from the network stream.
The whole thing is invoked by calling
myNetworkStream.BeginRead(someBuffer, 0, someBuffer.Length, new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack), myNetworkStream);
somewhere else (not displayed in the example).
What I think it should do is print the whole message received from the NetworkStream in a single WriteLine (the one at the end of the example). Notice that the string is called myCompleteMessage.
Now when I look at the implementation some problems arise for my understanding.
First of all: The example allocates a new method-local buffer myReadBuffer. Then EndRead() is called, which writes the received message into the buffer that BeginRead() was supplied with. That is NOT the myReadBuffer that was just allocated; how would the network stream know about it? So in the next line, numberOfBytesRead bytes from the empty buffer are appended to myCompleteMessage, which has the current value "". In the last line this message, consisting of a lot of '\0's, is printed with Console.WriteLine.
This doesn't make any sense to me.
The second thing I do not understand is the while-loop.
BeginRead is an asynchronous call. So no data is immediately read. So as I understand it, the while loop should run quite a while until some asynchronous call is actually executed and reads from the stream so that there is no data available any more. The documentation doesn't say that BeginRead immediately marks some part of the available data as being read, so I do not expect it to do so.
This example does not improve my understanding of those methods. Is this example wrong or is my understanding wrong (I expect the latter)? How does this example work?
I think the while loop around the BeginRead shouldn't be there. You don't want to execute BeginRead more than once before the EndRead is done. Also, the buffer needs to be specified outside the BeginRead, because you may need more than one read per packet/buffer.
There are some things you need to think about, like how long your messages/blocks are (fixed size), or whether you should prefix them with a length (variable size): <datalength><data><datalength><data>.
Don't forget it is a streaming connection, so multiple or partial messages/packets can be read in one read.
Pseudo example:
int bytesNeeded;
int bytesRead;
public void Start()
{
bytesNeeded = 40; // you need to know how many bytes you're expecting
bytesRead = 0;
BeginReading();
}
public void BeginReading()
{
myNetworkStream.BeginRead(
someBuffer, bytesRead, bytesNeeded - bytesRead,
new AsyncCallback(EndReading),
myNetworkStream);
}
public void EndReading(IAsyncResult ar)
{
int numberOfBytesRead = myNetworkStream.EndRead(ar);
if(numberOfBytesRead == 0)
{
// disconnected
return;
}
bytesRead += numberOfBytesRead;
if(bytesRead == bytesNeeded)
{
// Handle buffer
Start();
}
else
BeginReading();
}

Is Stream.Copy piped?

Suppose I am writing a tcp proxy code.
I am reading from the incoming stream and writing to the output stream.
I know that Stream.Copy uses a buffer, but my question is:
Does the Stream.Copy method write to the output stream while fetching the next chunk from the input stream, or is it a loop like "read chunk from input, write chunk to output, read chunk from input, etc."?
Here's the implementation of CopyTo in .NET 4.5:
private void InternalCopyTo(Stream destination, int bufferSize)
{
int num;
byte[] buffer = new byte[bufferSize];
while ((num = this.Read(buffer, 0, buffer.Length)) != 0)
{
destination.Write(buffer, 0, num);
}
}
So as you can see, it reads from the source, then writes to the destination. This could probably be improved ;)
EDIT: here's a possible implementation of a piped version:
public static void CopyToPiped(this Stream source, Stream destination, int bufferSize = 0x14000)
{
byte[] readBuffer = new byte[bufferSize];
byte[] writeBuffer = new byte[bufferSize];
int bytesRead = source.Read(readBuffer, 0, bufferSize);
while (bytesRead > 0)
{
Swap(ref readBuffer, ref writeBuffer);
var iar = destination.BeginWrite(writeBuffer, 0, bytesRead, null, null);
bytesRead = source.Read(readBuffer, 0, bufferSize);
destination.EndWrite(iar);
}
}
static void Swap<T>(ref T x, ref T y)
{
T tmp = x;
x = y;
y = tmp;
}
Basically, it reads a chunk synchronously, starts writing it to the destination asynchronously, then reads the next chunk and waits for the previous write to complete.
I ran a few performance tests:
using MemoryStreams, I didn't expect a significant improvement, since it doesn't use IO completion ports (AFAIK); and indeed, the performance is almost identical
using files on different drives, I expected the piped version to perform better, but it doesn't... it's actually slightly slower (by 5 to 10%)
So it apparently doesn't bring any benefit, which is probably the reason why it isn't implemented this way...
According to Reflector, it does not. Such behavior would have to be documented, because it would introduce concurrency, and that is never safe to assume in general. So the API design not to "pipe" is sound.
This is not just a question of Stream.Copy being more or less smart; copying concurrently is not a mere implementation detail.
Stream.Copy is a synchronous operation. I don't think it is reasonable to expect it to use asynchronous reads and writes to overlap reading and writing.
I would expect an asynchronous version (like RandomAccessStream.CopyAsync) to use simultaneous read and write.
Note: using multiple threads during the copy would be unwelcome behavior, but using asynchronous read and write to run them at the same time is OK.
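For illustration, an async version of the overlapped copy might look like this (just a sketch, not framework code; it mirrors the Begin/End example above using async/await):
public static async Task CopyToPipedAsync(this Stream source, Stream destination, int bufferSize = 0x14000)
{
    byte[] readBuffer = new byte[bufferSize];
    byte[] writeBuffer = new byte[bufferSize];
    int bytesRead = await source.ReadAsync(readBuffer, 0, bufferSize);
    while (bytesRead > 0)
    {
        // Swap buffers: the chunk just read becomes the chunk being written.
        byte[] tmp = readBuffer; readBuffer = writeBuffer; writeBuffer = tmp;
        Task pendingWrite = destination.WriteAsync(writeBuffer, 0, bytesRead);
        // Read the next chunk while the previous one is still being written.
        bytesRead = await source.ReadAsync(readBuffer, 0, bufferSize);
        await pendingWrite;
    }
}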
Writing to the output stream while fetching the next chunk is impossible when using a single buffer, because fetching the next chunk can overwrite the buffer while it's still being used for output.
You could use double buffering, but it's pretty much the same as using a double-sized buffer.
