Plotting .wav file while playing with nAudio - c#

I want to open a wave file and play it back while plotting its amplitude and frequency response on charts.
I have a function that takes arrays of bytes or floats and does this for me.
What I am trying to do is sample the audio since the last time I sampled it and pass an array of floats representing that span to the function.
I'm trying to use an ISampleProvider and its Read method for this, but I just can't get it to work. The first read works perfectly, but then the thread crashes on subsequent reads (occasionally it crashes on the first read as well).
This is how I'm setting up the audio; the file plays just fine:
_waveOut = new WaveOutEvent();
_waveReader = new WaveFileReader(_cncFilePath.Substring(0, _cncFilePath.Length - 4) + "wav");
_waveOutSampleProvider = new SampleChannel(_waveReader, true);
_waveOut.Init(_waveOutSampleProvider);
_waveOut.Play();
This runs on a 100 ms timer. It works perfectly for the first tick, but the second crashes; the lock is never released, all subsequent calls back up, and eventually the whole program crashes.
private void WavOutChartTimeInterrupt(object waveReader)
{
lock (AudioLock) //todo add skipto code, use audio lock to do it.
{
try
{
var curPos = _waveOut.GetPositionTimeSpan(); //get currentPos
if (curPos <= AudioCurrentPosition)
{
AudioCurrentPosition = curPos;
return;
}
var bufferLength = (curPos - AudioCurrentPosition);
var samplesSec = _waveOutSampleProvider.WaveFormat.SampleRate;
var channels = _waveOut.OutputWaveFormat.Channels;
var length = (int) (bufferLength.TotalSeconds * samplesSec * channels) % (samplesSec * channels);
var wavOutBuffer = new float[length];
_waveOutSampleProvider.Read(wavOutBuffer, 0, length);
AudioCurrentPosition = curPos; //update for vCNC with where we are
}
catch (Exception e)
{
string WTF = e.StackTrace;
throw new ArgumentException(@"Wave out buffer crashed: " + e.StackTrace);
}
}
}
Stack Trace added (hope I did it correctly)
at NAudio.Wave.WaveFileReader.Read(Byte[] array, Int32 offset, Int32 count)
at NAudio.Wave.SampleProviders.Pcm16BitToSampleProvider.Read(Single[] buffer, Int32 offset, Int32 count)
at NAudio.Wave.SampleProviders.MeteringSampleProvider.Read(Single[] buffer, Int32 offset, Int32 count)
at NAudio.Wave.SampleProviders.VolumeSampleProvider.Read(Single[] buffer, Int32 offset, Int32 sampleCount)
at RecordCNC.Form1.WavOutChartTimeInterrupt(Object waveReader) in C:\Cloud\ITRI\Visual Studio\RecordCNC\RecordCNC\Form1.cs:line 715
Haydan

The issue was that I wasn't correctly checking the length of the buffer I was requesting. Buffers always have to be a multiple of block align.
private void WavOutChartTimeInterrupt(object waveReader)
{
lock (AudioLock) //todo add skipto code, use audio lock to do it.
{
try
{
var curPos = _waveOut.GetPositionTimeSpan(); //get currentPos
if (curPos <= AudioCurrentPosition)
{
AudioCurrentPosition = curPos;
return;
}
var bufferLength = (curPos - AudioCurrentPosition);
var samplesSec = _waveOutSampleProvider.WaveFormat.SampleRate;
var channels = _waveOut.OutputWaveFormat.Channels;
var length = (int) (bufferLength.TotalSeconds * samplesSec * channels) % (samplesSec * channels);
length -= length% (blockAlign / channels); //<- THIS FIXED IT
var wavOutBuffer = new float[length];
_waveOutSampleProvider.Read(wavOutBuffer, 0, length);
AudioCurrentPosition = curPos; //update for vCNC with where we are
}
catch (Exception e)
{
string WTF = e.StackTrace;
throw new ArgumentException(@"Wave out buffer crashed: " + e.StackTrace);
}
}
}
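For reference, blockAlign isn't declared anywhere in the snippet above; it would normally come from the reader's format (e.g. _waveReader.WaveFormat.BlockAlign). As a minimal, hedged sketch of the same idea, a hypothetical helper could trim the requested float count to whole sample frames so the underlying reader never gets a read that isn't block aligned:
// Hypothetical helper (not from the original post): an ISampleProvider buffer
// holds one float per channel per sample frame, so trimming the count to a
// multiple of the channel count keeps the underlying byte reads block aligned.
private static int AlignToWholeFrames(int sampleCount, WaveFormat format)
{
    return sampleCount - (sampleCount % format.Channels);
}
// Usage (assumed): length = AlignToWholeFrames(length, _waveOutSampleProvider.WaveFormat);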

Related

Playback duration from MF SinkWriter mp4 file is half the expected time when adding audio samples, and the images play back twice as fast

I created a managed C++ library for my C# project to encode images and audio into an MP4 container, based on the MSDN SinkWriter tutorial. To test whether the result is OK, I created a method that provides 600 frames. These frames represent a 10-second video at 60 frames per second.
The images I provide change every second, and my audio file contains a voice that counts to 10.
The problem I am facing is that the output video is actually only 5 seconds long. The video's metadata says it is 10 seconds, but it isn't. Also, the voice barely counts up to 5.
If I only write the image samples, without the audio part, the duration of the video is the expected 10 seconds.
What am I missing here?
Here are some parts of my application.
This is the C# part I use to create the 600 frames; from here I call the PushFrame method, which is also on the C# side.
var videoFrameCount = 10 * FPS;
SetBinaryImage();
for (int i = 0; i <= videoFrameCount; i++)
{
// New picture every second
if (i > 0 && i % FPS == 0)
{
SetBinaryImage();
}
PushFrame();
}
The PushFrame method copies the image and audio data to the pointer provided by the SinkWriter. Then I call the PushFrame method of the SinkWriter.
private void PushFrame()
{
try
{
encodeStopwatch.Reset();
encodeStopwatch.Start();
// Video
var frameBufferHandler = GCHandle.Alloc(frameBuffer, GCHandleType.Pinned);
frameBufferPtr = frameBufferHandler.AddrOfPinnedObject();
CopyImageDataToPointer(BinaryImage, ScreenWidth, ScreenHeight, frameBufferPtr);
// Audio
var audioBufferHandler = GCHandle.Alloc(audioBuffer, GCHandleType.Pinned);
audioBufferPtr = audioBufferHandler.AddrOfPinnedObject();
var readLength = audioBuffer.Length;
if (BinaryAudio.Length - (audioOffset + audioBuffer.Length) < 0)
{
readLength = BinaryAudio.Length - audioOffset;
}
if (!EndOfFile)
{
Marshal.Copy(BinaryAudio, audioOffset, (IntPtr)audioBufferPtr, readLength);
audioOffset += audioBuffer.Length;
}
if (readLength < audioBuffer.Length && !EndOfFile)
{
EndOfFile = true;
}
unsafe
{
// Copy video data
var yuv = SinkWriter.VideoCapturerBuffer();
SinkWriter.Encode((byte*)frameBufferPtr, ScreenWidth, ScreenHeight, (int)SWPF.SWPF_RGB, yuv);
// Copy audio data
var audioDestPtr = SinkWriter.AudioCapturerBuffer();
SinkWriter.EncodeAudio((byte*)audioBufferPtr, audioDestPtr);
SinkWriter.PushFrame();
}
encodeStopwatch.Stop();
Console.WriteLine($"YUV frame generated in: {encodeStopwatch.TakeTotalMilliseconds()} ms");
}
catch (Exception ex)
{
}
}
Here are some parts I added to the SinkWriter in C++. The MediaTypes for the audio part are OK, I guess, because the audio playback works.
The rtStart and rtDuration are defined like this:
LONGLONG rtStart = 0;
UINT64 rtDuration;
MFFrameRateToAverageTimePerFrame(fps, 1, &rtDuration);
The two buffers from the encoders are used like this:
int SinkWriter::Encode(Byte * rgbBuf, int w, int h, int pxFormat, Byte * yufBuf)
{
const LONG cbWidth = 4 * VIDEO_WIDTH;
const DWORD cbBuffer = cbWidth * VIDEO_HEIGHT;
// Create a new memory buffer.
HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pFrameBuffer);
// Lock the buffer and copy the video frame to the buffer.
if (SUCCEEDED(hr))
{
hr = pFrameBuffer->Lock(&yufBuf, NULL, NULL);
}
if (SUCCEEDED(hr))
{
// Calculate the stride
DWORD bitsPerPixel = GetBitsPerPixel(pxFormat);
DWORD bytesPerPixel = bitsPerPixel / 8;
DWORD stride = w * bytesPerPixel;
// Copy image in yuv pointer
hr = MFCopyImage(
yufBuf, // Destination buffer.
stride, // Destination stride.
rgbBuf, // First row in source image.
stride, // Source stride.
stride, // Image width in bytes.
h // Image height in pixels.
);
}
if (pFrameBuffer)
{
pFrameBuffer->Unlock();
}
// Set the data length of the buffer.
if (SUCCEEDED(hr))
{
hr = pFrameBuffer->SetCurrentLength(cbBuffer);
}
if (SUCCEEDED(hr))
{
return 0;
}
else
{
return -1;
}
return 0;
}
int SinkWriter::EncodeAudio(Byte * src, Byte * dest)
{
DWORD samplePerSecond = AUDIO_SAMPLES_PER_SECOND * AUDIO_BITS_PER_SAMPLE * AUDIO_NUM_CHANNELS;
DWORD cbBuffer = samplePerSecond / 1000;
// Create a new memory buffer.
HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pAudioBuffer);
// Lock the buffer and copy the video frame to the buffer.
if (SUCCEEDED(hr))
{
hr = pAudioBuffer->Lock(&dest, NULL, NULL);
}
CopyMemory(dest, src, cbBuffer);
if (pAudioBuffer)
{
pAudioBuffer->Unlock();
}
// Set the data length of the buffer.
if (SUCCEEDED(hr))
{
hr = pAudioBuffer->SetCurrentLength(cbBuffer);
}
if (SUCCEEDED(hr))
{
return 0;
}
else
{
return -1;
}
return 0;
}
This is the SinkWriter's PushFrame method, which passes the sink writer, the stream index, the audio stream index, rtStart and rtDuration to the WriteFrame method.
int SinkWriter::PushFrame()
{
if (initialized)
{
HRESULT hr = WriteFrame(ptrSinkWriter, stream, audio, rtStart, rtDuration);
if (FAILED(hr))
{
return -1;
}
rtStart += rtDuration;
return 0;
}
return -1;
}
And here's the WriteFrame method that combines the video and audio sample.
HRESULT SinkWriter::WriteFrame(IMFSinkWriter *pWriter, DWORD streamIndex, DWORD audioStreamIndex, const LONGLONG& rtStart, const LONGLONG& rtDuration)
{
IMFSample *pVideoSample = NULL;
// Create a media sample and add the buffer to the sample.
HRESULT hr = MFCreateSample(&pVideoSample);
if (SUCCEEDED(hr))
{
hr = pVideoSample->AddBuffer(pFrameBuffer);
}
if (SUCCEEDED(hr))
{
pVideoSample->SetUINT32(MFSampleExtension_Discontinuity, FALSE);
}
// Set the time stamp and the duration.
if (SUCCEEDED(hr))
{
hr = pVideoSample->SetSampleTime(rtStart);
}
if (SUCCEEDED(hr))
{
hr = pVideoSample->SetSampleDuration(rtDuration);
}
// Send the sample to the Sink Writer.
if (SUCCEEDED(hr))
{
hr = pWriter->WriteSample(streamIndex, pVideoSample);
}
// Audio
IMFSample *pAudioSample = NULL;
if (SUCCEEDED(hr))
{
hr = MFCreateSample(&pAudioSample);
}
if (SUCCEEDED(hr))
{
hr = pAudioSample->AddBuffer(pAudioBuffer);
}
// Set the time stamp and the duration.
if (SUCCEEDED(hr))
{
hr = pAudioSample->SetSampleTime(rtStart);
}
if (SUCCEEDED(hr))
{
hr = pAudioSample->SetSampleDuration(rtDuration);
}
// Send the sample to the Sink Writer.
if (SUCCEEDED(hr))
{
hr = pWriter->WriteSample(audioStreamIndex, pAudioSample);
}
SafeRelease(&pVideoSample);
SafeRelease(&pFrameBuffer);
SafeRelease(&pAudioSample);
SafeRelease(&pAudioBuffer);
return hr;
}
The problem was that the calculation of the buffer size for the audio was wrong.
This is the right calculation:
var avgBytesPerSecond = sampleRate * 2 * channels;
var avgBytesPerMillisecond = avgBytesPerSecond / 1000;
var bufferSize = avgBytesPerMillisecond * (1000 / 60);
audioBuffer = new byte[bufferSize];
In my question I had the buffer sized for only one millisecond of audio, so it seems the MF framework sped up the images so that the audio sounded fine. After I fixed the buffer size, the video has exactly the duration I expected and the sound has no errors either.
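For reference, here is a hedged worked example of the same calculation with assumed format values (44.1 kHz, 16-bit, stereo audio and 60 fps video); dividing avgBytesPerSecond directly by the frame rate gives the per-frame byte count without the intermediate per-millisecond step:
// Assumed format: 44.1 kHz, 16-bit (2 bytes per sample), 2 channels; video at 60 fps.
int sampleRate = 44100, channels = 2, bytesPerSample = 2, fps = 60;
int avgBytesPerSecond = sampleRate * bytesPerSample * channels; // 176,400 bytes of PCM per second
int bytesPerVideoFrame = avgBytesPerSecond / fps;               // 2,940 bytes per 1/60 s video frame
var audioBuffer = new byte[bytesPerVideoFrame];                 // one video frame's worth of audio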

AVT Firewrap.net camera zooms in even though it wasn't coded to

For my internship I have to capture photos with a Prosilica camera, and I have been looking at different APIs to work with it. Now that I've found one that works, I'm using code that a previous intern wrote ("guessing", he says). I get images, but they are all really zoomed in, while in the official Firegrab program the pictures look fine and aren't zoomed in at all. You can look at the images here. The code I wrote to connect to the camera is as follows:
Ctrl = FireWrap_CtrlCenter.GetInstance();
Ctrl.OnFrameReady += OnFrameReady;
Result = Ctrl.FGInitModule();
if (Result == enFireWrapResult.E_NOERROR)
{
Result = InfoContainer.FGGetNodeList();
var NodeCnt = InfoContainer.Size();
InfoContainer.GetAt(NodeInfo, 0);
Result = Cam.Connect(NodeInfo.Guid);
cCamera.Items.Add(Cam.DeviceAll);
if (Result == enFireWrapResult.E_NOERROR)
{
Cam.m_Guid = NodeInfo.Guid;
}
if (Result == enFireWrapResult.E_NOERROR)
{
Result = Cam.SetParameter(enFGParameter.E_IMAGEFORMAT,
(((uint)enFGResolution.E_RES_SCALABLE << 16) |
((uint)enColorMode.E_CCOLORMODE_Y8 << 8) |
0));
}
if (Result == enFireWrapResult.E_NOERROR)
Result = Cam.OpenCapture();
// Print device settings
Result = Cam.GetParameter(enFGParameter.E_XSIZE, ref XSize);
Result = Cam.GetParameter(enFGParameter.E_YSIZE, ref YSize);
width = XSize;
height = YSize;
// Start camera
if (Result == enFireWrapResult.E_NOERROR)
{
Result = Cam.StartDevice();
}
}
When I connect to the camera, I also tell it to start recording instantly. The frames I get once the camera turns on are processed in OnFrameReady, for which I used the following code:
Debug.WriteLine("OnFrameReady is called");
FGEventArgs args = (FGEventArgs)__p2;
FGFrame Frame;
Guid.High = args.High;
Guid.Low = args.Low;
if (Guid.Low == Cam.m_Guid.Low)
{
Result = Cam.GetFrame(Frame, 0);
// Process frame, skip FrameStart notification
if (Result == enFireWrapResult.E_NOERROR & Frame.Length > 0)
{
byte[] data = new byte[Frame.Length];
// Access to frame data
if (Frame.CloneData(data))
{
//DisplayImage(data.Clone());
SaveImageFromByteArray(data);
// Here you can start your image processing logic on data
string debug = String.Format("[{6}] Frame #{0} length:{1}byte [ {2} {3} {4} {5} ... ]",
Frame.Id, Frame.Length, data[0], data[1], data[2], data[3], Cam.m_Guid.Low);
Debug.WriteLine(debug);
}
// Return frame to module as fast as possible; after this the Frame is not valid
Result = Cam.PutFrame(Frame);
}
}
So in this function I get the frame and put it in a byte[], then I call SaveImageFromByteArray(), where I put the byte[] into a list so I can access all my pictures later to save them. The code for SaveImageFromByteArray is as follows:
public void SaveImageFromByteArray(byte[] byteArray)
{
try
{
//bytearray size determined
byte[] data = new byte[width * height * 4];
int o = 0;
//bytearray size filled
for (int io = 0; io < width * height; io++)
{
byte value = byteArray[io];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
bytearrayList.Add(data);
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
}
}
After I'm done recording all of my frames, I click save, which stops the camera, and then I call the following functions to save them to bitmap files:
public void SaveData()
{
try
{
foreach (byte[] data1 in bytearrayList)
{
byte[] data = Save(data1);
lock (this)
{
unsafe
{
fixed (byte* ptr = data)
{
try
{
using (image = new Bitmap((int) width, (int) height, (int) width * 4,
System.Drawing.Imaging.PixelFormat.Format32bppPArgb, new IntPtr(ptr)))
{
image.Save(path + nextpicture + ".bmp", ImageFormat.Bmp);
Debug.WriteLine("Image saved at " + path + nextpicture + ".bmp");
nextpicture++;
}
}
catch (Exception ex)
{
Debug.Write(ex.ToString());
}
}
}
}
}
}
catch (Exception ex)
{
Debug.Write(ex.ToString());
}
}
The Save function called in the function above is written as follows:
private byte[] Save(byte[] data1)
{
//bytearray size determined
byte[] data = new byte[width * height * 4];
int o = 0;
//bytearray size filled
for (int io = 0; io < width * height; io++)
{
byte value = data1[io];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
return data;
}
I think the zooming problem happens when we connect to the camera and execute this code:
if (Result == enFireWrapResult.E_NOERROR)
{
Result = Cam.SetParameter(enFGParameter.E_IMAGEFORMAT,
(((uint)enFGResolution.E_RES_SCALABLE << 16) |
((uint)enColorMode.E_CCOLORMODE_Y8 << 8)|
0));
}
But the problem is that there is no documentation to be found for Firewrap.net or its API. Even when we try to change the 16 to, say, 15, the camera won't even start up.
Found the problem! The pixels were stretched out into 4 pixels horizontally because we did this twice:
byte[] data = new byte[width * height * 4];
int o = 0;
//bytearray size filled
for (int io = 0; io < width * height; io++)
{
byte value = byteArray[io];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
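In other words, the expansion from one grey byte to four BGRA bytes only needs to happen once. A hedged sketch of one possible fix (the field and method names come from the question; the change itself is an assumption): store the raw 8-bit frame in the list and let Save() inside SaveData() do the only expansion.
public void SaveImageFromByteArray(byte[] byteArray)
{
    // Keep the raw Y8 frame as-is; Save() will expand it to 4 bytes per pixel
    // exactly once when the bitmaps are written out.
    bytearrayList.Add((byte[])byteArray.Clone());
}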

NAudio WaveOut not playing all data provided by ISampleProvider

I am working on a click generator using NAudio. As a test, I created an ISampleProvider class to read/play from an audio file. The ISampleProvider reads in a PCM (32-bit IEEE float) wav file and then plays it back using the WaveOut player. WaveOut plays only about 1/4 of the audio passed via the ISampleProvider Read() method, which results in choppy playback. The Read() method requests the correct amount of data at the correct time intervals, but WaveOut only plays the first 25% of the samples provided back through the interface. Any idea how to address this, or am I using the wrong classes to build a click track (BufferedWaveProvider might also work, but it only buffers 5 seconds of audio)?
public void TestSampleProvider()
{
ISampleProvider mySamples = new MySamples();
var _waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()) {DeviceNumber = 0};
_waveOut.Init(mySamples);
_waveOut.Play();
Console.ReadLine();
}
public class MySamples : ISampleProvider
{
private float[] samplesFloats;
private int position;
private WaveFileReader clicksound;
public int Read(float[] buffer, int offset, int count)
{
var copyCount = count;
if (position + copyCount > samplesFloats.Count())
{
copyCount = samplesFloats.Count() - position;
}
Console.WriteLine("samplesFloats {0} position {1} copyCount {2} offset {3} time {4}", samplesFloats.Count(), position, copyCount, offset, DateTime.Now.Millisecond);
Buffer.BlockCopy(samplesFloats, position, buffer, offset, copyCount);
position += copyCount;
return copyCount;
}
public MySamples()
{
clicksound = new WaveFileReader(@"C:\temp\sample.wav");
WaveFormat = clicksound.WaveFormat;
samplesFloats = new float[clicksound.SampleCount];
for (int i = 0; i < clicksound.SampleCount; i++)
{
samplesFloats[i] = clicksound.ReadNextSampleFrame()[0]; // it's a mono file
}
}
public WaveFormat WaveFormat { get; private set; }
}
I think there may be an issue with WaveOut using the ISampleProvider, so I used the IWaveProvider interface to do the same thing. In fact, here's a bare-bones class for sending a never-ending click to the WaveOut. This might run into memory issues if you let it run a long time, but for pop songs it should be fine. Also, this will only work for 32-bit files (note the *4 on the byte buffer).
public class MyClick : IWaveProvider
{
private int position;
private WaveFileReader clicksound;
private byte[] samplebuff;
MemoryStream _byteStream = new System.IO.MemoryStream();
public MyClick(float bpm=120)
{
clicksound = new WaveFileReader(@"click_sample.wav");
var bpmsampleslen = (60 / bpm) * clicksound.WaveFormat.SampleRate;
samplebuff = new byte[(int) bpmsampleslen*4];
clicksound.Read(samplebuff, 0,(int) clicksound.Length);
_byteStream.Write(samplebuff, 0, samplebuff.Length);
_byteStream.Position = 0;
WaveFormat = clicksound.WaveFormat;
}
public int Read(byte[] buffer, int offset, int count)
{
//we reached the end of the stream add another one to the end and keep playing
if (count + _byteStream.Position > _byteStream.Length)
{
var holdpos = _byteStream.Position;
_byteStream.Write(samplebuff, 0, samplebuff.Length);
_byteStream.Position = holdpos;
}
return _byteStream.Read(buffer, offset, count);
}
public WaveFormat WaveFormat { get; private set; }
}
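For completeness, a minimal usage sketch of the class above, assuming the same WaveOut setup used earlier in the question:
// Hedged usage sketch for the MyClick provider above.
var click = new MyClick(bpm: 120);
var waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()) { DeviceNumber = 0 };
waveOut.Init(click);
waveOut.Play();
Console.ReadLine(); // clicks keep playing until Enter is pressed
waveOut.Dispose();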

How to play an exact part of MP3 file between given positions in ms?

(Newbie question)
NAudio allows you to start playing an MP3 file from a given position (by converting it from ms into bytes using WaveFormat.AverageBytesPerSecond), but is it possible to make it stop playing exactly at another given position (in ms)? Do I have to manipulate the wave stream somehow, or are there easier ways?
There is a solution using a Timer started together with playback and stopping playback when the timer ticks, but it doesn't produce reliable results at all.
I'd create a custom IWaveProvider that only returns a maximum of a specified number of bytes from Read. Then reposition your Mp3FileReader to the start position and pass it into the custom trimming wave provider.
Here's some completely untested example code to give you an idea.
class TrimWaveProvider : IWaveProvider
{
private readonly IWaveProvider source;
private int bytesRead;
private readonly int maxBytesToRead;
public TrimWaveProvider(IWaveProvider source, int maxBytesToRead)
{
this.source = source;
this.maxBytesToRead = maxBytesToRead;
}
public WaveFormat WaveFormat { get { return source.WaveFormat; } }
public int Read(byte[] buffer, int offset, int bytesToRead)
{
int bytesToReadThisTime = Math.Min(bytesToRead, maxBytesToRead - bytesRead);
int bytesReadThisTime = source.Read(buffer, offset, bytesToReadThisTime);
bytesRead += bytesReadThisTime;
return bytesReadThisTime;
}
}
// and call it like this...
var reader = new Mp3FileReader("myfile.mp3");
reader.Position = reader.WaveFormat.AverageBytesPerSecond * 3; // start 3 seconds in
// read 5 seconds
var trimmer = new TrimWaveProvider(reader, reader.WaveFormat.AverageBytesPerSecond * 5);
WaveOut waveOut = new WaveOut();
waveOut.Init(trimmer);
waveOut.Play();

RFB/VNC Receiving FrameBuffer and Byte Data issue C#

I'm hoping someone that's had experience with the RFB protocol will give me an answer.
Following the RFB protocol, I've implemented a 3.3 client; the handshake and so on is fine. What I don't understand, and am having issues with, is the FramebufferUpdateRequest and FramebufferUpdate using Raw encoding.
I've read and implemented the documentation verbatim (sections 6.4.3 and 6.5.1 of http://www.realvnc.com/docs/rfbproto.pdf).
It's a bit messy, as I've been playing with it left, right and centre, but here's what I'm doing:
public int beginNormalOperation()
{
byte[] fbUp = new byte[10];
fbUp[0] = 0003; //Message type, 3 = FramebufferUpdateRequest
fbUp[1] = 0001; //Incremental 0=true, 1=false
fbUp[2] = 0000; //X Position High Byte
fbUp[3] = 0000; //X Position Low Byte
fbUp[4] = 0000; //Y Position High Byte
fbUp[5] = 0000; //Y Position Low Byte
fbUp[6] = INTtoU16(serverSetWidth)[0];
fbUp[7] = INTtoU16(serverSetWidth)[1];
fbUp[8] = INTtoU16(serverSetHeight)[0];
fbUp[9] = INTtoU16(serverSetHeight)[1];
//fbUp[6] = 0000;
//fbUp[7] = 100;
//fbUp[8] = 0000;
//fbUp[9] = 100;
sock.Send(fbUp);
System.Drawing.Image img;
byte[] bufferInfo = new byte[4];
try
{
sock.Receive(bufferInfo);
textBox4.AppendText("\r\n" + Encoding.Default.GetString(bufferInfo));
}
catch (SocketException ex) { MessageBox.Show("" + ex); }
return U16toINT(bufferInfo[2], bufferInfo[3]);
}
The return value is the number of rectangles. I'm calling this method from a button click and then passing the result to:
public void drawImage(int numRectangles)
{
//Now within the class
//int xPos = 0;
//int yPos = 0;
//int fWidth = 0;
//int fHeight = 0;
if (myBmp == null)
{
myBmp = new Bitmap(serverSetWidth, serverSetHeight); //I'm requesting full size, so using server bounds atm
}
for (int i = 0; i < numRectangles; i++)
{
byte[] bufferData = new byte[12];
int headerLen = 0;
if (!gotRectangleHeader)
{
try
{
sock.Receive(bufferData);
}
catch (SocketException ex) { MessageBox.Show("" + ex); }
xPos = U16toINT(bufferData[0], bufferData[1]);
yPos = U16toINT(bufferData[2], bufferData[3]);
fWidth = U16toINT(bufferData[4], bufferData[5]);
fHeight = U16toINT(bufferData[6], bufferData[7]);
//headerLen = 12; //I'm now reading the first 12 bytes first so no need for this
gotRectangleHeader = true;
}
bufferData = new byte[((fWidth * fHeight)*4)];
try
{
sock.Receive(bufferData);
}
catch (SocketException ex) { MessageBox.Show("" + ex); }
//Testing to see where the actual data is ending
//byte[] end = new byte[1000];
//Array.Copy(bufferData, 16125, end, 0, 1000);
//for(int f=0; f<bufferData.Length;f++)
//{
// if (Convert.ToInt32(bufferData[f].ToString()) == 0 &&
// Convert.ToInt32(bufferData[f + 1].ToString()) == 0 &&
// Convert.ToInt32(bufferData[f + 2].ToString()) == 0 &&
// Convert.ToInt32(bufferData[f + 3].ToString()) == 0)
// {
// Array.Copy(bufferData, f-30, end, 0, 500);
// int o = 1;
// }
//}
int curRow = 0;
int curCol = 0;
for (int curBit = 0; curBit < (bufferData.Length - headerLen) / 4; curBit++)
{
int caret = (curBit * 4) + headerLen;
if (curRow == 200)
{
int ss = 4;
}
Color pixCol = System.Drawing.Color.FromArgb(Convert.ToInt32(bufferData[caret+3].ToString()), Convert.ToInt32(bufferData[caret+2].ToString()), Convert.ToInt32(bufferData[caret+1].ToString()), Convert.ToInt32(bufferData[caret].ToString()));
myBmp.SetPixel(curCol, curRow, pixCol);
if (curCol == (fWidth - 1))
{
curRow++;
curCol = 0;
}
else
{
curCol++;
}
}
}
imgForm.Show();
imgForm.updateImg(myBmp);
}
I'm sorry for the code; I've gone through so many permutations messing about that it's become a mess.
This is what I'm trying to do, and the way I imagine it should work according to the protocol:
I send a FramebufferUpdateRequest with incremental set to false (1, according to the docs), X and Y position set to 0, and width & height (both U16) set to 1366 x 768 respectively.
I receive a FrameBufferUpdate with Number of Rectangles
I call drawImage passing Number of Rectangles in.
I assume from the docs that, for each rectangle, I then create a buffer sized to that rectangle's width and height, and set the pixels of a BMP within the rectangle's bounds.
The first rectangle always has a header, and within that header the requested width and height. The following rectangle doesn't have any header information, so I'm missing something here. I'm guessing I haven't received all of the first rectangle's data, even though I have set the socket's buffer size to width*height*bytes.
Sometimes I get, say, the top 200 pixels or so at full width, though a quarter of the right-hand side of the screen shows up on the left-hand side of my BMP. Sometimes I've had the full screen, which is what I want, but mostly I get a sliver, say 10 px, of the top of the screen and then nothing.
I'm doing something wrong, I know I am, but what? The documentation isn't great. If someone could hold my hand through FramebufferUpdateRequest -> FramebufferUpdate -> displaying raw pixel data, I'd be grateful!
Thanks for any input
craig
I suggest you refer to http://tigervnc.org/cgi-bin/rfbproto, which I've found to be a much better reference on the protocol; specifically the sections on FramebufferUpdateRequest and FramebufferUpdate.
Some other notes:
- You have your incremental values swapped: 1 is incremental and 0 is a full request. With incremental set to 1 you are only requesting rectangles that have changed since the last round.
- Don't assume that framebuffer updates come synchronously right after you send a FramebufferUpdateRequest. You really need to read the first byte to determine what sort of message the server has sent, then process the message accordingly.
- You should really create your bitmap based on the actual size of each rectangle (i.e. after you read the size of the rectangle you are processing).
- You need to take the encoding format into account (you're currently ignoring it). It determines how large the image data for the rectangle is.
Also, I'm not familiar with how C# Socket.Receive works, but unless it is guaranteed to block until the buffer is filled, you need to check how much data was actually read: servers don't always send the whole FramebufferUpdate message at once, and even if they do, the messages might get fragmented and not arrive in one piece.
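On that last point, Socket.Receive can indeed return fewer bytes than requested. A hedged sketch of a hypothetical helper that loops until exactly the requested number of bytes has arrived (or the connection closes):
// Hypothetical helper (not in the original post): read exactly 'count' bytes.
private static void ReceiveExact(Socket sock, byte[] buffer, int count)
{
    int total = 0;
    while (total < count)
    {
        int read = sock.Receive(buffer, total, count - total, SocketFlags.None);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        total += read;
    }
}
// Usage (assumed): ReceiveExact(sock, bufferData, bufferData.Length);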
