Live Video Streaming using Raspberry Pi and C#

I'm working on a university project to live-stream video from a webcam to a desktop PC using C# (UWP, Windows 10 IoT Core) on a Raspberry Pi. Even though I found some projects that implement the server side in Java (for the Raspberry Pi) with a UWP client, I couldn't find any projects covering server-side programming in C#.
Also, is it really possible to do this kind of server-side live-streaming programming in C# at all? This Microsoft link suggests it isn't.
View the Microsoft Link
Any help would be deeply appreciated.
Regards,
T.S.

There is another project that I have coded and tested successfully; you could use it as a reference if it helps.
In the MyVideoServer app, the important part is getting the camera ID and a preview frame of the video:
previewFrame = await MyMediaCapture.GetPreviewFrameAsync(videoFrame);
The video stream is then sent to the client through streamSocketClient:
await streamSocketClient.sendBuffer(buffer);
public MainPage()
{
    this.InitializeComponent();
    InitializeCameraAsync();
    InitSocket();
}

MediaCapture MyMediaCapture;
VideoFrame videoFrame;
VideoFrame previewFrame;
IBuffer buffer;
DispatcherTimer timer;
StreamSocketListenerServer streamSocketSrv;
StreamSocketClient streamSocketClient;

private async void InitializeCameraAsync()
{
    // Pick the first available camera on the device.
    var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
    DeviceInformation cameraDevice = allVideoDevices.FirstOrDefault();
    var mediaInitSettings = new MediaCaptureInitializationSettings { VideoDeviceId = cameraDevice.Id };

    MyMediaCapture = new MediaCapture();
    try
    {
        await MyMediaCapture.InitializeAsync(mediaInitSettings);
    }
    catch (UnauthorizedAccessException)
    {
        // The app was denied access to the camera (check the Webcam capability).
    }

    // Show the camera preview in the UI.
    PreviewControl.Height = 180;
    PreviewControl.Width = 240;
    PreviewControl.Source = MyMediaCapture;
    await MyMediaCapture.StartPreviewAsync();

    // Reusable frame and buffer for grabbing and sending 240x180 BGRA8 preview frames.
    videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, 240, 180, 0);
    buffer = new Windows.Storage.Streams.Buffer((uint)(240 * 180 * 8));
}
The key server code then creates the server and connects to the client via socket communication in the InitSocket function. A StreamSocketListenerServer object is created and started, which also sets up the server's listening port:
streamSocketSrv = new StreamSocketListenerServer();
await streamSocketSrv.start("22333");
Last but not least, Timer_Tick sends the video stream to the client every 100 ms (a sketch of a possible handler follows the InitSocket code below).
private async void InitSocket()
{
    // Start the listener on port 22333 and prepare the client sender.
    streamSocketSrv = new StreamSocketListenerServer();
    await streamSocketSrv.start("22333");
    streamSocketClient = new StreamSocketClient();

    // Push a frame to the client every 100 ms.
    timer = new DispatcherTimer();
    timer.Interval = TimeSpan.FromMilliseconds(100);
    timer.Tick += Timer_Tick;
    timer.Start();
}
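The Timer_Tick handler itself is not shown in the listing above. Below is a minimal sketch of what it presumably does, based on the calls quoted earlier (GetPreviewFrameAsync and sendBuffer); the use of SoftwareBitmap.CopyToBuffer to fill the preallocated buffer is an assumption, not part of the original sample.
private async void Timer_Tick(object sender, object e)
{
    // Grab the current preview image into the reusable 240x180 BGRA8 frame.
    previewFrame = await MyMediaCapture.GetPreviewFrameAsync(videoFrame);

    // Copy the raw pixel data into the preallocated buffer (assumed step).
    previewFrame.SoftwareBitmap.CopyToBuffer(buffer);

    // Push the frame to the connected client via the sample's helper class.
    await streamSocketClient.sendBuffer(buffer);
}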
You can then deploy the MyVideoServer app to the Raspberry Pi 3.
Next, deploy the MyVideoClient app to a PC, enter the Raspberry Pi 3's IP address, and click the Connect button. The video stream will be displayed in the app (a client-side sketch follows below).
This is the sample code; you can use it as a reference.
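To illustrate the client side, here is a minimal sketch of what a client like MyVideoClient presumably does, using the standard StreamSocket and DataReader APIs; the method name, the frame size (240 x 180 x 4 bytes for a BGRA8 frame), and the display step are assumptions for illustration only.
private async Task ConnectAndReceiveAsync(string piAddress)
{
    // Connect to the Raspberry Pi on the port the server listens on (22333 above).
    var socket = new StreamSocket();
    await socket.ConnectAsync(new HostName(piAddress), "22333");

    var reader = new DataReader(socket.InputStream);
    const uint frameSize = 240 * 180 * 4; // one 240x180 BGRA8 frame (assumed layout)

    while (true)
    {
        // LoadAsync waits until a full frame (or the end of the stream) is available.
        uint loaded = await reader.LoadAsync(frameSize);
        if (loaded < frameSize)
            break; // connection closed

        IBuffer frame = reader.ReadBuffer(frameSize);
        // Copy 'frame' into a WriteableBitmap/SoftwareBitmap and show it in the UI.
    }
}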

Related

Vidyo.IO sharing screen and monitors error

I use Vidyo.IO for communication using the following code:
ConnectorPKG.Initialize();
var _connector = new Connector(Handle, Connector.ConnectorViewStyle.ConnectorviewstyleDefault, 8, "all#VidyoClient", "VidyoClient.log", 0);
// This should be called on each window resizing.
_connector.ShowViewAt(Handle, 0, 0, Width, Height);
// Registering to events we want to handle.
_connector.RegisterLocalCameraEventListener(new LocalCameraListener(this));
_connector.RegisterLocalWindowShareEventListener(new LocalWindowShareListener(this));
_connector.RegisterLocalMicrophoneEventListener(new LocalMicropfoneListener(this));
_connector.RegisterLocalSpeakerEventListener(new LocalSpeakerListener(this));
_connector.RegisterParticipantEventListener(new ParticipantListener(this));
_connector.RegisterLocalMonitorEventListener(new LocalMonitorListener(this));
_connector.RegisterMessageEventListener(new ChatListener(this));
_connector.DisableDebug();
Then, after joining a room, I share a window using code like this:
var winToShare = LocalWindows.FirstOrDefault();
if (winToShare != null)
{
    winToShare.IsSelected = true;
    //SetSelectedLocalWindow(winToShare);
    SharingInProgress = _connector.SelectLocalWindowShare(winToShare.Object);
}
and the same for monitors. Now I always get this error:
can't share overconstrained frame interval
What platform are you developing for? Seems like you may be building a mobile app using Xamarin and if that's the case, you will not be able to do window/app share. That feature is available only on desktop and web clients.

"C#/Xaml Windows app store" Get a file.mbtiles on a localhost using nodejs

I want to use a tile map in my C# program via a Node.js server.
The server itself is working, because I can see the tiles in my browser.
The problem is that my program works correctly from Visual Studio, but when I try to use the packaged version on a different device, it doesn't recognize my server.
Here is my current code:
public MapPage()
{
    this.InitializeComponent();
    this.navigationHelper = new NavigationHelper(this);
    this.navigationHelper.LoadState += navigationHelper_LoadState;
    this.navigationHelper.SaveState += navigationHelper_SaveState;

    MapTileLayer layer2 = new MapTileLayer();
    layer2.Opacity = 1;
    string ip = this.getCurrentIPAddress().ToString();
    layer2.GetTileUri += (s, e) =>
    {
        e.Uri = new Uri(string.Format("http://localhost:3000/{0}/{1}/{2}.png", e.LevelOfDetail, e.X, e.Y));
    };
    MyMap.TileLayers.Add(layer2);

    shapeLayer = new MapLayer();
    MyMap.Children.Add(shapeLayer);
    poiLayer = new MapLayer();
    MyMap.Children.Add(poiLayer);

    this.all_btn.IsChecked = true;
}
Does anyone have an idea?
Thanks for reading =)
EDIT:
It seems to be caused by the absence of Visual Studio on the device.
EDIT2:
I tried the same process using the new VS2015 XAML MapControl and had the same issue.
EDIT3:
I went back to Windows 8.1, and it seems that for the app to work I have to install Visual Studio and download the Bing Maps SDK, so it may be due to this =/
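For illustration only, here is a minimal sketch of the same GetTileUri handler pointed at a host the packaged app can actually reach (for example the machine running the Node.js tile server) instead of localhost; the host value is an assumption, and note that the ip variable computed in the constructor above is otherwise unused.
// Sketch only: build the tile URI against a reachable host rather than localhost.
string tileHost = ip; // or a fixed LAN address, e.g. "192.168.1.10" (placeholder)
layer2.GetTileUri += (s, e) =>
{
    e.Uri = new Uri(string.Format("http://{0}:3000/{1}/{2}/{3}.png",
        tileHost, e.LevelOfDetail, e.X, e.Y));
};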

Loop TextToSpeech message in Ozeki VOIP

I'm using the Microsoft Speech Platform with the Ozeki VOIP SIP client to play TextToSpeech messages during SIP calls. How can I set the TTS to loop the message forever in Ozeki?
I'm using this NuGet package for Ozeki: http://www.nuget.org/packages/ozeki.voip.sip.client/
Here is my code:
var textToSpeech = new TextToSpeech();
var msp = new MSSpeechPlatformTTS();
textToSpeech.AddTTSEngine(msp);

var clientLanguage = ConfigurationManager.AppSettings["TextSpeechLanguage"];
var voices = textToSpeech.GetAvailableVoices();
foreach (var voice in voices)
{
    if (voice.Language == clientLanguage)
        textToSpeech.ChangeLanguage(voice.Language, voice.Name);
}

if (string.IsNullOrEmpty(speechString))
{
    textToSpeech.ChangeLanguage("en-GB");
    speechString = "You have a visitor. Press 1 to accept the visit. Press 2 to talk to the visitor.";
}

mediaSender.AttachToCall(call);
connector.Connect(textToSpeech, mediaSender);
textToSpeech.AddAndStartText(speechString);
I think this can help you. Try changing the last line of your code as follows:
while (true)
{
    textToSpeech.AddAndStartText(speechString);
}
You can learn more about using the MS Speech Platform 11 in C# here.
The answer was to attach a handler to the "stopped" event and play the text again, creating a loop of the message.
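Based on that description, a minimal sketch of the looping approach might look like the following; the exact event name and handler signature on Ozeki's TextToSpeech object are assumptions taken from the wording above ("stopped" event) and may differ in the actual SDK.
// Illustrative sketch only: replay the message whenever playback stops.
textToSpeech.Stopped += (sender, e) =>
{
    // Re-queue the same text as soon as the previous playback finishes.
    textToSpeech.AddAndStartText(speechString);
};
textToSpeech.AddAndStartText(speechString);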

How can I use google text to speech api in windows form?

I want to use Google text-to-speech in my Windows Forms application; it should read a label. I added the System.Speech reference. How can it read a label on a button click event?
http://translate.google.com/translate_tts?q=testing+google+speech This is the Google text-to-speech API. Alternatively, how can I use Microsoft's native text-to-speech?
UPDATE: Google's TTS API is no longer publicly available. The notes at the bottom about Microsoft's TTS are still relevant and provide equivalent functionality.
You can use Google's TTS API from your WinForms application by playing the response, using a variation of this question's answer (it took me a while, but I have a real solution):
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        this.FormClosing += (sender, e) =>
        {
            // Unblock the playback thread if the form is closed mid-playback.
            if (waiting)
                stop.Set();
        };
    }

    private void ButtonClick(object sender, EventArgs e)
    {
        // The button's Tag holds the name of the label it should read aloud.
        var clicked = sender as Button;
        var relatedLabel = this.Controls.Find(clicked.Tag.ToString(), true).FirstOrDefault() as Label;
        if (relatedLabel == null)
            return;

        var playThread = new Thread(() => PlayMp3FromUrl("http://translate.google.com/translate_tts?q=" + HttpUtility.UrlEncode(relatedLabel.Text)));
        playThread.IsBackground = true;
        playThread.Start();
    }

    bool waiting = false;
    AutoResetEvent stop = new AutoResetEvent(false);

    public void PlayMp3FromUrl(string url)
    {
        // Download the MP3 response into memory first.
        using (Stream ms = new MemoryStream())
        {
            using (Stream stream = WebRequest.Create(url)
                .GetResponse().GetResponseStream())
            {
                byte[] buffer = new byte[32768];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
            }
            ms.Position = 0;

            // Decode and play the MP3 with NAudio.
            using (WaveStream blockAlignedStream =
                new BlockAlignReductionStream(
                    WaveFormatConversionStream.CreatePcmStream(
                        new Mp3FileReader(ms))))
            {
                using (WaveOut waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()))
                {
                    waveOut.Init(blockAlignedStream);
                    waveOut.PlaybackStopped += (sender, e) =>
                    {
                        waveOut.Stop();
                    };
                    waveOut.Play();

                    // Wait up to 10 seconds for playback to finish (see the notes below).
                    waiting = true;
                    stop.WaitOne(10000);
                    waiting = false;
                }
            }
        }
    }
}
NOTE: The above code requires NAudio to work (free/open source) and using statements for System.Web, System.Threading, and NAudio.Wave.
My Form1 has 2 controls on it:
A Label named label1
A Button named button1 with a Tag of label1 (used to bind the button to its label)
The above code can be simplified slightly if you have a separate event handler for each button/label combination, using something like this (untested):
private void ButtonClick(object sender, EventArgs e)
{
    var clicked = sender as Button;
    var playThread = new Thread(() => PlayMp3FromUrl("http://translate.google.com/translate_tts?q=" + HttpUtility.UrlEncode(label1.Text)));
    playThread.IsBackground = true;
    playThread.Start();
}
There are problems with this solution though (this list is probably not complete; I'm sure comments and real world usage will find others):
Notice the stop.WaitOne(10000); in the first code snippet. The 10000 represents a maximum of 10 seconds of audio to be played, so it will need to be tweaked if your label takes longer than that to read. This is necessary because the current version of NAudio (v1.5.4.0) seems to have a problem determining when the stream is done playing. It may be fixed in a later version, or perhaps there is a workaround that I didn't take the time to find. One temporary workaround is to use a ParameterizedThreadStart that takes the timeout as a parameter to the thread (see the sketch after this list). This would allow variable timeouts but would not technically fix the problem.
More importantly, the Google TTS API is unofficial (meaning it is not meant to be consumed by non-Google applications), so it is subject to change without notice at any time. If you need something that will work in a commercial environment, I'd suggest either the MS TTS solution (as your question suggests) or one of the many commercial alternatives. None of them tend to be even this simple, though.
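As mentioned in the first point, a minimal sketch of that ParameterizedThreadStart variation is shown below; it assumes PlayMp3FromUrl is changed to accept the timeout and to call stop.WaitOne(timeoutMs) instead of the hard-coded stop.WaitOne(10000), which is not part of the code above.
// Hypothetical variation: pass the playback timeout into the thread as a parameter.
var playThread = new Thread(state =>
{
    int timeoutMs = (int)state;
    PlayMp3FromUrl("http://translate.google.com/translate_tts?q=" + HttpUtility.UrlEncode(label1.Text), timeoutMs);
});
playThread.IsBackground = true;
playThread.Start(30000); // allow up to 30 seconds for a long label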
To answer the other side of your question:
The System.Speech.Synthesis.SpeechSynthesizer class is much easier to use, and you can count on it being reliably available (whereas the Google API could be gone tomorrow).
It is really as easy as adding a reference to the System.Speech assembly and:
public void SaySomething(string somethingToSay)
{
    var synth = new System.Speech.Synthesis.SpeechSynthesizer();
    synth.SpeakAsync(somethingToSay);
}
This just works.
Trying to use the Google TTS API was a fun experiment but I'd be hard pressed to suggest it for production use, and if you don't want to pay for a commercial alternative, Microsoft's solution is about as good as it gets.
I know this question is a bit out of date, but Google recently published the Google Cloud Text-to-Speech API.
The .NET client version, Google.Cloud.TextToSpeech, can be found here:
https://github.com/jhabjan/Google.Cloud.TextToSpeech.V1
Here is a short example of how to use the client:
GoogleCredential credentials =
    GoogleCredential.FromFile(Path.Combine(Program.AppPath, "jhabjan-test-47a56894d458.json"));
TextToSpeechClient client = TextToSpeechClient.Create(credentials);

SynthesizeSpeechResponse response = client.SynthesizeSpeech(
    new SynthesisInput()
    {
        Text = "Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 32 voices"
    },
    new VoiceSelectionParams()
    {
        LanguageCode = "en-US",
        Name = "en-US-Wavenet-C"
    },
    new AudioConfig()
    {
        AudioEncoding = AudioEncoding.Mp3
    }
);

string speechFile = Path.Combine(Directory.GetCurrentDirectory(), "sample.mp3");
File.WriteAllBytes(speechFile, response.AudioContent);

.Net Application to capture image from pda camera

I need a .NET application that interacts with a PDA's camera so that with a Save button (in my application) I can save the image to SQL Server, and with a Zoom button (in my application) I can zoom the image.
Mmm, at first glance your question reads like you are after an app to do this (which is why I voted to close the question as not programming related),
but... if you are in fact after the Compact Framework code for this, then this may help (and I'll try to reverse my vote...):
CameraCaptureDialog cameraCapture = new CameraCaptureDialog();
cameraCapture.Owner = null;
cameraCapture.InitialDirectory = @"\My Documents";
cameraCapture.DefaultFileName = @"test.3gp";
cameraCapture.Title = "Camera Demo";
cameraCapture.VideoTypes = CameraCaptureVideoTypes.Messaging;
cameraCapture.Resolution = new Size(176, 144);
cameraCapture.VideoTimeLimit = new TimeSpan(0, 0, 15); // Limited to 15 seconds of video.
cameraCapture.Mode = CameraCaptureMode.VideoWithAudio;

if (DialogResult.OK == cameraCapture.ShowDialog())
{
    Console.WriteLine("The picture or video has been successfully captured to:\n{0}", cameraCapture.FileName);
}
This code is snipped from the MSDN article on the CameraCaptureDialog.
