FileHelpers delete list/reset - c#

I am developing a test application using C# and .NET. I am an (embedded) programmer, but I am not very familiar with this environment.
I have made an application that collects sensor data from an Arduino and uses the FileHelpers API to populate a list of data for export to CSV.
// Initializing Log generation
FileHelperEngine<logItems> engine = new FileHelperEngine<logItems>();
List<logItems> logItems = new List<logItems>();
And the log save event
private void stopButton_Click(object sender, EventArgs e)
{
    tmrGUI.Enabled = false; // Stop sampling
    double maxValue = pressureTable.Max();
    PeakValueIndicator.Text = maxValue.ToString("0.00");
    engine.HeaderText = "Sample,Pressure,Time";
    engine.WriteFile(string.Concat(DateTime.Now.ToString("yyyyMMdd_HHmmss"), "_", TestNameControl.Text, ".csv"), logItems);
    logGenerator();
    TestNameControl.Text = string.Empty;
    startButton.Enabled = false;
    stopButton.Enabled = false;
}
The application works; however, if I run the data collection more than once without closing the program between runs, it appends the new sensor data to the end of the previous data list.
Is there a way to reset the list, either through the FileHelpers API or by resetting it in plain C#?
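FileHelpers itself does not keep any state here; the engine just serializes whatever list you hand it, so resetting the list with plain C# after each export is enough. A minimal sketch, assuming `logItems` is the list filled during sampling:

```csharp
// After the export in stopButton_Click, reset the list so the
// next run starts empty instead of appending to the old data.
engine.WriteFile(fileName, logItems);

logItems.Clear();                 // in-place reset of the existing list
// or, equivalently, drop the old list and allocate a fresh one:
// logItems = new List<logItems>();
```

Either form works; `Clear()` is the simpler choice when other code (e.g. a binding source) already holds a reference to the same list instance.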

How to use Google Cloud Speech (V1 API) for speech to text - need to be able to process over 3 hours audio files properly and efficiently

I have been looking for documentation but could not find a solution yet. I installed the NuGet package and generated an API key, but I can't find proper documentation on how to use the key. Moreover, I want to be able to upload very long audio files. So what would be the proper way to upload audio files of up to 3 hours and get their results? I have a $300 budget, which should be enough.
Here is my code so far. It currently fails because I have not set the credentials correctly, which I don't know how to do. I also have a service account file ready to use.
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        var speech = SpeechClient.Create();
        var config = new RecognitionConfig
        {
            Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
            SampleRateHertz = 48000,
            LanguageCode = LanguageCodes.English.UnitedStates
        };
        var audio = RecognitionAudio.FromStorageUri("1m.flac");
        var response = speech.Recognize(config, audio);
        foreach (var result in response.Results)
        {
            foreach (var alternative in result.Alternatives)
            {
                Debug.WriteLine(alternative.Transcript);
            }
        }
    }
}
I don't want to set environment variable. I have both API key and Service Account json file. How can I manually set?
You need to use the SpeechClientBuilder to create a SpeechClient with custom credentials, if you don't want to use the environment variable. Assuming you've got a service account file somewhere, change this:
var speech = SpeechClient.Create();
to this:
var speech = new SpeechClientBuilder
{
    CredentialsPath = "/path/to/your/file"
}.Build();
Note that to perform a long-running recognition operation, you should also use the LongRunningRecognize method. I strongly suspect your current RPC will fail, either explicitly because it's trying to run on a file that's too large, or it will simply time out.
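A rough sketch of the long-running variant, assuming the audio has been uploaded to a Cloud Storage bucket (the `gs://` URI below is a placeholder; long audio must be referenced from Cloud Storage rather than sent inline):

```csharp
// Start an asynchronous recognition job for long audio and wait for it.
var audio = RecognitionAudio.FromStorageUri("gs://your-bucket/your-audio.flac");
var operation = speech.LongRunningRecognize(config, audio);

// PollUntilCompleted blocks until the server-side job finishes;
// for a 3-hour file this can take a while, so in a UI app you would
// use LongRunningRecognizeAsync and await PollUntilCompletedAsync instead.
var completed = operation.PollUntilCompleted();
foreach (var result in completed.Result.Results)
{
    foreach (var alternative in result.Alternatives)
    {
        Debug.WriteLine(alternative.Transcript);
    }
}
```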
You need to set the environment variable before creating the instance of the speech client:
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", "text-tospeech.json");
where the second parameter (text-tospeech.json) is your credentials file generated by the Google API.

Speech recognition only listens for Commands in my dictionary?

Okay, so I am working on my program. The problem is that whenever spymode = true, it doesn't write whatever I am saying to the text file; it only registers the set commands in the "CommandsList" dictionary. So whenever I say "twitch", which is a command in that dictionary, it writes that to the spymodeLog.txt file. But whenever I say something that is not a command, for example "hello my name is Robin", that is not written to the .txt file. It only outputs commands from my dictionary to the file when I say them. Why is this, and how can I fix it? Really odd.
static Dictionary<string, string> CommandsList = new Dictionary<string, string>();

internal static void recEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    if (keyHold == true)
    {
        string command = "";
        if (CommandsList.TryGetValue(e.Result.Text, out command))
        {
            System.Diagnostics.Process.Start(command);
        }
    }
    if (spymode == true)
    {
        DateTime now = DateTime.Now;
        string path = Application.StartupPath + "spymodeLog.txt";
        if (File.Exists(path))
        {
            using (StreamWriter wr = File.AppendText(path))
            {
                wr.WriteLine(e.Result.Text.ToString());
            }
        }
        else if (!File.Exists(path))
        {
            using (var wr = new StreamWriter("spymodeLog.txt", true))
            {
                wr.WriteLine(e.Result.Text.ToString());
            }
        }
    }
}
The SpeechRecognized event fires when a command from the dictionary has been recognized, hence the name of the event.
What you want is the SpeechDetected event, which, as the documentation says, fires when the engine recognizes that something has been said.
Each speech recognizer has an algorithm to distinguish between silence and speech. When the SpeechRecognitionEngine performs a speech recognition operation, it raises the SpeechDetected event when its algorithm identifies the input as speech.
This event gives you the actual position at which audio was captured. That allows you to go find that audio portion and save it to a file, or pass it to the methods that convert audio to text (I do not remember the exact methods, but they are there; I have used them in the past).
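A minimal sketch of wiring up SpeechDetected, assuming `recEngine` is your SpeechRecognitionEngine instance:

```csharp
// SpeechDetected fires whenever the engine decides the input is speech,
// whether or not it matches any loaded grammar.
recEngine.SpeechDetected += (s, e) =>
{
    // e.AudioPosition is the position in the input stream where
    // speech was detected; use it to locate the audio portion.
    Console.WriteLine("Speech detected at " + e.AudioPosition);
};
```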
The second method doesn't work on every computer; only 1 out of 3 of my PCs actually works with it. It is to create an instance of a special grammar dictionary called DictationGrammar.
What you do is create a new instance of this class. You have 3 ways to do so: normal, spelling, or the computer's accessibility default.
Default (computer accessibility):
var grammar = new DictationGrammar();
Normal:
var grammar = new DictationGrammar("grammar:dictation");
Spelling:
var grammar = new DictationGrammar("grammar:dictation#spelling");
Then use that grammar in combination with all the others you want, and you can create a tree of choices that leads to generic recognition. In my old app I had the keyword "Note" that had to be said first; recognition then fell through to the DictationGrammar and started to recognize all text.
Since it didn't work on all computers while other commands worked perfectly, I assumed it had something to do with the wrong language of the dictation grammar being loaded, but there was no option to change that, so I went with SpeechDetected and converted the audio to text myself.
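To make the DictationGrammar approach concrete, here is a sketch that loads both a command grammar and a DictationGrammar, so utterances that are not commands still come back through SpeechRecognized (the command words and file name are placeholders taken from the question):

```csharp
using System;
using System.IO;
using System.Speech.Recognition;

var recEngine = new SpeechRecognitionEngine();
recEngine.SetInputToDefaultAudioDevice();

// Command grammar: the fixed phrases from CommandsList.
var commands = new Choices("twitch", "notepad");
recEngine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

// Dictation grammar: catches free-form speech that matches no command.
recEngine.LoadGrammar(new DictationGrammar());

recEngine.SpeechRecognized += (s, e) =>
    File.AppendAllText("spymodeLog.txt", e.Result.Text + Environment.NewLine);

recEngine.RecognizeAsync(RecognizeMode.Multiple);
```

With both grammars loaded, the engine reports whichever produced the higher-confidence match, so commands still work while everything else falls through to dictation.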

Need to log data very fast using Entity Framework

I need to develop software to monitor a value from a pressure transducer using a PLC and store the values in a database. The problem is I need to read the values every 20 ms. I'm using this code to save the data using Entity Framework and SQL. I'm using a text box to check whether the timer can handle the speed and to compare against the SQL table.
Records made with the text box:
26/06/2017 - 10: 46:35.236
26/06/2017 - 10: 46:35.256
26/06/2017 - 10: 46:35.276
26/06/2017 - 10: 46:35.296
private void mmTimer_Tick(object sender, System.EventArgs e)
{
    counter++;
    lblCounter.Text = counter.ToString();
    txtDT.AppendText(DateTime.Now.ToString("dd/MM/yyyy - HH: mm:ss.FFF\n"));
    using (DatabaseContext db = new DatabaseContext())
    {
        storeDataBindingSource.DataSource = db.StoreDataList.ToList();
        StoreData objStoreData = storeDataBindingSource.Current as StoreData;
        {
            var _StoreData = new StoreData
            {
                DateTime = DateTime.Now.ToString("dd/MM/yyyy - HH: mm:ss.FFF")
            };
            db.StoreDataList.Add(_StoreData);
            db.SaveChanges();
        }
    }
}
But when I look at the SQL table, the time values don't keep the same 20 ms spacing between inserts, probably because of the huge amount of data being saved every time. Maybe I should use a buffer and insert everything at once.
Any suggestions? Thanks in advance.
"use a buffer and insert all at once"
Definitely buffer readings. As a further optimization, you can bypass SaveChanges() (which performs row-by-row inserts) and use a TVP or SqlBulkCopy to insert batches into SQL Server.
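A sketch of the buffered approach, assuming a form with the same timer; the table and column names ("StoreData", "DateTime") and `connectionString` are assumptions based on the entity above:

```csharp
// Buffer samples in memory on the hot 20 ms path, then bulk-insert
// them in one round trip every N samples.
private readonly List<DateTime> buffer = new List<DateTime>();

private void mmTimer_Tick(object sender, EventArgs e)
{
    buffer.Add(DateTime.Now);      // memory only: no DbContext, no I/O
    if (buffer.Count >= 500)       // flush roughly every 10 seconds
        Flush();
}

private void Flush()
{
    var table = new DataTable();
    table.Columns.Add("DateTime", typeof(string));
    foreach (var t in buffer)
        table.Rows.Add(t.ToString("dd/MM/yyyy - HH:mm:ss.fff"));
    buffer.Clear();

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "StoreData";
        bulk.WriteToServer(table);   // one batched insert, not 500 round trips
    }
}
```

Note that the original tick handler also reloads the whole table (`db.StoreDataList.ToList()`) on every tick, which gets slower as the table grows; dropping that from the hot path matters as much as the buffering.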

Office.Interop.Publisher com outofmemoryexception when call via aspx.cs page

I am trying to run a catalog merge on a Microsoft Publisher document when a user clicks a button on a web page, and it fails with an OutOfMemoryException. This code runs just fine in a console app, so I am wondering whether there are any tricks to get it to work. I was able to do a Word merge on a docx file this way, but Publisher gets the OutOfMemoryException immediately.
protected void genStaffIndex_Bt_Click(object sender, EventArgs e)
{
    string dataSource = @"C:\Users\score\Documents\My Data Sources\(local) caraway SupportStaffView.odc";
    string fileGenDir = Server.MapPath("~/HootAdmin/GenDocs");
    string outputFile = Path.Combine(fileGenDir, "SupportStaffCatalog.pub");
    string sourceDoc = Server.MapPath("~/HootAdmin/DocTemplates/SupportStaffCatalog.pub");
    long bytes = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;
    Microsoft.Office.Interop.Publisher.Application application = new Microsoft.Office.Interop.Publisher.Application();
    bytes = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;
    var mydoc = application.Open(sourceDoc, false, false, Microsoft.Office.Interop.Publisher.PbSaveOptions.pbDoNotSaveChanges);
    mydoc.MailMerge.OpenDataSource(bstrDataSource: dataSource);
    var newdoc = mydoc.MailMerge.Execute(false, Microsoft.Office.Interop.Publisher.PbMailMergeDestination.pbMergeToNewPublication);
    mydoc.Close();
    newdoc.SaveAs(outputFile, Microsoft.Office.Interop.Publisher.PbFileFormat.pbFilePublication, false);
    newdoc.Close();
    application.Quit();
}

How can I use google text to speech api in windows form?

I want to use Google text-to-speech in my Windows Forms application; it should read a label. I added the System.Speech reference. How can it read a label on a button click event?
http://translate.google.com/translate_tts?q=testing+google+speech is the Google text-to-speech API. Alternatively, how can I use Microsoft's native text-to-speech?
UPDATE: Google's TTS API is no longer publicly available. The notes at the bottom about Microsoft's TTS are still relevant and provide equivalent functionality.
You can use Google's TTS API from your WinForm application by playing the response using a variation of this question's answer (it took me a while but I have a real solution):
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        this.FormClosing += (sender, e) =>
        {
            if (waiting)
                stop.Set();
        };
    }

    private void ButtonClick(object sender, EventArgs e)
    {
        var clicked = sender as Button;
        var relatedLabel = this.Controls.Find(clicked.Tag.ToString(), true).FirstOrDefault() as Label;
        if (relatedLabel == null)
            return;
        var playThread = new Thread(() => PlayMp3FromUrl("http://translate.google.com/translate_tts?q=" + HttpUtility.UrlEncode(relatedLabel.Text)));
        playThread.IsBackground = true;
        playThread.Start();
    }

    bool waiting = false;
    AutoResetEvent stop = new AutoResetEvent(false);

    public void PlayMp3FromUrl(string url)
    {
        using (Stream ms = new MemoryStream())
        {
            using (Stream stream = WebRequest.Create(url)
                .GetResponse().GetResponseStream())
            {
                byte[] buffer = new byte[32768];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
            }
            ms.Position = 0;
            using (WaveStream blockAlignedStream =
                new BlockAlignReductionStream(
                    WaveFormatConversionStream.CreatePcmStream(
                        new Mp3FileReader(ms))))
            {
                using (WaveOut waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback()))
                {
                    waveOut.Init(blockAlignedStream);
                    waveOut.PlaybackStopped += (sender, e) =>
                    {
                        waveOut.Stop();
                    };
                    waveOut.Play();
                    waiting = true;
                    stop.WaitOne(10000);
                    waiting = false;
                }
            }
        }
    }
}
NOTE: The above code requires NAudio to work (free/open source) and using statements for System.Web, System.Threading, and NAudio.Wave.
My Form1 has 2 controls on it:
A Label named label1
A Button named button1 with a Tag of label1 (used to bind the button to its label)
The above code can be simplified slightly if you have different events for each button/label combination, using something like (untested):
private void ButtonClick(object sender, EventArgs e)
{
var clicked = sender as Button;
var playThread = new Thread(() => PlayMp3FromUrl("http://translate.google.com/translate_tts?q=" + HttpUtility.UrlEncode(label1.Text)));
playThread.IsBackground = true;
playThread.Start();
}
There are problems with this solution though (this list is probably not complete; I'm sure comments and real world usage will find others):
Notice the stop.WaitOne(10000); in the first code snippet. The 10000 represents a maximum of 10 seconds of audio to be played so it will need to be tweaked if your label takes longer than that to read. This is necessary because the current version of NAudio (v1.5.4.0) seems to have a problem determining when the stream is done playing. It may be fixed in a later version or perhaps there is a workaround that I didn't take the time to find. One temporary workaround is to use a ParameterizedThreadStart that would take the timeout as a parameter to the thread. This would allow variable timeouts but would not technically fix the problem.
More importantly, the Google TTS API is unofficial (meaning it is not meant to be consumed by non-Google applications) and is subject to change without notice at any time. If you need something that will work in a commercial environment, I'd suggest either the MS TTS solution (as your question suggests) or one of the many commercial alternatives. None of them tend to be even this simple, though.
To answer the other side of your question:
The System.Speech.Synthesis.SpeechSynthesizer class is much easier to use and you can count on it being available reliably (where with the Google API, it could be gone tomorrow).
It is really as easy as adding a reference to the System.Speech assembly and:
public void SaySomething(string somethingToSay)
{
var synth = new System.Speech.Synthesis.SpeechSynthesizer();
synth.SpeakAsync(somethingToSay);
}
This just works.
Trying to use the Google TTS API was a fun experiment but I'd be hard pressed to suggest it for production use, and if you don't want to pay for a commercial alternative, Microsoft's solution is about as good as it gets.
I know this question is a bit out of date but recently Google published Google Cloud Text To Speech API.
.NET Client version of Google.Cloud.TextToSpeech can be found here:
https://github.com/jhabjan/Google.Cloud.TextToSpeech.V1
Here is a short example of how to use the client:
GoogleCredential credentials =
    GoogleCredential.FromFile(Path.Combine(Program.AppPath, "jhabjan-test-47a56894d458.json"));

TextToSpeechClient client = TextToSpeechClient.Create(credentials);

SynthesizeSpeechResponse response = client.SynthesizeSpeech(
    new SynthesisInput()
    {
        Text = "Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 32 voices"
    },
    new VoiceSelectionParams()
    {
        LanguageCode = "en-US",
        Name = "en-US-Wavenet-C"
    },
    new AudioConfig()
    {
        AudioEncoding = AudioEncoding.Mp3
    }
);

string speechFile = Path.Combine(Directory.GetCurrentDirectory(), "sample.mp3");
File.WriteAllBytes(speechFile, response.AudioContent);
