Looping through stream data - c#

I'm writing a program that uses a fingerprint reader. I have stored the fingerprint data in an array, arr. Unfortunately, only the first value, arr[0], is ever matched, so only one finger is detected and the rest are ignored. If I hard-code a specific index instead, e.g. 2, it works fine for that value alone.
Here's my code:
for (int x = 0; x < (arr.Length - 1); x++)
{
    byte[] fpbyte = GetStringToBytes(arr[x]);
    Stream stream = new MemoryStream(fpbyte);
    Data.Templates[x] = new DPFP.Template(stream);
}
foreach (DPFP.Template template in Data.Templates)
{
    // Get template from storage.
    if (template != null)
    {
        // Compare feature set with particular template.
        ver.Verify(FeatureSet, template, ref res);
        Data.IsFeatureSetMatched = res.Verified;
        Data.FalseAcceptRate = res.FARAchieved;
        if (res.Verified)
            MessageBox.Show("Yes");
        break; // success
    }
}
if (!res.Verified)
    Status = DPFP.Gui.EventHandlerStatus.Failure;
MessageBox.Show("No");
Data.Update();

You unconditionally break from your loop, whether verified or not.
Your code should read:
if (res.Verified) {
    MessageBox.Show("Yes");
    break; // success
}
This is a good example of why good coding practice suggests always using braces, even for a one-line conditional body, as the error would have been much more obvious.
Similarly, you should have written
if (!res.Verified) {
    Status = DPFP.Gui.EventHandlerStatus.Failure;
    MessageBox.Show("No");
}
at the end of your snippet.

Thanks to Dragonthoughts, I made the following changes and the code works just fine:
for (int x = 0; x < (arr.Length - 1); x++)
{
    byte[] fpbyte = GetStringToBytes(arr[x]);
    using (Stream stream = new MemoryStream(fpbyte))
    {
        Data.Templates[x] = new DPFP.Template(stream);
        // Get template from storage.
        if (Data.Templates[x] != null)
        {
            // Compare feature set with particular template.
            ver.Verify(FeatureSet, Data.Templates[x], ref res);
            Data.IsFeatureSetMatched = res.Verified;
            Data.FalseAcceptRate = res.FARAchieved;
            if (res.Verified)
            {
                status.Text = "Verified";
                break; // success
            }
        }
    }
}
if (!res.Verified)
{
    Status = DPFP.Gui.EventHandlerStatus.Failure;
    status.Text = "Unverified";
}
Data.Update();

Related

How to find a real range of the sheet using C#?

I need help determining the real range of my spreadsheet, as I don't know the last row number; the data gets imported from an external source. I am creating a small program to check whether certain columns have a "null" value and send the details to Slack. Once I reach an empty row, I need to terminate the code. Attaching a small screenshot.
I have tried creating a range variable, but the script runs till the end and sends the message to Slack even though there is no data in the last rows. I need it to stop once it reaches the end of the data, like the 29th row in the screenshot. Below is my code:
public static void ReadEntries()
{
    var range = $"{Sheet}!A3:AW28";
    SpreadsheetsResource.ValuesResource.GetRequest request =
        _service.Spreadsheets.Values.Get(SpreadsheetId, range);
    var response = request.Execute();
    IList<IList<object>> values = response.Values;
    if (values != null && values.Count > 0) {
        foreach (var row in values) {
            var n = 0;
            if (row[n].ToString().Trim() == null) {
                break;
            }
            //var rowCount = worksheet.Dimension.End.Row;
            //Console.WriteLine("{0} | {1} | {2} | {3}", row[45], row[46], row[47], row[48]);
            //if (row[n] == "-")
            for (int i = 0; i < 49; i++)
            //else if (row[n].ToString().Trim()== "null" && n<49)
            {
                var client1 = new RestClient("<slack webhook>");
                client1.Timeout = -1;
                var request1 = new RestRequest(Method.POST);
                var body = "<message_payload>";
                request1.AddParameter("application/json", body, ParameterType.RequestBody);
                //IRestResponse response1 = client1.Execute(request1);
                if (row[i].ToString().Trim() == "null") {
                    Console.WriteLine("OK" + i);
                } else if (row[i].ToString().Trim() == "") {
                    Console.WriteLine("Its Over" + i);
                }
                //Thread.Sleep(1000);
                //n++;
            }
        }
    } else {
        Console.WriteLine("No data found.");
    }
}
I have tried searching for an answer and found a line that can supposedly find the last row, and probably the real range of my spreadsheet, but it does not work for some reason:
var rowCount = worksheet.Dimension.End.Row;
I have also tried to identify a blank cell by converting the response cell to a string, like this:
if (row[n].ToString().Trim() == null)
{
break;
}
But I don't think it is working correctly. I am really sorry if I did not explain it properly; English is not my first language, but I will try to share more details if needed.
if (string.IsNullOrEmpty(row[n]?.ToString().Trim()))
{
    break;
}
or
string.IsNullOrWhiteSpace
Try it like that.
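A minimal sketch of how that check could sit inside the existing loop (assuming the first cell of every populated row always has a value, which is not stated in the question):
foreach (var row in values)
{
    // Stop as soon as a row has no first cell or the first cell is blank.
    if (row.Count == 0 || string.IsNullOrWhiteSpace(row[0]?.ToString()))
    {
        break;
    }
    // ... existing per-row processing and Slack call go here ...
}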
Is this what you seek (not sure)? This is Google Apps Script rather than C#:
function myfunk() {
  const sh = SpreadsheetApp.getActiveSheet();
  const rg = `${sh.getName()}!${sh.getDataRange().getA1Notation()}`;
  Logger.log(rg);
  return rg;
}

Parsing data from serial port thread-safe

I read data from the serial port and parse it in a separate class. However, the data is parsed incorrectly: some samples are repeated while others are missing.
Here is an example of the parsed packet. It starts with the packetIndex (which should start at 1 and increment). You can see how the packetIdx repeats and some of the other values repeat as well. I think that's due to multithreading, but I'm not sure how to fix it.
2 -124558.985180734 -67934.4168823262 -164223.049786454 -163322.386243628
2 -124619.580759952 -67962.535376851 -164191.757344217 -163305.68949052
3 -124685.719571795 -67995.8394760894 -164191.042088394 -163303.119039907
5 -124801.747477263 -68045.7062179692 -164195.288919841 -163299.140429394
6 -124801.747477263 -68045.7062179692 -164221.105184687 -163297.46404856
6 -124832.8387538 -68041.9287731563 -164214.936103217 -163294.983004926
This is what I should receive:
1 -124558.985180734 -67934.4168823262 -164223.049786454 -163322.386243628
2 -124619.580759952 -67962.535376851 -164191.757344217 -163305.68949052
3 -124685.719571795 -67995.8394760894 -164191.042088394 -163303.119039907
4 -124801.747477263 -68045.7062179692 -164195.288919841 -163299.140429394
...
This is the SerialPort DataReceived handler:
public void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    lock (_lock)
    {
        byte[] buffer = new byte[_serialPort1.BytesToRead];
        _serialPort1.Read(buffer, 0, buffer.Length);
        for (int i = 0; i < buffer.Length; i++)
        {
            //Parse data
            double[] samplesAtTimeT = DataParserObj.interpretBinaryStream(buffer[i]);
            //Add data to BlockingCollection when parsed
            if (samplesAtTimeT != null)
                _bqBufferTimerSeriesData.Add(samplesAtTimeT);
        }
    }
}
And the class that parses the data:
public class DataParser
{
    private int packetSampleCounter = 0;
    private int localByteCounter = 0;
    private int packetState = 0;
    private byte[] tmpBuffer = new byte[3];
    private double[] ParsedData = new double[5]; //[0] packetIdx (0-255), [1-4] signal

    public double[] interpretBinaryStream(byte actbyte)
    {
        bool returnDataFlag = false;
        switch (packetState)
        {
            case 0: // end packet indicator
                if (actbyte == 0xC0)
                    packetState++;
                break;
            case 1: // start packet indicator
                if (actbyte == 0xA0)
                    packetState++;
                else
                    packetState = 0;
                break;
            case 2: // packet Index
                packetSampleCounter = 0;
                ParsedData[packetSampleCounter] = actbyte;
                packetSampleCounter++;
                localByteCounter = 0;
                packetState++;
                break;
            case 3: //channel data (4 channels x 3byte/channel)
                // 3 bytes
                tmpBuffer[localByteCounter] = actbyte;
                localByteCounter++;
                if (localByteCounter == 3)
                {
                    ParsedData[packetSampleCounter] = Bit24ToInt32(tmpBuffer);
                    if (packetSampleCounter == 5)
                        packetState++; //move to next state, end of packet
                    else
                        localByteCounter = 0;
                }
                break;
            case 4: // end packet
                if (actbyte == 0xC0)
                {
                    returnDataFlag = true;
                    packetState = 1;
                }
                else
                    packetState = 0;
                break;
            default:
                packetState = 0;
                break;
        }
        if (returnDataFlag)
            return ParsedData;
        else
            return null;
    }
}
Get rid of the DataReceived event and instead use await serialPort.BaseStream.ReadAsync(....) to get notified when data comes in. async/await is much cleaner and doesn't force you into multithreaded data processing. For high speed networking, parallel processing is great. But serial ports are slow, so extra threads have no benefit.
Also, BytesToRead is buggy (it does return the number of queued bytes, but it destroys other state) and you should never call it.
Finally, do NOT ignore the return value from Read (or BaseStream.ReadAsync). You need to know how many bytes were actually placed into your buffer, because it is not guaranteed to be the same number you asked for.
private async void ReadTheSerialData()
{
    var buffer = new byte[200];
    while (serialPort.IsOpen) {
        var valid = await serialPort.BaseStream.ReadAsync(buffer, 0, buffer.Length);
        for (int i = 0; i < valid; ++i)
        {
            //Parse data
            double[] samplesAtTimeT = DataParserObj.interpretBinaryStream(buffer[i]);
            //Add data to BlockingCollection when parsed
            if (samplesAtTimeT != null)
                _bqBufferTimerSeriesData.Add(samplesAtTimeT);
        }
    }
}
Just call this function after opening the port and setting your flow control, timeouts, etc. You may find that you no longer need the blocking queue, but can just handle the contents of samplesAtTimeT directly.
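As a rough sketch of that call site (the port name, baud rate, and timeout below are placeholders, not values from the question):
serialPort = new SerialPort("COM3", 115200); // hypothetical settings
serialPort.ReadTimeout = 500;
serialPort.Open();
ReadTheSerialData(); // async void: runs as a fire-and-forget read loop, so handle exceptions inside it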

Read .csv file and store it in an array?

I am working on an assignment which stores data from a .csv file into arrays. I have used for (int i = 0; i < data.Length; i++), but i++ is unreachable. Have a look at the code and you will see what I mean. The problem is probably only in the storing part. Help me if you can.
Thanks
static void Load(string[] EmployeeNumbers, string[] EmployeeNames, string[] RegistrationNumbers, float[] EngineCapacityArray,
    int[] StartKilometresArray, int[] EndKilometresArray, string[] TripDescriptions, bool[] PassengerCarriedArray,
    ref int NextAvailablePosition, ref int RecordCount, ref int CurrentRecord)
{
    string str = "";
    FileStream fin;
    string[] data;
    bool tval = false;
    // Open the input file
    try
    {
        fin = new FileStream("carallowance.csv", FileMode.Open);
    }
    catch (IOException exc)
    {
        Console.WriteLine(exc.Message);
        return;
    }
    // Read each line of the file
    StreamReader fstr_in = new StreamReader(fin);
    try
    {
        while ((str = fstr_in.ReadLine()) != null)
        {
            // Separate the line into the name and age
            data = str.Split(';');
            if (data.Length == 8)
            {
                Console.WriteLine("Error: Could not load data from the file. Possibly incorrect format.");
            }
            for (int i = 0; i < data.Length; i++)
            {
                EmployeeNumbers[NextAvailablePosition] = data[0];
                EmployeeNames[NextAvailablePosition] = data[1];
                RegistrationNumbers[NextAvailablePosition] = data[2];
                tval = float.TryParse(data[3], out EngineCapacityArray[NextAvailablePosition]);
                tval = int.TryParse(data[4], out StartKilometresArray[NextAvailablePosition]);
                tval = int.TryParse(data[5], out EndKilometresArray[NextAvailablePosition]);
                TripDescriptions[NextAvailablePosition] = data[6];
                tval = bool.TryParse(data[7], out PassengerCarriedArray[NextAvailablePosition]);
                CurrentRecord = NextAvailablePosition;
                NextAvailablePosition++;
                RecordCount++;
                Console.WriteLine("Your file is sucessfully loaded.");
                break;
            }
        }
    }
    catch (IOException exc)
    {
        Console.WriteLine(exc.Message);
    }
    // Close the file
    fstr_in.Close();
}
It's unreachable because of the break; at the end of the loop. That forces the for loop to stop executing after the first time around. If you run this in a console project, it'll only put out a 0.
private static void Main(string[] args)
{
    for (int i = 0; i < 2; i++)
    {
        Console.WriteLine(i.ToString());
        break;
    }
}
Perhaps Code Review Stack Exchange would be a better fit. There are a number of issues here.
First we can simplify using a framework call to File.ReadAllLines(...). That will give you a sequence of all lines in the file. Then you want to transform that into a sequence of arrays (split on ','). That's straightforward:
var splitLines = File.ReadAllLines("\path")
.Select(line => line.Split(new char[] { ',' }));
Now you can just iterate over splitLines with a foreach.
(I do notice that you seem to be setting values into the arrays that are passed in. Try not to get into the habit of doing that. These kinds of side effects and abuse of reference params are prone to becoming very brittle.)
Then this seems very odd:
if (data.Length == 8)
{
Console.WriteLine("...");
}
I suspect that you just have a typo in your comparison operator (it should be !=). If you don't care about writing to the console on bad data, you can simply filter out the bad data after the transformation. That looks like:
var splitLines = File.ReadAllLines("\path")
.Select(line => line.Split(new char[] { ',' }))
.Where(data => data.Length == 8);
Now recall that [int/float].TryParse(s, out v) will set v to be the value that was parsed, or the default value for the type, and return true if the parse was successful. That "or default" is important here. That means that you're stuffing bad/invalid values if they can't be parsed, and you're doing nothing with tval.
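For example (a small illustration of that behaviour, not part of the original code):
int startKm;
bool ok = int.TryParse("not a number", out startKm);
// ok == false, startKm == 0 (the default for int), so the array would silently end up holding 0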
Instead of all of that, consider an object/type that represents a record from your dataset. It looks like you're trying to track employee mileage from a csv table. That looks something like:
public class MileageRecord
{
    public string Name { get; set; }
    /* More properties */

    public static MileageRecord FromCSV(string[] data)
    {
        /* try parsing, if not then log errs to file and return null */
    }
}
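As a rough sketch, that FromCSV stub could look something like the following, assuming the same column order as the original arrays (this is an illustration, not the asker's actual format):
public static MileageRecord FromCSV(string[] data)
{
    // Columns assumed: number, name, registration, engine capacity,
    // start km, end km, trip description, passenger carried.
    if (!float.TryParse(data[3], out float engineCapacity) ||
        !int.TryParse(data[4], out int startKm) ||
        !int.TryParse(data[5], out int endKm) ||
        !bool.TryParse(data[7], out bool passengerCarried))
    {
        return null; // or log the bad line to a file
    }
    return new MileageRecord
    {
        Name = data[1],
        /* set the remaining properties here */
    };
}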
Now you've gotten rid of all of your side effects and the whole thing is cleaner. Loading all this data from file is as straightforward as this:
public static IEnumerable<MileageRecord> Load()
{
    return File.ReadAllLines("\path")
        //.Skip(1) // if first line of the file is column headers
        .Select(line => line.Split(new char[] { ',' }))
        .Where(data => data.Length == 8)
        .Select(data => MileageRecord.FromCSV(data))
        .Where(mileage => mileage != null);
}
This piece of code:
NextAvailablePosition++;
RecordCount++;
Console.WriteLine("Your file is sucessfully loaded.");
break; // <-- this instruction
}
takes you out of the for loop without giving i a chance to be incremented.
Problem: you are supposed to add the break statement inside the if condition (which is inside the while loop), so that if the data length does not match 8 it will break out of the loop. But you have mistakenly added the break inside the for loop; that is why it only executes once and then comes out of the loop.
Solution: move the break statement from the for loop to the if block inside the while loop.
Try this:
Step 1: Remove the break statement from the for loop.
CurrentRecord = NextAvailablePosition;
NextAvailablePosition++;
RecordCount++;
Console.WriteLine("Your file is sucessfully loaded.");
// break; //move this statement to inside the if block
Step 2: Place the break statement in the if block inside the while loop.
if (data.Length == 8)
{
    Console.WriteLine("Error: Could not load data from the file. Possibly incorrect format.");
    break;
}
Suggestion: you can rewrite your code using the File.ReadAllLines() method to avoid the complexity, as below:
static void Load(string[] EmployeeNumbers, string[] EmployeeNames, string[] RegistrationNumbers, float[] EngineCapacityArray,
    int[] StartKilometresArray, int[] EndKilometresArray, string[] TripDescriptions, bool[] PassengerCarriedArray,
    ref int NextAvailablePosition, ref int RecordCount, ref int CurrentRecord)
{
    string str = "";
    string[] data;
    bool tval = false;
    String[] strLines = File.ReadAllLines("carallowance.csv");
    for (int i = 0; i < strLines.Length; i++)
    {
        str = strLines[i];
        data = str.Split(';');
        if (data.Length == 8)
        {
            Console.WriteLine("Error: Could not load data from the file. Possibly incorrect format.");
            break;
        } //End of if block
        else
        {
            EmployeeNumbers[NextAvailablePosition] = data[0];
            EmployeeNames[NextAvailablePosition] = data[1];
            RegistrationNumbers[NextAvailablePosition] = data[2];
            tval = float.TryParse(data[3], out EngineCapacityArray[NextAvailablePosition]);
            tval = int.TryParse(data[4], out StartKilometresArray[NextAvailablePosition]);
            tval = int.TryParse(data[5], out EndKilometresArray[NextAvailablePosition]);
            TripDescriptions[NextAvailablePosition] = data[6];
            tval = bool.TryParse(data[7], out PassengerCarriedArray[NextAvailablePosition]);
            CurrentRecord = NextAvailablePosition;
            NextAvailablePosition++;
            RecordCount++;
        } //End of else block
    } //End of for loop
    Console.WriteLine("Your file is sucessfully loaded.");
} //End of function

Out Of Memory Exception

I have a function that creates an animated GIF. It always worked perfectly, but now that all my GIFs are black and white, it gives an OutOfMemoryException on:
e.AddFrame(Image.FromFile(imageFilePaths[i]));
My function:
public void MakeGif(string sourcefolder, string destinationgif)
{
    IsGifing = true;
    string path = MainForm.RootDirectory;
    String[] imageFilePaths = Directory.GetFiles(path);
    String outputFilePath = MainForm.RootDirectory + @"\Final.gif";
    AnimatedGifEncoder e = new AnimatedGifEncoder();
    e.Start(outputFilePath);
    e.SetDelay(300);
    //-1:no repeat,0:always repeat
    e.SetRepeat(0);
    for (int i = 0, count = imageFilePaths.Length; i < count; i++)
        e.AddFrame(Image.FromFile(imageFilePaths[i]));
    e.Finish();
    IsGifing = false;
}
The AddFrame function:
public bool AddFrame(Image im)
{
    if ((im == null) || !started)
    {
        return false;
    }
    bool ok = true;
    try
    {
        if (!sizeSet)
        {
            // use first frame's size
            SetSize(im.Width, im.Height);
        }
        image = im;
        GetImagePixels(); // convert to correct format if necessary
        AnalyzePixels(); // build color table & map pixels
        if (firstFrame)
        {
            WriteLSD(); // logical screen descriptor
            WritePalette(); // global color table
            if (repeat >= 0)
            {
                // use NS app extension to indicate reps
                WriteNetscapeExt();
            }
        }
        WriteGraphicCtrlExt(); // write graphic control extension
        WriteImageDesc(); // image descriptor
        if (!firstFrame)
        {
            WritePalette(); // local color table
        }
        WritePixels(); // encode and write pixel data
        firstFrame = false;
    }
    catch (IOException e)
    {
        ok = false;
    }
    return ok;
}
Documentation for Image.FromFile says that it will throw OutOfMemoryException if the file does not contain a valid image, or if the image is in a format that GDI+ doesn't support.
What happens if you rewrite your code to:
for (int i = 0, count = imageFilePaths.Length; i < count; i++)
{
    var img = Image.FromFile(imageFilePaths[i]);
    e.AddFrame(img);
}
If you get the exception on the call to Image.FromFile, it's because your image can't be loaded.
I don't know the details, but this doesn't look right. There are no 'usings', so you possibly aren't disposing of resources.
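If disposal turns out to be the problem, wrapping each frame in a using block is one way to release the GDI+ handles. This is only a sketch, and it assumes AddFrame is finished with the image before it returns, which the posted code suggests (it analyzes and writes the pixels inside the call):
for (int i = 0, count = imageFilePaths.Length; i < count; i++)
{
    using (var img = Image.FromFile(imageFilePaths[i]))
    {
        e.AddFrame(img); // frame data is written inside AddFrame, so the image can be disposed afterwards
    }
}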

using StreamReader.Read to read blocks of short, integer and decimal data types

The values are comma-separated, so I am using a StringBuilder to build up each value and then write it to the appropriate buffer. I noticed considerable time being spent in builder.ToString and the Parse functions. Do I have to write unsafe code to overcome this problem, and what's the best way to achieve what I want?
private static void ReadSecondBySecondFileToEndBytes(FileInfo file, SafeDictionary<short, SafeDictionary<string, SafeDictionary<int, decimal>>> dayData)
{
    string name = file.Name.Split('.')[0];
    int result = 0;
    int index = result;
    int length = 1*1024; //1 kb
    char[] buffer = new char[length];
    StringBuilder builder = new StringBuilder();
    bool pendingTick = true;
    bool pendingSymbol = true;
    bool pendingValue = false;
    string characterString = string.Empty;
    short symbol = 0;
    int tick = 0;
    decimal value;
    using (StreamReader streamReader = (new StreamReader(file.FullName)))
    {
        while ((result = streamReader.Read(buffer, 0, length)) > 0)
        {
            int i = 0;
            while (i < result)
            {
                if (buffer[i] == '\r' || buffer[i] == '\n')
                {
                    pendingTick = true;
                    if (pendingValue)
                    {
                        value = decimal.Parse(builder.ToString());
                        pendingSymbol = true;
                        pendingValue = false;
                        dayData[symbol][name][tick] = value;
                        builder.Clear();
                    }
                }
                else if (buffer[i] == ',') // new value to capture
                {
                    if (pendingTick)
                    {
                        tick = int.Parse(builder.ToString());
                        pendingTick = false;
                    }
                    else if (pendingSymbol)
                    {
                        symbol = short.Parse(builder.ToString());
                        pendingValue = true;
                        pendingSymbol = false;
                    }
                    else if (pendingValue)
                    {
                        value = decimal.Parse(builder.ToString());
                        pendingSymbol = true;
                        pendingValue = false;
                        dayData[symbol][name][tick] = value;
                    }
                    builder.Clear();
                }
                else
                    builder.Append(buffer[i]);
                i++;
            }
        }
    }
}
My suggestion would be to not try to parse the majority of the file as you are doing now, but go for something like this:
using (var reader = File.OpenText("<< filename >>"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        string[] parts = line.Split(',');
        // Process the different parts of the line here.
    }
}
The main difference here is that you are not parsing line endings and comma separation yourself. The advantage is that when you use high-level methods like ReadLine(), the StreamReader (which File.OpenText() returns) can optimize for reading the file line by line. The same goes for String.Split().
Using these high-level methods will almost always be faster than parsing the buffer yourself.
With the approach above, you don't have to use the StringBuilder anymore and can just get your values like this:
tick = int.Parse(parts[0]);
symbol = short.Parse(parts[1]);
value = decimal.Parse(parts[2]);
dayData[symbol][name][tick] = value;
I have not verified the above snippet; please verify that these lines are correct, or correct them for your business logic.
You got the wrong impression. Yes, while you are testing your program you'll indeed see most time being spent inside Parse() and the builder, because that is the only code that does any real work.
But it's not going to be that way in production. There, all the time will be spent in the StreamReader, because the file won't be present in the file system cache like it is when you run your program over and over again on your dev machine. In production, the file has to be read off a disk drive, and that's glacially slow; disk I/O is the true bottleneck of your program. Making the parsing twice as fast will only make your program a few percent faster, if at all.
Don't compromise the reliability or maintainability of your code for such a small gain.
