FileReader class in C#

I am looking for a fast class for working with text files that offers convenient reading of different types (methods like NextInt32, NextDouble, NextLine, etc.). Can you advise me something?
Edit: BinaryReader is not suitable in my case. The format of my data is not binary. I have a file like
1 2 3
FirstToken NextToken
1.23 2,34
And I want to read this file with code like:
int a = FileReader.NextInt32();
int b = FileReader.NextInt32();
int c = FileReader.NextInt32();
string d = FileReader.NextString();
string e = FileReader.NextString();
double f = FileReader.NextDouble();
double g = FileReader.NextDouble();
Edit 2: I am looking for an analog of Java's Scanner.

I believe this extension method for TextReader would do the trick:
public static class TextReaderTokenizer
{
// Adjust as needed. -1 is EOF.
private static int[] whitespace = { -1, ' ', '\r' , '\n', '\t' };
public static T ReadToken<T>(this TextReader reader)
{
StringBuilder sb = new StringBuilder();
// Skip any leading whitespace so that consecutive calls work
while (reader.Peek() != -1 && Array.IndexOf(whitespace, reader.Peek()) >= 0)
{
reader.Read();
}
while (Array.IndexOf(whitespace, reader.Peek()) < 0)
{
sb.Append((char)reader.Read());
}
return (T)Convert.ChangeType(sb.ToString(), typeof(T));
}
}
It can be used thus:
TextReader reader = File.OpenText("foo.txt");
int n = reader.ReadToken<int>();
string s = reader.ReadToken<string>();
[EDIT] As requested in question comments, here's an instance wrapper version of the above that is parametrized with delimiters and CultureInfo:
public class TextTokenizer
{
private TextReader reader;
private Predicate<char> isDelim;
private CultureInfo cultureInfo;
public TextTokenizer(TextReader reader, Predicate<char> isDelim, CultureInfo cultureInfo)
{
this.reader = reader;
this.isDelim = isDelim;
this.cultureInfo = cultureInfo;
}
public TextTokenizer(TextReader reader, char[] delims, CultureInfo cultureInfo)
{
this.reader = reader;
this.isDelim = c => Array.IndexOf(delims, c) >= 0;
this.cultureInfo = cultureInfo;
}
public TextReader BaseReader
{
get { return reader; }
}
public T ReadToken<T>()
{
StringBuilder sb = new StringBuilder();
// Skip any leading delimiters so that consecutive calls work
while (true)
{
int c = reader.Peek();
if (c < 0 || !isDelim((char)c))
{
break;
}
reader.Read();
}
while (true)
{
int c = reader.Peek();
if (c < 0 || isDelim((char)c))
{
break;
}
sb.Append((char)reader.Read());
}
return (T)Convert.ChangeType(sb.ToString(), typeof(T), cultureInfo);
}
}
Sample usage:
TextReader reader = File.OpenText("foo.txt");
TextTokenizer tokenizer = new TextTokenizer(
reader,
new[] { ' ', '\r', '\n', '\t' },
CultureInfo.InvariantCulture);
int n = tokenizer.ReadToken<int>();
string s = tokenizer.ReadToken<string>();

I'm going to add this as a separate answer because it's quite distinct from the answer I already gave. Here's how you could start creating your own Scanner class:
class Scanner : System.IO.StringReader
{
string currentWord;
public Scanner(string source) : base(source)
{
ReadNextWord();
}
private void ReadNextWord()
{
System.Text.StringBuilder sb = new System.Text.StringBuilder();
char nextChar;
int next;
do
{
next = this.Read();
if (next < 0)
break;
nextChar = (char)next;
if (char.IsWhiteSpace(nextChar))
break;
sb.Append(nextChar);
} while (true);
while((this.Peek() >= 0) && (char.IsWhiteSpace((char)this.Peek())))
this.Read();
if (sb.Length > 0)
currentWord = sb.ToString();
else
currentWord = null;
}
public bool HasNextInt()
{
if (currentWord == null)
return false;
int dummy;
return int.TryParse(currentWord, out dummy);
}
public int NextInt()
{
try
{
return int.Parse(currentWord);
}
finally
{
ReadNextWord();
}
}
public bool HasNextDouble()
{
if (currentWord == null)
return false;
double dummy;
return double.TryParse(currentWord, out dummy);
}
public double NextDouble()
{
try
{
return double.Parse(currentWord);
}
finally
{
ReadNextWord();
}
}
public bool HasNext()
{
return currentWord != null;
}
}
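A minimal usage sketch for this Scanner (the file name below is a placeholder, and only ints and doubles are pulled out, since the class above has no NextString yet):
string source = System.IO.File.ReadAllText("numbers.txt"); // placeholder path
Scanner scanner = new Scanner(source);
while (scanner.HasNext())
{
    if (scanner.HasNextInt())
        Console.WriteLine("int: {0}", scanner.NextInt());
    else if (scanner.HasNextDouble())
        Console.WriteLine("double: {0}", scanner.NextDouble());
    else
        break; // a NextString() method could be added along the same lines
}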

You should define exactly what your file format is meant to look like. How would you represent a string with a space in it? What determines where the line terminators go?
In general you can use TextReader and its ReadLine method, followed by double.TryParse, int.TryParse etc - but you'll need to pin the format down more first.
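For illustration, here is a minimal sketch of that approach, assuming whitespace-separated tokens and invariant-culture number formats (both assumptions - the sample line "1.23 2,34" would need a decision about which culture applies). It needs using System, System.IO and System.Globalization:
using (TextReader reader = File.OpenText("foo.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        foreach (string token in line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries))
        {
            int i;
            double d;
            if (int.TryParse(token, NumberStyles.Integer, CultureInfo.InvariantCulture, out i))
                Console.WriteLine("int: {0}", i);
            else if (double.TryParse(token, NumberStyles.Float, CultureInfo.InvariantCulture, out d))
                Console.WriteLine("double: {0}", d);
            else
                Console.WriteLine("string: {0}", token);
        }
    }
}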

Have you checked out the BinaryReader class? Yes, it's a text file, but there is nothing stopping you from treating it as binary data and hence using BinaryReader. It has all of the methods that you are looking for, with the exception of ReadLine. However, it wouldn't be too difficult to implement that method on top of BinaryReader.

If you do need text files (i.e. UTF-8 or ASCII encoding) then BinaryReader will not work.
You can use TextReader, but unlike BinaryReader it does not support reading any types other than lines and chars. You will have to define what separators are allowed and parse the line-based data yourself.

The System.IO.BinaryReader class is what you need.
Example of implementation of a ReadLine method:
public static class Extensions
{
public static String ReadLine(this BinaryReader binaryReader)
{
var bytes = new List<Byte>();
int temp;
// Read until a line feed (ASCII 10) or the end of the stream (-1)
while ((temp = binaryReader.Read()) != -1 && temp != 10)
bytes.Add((byte)temp);
return Encoding.Default.GetString(bytes.ToArray());
}
}
Example for using this class:
using System;
using System.IO;
using System.Security.Permissions;
class Test
{
static void Main()
{
// Load application settings.
AppSettings appSettings = new AppSettings();
Console.WriteLine("App settings:\nAspect Ratio: {0}, " +
"Lookup directory: {1},\nAuto save time: {2} minutes, " +
"Show status bar: {3}\n",
new Object[4]{appSettings.AspectRatio.ToString(),
appSettings.LookupDir, appSettings.AutoSaveTime.ToString(),
appSettings.ShowStatusBar.ToString()});
// Change the settings.
appSettings.AspectRatio = 1.250F;
appSettings.LookupDir = @"C:\Temp";
appSettings.AutoSaveTime = 10;
appSettings.ShowStatusBar = true;
// Save the new settings.
appSettings.Close();
}
}
// Store and retrieve application settings.
class AppSettings
{
const string fileName = "AppSettings####.dat";
float aspectRatio;
string lookupDir;
int autoSaveTime;
bool showStatusBar;
public float AspectRatio
{
get{ return aspectRatio; }
set{ aspectRatio = value; }
}
public string LookupDir
{
get{ return lookupDir; }
set{ lookupDir = value; }
}
public int AutoSaveTime
{
get{ return autoSaveTime; }
set{ autoSaveTime = value; }
}
public bool ShowStatusBar
{
get{ return showStatusBar; }
set{ showStatusBar = value; }
}
public AppSettings()
{
// Create default application settings.
aspectRatio = 1.3333F;
lookupDir = @"C:\AppDirectory";
autoSaveTime = 30;
showStatusBar = false;
if(File.Exists(fileName))
{
BinaryReader binReader =
new BinaryReader(File.Open(fileName, FileMode.Open));
try
{
// If the file is not empty,
// read the application settings.
// First read a few bytes into a buffer to
// determine if the file is empty.
byte[] testArray = new byte[3];
int count = binReader.Read(testArray, 0, 3);
if (count != 0)
{
// Reset the position in the stream to zero.
binReader.BaseStream.Seek(0, SeekOrigin.Begin);
aspectRatio = binReader.ReadSingle();
lookupDir = binReader.ReadString();
autoSaveTime = binReader.ReadInt32();
showStatusBar = binReader.ReadBoolean();
}
}
// If the end of the stream is reached before reading
// the four data values, ignore the error and use the
// default settings for the remaining values.
catch(EndOfStreamException e)
{
Console.WriteLine("{0} caught and ignored. " +
"Using default values.", e.GetType().Name);
}
finally
{
binReader.Close();
}
}
}
// Create a file and store the application settings.
public void Close()
{
using(BinaryWriter binWriter =
new BinaryWriter(File.Open(fileName, FileMode.Create)))
{
binWriter.Write(aspectRatio);
binWriter.Write(lookupDir);
binWriter.Write(autoSaveTime);
binWriter.Write(showStatusBar);
}
}
}

You can probably use the System.IO.File class (e.g. File.OpenText, which gives you a StreamReader) to read the file and System.Convert to parse the strings you read from it.
string line;
while (string.IsNullOrEmpty(line = file.ReadLine()) == false) // file is the StreamReader
{
TYPE value = Convert.ToTYPE( line );
}
Where TYPE is whatever type you're dealing with at that particular line / file.
If there are multiple values on one line you could do a split and read the individual values e.g.
string[] parts = line.Split(' ');
if( parts.Length > 1 )
{
foreach( string item in parts )
{
TYPE value = Convert.ToTYPE( item );
}
}
else
{
// Use the code from before
}

C# Extract json object from mixed data text/js file

I need to parse a React file (main.451e57c9.js) to retrieve the version number with C#.
This file contains mixed data; here is a small part of it:
.....inally{if(s)throw i}}return a}}(e,t)||xe(e,t)||we()}var Se=
JSON.parse('{"shortVersion":"v3.1.56"}')
,Ne="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA
AASAAAAAqCAYAAAATb4ZSAAAACXBIWXMAAAsTAAALEw.....
I need to extract the JSON data {"shortVersion":"v3.1.56"}.
My last attempt was to simply find the string shortVersion and return a certain number of characters after it, but it feels like I am reinventing the wheel. Is there a proper way to identify and extract JSON from the mixed text?
public static void findVersion()
{
var partialName = "main.*.js";
string[] filesInDir = Directory.GetFiles(pathToFile, partialName);
var lines = File.ReadLines(filesInDir[0]);
foreach (var line in File.ReadLines(filesInDir[0]))
{
string keyword = "shortVersion";
int indx = line.IndexOf(keyword);
if (indx != -1)
{
string code = line.Substring(indx + keyword.Length);
Console.WriteLine(code);
}
}
}
RESULT
":"v3.1.56"}'),Ne="data:image/png;base64,iVBORw0KGgoAA.....
string findJson(string input, string keyword) {
int startIndex = input.IndexOf(keyword) - 2; //Find the starting point of shortversion then subtract 2 to start at the { bracket
input = input.Substring(startIndex); //Grab everything after the start index
int endIndex = 0;
for (int i = 0; i < input.Length; i++) {
char letter = input[i];
if (letter == '}') {
endIndex = i; //Capture the first instance of the closing bracket in the new trimmed input string.
break;
}
}
return input.Remove(endIndex+1);
}
Console.WriteLine(findJson("fwekjfwkejwe{'shortVersion':'v3.1.56'}wekjrlklkj23klj23jkl234kjlk", "shortVersion"));
You will recieve {'shortVersion':'v3.1.56'} as output
Note that you may have to normalize the quotes first, e.g. line = line.Replace('"', '\'');
Try the method below -
public static object ExtractJsonFromText(string mixedStrng)
{
for (var i = mixedStrng.IndexOf('{'); i > -1; i = mixedStrng.IndexOf('{', i + 1))
{
for (var j = mixedStrng.LastIndexOf('}'); j > -1; j = mixedStrng.LastIndexOf("}", j -1))
{
var jsonProbe = mixedStrng.Substring(i, j - i + 1);
try
{
return JsonConvert.DeserializeObject(jsonProbe);
}
catch
{
}
}
}
return null;
}
Fiddle
https://dotnetfiddle.net/N1jiWH
You should not use GetFiles() since you only need one file, and GetFiles() returns all of them before you can do anything. The code below should give you something you can work with, and it should be about as fast as it can be with big files and/or lots of files in a folder (to be fair, I have not tested this on such a large file system or file).
using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json;
public class Program
{
public static void Main()
{
Console.WriteLine("Hello World");
var path = $@"c:\SomePath";
var jsonString = GetFileVersion(path);
if (!string.IsNullOrWhiteSpace(jsonString))
{
// do something with string; deserialize or whatever.
var result = JsonConvert.DeserializeObject<Version>(jsonString);
var vers = result.shortVersion;
}
}
private static string GetFileVersion(string path)
{
var partialName = "main.*.js";
// JSON string fragment to find: doubled up braces and quotes for the $# string
string matchString = $@"{{""shortVersion"":";
string matchEndString = $@" ""}}'";
// we can later stop on the first match
DirectoryInfo dir = new DirectoryInfo(path);
if (!dir.Exists)
{
throw new DirectoryNotFoundException("The directory does not exist.");
}
// Call the GetFileSystemInfos method and grab the first one
FileSystemInfo info = dir.GetFileSystemInfos(partialName).FirstOrDefault();
if (info != null && info.Exists)
{
// walk the file contents looking for a match (assumptions made here there IS a match and it has that string noted)
var line = File.ReadLines(info.FullName).SkipWhile(l => !l.Contains(matchString)).Take(1).First();
var indexStart = line.IndexOf(matchString);
var indexEnd = line.IndexOf(matchEndString, indexStart);
var jsonString = line.Substring(indexStart, indexEnd - indexStart + matchEndString.Length);
return jsonString;
}
return string.Empty;
}
public class Version
{
public string shortVersion { get; set; }
}
}
Use this, it should be faster - https://dotnetfiddle.net/sYFvYj
public static object ExtractJsonFromText(string mixedStrng)
{
string pattern = @"\(\'\{.*}\'\)";
string str = null;
foreach (Match match in Regex.Matches(mixedStrng, pattern, RegexOptions.Multiline))
{
if (match.Success)
{
str = str + Environment.NewLine + match;
}
}
return str;
}
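Note that the match above still includes the surrounding ('...') characters, so a caller would typically trim those off before deserializing. A hedged usage sketch, assuming fileContents holds the text of the js file and Newtonsoft.Json is referenced:
var raw = (string)ExtractJsonFromText(fileContents);
if (raw != null)
{
    var json = raw.Trim().TrimStart('(', '\'').TrimEnd(')', '\''); // strip the ('...') wrapper
    var obj = Newtonsoft.Json.Linq.JObject.Parse(json);
    Console.WriteLine(obj["shortVersion"]); // v3.1.56
}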

How can I receive data from a serial port in Unity?

I want to establish a full-duplex connection between a microcontroller (ARM STM32F4) and a program made with Unity via C#. This connection must be made through the serial port. I need a two-way connection between these two parts.
First, the program (made in Unity) sends a command to the microcontroller (ARM STM32F4); the microcontroller must then check the received data and send a "finished" confirmation back to the Unity program.
If the data is correct, the next data is sent to the program. My problem is that my program, made with Unity, does not receive the "finished" data sent from the microcontroller.
How can I have a full-duplex connection?
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using System.IO.Ports;
using System.IO;
using UnityEditor;
using SerialPortUtility;
using System;
public class gcode : MonoBehaviour
{
string str;
string str1;
public SerialPort sina;
public InputField sendgcode;
public InputField plusoffset;
public InputField minusoffset;
public InputField lenghtoffset;
public InputField counter;
public Button plus_inc;
public Button plus_dec;
public Button minus_inc;
public Button minus_dec;
public Button lenght_inc;
public Button lenght_dec;
string board;
Array[] inserial;
//int resend = 0;
// Start is called before the first frame update
void Start()
{
sina = new SerialPort("com1", 115200);
sina.ReadTimeout = 100;
sina.Open();
sina.DtrEnable = true;
}
// Update is called once per frame
void Update()
{
//sina.Close();
//str = sina.ReadLine();
//sendgcode.text = str;
}
public void sendserial()
{
str = sendgcode.text;
TextWriter a40 = new StreamWriter("c:\\1.txt");
a40.WriteLine(str);
a40.Close();
str = "";
TextWriter a41 = new StreamWriter("c:\\2.txt");
TextWriter a43 = new StreamWriter("c:\\3.txt");
TextWriter a22 = new StreamWriter("c:\\22.txt");
TextWriter plus = new StreamWriter("c:\\p.txt");
TextWriter minus = new StreamWriter("c:\\m.txt");
TextWriter lenght = new StreamWriter("c:\\t.txt");
System.IO.StreamReader file = new System.IO.StreamReader("c:\\1.txt");
while ((str = file.ReadLine()) != null)
{
str1 = str.Replace(" ", "");
str1 = str1.Replace("L", "\r\n");
str1 = str1.Replace("A", "\r\n");
str1 = str1.Replace("S", "");
a41.WriteLine(str1);
}
a41.WriteLine("S");
a41.Close();
file.Close();
System.IO.StreamReader file22 = new System.IO.StreamReader("c:\\2.txt");
while ((str = file22.ReadLine()) != "S")
{
if (str.Length >= 1) { a22.WriteLine(str); }
}
a22.WriteLine("S");
a22.Close();
file22.Close();
System.IO.StreamReader file1 = new System.IO.StreamReader("c:\\22.txt");
while ((str = file1.ReadLine()) != "S")
{
int idx = str.IndexOf('-');
if (idx >= 0)
{
if (str.Length == 2)
{
str = str.Replace("-", "-00");
a43.WriteLine(str);
}
else if (str.Length == 3)
{
str = str.Replace("-", "-0");
a43.WriteLine(str);
}
else
{
a43.WriteLine(str);
}
}
else if (str.Length == 1)
{
a43.WriteLine("00" + str);
}
else if (str.Length == 2)
{
a43.WriteLine("0" + str);
}
else
{
a43.WriteLine(str);
}
}
a43.WriteLine("S");
a43.Close();
file1.Close();
str1 = null;
int c = 0;
System.IO.StreamReader file2 = new System.IO.StreamReader("c:\\3.txt");
while ((str = file2.ReadLine()) != "S")
{
if (str.Length > 2)
{
str1 = str1 + str; //read line by line to 4 line
}
c++;
if (c == 4) //end read 4 lines
{
c = 0;
int n = 0;
str1 = "#" + str1;
int sp;
do
{
sina.WriteLine(str1);
board = sina.ReadExisting();
Debug.Log(board);
}
while (board != str1);
sina.WriteLine("$");
str1 = null;
}
}
/*
System.Threading.Thread.Sleep(1000);
sina.WriteLine(str1); //send last line of gcode
file2.Close();
System.Threading.Thread.Sleep(1000);
sina.WriteLine("p" + plusoffset.text);
System.Threading.Thread.Sleep(1000);
sina.WriteLine("m" + minusoffset.text);
System.Threading.Thread.Sleep(1000);
sina.WriteLine("t" + lenghtoffset.text);
System.Threading.Thread.Sleep(1000);
System.IO.StreamReader filecrc = new System.IO.StreamReader("c:\\3.txt");
int crc = 0;
int s1;
int Poffset = 0;
int Moffset = 0;
int Toffset = 0;
int Ecount = 0;
while ((str = filecrc.ReadLine()) != "S")
{
int.TryParse(str, out s1);
crc = crc + s1;
}
filecrc.Close();
int.TryParse(plusoffset.text, out Poffset);
int.TryParse(minusoffset.text, out Moffset);
int.TryParse(lenghtoffset.text, out Toffset);
int.TryParse(counter.text, out Ecount);
crc = crc + Poffset;
crc = crc + Moffset;
crc = crc + Toffset;
crc = crc + Ecount;
System.Threading.Thread.Sleep(1000);
sina.WriteLine("M" + crc.ToString());
System.Threading.Thread.Sleep(1000);
sina.WriteLine("E" + counter.text);
plus.WriteLine(plusoffset.text); plus.Close();
minus.WriteLine(minusoffset.text); minus.Close();
lenght.WriteLine(lenghtoffset.text); lenght.Close();
*/
}
public void plusinc()
{
int value;
int.TryParse(plusoffset.text, out value);
value++;
plusoffset.text = value.ToString();
}
public void plusdec()
{
int value;
int.TryParse(plusoffset.text, out value);
value--;
plusoffset.text = value.ToString();
}
public void minusinc()
{
int value;
int.TryParse(minusoffset.text, out value);
value++;
minusoffset.text = value.ToString();
}
public void minusdec()
{
int value;
int.TryParse(minusoffset.text, out value);
value--;
minusoffset.text = value.ToString();
}
public void lenghtinc()
{
int value;
int.TryParse(lenghtoffset.text, out value);
value++;
lenghtoffset.text = value.ToString();
}
public void lenghtdec()
{
int value;
int.TryParse(lenghtoffset.text, out value);
value--;
lenghtoffset.text = value.ToString();
}
public void load_offest()
{
System.Threading.Thread.Sleep(500);
System.IO.StreamReader offset = new System.IO.StreamReader("c:\\offset.txt");
string stroffset;
stroffset = offset.ReadLine();
plusoffset.text = stroffset;
stroffset = offset.ReadLine();
minusoffset.text = stroffset;
stroffset = offset.ReadLine();
lenghtoffset.text = stroffset;
}
public void clear()
{
sendgcode.text = "";
plusoffset.text = "";
minusoffset.text = "";
lenghtoffset.text = "";
}
void OnSerialLine(string line)
{
Debug.Log("Got a line: " + line);
}
}
In your do-while loop you constantly keep writing the string str1. This already looks quite odd.
And then you do board = sina.ReadExisting() which returns all currently available stream and buffer content.
Reads all immediately available bytes, based on the encoding, in both the stream and the input buffer of the SerialPort object.
Since this is a stream, and there is no such concept as "messages", this might contain multiple lines/messages or only part of one => it is pretty unlikely that this matches exactly what you have just sent.
You would need to come up with a proper protocol like e.g. using a special separator character between messages.
Rather use ReadLine, which reads until the next occurrence of the configured NewLine character (by default \n), or use ReadTo(string) for a string pattern.
Both will "freeze" until the corresponding character is received, making sure the entire message is fully received before continuing in your code.
So instead of
do
{
sina.WriteLine(str1);
board = sina.ReadExisting();
Debug.Log(board);
}
while (board != str1);
sina.WriteLine("$");
you would probably rather do something like e.g.
sina.WriteLine(str1);
do
{
board = sina.ReadLine();
Debug.Log(board);
}
while (board != str1);
sina.WriteLine("$");
IF a loop is what you want to use at all.
In general I would always use two separate threads for sending and receiving. Except you have a clear protocol where you only need to be able to receive in certain slots in the logic.
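In case it helps, here is a rough sketch of that two-thread idea, assuming newline-terminated messages and that these members are added to the same MonoBehaviour that owns sina (the names readThread, keepReading and StartReadLoop are made up for illustration; it needs using System.Threading):
Thread readThread;
volatile bool keepReading;
void StartReadLoop()
{
    keepReading = true;
    readThread = new Thread(() =>
    {
        while (keepReading)
        {
            try
            {
                string line = sina.ReadLine(); // blocks until a full '\n'-terminated message arrives
                Debug.Log("Got a line: " + line);
            }
            catch (TimeoutException)
            {
                // ReadTimeout elapsed without a complete line; just try again
            }
        }
    });
    readThread.IsBackground = true; // don't keep the process alive because of this thread
    readThread.Start();
}
void OnDestroy()
{
    keepReading = false; // let the loop exit, then close the port
    if (sina != null && sina.IsOpen) sina.Close();
}
The sending code can then stay on the main thread, which gives you the usual split between sending and receiving.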

Importing and removing duplicates from a massive amount of text files using C# and Redis

This is a bit of a doozy and it's been a while since I worked with C#, so bear with me:
I'm running a jruby script to iterate through 900 files (5 Mb - 1500 Mb in size) to figure out how many dupes STILL exist within these (already uniq'd) files. I had little luck with awk.
My latest idea was to insert them into a local MongoDB instance like so:
db.collection('hashes').update({ :_id => hash }, { $inc: { count: 1 } }, { upsert: true })
... so that later I could just query it like db.collection.where({ count: { $gt: 1 } }) to get all the dupes.
This is working great except it's been over 24 hours and at the time of writing I'm at 72,532,927 Mongo entries and growing.
I think Ruby's .each_line is bottlenecking the IO hardcore.
So what I'm thinking now is compiling a C# program which fires up a thread PER EACH FILE and inserts the line (md5 hash) into a Redis list.
From there, I could have another compiled C# program simply pop the values off and ignore the save if the count is 1.
So the questions are:
Will using a compiled file reader and multithreading the file reads significantly improve performance?
Is using Redis even necessary? With a tremendous amount of AWS memory, could I not just use the threads to fill some sort of a list atomically and proceed from there?
Thanks in advance.
Updated
New solution (replacing the old one). The main idea is to calculate a dummy hash (just the sum of all chars in the string) for each line and store it in Dictionary<ulong, List<LinePosition>> _hash2LinePositions. Multiple lines in the same stream can share a hash; that is handled by the List in the dictionary value. When the hashes are the same, we read and compare the actual strings from the streams. LinePosition stores info about a line: its position in the stream and its length. I don't have files as huge as yours, but my tests show that it works. Here is the full code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
public class Solution
{
struct LinePosition
{
public long Start;
public long Length;
public LinePosition(long start, long count)
{
Start = start;
Length = count;
}
public override string ToString()
{
return string.Format("Start: {0}, Length: {1}", Start, Length);
}
}
class TextFileHasher : IDisposable
{
readonly Dictionary<ulong, List<LinePosition>> _hash2LinePositions;
readonly Stream _stream;
bool _isDisposed;
public HashSet<ulong> Hashes { get; private set; }
public string Name { get; private set; }
public TextFileHasher(string name, Stream stream)
{
Name = name;
_stream = stream;
_hash2LinePositions = new Dictionary<ulong, List<LinePosition>>();
Hashes = new HashSet<ulong>();
}
public override string ToString()
{
return Name;
}
public void CalculateFileHash()
{
int readByte = -1;
ulong dummyLineHash = 0;
// Line start position in file
long startPosition = 0;
while ((readByte = _stream.ReadByte()) != -1) {
// Read until new line
if (readByte == '\r' || readByte == '\n') {
// If there was data
if (dummyLineHash != 0) {
// Add line hash and line position to the dict
AddToDictAndHash(dummyLineHash, startPosition, _stream.Position - 1 - startPosition);
// Reset line hash
dummyLineHash = 0;
}
}
else {
// Was it new line ?
if (dummyLineHash == 0)
startPosition = _stream.Position - 1;
// Calculate dummy hash
dummyLineHash += (uint)readByte;
}
}
if (dummyLineHash != 0) {
// Add line hash and line position to the dict
AddToDictAndHash(dummyLineHash, startPosition, _stream.Position - startPosition);
// Reset line hash
dummyLineHash = 0;
}
}
public List<LinePosition> GetLinePositions(ulong hash)
{
return _hash2LinePositions[hash];
}
public List<string> GetDuplicates()
{
List<string> duplicates = new List<string>();
foreach (var key in _hash2LinePositions.Keys) {
List<LinePosition> linesPos = _hash2LinePositions[key];
if (linesPos.Count > 1) {
duplicates.AddRange(FindExactDuplicates(linesPos));
}
}
return duplicates;
}
public void Dispose()
{
if (_isDisposed)
return;
_stream.Dispose();
_isDisposed = true;
}
private void AddToDictAndHash(ulong hash, long start, long count)
{
List<LinePosition> linesPosition;
if (!_hash2LinePositions.TryGetValue(hash, out linesPosition)) {
linesPosition = new List<LinePosition>() { new LinePosition(start, count) };
_hash2LinePositions.Add(hash, linesPosition);
}
else {
linesPosition.Add(new LinePosition(start, count));
}
Hashes.Add(hash);
}
public byte[] GetLineAsByteArray(LinePosition prevPos)
{
long len = prevPos.Length;
byte[] lineBytes = new byte[len];
_stream.Seek(prevPos.Start, SeekOrigin.Begin);
_stream.Read(lineBytes, 0, (int)len);
return lineBytes;
}
private List<string> FindExactDuplicates(List<LinePosition> linesPos)
{
List<string> duplicates = new List<string>();
linesPos.Sort((x, y) => x.Length.CompareTo(y.Length));
LinePosition prevPos = linesPos[0];
for (int i = 1; i < linesPos.Count; i++) {
if (prevPos.Length == linesPos[i].Length) {
var prevLineArray = GetLineAsByteArray(prevPos);
var thisLineArray = GetLineAsByteArray(linesPos[i]);
if (prevLineArray.SequenceEqual(thisLineArray)) {
var line = System.Text.Encoding.Default.GetString(prevLineArray);
duplicates.Add(line);
}
#if false
string prevLine = System.Text.Encoding.Default.GetString(prevLineArray);
string thisLine = System.Text.Encoding.Default.GetString(thisLineArray);
Console.WriteLine("PrevLine: {0}\r\nThisLine: {1}", prevLine, thisLine);
StringBuilder sb = new StringBuilder();
sb.Append(prevPos);
sb.Append(" is '");
sb.Append(prevLine);
sb.Append("'. ");
sb.AppendLine();
sb.Append(linesPos[i]);
sb.Append(" is '");
sb.Append(thisLine);
sb.AppendLine("'. ");
sb.Append("Equals => ");
sb.Append(prevLine.CompareTo(thisLine) == 0);
Console.WriteLine(sb.ToString());
#endif
}
else {
prevPos = linesPos[i];
}
}
return duplicates;
}
}
public static void Main(String[] args)
{
List<TextFileHasher> textFileHashers = new List<TextFileHasher>();
string text1 = "abc\r\ncba\r\nabc";
TextFileHasher tfh1 = new TextFileHasher("Text1", new MemoryStream(System.Text.Encoding.Default.GetBytes(text1)));
tfh1.CalculateFileHash();
textFileHashers.Add(tfh1);
string text2 = "def\r\ncba\r\nwet";
TextFileHasher tfh2 = new TextFileHasher("Text2", new MemoryStream(System.Text.Encoding.Default.GetBytes(text2)));
tfh2.CalculateFileHash();
textFileHashers.Add(tfh2);
string text3 = "def\r\nbla\r\nwat";
TextFileHasher tfh3 = new TextFileHasher("Text3", new MemoryStream(System.Text.Encoding.Default.GetBytes(text3)));
tfh3.CalculateFileHash();
textFileHashers.Add(tfh3);
List<string> totalDuplicates = new List<string>();
Dictionary<ulong, Dictionary<TextFileHasher, List<LinePosition>>> totalHashes = new Dictionary<ulong, Dictionary<TextFileHasher, List<LinePosition>>>();
textFileHashers.ForEach(tfh => {
foreach(var dummyHash in tfh.Hashes) {
Dictionary<TextFileHasher, List<LinePosition>> tfh2LinePositions = null;
if (!totalHashes.TryGetValue(dummyHash, out tfh2LinePositions))
totalHashes[dummyHash] = new Dictionary<TextFileHasher, List<LinePosition>>() { { tfh, tfh.GetLinePositions(dummyHash) } };
else {
List<LinePosition> linePositions = null;
if (!tfh2LinePositions.TryGetValue(tfh, out linePositions))
tfh2LinePositions[tfh] = tfh.GetLinePositions(dummyHash);
else
linePositions.AddRange(tfh.GetLinePositions(dummyHash));
}
}
});
HashSet<TextFileHasher> alreadyGotDuplicates = new HashSet<TextFileHasher>();
foreach(var hash in totalHashes.Keys) {
var tfh2LinePositions = totalHashes[hash];
var tfh = tfh2LinePositions.Keys.FirstOrDefault();
// Get duplicates in the TextFileHasher itself
if (tfh != null && !alreadyGotDuplicates.Contains(tfh)) {
totalDuplicates.AddRange(tfh.GetDuplicates());
alreadyGotDuplicates.Add(tfh);
}
if (tfh2LinePositions.Count <= 1) {
continue;
}
// Algo to get duplicates in more than 1 TextFileHashers
var tfhs = tfh2LinePositions.Keys.ToArray();
for (int i = 0; i < tfhs.Length; i++) {
var tfh1Positions = tfhs[i].GetLinePositions(hash);
for (int j = i + 1; j < tfhs.Length; j++) {
var tfh2Positions = tfhs[j].GetLinePositions(hash);
for (int k = 0; k < tfh1Positions.Count; k++) {
var tfh1Pos = tfh1Positions[k];
var tfh1ByteArray = tfhs[i].GetLineAsByteArray(tfh1Pos);
for (int m = 0; m < tfh2Positions.Count; m++) {
var tfh2Pos = tfh2Positions[m];
if (tfh1Pos.Length != tfh2Pos.Length)
continue;
var tfh2ByteArray = tfhs[j].GetLineAsByteArray(tfh2Pos);
if (tfh1ByteArray.SequenceEqual(tfh2ByteArray)) {
var line = System.Text.Encoding.Default.GetString(tfh1ByteArray);
totalDuplicates.Add(line);
}
}
}
}
}
}
Console.WriteLine();
if (totalDuplicates.Count > 0) {
Console.WriteLine("Total number of duplicates: {0}", totalDuplicates.Count);
Console.WriteLine("#######################");
totalDuplicates.ForEach(x => Console.WriteLine("{0}", x));
Console.WriteLine("#######################");
}
// Free resources
foreach (var tfh in textFileHashers)
tfh.Dispose();
}
}
If you have tons of RAM... you guys are overthinking it...
var fileLines = File.ReadAllLines(@"c:\file.csv").Distinct();
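If you also want to know which lines are duplicated (rather than just drop them), here is a hedged sketch along the same in-memory lines, with a placeholder folder and file pattern; it needs System.Collections.Generic, System.IO and System.Linq:
var counts = new Dictionary<string, int>();
foreach (var path in Directory.EnumerateFiles(@"c:\hashes", "*.txt")) // placeholder location
{
    foreach (var line in File.ReadLines(path)) // streams line by line instead of loading the whole file
    {
        int c;
        counts[line] = counts.TryGetValue(line, out c) ? c + 1 : 1;
    }
}
var dupes = counts.Where(kv => kv.Value > 1).Select(kv => kv.Key);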

C# - A faster alternative to Convert.ToSingle()

I'm working on a program which reads millions of floating point numbers from a text file. This program runs inside of a game that I'm designing, so I need it to be fast (I'm loading an obj file). So far, loading a relatively small file takes about a minute (without precompilation) because of the slow speed of Convert.ToSingle(). Is there a faster way to do this?
EDIT: Here's the code I use to parse the Obj file
http://pastebin.com/TfgEge9J
using System;
using System.IO;
using System.Collections.Generic;
using OpenTK.Math;
using System.Drawing;
using PlatformLib;
public class ObjMeshLoader
{
public static StreamReader[] LoadMeshes(string fileName)
{
StreamReader mreader = new StreamReader(PlatformLib.Platform.openFile(fileName));
MemoryStream current = null;
List<MemoryStream> mstreams = new List<MemoryStream>();
StreamWriter mwriter = null;
if (!mreader.ReadLine().Contains("#"))
{
mreader.BaseStream.Close();
throw new Exception("Invalid header");
}
while (!mreader.EndOfStream)
{
string cmd = mreader.ReadLine();
string line = cmd;
line = line.Trim(splitCharacters);
line = line.Replace("  ", " ");
string[] parameters = line.Split(splitCharacters);
if (parameters[0] == "mtllib")
{
loadMaterials(parameters[1]);
}
if (parameters[0] == "o")
{
if (mwriter != null)
{
mwriter.Flush();
current.Position = 0;
}
current = new MemoryStream();
mwriter = new StreamWriter(current);
mwriter.WriteLine(parameters[1]);
mstreams.Add(current);
}
else
{
if (mwriter != null)
{
mwriter.WriteLine(cmd);
mwriter.Flush();
}
}
}
mwriter.Flush();
current.Position = 0;
List<StreamReader> readers = new List<StreamReader>();
foreach (MemoryStream e in mstreams)
{
e.Position = 0;
StreamReader sreader = new StreamReader(e);
readers.Add(sreader);
}
return readers.ToArray();
}
public static bool Load(ObjMesh mesh, string fileName)
{
try
{
using (StreamReader streamReader = new StreamReader(Platform.openFile(fileName)))
{
Load(mesh, streamReader);
streamReader.Close();
return true;
}
}
catch { return false; }
}
public static bool Load2(ObjMesh mesh, StreamReader streamReader, ObjMesh prevmesh)
{
if (prevmesh != null)
{
//mesh.Vertices = prevmesh.Vertices;
}
try
{
//streamReader.BaseStream.Position = 0;
Load(mesh, streamReader);
streamReader.Close();
#if DEBUG
Console.WriteLine("Loaded "+mesh.Triangles.Length.ToString()+" triangles and"+mesh.Quads.Length.ToString()+" quadrilaterals parsed, with a grand total of "+mesh.Vertices.Length.ToString()+" vertices.");
#endif
return true;
}
catch (Exception er) { Console.WriteLine(er); return false; }
}
static char[] splitCharacters = new char[] { ' ' };
static List<Vector3> vertices;
static List<Vector3> normals;
static List<Vector2> texCoords;
static Dictionary<ObjMesh.ObjVertex, int> objVerticesIndexDictionary;
static List<ObjMesh.ObjVertex> objVertices;
static List<ObjMesh.ObjTriangle> objTriangles;
static List<ObjMesh.ObjQuad> objQuads;
static Dictionary<string, Bitmap> materials = new Dictionary<string, Bitmap>();
static void loadMaterials(string path)
{
StreamReader mreader = new StreamReader(Platform.openFile(path));
string current = "";
bool isfound = false;
while (!mreader.EndOfStream)
{
string line = mreader.ReadLine();
line = line.Trim(splitCharacters);
line = line.Replace("  ", " ");
string[] parameters = line.Split(splitCharacters);
if (parameters[0] == "newmtl")
{
if (materials.ContainsKey(parameters[1]))
{
isfound = true;
}
else
{
current = parameters[1];
}
}
if (parameters[0] == "map_Kd")
{
if (!isfound)
{
string filename = "";
for (int i = 1; i < parameters.Length; i++)
{
filename += parameters[i];
}
string searcher = "\\" + "\\";
filename = filename.Replace(searcher, "\\");
Bitmap mymap = new Bitmap(filename);
materials.Add(current, mymap);
isfound = false;
}
}
}
}
static float parsefloat(string val)
{
return Convert.ToSingle(val);
}
int remaining = 0;
static string GetLine(string text, ref int pos)
{
string retval = text.Substring(pos, text.IndexOf(Environment.NewLine, pos));
pos = text.IndexOf(Environment.NewLine, pos);
return retval;
}
static void Load(ObjMesh mesh, StreamReader textReader)
{
//try {
//vertices = null;
//objVertices = null;
if (vertices == null)
{
vertices = new List<Vector3>();
}
if (normals == null)
{
normals = new List<Vector3>();
}
if (texCoords == null)
{
texCoords = new List<Vector2>();
}
if (objVerticesIndexDictionary == null)
{
objVerticesIndexDictionary = new Dictionary<ObjMesh.ObjVertex, int>();
}
if (objVertices == null)
{
objVertices = new List<ObjMesh.ObjVertex>();
}
objTriangles = new List<ObjMesh.ObjTriangle>();
objQuads = new List<ObjMesh.ObjQuad>();
mesh.vertexPositionOffset = vertices.Count;
string line;
string alltext = textReader.ReadToEnd();
int pos = 0;
while ((line = GetLine(alltext, pos)) != null)
{
if (line.Length < 2)
{
break;
}
//line = line.Trim(splitCharacters);
//line = line.Replace(" ", " ");
string[] parameters = line.Split(splitCharacters);
switch (parameters[0])
{
case "usemtl":
//Material specification
try
{
mesh.Material = materials[parameters[1]];
}
catch (KeyNotFoundException)
{
Console.WriteLine("WARNING: Texture parse failure: " + parameters[1]);
}
break;
case "p": // Point
break;
case "v": // Vertex
float x = parsefloat(parameters[1]);
float y = parsefloat(parameters[2]);
float z = parsefloat(parameters[3]);
vertices.Add(new Vector3(x, y, z));
break;
case "vt": // TexCoord
float u = parsefloat(parameters[1]);
float v = parsefloat(parameters[2]);
texCoords.Add(new Vector2(u, v));
break;
case "vn": // Normal
float nx = parsefloat(parameters[1]);
float ny = parsefloat(parameters[2]);
float nz = parsefloat(parameters[3]);
normals.Add(new Vector3(nx, ny, nz));
break;
case "f":
switch (parameters.Length)
{
case 4:
ObjMesh.ObjTriangle objTriangle = new ObjMesh.ObjTriangle();
objTriangle.Index0 = ParseFaceParameter(parameters[1]);
objTriangle.Index1 = ParseFaceParameter(parameters[2]);
objTriangle.Index2 = ParseFaceParameter(parameters[3]);
objTriangles.Add(objTriangle);
break;
case 5:
ObjMesh.ObjQuad objQuad = new ObjMesh.ObjQuad();
objQuad.Index0 = ParseFaceParameter(parameters[1]);
objQuad.Index1 = ParseFaceParameter(parameters[2]);
objQuad.Index2 = ParseFaceParameter(parameters[3]);
objQuad.Index3 = ParseFaceParameter(parameters[4]);
objQuads.Add(objQuad);
break;
}
break;
}
}
//}catch(Exception er) {
// Console.WriteLine(er);
// Console.WriteLine("Successfully recovered. Bounds/Collision checking may fail though");
//}
mesh.Vertices = objVertices.ToArray();
mesh.Triangles = objTriangles.ToArray();
mesh.Quads = objQuads.ToArray();
textReader.BaseStream.Close();
}
public static void Clear()
{
objVerticesIndexDictionary = null;
vertices = null;
normals = null;
texCoords = null;
objVertices = null;
objTriangles = null;
objQuads = null;
}
static char[] faceParamaterSplitter = new char[] { '/' };
static int ParseFaceParameter(string faceParameter)
{
Vector3 vertex = new Vector3();
Vector2 texCoord = new Vector2();
Vector3 normal = new Vector3();
string[] parameters = faceParameter.Split(faceParamaterSplitter);
int vertexIndex = Convert.ToInt32(parameters[0]);
if (vertexIndex < 0) vertexIndex = vertices.Count + vertexIndex;
else vertexIndex = vertexIndex - 1;
//Hmm. This seems to be broken.
try
{
vertex = vertices[vertexIndex];
}
catch (Exception)
{
throw new Exception("Vertex recognition failure at " + vertexIndex.ToString());
}
if (parameters.Length > 1)
{
int texCoordIndex = Convert.ToInt32(parameters[1]);
if (texCoordIndex < 0) texCoordIndex = texCoords.Count + texCoordIndex;
else texCoordIndex = texCoordIndex - 1;
try
{
texCoord = texCoords[texCoordIndex];
}
catch (Exception)
{
Console.WriteLine("ERR: Vertex " + vertexIndex + " not found. ");
throw new DllNotFoundException(vertexIndex.ToString());
}
}
if (parameters.Length > 2)
{
int normalIndex = Convert.ToInt32(parameters[2]);
if (normalIndex < 0) normalIndex = normals.Count + normalIndex;
else normalIndex = normalIndex - 1;
normal = normals[normalIndex];
}
return FindOrAddObjVertex(ref vertex, ref texCoord, ref normal);
}
static int FindOrAddObjVertex(ref Vector3 vertex, ref Vector2 texCoord, ref Vector3 normal)
{
ObjMesh.ObjVertex newObjVertex = new ObjMesh.ObjVertex();
newObjVertex.Vertex = vertex;
newObjVertex.TexCoord = texCoord;
newObjVertex.Normal = normal;
int index;
if (objVerticesIndexDictionary.TryGetValue(newObjVertex, out index))
{
return index;
}
else
{
objVertices.Add(newObjVertex);
objVerticesIndexDictionary[newObjVertex] = objVertices.Count - 1;
return objVertices.Count - 1;
}
}
}
Based on your description and the code you've posted, I'm going to bet that your problem isn't with the reading, the parsing, or the way you're adding things to your collections. The most likely problem is that your ObjMesh.Objvertex structure doesn't override GetHashCode. (I'm assuming that you're using code similar to http://www.opentk.com/files/ObjMesh.cs.
If you're not overriding GetHashCode, then your objVerticesIndexDictionary is going to perform very much like a linear list. That would account for the performance problem that you're experiencing.
I suggest that you look into providing a good GetHashCode method for your ObjMesh.Objvertex class.
See Why is ValueType.GetHashCode() implemented like it is? for information about the default GetHashCode implementation for value types and why it's not suitable for use in a hash table or dictionary.
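For illustration, here is a minimal sketch of what such overrides could look like, assuming ObjVertex is a struct holding the Vertex, TexCoord and Normal fields used above (the exact layout comes from the linked ObjMesh.cs and may differ; Vector2/Vector3 are the OpenTK types the question already uses):
public struct ObjVertex : IEquatable<ObjVertex>
{
    public Vector3 Vertex;
    public Vector2 TexCoord;
    public Vector3 Normal;
    public bool Equals(ObjVertex other)
    {
        return Vertex.Equals(other.Vertex)
            && TexCoord.Equals(other.TexCoord)
            && Normal.Equals(other.Normal);
    }
    public override bool Equals(object obj)
    {
        return obj is ObjVertex && Equals((ObjVertex)obj);
    }
    public override int GetHashCode()
    {
        // Combine the field hashes so the dictionary buckets spread out
        // instead of degenerating into a linear scan.
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + Vertex.GetHashCode();
            hash = hash * 31 + TexCoord.GetHashCode();
            hash = hash * 31 + Normal.GetHashCode();
            return hash;
        }
    }
}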
Edit 3: The problem is NOT with the parsing.
It's with how you read the file. If you read it properly, it would be faster; however, it seems like your reading is unusually slow. My original suspicion was that it was because of excess allocations, but it seems like there might be other problems with your code too, since that doesn't explain the entire slowdown.
Nevertheless, here's a piece of code I made that completely avoids all object allocations:
static void Main(string[] args)
{
long counter = 0;
var sw = Stopwatch.StartNew();
var sb = new StringBuilder();
var text = File.ReadAllText("spacestation.obj");
for (int i = 0; i < text.Length; i++)
{
int start = i;
while (i < text.Length &&
(char.IsDigit(text[i]) || text[i] == '-' || text[i] == '.'))
{ i++; }
if (i > start)
{
sb.Append(text, start, i - start); //Copy data to the buffer
float value = Parse(sb); //Parse the data
sb.Remove(0, sb.Length); //Clear the buffer
counter++;
}
}
sw.Stop();
Console.WriteLine("{0:N0}", sw.Elapsed.TotalSeconds); //Only a few ms
}
with this parser:
const int MIN_POW_10 = -16, MAX_POW_10 = 16,
NUM_POWS_10 = MAX_POW_10 - MIN_POW_10 + 1;
static readonly float[] pow10 = GenerateLookupTable();
static float[] GenerateLookupTable()
{
var result = new float[(-MIN_POW_10 + MAX_POW_10) * 10];
for (int i = 0; i < result.Length; i++)
result[i] = (float)((i / NUM_POWS_10) *
Math.Pow(10, i % NUM_POWS_10 + MIN_POW_10));
return result;
}
static float Parse(StringBuilder str)
{
float result = 0;
bool negate = false;
int len = str.Length;
int decimalIndex = str.Length;
for (int i = len - 1; i >= 0; i--)
if (str[i] == '.')
{ decimalIndex = i; break; }
int offset = -MIN_POW_10 + decimalIndex;
for (int i = 0; i < decimalIndex; i++)
if (i != decimalIndex && str[i] != '-')
result += pow10[(str[i] - '0') * NUM_POWS_10 + offset - i - 1];
else if (str[i] == '-')
negate = true;
for (int i = decimalIndex + 1; i < len; i++)
if (i != decimalIndex)
result += pow10[(str[i] - '0') * NUM_POWS_10 + offset - i];
if (negate)
result = -result;
return result;
}
it happens in a small fraction of a second.
Of course, this parser is poorly tested and has these current restrictions (and more):
Don't try parsing more digits (decimal and whole) than provided for in the array.
No error handling whatsoever.
Only parses decimals, not exponents! i.e. it can parse 1234.56 but not 1.23456E3.
Doesn't care about globalization/localization. Your file is only in a single format, so there's no point caring about that kind of stuff because you're probably using English to store it anyway.
It seems like you won't necessarily need this much overkill, but take a look at your code and try to figure out the bottleneck. It seems to be neither the reading nor the parsing.
Have you measured that the speed problem is really caused by Convert.ToSingle?
In the code you included, I see you create lists and dictionaries like this:
normals = new List<Vector3>();
texCoords = new List<Vector2>();
objVerticesIndexDictionary = new Dictionary<ObjMesh.ObjVertex, int>();
And then when you read the file, you add in the collection one item at a time.
One of the possible optimizations would be to save the total number of normals, texCoords, indexes and so on at the start of the file, and then initialize these collections with those numbers. This will pre-allocate the buffers used by the collections, so adding items to them will be pretty fast.
So the collection creation should look like this:
// These values should be stored at the beginning of the file
int totalNormals = Convert.ToInt32(textReader.ReadLine());
int totalTexCoords = Convert.ToInt32(textReader.ReadLine());
int totalIndexes = Convert.ToInt32(textReader.ReadLine());
normals = new List<Vector3>(totalNormals);
texCoords = new List<Vector2>(totalTexCoords);
objVerticesIndexDictionary = new Dictionary<ObjMesh.ObjVertex, int>(totalIndexes);
See List<T> Constructor (Int32) and Dictionary<TKey, TValue> Constructor (Int32).
This related question is for C++, but is definitely worth a read.
For reading as fast as possible, you're probably going to want to map the file into memory and then parse using some custom floating point parser, especially if you know the numbers are always in a specific format (i.e. you're the one generating the input files in the first place).
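Here is a hedged sketch of the memory-mapping part (System.IO.MemoryMappedFiles, available from .NET 4.0; "model.obj" is a placeholder path) - the custom float parser would then consume these bytes instead of strings:
using (var mmf = MemoryMappedFile.CreateFromFile("model.obj"))
using (var view = mmf.CreateViewStream())
{
    int b;
    while ((b = view.ReadByte()) != -1)
    {
        // Feed each byte into your own token/float parser here,
        // e.g. something like the allocation-free parser shown in another answer.
    }
}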
I tested .NET string parsing once and the fastest function to parse text was the old VB Val() function. You could pull the relevant parts out of Microsoft.VisualBasic.Conversion.Val(string).
Converting String to numbers
Comparison of relative test times (ms / 100000 conversions)
Double  Single  Integer  Int (w/ decimal point)
  14      13       6       16    Val(Str)
  14      14       6       16    Cxx(Val(Str))   e.g. CSng(Val(str))
  22      21      17       e!    Convert.To(str)
  23      21      16       e!    XX.Parse(str)   e.g. Single.Parse()
  30      31      31       32    Cxx(str)
Val: fastest, part of VisualBasic dll, skips non-numeric,
ConvertTo and Parse: slower, part of core, exception on bad format (including decimal point)
Cxx: slowest (for strings), part of core, consistent times across formats
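For reference, a small example of calling Val from C# (it lives in the Microsoft.VisualBasic assembly, which has to be referenced explicitly):
// using Microsoft.VisualBasic; plus a reference to Microsoft.VisualBasic.dll
double d = Conversion.Val("1.2345");         // 1.2345
float f = (float)Conversion.Val("12.5abc");  // 12.5 - Val stops at the first character it can't read as numeric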

C# : Console.Read() does not get the "right" input

I have the following code:
The actual problem is in the non-commented-out code.
I want to get the player count (max = 4), but when I ask via Console.Read() and enter any int from 1 to 4, the value I get is 48 + the entered digit.
The only way I can get the "real" input is by using Console.ReadLine(), but this does not give me an integer; it returns a string, and I do not know how to convert a string (of digits) to an integer in C#, because I am new and I only found ToString() and not ToNumber.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace eve_calc_tool
{
class Program
{
int players;
int units;
int active_units;
int inactive_units;
int finished_units;
int lastDiceNumber = 0;
bool game_state;
public static void Main(string[] args)
{
int count_game = 0;
//Console.Title = "Mensch ärger dich nicht";
//Console.WriteLine("\tNeues Spiel wird");
//Console.WriteLine("\t...geladen");
//System.Threading.Thread.Sleep(5000);
//Console.Clear();
//Console.WriteLine("Neues Spiel wird gestartet, bitte haben sie etwas Geduld");
//Console.Title = "Spiel " + count_game.ToString();
//Console.Clear();
//string prevText = "Anzahl der Spieler: ";
//Console.WriteLine(prevText);
string read = Console.ReadLine();
/*Program game = new Program();
game.players = read;
game.setPlayers(game.players);
if (game.players > 0 && 5 > game.players)
{
game.firstRound();
}*/
string readagain = read;
Console.ReadLine();
}
/*
bool setPlayers(int amount)
{
players = amount;
if (players > 0)
{
return true;
}
else
{
return false;
}
}
bool createGame()
{
inactive_units = units = getPlayers() * 4;
active_units = 0;
finished_units = 0;
game_state = true;
if (game_state == true)
{
return true;
}
else
{
return false;
}
}
int getPlayers()
{
return players;
}
private static readonly Random random = new Random();
private static readonly object syncLock = new object();
public static int RandomNumber(int min, int max)
{
lock (syncLock)
{ // synchronize
return random.Next(min, max);
}
}
int rollDice()
{
lastDiceNumber = RandomNumber(1,6);
return lastDiceNumber;
}
int firstRound()
{
int[] results = new int[getPlayers()];
for (int i = 0; i < getPlayers(); i++)
{
results[i] = rollDice();
}
Array.Sort(results);
return results[3];
}
*/
}
}
You can use
int convertedNumber = int.Parse(stringToConvert);
or
int convertedNumber;
int.TryParse(stringToConvert, out convertedNumber);
to convert strings to integers.
You should really use TryParse instead so that you can catch if the user doesn't input a number. int.Parse will throw an exception if it tries to convert a string that is not numeric.
int convertedNumber = 0;
if (!int.TryParse(stringToConvert, out convertedNumber))
{
// this code will execute if the user did not put
// in an actual number. For example, if the user entered "a".
}
The TryParse method returns a boolean value which will tell you whether the conversion was successful. If it was successful, the converted value will be passed through the out parameter.
To convert your string to an integer, use int.Parse(yourString).
The reason you get 48 + the digit is that Console.Read returns the character code of the key that was pressed - in this case, the Unicode value of the digit character that was typed (the character '0' has code 48).
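Putting both points together, a small illustrative snippet:
// Option 1: Console.Read returns a character code ('0' is 48, '1' is 49, ...)
int key = Console.Read();
int players = key - '0';                // turns the character code back into 0..9
// Option 2: Console.ReadLine returns a string; convert it with int.TryParse
string input = Console.ReadLine();
int playerCount;
if (int.TryParse(input, out playerCount) && playerCount >= 1 && playerCount <= 4)
{
    // valid player count
}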
