Elency Solutions CSV Library Error in C#

I am using the Elency Solutions CSV library for C# to save and load some data from a file.
My code saves and loads correctly, but when I load and then try to save, an error occurs saying that another process is using the file.
The load method is this:
private void loadfile(string name)
{
    int key = 696969;
    CsvReader read = new CsvReader("data.csv");
    try
    {
        do
        {
            read.ReadNextRecord();
        } while (name != read.Fields[0]);
        int decAgain = int.Parse(read.Fields[1], System.Globalization.NumberStyles.HexNumber); //convert to int
        int dec = decAgain ^ key;
        MessageBox.Show(dec.ToString());
    }
    catch (Exception)
    {
        MessageBox.Show("Not Found");
    }
    read = null;
}
As you can see, I am sort of disposing of the "read" object by setting it to null.
Here is the save method:
private void savefile(string encrypted, string name)
{
    CsvFile file = new CsvFile();
    CsvRecord rec = new CsvRecord();
    CsvWriter write = new CsvWriter();
    rec.Fields.Add(name);
    rec.Fields.Add(encrypted);
    file.Records.Add(rec);
    write.AppendCsv(file, "data.csv");
    file = null;
    rec = null;
    write = null;
}
It always gets stuck on AppendCsv.
I do understand the problem: the reader is not being closed successfully. How can I correctly close the file?
NB: I have tried read.Dispose() but it is not working.
Can you please help me out?
Regards

Use using to automatically dispose of the objects. It may solve your issue.
private void savefile(string encrypted, string name)
{
    using (CsvFile file = new CsvFile())
    {
        using (CsvRecord rec = new CsvRecord())
        {
            using (CsvWriter write = new CsvWriter())
            {
                rec.Fields.Add(name);
                rec.Fields.Add(encrypted);
                file.Records.Add(rec);
                write.AppendCsv(file, "data.csv");
            }
        }
    }
}
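The reader is what is actually keeping data.csv locked, so apply the same pattern there. A minimal sketch of the load method, assuming CsvReader implements IDisposable (since you were able to call read.Dispose(), it should):
private void loadfile(string name)
{
    int key = 696969;
    // using guarantees the reader releases its handle on data.csv,
    // even when the record is not found and an exception is thrown.
    using (CsvReader read = new CsvReader("data.csv"))
    {
        try
        {
            do
            {
                read.ReadNextRecord();
            } while (name != read.Fields[0]);
            int decAgain = int.Parse(read.Fields[1], System.Globalization.NumberStyles.HexNumber); //convert to int
            int dec = decAgain ^ key;
            MessageBox.Show(dec.ToString());
        }
        catch (Exception)
        {
            MessageBox.Show("Not Found");
        }
    }
}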

Related

C# (Xamarin project) not able to find path to the file

I have a problem when I'm trying to read a JSON file (or any file): the app is not able to find it. I have tried everything, even an absolute path (the error is almost the same - DirectoryNotFound).
This is the code:
private void LoadJson()
{
    using (var r = new StreamReader("quizQuestions.json"))
    {
        string json = r.ReadToEnd();
        items = JsonConvert.DeserializeObject<List<Questions>>(json);
    }
}
I even tried using Directory.GetCurrentDirectory(), but it returns only "/". I don't know where the mistake is or whether I forgot to set something. I have tried to find answers everywhere, but I was not able to find anything about this.
Make sure the Build Action of the file is set as Content or as an Asset and give this a try.
private void LoadJson()
{
    AssetManager assets = this.Assets;
    using (var r = new StreamReader(assets.Open("quizQuestions.json")))
    {
        string json = r.ReadToEnd();
        items = JsonConvert.DeserializeObject<List<Questions>>(json);
    }
}
You can configure the file as Embedded Resource and then access it like this:
public static Stream GetEmbeddedResourceStream(Assembly assembly, string resourceFileName)
{
    var resourceNames = assembly.GetManifestResourceNames();
    var resourcePaths = resourceNames
        .Where(x => x.EndsWith(resourceFileName, StringComparison.CurrentCultureIgnoreCase))
        .ToArray();
    if (resourcePaths.Length == 1)
    {
        return assembly.GetManifestResourceStream(resourcePaths.Single());
    }
    return null; // or throw an exception
}
private void LoadJson()
{
    Assembly assembly = GetAssemblyContainingTheJson();
    // GetEmbeddedResourceStream returns a raw Stream, so wrap it in a StreamReader to read the text.
    using (var stream = GetEmbeddedResourceStream(assembly, "quizQuestions.json"))
    using (var r = new StreamReader(stream))
    {
        string json = r.ReadToEnd();
        items = JsonConvert.DeserializeObject<List<Questions>>(json);
    }
}
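If the helper returns null, it usually means no resource name matched. A quick sanity check (a small sketch, assuming the Questions type lives in the same assembly as the embedded file) is to dump the manifest resource names and look at the exact strings:
// Dump every embedded resource name the compiler produced; the one you need usually looks like
// "<DefaultNamespace>.<Folder>.quizQuestions.json".
foreach (var name in typeof(Questions).Assembly.GetManifestResourceNames())
{
    System.Diagnostics.Debug.WriteLine(name);
}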

How to pass an XML document in code instead of a file path?

I'm trying to avoid creating a file and instead pass an XML document straight to SkiaSharp's Load method. I mean, is there a way to imitate a path? Here is the code:
public IActionResult svgToPng(string itemId, string mode = "
{
    var svgSrc = new XmlDocument();
    svgSrc.LoadXml(/*Some xml code*/);
    string svgSaveAs = "save file path";
    var quality = 100;
    var svg = new SkiaSharp.Extended.Svg.SKSvg();
    var pict = svg.Load(svgSrc); // HERE, it needs to be a path, not XmlDocument, but I want to pass it straight
    var dimen = new SkiaSharp.SKSizeI
    (
        (int) Math.Ceiling(pict.CullRect.Width),
        (int) Math.Ceiling(pict.CullRect.Height)
    );
    var matrix = SKMatrix.MakeScale(1, 1);
    var img = SKImage.FromPicture(pict, dimen, matrix);
    // Convert to PNG
    var skdata = img.Encode(SkiaSharp.SKEncodedImageFormat.Png, quality);
    using (var stream = System.IO.File.OpenWrite(svgSaveAs))
    {
        skdata.SaveTo(stream);
    }
    ViewData["Content"] = "PNG file was created out of SVG.";
    return View();
}
The Load method seems to be this:
public SKPicture Load(string filename)
{
    using (var stream = File.OpenRead(filename))
    {
        return Load(stream);
    }
}
Look at the code of that library:
https://github.com/mono/SkiaSharp.Extended/blob/master/SkiaSharp.Extended.Svg/source/SkiaSharp.Extended.Svg.Shared/SKSvg.cs
The Load method has multiple implementations:
public SKPicture Load(string filename)
{
    using (var stream = File.OpenRead(filename))
    {
        return Load(stream);
    }
}
public SKPicture Load(Stream stream)
{
    using (var reader = XmlReader.Create(stream, xmlReaderSettings, CreateSvgXmlContext()))
    {
        return Load(reader);
    }
}
public SKPicture Load(XmlReader reader)
{
    return Load(XDocument.Load(reader));
}
You will need to pick one of them and use it. Now, nothing stops you from getting the code and adding one extra Load for an XML string for example, but since this is a library you do not control, I'd stick to what you are given.
You could use the XmlReader version; that's probably the closest one to what you want.
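For example, an in-memory XmlDocument can be handed to the Load(XmlReader) overload via XmlNodeReader, so no temporary file is needed. A sketch based on the code above (svgSrc is the XmlDocument from the question; assumes using System.Xml is in scope):
var svgSrc = new XmlDocument();
svgSrc.LoadXml(/*Some xml code*/);

var svg = new SkiaSharp.Extended.Svg.SKSvg();
SkiaSharp.SKPicture pict;
// XmlNodeReader exposes the in-memory document as an XmlReader, so no file path is required.
using (var reader = new XmlNodeReader(svgSrc))
{
    pict = svg.Load(reader); // uses the Load(XmlReader) overload shown above
}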

C# pass file text to list when file is entered via command line argument

I am writing a program for an assignment that is meant to read two text files and use their data to write a third text file. I was instructed to pass the contents of one file to a list. I have done something similar before, passing the contents to an array (see below), but I can't seem to get it to work with a list.
Here is what I have done in the past with arrays:
StreamReader f1 = new StreamReader(args[0]);
StreamReader f2 = new StreamReader(args[1]);
StreamWriter p = new StreamWriter(args[2]);
double[] array1 = new double[20];
double[] array2 = new double[20];
double[] array3 = new double[20];
string line;
int index;
double value;
while ((line = f1.ReadLine()) != null)
{
    string[] currentLine = line.Split('|');
    index = Convert.ToInt16(currentLine[0]);
    value = Convert.ToDouble(currentLine[1]);
    array1[index] = value;
}
If it is of any interest, this is my current setup:
static void Main(String[] args)
{
    // Create variables to hold the 3 elements of each item that you will read from the file
    // Create variables for all 3 files (2 for READ, 1 for WRITE)
    int ID;
    string InvName;
    int Number;
    string IDString;
    string NumberString;
    string line;
    List<InventoryNode> Inventory = new List<InventoryNode>();
    InventoryNode Item = null;
    StreamReader f1 = new StreamReader(args[0]);
    StreamReader f2 = new StreamReader(args[1]);
    StreamWriter p = new StreamWriter(args[2]);
    // Read each item from the Update File and process the data
    // Data is separated by pipe |
If you want to fill a List instead of an array, you can just call Add or Insert on it. Based on your code, you can do Inventory.Add(Item).
while ((line = f1.ReadLine()) != null)
{
    string[] currentLine = line.Split('|');
    // The property names here mirror the earlier array example - use the fields your InventoryNode actually has.
    Item = new InventoryNode
    {
        Index = Convert.ToInt16(currentLine[0]),
        Value = Convert.ToDouble(currentLine[1])
    };
    Inventory.Add(Item);
}
like this.
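As a side note, if the values already live in an array, LINQ can convert the whole array in one call (this assumes a using System.Linq directive is in scope):
// Copies every element of the existing array into a new List<double> in one call.
List<double> values = array1.ToList();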
If I understand it correctly, all you want to do is read two input files, parse the data in these files in a particular format (in this case int|double), and then write it to a new file. If this is the requirement, please try the following code. Since it is not clear how you want the data to be presented in the third file, I have kept the format as it is (i.e. int|double).
static void Main(string[] args)
{
if (args == null || args.Length < 3)
{
Console.WriteLine("Wrong Input");
return;
}
if (!ValidateFilePath(args[0]) || !ValidateFilePath(args[1]))
{
return;
}
Dictionary<int, double> parsedFileData = new Dictionary<int, double>();
//Read the first file
ReadFileData(args[0], parsedFileData);
//Read second file
ReadFileData(args[1], parsedFileData);
//Write to third file
WriteFileData(args[2], parsedFileData);
}
private static bool ValidateFilePath(string filePath)
{
try
{
return File.Exists(filePath);
}
catch (Exception)
{
Console.WriteLine($"Failed to read file : {filePath}");
return false;
}
}
private static void ReadFileData(string filePath, Dictionary<int, double> parsedFileData)
{
try
{
using (StreamReader fileStream = new StreamReader(filePath))
{
string line;
while ((line = fileStream.ReadLine()) != null)
{
string[] currentLine = line.Split('|');
int index = Convert.ToInt16(currentLine[0]);
double value = Convert.ToDouble(currentLine[1]);
parsedFileData.Add(index, value);
}
}
}
catch (Exception ex)
{
Console.WriteLine($"Exception : {ex.Message}");
}
}
private static void WriteFileData(string filePath, Dictionary<int, double> parsedFileData)
{
try
{
using (StreamWriter fileStream = new StreamWriter(filePath))
{
foreach (var parsedLine in parsedFileData)
{
var line = parsedLine.Key + "|" + parsedLine.Value;
fileStream.WriteLine(line);
}
}
}
catch (Exception ex)
{
Console.WriteLine($"Exception : {ex.Message}");
}
}
There are a few things you should always remember while writing C# code:
1) Validate command-line inputs before using them.
2) Always look out for any class that has a Dispose method, and instantiate it inside a using block.
3) Have a proper mechanism in the code to catch exceptions; otherwise your program will crash at runtime on invalid inputs, or on inputs that you could not validate.

How to convert a large CSV file into JSON without using Split (Out Of Memory issue) in C#

I am trying to parse a 300 MB CSV file and save it to MongoDB. In order to do that, I need to convert the CSV file into a list of BsonDocuments, each containing the key-value pairs that make up a document; each row in the CSV file becomes a new BsonDocument.
After every couple of minutes of parallel testing, I get an OOM exception on the split operation.
I've read this article, which is very interesting, but I couldn't find any practical solution that I could apply to such huge files.
I looked into different CSV helpers, but couldn't find anything that solves this issue.
Any help will be much appreciated.
You should be able to read it line by line like this:
public static void Main()
{
    using (StreamReader sr = new StreamReader(path))
    {
        string[] headers = null;
        string[] curLine;
        while ((curLine = sr.ReadLine().Split(',')) != null)
        {
            if (headers == null)
            {
                headers = curLine;
            }
            else
            {
                processLine(headers, curLine);
            }
        }
    }
}
public static void processLine(string[] headers, string[] line)
{
    for (int i = 0; i < headers.Length; i++)
    {
        string header = headers[i];
        string value = line[i];
        // Now you have individual header/value pairs that you can put into mongodb
    }
}
I've never used mongodb and I don't know the structure of your csv or your mongo, so I won't be able to help much there. Hopefully you can get it from here though. If not, edit your post with some more details about how you need to structure your mongodb and hopefully somebody will post a more helpful answer.
Thank you @dbc, that worked!
@ashbygeek, I needed to add this to your code:
while (!sr.EndOfStream && (curLine = sr.ReadLine().Split('\t')) != null)
{
    //do process
}
So I am uploading my code, in which I get my big CSV file from an Azure blob and insert into MongoDB in batches instead of document by document.
I also created my own primary-key hash and a unique index in order to identify duplicate documents; if a batch insert fails, I insert the documents one by one in order to identify the duplicate.
I hope it helps someone in the future.
using (TextFieldParser parser = new TextFieldParser(blockBlob2.OpenRead()))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters("\t");
bool headerWritten = false;
List<BsonDocument> listToInsert = new List<BsonDocument>();
int chunkSize = 50;
int counter = 0;
var headers = new string[0];
while (!parser.EndOfData)
{
//Processing row
var fields = parser.ReadFields();
if (!headerWritten)
{
headers = fields;
headerWritten = true;
continue;
}
listToInsert.Add(new BsonDocument(headers.Zip(fields, (k, v) => new { k, v }).ToDictionary(x => x.k, x => x.v)));
counter++;
if (counter != chunkSize) continue;
AdditionalInformation(listToInsert, dataCollectionQueueMessage);
CalculateHashForPrimaryKey(listToInsert);
await InsertDataIntoDB(listToInsert, dataCollectionQueueMessage);
counter = 0;
listToInsert.Clear();
}
if (listToInsert.Count > 0)
{
AdditionalInformation(listToInsert, dataCollectionQueueMessage);
CalculateHashForPrimaryKey(listToInsert);
await InsertDataIntoDB(listToInsert, dataCollectionQueueMessage);
}
}
private async Task InsertDataIntoDB(List<BsonDocument>listToInsert, DataCollectionQueueMessage dataCollectionQueueMessage)
{
const string connectionString = "mongodb://127.0.0.1/localdb";
var client = new MongoClient(connectionString);
_database = client.GetDatabase("localdb");
var collection = _database.GetCollection<BsonDocument>(dataCollectionQueueMessage.CollectionTypeEnum.ToString());
await collection.Indexes.CreateOneAsync(new BsonDocument("HashMultipleKey", 1), new CreateIndexOptions() { Unique = true, Sparse = true, });
try
{
await collection.InsertManyAsync(listToInsert);
}
catch (Exception ex)
{
ApplicationInsights.Instance.TrackException(ex);
await InsertSingleDocuments(listToInsert, collection, dataCollectionQueueMessage);
}
}
private async Task InsertSingleDocuments(List<BsonDocument> dataCollectionDict, IMongoCollection<BsonDocument> collection
,DataCollectionQueueMessage dataCollectionQueueMessage)
{
ApplicationInsights.Instance.TrackEvent("About to start insert individual documents and to find the duplicate one");
foreach (var data in dataCollectionDict)
{
try
{
await collection.InsertOneAsync(data);
}
catch (Exception ex)
{
ApplicationInsights.Instance.TrackException(ex,new Dictionary<string, string>() {
{
"Error Message","Duplicate document was detected, therefore ignoring this document and continuing to insert the next docuemnt"
}, {
"FilePath",dataCollectionQueueMessage.FilePath
}}
);
}
}
}
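CalculateHashForPrimaryKey is not shown above. Purely as a hypothetical sketch of what such a helper could look like, one option is to concatenate the key columns and store a digest in the field that the unique index is created on (the column names below are placeholders, not from the original code):
private static void CalculateHashForPrimaryKey(List<BsonDocument> listToInsert)
{
    using (var sha = System.Security.Cryptography.SHA256.Create())
    {
        foreach (var doc in listToInsert)
        {
            // "KeyColumn1"/"KeyColumn2" are hypothetical - use the CSV columns that form your primary key.
            var key = doc.GetValue("KeyColumn1", "").ToString() + "|" + doc.GetValue("KeyColumn2", "").ToString();
            var bytes = System.Text.Encoding.UTF8.GetBytes(key);
            // The unique, sparse index created in InsertDataIntoDB targets this field.
            doc["HashMultipleKey"] = Convert.ToBase64String(sha.ComputeHash(bytes));
        }
    }
}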

IsolatedStorageException sometimes following a disk-full condition?

I'm designing an API. Currently I'm trying to safely handle the condition where we run out of disk space. Basically, we have a series of files holding some data. When the disk is full and we go to write another data file, it will of course throw an error. At that point, we delete a single file (looping through the file list from oldest to newest, retrying after we successfully delete a file). Then we retry writing the file. That process repeats until the file is written without error.
Now the fun part: all of this happens concurrently. At some point there are 8 threads doing this at once. This makes things extra interesting, and has led to an odd error.
Here is the code:
public void Save(string text, string id)
{
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
var existing = store.GetFileNames(string.Format(Prefix + "/*-{0}.dat", id));
if (existing.Any()) return; //it already is saved
string name = string.Format(Prefix + "/{0}-{1}.dat", DateTime.UtcNow.ToString("yyyyMMddHHmmssfffffff"), id);
tryagain:
bool doover=false;
try
{
AttemptFileWrite(store, name, text);
}
catch (IOException)
{
doover = true;
}
catch (IsolatedStorageException) //THIS LINE
{
doover = true;
}
if (doover)
{
Attempt(() => store.DeleteFile(name)); //because apparently this can also fail.
var files = store.GetFileNames(Path.Combine(Prefix, "*.dat"));
foreach (var file in files.OrderBy(x=>x))
{
try
{
store.DeleteFile(Path.Combine(Prefix, file));
}
catch
{
continue;
}
break;
}
goto tryagain; //prepare the velociraptor shield!
}
}
}
void AttemptFileWrite(IsolatedStorageFile store, string name, string text)
{
using (var file = store.OpenFile(
name,
FileMode.Create,
FileAccess.ReadWrite,
FileShare.None | FileShare.Delete
))
{
using (var writer = new StreamWriter(file))
{
writer.Write(text);
writer.Flush();
writer.Close();
}
file.Close();
}
}
static void Attempt(Action func)
{
try
{
func();
}
catch
{
}
}
static T Attempt<T>(Func<T> func)
{
try
{
return func();
}
catch
{
}
return default(T);
}
public string GetSaved()
{
string content=null;
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
var files = store.GetFileNames(Path.Combine(Prefix,"*.dat")).OrderBy(x => x);
if (!files.Any()) return null; //nothing saved yet
foreach (var filename in files)
{
IsolatedStorageFileStream file=null;
try
{
file = Attempt(() =>
store.OpenFile(Path.Combine(Prefix, filename), FileMode.Open, FileAccess.ReadWrite, FileShare.None | FileShare.Delete));
if (file == null)
{
continue; //couldn't open. assume locked or some such
}
file.Seek(0L, SeekOrigin.Begin);
using (var reader = new StreamReader(file))
{
content = reader.ReadToEnd();
}
//take note here. We delete the file, while we still have it open!
//This is done because having the file open prevents other readers, but if we close it first,
//then there is a race condition that right after closing the stream, another reader could pick it up and
//open exclusively. It looks weird, but it's right. Trust me.
store.DeleteFile(Path.Combine(Prefix, filename));
if (!string.IsNullOrEmpty(content))
{
break;
}
}
finally
{
if (file != null) file.Close();
}
}
}
return content;
}
The line marked THIS LINE is what I'm talking about. When calling AttemptFileWrite, I can look at store.AvailableSpace and see that there is enough room to fit the data, but upon trying to open the file, it throws this IsolatedStorageException with the description "Operation Not Permitted". Aside from this weird case, in all other cases it's just an IOException with a message about the disk being full.
I'm trying to figure out whether I have some odd race condition, or whether this is an error I just have to deal with.
Why does this error occur?
