Read x number of lines of a file at a time in C#

I want to read and process 10 or more lines at a time from multi-gigabyte files, but I haven't found a solution that yields 10 lines at a time until the end of the file.
My last attempt was:
int n = 10;
foreach (var line in File.ReadLines("path")
                         .AsParallel().WithDegreeOfParallelism(n))
{
    System.Console.WriteLine(line);
    Thread.Sleep(1000);
}
I've seen solutions that use buffer sizes, but I want to read in entire lines.

The default behaviour is to read all the lines in one shot. If you want to read fewer than that, you need to dig a little deeper into how the lines are read and get a StreamReader, which then lets you control the reading process:
using (StreamReader sr = new StreamReader(path))
{
    while (sr.Peek() >= 0)
    {
        Console.WriteLine(sr.ReadLine());
    }
}
StreamReader also has a ReadLineAsync method that returns a Task.
If you keep these tasks in a ConcurrentBag, you can quite easily keep the processing running on 10 lines at a time:
// requires using System.Collections.Concurrent; and using System.Threading.Tasks;
var bag = new ConcurrentBag<Task>();
using (StreamReader sr = new StreamReader(path))
{
    while (sr.Peek() >= 0)
    {
        if (bag.Count < 10)
        {
            Task processing = sr.ReadLineAsync().ContinueWith((read) =>
            {
                string s = read.Result; // EDIT: removed await to reflect Scot's comment
                // process line
            });
            bag.Add(processing);
        }
        else
        {
            Task.WaitAny(bag.ToArray());
            // remove completed tasks from the bag
        }
    }
}
Note: this code is for guidance only, not to be used as-is.
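On .NET 6 or later there is also Enumerable.Chunk, which batches a lazily read line sequence directly; a minimal sketch, assuming the same "path" and 10-line batches (the last batch may be shorter):
// Requires using System.IO; and using System.Linq; (Chunk is .NET 6+).
foreach (string[] batch in File.ReadLines("path").Chunk(10))
{
    // process up to 10 lines here
    foreach (string line in batch)
    {
        Console.WriteLine(line);
    }
}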
If all you want is the last ten lines, you can get that with the solution here:
How to read a text file reversely with iterator in C#

This method would create "pages" of lines from your file.
public static IEnumerable<string[]> ReadFileAsLinesSets(string fileName, int setLen = 10)
{
    using (var reader = new StreamReader(fileName))
        while (!reader.EndOfStream)
        {
            var set = new List<string>();
            for (var i = 0; i < setLen && !reader.EndOfStream; i++)
            {
                set.Add(reader.ReadLine());
            }
            yield return set.ToArray();
        }
}
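The pages can then be consumed with a plain foreach; a small usage sketch, assuming a file name such as the "YourFile.txt" used further down:
// Each set is an array of up to 10 consecutive lines from the file.
foreach (string[] set in ReadFileAsLinesSets("YourFile.txt", 10))
{
    Console.WriteLine($"Read a set of {set.Length} lines");
    // process the batch here
}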
... More fun version...
class Example
{
static void Main(string[] args)
{
"YourFile.txt".ReadAsLines()
.AsPaged(10)
.Select(a=>a.ToArray()) //required or else you will get random data since "WrappedEnumerator" is not thread safe
.AsParallel()
.WithDegreeOfParallelism(10)
.ForAll(a =>
{
//Do your work here.
Console.WriteLine(a.Aggregate(new StringBuilder(),
(sb, v) => sb.AppendFormat("{0:000000} ", v),
sb => sb.ToString()));
});
}
}
public static class ToolsEx
{
public static IEnumerable<IEnumerable<T>> AsPaged<T>(this IEnumerable<T> items,
int pageLength = 10)
{
using (var enumerator = new WrappedEnumerator<T>(items.GetEnumerator()))
while (!enumerator.IsDone)
yield return enumerator.GetNextPage(pageLength);
}
public static IEnumerable<T> GetNextPage<T>(this IEnumerator<T> enumerator,
int pageLength = 10)
{
for (var i = 0; i < pageLength && enumerator.MoveNext(); i++)
yield return enumerator.Current;
}
public static IEnumerable<string> ReadAsLines(this string fileName)
{
using (var reader = new StreamReader(fileName))
while (!reader.EndOfStream)
yield return reader.ReadLine();
}
}
internal class WrappedEnumerator<T> : IEnumerator<T>
{
public WrappedEnumerator(IEnumerator<T> enumerator)
{
this.InnerEnumerator = enumerator;
this.IsDone = false;
}
public IEnumerator<T> InnerEnumerator { get; private set; }
public bool IsDone { get; private set; }
public T Current { get { return this.InnerEnumerator.Current; } }
object System.Collections.IEnumerator.Current { get { return this.Current; } }
public void Dispose()
{
this.InnerEnumerator.Dispose();
this.IsDone = true;
}
public bool MoveNext()
{
var next = this.InnerEnumerator.MoveNext();
this.IsDone = !next;
return next;
}
public void Reset()
{
this.IsDone = false;
this.InnerEnumerator.Reset();
}
}

Related

Why is it not possible to delete files quickly (<60s) across threads in aspnet?

I get the error
System.IO.IOException: 'The process cannot access the file 'xxx' because it is being used by another process.'
when I try to delete a temp file in a background worker service in aspnet core.
I am eventually allowed to delete the file after about a minute (52s, 73s).
If I change garbage collection to workstation mode, I may instead delete after ~1s (but still, a delay).
I have tried a combination of FileOptions to no avail, including FileOptions.WriteThrough.
When the controller writes the file, I use
FlushAsync(), Close(), Dispose() and 'using' (I know it's overkill.)
I also tried using just File.WriteAllBytesAsync, with same result.
In the background reader, I likewise use Close() and Dispose().
(Hint: the background reader will not let me use DeleteOnClose,
which would have been ideal.)
As I search Stack Overflow for similar 'used by another process' issues,
all those I have found eventually resolve to
'argh, it turns out I still had an extra open instance/reference
I forgot about',
but I have not been able to find where I am doing that.
Another hint:
In the writing controller, I am able to delete the file immediately
after writing it, I presume because I am still on the same thread?
Is there some secret knowledge I should read somewhere,
about being able to delete recently open files, across threads?
UPDATE: Here are the relevant(?) code snippets:
// (AspNet Controller)
[RequestSizeLimit(9999999999)]
[DisableFormValueModelBinding]
[RequestFormLimits(MultipartBodyLengthLimit = MaxFileSize)]
[HttpPost("{sessionId}")]
public async Task<IActionResult> UploadRevisionChunk(Guid sessionId) {
log.LogWarning($"UploadRevisionChunk: {sessionId}");
string uploadFolder = UploadFolder.sessionFolderPath(sessionId);
if (!Directory.Exists(uploadFolder)) { throw new Exception($"chunk-upload failed"); }
var cr = parseContentRange(Request);
if (cr == null) { return this.BadRequest("no content range header specified"); }
string chunkName = $"{cr.From}-{cr.To}";
string saveChunkPath = Path.Combine(uploadFolder,chunkName);
await streamToChunkFile_WAB(saveChunkPath); // write-all-bytes.
//await streamToChunkFile_MAN(saveChunkPath); // Manual.
long crTo = cr.To ?? 0;
long crFrom = cr.From ?? 0;
long expected = (crTo - crFrom) + 1;
var fi = new FileInfo(saveChunkPath);
var dto = new ChunkResponse { wrote = fi.Length, expected = expected, where = "?" };
string msg = $"at {crFrom}, wrote {dto.wrote} bytes (expected {dto.expected}) to {dto.where}";
log.LogWarning(msg);
return Ok(dto);
}
private async Task streamToChunkFile_WAB(string saveChunkPath) {
using (MemoryStream ms = new MemoryStream()) {
Request.Body.CopyTo(ms);
byte[] allBytes = ms.ToArray();
await System.IO.File.WriteAllBytesAsync(saveChunkPath, allBytes);
}
}
// stream reader in the backgroundService:
public class MyMultiStream : Stream {
string[] filePaths;
FileStream curStream = null;
IEnumerator<string> i;
ILogger log;
QueueItem qItem;
public MyMultiStream(string[] filePaths_, Stream[] streams_, ILogger log_, QueueItem qItem_) {
qItem = qItem_;
log = log_;
filePaths = filePaths_;
log.LogWarning($"filepaths has #items: {filePaths.Length}");
IEnumerable<string> enumerable = filePaths;
i = enumerable.GetEnumerator();
i.MoveNext();// necessary to prime the iterator.
}
public override bool CanRead { get { return true; } }
public override bool CanWrite { get { return false; } }
public override bool CanSeek { get { return false; } }
public override long Length { get { throw new Exception("dont get length"); } }
public override long Position {
get { throw new Exception("dont get Position"); }
set { throw new Exception("dont set Position"); }
}
public override void SetLength(long value) { throw new Exception("dont set length"); }
public override long Seek(long offset, SeekOrigin origin) { throw new Exception("dont seek"); }
public override void Write(byte[] buffer, int offset, int count) { throw new Exception("dont write"); }
public override void Flush() { throw new Exception("dont flush"); }
public static int openStreamCounter = 0;
public static int closedStreamCounter = 0;
string curFileName = "?";
private FileStream getNextStream() {
string nextFileName = i.Current;
if (nextFileName == null) { throw new Exception("getNextStream should not be called past file list"); }
//tryDelete(nextFileName,log);
FileStream nextStream = new FileStream(
path:nextFileName,
mode: FileMode.Open,
access: FileAccess.Read,
share: FileShare.ReadWrite| FileShare.Delete,
bufferSize:4096, // apparently default.
options: 0
| FileOptions.Asynchronous
| FileOptions.SequentialScan
// | FileOptions.DeleteOnClose // (1) this ought to be possible, (2) we should fix this approach (3) if we can fix this, our issue is solved, and our code much simpler.
); // None); // ReadWrite); // None); // ReadWrite); //| FileShare.Read);
log.LogWarning($"TELLUS making new stream [{nextFileName}] opened:[{++openStreamCounter}] closed:[{closedStreamCounter}]");
curFileName = nextFileName;
++qItem.chunkCount;
return nextStream;
}
public override int Read(byte[] buffer, int offset, int count) {
int bytesRead = 0;
while (true) {
bytesRead = 0;
if (curStream == null) { curStream = getNextStream(); }
try {
bytesRead = curStream.Read(buffer, offset, count);
log.LogWarning($"..bytesRead:{bytesRead} [{Path.GetFileName(curFileName)}]"); // (only show a short name.)
} catch (Exception e) {
log.LogError($"failed reading [{curFileName}] [{e.Message}]",e);
}
if (bytesRead > 0) { break; }
curStream.Close();
curStream.Dispose();
curStream = null;
log.LogWarning($"TELLUS closing stream [{curFileName}] opened:[{openStreamCounter}] closed:[{++closedStreamCounter}]");
//tryDelete(curFileName); Presumably we can't delete so soon.
bool moreFileNames = i.MoveNext();
log.LogWarning($"moreFileNames?{moreFileNames}");
if (!moreFileNames) {
break;
}
}
return bytesRead;
}
..
// Background worker operating multistream:
public class BackgroundChunkWorker: BackgroundService {
ILogger L;
ChunkUploadQueue q;
public readonly IServiceScopeFactory scopeFactory;
public BackgroundChunkWorker(ILogger<int> log_, ChunkUploadQueue q_, IServiceScopeFactory scopeFactory_) {
q = q_; L = log_;
scopeFactory = scopeFactory_;
}
override protected async Task ExecuteAsync(CancellationToken cancel) { await BackgroundProcessing(cancel); }
private async Task BackgroundProcessing(CancellationToken cancel) {
while (!cancel.IsCancellationRequested) {
try {
await Task.Delay(1000,cancel);
bool ok = q.q.TryDequeue(out var item);
if (!ok) { continue; }
L.LogInformation($"item found! {item}");
await treatItemScope(item);
} catch (Exception ex) {
L.LogCritical("An error occurred when processing. Exception: {#Exception}", ex);
}
}
}
private async Task<bool> treatItemScope(QueueItem Qitem) {
using (var scope = scopeFactory.CreateScope()) {
var ris = scope.ServiceProvider.GetRequiredService<IRevisionIntegrationService>();
return await treatItem(Qitem, ris);
}
}
private async Task<bool> treatItem(QueueItem Qitem, IRevisionIntegrationService ris) {
await Task.Delay(0);
L.LogWarning($"TryAddValue from P {Qitem.sessionId}");
bool addOK = q.p.TryAdd(Qitem.sessionId, Qitem);
if (!addOK) {
L.LogError($"why couldnt we add session {Qitem.sessionId} to processing-queue?");
return false;
}
var startTime = DateTime.UtcNow;
Guid revisionId = Qitem.revisionId;
string[] filePaths = getFilePaths(Qitem.sessionId);
Stream[] streams = filePaths.Select(fileName => new FileStream(fileName, FileMode.Open)).ToArray();
MyMultiStream multiStream = new MyMultiStream(filePaths, streams, this.L, Qitem);
BimRevisionStatus brs = await ris.UploadRevision(revisionId, multiStream, startTime);
// (launchDeletes is my current hack/workaround,
// it is not part of the problem)
// await multiStream.launchDeletes();
Qitem.status = brs;
return true;
}
..

How to handle multiple file downloads in Playwright?

I have a button that, when clicked, starts downloading multiple files (this button also opens a chrome://downloads tab and closes it immediately).
The page.Download event handler for downloads will not fire.
page.WaitForDownloadAsync() returns only one of these files.
I do not know the file names that will be downloaded, and I do not know in advance whether one file or several will be downloaded; there is always the possibility of either.
How can I handle this in playwright? I would like to return a list of all the downloaded files paths.
So I resolved this with the following logic.
I created two variables:
List<string> downloadedFiles = new List<string>();
List<string> fileDownloadSession = new();
I then created a method to attach as a handler to page.Download; it looks like this:
private async void downloadHandler(object sender, IDownload download)
{
fileDownloadSession.Add("Downloading...");
var waiter = await download.PathAsync();
downloadedFiles.Add(waiter);
fileDownloadSession.Remove(fileDownloadSession.First());
}
Afterwards, I created a public method to get the downloaded files that looks like this:
public List<string> GetDownloadedFiles()
{
while (fileDownloadSession.Any())
{
}
var downloadedFilesList = downloadedFiles;
downloadedFiles = new List<string>();
return downloadedFilesList;
}
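If the empty wait loop burns too much CPU, a short sleep inside it keeps the same blocking behaviour while yielding the processor; a sketch under the same assumptions (the two lists shown above, plus Thread.Sleep from System.Threading):
public List<string> GetDownloadedFiles()
{
    // Block until no download session is in flight, but yield the CPU while waiting.
    while (fileDownloadSession.Any())
    {
        Thread.Sleep(100);
    }
    var downloadedFilesList = downloadedFiles;
    downloadedFiles = new List<string>();
    return downloadedFilesList;
}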
All these methods and their bookkeeping live in a separate class of their own so that they can monitor the downloaded files properly, and also so the main thread is blocked until all of the required files have been grabbed.
All in all it is just as sketchy a solution as the way you would implement it in Selenium; nothing much has changed in terms of junkyard implementations in the new frameworks.
You can find my custom class here: https://paste.mod.gg/rztmzncvtagi/0. Enjoy; there is no other topic that answers this specific question for Playwright on C#.
Code here, in case it gets deleted from paste.mod.gg:
using System.Net;
using System.Runtime.InteropServices.JavaScript;
using Flanium;
using FlaUI.UIA3;
using Microsoft.Playwright;
using MoreLinq;
using Polly;
namespace Fight;
public class WebBrowser
{
private IBrowser _browser;
private IBrowserContext _context;
private IPage _page;
private bool _force;
private List<string> downloadedFiles = new List<string>();
private List<string> fileDownloadSession = new();
public void EagerMode()
{
_force = true;
}
public enum BrowserType
{
None,
Chrome,
Firefox,
}
public IPage GetPage()
{
return _page;
}
public WebBrowser(BrowserType browserType = BrowserType.Chrome, bool headlessMode = false)
{
var playwright = Playwright.CreateAsync().Result;
_browser = browserType switch
{
BrowserType.Chrome => playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions {Headless = headlessMode}).Result,
BrowserType.Firefox => playwright.Firefox.LaunchAsync(new BrowserTypeLaunchOptions {Headless = headlessMode}).Result,
_ => null
};
_context = _browser.NewContextAsync().Result;
_page = _context.NewPageAsync().Result;
_page.Download += downloadHandler;
Console.WriteLine("WebBrowser was successfully started.");
}
private async void downloadHandler(object sender, IDownload download)
{
fileDownloadSession.Add("Downloading...");
var waiter = await download.PathAsync();
downloadedFiles.Add(waiter);
fileDownloadSession.Remove(fileDownloadSession.First());
}
public List<string> GetDownloadedFiles()
{
while (fileDownloadSession.Any())
{
}
var downloadedFilesList = downloadedFiles;
downloadedFiles = new List<string>();
return downloadedFilesList;
}
public void Navigate(string url)
{
_page.GotoAsync(url).Wait();
}
public void Close(string containedURL)
{
var pages = _context.Pages.Where(x => x.Url.Contains(containedURL));
if (pages.Any())
pages.ForEach(x => x.CloseAsync().Wait());
}
public IElementHandle Click(string selector, int retries = 15, int retryInterval = 1)
{
var element = Policy.HandleResult<IElementHandle>(result => result == null)
.WaitAndRetry(retries, interval => TimeSpan.FromSeconds(retryInterval))
.Execute(() =>
{
var element = FindElement(selector);
if (element != null)
{
try
{
element.ClickAsync(new ElementHandleClickOptions() {Force = _force}).Wait();
element.DisposeAsync();
return element;
}
catch (Exception e)
{
return null;
}
}
return null;
});
return element;
}
public IElementHandle FindElement(string selector)
{
IElementHandle element = null;
var Pages = _context.Pages.ToArray();
foreach (var w in Pages)
{
//============================================================
element = w.QuerySelectorAsync(selector).Result;
if (element != null)
{
return element;
}
//============================================================
var iframes = w.Frames.ToList();
var index = 0;
for (; index < iframes.Count; index++)
{
var frame = iframes[index];
element = frame.QuerySelectorAsync(selector).Result;
if (element is not null)
{
return element;
}
var children = frame.ChildFrames;
if (children.Count > 0 && iframes.Any(x => children.Any(y => y.Equals(x))) == false)
{
iframes.InsertRange(index + 1, children);
index--;
}
}
}
return element;
}
}
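A hypothetical usage sketch of this class; the URL and selector below are placeholders, not values from the original question, and GetDownloadedFiles only blocks while a download session is in flight:
var browser = new WebBrowser(WebBrowser.BrowserType.Chrome);
browser.Navigate("https://example.com/downloads");  // placeholder URL
browser.Click("#download-all");                     // placeholder selector that triggers the downloads
// Returns the collected paths once no download session remains in flight.
List<string> files = browser.GetDownloadedFiles();
files.ForEach(Console.WriteLine);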

ways to improve the time of the execution - Data structures

I am learning data structures and I am stuck on a problem; I cannot find a way to improve the performance of the code. The problem is this: https://www.hackerrank.com/challenges/contacts/problem I made a solution and passed 3 of the 15 tests, but the rest failed with:
Time limit exceeded Your code did not execute within the time limits.
Please optimize your code. For more information on execution time
limits, refer to the environment page
I replaced the nested foreach/if with LINQ, but I couldn't get the improvement. Could you help me? Here is the code I am using:
public static List<int> contacts (List<List<string>> queries)
{
List<string> contactList = new List<string>();
List<string> findList = new List<string>();
List<int> result = new List<int>();
foreach (var instruction in queries)
{
if (instruction[0] == "add")
contactList.Add(instruction[1]);
else
findList.Add(instruction[1]);
}
for(int i = 0; i < findList.Count; i++)
{
//-----------------------------------------------------------------------
var counter = contactList.Where(x => x.Contains(findList[i])).Count();
result.Add(counter);
//-----------------------------------------------------------------------
//foreach (var contact in contactList)
//{
// if (contact.Contains(findList[i]))
// result[i]++;
//}
}
return result;
}
Thanks in advance for your help
Thanks for your advice! I did some research on the issue, and here is the solution in C#:
using System.Collections.Generic;
namespace ConsoleApp2 {
internal class Program
{
static void Main(string[] args)
{
var x = contacts(new List<List<string>>() {
new List<string> {"add", "hack" },
new List<string> {"add", "hackerrank" },
new List<string> {"find", "hac" },
new List<string> {"find", "hak" }
});
}
public static List<int> contacts (List<List<string>> queries)
{
Trie trie = new Trie();
var findList = new List<int>();
foreach (var instruction in queries)
{
if (instruction[0] == "add")
trie.add(instruction[1]);
else
findList.Add(trie.find(instruction[1]));
}
return findList;
}
}
class TrieNode
{
private Dictionary<char, TrieNode> children = new Dictionary<char, TrieNode>();
public int size;
public void putChildIfAbsent(char ch)
{
if(!children.ContainsKey(ch))
children.Add(ch, new TrieNode());
}
public TrieNode getChild(char ch)
{
if (children.ContainsKey(ch))
return children[ch];
return null;
}
}
class Trie
{
TrieNode root = new TrieNode();
public void add(string str)
{
TrieNode curr = root;
foreach(char ch in str)
{
curr.putChildIfAbsent(ch);
curr=curr.getChild(ch);
curr.size++;
}
}
public int find(string prefix)
{
TrieNode curr = root;
foreach(char ch in prefix)
{
curr= curr.getChild(ch);
if (curr == null)
return 0;
}
return curr.size;
}
}
}
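For reference, with the sample queries above the returned list is { 2, 0 }: both "hack" and "hackerrank" start with "hac", and nothing starts with "hak". A line like this at the end of Main prints the counts (it needs using System;, which the snippet above omits):
x.ForEach(Console.WriteLine); // prints 2, then 0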

Trie implementation: Search method with weird return value

I am fairly new to C# and working on a project in which I need to build a prefix tree (trie). Searching in the trie should return a list of words matching a given search prefix.
That's the code I have so far, but the search doesn't actually return the values I'm looking for and instead prints "Trees.PrefixTree+<FindAllWordsRecursive>d__5", the name of the compiler-generated iterator type. What am I doing wrong, or what do I have to change to get it to run?
Thank you very much in advance!
using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading;
namespace Trees
{
public class Program
{
public static void Main()
{
//String[] file = File.ReadAllLines(@"C:\Users\samue\Desktop\Neuer Ordner (2)\liste.txt");
string[] dictionary = new string[] { "test", "try", "angle", "the", "code", "is", "isnt" };
PrefixTree trie = new PrefixTree();
var sw = new Stopwatch();
foreach (var word in dictionary)
{
trie.Add(word);
}
//Thread workerThread = new Thread(trie.Search(suchwort);
Console.WriteLine(trie.Search("te"));
}
}
public class PrefixTree
{
private PrefixTreeNode root;
public PrefixTree()
{
root = new PrefixTreeNode(String.Empty);
}
public void Add(string word)
{
AddRecursive(root, word, String.Empty);
}
private void AddRecursive(PrefixTreeNode node, string remainingString, string currentString)
{
if (remainingString.Length <= 0)
{
return;
}
char prefix = remainingString[0];
string substring = remainingString.Substring(1);
if (!node.SubNodes.ContainsKey(prefix))
{
node.SubNodes.Add(prefix, new PrefixTreeNode(currentString + prefix));
}
if (substring.Length == 0)
{
node.SubNodes[prefix].IsWord = true;
return;
}
else
{
AddRecursive(node.SubNodes[prefix], substring, currentString + prefix);
}
}
public IEnumerable<string> Search(string searchString)
{
PrefixTreeNode node = root;
foreach (var search in searchString)
{
if (!node.SubNodes.ContainsKey(search))
{
return new string[0];
}
node = node.SubNodes[search];
}
return FindAllWordsRecursive(node);
}
private IEnumerable<string> FindAllWordsRecursive(PrefixTreeNode node)
{
if (node.IsWord)
{
yield return node.Word;
}
foreach (var subnode in node.SubNodes)
{
foreach (var result in FindAllWordsRecursive(subnode.Value))
{
yield return result;
}
}
}
protected class PrefixTreeNode
{
private readonly Dictionary<char, PrefixTreeNode> subNodes;
private bool isWord;
private readonly string word;
public PrefixTreeNode(string word)
{
subNodes = new Dictionary<char, PrefixTreeNode>();
isWord = false;
this.word = word;
}
public Dictionary<char, PrefixTreeNode> SubNodes { get { return subNodes; } }
public bool IsWord { get { return isWord; } set { isWord = value; } }
public string Word { get { return word; } }
}
}
}
Try changing your Console.WriteLine() to this:
Console.WriteLine(string.Join(", ", trie.Search("te")));
Search returns an IEnumerable<string>, and Console.WriteLine just calls ToString() on it, which prints the compiler-generated iterator's type name; you were not concatenating the strings together!
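Alternatively, the result can be enumerated directly instead of being joined into one string; a small sketch using the same trie and "te" prefix as in Main above:
// Prints each matching word on its own line ("test" for the sample dictionary).
foreach (var word in trie.Search("te"))
{
    Console.WriteLine(word);
}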

How to sort an arraylist on date?

Code:
while ((linevalue = filereader.ReadLine()) != null)
{
    items.Add(linevalue);
}
filereader.Close();
items.Sort();
// To display the content of the array (sorted)
IEnumerator myEnumerator = items.GetEnumerator();
while (myEnumerator.MoveNext())
{
    Console.WriteLine(myEnumerator.Current);
}
The program above displays all the values. How do I extract only the dates and sort them in ascending order?
I am not allowed to use LINQ, exceptions, threading, or anything else like that. I have to stick with the file stream: get my data out of the text file, sort it and store it, so that I can retrieve it, view it, edit it, and search for any particular date to see the date-of-joining records for that date. I can't figure it out and am struggling.
Basically, don't try and work with the file as lines of text; separate that away, so that you have one piece of code which parses that text into typed records, and then process those upstream when you only need to deal with typed data.
For example (and here I'm assuming that the file is tab-delimited, but you could change it to be column-indexed instead easily enough), look at how little work my Main method needs to do to work with the data:
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
static class Program
{
static void Main()
{
foreach (var item in ReadFile("my.txt").OrderBy(x => x.Joined))
{
Console.WriteLine(item.Names);
}
}
static readonly char[] tab = { '\t' };
class Foo
{
public string Names { get; set; }
public int Age { get; set; }
public string Designation { get; set; }
public DateTime Joined { get; set; }
}
static IEnumerable<Foo> ReadFile(string path)
{
using (var reader = File.OpenText(path))
{
// skip the first line (headers), or exit
if (reader.ReadLine() == null) yield break;
// read each line
string line;
var culture = CultureInfo.InvariantCulture;
while ((line = reader.ReadLine()) != null)
{
var parts = line.Split(tab);
yield return new Foo
{
Names = parts[0],
Age = int.Parse(parts[1], culture),
Designation = parts[2],
Joined = DateTime.Parse(parts[3], culture)
};
}
}
}
}
And here's a version (not quite as elegant, but working) that works on .NET 2.0 (and probably on .NET 1.1) using only ISO-1 language features; personally I think it would be silly to use .NET 1.1, and if you are using .NET 2.0, then List<T> would be vastly preferable to ArrayList. But this is "worst case":
using System;
using System.Collections;
using System.Globalization;
using System.IO;
class Program
{
static void Main()
{
ArrayList items = ReadFile("my.txt");
items.Sort(FooByDateComparer.Default);
foreach (Foo item in items)
{
Console.WriteLine(item.Names);
}
}
class FooByDateComparer : IComparer
{
public static readonly FooByDateComparer Default
= new FooByDateComparer();
private FooByDateComparer() { }
public int Compare(object x, object y)
{
return ((Foo)x).Joined.CompareTo(((Foo)y).Joined);
}
}
static readonly char[] tab = { '\t' };
class Foo
{
private string names, designation;
private int age;
private DateTime joined;
public string Names { get { return names; } set { names = value; } }
public int Age { get { return age; } set { age = value; } }
public string Designation { get { return designation; } set { designation = value; } }
public DateTime Joined { get { return joined; } set { joined = value; } }
}
static ArrayList ReadFile(string path)
{
ArrayList items = new ArrayList();
using (StreamReader reader = File.OpenText(path))
{
// skip the first line (headers), or exit
if (reader.ReadLine() == null) return items;
// read each line
string line;
CultureInfo culture = CultureInfo.InvariantCulture;
while ((line = reader.ReadLine()) != null)
{
string[] parts = line.Split(tab);
Foo foo = new Foo();
foo.Names = parts[0];
foo.Age = int.Parse(parts[1], culture);
foo.Designation = parts[2];
foo.Joined = DateTime.Parse(parts[3], culture);
items.Add(foo);
}
}
return items;
}
}
I'm not sure why you'd want to retrieve just the dates. You'd probably be better off reading your data into tuples first. Something like
List<Tuple<string, int, string, DateTime>> items.
Then you can sort the list by each item's Item4, which will be the date.
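A minimal sketch of that idea, assuming the tab-separated layout used in the other answers (name, age, designation, joining date) and the "my.txt" file name from above; it needs System, System.Collections.Generic, System.IO and System.Linq:
// Read each tab-separated line into a tuple, then sort in place by the date (Item4).
// Skip(1) drops the header line, matching the answer above.
var items = new List<Tuple<string, int, string, DateTime>>();
foreach (var line in File.ReadLines("my.txt").Skip(1))
{
    var parts = line.Split('\t');
    items.Add(Tuple.Create(parts[0], int.Parse(parts[1]), parts[2], DateTime.Parse(parts[3])));
}
items.Sort((a, b) => a.Item4.CompareTo(b.Item4));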
You can split each line on tabs to retrieve only the date, then use LINQ to order the values by converting them to dates.
while ((linevalue = filereader.ReadLine()) != null)
{
    items.Add(linevalue.Split('\t').Last());
}
filereader.Close();
// OrderBy returns a new, ordered sequence; it does not sort the list in place.
foreach (var item in items.OrderBy(i => DateTime.Parse(i)))
{
    Console.WriteLine(item);
}
Get the desired values from the file into a List<DateTime>, then sort with a comparer:
public class DateComparer : IComparer<DateTime>
{
    // requires using System; and using System.Collections.Generic;
    public int Compare(DateTime x, DateTime y)
    {
        if (x.Date > y.Date)
            return 1;
        if (x.Date < y.Date)
            return -1;
        return 0;
    }
}
// list is the List<DateTime> holding the extracted dates
list.Sort(new DateComparer());
