var lastWrittenFolder = new DirectoryInfo(textBoxPath.Text).GetDirectories()
.OrderByDescending(d => d.LastWriteTimeUtc).First();
This works fine for getting the most recently written folder, but how do I get the first created folder?
Change OrderByDescending to OrderBy and use CreationTimeUtc as the key selector:
var firstCreatedFolder = new DirectoryInfo(textBoxPath.Text).GetDirectories()
    .OrderBy(d => d.CreationTimeUtc).First();
The following code will help you:
static void Main()
{
    var folderPath = "your-folder-path";
    var directories = new DirectoryInfo(folderPath).GetDirectories();

    foreach (var item in directories.OrderBy(m => m.LastWriteTime))
    {
        Console.WriteLine(item.LastWriteTime + " " + item.Name);
    }

    Console.ReadLine();
}
You can also use Min, since you want the minimum creation date. Note that Min with a selector returns the date value itself, not the folder:
var firstCreationDate = new DirectoryInfo(textBoxPath.Text).GetDirectories()
    .Min(d => d.CreationTimeUtc);
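If you need the DirectoryInfo rather than just the date, here is a minimal sketch (assuming .NET 6 or later, where Enumerable.MinBy is available):
// Returns the DirectoryInfo with the smallest CreationTimeUtc (null if there are no subfolders).
var firstCreatedFolder = new DirectoryInfo(textBoxPath.Text)
    .GetDirectories()
    .MinBy(d => d.CreationTimeUtc);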
You are almost there. Instead of sorting the folders in descending order (newest first), sort them in ascending order and then take the first one, just like you did. Here is the corrected code (use CreationTimeUtc instead of LastWriteTimeUtc if you specifically want the creation time):
var firstWrittenFolder = new DirectoryInfo(path).GetDirectories().OrderBy(d => d.LastWriteTimeUtc).First();
Related
I'm searching for the best way to get the size of the second-to-last element of a string List. Any suggestions? Thanks in advance.
List<string> ZipFiles = Directory.EnumerateFiles(textBox1.Text, "*.zip", SearchOption.AllDirectories)
    .Where(x => File.GetLastWriteTime(x) >= startDate
             && File.GetLastWriteTime(x) <= endDate)
    .Select(x => Path.GetFileName(x))
    .ToList();
I tried the following code, but it obviously does not work; it breaks!
string secLast = LastZipFiles.ToList<string>()[LastZipFiles.ToList<string>().Count - 2];
var secLast = ZipFiles.AsEnumerable().Reverse().Skip(1).First(); // AsEnumerable avoids List<T>.Reverse(), which returns void
That assumes there is a second-to-last element, and it's not very efficient. Better might be:
if (LastZipFiles.Count > 1)
{
    var secLast = LastZipFiles[LastZipFiles.Count - 2];
}
Seeing your comment on the previous answer, I see you actually want the file size. You should create a FileInfo from the file name and access its Length property:
var secondToLastFileSize = new FileInfo(ZipFiles[ZipFiles.Count - 2]).Length;
var lastFileSize = new FileInfo(ZipFiles[ZipFiles.Count - 1]).Length;
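A guarded variant as a minimal sketch: the Count check avoids an exception when fewer than two files match, and note that FileInfo needs a path it can resolve, so this assumes the list holds full paths (i.e. built without the Path.GetFileName projection):
if (ZipFiles.Count > 1)
{
    var secondToLastFileSize = new FileInfo(ZipFiles[ZipFiles.Count - 2]).Length; // size in bytes
    var lastFileSize = new FileInfo(ZipFiles[ZipFiles.Count - 1]).Length;
}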
Well, I'm doing a LINQ query to get a list of results that share the same column value, and then I need to replace that column value with a new one.
First Code:
var db = GetContext();

var result = from f in GetContext().ProjectStateHistories
             where f.ProjectId.Equals(oldProjectId)
             select f;

foreach (var item in result)
{
    var projectStateHistoryUpdate = db.ProjectStateHistories.Find(item.Id);
    projectStateHistoryUpdate.ProjectId = newProjectId;
    db.Entry(projectStateHistoryUpdate).State = EntityState.Modified;
}

db.SaveChanges();
I searched for some answers and found that I can use Select and make a new object (Linq replace null/empty value with another value).
Second Code:
var result = (from f in GetContext().ProjectStateHistories
              where f.ProjectId.Equals(oldProjectId)
              select f)
             .Select(d => new { Id = d.Id, EventName = d.EventName, LogUser = d.LogUser, ProjectId = newProjectId, TimeStamp = d.TimeStamp });
And even, Third Code:
var db = GetContext();
var result = (from f in db.ProjectStateHistories
              where f.ProjectId.Equals(oldProjectId)
              select f)
             .Select(d => new { ProjectId = newProjectId });
But only the First Code works.
I wanted to ask what I am doing wrong, since I think it is better to change the value with a query instead of using a foreach.
See code below:
var db = GetContext();

(from f in db.ProjectStateHistories
 where f.ProjectId.Equals(oldProjectId)
 select f)
.ToList()
.ForEach(i => i.ProjectId = newProjectId);

db.SaveChanges();
Alternatively:
var db = GetContext();

db.ProjectStateHistories
  .Where(f => f.ProjectId.Equals(oldProjectId))
  .ToList()
  .ForEach(f => f.ProjectId = newProjectId);

db.SaveChanges();
I've just had a thought that could help you; I am just free-coding here.
If you iterate the filtered query directly in the foreach and then save your changes, will that work?
foreach (var source in db.ProjectStateHistories.Where(x => x.ProjectId == oldProjectId))
{
    source.ProjectId = newProjectId;
    db.Entry(source).State = EntityState.Modified;
}
db.SaveChanges();
I think this is a more efficient way of doing it.
Also, the .Select() method is only really useful if you need to project to a view model; it won't change the values in the database, it just shows them in the newly declared object.
Thanks,
Phill
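To illustrate that last point about .Select(), here is a minimal sketch using the entity and property names from the question; the projection produces detached anonymous objects, while modifying the tracked entities is what actually persists:
// Projection: nothing is tracked, so SaveChanges() would persist no change.
var projected = db.ProjectStateHistories
    .Where(f => f.ProjectId.Equals(oldProjectId))
    .Select(d => new { d.Id, ProjectId = newProjectId })
    .ToList();

// Update: the entities are tracked by the context, so SaveChanges() writes the new ProjectId.
foreach (var entity in db.ProjectStateHistories.Where(f => f.ProjectId.Equals(oldProjectId)))
{
    entity.ProjectId = newProjectId;
}
db.SaveChanges();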
My program creates a .csv file with a person's name and an integer next to it.
Occasionally there are two entries for the same name in the file, but with a different time. I only want one instance of each person.
I would like to take the mean of the two numbers to produce just one row for the name, where the number is the average of the two existing values.
So here Alex Pitt has two numbers. How can I take the mean of 105 and 71 (in this case) to produce a row that just reads Alex Pitt, 88?
Here is how I am creating my CSV file, in case it is needed for reference.
public void CreateCsvFile()
{
    PaceCalculator ListGather = new PaceCalculator();

    List<string> NList = ListGather.NameGain();
    List<int> PList = ListGather.PaceGain();

    List<string> nAndPList = NList.Zip(PList, (a, b) => a + ", " + b).ToList();

    string filepath = @"F:\A2 Computing\C# Programming Project\ScheduleFile.csv";

    using (var file = File.CreateText(filepath))
    {
        foreach (var arr in nAndPList)
        {
            if (arr == null || arr.Length == 0) continue;
            file.Write(arr[0]);
            for (int i = 1; i < arr.Length; i++)
            {
                file.Write(arr[i]);
            }
            file.WriteLine();
        }
    }
}
To start with, you can write your current CreateCsvFile much more simply like this:
public void CreateCsvFile()
{
    var filepath = @"F:\A2 Computing\C# Programming Project\ScheduleFile.csv";
    var ListGather = new PaceCalculator();

    var records =
        ListGather.NameGain()
                  .Zip(ListGather.PaceGain(),
                       (a, b) => String.Format("{0},{1}", a, b));

    File.WriteAllLines(filepath, records);
}
Now, it can easily be changed to work out the average pace if you have duplicate names, like this:
public void CreateCsvFile()
{
    var filepath = @"F:\A2 Computing\C# Programming Project\ScheduleFile.csv";
    var ListGather = new PaceCalculator();

    var records =
        from record in ListGather.NameGain()
                                 .Zip(ListGather.PaceGain(),
                                      (a, b) => new { Name = a, Pace = b })
        group record.Pace by record.Name into grs
        select String.Format("{0},{1}", grs.Key, grs.Average());

    File.WriteAllLines(filepath, records);
}
I would recommend merging the duplicates before you put everything into the CSV file.
Use:
// The list that will hold each name only once
List<string> duplicateChecker = new List<string>();

// Distinct removes the duplicates. I'm using NList because I assume the names are the important part.
duplicateChecker = NList.Distinct().ToList();
Now you can simply iterate through the new list and look each value up in your NList. Use a foreach loop that finds the index of the name in NList; after that you can use that index to merge the integers with a simple math method.
// Something like this:
Make a foreach loop over every entry in your duplicateChecker =>
use Distinct again on duplicateChecker to make sure you won't go through the same duplicate twice =>
get the value of the current string and search for it in NList =>
get the index of the current element in NList and look up the same index in PList =>
take the integer from PList and store it in an array =>
// make sure your math method runs before a new name starts; after that, store the new values in your nAndPList
Once the loop is through with the first name, use a math method (the average).
I hope you understand what I was trying to say; a rough sketch of this idea follows below. However, I would recommend using a unique identifier for your persons: sooner or later two persons will appear with the same name (as in a huge company).
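A minimal sketch of that loop-and-average idea, assuming NList and PList line up by index as in the question (the local variable names beyond those are made up for illustration):
// Collect each distinct name once, averaging the paces found at its matching indexes.
var duplicateChecker = NList.Distinct().ToList();
var nAndPList = new List<string>();

foreach (var name in duplicateChecker)
{
    var paces = new List<int>();
    for (int i = 0; i < NList.Count; i++)
    {
        if (NList[i] == name)
        {
            paces.Add(PList[i]); // PList[i] belongs to NList[i] because the lists are parallel
        }
    }
    nAndPList.Add(name + ", " + paces.Average());
}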
Change the code below:
List<string> nAndPList = NList.Zip(PList, (a, b) => a + ", " + b).ToList();
To
List<string> nAndPList = NList.Zip(PList, (a, b) => new { Name = a, Pace = b })
    .GroupBy(x => x.Name)                               // the field you want to group by
    .Select(g => g.Key + ", " + g.Average(x => x.Pace)) // one row per name, with the averaged pace
    .ToList();
I need to know how many rows in a table share the same month, and I have no idea how to do it. I thought I'd try some LINQ, but I've never used it before, so I don't even know if it's possible. Please help me out!
public ActionResult returTest()
{
    ViewData["RowsWithSameMonth"] = // I'm guessing I can put some LINQ here?

    var returer = from s in db2.ReturerDB select s;
    return View(returer.ToList());
}
The ideal would be to get maybe a two-dimensional array with the month in the first cell and the number of rows from the db in the second. I'd like the result to be sort of:
string[,] statistics = new string[,]
{
    {"2013-11", "5"},
    {"2013-12", "10"},
    {"2014-01", "3"}
};
Is this doable, or should I just query the database and do a whole lot of stuff? I'm thinking I can solve this on my own, but it would mean a lot of ugly code. Background: self-taught C# developer at an IT company with one year's experience of ugly codesmanship and no official degree of any kind.
EDIT
var returer = from s in db2.ReturerDB select s;
var dateRange = returer.ToList();

var groupedData = dateRange.GroupBy(dateRow => dateRow.ToString())
    .OrderBy(monthGroup => monthGroup.Key)
    .Select(monthGroup => new
    {
        Month = monthGroup.Key,
        MountCount = monthGroup.Count()
    });

string test01 = "";
string test02 = "";

foreach (var item in groupedData)
{
    test01 = item.Month.ToString();
    test02 = item.MountCount.ToString();
}
In debug, test01 is "Namespace.Models.ReturerDB" and test02 is "6" as was expected, or at least wanted. What am I doing wrong?
You can do this:
var groupedData = db2.ReturerDB.GroupBy(r => new { r.Date.Year, r.Date.Month })
    .Select(g => new { g.Key.Year, g.Key.Month, Count = g.Count() })
    .OrderBy(x => x.Year).ThenBy(x => x.Month)
    .ToList();

var result = groupedData
    .ToDictionary(g => string.Format("{0}-{1:00}", g.Year, g.Month),
                  g => g.Count);
Which will give you
Key       Value
-----------------
2013-11   5
2013-12   10
2014-01   3
(Creating a dictionary is slightly easier than a two-dimensional array)
This will work against a SQL back end like Entity Framework or LINQ to SQL, because the expressions r.Date.Year and r.Date.Month can be translated into SQL.
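If you really want the two-dimensional string array sketched in the question rather than a dictionary, here is a minimal sketch building it from the groupedData list above:
// Build a string[rows, 2] layout: month key in the first column, row count in the second.
var statistics = new string[groupedData.Count, 2];
for (int i = 0; i < groupedData.Count; i++)
{
    statistics[i, 0] = string.Format("{0}-{1:00}", groupedData[i].Year, groupedData[i].Month);
    statistics[i, 1] = groupedData[i].Count.ToString();
}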
With a nod to mehrandvd, here is how you'd achieve this using the LINQ method-chain approach:
var dateRange = ...; // your base collection with the dates

// make sure you change MyDateField to match your own DateTime field
var groupedData = dateRange
    .GroupBy(dateRow => dateRow.MyDateField.ToString("yyyy-MM")) // "MM" is the month; "mm" would be minutes
    .OrderBy(monthGroup => monthGroup.Key)
    .Select(monthGroup => new
    {
        Month = monthGroup.Key,
        MountCount = monthGroup.Count()
    });
This would give you the results you required, as per the OP.
[edit] - as requested, example of how to access the newly created anonymous type:
foreach (var item in groupedData)
{
    Console.WriteLine(item.Month);
    Console.WriteLine(item.MountCount);
}
Or you could return the whole caboodle as a JsonResult to your client app and iterate there, i.e. the final line of your action would be:
return Json(groupedData, JsonRequestBehavior.AllowGet);
Hope this clarifies.
What you need is grouping.
Assuming you have a list of dates, a solution would be this:
var dateRows = // Get from database

var monthlyRows = from dateRow in dateRows
                  group dateRow by dateRow.ToString("yyyy/MM") into monthGroup // "MM" is the month; "mm" would be minutes
                  orderby monthGroup.Key
                  select new { Month = monthGroup.Key, MonthCount = monthGroup.Count() };

// Your results would be a list of objects that have `Month` and `MonthCount` properties:
// {Month="2014/01", MonthCount=24}
// {Month="2014/02", MonthCount=28}
I have a list of details about a large number of files. This list contains the file ID, last modified date and the file path. The problem is that there are duplicates of the files, which are older versions and sometimes have different file paths. I want to store only the newest version of each file, regardless of file path. So I created a loop that iterates through the ordered list, checks whether the ID is unique, and if it is, stores the item in a new unique list.
var ordered = list.OrderBy(x => x.ID).ThenByDescending(x => x.LastModifiedDate);

List<Item> unique = new List<Item>();
string curAssetId = null;

foreach (Item result in ordered)
{
    if (!result.ID.Equals(curAssetId))
    {
        unique.Add(result);
        curAssetId = result.ID;
    }
}
However, this is still allowing duplicates into the DB, and I can't figure out why this code isn't working as expected. By duplicates I mean files with the same ID but different file paths, which, like I said before, shouldn't be an issue; I just want the latest version regardless of path. Can anyone see what the issue is? Thanks.
var ordered = listOfItems.OrderBy(x => x.AssetID).ThenByDescending(x => x.LastModifiedDate);
List<Item> uniqueItems = new List<Item>();

foreach (Item result in ordered)
{
    if (!uniqueItems.Any(x => x.AssetID.Equals(result.AssetID)))
    {
        uniqueItems.Add(result);
    }
}
This is what I have now, and it is still allowing duplicates.
This is because you are not searching the entire list to check whether the ID is unique.
List<Item> unique = new List<Item>();
string curAssetId = null;              // here is the problem

foreach (Item result in ordered)
{
    if (!result.ID.Equals(curAssetId)) // here you only compare against the previous value
    {
        unique.Add(result);
        curAssetId = result.ID;        // you only keep the most recently seen ID
    }
}
To solve this, change the following:
if (!result.ID.Equals(curAssetId)) // here you only compare against the previous value
{
    unique.Add(result);
    curAssetId = result.ID;        // you only keep the most recently seen ID
}
to
if (!unique.Any(x => x.ID.Equals(result.ID)))
{
    unique.Add(result);
}
I don't know if this code is just simplified, but have you considered grouping on ID, sorting on LastModifiedDate, then just taking the first from each group?
Something like:
var unique = list.GroupBy(i => i.ID).Select(x => x.OrderByDescending(y => y.LastModifiedDate).First());
What about var ordered = list.OrderBy(x => x.ID).ThenByDescending(x => x.LastModifiedDate).Distinct()?
For that to work you have to create your own equality comparer and then use the overload of LINQ's Distinct method that accepts it (see Enumerable.Distinct on MSDN); a sketch of such a comparer follows this answer.
Also, I think you could stay with your current code, but you have to modify it this way (as a sample):
var ordered = list.OrderByDescending(x => x.LastModifiedDate);
var unique = new List<Item>();

foreach (Item result in ordered)
{
    if (unique.Any(x => x.ID == result.ID))
        continue;

    unique.Add(result);
}
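For reference, a minimal sketch of the custom equality comparer suggested above, assuming Item exposes a string ID property (the class name ItemIdComparer is made up for illustration); LINQ to Objects' Distinct keeps the first occurrence it encounters, so ordering newest-first beforehand makes the newest item per ID survive:
// Treats two Items as equal when their IDs match, ignoring path and date.
class ItemIdComparer : IEqualityComparer<Item>
{
    public bool Equals(Item x, Item y) => x?.ID == y?.ID;

    public int GetHashCode(Item obj) => obj.ID?.GetHashCode() ?? 0;
}

// Usage: newest first, then Distinct keeps one Item per ID.
var unique = list.OrderByDescending(x => x.LastModifiedDate)
                 .Distinct(new ItemIdComparer())
                 .ToList();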
List<Item> p = new List<Item>();

var x = p.Select(c => new Item
         {
             AssetID = c.AssetID,
             LastModifiedDate = c.LastModifiedDate.Date
         })
         .OrderBy(y => y.AssetID)
         .ThenByDescending(c => c.LastModifiedDate)
         .Distinct(); // note: Distinct() only removes duplicates here if Item overrides Equals/GetHashCode or a comparer is supplied