I am trying to copy files from one directory to another and test based upon the file creation date.
File.Copy(fileName, directory + fileNameOnly, true);
The problem occurs later in my program, when I check the creation date to ensure it is no more than 5 days old.
FileInfo file = new FileInfo(fileName);
if (file.CreationTime.AddHours(120) < DateTime.Now) {}
I have seen that the creation date, when the file is copied back, is set to 1980-01-01. This is not useful for my requirements, as I would like to maintain the creation date from the original file. Is there another method of comparing the dates, or is it the copy that loses the creation date value?
I guess my question is, how can I maintain the Creation Date?
Use the File.SetCreationTime method after you copy the file.
You can get the source file's creation time with File.GetCreationTime.
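A minimal sketch of combining the two (the paths are placeholders, not from the original program):

```csharp
using System;
using System.IO;

class PreserveCreationTime
{
    static void Main()
    {
        // Hypothetical source and destination paths for illustration
        string source = @"C:\Temp\source.txt";
        string dest = @"C:\Backup\source.txt";

        File.Copy(source, dest, true);

        // Carry the original creation time over to the copy
        File.SetCreationTime(dest, File.GetCreationTime(source));
    }
}
```

After this, the 5-day check against `FileInfo.CreationTime` behaves the same for the copy as for the original.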
This question already has answers here:
Why Windows sets new created file's "created time" property to old time?
(3 answers)
Closed 5 years ago.
I am attempting to get the creation date of a file. I am using the File.GetCreationTime() method to do this. If the file is a new file, it seems to work fine. If I delete the file though and re-create it, it seems to be giving me the original creation time. Since the file was deleted, it seems weird and even impossible that it is returning the original date and time of the file.
I put together a simple console application to demonstrate the issue:
static void Main(string[] args)
{
    const string fileName = @"C:\Temp\dummy.txt";
    File.AppendAllText(fileName, "This is a test");
    DateTime creationDate = File.GetCreationTime(fileName);
    Console.WriteLine(creationDate.ToShortDateString() + " " + creationDate.ToShortTimeString());
    System.Threading.Thread.Sleep(120000);
    File.Delete(fileName);
    File.AppendAllText(fileName, "This is a test");
    creationDate = File.GetCreationTime(fileName);
    Console.WriteLine(creationDate.ToShortDateString() + " " + creationDate.ToShortTimeString());
}
This program creates a dummy file and appends the text This is a test. It then prints out the creation date and time to the console screen. So far, so good. It then sleeps for 2 minutes. After the 2 minutes have elapsed, it deletes the file and re-creates it. It then, again, prints out the creation date and time to the console screen. I would expect the latter output to be 2 minutes later than the original; however, it is pulling the exact same date and time! I have single-stepped through the program and I can verify that it is, indeed, deleting the original file from the hard drive.
Actual Output
--------------
5/6/2017 10:25 AM
5/6/2017 10:25 AM
Expected Output
----------------
5/6/2017 10:25 AM
5/6/2017 10:27 AM
Can someone explain to me what is going on here and how to work around the issue?
From the MSDN page:
NTFS-formatted drives may cache information about a file, such as file creation time, for a short period of time. As a result, it may be necessary to explicitly set the creation time of a file if you are overwriting or replacing an existing file.
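A sketch of that workaround, based on the console program from the question: after re-creating the file, stamp the creation time explicitly.

```csharp
using System;
using System.IO;

class ResetCreationTime
{
    static void Main()
    {
        string fileName = @"C:\Temp\dummy.txt"; // path from the question

        File.Delete(fileName);
        File.AppendAllText(fileName, "This is a test");

        // Work around NTFS tunneling: set the new file's creation time explicitly
        File.SetCreationTime(fileName, DateTime.Now);

        Console.WriteLine(File.GetCreationTime(fileName));
    }
}
```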
Background
I'm developing a simple Windows service which monitors certain directories for file-creation events and logs them - long story short, to ascertain whether a file was copied from directory A to directory B. If a file is not in directory B after X time, an alert will be raised.
The issue with this is that the file itself is all I have to go on when working out whether it has made its way to directory B. I'd assume two files with the same name are the same, but since there are over 60 directory A's and a single directory B - AND a file in any directory A may accidentally match one in another (by date or sequence) - this is not a safe assumption...
Example
Let's say, for example, I store a log that the file "E17999_XXX_2111.txt" was created in directory C:\Test. I would store the filename, file path, file creation date, file length and the BOM for this file.
30 seconds later, I detect that the file "E17999_XXX_2111.txt" was created in directory C:\FinalDestination... now I have the task of determining whether:
a) the file is the same one created in C:\Test, therefore I can update the first log as complete and stop worrying about it.
b) the file is not the same and I somehow missed the previous steps - therefore I can ignore this file because it has found its way to the destination dir.
Research
So, in order to determine if the file created in the destination is exactly the same as the one created in the first instance, I've done a bit of research and found the following options:
a) filename compare
b) length compare
c) a creation-date compare
d) byte-for-byte compare
e) hash compare
Problems
a) As I said above, going by Filename alone is too presumptuous.
b) Again, just because the length of the contents of a file is the same, it doesn't necessarily mean the files are actually the same.
c) The problem with this is that a copied file is technically a new file, therefore the creation date changes. I would want to set the first log as complete regardless of the time elapsed between the file appearing in directory A and directory B.
d) Aside from the fact that this method is extremely slow, it appears there's an issue if the second file has somehow changed encoding - for example between ANSI and ASCII, which would cause a byte mismatch for things like ASCII quotes.
I would prefer not to assume that just because an ASCII ' has changed to an ANSI ', the file is now different, as it is near enough the same.
e) This seems to have the same downfalls as a byte-for-byte compare
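For reference, option (e) can be sketched like this; as noted, it treats any byte difference, including one caused by an encoding change, as a mismatch (the paths are placeholders):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class HashCompare
{
    public static bool SameContent(string path1, string path2)
    {
        using (var sha = SHA256.Create())
        {
            byte[] h1, h2;
            // Hash each file's raw bytes and compare the digests
            using (var s1 = File.OpenRead(path1)) h1 = sha.ComputeHash(s1);
            using (var s2 = File.OpenRead(path2)) h2 = sha.ComputeHash(s2);
            return h1.SequenceEqual(h2);
        }
    }

    static void Main()
    {
        // Hypothetical paths for illustration
        Console.WriteLine(SameContent(@"C:\Test\a.txt", @"C:\FinalDestination\a.txt"));
    }
}
```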
EDIT
It appears the actual issue I'm experiencing comes down to the reason for the difference in encoding between directories. I'm not currently able to access the code which deals with this part, so I can't tell why it happens, but I am looking to implement a solution which can compare files regardless of encoding, to determine "real" differences (i.e. not those whereby a byte has changed due to encoding).
SOLUTION
I've managed to resolve this now by using the SequenceEqual comparison below, after encoding the files to remove any bad data, whenever the initial comparison suggested by @Magnus failed to find a match because of it. Code below:
byte[] bytes1 = Encoding.Convert(Encoding.GetEncoding(1252), Encoding.ASCII, Encoding.GetEncoding(1252).GetBytes(File.ReadAllText(filePath1)));
byte[] bytes2 = Encoding.Convert(Encoding.GetEncoding(1252), Encoding.ASCII, Encoding.GetEncoding(1252).GetBytes(File.ReadAllText(filePath2)));
if (Encoding.ASCII.GetChars(bytes1).SequenceEqual(Encoding.ASCII.GetChars(bytes2)))
{
    // matched!
}
Thanks for the help!
You would then have to compare the string content of the files. The StreamReader (which ReadLines uses) should detect the encoding.
var areEquals = System.IO.File.ReadLines("c:\\file1.txt").SequenceEqual(
System.IO.File.ReadLines("c:\\file2.txt"));
Note that ReadLines will not read the complete file into memory.
I've recently written a small program to rename a bunch of files located in 6 directories. The program loops through each directory from a list and then renames each file in that directory using the File.Move method. The files are renamed to cart_button_1.png, with the number incrementing by 1 each time.
public static int RenameFiles(DirectoryInfo d, StreamWriter sqlStreamWriter,
    int incrementer, int category, int size)
{
    FileInfo[] files = d.GetFiles("*.png");
    foreach (FileInfo fileInfo in files)
    {
        File.Move(fileInfo.FullName,
            Path.Combine(d.FullName, "cart_button_" + incrementer + ".png"));
        incrementer++;
    }
    return incrementer;
}
The problem I'm encountering is that when I run the program more than once, it runs fine up until it hits the folder containing the 100th record. The d.GetFiles method retrieves all the files with the 100s first, causing an IOException, because the file which it is trying to rename already exists in the folder. The workaround I've found is just to select all the records with 100 in the filename and rename them all to 'z' or something so that they are batched together. Any thoughts or ideas on how to fix this? Possibly some way to sort the GetFiles result to look at the others first.
Using LINQ:
var sorted = files.OrderBy(fi => fi.FullName).ToArray();
Note that the above will sort by the textual values, so you may want to change that to order by the numeric value:
files.OrderBy(fi => int.Parse(fi.Name.Split(new []{'_','.'})[2]))
The above assumes that splitting by _ and . of a file name will result in an array with the third value being the numeric value.
The easiest workaround would be to check if the destination name exists prior to attempting the copy. Since you have the files array already, you can construct your destination name and if File.Exists() returns true, skip that numerical value.
I would also handle the exception that is thrown by File.Move (you want to test for existence first to avoid unnecessary exception throwing) because the file system is not frozen while your code works... so even testing for existence won't ensure the file isn't created in the meantime.
Finally, I think that running this code again against the same directory is going to duplicate all of the files again... probably not what is intended. I would filter the source filenames and avoid copying those already matching your pattern.
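One way to sketch that filtering plus the existence check (the directory path is a placeholder, and the pattern is inferred from the question's naming scheme):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class FilterAlreadyRenamed
{
    static void Main()
    {
        var d = new DirectoryInfo(@"C:\Images"); // hypothetical directory
        var renamed = new Regex(@"^cart_button_\d+\.png$");

        // Only rename files that don't already follow the target scheme
        FileInfo[] files = d.GetFiles("*.png")
            .Where(f => !renamed.IsMatch(f.Name))
            .ToArray();

        int incrementer = 1;
        foreach (var f in files)
        {
            // Skip numeric values whose target name is already taken
            string dest;
            do
            {
                dest = Path.Combine(d.FullName, "cart_button_" + incrementer + ".png");
                incrementer++;
            } while (File.Exists(dest));

            File.Move(f.FullName, dest);
        }
    }
}
```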
I was writing a Windows service which generates txt files in a target path based on some details from a database, but I have a problem: the service runs too fast!
I was getting the same file name where per-second variation is required so that I can avoid duplicates.
code :
using (transactionscope scope = new transactionscope )
{
string nowtime = datetime.now.today.tostring(HHMMss) // it was working fine
}
The files have to be generated with a specific file-naming convention, e.g. HHmmss - no milliseconds.
Can anyone give me ideas on how to handle this?
You can add milliseconds to the filename:
string nowtime = DateTime.Now.ToString("HHmmssfff");
See Custom Date and Time Format Strings.
A few notes about the code you posted:
MM is for months, not minutes. You should use lower case mm.
The parameter that ToString takes is a string.
Your code wouldn't compile as it is not correctly cased. Please use code that can be directly used in the future.
Update:
Seeing as you have to use this format, the only other choice is to "slow down" the service.
Adding a:
Thread.Sleep(1000);
In the right place (end of loop?) could do the trick.
Alternatively, you can change your code to append to a file if you are still within the same second.
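A minimal sketch of that alternative (the target path is an assumption): compute the per-second name, and let AppendAllText either create the file or append when a file for that second already exists.

```csharp
using System;
using System.IO;

class PerSecondFiles
{
    static void Main()
    {
        string dir = @"C:\Output"; // hypothetical target path
        string name = DateTime.Now.ToString("HHmmss") + ".txt";
        string path = Path.Combine(dir, name);

        // AppendAllText creates the file if it doesn't exist,
        // and appends when a file for this second already does
        File.AppendAllText(path, "record from the database" + Environment.NewLine);
    }
}
```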
If you are saying that you are creating multiple files with the same name (multiple files in the same second), then I would take the time out to the milliseconds. You can do this with:
DateTime.Now.ToString("HHmmssfff");
The fff denotes the three places to the right of the decimal (thousandths of a second).
I want to write code such that, if there is a text file placed in a specified path and one of the users edits it, enters new text and saves it, I can get the text which was appended last.
I have the file size from both before and after the text was appended.
My text file size is 1204 KB; from that, I need to take only the last 200 KB of text. Is that possible?
This can only be done if you're monitoring the file size in real-time, since files do not maintain their own histories.
If watching the files as they are modified is a possibility, you could perhaps use a FileSystemWatcher and calculate the increase in file size upon any modification. You could then read the bytes appended since the file last changes, which would be very straightforward.
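A rough sketch of that idea (the watched path is a placeholder; real code would also need to debounce the duplicate Changed events the watcher can raise, and handle the file still being locked by the writer):

```csharp
using System;
using System.IO;
using System.Text;

class AppendedTextReader
{
    static long lastSize;

    static void Main()
    {
        string path = @"C:\Watched\log.txt"; // hypothetical file
        lastSize = new FileInfo(path).Length;

        var watcher = new FileSystemWatcher(Path.GetDirectoryName(path), Path.GetFileName(path));
        watcher.Changed += OnChanged;
        watcher.EnableRaisingEvents = true;
        Console.ReadLine(); // keep the process alive while watching
    }

    static void OnChanged(object sender, FileSystemEventArgs e)
    {
        long newSize = new FileInfo(e.FullPath).Length;
        if (newSize > lastSize)
        {
            // Read only the bytes added since the last known size
            using (var fs = File.OpenRead(e.FullPath))
            {
                fs.Seek(lastSize, SeekOrigin.Begin);
                var buffer = new byte[newSize - lastSize];
                fs.Read(buffer, 0, buffer.Length);
                Console.WriteLine(Encoding.UTF8.GetString(buffer));
            }
        }
        lastSize = newSize;
    }
}
```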
Do you know how big the file was before the user appended the text? If not, there's no way of telling... files don't maintain a revision history (in most file systems, anyway).
You can keep track of the file pointer. E.g., if you are using the C language, you can go to the end of the file using fseek(fp, 0, SEEK_END) and then use ftell(fp), which will give you the current position of the file pointer. After the user edits and saves the file, when you rerun the code you can compare the new position with the original position. If the new position is greater than the original position, seek to the original position and read that number of bytes from the file pointer.
As #Jon Skeet alludes to in his answer, the only way to tell specifically what text was "appended" is by knowing how large the file was before it was changed. The rest of the characters are thus what was "appended".
Note that I quote appended above since I get two conflicting meanings from your question; edited and appended.
If the user only appends text, which is taken to mean "add more text only at the end", then the previous-size approach should in theory work.
However, if the user freely edits the text, by adding text in random spots, and perhaps even removing or changing existing text, then you need a whole 'nother approach to this.
If it's the latter, I might have something you could use, a binary patching implementation that can also be used to figure out from an older copy of the same file what was changed in a newer copy. It isn't easy to use, and might not give you exactly what you want, but as I said, it's hard to tell exactly what your question is.
If your program is running the entire time, you could grab a copy of the file in memory. Then in a separate thread periodically read the new file and compare the two.
If you want your program to be notified when file is changed, use FileSystemWatcher. However, it will only notify you, when file is changed while your program is running and will not provide you with appended text. You will get only information about which file was changed.
FileSystemWatcher watcher = new FileSystemWatcher(Environment.CurrentDirectory, "test.txt");
while (true)
{
    var changedResult = watcher.WaitForChanged(WatcherChangeTypes.Changed);
    Console.WriteLine(changedResult.Name);
}
Or:
FileSystemWatcher watcher = new FileSystemWatcher(Environment.CurrentDirectory, "test.txt");
watcher.Changed += watcher_Changed;
static void watcher_Changed(object sender, FileSystemEventArgs e)
{
Console.WriteLine(e.FullPath);
Console.WriteLine(e.ChangeType);
}
Best solution imo is to write a small app which has to be used to change the file in question. This application can then insert additional info into the file which allows you to keep the entire revision history.