I have a faulty hard drive that works intermittently. After cold booting, I can access it for about 30-60 seconds, then the hard drive fails. I'm willing to write software to back up this drive to a new, bigger disk. I can develop it under GNU/Linux or Windows, I don't care.
The problem is: I can only access the disk for some time, and some files are big enough that they will take longer than that to copy. For this reason, I'm thinking of backing up the entire hard disk in smaller pieces, something like BitTorrent does. I'll read some megabytes and store them before trying to read another set. My main loop would be something like this:
while (1) {
    if (!check_harddrive()) { sleep(100ms); continue; }   /* wait for the drive to come back */
    read_some_megabytes();                                /* grab the next small chunk */
    if (!check_harddrive()) { sleep(100ms); continue; }   /* drive died mid-read: retry this chunk */
    save_data();                                          /* persist the chunk to the good disk */
    update_reading_pointer();                             /* advance only after a successful save */
    if (all_done) { break; }
}
The problem is the check_harddrive() function. I'm willing to write this in C/C++ for maximum API/library compatibility. I'll need some control over my file handles to check whether they are still valid, and I need the read to return (even if it returns bad data) rather than hang if the drive fails during the copy process.
Maybe C# would give me the best results if I abuse hardware exceptions?
Another approach would be measuring how long I can read after a power cycle, then writing a program that reads only during that window and flags me when it's time to power-cycle the hard drive again.
What would you do in this case? Are there any tools/utilities that already do this?
Oh, there is a GREAT app for reading damaged optical media called IsoPuzzle. It's not mine, I just wanted to share something related to my problem.
!EDIT!
Some clarifications: I'm a home user, a computer engineering student at college, and I'd rather lose the data than spend thousands of dollars recovering it. The hard drive is still covered by Seagate's warranty, but since they gave me 5 years of warranty, I want to try everything possible before the time runs out.
When I say cold booting, I mean booting after some seconds without power. Hot booting would be rebooting your computer; cold booting would be shutting it down, waiting a few seconds, then booting it up again. Since the hard disk in question is internal but SATA, I can just disconnect the power cable, wait a few seconds and connect it again.
For now I'll go with robocopy; I'm reading up on how I can use it. If I can script this instead of coding it myself, it'll be even easier.
!EDIT2!
I wasn't clear: my drive is a Seagate 7200.11. It's known to have buggy firmware, and that isn't always fixable with a simple firmware update (not after the bug appears). The drive is physically in 100% working condition; only the firmware is screwed up, making it enter an infinite busy state after some seconds.
I would work this from the hardware angle first. Is it an external drive? If so, can you try it in a different enclosure?
You mention cold-booting works, then it quits. Is this heat related? Have you tried running the hard drive for an extended period somewhere cold, like a freezer?
From the software side, I'd have a second thread keep an eye on a progress counter that is updated by a loop repeatedly reading small amounts of data; the second thread could then signal failure via a timeout you define.
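A minimal sketch of that idea in C#, assuming a 5-second timeout is long enough to decide the drive has dropped out (the timeout, class, and member names are all made up for illustration):

using System;
using System.Threading;

class DriveWatchdog
{
    static long progress;          // bumped by the reader thread after every successful small read
    static volatile bool failed;   // set by the monitor thread when progress stalls

    static void WatchProgress()
    {
        long last = Interlocked.Read(ref progress);
        while (!failed)
        {
            Thread.Sleep(5000);    // assumed timeout; tune to the drive's behaviour
            long now = Interlocked.Read(ref progress);
            if (now == last)
            {
                failed = true;     // reader made no progress: assume the drive dropped out
                Console.WriteLine("Drive appears to have dropped out.");
            }
            last = now;
        }
    }

    // The reader thread would call Interlocked.Increment(ref progress)
    // after each successful chunk read, and check 'failed' before continuing.
}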
I think the simplest way for you is to copy the entire disk image. Under Linux your disk will appear as a block device, /dev/sdb1 for example.
Start copying the disk image until the read error appears. Then wait for the user to "repair" the disk and resume reading from the last position.
You can easily mount the disk image file and read its contents; see the -o loop option for mount.
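A rough sketch of this chunk-and-resume approach in C#, assuming the failing disk shows up as /dev/sdb, that the program runs under Linux with Mono/.NET available, and that a failed read throws an IOException rather than hanging (with this particular firmware bug it may simply hang, so you would still want the watchdog/timeout idea on top). File names, chunk size, and the resume-file scheme are arbitrary choices:

using System;
using System.IO;

class ChunkedImager
{
    const int ChunkSize = 4 * 1024 * 1024;   // 4 MiB per read

    static void Main()
    {
        // Resume from the last known-good offset if a previous run was interrupted.
        long offset = File.Exists("resume.pos")
            ? long.Parse(File.ReadAllText("resume.pos"))
            : 0;

        using (var src = new FileStream("/dev/sdb", FileMode.Open, FileAccess.Read))
        using (var dst = new FileStream("disk.img", FileMode.OpenOrCreate, FileAccess.Write))
        {
            var buffer = new byte[ChunkSize];
            src.Seek(offset, SeekOrigin.Begin);
            dst.Seek(offset, SeekOrigin.Begin);

            while (true)
            {
                int read;
                try { read = src.Read(buffer, 0, buffer.Length); }
                catch (IOException)
                {
                    // Drive dropped out: remember where we were and bail out
                    // so the user can power-cycle the drive and rerun the program.
                    File.WriteAllText("resume.pos", offset.ToString());
                    return;
                }
                if (read == 0) break;            // end of device
                dst.Write(buffer, 0, read);
                offset += read;
            }
            File.Delete("resume.pos");           // finished cleanly
        }
    }
}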
Cool the disk down before use. I heard that helps.
You might be interested in Robocopy ("Robust File Copy"). Robocopy is a command-line tool that can tolerate network outages and resume copying where it previously left off (incomplete files are marked with a date stamp corresponding to 1980-01-01 and contain a recovery record, so Robocopy knows where to continue from).
You know... I like being "lazy"... Here is what I would do:
I would write 2 simple scripts. One of them would start robocopy (with its persistence features turned off) and begin the copy, while the other would periodically check whether the drive is still working (maybe by trying to list the contents of the root directory; if that takes more than a few seconds, then it is dead... again...), and if the HDD has stopped working it would restart the machine. Have them start after login and set up auto-login, so when the machine reboots it automatically continues.
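A sketch of what the checker half could look like in C#, assuming the failing drive is mounted as E:\ and that rebooting via shutdown.exe is acceptable; the 10-second timeout and 5-second poll interval are guesses:

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class DriveChecker
{
    static void Main()
    {
        while (true)
        {
            var alive = Task.Run(() =>
            {
                try { Directory.GetFileSystemEntries(@"E:\"); return true; }
                catch (IOException) { return false; }   // an I/O error also counts as "dead"
            });

            if (!alive.Wait(TimeSpan.FromSeconds(10)) || !alive.Result)
            {
                // Listing the root hung or failed: assume the drive has entered
                // its busy state and reboot so the copy can resume after auto-login.
                Process.Start("shutdown", "/r /t 0");
                return;
            }
            Thread.Sleep(5000);   // check again in a few seconds
        }
    }
}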
From a "I need to get my data back" perspective, if your data is really valuable to you, I would recommend sending the drive to a data recovery specialist. Depending on how valuable the data is, the cost (probably several hundred dollars) is trivial. Ideally, you would find a data recovery specialist that doesn't just run some software to do the recovery - if the software approach doesn't work, they should be able to do things like replace the circiut board on the drive, and probably other things (I am not a data recover specialist).
If the value of the data on the drive doesn't quite rise to that level, you should consider purchasing one of the many pieces of data recovery software. For example, I have personally used and would recommend GetDataBack from Runtime Software (http://www.runtime.org). I've used it to recover a failing drive, and it worked for me.
And now on to more general information... The standard process for data recovery off of a failing drive is to do as little as possible on the drive itself. You should unplug the drive, and stop attempting to do anything. The drive is failing, and it is likely to get worse and worse. You don't want to play around with it. You need to maximize your chances of getting the data off.
The way the process works is to use software that reads the drive block-by-block (not file-by-file), and makes an image copy of the drive. The software attempts to read every block, and will retry the reads if they fail, and writes an image file which is an image of the entire hard drive.
Once the hard drive has been imaged, the software then works against the image to identify the various logical parts of the drive - the partitions, directories, and files. And then it enables you to copy the files off of the image.
The software can typically "deduce" structures from the image. For example, if the partition table is damaged or missing, the software will scan through the entire image looking for things that might be partitions, and if they look enough like partitions, it will treat them as partitions and see if it can find directories and files. So good software is written using a lot of knowledge about the different structures on the drive.
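As a sketch of the retry-and-skip imaging loop described above (not of any particular product), here is one way it might look, assuming the source is exposed as an ordinary seekable stream; real imaging tools read the raw device with sector-aligned I/O. Unreadable blocks are zero-filled and their offsets recorded so the image keeps the same size and layout as the drive:

using System;
using System.Collections.Generic;
using System.IO;

static class Imager
{
    public static List<long> Image(Stream source, Stream image, int blockSize = 65536, int retries = 3)
    {
        var badBlocks = new List<long>();
        var buffer = new byte[blockSize];
        long offset = 0;

        while (offset < source.Length)
        {
            int read = 0;
            for (int attempt = 0; attempt < retries && read == 0; attempt++)
            {
                try
                {
                    source.Seek(offset, SeekOrigin.Begin);
                    read = source.Read(buffer, 0, blockSize);
                }
                catch (IOException) { /* retry this block */ }
            }

            if (read == 0)
            {
                // Give up on this block: write zeros and remember the offset.
                Array.Clear(buffer, 0, blockSize);
                read = (int)Math.Min(blockSize, source.Length - offset);
                badBlocks.Add(offset);
            }

            image.Write(buffer, 0, read);
            offset += read;
        }
        return badBlocks;
    }
}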
If you want to learn how to write such software, good for you! My recommendation is that you start with books about how various operating systems organize data on hard drives, so that you can start to get an intuitive feel for how such software might work with drive images to pull data from them.
I've used FileSystemWatcher in the past. However, I am hoping someone can explain how it actually is working behind the scenes.
I plan to utilize it in an application I am making and it would monitor about 5 drives and maybe 300,000 files.
Does FileSystemWatcher actually do any "checking" on the drive, as in, will it be causing wear and tear on the drive? Also, does it affect the hard drive's ability to "sleep"?
This is where I do not understand how it works: whether it is scanning the drives on a timer, or waiting for some type of notification from the OS before it does anything.
I just do not want to implement something that is going to cause extra reads on a drive and keep the drive from sleeping.
Nothing like that. The file system driver simply monitors the normal file operations requested by other programs running on the machine and matches them against the filters you've selected. If there's a match, it adds an entry to an internal buffer that records the operation and the filename, completes the driver request, and raises an event in your program. You get the details of the operation passed to you from that buffer.
So nothing extra happens to the operations themselves; there is no extra disk activity at all. It is all just software that runs. The overhead is minimal; nothing slows down noticeably.
The short answer is no. The FileSystemWatcher calls the ReadDirectoryChangesW API passing it an asynchronous flag. Basically, Windows will store data in an allocated buffer when changes to a directory occur. This function returns the data in that buffer and the FileSystemWatcher converts it into nice notifications for you.
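For reference, a minimal FileSystemWatcher example illustrating the push model described above: no polling, the handlers run only when the OS reports a change. The path and filters below are placeholders:

using System;
using System.IO;

class WatcherDemo
{
    static void Main()
    {
        using (var watcher = new FileSystemWatcher(@"D:\data", "*.*"))
        {
            watcher.IncludeSubdirectories = true;
            watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite;

            watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
            watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
            watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
            watcher.Renamed += (s, e) => Console.WriteLine("Renamed: " + e.FullPath);

            watcher.EnableRaisingEvents = true;   // start receiving notifications
            Console.ReadLine();                   // keep the process alive
        }
    }
}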
I know this question is somewhat subjective, but I think it might be a valid question to ask.
I want to create a program that watches folders on a file server. The program itself runs on the server, so network folders don't have to be monitored.
I want to get an event in case a folder/file gets deleted, moved, created and so on. This information will be written to disk (where the network users can't access it). I would need the name of the file and the user who caused the change, and maybe more info, but that's the minimum requirement for now.
1. In C# we can use the FileSystemWatcher class, which is very unreliable. (Examples of that can be found all over StackOverflow.)
2. We could also use the auditing feature of Windows 7 Professional (which I am running), but that produces many, many confusing entries in the system log. I just can't get reliable information from those.
3. One could just poll the files and compare. This is the kind of brute-force approach I would like to avoid. Also, while the other methods can be almost realtime, this one is not.
So, I could think of combining 1. and 3., and maybe even 2., too, but what is the clean, the good way to do this?
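One possible shape for combining 1 and 3, sketched in C#: rely on FileSystemWatcher for near-realtime events, and periodically diff a snapshot of the tree against the previous one to catch anything the watcher missed. The class name and the idea of driving Reconcile from a timer are made up for illustration, and attributing changes to a user would still need the auditing log or a file system minifilter:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class ReconcilingWatcher
{
    readonly string root;
    HashSet<string> lastSnapshot = new HashSet<string>();

    public ReconcilingWatcher(string root) { this.root = root; }

    // Call this from a timer (e.g. every few minutes) alongside the live watcher.
    public void Reconcile()
    {
        var current = new HashSet<string>(
            Directory.EnumerateFileSystemEntries(root, "*", SearchOption.AllDirectories));

        foreach (var added in current.Except(lastSnapshot))
            Console.WriteLine("Created (possibly missed by the watcher): " + added);
        foreach (var gone in lastSnapshot.Except(current))
            Console.WriteLine("Deleted (possibly missed by the watcher): " + gone);

        lastSnapshot = current;
    }
}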
I was wondering if it is possible to prevent users from copying and pasting an external file while my C# application is running?
Example: the user runs the application; while it is running, the clipboard cannot be used; when the application finishes, the clipboard is enabled again and the user can copy and paste as normal.
I found this prevent-cut-paste-copy-delete-re-naming-of-files-folders
Thanks for any help!
Answer to: "The user runs my launcher this runs the game and then connects to server where they download a file, this file is stored in a appdata this is the file i dont want people to copy".
The only option to prevent the user from copying a file on their own computer is to not send the file there in the first place.
If you merely want to discourage people from copying the file (as would be the case with "disable copy/paste"), then opening the file as non-shareable with delete-on-close may be enough.
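A sketch of that non-shareable, delete-on-close open in C#; the path below is hypothetical, and this only discourages copying, it does not make it impossible:

using System.IO;

class LockedFile
{
    static FileStream OpenExclusive(string path)
    {
        return new FileStream(
            path,                        // e.g. a file under %APPDATA% (hypothetical)
            FileMode.Open,
            FileAccess.Read,
            FileShare.None,              // no other process can open it while this handle is held
            4096,
            FileOptions.DeleteOnClose);  // the file is removed when the handle closes
    }
}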
Very difficult, if not impossible, and most likely totally unnecessary, what you have in mind. The clipboard belongs to the OS, not just to your application. Think about how to solve the root of your problem another way. If you explain what you're trying to do, maybe somebody will suggest how they would solve that particular problem. Why are you using the clipboard to maintain user/application state? If you accept input that way, then copy that data into your application's memory (or elsewhere) and work with it there; don't expect it to stay in the clipboard until your app is done working with it. Also note that it would be against all usability rules to update/change the contents of the clipboard with the result of that calculation, if that's where your mind is going as you're reading this.
That would be plain evil, whatever the purpose. Remember, whenever you get to a task that requires hacks just to work around a security layer that is there for a reason, or (as in your case) to mess with low-level operating system functionality to change its behavior, ask yourself whether it even makes sense.
You either don't need that feature, or you are searching for a security issue in the system/software which will be fixed within weeks or months.
You may actually manage to implement some ugly, unreliable obstacles that prevent the user from doing those operations, but the user will always be able to find a different way to do them. Unless you are dealing with some DRM stuff, which I doubt.
And anyway, preventing the user from copying and pasting? That definitely won't make for a happy user...
If you were going to download the file on every execution anyway, then you could download it at the game's "loading screen" and keep it in an in-memory stream. Less evil than hooking the clipboard, and pulling it out would involve debuggers or the ability to extract it from the swap file...
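A sketch of that in-memory approach, assuming the file comes from an HTTP endpoint (the URL and class name are made up); nothing ever touches the disk, at the cost of re-downloading on every launch:

using System.IO;
using System.Net;

class InMemoryAsset
{
    static MemoryStream Download(string url)
    {
        using (var client = new WebClient())
        {
            // e.g. url = "https://example.com/asset.bin" (hypothetical)
            byte[] data = client.DownloadData(url);
            return new MemoryStream(data, writable: false);   // read-only, memory-backed copy
        }
    }
}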
I'm not a fan of this solution (and I suspect you/your client will not be either) due to the bandwidth cost of downloading the core data of the game at every launch...
This question continues from what I learnt from my question yesterday, titled "using git to distribute nightly builds".
In the answers to that question it became clear that git would not suit my needs, and I was encouraged to re-examine using BitTorrent.
Short Version
Need to distribute nightly builds to 70+ people each morning; would like to use BitTorrent to load-balance the transfer.
Long Version
NB. You can skip the below paragraph if you have read my previous question.
Each morning we need to distribute our nightly build to a studio of 70+ people (artists, testers, programmers, production, etc.). Up until now we have copied the build to a server and written a sync program that fetches it (using Robocopy underneath); even with mirrors set up, the transfer speed is unacceptably slow, taking up to an hour or longer to sync at peak times (off-peak times are roughly 15 minutes), which points to a hardware I/O bottleneck and possibly network bandwidth.
What I know so far
What I have found so far:
I have found the excellent entry on Wikipedia about the BitTorrent protocol which was an interesting read (I had only previously known the basics of how torrents worked). Also found this StackOverflow answer on the BITFIELD exchange that happens after the client-server handshake.
I have also found the MonoTorrent C# Library (GitHub Source) that I can use to write our own tracker and client. We cannot use off the shelf trackers or clients (e.g. uTorrent).
Questions
In my initial design, I have our build system creating a .torrent file and adding it to the tracker. I would super-seed the torrent using our existing mirrors of the build.
Using this design, would I need to create a new .torrent file for each new build? In other words, would it be possible to create a "rolling" .torrent where, if the content of the build has only changed by 20%, that is all that needs to be downloaded to get the latest?
... Actually, in writing the above question, I think that I would need to create a new file; however, I would be able to download it to the same location on the user's machine and the hash check would automatically determine what I already have. Is this correct?
In response to comments
A completely fresh sync of the entire build (including the game, source code, localized data, and disc images for PS3 and X360) is ~37,000 files and comes in just under 50 GB. This will increase as production continues. This sync took 29 minutes to complete at a time when only 2 other syncs were happening, which is low-peak if you consider that at 9am we would have 50+ people wanting to get the latest build.
We have investigated the disk I/O and network bandwidth with the IT department; the conclusion was that the network storage was being saturated. We are also recording sync statistics to a database, and these records show that even with a handful of users we are getting unacceptable transfer rates.
As for not using off-the-shelf clients, it is a legal concern to have an application like uTorrent installed on users' machines, given that other items can easily be downloaded with that program. We also want a custom workflow for determining which build you want to get (e.g. only PS3 or X360, depending on which DEVKIT you have on your desk) and notifications when new builds are available, etc. Creating a client using MonoTorrent is not the part that I'm concerned about.
To the question whether or not you need to create a new .torrent, the answer is: yes.
However, depending a bit on the layout of your data, you may be able to do some simple semi-delta-updates.
If the data you distribute is a large collection of individual files, where each build changes only some of the files, you can simply create a new .torrent file and have all clients download it to the same location as the old one (just as you suggest). The clients would first check the files that already exist on disk, update the ones that have changed, and download new files. The main drawback is that removed files would not actually be deleted at the clients.
If you're writing your own client anyway, deleting files on the filesystem that aren't in the .torrent file is a fairly simple step that can be done separately.
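A sketch of that cleanup step, assuming you can obtain the list of relative paths the new .torrent describes from whatever torrent library you use (the helper below simply takes them as input; names are made up for illustration):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class StaleFileCleaner
{
    public static void Clean(string buildRoot, IEnumerable<string> torrentRelativePaths)
    {
        // Normalize the wanted paths to full paths for comparison.
        var wanted = new HashSet<string>(
            torrentRelativePaths.Select(p => Path.GetFullPath(Path.Combine(buildRoot, p))),
            StringComparer.OrdinalIgnoreCase);

        foreach (var file in Directory.EnumerateFiles(buildRoot, "*", SearchOption.AllDirectories))
        {
            if (!wanted.Contains(Path.GetFullPath(file)))
                File.Delete(file);   // not part of the new build: remove it
        }
    }
}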
This does not work if you distribute an image file, since the bits that stayed the same across versions may have moved, yielding different piece hashes.
I would not necessarily recommend using super-seeding. Depending on how strict the super seeding implementation you use is, it may actually harm transfer rates. Keep in mind that the purpose of super seeding is to minimize the number of bytes sent from the seed, not to maximize the transfer rate. If all your clients are behaving properly (i.e. using rarest first), the piece distribution shouldn't be a problem anyway.
Also, creating a torrent and hash-checking a 50 GiB torrent puts a lot of load on the drive; you may want to benchmark the BitTorrent implementation you use for this to make sure it's performant enough. At 50 GiB, the difference between implementations may be significant.
Just wanted to add a few non-BitTorrent suggestions for your perusal:
If the delta between nightly builds is not significant, you may be able to use rsync to reduce your network traffic and decrease the time it takes to copy the build. At a previous company we used rsync to submit builds to our publisher, as we found our disc images didn't change much build-to-build.
Have you considered simply staggering the copy operations so that clients aren't slowing down the transfer for each other? We've been using a simple Python script internally when we do milestone branches: the script sleeps until a random time in a specified range, wakes up, downloads and checks out the required repositories, and runs a build. The user runs the script when leaving work for the day, and when they return they have a fresh copy of everything ready to go.
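The same staggering idea sketched in C# (the script described above was Python): sleep until a random point inside a window, then kick off the sync. The 3-hour window and the sync-build.exe command are placeholders, not real tools:

using System;
using System.Diagnostics;
using System.Threading;

class StaggeredSync
{
    static void Main()
    {
        var rng = new Random();
        int delayMinutes = rng.Next(0, 180);              // somewhere inside a 3-hour window
        Thread.Sleep(TimeSpan.FromMinutes(delayMinutes)); // wait out the random delay

        // Hypothetical sync tool; replace with whatever actually fetches the build.
        Process.Start("sync-build.exe", "--latest").WaitForExit();
    }
}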
You could use BitTorrent Sync, which is something of an alternative to Dropbox but without a server in the cloud. It allows you to synchronize any number of folders and files of any size with several people, and it uses the same algorithms as the BitTorrent protocol. You can create a read-only folder and share the key with others. This method removes the need to create a new torrent file for each build.
Just to throw another option into the mix, have you considered BITS (Background Intelligent Transfer Service)? I haven't used it myself, but from reading the documentation it supports a distributed peer-caching model, which sounds like it would achieve what you want.
The downside is that it is a background service, so it will give up network bandwidth in favour of user-initiated activity; nice for your users, but possibly not what you want if you need data on a machine in a hurry.
Still, it's another option.
The Problem
Our company makes specialized devices running Windows XP (Windows XPe, to be precise). One of the unbending legal requirements we face is that we must quickly detect when a fixed IDE drive is removed; quickly as in within a few seconds.
The drives in question are IDE drives. They are also software-protected from writes with an EWF (Enhanced Write Filter) layer. The EWF layer sits under the file system, protecting the disk from writes. If you change or write something on an EWF-protected volume, the actual changes happen only in a memory layer (but the file system isn't aware of that).
The problem is that Windows itself doesn't seem to notice fixed drive removal. You can pull the drive out of the machine, and Windows Explorer will be happy to let you browse directories and even open files if they happen to still be cached in memory. And thanks to the EWF layer, I can even seem to write files to the missing drive.
I need a clean software-only solution. Ideally in C#/.Net 1.1, but I have no problem with using pinvoke or C++.
Things I can't do
No, I can't retrofit thousands of devices with new hardware.
No, we can't just super-glue drives in to meet legal requirements.
No, a normal file write/read won't detect the situation, thanks to the EWF layer.
No, we can't turn off the EWF layer.
No, I can't ignore legal requirements, even if they are silly.
No, I can't detect fixed drive removal the way I would for a USB or other removable drive. These are fixed drives.
No, I can't use WMI (Windows Management Instrumentation). It isn't installed on our machines.
No, I can't use versions of .NET past 1.1; they won't fit on our small drives. (But if an easy solution exists in a higher version of .NET, I might be able to port it back to 1.1.)
Current awkward solution
I'm not happy with our current solution. I'm looking for something more elegant and efficient.
What I'm currently doing involves two threads.
Thread A polls the drive. It first creates a special file on the drive using Kernel32.dll:
Kernel32.CreateFile(
    filename,
    File_Access.GenericRead | File_Access.GenericWrite,
    File_Share.Read | File_Share.Write,
    IntPtr.Zero,                                   // no security attributes
    CreationDisposition.CreateAlways,
    CreateFileFlagsAndAttributes.File_Attribute_Hidden |
        CreateFileFlagsAndAttributes.File_Attribute_System,
    IntPtr.Zero);                                  // no template file
Then it polls the drive by calling
Kernel32.FlushFileBuffers(fileHandle);
If the drive has been removed, then thread A will hang for a long time before returning an error code.
Thread B polls thread A.
If thread B sees that thread A has locked up (hasn't updated a special variable in a while), then thread B raises an event that the drive has been removed.
My current solution works, but I don't like it. If anyone knows a cleaner software-only solution, I would appreciate it.
I'm shocked and amazed that the system doesn't fall over dead if you yank out a fixed IDE drive. Like, really shocked. But, hey...
Are you sure can't just fix this with super glue? :)
First, the reason why Windows doesn't notice is because notification of device removal has to come from the bus driver. In this case, the IDE bus doesn't support what we call "surprise remove" so no one ever gets told that the disk is unplugged. I suspect that communications just start timing out, which is why your flush trick works.
Not sure if you're going to come up with any cleaner solution though. If you really, really need this and can restrict it to just a particular release of XP, someone might be able to analyze the drivers involved here and exploit a path that would give you a quicker result. But there's clearly nothing architected in Windows to deal with this and so that's like real work.
-scott
Have you looked in here:
http://msdn.microsoft.com/en-us/library/aa363217(VS.85).aspx
Looks like what you are looking for.