Recently I was writing a simple script (maybe 200 lines total), and I did something quickly that I can't even remember, but it deleted my script completely, without a trace!!! Maybe I accidentally pressed a bad key combination in Notepad2; I don't know what happened. I ran the script, it ran without errors, and it was finally working the way I wanted. So I closed the editor. I had saved it on my desktop, and now it was gone. It wasn't in the Recycle Bin either. It was completely gone! How did it disappear in a flash? Have you ever had a similar experience, where you worked on something and it mysteriously disappeared without a trace?
I have a program called Recuva, which runs on Windows, looks for deleted files, and tries to restore them. But it did not find the file I was looking for. I was really surprised, because usually if you delete something and go to Recuva right away, it will find the file, and chances are it can still restore it. But it couldn't even locate it. So my next thought was to open HxD, a hex editor that can view files, memory, or whole disks. I selected the main hard drive, which is 2 TB, and wondered: how am I going to find this script? I thought of a unique line, something that appears in my script but likely nowhere else on the disk, and typed that into the search. I expected it to take forever. But no! It found it within a minute, and I was able to salvage my script! It's a miracle!!! This is one reason why you don't want to encrypt your hard drive. Lol
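Incidentally, the same trick can be scripted in Perl in a few lines. A rough sketch, assuming a Linux block device read as root (the device name and search string are placeholders; on Windows you'd use something like HxD instead):

    #!/usr/bin/perl
    # rawgrep.pl - scan a raw device for a unique string, HxD-style.
    # Usage (illustrative): sudo perl rawgrep.pl /dev/sda 'my unique line'
    use strict;
    use warnings;

    my ($dev, $needle) = @ARGV;
    die "usage: $0 <device> <string>\n" unless defined $needle && length $needle > 1;

    open my $fh, '<:raw', $dev or die "open $dev: $!";
    my $chunk   = 1024 * 1024;          # read 1 MiB at a time
    my $overlap = length($needle) - 1;  # so a match can't hide across a boundary
    my ($buf, $tail, $pos) = ('', '', 0);

    while (my $n = sysread $fh, $buf, $chunk) {
        my $hay = $tail . $buf;
        if ((my $off = index $hay, $needle) >= 0) {
            printf "found at byte offset %d\n", $pos - length($tail) + $off;
            last;
        }
        $tail = substr $hay, -$overlap;  # carry boundary bytes into the next round
        $pos += $n;
    }
    close $fh;

Once you have the byte offset, you can read out the surrounding sectors and reassemble the file by hand, just as in a hex editor.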
(I guess this post could have been formatted as a poll, but I'm not sure. It's just something I thought of. Not really Perl related or PerlMonks related, so I wasn't sure where to post it.)
Re: Have you ever lost your work?
by talexb (Chancellor) on Jan 08, 2024 at 18:10 UTC
My source code is almost always in a git repo, so the answer is usually no. However, I did attempt to go into the crontab editor recently, but used the -r argument instead of -e (they're right next to each other on the keyboard), and ended up deleting the entire crontab: 500+ lines of useful stuff.
Fortunately, Past Me had a job that ran twice a day and saved the crontab to a file called crontab.latest, so I was able to recover easily. Past Me has made some blunders, but there have also been some clever moments.
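For the archives, that safety net is a single crontab line; something like this (the schedule and file name here are invented):

    # dump the live crontab to a file twice a day
    0 6,18 * * * crontab -l > "$HOME/crontab.latest"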
If I'm writing anything of size or complexity, I just add it to a local repo. Creating a repo is super easy (it doesn't have to be pushed to the cloud), and then you can create a branch, go off and try something weird or different, secure in the knowledge that the original lame (but working) version is available if you need it.
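The whole local-only workflow is just a handful of commands (the file and branch names are examples):

    git init                      # one-time, in the script's directory
    git add myscript.pl
    git commit -m 'working version'
    git checkout -b experiment    # go off and try something weird
    # ...and if it all goes sideways:
    git checkout -                # back to the original lame-but-working version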
Alex / talexb / Toronto
Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.
Re: Have you ever lost your work?
by stevieb (Canon) on Jan 09, 2024 at 09:32 UTC
Have you ever lost your work?
Yeah, though not in the way I believe your question was intended. I glanced at your question, and I'll answer it as such.
I lost nearly everything when I questioned a major corporation on their ethical approach to handling personal data. I strongly stated that all data should be encrypted and protected, and accessed only via 2FA (this was many years ago).
They disagreed. My name was Mudd. They got infiltrated and lost millions. The executives jumped the plane with golden parachutes.
With that, I lost everything.
My software though? Hell no... it's all open source, and I always use a VCS. Even with VCS (GitHub, GitLab), I back things up from my MacBook to my local storage system, which is rsync'ed to another system at another house in my city, then synchronized to iDrive and also to iCloud. Not only that, but my Perl software is always on CPAN.
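The local leg of that chain is a one-liner; roughly this (host and paths are invented):

    # mirror the repo checkouts; -a keeps permissions/times, --delete mirrors removals
    rsync -av --delete ~/repos/ backupbox:/srv/backups/repos/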
My dignity can be lost, but my software will never be.
-stevieb
Re: Have you ever lost your work?
by afoken (Chancellor) on Jan 10, 2024 at 18:55 UTC
Another war story: My coolest recovery
My brother called me. Old PC; the harddisk could only be read for about 5 minutes, then it was no longer able to find any sector. Of course it was a Sunday, all stores closed, and no spare parts at my brother's house or nearby. "I can try to recover what is readable, I've got sufficient disk space on my server, and I should be able to find a spare disk." About two and a half hours later, he stood at my door.

We very quickly found out that the harddisk stopped working at around 25 °C (measured via SMART), and it took about 5 minutes to get the disk that warm. So I searched out my longest IDE cable and some power adapters to extend the power lines to the disk, then we put the PC case on a chair carefully arranged in front of my secondary fridge, placed the harddisk inside the fridge, and turned the fridge to maximum cooling. We added mains power, network cable, monitor, and keyboard, started the PC, and looked at the SMART data: way below 10 °C right after boot. So we started to dd the entire disk to my server. It took some hours to back up the disk, and some more to copy the disk image to a "new" disk, but it did work. No data lost, thanks to pure luck.
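The imaging step itself is the standard one; something like this (device and path are invented):

    # read the whole disk once, continuing past any bad sectors instead of aborting
    dd if=/dev/hdb of=/srv/recovery/old-disk.img bs=64k conv=noerror,sync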
Of course, the disk had announced its coming death by evil clicks and increased seek times weeks before, but nobody cared. And of course, there was no backup.
Alexander
Update: Fixed some typos found by soonix
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Wow, I am glad you were able to salvage the data from that dying hard drive! I don't think I have ever lost significant amounts of data in my life. Maybe a file here or there, but nothing extremely important. I always try to back up everything at least twice a year.
I remember my mom asked me one time to clean up her email folders. She wanted to get rid of some of her emails, and she told me which folders to delete. One of them was a family folder where she kept her conversations with her mom. I asked her, "ARE YOU SURE YOU DON'T NEED THIS?" She said, "Sure!" She thought she had copies elsewhere, but it turns out those were the only copies she had. And I hit delete. Gone. Some weeks later she found out that she did not have copies of those emails, so those treasured emails were gone for good. We were both extremely sad. Of course, her mom's computer would have had a copy of the same emails, but her mom had died years earlier, and we don't know what happened to her computer. Since then, I make backup copies of my mom's files every year, just in case.
Re: Have you ever lost your work?
by NERDVANA (Curate) on Jan 10, 2024 at 02:28 UTC
The last time I lost any major amount of code was about 20 years ago in college. I was defragmenting my harddrive, it was getting really hot, and I opened up the side of the case to point a desk fan at it. I turned on the fan, heard a pop, and heard my harddrive spin down. Then spin up. Then spin down. The harddrive was unrecoverable, and it had about two months of personal project code on it.
First lesson was never plug inductive loads into a surge protector on the clean side of a battery backup.
Second lesson was to always make frequent backups of anything important. I think most people learn this lesson the first time they kill a significant number of hours' worth of their own work.
Since then, I almost always push my code changes to a remote git repo (previously darcs, previously svn) the same day I write them. I have GitHub for the public stuff, and my own personal DigitalOcean ($5/mo) server for the rest. I also have a local backup server that takes drive snapshot copies of everything every few weeks (I leave it powered off most of the time to protect against surges). I ought to do that more frequently, but everything frequently edited that I really care about is in git.
Re: Have you ever lost your work? (disaster recovery)
by eyepopslikeamosquito (Archbishop) on Jan 09, 2024 at 14:57 UTC
harangzsolt33, congratulations on a thoughtful and meticulously planned meditation!
Further to the excellent replies you've already received, I thought I'd add a couple of anecdotes.

In the first small company I worked for, the husband and wife business owners took tape backups of our software home with them from the office every Friday night ... so they'd be able to resurrect their business in the event of an office fire or something. I doubt they ever did any serious disaster recovery testing though.

In larger companies I've worked for, auditors enforced regular off-site backups (stored in a bank vault IIRC), along with a formal disaster recovery plan. I'm not sure if these plans were designed for the business to survive a nuclear strike on the city that wiped out both the office and the bank vaults.
The Arctic World Archive (AWA) is a facility for data preservation, located in the Svalbard archipelago on the island of Spitsbergen, Norway, not far from the Svalbard Global Seed Vault. It contains data of historical and cultural interest from several countries, as well as all of American multinational company GitHub's open source code, in a deeply buried steel vault, with the data storage medium expected to last for 500 to 1,000 years.
-- from Arctic World Archive
If I understand this correctly, code on GitHub would even survive a global nuclear war and the destruction of civilization?
In the first small company I worked for, the husband and wife business owners took tape backups of our software home with them from the office every Friday night ... so they'd be able to resurrect their business in the event of an office fire or something. I doubt they ever did any serious disaster recovery testing though.
The only reason you do backups .. is so that you can do a restore.
And if you haven't tested your backup/restore procedure, then it's a little like Schroedinger's Cat. You don't know if you have a backup .. until you actually successfully do a restore.
Alex / talexb / Toronto
Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.
if you haven't tested your backup/restore procedure, then it's a little like Schroedinger's Cat. You don't know if you have a backup .. until you actually successfully do a restore.
Hey, it's war story time again! ;-)
On one of the last few days of my final year at university, a student-managed little server in my favorite lab lost a lot of data. I don't remember the exact details; I think it lost an entire harddisk. The server was an old tower PC, built around something like a Pentium II, with no redundancy at all: all consumer parts, no server parts, filled with old harddisks, and a big fan tied to the front of the case with old wires. I guess all of its parts were picked out of the dumpster. It ran Linux, probably an early version of Debian, and it had a SCSI tape streamer. Actually, two streamers: one online, one "offline" in the spare parts bin.
Someone had set up a cron job to use tar to write a backup to tape. Great idea; that's what tar was designed for. One of the students must have swapped the tapes each morning. Larger disks were added, and one day the tape was full and the backup failed. Some "clever" guy must have found tar's -z option to compress data using gzip, and added that option to the cron job. The backup worked again, the tapes had some room again, and nobody verified or tested the backup.
Then data was lost, and restoring from the backups failed. The tapes were worn out and had several read errors, and the streamers were dirty as hell. Now, tar can handle tapes with errors: it uses fixed-size blocks, and if a block is not readable, it can at least find the next file on the tape and continue from there. That way you won't get all of your data back, but probably a lot of it. But remember the -z option? The cron job wrote a gzip-compressed byte stream to the tapes. No more fixed blocks, and gzip absolutely does not like I/O errors in the middle of a compressed data stream. All of tar's tape-handling advantages were lost.
In the end, I had a lot of free time that day, so I could help recover data from the tapes. We found another large, empty harddisk, and used something like dd if=/dev/tape conv=noerror of=/mnt/tmpdisk/backup.tar.gz to get a damaged, but readable, compressed tape archive. It could be decompressed, at least partially, and tar was then able to extract a lot of files. Swapping the streamers allowed us to read some more data from the current tape. The other tape could also be read partially, and a few more (but older) files were recovered. I left sorting out old from new and damaged from sane files, and copying them back to the replacement disk, to the admin, and told him to fix some things:
- get rid of the -z flag to tar in the cron job, NOW
- get new tapes, preferably longer tapes
- discard the old, worn-out tapes
- get a cleaning tape
- clean up both streamers
- verify the archive on tape after each backup (see the sketch after this list)
- preferably, get another junk PC, connect the second streamer to that PC, and use that PC to actually test data recovery
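That verify step can be as simple as writing the archive uncompressed and reading it straight back; a sketch (the tape device node and path are illustrative):

    cd /
    tar -cf /dev/st0 home        # plain tar, fixed-size blocks, no -z
    mt -f /dev/st0 rewind        # rewind the tape
    tar -df /dev/st0             # compare tape contents against the files on disk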
In the end, a lot of data was recovered, some from the tapes, some from student PCs in the lab, some from some old disks in the junk bin. But a lot was lost.
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
> serious disaster recovery testing
Careful! "Serious" testing sometimes leads to serious problems.
I don't remember the details anymore, but I heard about a test involving "quick" reboots of some key infrastructure in a data center.
Nobody expected that rebooting too many servers at the same time would overwhelm the peak electricity supply, which in turn led to more shutdowns.
Bottom line: Better test the testing!
Re: Have you ever lost your work?
by cavac (Parson) on Mar 25, 2024 at 12:40 UTC
There may be a few minor code changes (less than 1000 lines per data loss) that got lost over the last few decades. And I don't have my earliest pieces of garbage code from the 8-bit and DOS eras any more.
But for the last 20+ years I've been very meticulous when it comes to tracking code I care about. There's a great deal of code I deliberately don't back up (I have a src/temp directory); this is mostly one-off tools, test code, or code I write live to teach an old colleague new tricks. If THAT stuff goes away, it's a bit like someone sneaked into my house and cleaned the junk out of my attic. Yes, there may be a certain sentimental loss, but I knew I didn't need to hold on to that stuff anyway...
All tracked code is also auto-synced to multiple servers in different locations, just to make sure that even a house fire doesn't automagically wipe out my rather huge codebase. With all ~270 repos on my server (and the corresponding default base databases for the projects), I'm currently at about 30 GB of code backup.
In the last 10 years I have only had one major loss, and that was an OpenSCAD project that took me a few days to design. I accidentally deleted the wrong project, so I was "forced" to design something much better in a fraction of the time.
To be clear, most of my project stuff in my personal Mercurial SCM doesn't need to survive me. It's all my hobby stuff, as well as some projects by other people that I use so regularly in my own code that I've started to implement my own non-public modifications. All the open source stuff that is also used in commercial applications by my employers gets additional in-company mirrors.
Re: Have you ever lost your work?
by Dallaylaen (Chaplain) on Jan 31, 2024 at 15:29 UTC
I once lost ~3 days of uncommitted work via a git reset --hard.
It only took me about a day to rewrite from scratch, though, since what I was doing there was still fresh in memory.
Re: Have you ever lost your work?
by Danny (Hermit) on Mar 25, 2024 at 23:04 UTC
Many editors have some sort of auto-save/last-opened-save mechanism that can help with such problems.
True, but I bet even this feature has bitten someone ;-)