PART 1 is here.
As I was saying, the problem began when a routine file copy locked up the main Frankenserver – which should not happen. On Saturday morning, I checked and found that a BIOS update was available for the Frankenserver's motherboard. Normally, a BIOS update is entirely routine. This one was not.
After applying the BIOS update, the main storage array (16 terabytes) was corrupted and inaccessible. Not a great feeling. But I was not panicking yet: even when you delete files from a hard drive, the data remains on the platters until it is overwritten with new data. Since the drives themselves all appeared healthy, I knew the data was almost certainly still there.
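The reason deleted data is recoverable is that "deleting" normally removes only the metadata pointing at the bytes, not the bytes themselves. Here is a minimal toy model of that idea in Python – the `ToyDisk` class and its methods are purely illustrative, not a real filesystem API:

```python
# Toy model of why deleted files are recoverable: a tiny "disk" where
# deleting a file only removes its metadata entry, not its bytes.
# ToyDisk and its methods are hypothetical names for illustration only.

class ToyDisk:
    def __init__(self, size=64):
        self.blocks = bytearray(size)   # raw storage
        self.index = {}                 # filename -> (offset, length) "metadata map"
        self.cursor = 0

    def write_file(self, name, data):
        off = self.cursor
        self.blocks[off:off + len(data)] = data
        self.index[name] = (off, len(data))
        self.cursor += len(data)

    def delete_file(self, name):
        # Only the metadata entry goes away; the bytes stay until overwritten.
        del self.index[name]

    def raw_scan(self, needle):
        # A recovery tool can still find the bytes by scanning the raw device.
        return self.blocks.find(needle)

disk = ToyDisk()
disk.write_file("notes.txt", b"important data")
disk.delete_file("notes.txt")

print("notes.txt" in disk.index)          # False: the file is "gone"
print(disk.raw_scan(b"important data"))   # 0: the bytes are still on disk
```

This is why recovery tools work at all: as long as nothing new has been written over those blocks, a raw scan can still find the data.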
But I needed a new motherboard, as this one appeared to have suffered an electrical failure. On a holiday weekend, my only option was to drive to a city an hour and 45 minutes away to get the part before the store closed. So I drove, got the board, drove back, ate, then installed the new board…
And the main storage array was still inaccessible. Now I started to get worried. Yes, the critical data was all backed up “in the cloud,” but I had test-restored that data before, and it took over 3 weeks to download that much data. So I googled and googled – and thankfully found others who had hit this same problem and solved it.
However, the solution is a bit heart-stopping. You delete your storage array, rebuild it with the exact same settings on the same drives, and then use a utility to restore it. In theory, you are not deleting the data itself, just the metadata “map” to the data, and the utility will recreate that map – in theory. But the reports of success were strong, so away I went and –
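The logic behind this recovery can be sketched with another toy example: if the metadata map is rebuilt deterministically from the same layout parameters, the old offsets line up with the surviving data again. This is only a sketch of the concept, not how TestDisk actually works internally, and all names in it are illustrative:

```python
# Toy illustration of the recovery idea: the array's data blocks are untouched;
# only the metadata describing where things live is lost. Rebuilding the
# metadata with the SAME layout parameters makes the old pointers valid again.
# build_metadata and the sample layout are hypothetical, for illustration only.

def build_metadata(file_sizes, start=0):
    """Recreate the offset map deterministically from layout parameters."""
    meta, off = {}, start
    for name, size in file_sizes:
        meta[name] = (off, size)
        off += size
    return meta

storage = bytearray(b"hello world!photo-bytes.")   # raw data, still intact

# Original layout: files written back-to-back in this order and size.
layout = [("greeting.txt", 12), ("photo.jpg", 12)]
metadata = build_metadata(layout)

metadata = None   # "delete the array": the map is gone, the data is not

# Rebuild with the same settings -> offsets match the surviving data.
metadata = build_metadata(layout)
off, size = metadata["greeting.txt"]
print(storage[off:off + size].decode())   # hello world!
```

This also shows why the settings must match exactly: rebuild with a different order or sizes and every offset points at the wrong bytes, leaving the data effectively scrambled.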
It worked, in just a few minutes and one machine restart. Everything was back online and tested by early Sunday morning. The utility is TestDisk – I had heard of it before but never used it. Needless to say, highly recommended.