06-17-2012 03:09 PM - edited 06-17-2012 03:11 PM
The volume status (BlackArmor NAS 400 with 4 x 1TB Seagate drives, RAID 5, current firmware) has shown as failed since yesterday, for the following reason:
1) Hard disk #4 failed.
2) I replaced HD #4 with a new Seagate drive.
3) After the new drive was correctly added in Disk Manager, the rebuild of the volume started.
4) Power was lost at about 40% of the RAID 5 rebuild.
Since this power loss I only get the message "Failed - Volume not found!" under "Volumes".
Does anybody know whether it is possible to start the rebuild anew?
Is there any repair software for this RAID 5 array?
05-06-2013 10:36 PM - edited 05-06-2013 10:53 PM
I am cross-posting my experience because I found a number of open threads on the subject, but not a whole lot in the way of solutions...
First of all, I have a Seagate BlackArmor 440 NAS (protected by a large UPS) that started freezing up a couple of weeks ago, to such a degree that only a power cycle would recover it. After a few power cycles, however, it decided to go into rebuild mode, but it would never finish rebuilding because it would ultimately freeze up again (and repeat)...
When it froze up the last time, I noticed that three of the four drive lights were on and one was out, so my hypothesis was that a drive was dying in a way that hung up the NAS in the process. I then decided to pull the potentially bad drive and replace it. When I powered up the system the next time, however, there were no shares visible from the outside. When I logged into the web interface and looked under the 'Volumes' tab, it showed the DataVolume as 'FAILED' (Volume not found).
I went ahead and pulled out the suspect drive, replaced it with a new one, and 'claimed' the new drive, but without an existing recognizable DataVolume online, the system would not allow a rebuild.
After perusing the knowledge base, the general consensus seems to be that this is a FATAL error, unrecoverable with the NAS hardware alone.
I noticed some people had success by updating the firmware (certainly worth a try, I suppose), but in my case it was already at the most current version (4000.1411).
The next logical step would be to attempt some kind of data recovery on the remaining 3 drives.
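(For anyone wondering why recovery from only three of the four drives is even possible: RAID 5 keeps XOR parity in every stripe, so any one missing member can be recomputed from the survivors. Here's a quick Python sketch with made-up chunk values, just to show the math; it is not the actual NAS layout.)

def xor_chunks(chunks):
    # XOR a list of equal-length byte strings together.
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# Three data chunks plus their parity, as one 4-drive RAID 5 stripe holds.
d1 = b"\x01\x02\x03\x04"
d2 = b"\x10\x20\x30\x40"
d3 = b"\xaa\xbb\xcc\xdd"
parity = xor_chunks([d1, d2, d3])

# If the drive holding d2 dies, XORing everything that's left recovers it.
recovered = xor_chunks([d1, d3, parity])
assert recovered == d2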
I removed the drives from the NAS, noting the serial number of the drive in each slot (the drive order) and also which slot's drive was missing. I used a couple of StarTech.com dual SATA docks to mount the drives externally on a PC, and used Windows Disk Management to make sure the system could see the drives. (When it asked to initialize a drive in order to mount it, I said NO, by the way; initializing writes a new partition signature to the disk, which is exactly the kind of on-disk metadata a recovery tool needs intact!)
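(If you want to double-check that the PC really sees a docked drive without touching it, a read-only peek at the first sector is enough. A Python sketch, with assumptions: you're on Windows, running as administrator, and PhysicalDrive1 happens to be the docked drive; check the real number in Disk Management first.)

import binascii

DRIVE = r"\\.\PhysicalDrive1"  # hypothetical; match it to your dock

with open(DRIVE, "rb") as disk:   # read-only, nothing is written
    sector = disk.read(512)       # first sector only

print(binascii.hexlify(sector[:16]))
print("MBR signature present:", sector[510:512] == b"\x55\xaa")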
Next I looked at some software:
Someone recommended 'UFS Explorer RAID Recovery', which seems to be decent software, but in my case, after 25 hours of parsing my externally mounted drives, it was only able to find some of the partitions (the NTFS ones, but not the NAS data partition) and it could not mount any of them.
Next, I tried NAS DATA Recovery from Runtime software: http://www.runtime.org
This software worked really well. First it asked which of the drives connected to the system were from the NAS, and whether one was missing from the RAID set. Ten seconds later it had figured out the RAID level, drive order, and sector size, found all of the partitions, and had all of the directories visible. You can do all of this with the free trial of the software (though if you want to copy the data off, I think you have to buy it at that point), but at $99 I'm very pleased.
I am moving the data now (keep in mind that the USB interfaces are going to slow it down); then I will rebuild the NAS RAID with the new drive.
F.Y.I.: In my case (BlackArmor NAS 440 running RAID 5 on 4 x 1TB disks), it shows the block size as 128 sectors and the parity rotation as 'left-symmetric'.
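If anyone wants to sanity-check those parameters, here is a rough Python sketch of how a left-symmetric RAID 5 layout with a 128-sector stripe unit maps logical volume sectors onto 4 disks. The formula is the common left-symmetric convention (the Linux md default); the real start offsets on the BlackArmor disks aren't included here and would be an assumption.

N_DISKS = 4
CHUNK_SECTORS = 128  # block size reported by the recovery tool

def locate(logical_sector):
    # Map a logical volume sector to (disk index, sector on that disk).
    chunk, offset = divmod(logical_sector, CHUNK_SECTORS)
    stripe, data_idx = divmod(chunk, N_DISKS - 1)    # 3 data chunks per stripe
    parity_disk = (N_DISKS - 1 - stripe) % N_DISKS   # parity rotates leftward
    # left-symmetric: data starts on the disk right after parity and wraps
    disk = (parity_disk + 1 + data_idx) % N_DISKS
    return disk, stripe * CHUNK_SECTORS + offset

# First sector of each of the first four stripes - watch the disk rotate:
for s in range(4):
    print(s, locate(s * (N_DISKS - 1) * CHUNK_SECTORS))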