06-22-2009 11:47 AM
There are many SDxx firmware versions. I haven't been through the whole forum, but it seems SD1A, specifically, is not affected by the "RAID" or "controller-specific" problem.
Of course, I only have CCxx firmware, and downgrading to SDxx is not possible, according to the support team.
06-22-2009 02:00 PM
What a frustrating problem! I can confirm that my drive is part of the 7200.11 family (it says so on the drive sticker). I really wish there were better diagnostic tools for RAID arrays. I complained to Seagate and Intel back in August '08 when I first had trouble with the drive. SeaTools won't scan the drive, and the Intel software gives no specifics. It just reports a SMART error... that's all. That makes it hard to know what is really going on.
When a drive would fall out of the array, I would usually reboot and it was fine. Or I might have to set the status to 'normal' in the Intel RAID tool. Then I was OK again (for a while). As time went on, it became more difficult to reset the array, and now I can't get it to come back online. I don't think my drives are 'bricked', as the BIOS does see the drive. I was hoping to recover the data, but maybe I should just reformat and re-initialize the array? Or is this a hardware problem and I should RMA the drive? It's hard to tell when I can't get SeaTools to run a SMART test.
06-22-2009 03:07 PM
Something you could try is booting a Live Linux CD (perhaps Ubuntu 9.04) and running the Linux SMART utilities. This assumes that Linux knows how to use that controller -- I imagine that it does. This could save you some recabling for testing.
The Linux command to print all of a drive's SMART status is something like:
sudo smartctl -a /dev/sda
The Linux command to initiate a long SMART drive self-test is something like:
sudo smartctl -t long /dev/sda
sudo means run the command as the superuser.
smartctl is the program that talks SMART to the disk drive.
-a means print almost everything; run it again after the DST has completed to see the results.
-t long means run the long DST; -t short would start the short one.
/dev/sda means the first disk. I'm just guessing at what the right path might be.
These commands need to be run in a terminal window -- similar to the Command Prompt window in Windows where you run CLI commands.
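If smartctl can't see the drive through the RAID/SAS controller, it also takes a -d option naming the controller type (sat, 3ware, cciss, megaraid, etc. are documented smartctl types; which one applies depends on the driver Linux loaded for your card). Here's a small sketch that just prints the candidate commands to try rather than running them -- the device path is an assumption:

```shell
#!/bin/sh
# Candidate smartctl invocations for a drive sitting behind a RAID/SAS HBA.
# /dev/sda is an assumed device path; the -d controller types below are
# documented smartctl options, but which one works depends on your
# controller and driver -- try them until one returns SMART data.
DEV=/dev/sda
for dtype in auto sat 3ware,0 cciss,0 megaraid,0; do
    echo "sudo smartctl -a -d $dtype $DEV"
done
```

Once one of these returns real SMART output, use the same -d value with -t long to kick off the self-test.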
06-23-2009 04:02 PM
Thanks for the Linux tip. I did try to boot from my Fedora live CD, but had some trouble.
I did run SeaTools for DOS and it didn't work. The DOS graphical version didn't load (it got hung up on the blue/green screen), and the DOS test program did not see my SATA drive. SeaTools for Windows will see the drive but will only run the generic test. It's frustrating not to be able to test this drive and know whether it's good or bad. I shouldn't have to reformat it, right? Does the formatting matter when you run SeaTools?
07-01-2009 08:17 AM - last edited on 07-01-2009 10:29 AM by BradC
Finally, some good news to report...
I had been working with LSI support (my HBA vendor) to get my SAS3800x working. They had sent me a firmware upgrade that would not install because I had a pre-production hardware version. So, a quick email to the person on eBay who sold me the card got me a refund. I purchased a brand new retail LSI SAS3800x and installed it. The SAS3800x is a PCI-X card and is installed in my HP ProLiant DL380 G3.
The HBA is currently configured as a JBOD -- not strictly a RAID volume... BUT, I was even having problems with this configuration before.
Now, I have 4x CC1H drives attached (via SAS/InfiniBand) and working without any noticeable issues. Throughput is a little lower than I had hoped -- it ranges from approximately 25-35 MB/sec per drive.
I DO have some occasional errors in my System Event Log from the LSI controller -- but LSI thinks it might be a controller firmware issue. Apparently, I still have an old firmware version on this HBA. Since it is working, I will just wait for a convenient time to re-flash.
Here's what is next on my research / testing attempts:
I am attempting to connect another box to this JBOD, a Media Center Server that only has PCIe slots available -- so I've ordered the LSI SAS3801e card and will let you guys know how that goes. This card will replace the current HighPoint 2322 HBA that is connected to the same JBOD and is giving me a hard time with these Seagate ST31500341AS drives... That PC currently has both SD1B and CC1H drives connected to it -- drives with either firmware version intermittently drop out or become non-responsive. But even when they are working, the throughput is abysmal. I rarely get more than 5-10 MB/sec out of the drives connected to the HighPoint HBA...
BTW: All of these SD1B and CC1H drives work FINE when directly connected to motherboard SATA ports. Throughput is usually very zippy, reliably around 40 MB/sec.
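For comparing throughput across controllers in a repeatable way, a plain sequential read with dd is often enough. A minimal sketch -- the default device path is an assumption, and you can pass a regular file instead for a dry run:

```shell
#!/bin/sh
# Rough sequential-read rate check. Pass the device (or a file, for a
# dry run) as the first argument; /dev/sda is only an assumed default.
DEV=${1:-/dev/sda}
# Read 256 MiB in 1 MiB blocks; dd reports the MB/s rate when it finishes.
# Against a real block device, run it with sudo and add iflag=direct so
# the page cache does not inflate the number.
dd if="$DEV" of=/dev/null bs=1M count=256
```

Running the same command against the drive on each HBA and on a motherboard port makes the "5-10 vs 40 MB/sec" difference easy to document for the vendor.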
07-30-2009 09:28 AM
Original system specs:
Asus P6T, with Intel ICH10R RAID controller
3x 7200.11 1TB drives in RAID 5
1.5 Gb/s jumpers removed
Old firmware (CCxx, I think?)
This setup worked great for 6 months after I bought it. Then one of the drives had a hard failure (it failed both the long and short generic tests), and Seagate replaced the drive without issue. Since then, however, the longest the array will work before becoming degraded is a day or two. I get iastor errors, and one drive (a different one each time) is dropped from the array. Then it rebuilds for a couple of days, during which the computer is barely usable.
I updated the firmware on all the drives to SD1A right after getting the replacement drive (it has not seemed to solve anything).
The Intel RAID software is the latest (AFAIK).
I did purchase another 1TB as a spare, and got a 7200.12 model, with SD1A firmware. It is currently in the computer with two 7200.11s, with a 7200.11 on the shelf as a spare.
Just today I put the jumpers back on the drives to limit them to 1.5 Gb/s.
My first temporary solution is going to be to put all four drives in the computer in a RAID 10 array (two mirrored pairs, striped together).
Then, once we have more money, my real solution will be some Samsung F1 RAID drives (they're much cheaper than the Seagate NS drives).
I'm mostly frustrated by the fact that it worked fine for 6 months, but now it is degraded more often than it's up and working.
07-30-2009 11:41 AM
You are not supposed to replace CCxx firmware with SDxx firmware. The drives are supposed to stay on the firmware series they shipped with, where the series appears to be the first two letters.
You can tell what firmware a drive shipped with: it is printed on the label. Could you check and report back what firmware each drive shipped with? Before the latest upgrade, had you updated any drive's firmware, or were they all as shipped?
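For the drives that are still reachable, smartctl can also read the currently running firmware revision, so you can compare it against the label without pulling the drive. A sketch -- on a real system the command would be `sudo smartctl -i /dev/sda`, with the device path an assumption; here a captured-style sample of `smartctl -i` output stands in for a real drive:

```shell
#!/bin/sh
# Filter the drive-identity output down to the firmware line. The heredoc
# below is a stand-in sample of what `smartctl -i` prints; on a live
# system, replace the cat with:  sudo smartctl -i /dev/sda
cat <<'EOF' | grep '^Firmware Version'
Device Model:     ST31500341AS
Serial Number:    XXXXXXXX
Firmware Version: SD1A
User Capacity:    1,500,301,910,016 bytes
EOF
# prints: Firmware Version: SD1A
```

If the drive was flashed, this reported revision will differ from the series printed on the label.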
It is interesting that replacing one drive causes several different drives to act up. On the other hand, you did change all the firmware, so perhaps all of the drives should be considered changed.
When a drive is dropped from the array, what exactly is going on? Many previous reports (but not all) describe disks not responding to I/O commands until the next reboot.