Originally Posted by cat77
i do not
That's an incomplete answer, and it doesn't make sense. Your RAID has 5 members: 0 and 1 are "removed", member 3 is active as /dev/sdd, and members 4 and 5 (/dev/sdb and /dev/sdc) are spares. How many hard drives are in the computer in total?
In the future, could you post information inside "[CODE]" tags instead of "[QUOTE]" tags? The quote tags mangle the spacing so the columns don't line up.
---------- Post added at 03:27 PM ---------- Previous post was at 03:18 PM ----------
I think your problem is that these drives are not suited for RAID 5. WDC certainly doesn't support them for anything beyond RAID 0 and 1. What I think is happening is that they spin down, drop out of the RAID, and when they come back the array has to rebuild that member.
A correctly added member shouldn't just turn into a "spare", and you still haven't explained why you have two spares. Are they hot spares? Did you originally create a 5-member RAID? I wonder if, when the drives dropped off, mdadm turned them into spares instead of re-adding them.
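Before touching anything, it's worth seeing what role mdadm actually recorded on each member's superblock. Here's a sketch of the commands I'd start with - /dev/md0 is an assumed array name and the sdX names are just the ones from your post. This block only prints the commands, so nothing touches a live array:

```shell
# Print (not run) the inspection commands; /dev/md0 is an assumed name.
CMDS=$(cat <<'EOF'
cat /proc/mdstat                             # quick view of all arrays
mdadm --detail /dev/md0                      # member states: active/spare/removed
mdadm --examine /dev/sdb /dev/sdc /dev/sdd   # what each superblock says its role is
EOF
)
echo "$CMDS"
```

The --examine output will tell you whether the dropped drives really did come back as spares.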
If this RAID 5 contains important data with no backup, then shame on you. RAID *is not a backup* plan. If you haven't tried forcing a re-assemble yet, and you haven't modified anything on those disks, you might be able to restore the array. But first image all three drives (use dd to make a sector-for-sector copy of each) and store the images somewhere safe in case you f'up the re-assemble.
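Here's a minimal, runnable sketch of the imaging step. Temp files stand in for the real drive and the image so you can try it safely; on the actual machine the input would be /dev/sdb (and so on) and the output a file on a disk that is NOT part of the array. The forced re-assemble afterwards is only printed, and /dev/md0 is an assumed array name:

```shell
# --- imaging: temp files stand in for /dev/sdX and the image file ---
DRIVE=$(mktemp)
IMAGE=$(mktemp)
dd if=/dev/urandom of="$DRIVE" bs=512 count=128 2>/dev/null   # fake drive contents

# Sector-for-sector copy: noerror keeps reading past bad sectors,
# sync pads unreadable blocks so later offsets stay aligned.
dd if="$DRIVE" of="$IMAGE" bs=512 conv=noerror,sync 2>/dev/null

RESULT=$(cmp -s "$DRIVE" "$IMAGE" && echo "image verified")
echo "$RESULT"
rm -f "$DRIVE" "$IMAGE"

# --- the re-assemble I'd try afterwards (printed, not executed) ---
FORCE=$(cat <<'EOF'
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
EOF
)
echo "$FORCE"
```

Only attempt the forced assemble once all three images are safely stored.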
There are all sorts of Google hits on Caviar Green drive problems in RAID 5. I have no idea whether there's a way to configure them so that they will work, and WDC seems to have washed their hands of this. Kinda surprising they do this even for the Black drives, though!
---------- Post added at 03:41 PM ---------- Previous post was at 03:27 PM ----------
From /dev/sdb's smartctl output:
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 63
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 63
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 26
/dev/sdc has one such problem as well.
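To pull the same attributes for every member, something like the loop below works. It's printed here rather than executed, since smartctl needs root and real disks, and the device list is just the one from your post:

```shell
# Print the per-drive SMART check (needs root on the real box).
CHECK=$(cat <<'EOF'
for d in /dev/sdb /dev/sdc /dev/sdd; do
  echo "== $d =="
  smartctl -A "$d" | egrep 'Current_Pending_Sector|Offline_Uncorrectable|Multi_Zone'
done
EOF
)
echo "$CHECK"
```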
Did you write zeros to these drives BEFORE you added them to the array? If not, sorry, but you don't know what you're doing, and you didn't do enough research before setting up this array. The data on sdb could very well be fakaked beyond repair even though not that many sectors are affected; it all depends on what's stored in those sectors.
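For reference, this is the zero-fill pass I mean: writing zeros end-to-end forces the drive to remap pending sectors and surfaces outright failures before the drive ever joins an array. A temp file stands in for the device here so the sketch is runnable; on real hardware this is dd if=/dev/zero of=/dev/sdX, and it destroys everything on the drive:

```shell
# Temp file stands in for the raw device; the real command would be
# dd if=/dev/zero of=/dev/sdX bs=1M  -  which WIPES the whole drive.
DISK=$(mktemp)
dd if=/dev/zero of="$DISK" bs=512 count=256 2>/dev/null

# Read-back check: on a real drive, this is where unremappable
# sectors would show up as read errors or non-zero data.
NONZERO=$(tr -d '\000' < "$DISK" | wc -c)
echo "non-zero bytes after zero-fill: $NONZERO"
rm -f "$DISK"
```

If the zero pass completes and the pending-sector count goes back to 0, the drive is at least a candidate for the array; if not, RMA it.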
If you have a backup, I'd work from that, because reconstructing this RAID is going to take a while, I think. Assuming it's even possible at all - it should be, but a lot more information is needed, and it doesn't sound like you know much about RAID best practices. Sorry...