Fedora Linux Support Community & Resources Center
  #1  
Old 6th August 2008, 01:57 PM
sujoykroy Offline
Registered User
 
Join Date: Feb 2008
Posts: 42
Intel Matrix Storage RAID 1 / Fedora 9

After installing Fedora 9 on my box (and falling in love with it!), today I tried to activate fake (or software) RAID 1. It was an amazing journey filled with joy, sorrow and suspense. Below is a longish description of the journey, which I wanted to share with you hoping that it might help you, or that you can help overcome the unresolved difficulties and questions.

I have an Intel(tm) S975XBX2 Workstation Board with two 160.0 GB SATA hard disks. Before activating RAID, I had installed Fedora 9 on one disk (keeping the other one disconnected) to try different features of Sulphur, as it was my first GNU/Linux experience.

First, I needed to activate RAID support in the BIOS. So I entered the BIOS by pressing the F2 key during boot. Then I went to the "Advanced" tab, where I found the "Drive Configuration" menu. Inside it, the "Configure SATA As" item can take one of three values: IDE, RAID, or AHCI. It was set to IDE; I changed it to RAID. Finally, I pressed F10 to save the settings and reboot.

But the boot failed. After GRUB's usual messages, the following lines appeared:
Code:
Red Hat nash version 6.0.52 starting

Unable to access resume device (/dev/sda2)
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
Mount failed for selinuxfs on /selinux: No such file or directory
switchroot: mount failed: No such file or directory
Then I remembered that I was supposed to create the RAID volume using the Intel(tm) Matrix Storage Manager and do a fresh Fedora installation.
So I restarted the machine. During startup I was asked to press "Ctrl + I" (which I had ignored earlier) to manage RAID volumes using the "Intel(tm) Matrix Storage Manager". I did so, and followed the fairly user-friendly screens of "Intel(tm) Matrix Storage Manager option ROM v5.6.2.1002 ICH7R wRAID5" to create a RAID 1 volume comprising the two hard disks. The final screen after the RAID 1 creation showed a list like this:

Code:
RAID Volumes:
ID	NAME		Level		Strip	Size		Status	Bootable
0	SRRAID1		RAID1(Mirror)	N/A	149.1 GB	Normal	Yes

Physical Disks:
Port	Drive Model	Serial #	Size		Type/Status(Volume ID)		
2	ST3160215AS	9RA6Q3NM	149.1 GB	Member Disk (0)
3	ST3160215AS	9RA6Q0C8	149.1 GB	Member Disk (0)
(I don't know why it shows 149.1 GB when the SATA disks are 160 GB)

Then I exited the Matrix Storage Manager screen, booted the machine from the Fedora 9 installation DVD and started installing Fedora. But right after selecting the "Language" and "Keyboard Layout" options, I was greeted by the following confirmation message:
Code:
The partition table on device mapper/isw_ceefeeedbj_SRRAID1 was unreadable.
To create new partitions it must be initialized, causing the loss of ALL DATA on the drive.

The operation will override any previous installation choices about which drives to ignore.

Would you like to initialize this drive, erasing ALL DATA?
I clicked the "Yes" button, as ALL MY DATA had already been erased when I created the RAID volume in the Intel(tm) Matrix Storage Manager. Then I created a custom partition layout which looked like this:
Code:
Drive						Mount Point/	Type	Format Size 	Start 	End
						RAID/Volume 	
Hard Drives
|-/dev/mapper/isw_ceefeeedbj_SRRAID1		
  |-/dev/mapper/isw_ceefeeedbj_SRRAID1p1	/boot		ext3	\/	196	1	25
  |-/dev/mapper/isw_ceefeeedbj_SRRAID1p2			swap	\/	2047	26	286
  |-/dev/mapper/isw_ceefeeedbj_SRRAID1p3	/		ext3	\/	47975	287	6402
  |-/dev/mapper/isw_ceefeeedbj_SRRAID1p4	
    |-/dev/mapper/isw_ceefeeedbj_SRRAID1p5	/var		ext3	\/	81917	6403	16845
    |-/dev/mapper/isw_ceefeeedbj_SRRAID1p6	/home		ext3	\/	20481	16846	19456
The partition table was created and the rest of the installation went much the same as on a non-RAID system. Only a few things caught my attention, like the "Boot Loader Device" dialog box containing two options for where to install the boot loader, and the BIOS drive order.
Code:
(0) Master Boot Record (MBR) - /dev/mapper/isw_ceefeeedbj_SRRAID1
( ) First Sector of boot partition - /dev/mapper/isw_ceefeeedbj_SRRAID1p1

BIOS Drive Order:
First BIOS drive: mapper/isw_ceefeeedbj_SRRAID1 152618 MB Linux device mapper
After the installation completed, the machine booted without any problem. To check whether the RAID system would keep working in case of a disk failure, I shut down the machine, unplugged one hard disk (on port 3) and started the machine again. During startup I entered the Intel(tm) Matrix Storage Manager (pressing Ctrl + I) and found the status of the RAID volume shown as "Degraded". So the Matrix Storage Manager had detected the unplugged hard disk. I exited it to proceed further. But Linux failed to boot; it threw some ugly lines after the usual GRUB messages.
Code:
Red Hat nash version 6.0.52 starting
ahci 0000:04:00.0 :MV_AHCI HACK: port_map 1f->f
device_mapper: table 253:0:mirror: Device lookup failure
device_mapper: reload ioctl failed: No such device or address
device_mapper: table ioctl failed: No such device or address
device_mapper: deps ioctl failed: No such device or address

init[1]: segfault at 10 ip 0015c0fa sp bf81ba0c error 4 in libdevmapper.so.1.02[14e000+15000]
nash received SIGSEGV! backtrace(14):
/bin/nash(0x8053bc1)
[0x12e40c]
/usr/lib/libnash.so.6.0.52(nashDmDevGetName +0x5a)[0x13d1e6]
/usr/lib/libnash.so.6.0.52[0x1395bf]
/usr/lib/libnash.so.6.0.52[0x1396e9]
/usr/lib/libnash.so.6.0.52(nashBdevIterNext+0x109)[0x139b73]
/usr/lib/libnash.so.6.0.52[0x139e0f]
/usr/lib/libnash.so.6.0.52(nashFindFsByName+0x6e)[0x139f08]
/usr/lib/libnash.so.6.0.52(nashAGetPathBySpec+0xa5)[0x13a022]
/bin/nash(0x80414f49b)
/bin/nash[0x8053a2c]
/bin/nash[0x8054134]
/lib/libc.so.6(__libc_start_main+0xe6)[0x4bc5d6]
/bin/nash[0x804b141]
I tried to start the machine after disabling RAID in the BIOS, but got the same result. My guess is that when one hard disk fails, the RAID 1 system will not boot until you attach a new/repaired hard disk. I may be wrong, as I had never seen a RAID system before trying it on my Linux box.

Anyway, I replugged the hard disk on port 3, enabled RAID in the BIOS and restarted the machine.
In the Intel(tm) Matrix Storage Manager I found the status of the RAID volume marked as "Rebuild" in yellow, along with a message stating that the rebuild will be handled by the operating system.

Fedora started up nicely and everything seems normal. But I don't know whether the RAID has been rebuilt or not. I tried several dmraid options, but nothing indicates whether the RAID drives are out of sync, and the "Rebuild" status is still shown in the Intel(tm) Matrix Storage Manager. So I guess it is working; I may need to wait for a real hard disk crash to test it. One of the commands I tried:
Code:
dmraid --debug --verbose -ay
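To check whether the mirror is back in sync, something like the following sketch might help. It runs `dmraid -s` and pulls out the set's status field. The `status :` field name is an assumption based on typical dmraid output; adjust the parsing to whatever your dmraid version actually prints.

```python
# Sketch: read the RAID set status from `dmraid -s`.
# Assumption: the output contains a line like "status : ok"
# (field names vary between dmraid versions -- adjust as needed).
import re
import subprocess
from typing import Optional

def parse_dmraid_status(output: str) -> Optional[str]:
    """Return the value of the first 'status' field, or None if absent."""
    match = re.search(r"^status\s*:\s*(\S+)", output, re.MULTILINE)
    return match.group(1) if match else None

def raid_set_status() -> Optional[str]:
    """Run `dmraid -s` and return the reported status (needs root)."""
    out = subprocess.run(["dmraid", "-s"], capture_output=True, text=True).stdout
    return parse_dmraid_status(out)
```

A status other than "ok" (e.g. "inconsistent") would suggest the set is still rebuilding or broken.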
Any comments from anybody who has tried (and succeeded or failed at) RAID 1 on Fedora with isw (Intel Software RAID) would be very helpful to me, or to anybody else trying to build a RAID system, because documentation on it seems to be lacking on the internet.
  #2  
Old 16th August 2008, 03:41 PM
Karstlok Offline
Registered User
 
Join Date: Aug 2008
Posts: 1
Hello. Just install the SATA drives so that Fedora indexes them as two independent drives (standard BIOS settings). Then follow this howto; it worked for me!

http://www.howtoforge.com/software-r...-boot-fedora-8

The only thing you need to figure out and code yourself is the monitoring of the RAID set. Just write a Perl / PHP script that runs every hour and sends you an email if the RAID set is degraded.
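A minimal sketch of such a monitor (in Python rather than Perl/PHP), assuming the mdadm setup from the howto, where /proc/mdstat shows a member pattern like [UU] per array and an underscore (e.g. [U_]) marks a missing member. The emailing part is left as a comment to wire up however you prefer.

```python
# Sketch of an hourly RAID monitor for mdadm software RAID. Assumption:
# /proc/mdstat reports a member pattern like [UU] (healthy) or [U_]
# (degraded). Run it from cron; hook in smtplib or sendmail to alert.
import re
from typing import List

def degraded_arrays(mdstat_text: str) -> List[str]:
    """Return names of md arrays whose member pattern contains '_'."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        elif current:
            for pattern in re.findall(r"\[([U_]+)\]", line):
                if "_" in pattern:
                    degraded.append(current)
                    current = None  # report each array once
                    break
    return degraded

def report(mdstat_path: str = "/proc/mdstat") -> None:
    with open(mdstat_path) as f:
        bad = degraded_arrays(f.read())
    if bad:
        # e.g. smtplib.SMTP("localhost").sendmail(...), or just print
        # and let cron mail the output to you.
        print("DEGRADED:", ", ".join(bad))
```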

edit: typo
  #3  
Old 27th September 2008, 10:41 PM
fatka Offline
Registered User
 
Join Date: May 2008
Posts: 4
Quote:
Originally Posted by sujoykroy
Code:
RAID Volumes:
ID	NAME		Level		Strip	Size		Status	Bootable
0	SRRAID1		RAID1(Mirror)	N/A	149.1 GB	Normal	Yes

Physical Disks:
Port	Drive Model	Serial #	Size		Type/Status(Volume ID)		
2	ST3160215AS	9RA6Q3NM	149.1 GB	Member Disk (0)
3	ST3160215AS	9RA6Q0C8	149.1 GB	Member Disk (0)
(I don't know why it shows 149.1 GB when the SATA disks are 160 GB)
That's because HDD manufacturers consider 1 GB = 10^9 bytes, while the BIOS/OS considers 1 GB = 2^30 bytes; hence the difference.
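The arithmetic, as a quick check (decimal GB from the drive label vs. binary GB as the BIOS/OS reports them):

```python
# A drive labelled "160 GB" holds 160 * 10^9 bytes; the BIOS/OS divides
# by 2^30 per GB, so the same capacity reads as roughly 149 GB.
def decimal_gb_to_binary_gb(decimal_gb: float) -> float:
    return decimal_gb * 10**9 / 2**30

print(round(decimal_gb_to_binary_gb(160), 2))  # prints 149.01
```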

Sadly, I use the same mobo with two 160 GB SATA drives set up with RAID 0, so I won't be able to answer your RAID questions.
__________________
Open source is the future. It sets us free.

Last edited by fatka; 6th October 2008 at 07:15 PM.
  #4  
Old 28th September 2008, 10:57 AM
sujoykroy Offline
Registered User
 
Join Date: Feb 2008
Posts: 42
160 GB (HDD) = 160 * 10^9 bytes = 160 * 10^9 / 2^30 GB (BIOS/OS) = 149.011611938 GB (BIOS/OS). Makes sense.
Thanks fatka. I thought there was something wrong with the hardware or the OS.
Tags
fedora, intel, matrix, raid, storage
