Ubuntu 10.04 can't create partition on fakeraid

Bug #568050 reported by David Tomlin
This bug affects 46 people
Affects                  Status        Importance  Assigned to     Milestone
grub-installer (Ubuntu)  Invalid       Undecided   Unassigned
  Lucid                  Invalid       Undecided   Unassigned
parted (Ubuntu)          Fix Released  High        Phillip Susi
  Lucid                  Fix Released  High        Phillip Susi

Bug Description

Impact: dmraid installation is entirely broken in Lucid.

Development branch: For the time being, an upstream patch has been reverted in Maverick: see comment 84 and http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/maverick/parted/maverick/annotate/head:/debian/patches/fix-dmraid-regression.patch; however, in the future (probably in Maverick) we'll fix this differently by switching to the new naming scheme for device-mapper devices, which has more consistent support in upstream code. Such an invasive change would not be appropriate in Lucid - for one, it would require changes to grub2.
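Concretely (per Joel Ebel's verification later in this report), the two schemes differ in how partition device nodes are named; assuming a hypothetical array called isw_xxxx_Volume0:

  /dev/mapper/isw_xxxx_Volume01    # old scheme, kept in Lucid: partition number appended directly
  /dev/mapper/isw_xxxx_Volume0p1   # new upstream scheme: 'p' separator before the partition number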

Patch: https://code.launchpad.net/+branch/ubuntu/lucid/parted

TEST CASE: Install Ubuntu on a DM-RAID system using (a) the Ubuntu desktop CD and (b) the Ubuntu server CD. Verify that it succeeds. Note that this requires respins of CD images to test effectively; Colin Watson will arrange for sample daily builds not long after this update is processed.

Regression potential: This change is isolated to code dealing with device-mapper devices, so installations on ordinary hard disks should be unaffected. However, I think it would be a good idea to regression-test installations on LVM and software RAID (as distinct from DM-RAID), as those follow the same code path and may be affected.

Original report follows:

Binary package hint: dmraid

Fresh install of Ubuntu Server 10.04 Beta 2 using a Sil 3124 fakeraid controller card. The Ubuntu installer successfully detects the fakeraid set (RAID1) and activates it, but after creating my partition layout using all default options with "Guided Partitioning", when I click finish to apply it, it fails, stating it could not create a filesystem.

Architecture: i386
DistroRelease: Ubuntu 10.04
LiveMediaBuild: Ubuntu 10.04 "Lucid Lynx" - Release Candidate i386 (20100419.1)
Package: ubiquity 2.2.20
PackageArchitecture: i386
ProcEnviron:
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcVersionSignature: Ubuntu 2.6.32-21.32-generic 2.6.32.11+drm33.2
Tags: lucid
Uname: Linux 2.6.32-21-generic i686
UserGroups: adm admin cdrom dialout lpadmin plugdev sambashare

Revision history for this message
Phillip Susi (psusi) wrote : Re: [Bug 568050] [NEW] Ubuntu 10.04 can't create partition on fakeraid

Beta 2 had a serious bug with dmraid that has been fixed. This may be
caused by the same issue. Can you try again with the release candidate?

Revision history for this message
David Tomlin (davetomlin) wrote :

Will do. I'll report how my install goes immediately after.

Revision history for this message
David Tomlin (davetomlin) wrote :

No luck with the release candidate. I'm not doing anything special. I create my "fakeraid" set with the SIL 3124 controller, then I boot to the Ubuntu Server 10.04 CD. I select all the default options until it gets to Detect Disks. It successfully detects my RAID 1 and asks if I'd like to activate it. I select yes and am then presented with my 250GB RAID 1 set. I select Guided Partitioning and tell it to use the entire disk (no LVM or encryption). It then creates a default partition scheme with just root and swap. I select "Finish partitioning and write changes to disk". I'm then asked to confirm the changes to the disk. Once I select yes, I get a red screen with an error message stating "Failed to create a filesystem. The ext4 file system creation in partition #1 of Serial ATA RAID sil_bgaebjaafgbg (mirror) failed"

Revision history for this message
David Tomlin (davetomlin) wrote :

I also get the exact same behavior with the onboard RAID controller on a Gigabyte GA-MA78GM-US2H motherboard, using the exact same steps as above.

I also tried other distributions to verify it wasn't a hardware problem, and both OpenSuse and CentOS detected the array and installed successfully, so it's definitely not hardware related.

Thanks so much for your input on this.

Revision history for this message
David Tomlin (davetomlin) wrote :

Just some more info. I tried using both the Desktop and Server Release Candidate disks.

Revision history for this message
David Tomlin (davetomlin) wrote :

Here's what was going on in the syslog from the activation of the raid until it failed to create the partition. Maybe this is an Ubiquity bug and not a dmraid bug?

Apr 23 03:32:51 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
Apr 23 03:32:51 ubuntu activate-dmraid: Enabling dmraid support.
Apr 23 03:32:51 ubuntu dmraid-activate: ERROR: Cannot retrieve RAID set information for sil_bgaebjaafgbg
Apr 23 03:32:51 ubuntu dmraid-activate: ERROR: Cannot retrieve RAID set information for sil_bgaebjaafgbg
Apr 23 03:32:52 ubuntu ubiquity: tune2fs: Bad magic number in super-block while trying to open /dev/mapper/sil_bgaebjaafgbg
Apr 23 03:32:52 ubuntu ubiquity: Couldn't find valid filesystem superblock.
Apr 23 03:32:52 ubuntu partman: Error running 'tune2fs -l /dev/mapper/sil_bgaebjaafgbg'
[the same tune2fs error and partman message repeat three more times between 03:32:52 and 03:32:53]
Apr 23 03:32:53 ubuntu ubiquity[2853]: switched to page partman
Apr 23 03:33:05 ubuntu ubiquity[2853]: debconffilter_done: ubi-partman (current: ubi-partman)
Apr 23 03:33:05 ubuntu ubiquity[2853]: Step_before = stepPartAuto
Apr 23 03:33:05 ubuntu ubiquity[2853]: switched to page usersetup
Apr 23 03:33:22 ubuntu ubiquity[2853]: debconffilter_done: ubi-usersetup (current: ubi-usersetup)
Apr 23 03:33:22 ubuntu ubiquity[2853]: Step_before = stepUserInfo
Apr 23 03:33:22 ubuntu ubiquity[2853]: filtering out /dev/mapper/sil_bgaebjaafgbg1 as it is to be formatted.
Apr 23 03:33:22 ubuntu ubiquity[2853]: filtering out /dev/mapper/sil_bgaebjaafgbg5 as it is to be formatted.
Apr 23 03:33:22 ubuntu ubiquity[2853]: debconffilter_done: ubi-migrationassistant (current: ubi-migrationassistant)
Apr 23 ...


Revision history for this message
Phillip Susi (psusi) wrote :

Yes, that looks like a bug in ubiquity or partman to me. It seems to be trying to use the whole disk at first rather than partitions on it, then can't find the partitions later. Reassigning.

affects: dmraid (Ubuntu) → ubiquity (Ubuntu)
Revision history for this message
Raúl Montes (raulmt) wrote :

I have the exact same problem with the AMD 785GM-M chipset with RAID 1. In Ubuntu 9.10 it works fine, but on 10.04 Beta 2 and the release candidate it fails…

Revision history for this message
Kitagua (matthias-wuerthele) wrote :

Tried to install the RC on a RAID 0 system (Windows 7 already installed). It detected the fakeraid as expected, but after clicking on install it complains that it was not able to create a file system on the specified disk. (I have an NVIDIA fakeraid.)

Revision history for this message
Moshe Ortov (jim-networksystemssolutions) wrote :

I've tried to install 10.04 RC on an nVidia RAID5 (fakeraid using dmraid) via the alternate and main installs, and both fail to load any dmraid drivers at all (none - not even RAID0/1). dmraid seems to be completely broken in the RC - the installer offers to activate it, but nothing is detected afterwards, and a manual attempt reports missing modules. Also, as in previous Ubuntu versions, dmraid-activate seems to mess up the raid45 driver anyway - I always have to manually install the modules during installation, then edit dmraid-activate to change the module load to dm-raid4-5, and then rebuild the initrd before rebooting, otherwise the install is messed up. There are various past bug reports on this, so I'm not going to repeat them, as it's been reported enough times already without me duplicating it further.

I saw one posting (elsewhere) suggesting just using software raid - while that's probably a fair suggestion, the dmraid really should work when it is detecting the fakeraid controller is there and offering it.

Revision history for this message
David Tomlin (davetomlin) wrote :

I love Ubuntu, but I have a feeling they're going to be inundated with gripes concerning fakeraid after the official release in 3 days. This is not a knock at all, but it kills me that CentOS, Fedora, and Opensuse all detect my fakeraid and install on it just fine right out of the box, but my fav distribution can't!!! :(

Revision history for this message
David Tomlin (davetomlin) wrote :

Just tried to install using the latest daily build (4/27) and got the same result. I've tried on 2 different machines, each with a different fakeraid controller. As usual, Ubuntu detects both controllers and the fakeraid set on each, but fails during the format procedure with "Failed to create a file system". I'm surprised the LTS version will be released without working support for fakeraid.

Revision history for this message
Evan (ev) wrote :

Can you please run the installer in debug mode (`ubiquity -d` from the live CD desktop after clicking on "Try Ubuntu"), then run `apport-collect 568050` once you've reproduced the bug.

Thanks!
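For reference, the requested sequence from the live session looks roughly like this (568050 is this report's bug number):

  ubiquity -d             # run the installer in debug mode from the live desktop
  apport-collect 568050   # after reproducing the failure, attach the collected logs to this bug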

Revision history for this message
David Tomlin (davetomlin) wrote :

No problem. I'll do this tonight and post back immediately.

Revision history for this message
David Tomlin (davetomlin) wrote : Casper.gz

apport information

tags: added: apport-collected
description: updated
Revision history for this message
David Tomlin (davetomlin) wrote : Dependencies.txt

apport information

Revision history for this message
David Tomlin (davetomlin) wrote : UbiquityDebug.gz

apport information

Revision history for this message
David Tomlin (davetomlin) wrote : UbiquityPartman.gz

apport information

Revision history for this message
David Tomlin (davetomlin) wrote : UbiquitySyslog.gz

apport information

Revision history for this message
David Tomlin (davetomlin) wrote :

I just uploaded the report you requested for the 1st machine. I'm going to upload the same from my other PC with a different raid controller so you can compare them.

Revision history for this message
Phillip Susi (psusi) wrote :

Confirmed; I tested the RC here and got the same results. Marking as high importance as well, since it is a critical problem for some users.

Changed in ubiquity (Ubuntu):
importance: Undecided → High
status: New → Confirmed
tags: added: regression-potential
Revision history for this message
David Tomlin (davetomlin) wrote :

Since you confirmed it, I won't send the info from the other machine. Thanks so much for your help! :)

Revision history for this message
David Tomlin (davetomlin) wrote :

Sorry I couldn't get this information to you before the actual release. :(

Revision history for this message
David Tomlin (davetomlin) wrote :

This bug affects both the x86 and x64 platforms.

Revision history for this message
David Tomlin (davetomlin) wrote :

One more comment, I ran into this using both the Ubuntu Server and Ubuntu Desktop images.

Phillip Susi (psusi)
tags: added: regression-release
removed: regression-potential
Revision history for this message
Qboy61 (rixq) wrote :

I agree with David's message #11. I am sad that I can't use Ubuntu either, due to a failed install on Intel RAID. My failure is identical to David's. The Lucid alternate desktop install (32-bit) works right up to the point where the RAID array should actively be created. Hopefully there'll be a fix for this soon. I have 4 machines running RAID 1 (not comfortable with loss of data due to drive failure), and I'd like to get Ubuntu Lucid on these machines besides Windows. :-(

description: updated
Revision history for this message
Will Green (greenwc) wrote :

Encountering the same problem on 10.04 X64.

Revision history for this message
Phillip Susi (psusi) wrote :

You can probably work around this issue by manually partitioning first, then installing to the existing partitions. That's what you used to have to do anyhow before Karmic or so.
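A minimal sketch of that workaround, assuming an array that shows up as sil_example (a placeholder name - use whatever ls /dev/mapper/ shows on your system):

  sudo dmraid -ay                       # activate the fakeraid array if it isn't already active
  sudo fdisk /dev/mapper/sil_example    # create the partitions by hand
  # then choose manual partitioning in the installer and install to the existing partitions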

Revision history for this message
David Tomlin (davetomlin) wrote :

Hey Phillip. Thanks so much for your help on this, and I will definitely try the workaround. Just one comment though, and please know this is not directly aimed at you Phillip, as you've been nothing but helpful, but rather at Ubuntu as a whole. As stated above, CentOS, Fedora, and OpenSuse all installed on my fakeraid out of the box with no workarounds, and while I honestly feel none of those distributions can even somewhat compare to Ubuntu, it seems that Ubuntu is shortchanging users when it comes to fakeraid. In other words, if after the last 3 releases we are still having to use workarounds with fakeraid, does that mean it's always going to be that way? I may be wrong, but I don't think fakeraid has ever worked out of the box in Ubuntu. Is this just not a big enough priority for Ubuntu? As for people who are using fakeraid and may be willing to change from Windows to Ubuntu, I would imagine that the first time they discover they can't dual boot Windows and Ubuntu without what might be a difficult workaround to implement, depending on their knowledge of Linux, they may just give up and go to another distribution, which in the end hurts Canonical.

Another thing to note is that the version of CentOS I tried was released in 2007, and it still successfully detected and installed on the array. It's hard to swallow that in 2010 Ubuntu still hasn't gotten there.

If this were an old or outdated technology, I could almost understand, but you can hardly find a current motherboard now that doesn't implement fakeraid, and regardless of whether Linux software RAID is better than fakeraid, fakeraid has its merits too, and I'm sure it is used very often, as it can be set up much quicker than Linux software RAID.

That's just my 2 cents, but again I appreciate everything you're doing to try and help us with this. Thanks.

Revision history for this message
Aeudian (jbanks-bogdan) wrote :

I also have the same issue on my HP Workstation XW4600 with an HP Mirror (160GB).

The system tries to partition the drive array (which it sees), however it tries to install to PARTITION1, which does not exist. When I do an ls on /dev/mapper/ I see ARRAY, ARRAYP2, and ARRAYP5; not ARRAYP1 (where it attempts to install).

I booted into Ubuntu 10.04 via CDROM and did an fdisk and created my primary partition and linux swap. I then formatted my primary partition to ext4. I tried to rerun the installer however it wanted to format the partition (which would break it). I rebooted the system and ran the installer again this time setting it to use the existing ext4. I had to add the mount point (/). The installer finished successfully afterwards.

However, on boot it appears that grub fails to load. After the BIOS boots I receive a blinking cursor. I am booting back to the CD now to check my grub config. I will let you know my results.

Revision history for this message
Aeudian (jbanks-bogdan) wrote :

Okay... I figured out my problem. After I did everything above and ran the installer, at Step 8 (the last step) I did not hit Advanced at first. It had grub set to be installed to /dev/sda (WHICH DOES NOT EXIST). I used the drop down menu and selected /dev/mapper/(array)P1.

The system fully booted and appears to be operational.

Revision history for this message
beamin (wmartindale) wrote :

Aeudian:

     I have the exact same issue that you did. However, I am struggling to get fdisk to work. Whenever I try to list with fdisk -l, nothing shows up. If I try to list with fdisk -l /dev/mapper/(raid1array), it says it cannot open it.

     Can you shed some light on the specifics of using fdisk for this scenario? I have tried using gparted, manually setting up partitions in the installer itself, no luck.

     Thanks!

ps: I completely agree with David Tomlin.

Revision history for this message
Aeudian (jbanks-bogdan) wrote :

beamin:

Do "ls -l /dev/mapper/" the results will print out the controller and any paritions currently (indicated by p#).

Make sure you do sudo and do "sudo fdisk /dev/mapper/(array)" with no parition #.

Setup your partitions you will do 2 (or more). On my 160GB Mirror I did P1 as 150GB (linux format 83) as a primary. And I did an extended 10GB as swap (linux format 82).

Once they were made; I wrote the changes and had to reboot. Boot back to the CD and do "ls -l /dev/mapper/" again. It should show P1 and P5 along with the controller.

Then I had to do "sudo mkfs.ext4 /dev/mapper/(array)(p1)" Make sure you select P1 (parition 1). Again I had to reboot after else the installer wants to format the parition.

Boot back into the CD and when you get to the install part for partitions choose manually. Select the ext4 you created and put a mount point of "/". Uncheck the format box if it checks. When you get to stage 8; make sure you point the grub installer to parition1 NOT /dev/sda.
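Condensed into a single session, the above looks roughly like this (isw_example_array is a placeholder for whatever name "ls -l /dev/mapper/" shows on your system):

  ls -l /dev/mapper/                         # the bare array; partitions show up as ...p1, ...p5
  sudo fdisk /dev/mapper/isw_example_array   # partition the array itself, no partition number
  # create a primary partition (type 83) and a swap partition (type 82), write, then reboot to the CD
  ls -l /dev/mapper/                         # the new partitions should now be listed
  sudo mkfs.ext4 /dev/mapper/isw_example_arrayp1   # format partition 1, then reboot once more before installing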

Revision history for this message
David Tomlin (davetomlin) wrote :

Aeudian,

Just out of curiosity, how much time would you say you invested on finally being able to get your fakeraid setup to work?

Revision history for this message
Aeudian (jbanks-bogdan) wrote :

David,

Probably 2 hours? I tried the install this morning, and it took about 3 attempts before a workaround was discovered.

Revision history for this message
Phillip Susi (psusi) wrote : Re: [Bug 568050] Re: Ubuntu 10.04 can't create partition on fakeraid

On 4/30/2010 1:42 PM, beamin wrote:
> I have the exact same issue that you did. However I am struggling
> to get fdisk to work. Whenever i try to list with fdisk -l, nothing
> shows up. If I try to list with fdisk -l /dev/mapper/(raid1array) it
> says it cannot open it.
>
> Can you shed some light on the specifics of using fdisk for this
> scenario? I have tried using gparted, manually setting up partitions in
> the installer itself, no luck.

You need to use sudo. Also to get gparted to see the array run sudo
gparted /dev/mapper/(raid1array).

Revision history for this message
beamin (wmartindale) wrote :

Thanks a lot guys for your time. I am working right now but will report back later today / this evening with results.

Revision history for this message
jay armstrong (jayarmstrong) wrote :

Confirming that this happens with the 10.04 final release as well.

10.04 i386 desktop
nvidia geforce 8200 fakeraid

Revision history for this message
David Tomlin (davetomlin) wrote :

Just so I can do some planning, is it realistic to think this bug could be fixed soon and implemented in the iso downloads, or will it more than likely be released in an upcoming major update?

Revision history for this message
jerico (erich-schommarz) wrote :

Hello everyone,

Unfortunately I'm also facing this problem with the new Ubuntu 10.04 Desktop ISO and Alternate ISO. After trying all the ways described above, I killed my fakeraid and installed 10.04 with a software RAID. At least I now have my new system back up and running. Windows on a second partition is now a requirement for me.

Phillip Susi (psusi)
Changed in ubiquity (Ubuntu):
assignee: nobody → Phillip Susi (psusi)
status: Confirmed → In Progress
Phillip Susi (psusi)
affects: ubiquity (Ubuntu) → parted (Ubuntu)
Revision history for this message
Dale Kuhn (dalekuhn) wrote :

I am running Mint 9 using the post from Konstantinos on May 16 from this thread. I imagine this is mostly relevant to Ubuntu 10.04 as well. For the most part, it seems to work fine. However, there are two things that are odd. When I get to the end of the install procedure, I run these commands:

grub-install /dev/mapper/isw_jfighfbah_My_RAID
update-grub

I get about 5 warnings of a memory leak from the install command and about 10 more from the update. This sounds similar to what Martin Lucich reported on 5/12, but it does reboot afterwards without issue. The second odd thing came up when I went to install my Xilinx software. I'm installing to /opt/Xilinx, which has about 170GB free based on right-clicking the folder and looking at its properties. However, the Xilinx installer says I have insufficient disc space (it shows 0 GB free). What's funny is that it asks if I want to install anyway. I say yes, and the software works afterwards. This software does see my drives correctly under OpenSuse 11.2 using the same partition setup.

So while there are workarounds to get this OS usable, there are still some lingering issues related to the handling of the fake raid arrays. For the record, I am using two 1TB drives under Intel Matrix Raid (ICH10R). The first 100GB of each drive is striped and used for / and swap, the other 900GB from each drive is mirrored and mounted as /home. All partitions except swap are ext4.

I'm still pretty new to Linux in general, but I'm willing to help test and get this resolved.

Thanks for your help,
Dale

Revision history for this message
Obolo (spamer-onlinehome) wrote :

I've got a simple solution for this problem: I built a Linux soft RAID instead of the fakeraid from the onboard chip. This brings a lot of pluses:

1. The PC boots faster because of the missing RAID BIOS screen

2. The HDDs now work with AHCI and can deliver their full performance (NCQ etc.)

3. SMART now works on the Linux desktop for each HDD

4. The performance of the Linux software RAID0 is 25% higher (!) than the fakeraid performance (hdparm, bonnie)

5. The installation with the alternate CD is free of failures!

So, use the Linux soft RAID! You can only win!

Revision history for this message
Dale Kuhn (dalekuhn) wrote :

I'm curious about software raid. The main reason I like my matrix fakeraid setup is that I can have part of my disc used for striping and the rest used for mirroring. I decided to build my own box and not buy a copy of Windows so I don't have any dual boot issues at this point. Honestly, I've never timed my bootup with and without fake raid striping to compare them. Maybe it doesn't matter that much?

Revision history for this message
Obolo (spamer-onlinehome) wrote :

@Dale Kuhn: You can create RAID arrays like Intel Matrix RAID, but with more flexibility. With Intel Matrix you can create only two RAID arrays (RAID0+RAID1, 2x RAID0, 2x RAID1, etc.), while with Linux soft RAID you can create as many RAIDs as you like. It's a killer feature, I think. If you want to install Windows alongside, leave space on one HDD and make a Windows soft RAID; that has been possible since Vista, too!

Revision history for this message
Phillip Susi (psusi) wrote :

Guys, if you want to have a discussion about fakeraid vs soft raid,
please take it elsewhere since it is not pertinent to this bug.

Revision history for this message
Pradeep Sanders (psanders-ultraviolet) wrote :

When installing Ubuntu 10.04 server onto a fakeraid (Intel 82801) with a RAID-10 volume created, partman refuses to show the correct raid set. Even if you partition your disk using another OS and boot back into the installer, you still cannot select it.

The only solution I have found is to do the following:

1) Switch to another console (alt-F2)
2) Save the output of /bin/parted_devices somewhere (/tmp/realdevices is used here)
3) Copy /bin/parted_devices to /bin/parted_devices.orig
4) Using nano, edit /bin/parted_devices and enter:

#!/bin/sh
cat /tmp/realdevices

5) chmod 755 /bin/parted_devices
6) Execute /bin/parted_devices and compare with output from executing /bin/parted_devices.orig. Should be identical.
7) Run fdisk on your RAID-10 volume and record the size in bytes from the first line output by the 'p' command
8) Using nano, edit /tmp/realdevices and copy the line for one of the component RAID-1 volumes. For example:

/dev/mapper/isw_abcabcabcd_Volume0-0 150037204992 Linux device-mapper (mirror)

9) Modify the copied line to match the raid-10 device:

/dev/mapper/isw_abcabcabcd_Volume0 300074401792 Linux device-mapper (mirror)

NOTE: These are examples for 150GB disks, making a 300GB RAID-10. abcabcabcd is also just a placeholder, yours will be different. USE THE CORRECT VALUES FOR YOUR SYSTEM!

10) Execute /bin/parted_devices and compare the output to the previous execution. The new line should appear formatted exactly like the ones you had before

11) Proceed with installation.
12) Skip installing a bootloader. When the install is complete, reboot to the install CD and enter rescue mode.
13) Chroot to the new disk, purge the grub-pc package, and install grub.
14) Run grub manually as follows:

grub --device-map=/dev/null

grub> device (hd0) /dev/mapper/isw_abcabcabcd_Volume0

grub> geometry (hd0) 121602 255 63 (use output from cfdisk here for Cylinders, Heads, Sectors)
drive 0x80: C/H/S = 121602/255/63, The number of sectors = 1953536130, /dev/mapper/isw_hijdbieid_Volume0
   Partition num: 0, Filesystem type is ext2fs, partition type 0x83
   Partition num: 5, Filesystem type unknown, partition type 0x82

grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 17 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/boot/grub/stage2
/boot/grub/menu.lst"... succeeded
Done.

15) Reboot

Revision history for this message
Preet (preet-desai) wrote :

Phillip - I tried to run your patch (#84), but I can't get past the package configuration for grub, because it fails to install on /dev/mapper/isw_jfighfbah_My_RAID. I do not have a number at the end of the volume name. I'd appreciate any advice you can offer.

Revision history for this message
Chris Martin (chris-martin-cc) wrote :

Just in case anyone is still interested in this, I thought I would let you know how I managed to install 10.04 on a DELL 390 with an ICH7R fakeraid controller (mirrored drives).

First, I had been struggling with this for a month - but without Phillip Susi's update I would never have made it. Thanks Phillip.

I ended up using EXT3 for /root and grub

(1) Make the fakeraid config in the BIOS - without a digit at the end of the name (see the ls /dev/mapper output):
brw-rw---- 1 root disk 252, 0 2010-06-01 23:29 isw_badgdgdehd_crow # (note: no digit)
brw-rw---- 1 root disk 252, 1 2010-06-01 23:29 isw_badgdgdehd_crow1 # /root
brw-rw---- 1 root disk 252, 2 2010-06-01 23:29 isw_badgdgdehd_crow5 # swap
brw-rw---- 1 root disk 252, 3 2010-06-01 23:29 isw_badgdgdehd_crow6 # /home

(2) In a terminal window - Install Phillips update:
sudo apt-add-repository ppa:psusi/ppa
sudo apt-get update
sudo apt-get install libparted0 # Note: I used install otherwise you get ALL updates

(3) Perform Install - BUT:
(3.a) Chose manual partitioning and make sure that the root fs is EXT3
(3.b) At the last step before the actual install, click the Advanced options and uncheck the "install boot loader" option. We will install grub (the boot loader) later.

(4) After the installer finishes, reboot the machine and boot from the live CD again
(4.a) Check that you can view the partitions in the raid array with this command
         $ ls -l /dev/mapper/
         control
         isw_badgdgdehd_crow
         isw_badgdgdehd_crow1
         isw_badgdgdehd_crow5
         isw_badgdgdehd_crow6

(5) Install libparted0 and grub in your new Ubuntu installation:
     I'm not sure I really needed to install libparted0, but I did anyway.
         1. $ sudo mkdir /m
         2. $ sudo mount /dev/mapper/isw_badgdgdehd_crow1 /m # This mounted my /root under /m
         3. $ sudo mount --bind /dev /m/dev/
         4. $ sudo mount -t proc proc /m/proc/
         5. $ sudo mount -t sysfs sys /m/sys/
         6. $ sudo cp /etc/resolv.conf /m/etc/resolv.conf
         7. $ sudo chroot /m
         8. # apt-add-repository ppa:psusi/ppa
         9. # apt-get update
       10. # apt-get install libparted0
       11. # apt-get install grub

(6) Set up grub
       1. # mkdir /boot/grub
       2. # cp /usr/lib/grub/x86_64-pc/* /boot/grub/
       3. # grub-install /dev/mapper/isw_badgdgdehd_crow
           NOTE: I received an error indicating that the drive was not in the BIOS - but it did create the device.map file.
       4. edit the file /boot/grub/device.map
           - change
           (fd0) /dev/fd0
           (hd0) /dev/sda
           (hd1) /dev/sdb
           - to
           (fd0) /dev/fd0
           (hd0) /dev/mapper/isw_badgdgdehd_crow
           (hd1) /dev/mapper/isw_badgdgdehd_crow
       5. # grub --no-curses # you will then have a grub prompt
       6. grub> device (hd0) /dev/mapper/isw_badgdgdehd_crow
       7. grub> root (hd0,0)
       8. grub> setup (hd1) # Yep. I had to do both - In this order
       9. grub> setup (hd0)
     10. grub> quit
     11. # update-grub # Answer yes to creating a menu.lst

(7) Reboot

I was then able to boot...


Revision history for this message
beamin (wmartindale) wrote :

Chris Martin,

     YOU ARE THE MAN! This actually WORKED! Ubuntu 10.04 LTS is booting on my fakeraid now! Awesome!

Thanks PHILLIP also for your support!!!

Revision history for this message
Demetrio Pecorini (demetrio90) wrote :

Hi all, I'm trying to follow Chris Martin's solution to install Ubuntu 10.04 with Windows 7 on a RAID 0 with an ICH8R controller.
This is my partition scheme, after a successful install (thanks to Phillip Susi):

crw-rw---- 1 root root 10, 59 2010-06-02 12:35 control
brw-rw---- 1 root disk 252, 0 2010-06-02 12:35 isw_cgecfhgbha_CAVIAR
brw-rw---- 1 root disk 252, 1 2010-06-02 12:35 isw_cgecfhgbha_CAVIAR1 # Windows 7 NTFS
brw-rw---- 1 root disk 252, 2 2010-06-02 12:35 isw_cgecfhgbha_CAVIAR5 # swap
brw-rw---- 1 root disk 252, 3 2010-06-02 12:35 isw_cgecfhgbha_CAVIAR6 # / EXT3

I have edited my device.map file as follows (I also have hd2, as you can see):

(fd0) /dev/fd0
(hd0) /dev/mapper/isw_cgecfhgbha_CAVIAR
(hd1) /dev/mapper/isw_cgecfhgbha_CAVIAR
(hd2) /dev/mapper/isw_cgecfhgbha_CAVIAR

Finally, when I try to set up grub with the following command:

setup (hd1)

I get an error saying: "Error 17: Cannot mount selected partition". Do you know what I'm doing wrong? Sorry for my noob questions.

Revision history for this message
Demetrio Pecorini (demetrio90) wrote :

Sorry for my previous post; I finally got Ubuntu 10.04 and Windows 7 to work in dual boot. I was using the wrong partitions to configure grub. Thank you very much Phillip Susi and Chris Martin for the support you gave us.

Revision history for this message
Phillip Susi (psusi) wrote :

On 06/02/2010 05:33 PM, Demirulez wrote:
> Sorry for my previous post, i finally got Ubuntu 10.04 and Windows 7 to
> work in dual boot, I was using wrong partitions to configure grub. Thank
> you very much Phillip Susi and Chris Martin for your support you gave
> us.

Did you need the modified package in my PPA to do so?

Revision history for this message
beamin (wmartindale) wrote :

Phillip,

     Both Demi and I followed Chris Martin's write up which included your PPA.

Revision history for this message
syngiun (syngiun) wrote :

I'm trying to follow Chris Martin's write up however...

All goes well until step 4.a. I run $ ls -l /dev/mapper/ and I get the following (using Mint 9):

mint@mint ~ $ ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 59 2010-06-03 06:33 control
brw-rw---- 1 root disk 252, 0 2010-06-03 06:33 isw_cabcedbh_PapaRaid

During install, I manually created a root ( / ) and a /home partition (no swap as I have 8GB RAM and don't use hibernate.) Otherwise, the rest of the install process went fine.

Correct me if I'm wrong, but all I'm seeing here is the array, and not either of my partitions.

What to do now?

Revision history for this message
Demetrio Pecorini (demetrio90) wrote :

Phillip,

         Yes, I used your modified package before and after installing, like Chris Martin said, and in case it's useful, I also followed this tutorial for dual booting: http://neildecapia.wordpress.com/2010/02/15/dual-booting-windows-7-and-ubuntu-karmic-9-10-on-a-raid-0-array/ (there was no need to repair my Windows 7 installation as it describes).

Revision history for this message
beamin (wmartindale) wrote :

Synguin,

      That is only the array, yes.

     Did you install the OS on the array originally, and not on one of the drives separately? (I'm assuming you did, since it detects the array when you do ls -l /dev/mapper.) I just slapped a Mint 9 CD in my Ubuntu 10.04 box just for kicks, and it lists the array + partitions just fine.

       That's pretty weird that you rebooted into the livecd and there is nothing there after doing a full install... !

       My only suggestion: Try again?

Revision history for this message
Mario Arias (the-clone-master) wrote :

For those trying to use 10.04 server and not desktop....

I was able to install using fakeraid with these steps (mixed from several posts and a lot of trial and error, later tested on three different machines)...

* Create fakeraid taking care of not having a digit at the end of the raid volume name
* Partition the volume using Ubuntu 9.10. In my case, I stopped the Ubuntu 9.10 server installer right after the partitions were created and the installer began to copy files to the disk. At that point I shut down the server.
* Start 10.4 LTS server installer.
* When you get to partitioning, select the ext4 partition and modify it to "use as ext4" (the current usage is none), and set the mount point to "/ (root)". Don't touch anything else
* Finish partitioning and let the installer complete.
* When done, restart the machine but go into the 10.4 installer again. This time select "repair a broken system"
* When you reach the menu with the different repair options (start a terminal session, ..., install grub, ...), choose install grub and enter your fakeraid device: /dev/mapper/isw_XXXXXXX_XXXX (the one with no number at the end)
* Remove the installer and restart the server
* Enjoy 10.4 LTS on FakeRaid... ;-)

Regards,
-Mario
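If the rescue menu's grub step fails, the manual equivalent from a rescue shell should be roughly the following (an untested sketch; the device name is a placeholder - use your own array, the one with no number at the end):

  chroot /target                              # enter the freshly installed system
  grub-install /dev/mapper/isw_XXXXXXX_XXXX   # install grub to the array, not /dev/sda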

Revision history for this message
Kzin (wmkzin) wrote :

Hello Everybody,
First, I can confirm this bug, SATA RAID1, nVidia nForce4 chipset, onboard MSI K8N Neo4 or something like that.
Second, I can confirm Phillip's fix works.
Lastly, I found a more streamlined approach to a successful install, at least for me. Sorry Phillip for the scope creep, but it seems that most people hit this same wall once they apply your patch, and that is that grub does not install correctly.

This is pretty much copied verbatim from the post by Chris Martin... I am just going to modify it a bit; there are some key differences, so keep an eye out:

(1) Start the desktop installer, choose to try Ubuntu (This could work with alt install, but this is what I did)
(2) In a terminal window - Install Phillips update:
sudo apt-add-repository ppa:psusi/ppa
sudo apt-get update
sudo apt-get install libparted0

(3) Perform Install - BUT:
(3.a) Choose to overwrite the entire disk (all data will be lost)
(3.b) At the last step before the actual install, click the Advanced options and uncheck the "install boot loader" option. We will install grub (the boot loader) later.

(4) After installer finishes return to your terminal window
(4.a) Check that you can view the partitions in the raid array with this command
         $ ls -l /dev/mapper/
         control
         nvidia_achdbjg
         nvidia_achdbjg1
         nvidia_achdbjg5

(5) Install grub2 in your new Ubuntu installation:
         sudo mkdir /m
         sudo mount /dev/mapper/nvidia_achdbjg1 /m
         sudo mount --bind /dev /m/dev/
         sudo mount -t proc proc /m/proc/
         sudo mount -t sysfs sys /m/sys/
         sudo cp /etc/resolv.conf /m/etc/resolv.conf
         sudo chroot /m
         apt-get install grub-pc

Here you get a few menus, and also a prompt as to where you would like to install the bootloader. I installed mine on /dev/mapper/nvidia_achdbjg. Be sure to install it on your mapper device and not your /dev/sdx devices, as those aren't available at reboot.
You will get a bunch of memory leak errors.

Reboot and everything works nicely.

Thank you Phillip for the patch and thank you Chris for the steps that got me most of the way there. Didn't work for me, but got me on the right course.

Colin Watson (cjwatson)
Changed in parted (Ubuntu Lucid):
status: New → In Progress
importance: Undecided → High
assignee: nobody → Phillip Susi (psusi)
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package parted - 2.2-5ubuntu6

---------------
parted (2.2-5ubuntu6) maverick; urgency=low

  [ Phillip Susi ]
  * fix-dmraid-regression.path: Reverse upstream change that broke
    installation on dmraid disks for lucid (LP: #568050)
    (Note that this patch is likely to be reverted in Maverick once
    udev/lvm2 switch to the new naming scheme, per the upstream mailing list
    discussion.)
 -- Colin Watson <email address hidden> Mon, 14 Jun 2010 11:07:13 +0100

Changed in parted (Ubuntu):
status: In Progress → Fix Released
Revision history for this message
Colin Watson (cjwatson) wrote :

I'm sponsoring Phillip's fix, with a few inconsequential tweaks - thanks! Note that the version in lucid-proposed is going to be LESS than that in Phillip's PPA, due to how standard version numbering works out with respect to maverick - I'm not going to worry too much about this.

Colin Watson (cjwatson)
description: updated
Revision history for this message
Federico Gonzalez (federico-gonzalez) wrote :

Where can I find a LiveCD (x64 Server) that includes the fix?
I'm about to switch my server over from Windows to Ubuntu+VMware and getting a wee bit nervous reading this topic. :)

Revision history for this message
Phillip Susi (psusi) wrote :

On 6/15/2010 11:15 AM, Federico Gonzalez wrote:
> Where can I find a LiveCD (x64 Server) that includes the fix? I'm
> about to switch my server over from Windows to Ubuntu+VMware and
> getting a wee bit nervous reading this topic. :)

There is no such thing as a server livecd. The livecd is the desktop
build, server can only be installed with the conventional text mode
installer. It sounds like an iso for testing this should be posted here
soon, and if it goes well, the 10.04.1 release images should have the
fix and work when they are released.

Revision history for this message
Federico Gonzalez (federico-gonzalez) wrote :

Phillip,
Thank you for your answer. I'll keep an eye out for 10.04.1 then - as far as I can tell it will be released during July.

Revision history for this message
David Tomlin (davetomlin) wrote :

Phillip,

You're the man! Thanks for taking on this bug!

Dave

Revision history for this message
Julian (julian-online) wrote :

Hi,
I have a DELL E520 with an Intel ICH8R fakeraid. Following the instructions from Jeremy and
later the patch from Phillip, I managed to install 10.04 on the RAID - first the ext4 and then the swap partition.
Thanks a lot for this information!

Now I have the issue that the PC does not boot sometimes (most of the time, in fact).
Grub starts and afterwards the screen goes blank; no kernel messages etc.
I updated grub2 several times and tried to find the issue, but could not find out why.

Can this be related to the same issues with the fakeraid? I guess grub did not find the kernel.
Or should I try to install a separate boot partition, or grub1?

Julian

Revision history for this message
Phillip Susi (psusi) wrote :

Julian, please start a thread on the ubuntu forums for help making sure
grub is set up correctly. We need to keep this bug report clear of
clutter. This issue only affects the partitioning stage of installing.

Revision history for this message
Martin Pitt (pitti) wrote : Please test proposed package

Accepted parted into lucid-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!
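For reference, enabling -proposed on a lucid system amounts to roughly the following (a sketch based on the wiki page above; the mirror URL is an assumption and may differ for your region):

  # add the lucid-proposed pocket to apt's sources, then pull in the proposed parted
  echo "deb http://archive.ubuntu.com/ubuntu/ lucid-proposed restricted main multiverse universe" | sudo tee -a /etc/apt/sources.list
  sudo apt-get update
  sudo apt-get install libparted0 parted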

Changed in parted (Ubuntu Lucid):
status: In Progress → Fix Committed
tags: added: verification-needed
Joel Ebel (jbebel)
tags: added: glucid
Revision history for this message
Joel Ebel (jbebel) wrote :

I have tested the parted-udeb and libparted0-udeb inside the debian-installer and can confirm that the partitions are now created in /dev/mapper as, for example, ..Volume01 rather than ..Volume0p1. The installation proceeds past the partitioning phase now. However, at the end of the install, grub-install still tried to install to /dev/sda and failed.

Revision history for this message
michael ansey (der-brain) wrote :

To make it clear: we have multiple bugs which affect all Ubuntu versions installing fakeraid. The first is that you cannot create a partition (you end up with a red screen). The second is that you cannot install grub2. The third, which nobody has tested so far, comes up if you use mdadm: grub2 is only installed on one disk during installation, and you cannot add grub2 manually to the second disk. You will fail with an error along the lines of: installing grub2 to a partition is not a good idea unless you have a raid array -- installation fails (sorry, just a rough translation; I installed in German).

This means that for 10.04 Lucid Lynx there is no redundant system possible out of the box. I strongly suggest taking the server version offline and moving it back to testing, because it is a real danger for anyone running a 10.04 server system.

The last bug means that if your second drive fails, you have to go to your hosting company if you do not have a remote card installed! This should be written in the documentation, rather than claiming that fakeraid is supported. That is a lie!

What we all need are fixed 10.04 CDs which include Phillip's patch and several grub2 patches, especially one writing the boot sector to the second disk.

In the end, I have to say I am very disappointed that there is obviously no testing at HP! It was big news in all the papers that Canonical now works together with HP, especially on SERVERS. I am sure they never put a beta onto their RAID 1 machines at HP! This is a relationship just for Canonical's promotion. There is no Ubuntu testing at HP labs, for sure!

As an administrator, I can only say that using Debian or Red Hat is safe. I will use Ubuntu just for desktop computers after this experience. This also says a lot about HP. I do not want to buy servers for a few hundred thousand dollars from a company which does not test its certified LTS server version - no thank you, I will not bring a company with such behaviour into contact with my customers!

Revision history for this message
Phillip Susi (psusi) wrote : Re: [Bug 568050] Re: Ubuntu 10.04 can't create partition on fakeraid

This is getting off topic Michael, but fakeraid support is only useful
for dual boot compatibility with Windows. It is not well supported, and
does not properly handle fault tolerance on any distribution. If you
are building a server or otherwise do not dual boot with Windows, you
should be using conventional Linux mdadm software raid, which is well
tested and supported.

Revision history for this message
Martin Pitt (pitti) wrote :

Hello Joel,

Joel Ebel [2010-06-18 18:33 -0000]:
> I have tested the parted-udeb and libparted0-udeb inside debian-installer
> and can confirm that the partitions are now created in /dev/mapper as,
> for example, ..Volume01 rather than ..Volume0p1. The
> installation proceeds past the partitioning phase now.

Thanks for testing!

> However, at the end of the install, grub-install still tried to
> install to /dev/sda and failed.

OK, seems that there is an additional grub problem then. Can you
please add a grub task ("Also affects distribution...") and attach the
install log?

It seems the parted side of the fix worked, so I mark this as
verified. While it doesn't fix the complete install, it at least gets
a step further.

Thanks, Martin

Revision history for this message
Joel Ebel (jbebel) wrote :

Install log of the failed grub-installer attached.

I note that within the installer, fdisk still uses the old 0p1 names for the partitions. Perhaps unrelated, but interesting.

Revision history for this message
Phillip Susi (psusi) wrote :

On 6/21/2010 3:38 AM, Martin Pitt wrote:
> OK, seems that there is an additional grub problem then. Can you
> please add a grub task ("Also affects distribution...") and attach the
> install log?

This should be filed as a separate bug against the installer ( if one
does not already exist ) rather than attached to this one. The
installer has always assumed it should install grub to sda so you have
always had to manually tell it to install to the raid device instead.
That issue is really unrelated to this one.
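(For anyone hitting that, the manual step described in other comments here is roughly:

  grub-install /dev/mapper/isw_XXXXXXX_XXXX   # point grub at the array device, not /dev/sda

with the placeholder replaced by your own array name.)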

Martin Pitt (pitti)
tags: added: verification-done
removed: verification-needed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package parted - 2.2-5ubuntu5.1

---------------
parted (2.2-5ubuntu5.1) lucid-proposed; urgency=low

  [ Phillip Susi ]
  * fix-dmraid-regression.path: Reverse upstream change that broke
    installation on dmraid disks for lucid (LP: #568050)
    (Note that this patch is likely to be reverted in Maverick once
    udev/lvm2 switch to the new naming scheme, per the upstream mailing list
    discussion.)
 -- Colin Watson <email address hidden> Mon, 14 Jun 2010 11:20:45 +0100

Changed in parted (Ubuntu Lucid):
status: Fix Committed → Fix Released
Revision history for this message
SAB (sbungay) wrote :

Is there a respin of 10.04 that will see, use, and install on a software raid? Though I would prefer Ubuntu 10.04, I spent far too much time researching and working on this problem to no avail, and ended up migrating to Fedora 13 which, I gotta say, set up on the raid like a duck takes to water. Please tell me this problem is fixed; I'd like to return to Ubuntu, but I need that raid and can't spend more time futzing around trying to get it to work.

Revision history for this message
SAB (sbungay) wrote :

Forgot the environment info...

Motherboard: ASUS M4A87TD EVO
CPU: AMD Athlon X2 240
ATI 870 / SB850
2GB RAM
3x500GB Seagate HDs in a (software) RAID 5 configuration

Revision history for this message
Scott Talbert (swt-techie) wrote :

On Tue, 27 Jul 2010, SAB wrote:

> Is there a respin of 10.04 that will see, use, and install on a software
> raid? Though I would prefer UBUNTU 10.04, I spent far too much time
> researching and working on this problem to no avail and ended up
> migrating to Fedora 13 which, I gotta say, set up on the raid like a
> duck takes to water. Please tell me this problem is fixed, I'd like to
> return to UBUNTU but I need that raid and can't spend more time futzing
> around trying to get it to work.

SAB,

I don't know about a respin of the installer CD, but I was able to get it
to work off the existing installer CD by:

1. Boot into the Live CD.
2. Get a network connection.
3. "sudo apt-get update"
4. "sudo apt-get install libparted0"
5. Start installer.

Install worked fine for me after this point.

Scott

Revision history for this message
baker.alex (baker.alex) wrote :

10.04.1 was just released this week. Setting up a dual-boot FakeRAID was effortless:

1. Configured RAID array in BIOS named STRIPE
2. Created a partition on the array and installed Windows 7
3. Booted 10.04.1 live CD, chose to test drive Ubuntu, began installer, and partitioned remaining space
4. At the final step before installation I clicked "Advanced" and instructed GRUB to install to my base array /dev/mapper/isw_eahajccfcj_STRIPE (*not* /dev/sda or the partition /dev/mapper/isw_eahajccfcj_STRIPE2 where Ubuntu was installed)
5. Finished installation and rebooted

If you are installing Windows 7 and you have hard drives outside of your array, then pay attention to the location of the System Reserved partition. When I ran the Windows installer, it placed this partition on a separate drive that had a higher boot priority than the RAID array that I installed GRUB to.

Revision history for this message
Phillip Susi (psusi) wrote :

I don't know why this had a task opened for grub-installer. It was an issue in parted and was fixed, so closing the grub-installer task.

Changed in grub-installer (Ubuntu):
status: New → Invalid
Changed in grub-installer (Ubuntu Lucid):
status: New → Invalid
Revision history for this message
Ksaun (cougarslayer) wrote :

I understand about half of what is being said, but I thought all would find this interesting. My hard drive, a WD 1TB SATA III 64MB 7200rpm, was first installed with Windows 7 (Gigabyte motherboard). I used VMware to try a few Linux distributions. I settled on Ubuntu. I put in the live Ubuntu 10.04.1 disk and tried to install it, but my "prepare partition" screen was blank. fdisk -l returned nothing. Gparted saw nothing. I rebooted with Gparted live; it saw nothing. I typed sudo testdisk and it saw a very small partition (640MB) and stated that write access was blocked. I reformatted with Windows XP to NTFS and tried again... same crap. I tried booting without dmraid and then uninstalled dmraid once booted (complete uninstall), and same thing: Gparted could not see my partition. I moved my SATA III drive to a SATA II plug... no problem. Ubuntu is now loading...?
