mountall blocks on timeout waiting for a partition, rather than supplying prompt and picking it up later
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| mountall (Ubuntu) | Fix Released | High | Scott James Remnant (Canonical) | |
| Lucid | Fix Released | High | Scott James Remnant (Canonical) | |
Bug Description
This bug describes the fault where, during booting, you will see the message "Waiting for /some/partition [SM]".
The partition may be on LVM, it may be encrypted, or it may simply be on a slower disk. The key point is that the message is unintelligible, and never goes away on its own.
In effect, boot hangs because a drive takes more than 2s to become ready.
Anzenketh (anzenketh) wrote : | #1 |
affects: | ubuntu → lvm2 (Ubuntu) |
tags: | added: regression-potential |
Anzenketh (anzenketh) wrote : | #2 |
Thank you for taking the time to report this bug and helping to make Ubuntu better. Unfortunately we can't fix it without more information. Please include the information requested at https:/
Changed in lvm2 (Ubuntu): | |
status: | New → Incomplete |
freak007 (freak-linux4freak) wrote : | #3 |
I have a similar problem with lvm, but I don't know if it's the same.
Sometimes (very often, in fact) during the boot process, my lvm volumes are mounted but empty! Of course, I'm unable to access my desktop.
After some boots, my volumes are good and all works fine.
dmesg does not show anything.
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: LVM Not mounting in Lucid | #4 |
OK, looking at this finally. I'm getting ready for my wedding, so
sorry for the slow response.
First, there is no /scripts/
Second, the --verbose and --suppress-syslog options are not valid for
udevd inside of initramfs
Thanks,
Deven
DevenPhillips (deven-phillips) wrote : Re: LVM Not mounting in Lucid | #5 |
Additionally, the /sbin/udevtrigger command does not exist on Lucid.
Stephan Rügamer (sruegamer) wrote : | #6 |
I'm setting this from Incomplete to Confirmed. The reasons are:
1) what Deven said below (the instructions are not really applicable to lucid)
2) we have at least two people hitting the very same problem (one of them is me and the other one is amitk (check http://
The way to reproduce:
1. create a VG
2. create a LV on the VG
3. Mount the LV via fstab and reboot your server (it happened on ubuntu lucid server flavour)
4. wait and see
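As a sketch, step 3's fstab entry could look like the following (the VG/LV names vg0/datalv, the mount point /data, and the filesystem type are placeholders I've assumed, not taken from the report):

```
# hypothetical /etc/fstab line mounting an LV via its device-mapper path
/dev/mapper/vg0-datalv  /data  ext4  defaults  0  2
```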
This is a regression from karmic and should be fixed before release.
@bug triaging team: please set the correct "regression" tag (as I don't know the correct workflow)
I wonder if we should move this bug from lvm2 to initramfs, because lvm2 in general works like a charm... it's only the boot-up area.
Changed in lvm2 (Ubuntu): | |
status: | Incomplete → Confirmed |
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #7 |
- lvm-parts.sh (1.6 KiB, text/x-sh)
I use the attached script to create partitions on servers as a step in my kickstart installation.
I can confirm a regression since the last LTS release. When I reboot Lucid with the new partition setup, the boot screen hangs on a random partition, usually /tmp, /var or /usr.
Last tested by me on Lucid Alpha 3 in a VirtualBox setup on x86.
LVM works fine before reboot. The boot goes very fast; does it actually wait for proper LVM discovery?
Stephan Rügamer (sruegamer) wrote : | #8 |
Hmmm...
since the last dist-upgrade, with the new kernel, new initramfs-tools and new mountall package, it works here for me...
I'm trying to reproduce it somehow, because I think there is something like a timing race condition... I'm not sure.
Phillip Susi (psusi) wrote : | #9 |
It seemed to work fine for me last night. I created an LVM snapshot of my 9.10 root, rebooted using the snapshot as the root, then upgraded to lucid. Rebooted back into the original 9.10 root, then again into the lucid snapshot without issue.
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #10 |
Did a new test today.
Kickstart installation with netboot image. I used no.archive as mirror.
Installation and the first reboot went fine. All six LVM volumes were discovered:
/home
/opt
/tmp
/usr
/var
/var/log
All reboots after the first missed one or more LVM volumes and started to hang on a random volume again. I am at a loss here. Why did it work on the first reboot, but not on subsequent reboots?
Suggestions on how to debug this would be appreciated. Or is there a sensible place to insert a delay somewhere to work around this problem?
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: LVM Not mounting in Lucid | #11 |
I have to agree with Phillip. On my system, with just /home as an LVM
volume, the latest updates appear to have fixed my problem. Now, I
would remind you that this is Ubuntu Desktop, 64-bit.
Thanks,
Deven
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : Re: LVM Not mounting in Lucid | #12 |
Hi,
I can reproduce this problem on every new install (ubuntu-minimal on x86).
I've been testing around. Using only two or three LVM partitions seems to work every time; no problem booting there.
Using four LVM partitions, or in my case six, seems to be a problem. How many LVM partitions are detected in the boot sequence varies: usually four, but on some boots five and even three.
Arnulf
doclist (dclist) wrote : | #13 |
When you say LVM partition, do you mean a physical partition, an LVM volume
group, or an LVM logical volume? I experience this problem intermittently
with 1 volume group and 3 logical volumes.
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: LVM Not mounting in Lucid | #14 |
I have several PVs in 1 VG with several LVs, but only the /home volume is
automounted by fstab. I believe that others here are describing multiple LVs
mounted by fstab.
Deven
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : Re: LVM Not mounting in Lucid | #15 |
I create one VG on one PV. In that VG I create six LVs.
I try to mount all six LVs by fstab:
/home
/opt
/tmp
/usr
/var
/var/log
Changed in lvm2 (Ubuntu): | |
importance: | Undecided → Medium |
Amit Kucheria (amitk) wrote : | #16 |
Confirming that it 'hangs' after the first boot for multiple LVM mounts. Increasing importance and assigning to Scott.
I guess Scott will want debug output after adding --debug to the mountall command in /etc/init/
Changed in lvm2 (Ubuntu): | |
assignee: | nobody → Scott James Remnant (scott) |
Amit Kucheria (amitk) wrote : | #17 |
Picture of output with --debug is captured here:
http://
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #18 |
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #19 |
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: LVM Not mounting in Lucid | #20 |
Yep, just happened to me again after a reboot to install some updates.
Here's my configuration details:
LVM2 -
root@dphillips-
--- Physical volume ---
PV Name /dev/sdb
VG Name VirtualMachines
PV Size 465.76 GiB / not usable 12.02 MiB
Allocatable yes
PE Size 16.00 MiB
Total PE 29808
Free PE 2928
Allocated PE 26880
PV UUID tr32vc-
root@dphillips-
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID IILyg1-
LV Write Access read/write
LV Status available
# open 0
LV Size 30.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID Jvrlrc-
LV Write Access read/write
LV Status available
# open 0
LV Size 40.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID VXRec0-
LV Write Access read/write
LV Status available
# open 0
LV Size 30.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID eBW0Na-
LV Write Access read/write
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID rhk9FS-
LV Write Access read/write
LV Status available
# open 0
LV Size 200.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:4
--- Logical volume ---
LV Name /dev/VirtualMac
VG Name VirtualMachines
LV UUID u4CeIR-FuXN-NYuz...
Changed in lvm2 (Ubuntu): | |
importance: | Medium → High |
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : Re: LVM Not mounting in Lucid | #21 |
Should this bug be filed under the mountall package? It doesn't seem to be an lvm bug.
freak007 (freak-linux4freak) wrote : | #22 |
I also think this bug is related to mountall.
If I press S in the boot sequence, I get my gdm login. Switching to VT1 and logging in as root, I can mount my LVs fine.
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #23 |
This bug does not seem to be directly related to lvm, but rather to the mountall command.
affects: | lvm2 (Ubuntu) → mountall (Ubuntu) |
summary: |
- LVM Not mounting in Lucid
+ LVM volumes not mounted in Lucid
summary: |
- LVM volumes not mounted in Lucid
+ multiple LVM volumes not mounted in Lucid
Amit Kucheria (amitk) wrote : Re: multiple LVM volumes not mounted in Lucid | #24 |
Should the LVs be owned by root:root or root:disk?
I have 3 LVs on the new disk - Home, Private and Shared.
I found that all the LVs in /dev/mapper are owned by root:root except for Private and Shared that are owned by root:disk. And these are the two that are not mounted and cause the wait messages. Will go through the udev logs as I find time.
Changed in mountall (Ubuntu Lucid): | |
milestone: | none → ubuntu-10.04-beta-2 |
Michael Heča (orgoj) wrote : | #25 |
I have the same bug. I installed a fresh Lucid beta 2 and added a /data lvm/reiserfs mountpoint to fstab. The system often does not start. Sometimes I press reset when "Waiting for /data [SM]" is shown, and on the next boot the system starts a disk check and boots to gdm.
If I press enter on the "Waiting..." message, a maintenance shell often starts. After running mount -a, all mountpoints are successfully mounted.
Michael Heča (orgoj) wrote : | #26 |
- mountall logs from boot (6.3 KiB, application/zip)
Logs from two boots by
/etc/init/
exec mountall --debug --daemon $force_fsck $fsck_fix >/dev/mountall-
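For anyone who wants to collect the same logs, the change amounts to editing the exec line of the mountall upstart job. A sketch (I'm assuming the job file is /etc/init/mountall.conf as shipped in Lucid; the log paths here are illustrative, chosen to match the mountall-stdout.log/mountall-stderr.log names mentioned later in this thread):

```
# excerpt of an upstart job (assumed path: /etc/init/mountall.conf)
# with --debug added and output redirected to illustrative log files
exec mountall --debug --daemon $force_fsck $fsck_fix \
    > /var/log/mountall-stdout.log 2> /var/log/mountall-stderr.log
```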
Ralph (ralph-puncher-deactivatedaccount) wrote : | #27 |
I have created 3 logical volumes on a removable USB drive - one volume group of one physical partition. If I have fstab entries for these LV's the system will start to boot then give me a "Waiting ..." message on the first LV if the drive has not been connected; enter gives a maintenance shell. If the drive is connected at startup/restart, bootup is not a problem. This problem does not occur under 9.04.
Ralph (ralph-puncher-deactivatedaccount) wrote : | #28 |
Please amend last line to read 9.10 not 9.04.
Changed in mountall (Ubuntu Lucid): | |
assignee: | Scott James Remnant (scott) → Canonical Foundations Team (canonical-foundations) |
Barry Warsaw (barry) wrote : | #29 |
I've tried but have been unable to reproduce this. I'm not entirely sure that my environment is equivalent though, so let me explain what I did and if you have suggestions for other things to try, I can give it a shot.
I created a brand new kvm vm x86_64 w/ a 40G disk, 512MB. I grabbed the lucid-beta1 64bit server iso and did a fresh install. When it came time to partition the disk, I created one VG on the PV. I created 6 LVs on the VG:
root -> /
home -> /home
opt -> /opt
tmp -> /tmp
var -> /var
varlog -> /var/log
with various sizes ranging from about 5G to 10G apiece. Everything installed and booted perfectly fine. No hang, all filesystems mounted correctly. In fact, boot was so blazingly fast I blinked and it was done.
I updated all packages and rebooted about 10 times. I never had a hang or failure to mount any partitions. Boot never took longer than a second or two. I added --debug to mountall as in orgoj's comment #26 and mountall-stderr.log was never anything but empty. mountall-stdout.log didn't have any indications of problems (on the contrary, it looked quite reasonable).
Is this a reasonable test of the reported issue? Is there anything else I can try to get a better reproduction of the bug?
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: multiple LVM volumes not mounted in Lucid | #30 |
I can't say, but I would suggest trying it without using VMs.
Deven
Barry Warsaw (barry) wrote : Re: multiple LVM volumes not mounted in Lucid | #31 |
@Deven: yeah, unfortunately I haven't got any free hardware lying about ;). I'll have to see if I can cobble something together.
Michel (michel-crondor) wrote : | #32 |
I can confirm this. I have one lv which is owned by root:disk, when this lv is present in /etc/fstab, the system refuses to boot, it keeps waiting for this lv to be mounted. If I remove this lv from /etc/fstab, it boots. Unfortunately, I cannot for the life of me find where these permissions are stored! Why does just this one lv have a different group?
Barry Warsaw (barry) wrote : | #33 |
Okay, I'm going to dig up some physical hardware to see if I can reproduce this. I've had no luck reproducing it in VMs, even with a layout suggested by someone in IRC.
Michael Heča (orgoj) wrote : | #34 |
I did a fresh install of Ubuntu 10.04 i386 beta1, manually partitioning the whole disk to:
sda1 /boot ext2 256MB
sda2 swap 2GB
sda3 / ext3 12GB
sda5 lvm main 'rest of disk'
/data/main/home /home reiserfs 40GB
After restart and reboot, the system hangs on "Waiting for /home [SM]".
Barry Warsaw (barry) wrote : | #35 |
@orgoj: interesting. does the same thing happen if you use ext4 instead of reiserfs?
DevenPhillips (deven-phillips) wrote : Re: [Bug 527666] Re: multiple LVM volumes not mounted in Lucid | #36 |
It happens on my machine, and I'm using ext4.
Michael Heča (orgoj) wrote : Re: multiple LVM volumes not mounted in Lucid | #37 |
I noticed this message on boot in both cases, whether the system boots or hangs:
udevd-work[70]: inotify_
Michael Heča (orgoj) wrote : | #38 |
/dev/sdb1 is the second part of my lvm storage on my main PC.
Michael Heča (orgoj) wrote : | #39 |
- mountall-orgoj-ok.tar.gz (2.9 KiB, application/x-tar)
Logs from mountall if system successfully booted.
Michael Heča (orgoj) wrote : | #40 |
I tried the same as with reiserfs, but with ext4 for / and /home (lvm).
The system hangs on boot, but no "Wait for..." is shown. After pressing M, a console is shown. mount does not show /home mounted; mount -a mounts /home without errors. After Ctrl-D the system boots successfully.
summary: |
- multiple LVM volumes not mounted in Lucid
+ Waiting for /some/partition [SM]
Changed in mountall (Ubuntu Lucid): | |
status: | Confirmed → Triaged |
assignee: | Canonical Foundations Team (canonical-foundations) → Scott James Remnant (scott) |
description: | updated |
Changed in mountall (Ubuntu Lucid): | |
status: | Triaged → Fix Committed |
Changed in mountall (Ubuntu Lucid): | |
status: | Fix Committed → Fix Released |
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : Re: Waiting for /some/partition [SM] | #73 |
- mountall.debug-100404-ah (26.5 KiB, text/plain)
I can confirm that the mount issue still exists. I'm adding debug output from mountall and /var/log/udev. Note: I used [S] to skip waiting for /var and /var/log. My test setup is as follows.
lvdisplay:
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
homelv rootvg -wi-ao 1.00g
loglv rootvg -wi-a- 2.00g
optlv rootvg -wi-ao 1.00g
tmplv rootvg -wi-ao 2.00g
varlv rootvg -wi-a- 2.00g
fstab:
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=d5e5232c-
# swap was on /dev/sda5 during installation
UUID=1b4eae89-
UUID=414e0c6d-
UUID=816b3834-
UUID=35f9fad7-
UUID=40767f37-
UUID=e1ae56e7-
I have also discovered that if I move /usr to an lvm partition, then I get the message
error: file not found.
on my console when/right after the kernel boots. I have no idea where that comes from or whether it is related to this problem. Tips for debugging are appreciated.
Arnulf
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #74 |
Arnulf Heimsbakk (arnulf-heimsbakk) wrote : | #75 |
Can the status of this bug be changed from "Fix Released" to "Confirmed" since it is still an issue?
Arnulf
Barry Warsaw (barry) wrote : | #76 |
@arnulf: done
Changed in mountall (Ubuntu Lucid): | |
status: | Fix Released → Confirmed |
thamieu (thamieuz3r0-deactivatedaccount) wrote : | #77 |
I see 2 issues:
- mountall stops working while the user is prompted to press S/M (corrected in mountall 2.10, cf #58)
- the latest devices created in /dev/mapper are owned by root.disk instead of root.root
On my machine, "mountall --version" returns "2.8" while "apt-cache show mountall" returns "2.10" (and apt-get tells me I already have the latest version). Maybe this 2.10 package contains a mistake?
Waiting for the dm device to be mounted is pointless; only changing permissions on /dev/mapper/
thamieu
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 527666] Re: Waiting for /some/partition [SM] | #78 |
On Tue, 2010-04-06 at 11:47 +0000, Arnulf Heimsbakk wrote:
> Can the status of this bug be changed from "Fix Released" to "Confirmed"
> since it is still an issue?
>
No.
If you are still having issues, you must have had a different bug to the
original reporter all along.
Please open a new bug.
Scott
--
Scott James Remnant
<email address hidden>
Changed in mountall (Ubuntu Lucid): | |
status: | Confirmed → Fix Released |
Tim Jones (tim-mr-dog) wrote : Re: Waiting for /some/partition [SM] | #79 |
Hi,
I'm having the same problem as orgoj and some of the others on this bug. Did someone create a new bug for this, possibly different, bug which looks like this one?
Thanks,
Tim
Sergey V. Udaltsov (sergey-udaltsov) wrote : | #80 |
Similar to thamieu. But my /dev/mapper contains only one file, control :((( Should I open a new bug as well?
Tim Jones (tim-mr-dog) wrote : | #81 |
A 'grep swap' extract from /var/log/boot.log with mountall --debug:
local 6/6 remote 0/0 virtual 11/11 swap 0/1
try_mount: /dev/mapper/
try_udev_device: block /dev/mapper/
try_udev_device: /dev/mapper/
run_fsck: /dev/mapper/
activating /dev/mapper/
spawn: swapon /dev/mapper/
spawn: swapon /dev/mapper/
swapon: /dev/mapper/
mountall: swapon /dev/mapper/
mountall: Problem activating swap: /dev/mapper/
mounted: /dev/mapper/
swap finished
local 6/6 remote 0/0 virtual 11/11 swap 1/1
Just a guess here... If each of the filesystems mountall discovers is mounted in the background by a spawned process (assumed from the logging), then, since /home is generally the largest mount on a default install and will take the longest, could it be that all but /home had mounted OK, and that when the swap mount failed, mountall gave up waiting and killed off the spawned mounts?
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 527666] Re: Waiting for /some/partition [SM] | #82 |
On Wed, 2010-04-07 at 16:53 +0000, Tim Jones wrote:
> I'm having the same problem as orgoj and some of the others on this bug.
> Did someone create a new bug for this, possibly different, bug which
> looks like this one?
>
If you could each create a new one using "ubuntu-bug mountall", I would
really appreciate that.
It's quite possible that you each have a different problem at this
point.
Scott
DevenPhillips (deven-phillips) wrote : Re: Waiting for /some/partition [SM] | #83 |
I would also ask that everyone post back here with the new bug numbers, so that I and others can follow the trail to the other bugs should we land here.
Thanks
thamieu (thamieuz3r0-deactivatedaccount) wrote : | #84 |
I opened a new bug about the ownership issue: #557909.
Sergey V. Udaltsov (sergey-udaltsov) wrote : | #85 |
I have my bug related to "lost" lvs/vg: #554478
grendelkhan (scottricketts) wrote : | #86 |
Having this same issue, mountall version 2.11
Michael Heča (orgoj) wrote : | #87 |
I tried the same fresh install from 10.04 beta 2 alternate i386, with /home on reiserfs on LVM as before, and the next 3 reboots were OK.
On my main system, with version 2.11 it mostly boots, and since version 2.12 I have not seen the hang on boot.
Michael Heča (orgoj) wrote : | #88 |
After updating and installing the nvidia-96 driver, the system hangs on boot with the same symptom. On the maintenance console I see /home not mounted; mount -a works fine and after Ctrl-D the system boots. After login, gdm hangs and restarts.
Matt Grant (mattgrant) wrote : | #89 |
Having this same issue, mountall 2.12. Trying to debug it. It seems like an 'add/change' event is not getting to mountall from udev, as the symlinks in /dev/vg are being created...
Matt Grant (mattgrant) wrote : | #90 |
Further to the above:
There is still a race condition in mountall, probably due to the integration with the plymouth boot screen.
Add/change events from udev are being dropped.
When I get the error, I press 'M' and sulogin. The links are there in /dev/<volume_
Two things should be done:
1) add code to try 2 mount attempts before giving up on a file system in /etc/fstab on boot.
2) find the race and fix it.
1) is the belt and braces: not mounting file systems on boot is a SERIOUS problem.
The condition can be debugged on a running machine by creating a volume group with about 10 logical volumes, deactivating it with 'vgchange -a n /dev/<volume_
Micheal Waltz (ecliptik) wrote : | #91 |
- Manual skipping of LVM filesystems to boot fully (50.0 KiB, image/jpeg)
Still having the same problem as well, pulled down the latest packages for install this morning. Attaching screenshot, fstab, mount after boot, and LVM displays.
lsb_release -rd
Description: Ubuntu lucid (development branch)
Release: 10.04
apt-cache policy mountall
mountall:
Installed: 2.13
Candidate: 2.13
Version table:
*** 2.13 0
500 http://
100 /var/lib/
Ali Onur Uyar (aouyar) wrote : | #92 |
I am experiencing exactly the same problem since I upgraded to Lucid yesterday. I wonder if this is a udev problem, because I've also discovered an issue with the permissions of /dev/shm.
Since the upgrade to Lucid, boots hang indefinitely. I have to execute the following procedure to get to the GDM screen:
1. Enter M (for Manual Recovery)
2. Execute "mount -a" which mounts all filesystems on LVM without problems.
3. CTRL-D to close the shell and continue with the reboot.
After login to Gnome Session, launching Google Chrome fails, because /dev/shm has permissions rw-r--r-t. Google Chrome starts working normally after setting /dev/shm permissions manually to rw-rw-rwt, but the permissions do not survive a reboot.
frankie (frankie-etsetb) wrote : | #93 |
Works for me now!
- plymouth 0.8.2-2
- udev 151-12
- mountall 2.13
Scott James Remnant (Canonical) (canonical-scott) wrote : | #94 |
Something is clearly resetting the permissions of /dev/shm. I don't think it'll be udev; udev would have removed the "t" as well.
Ali Onur Uyar (aouyar) wrote : | #95 |
Yesterday, I posted a comment with details of the issue I have been experiencing since I upgraded to Lucid.
Lucid hangs indefinitely with the "Waiting for 'some partition'" error. The partitions that cause the problem are on LVs. Amit Kucheria mentioned that at this point some of the LVs have root:root ownership whereas others have root:disk ownership, and apparently the LVs that hang are the ones with root:disk ownership.
Simply changing the ownership of the device node in /dev/mapper is not a fix, because the permissions are not persistent across reboots. So I went ahead and added the following line to mountall.conf, before the line that launches the daemon with exec:
chown root:root /dev/mapper/*
Adding this line fixed the problem completely. This test seems to confirm that the problem is with the ownership of the LVM device nodes, but I have no idea why some nodes end up with root:disk ownership while others have root:root in the first place.
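In context, the workaround would sit in the upstart job just before the exec line. A sketch (assuming the /etc/init/mountall.conf job as shipped in Lucid; note this masks the symptom rather than fixing whatever sets root:disk):

```
# excerpt of an assumed /etc/init/mountall.conf: normalize device-mapper
# node ownership before mountall runs (workaround only, not a root-cause fix)
chown root:root /dev/mapper/*
exec mountall --daemon $force_fsck $fsck_fix
```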
Thierry Carrez (ttx) wrote : | #96 |
Same here, but sometimes everything works (about half the time):
I have /home under LVM:
/dev/cassini/
The boot process (sometimes) hangs with the following message:
The disk drive for /home is not ready yet or not present.
Continue to wait; or Press S to skip mounting or M for manual recovery
I press M
# mount /home
# exit
and then the boot proceeds. See my mountall logs at comment 3.
Thierry Carrez (ttx) wrote : | #97 |
Sorry, I meant at comment https:/
Bug 561390 tracks this specific issue, it could be marked a duplicate if that bug was reopened instead, depending on where Scott prefers to track the issue.
Ali Onur Uyar (aouyar) wrote : | #98 |
With the following line in mountall.conf to fix permissions for the LVs, everything seems to work fine:
chown root:root /dev/mapper/*
But I've discovered that on battery power things get even worse. The boot seems to hang at about the same place, but I cannot obtain a recovery shell and I have found no way to get a running system. I am not sure whether this is another bug somewhere else or whether it is related.
Ali Onur Uyar (aouyar) wrote : | #99 |
With the change to fix the ownership issue of the DM device nodes, things seemed to be working, but then I started having problems again today, even on mains power. I second Thierry Carrez: the boot fails about half the time. In fact, things have become worse, because sometimes pressing M for manual recovery does not work, and the only way to get the system to boot is to reboot over and over again until I get a working session.
As far as I can gather, the Lucid boot process is failing completely for many people who have multiple filesystems on LVM. I've been using Ubuntu with LVM since 7.04, and all the upgrades up to 9.10 worked without problems. Judging by the comments of others, the problem is not limited to upgrades either. This bug really seems to be a show-stopper, because a system that was working perfectly does not even get to a login prompt with Lucid.
I will be glad to help identify a solution, but I do not know how.
Ali Onur Uyar (aouyar) wrote : | #100 |
It seems the problem does not occur consistently with every possible setup, because I have another laptop with 8 LVs that I upgraded to Lucid yesterday, and it has been booting without problems; just the usual error messages from statd and ureadahead startup for having /var on a separate partition. I've uninstalled ureadahead to fix its error messages, since it apparently does not work with /var on a separate partition, but the statd error messages are still there.
Michael Kofler (michael-kofler) wrote : | #101 |
On my machine (two disks, no RAID, LVM), the boot process still hangs in about 1 out of 5 boots. Ctrl+Alt+Del to reboot almost always works. (Lucid with all updates as of yesterday, 64-bit.)
Changed in mountall (Ubuntu Lucid): | |
status: | Fix Released → Confirmed |
Mathieu Alorent (kumy) wrote : | #102 |
- Bootchart OK (318.6 KiB, image/png)
We still experience this bug on lucid today. Bootchart shows that the boot stalls on mountall.
The system boots in some cases, so it is possible to compare OK and KO cases. Attached are:
* The two bootcharts (OK and KO);
* The two mountall --debug logs (OK and KO);
* Our /etc/fstab
The bootcharts clearly show that mountall is the process blocking the boot with LVM (until we press 'S' or 'M'). In the KO case, the mountall debug logs read:
Received SIGUSR1 (network device up)
try_mount: /WOO waiting for device
which seems to be blocking all the depending mounts.
Mathieu Alorent (kumy) wrote : | #103 |
- Tarball with bootcharts, mountall debug logs and fstab. (509.6 KiB, application/x-tar)
Sorry I could only add one attachment, so here is a tarball with all the attachments listed in the previous comment.
Mathieu Alorent (kumy) wrote : | #104 |
Upon debugging further, it seems mountall is waiting for /dev/HEBEX/
root@malorent:~# lvscan
ACTIVE '/dev/HEBEX/
ACTIVE '/dev/HEBEX/
ACTIVE '/dev/HEBEX/WOO' [5.00 GiB] inherit
ACTIVE '/dev/HEBEX/
ACTIVE '/dev/HEBEX/
root@malorent:~# ls -l /dev/mapper/
total 0
brw-rw---- 1 root disk 251, 4 Apr 23 14:57 HEBEX-VAR_LOG
brw-rw---- 1 root disk 251, 2 Apr 23 14:57 HEBEX-WOO
brw-rw---- 1 root disk 251, 1 Apr 23 14:57 HEBEX-WOO_LOG
brw-rw---- 1 root disk 251, 3 Apr 23 14:57 HEBEX-WOO_PROG
crw-rw---- 1 root root 10, 59 Apr 23 14:57 control
root@malorent:~# ls -l /dev/HEBEX/
total 0
lrwxrwxrwx 1 root root 23 Apr 23 14:57 VAR_LOG -> ../mapper/
lrwxrwxrwx 1 root root 19 Apr 23 14:57 WOO -> ../mapper/HEBEX-WOO
lrwxrwxrwx 1 root root 23 Apr 23 14:57 WOO_LOG -> ../mapper/
lrwxrwxrwx 1 root root 24 Apr 23 14:57 WOO_PROG -> ../mapper/
root@malorent:~# lvdisplay /dev/HEBEX/WOO_BASE
/dev/
/dev/
--- Logical volume ---
LV Name /dev/HEBEX/WOO_BASE
VG Name HEBEX
LV UUID 1an8Zg-
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
So LVM finds the missing device internally, but the device is not created by udev.
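For readers hitting the same symptom, a hedged manual-recovery sketch follows: when lvscan/lvdisplay show a volume that has no node under /dev/mapper, activating the volume group and replaying the udev block events will usually create the missing nodes. This is not the fix for the underlying bug, just a workaround; the commands need root, and the volume group name HEBEX is specific to this reporter's machine, so substitute your own.

```shell
# Activate any logical volumes LVM knows about but has not brought online
# (LV Status "NOT available" in lvdisplay).
sudo vgchange -ay HEBEX

# Ask udev to reprocess block-device events so the /dev/mapper
# and /dev/HEBEX nodes get created, then wait for it to finish.
sudo udevadm trigger --subsystem-match=block
sudo udevadm settle

# Finally, mount whatever entries in /etc/fstab are still pending.
sudo mount -a
```

After this, the stalled mountall prompt should pick the filesystems up, or they can be mounted directly from the recovery shell.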
Mathieu Alorent (kumy) wrote : | #105 |
Update: the /dev nodes seem to only be missing when two LVM partitions fail.
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 527666] Re: Waiting for /some/partition [SM] | #106 |
On Fri, 2010-04-23 at 09:49 +0000, Mathieu Alorent wrote:
> We still experience this bug on lucid today.
>
No, this bug has been fixed. You are experiencing a different bug; I'd
appreciate it if you could open a new bug with "ubuntu-bug mountall",
which will collect some of the information we need from you.
Scott
--
Scott James Remnant
<email address hidden>
Changed in mountall (Ubuntu Lucid): | |
status: | Confirmed → Fix Released |
summary: |
- Waiting for /some/partition [SM]
+ mountall blocks on timeout waiting for a partition, rather than supplying prompt and picking it up later |
Scott James Remnant (Canonical) (canonical-scott) wrote : | #107 |
Mathieu: actually, after reviewing the data you did attach, it's highly
probable that you're experiencing bug #561390
Scott
--
Scott James Remnant
<email address hidden>
Ali Onur Uyar (aouyar) wrote : | #108 |
Hi Scott,
The bugs 561390 and 527666 seem to be pointing to the very same issue to me. In fact, I was quite tempted to mark them as duplicates:
* In both cases the same error message is displayed and the only way to continue with the boot process is to enter the Recovery Shell and mount the missing partitions manually.
* The filesystems that do not get mounted are on LVM.
* There is usually something wrong with the permissions of /dev/mapper devices and /dev/shm, when the problem occurs.
* Both bugs seem to point to a critical regression in Lucid: a partition setup that worked perfectly with Karmic causes problems after the Lucid upgrade.
Why do you think the two bugs refer to separate issues? In what way do the two bugs differ? How can I identify exactly which issue I am experiencing?
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: [Bug 527666] Re: mountall blocks on timeout waiting for a partition, rather than supplying prompt and picking it up later | #109 |
On Sat, 2010-04-24 at 17:42 +0000, Ali Onur Uyar wrote:
> The bugs 561390 and 527666 seem to be pointing to the very same issue to me.
>
They are not.
527666 (this bug) describes an issue where mountall simply doesn't wait
long enough for block devices to appear that *do* appear.
561390 describes an issue where mountall never receives notification of
LVM devices from the kernel.
> Infact, I was quite tempted to mark them as duplicates:
>
Do not.
> Why do you think the two bugs refer to separate issues? In what way do
> the two bugs differ? How can I identify exactly which issue I am
> experiencing?
>
Since this bug (mountall doesn't wait long enough) has been fixed, if
you are experiencing issues you are either experiencing bug 561390
(which has not been marked Fix Released) or a different bug entirely.
It's always best to just file a new bug describing your own problems,
and allow the developers to triage that bug and determine themselves
whether it's a duplicate of a known problem or a new problem not
previously known.
Scott
--
Scott James Remnant
<email address hidden>
Ali Onur Uyar (aouyar) wrote : | #110 |
Thanks Scott, for the detailed explanation. Even though I've been using Ubuntu for the last few years, I am quite new to launchpad.
Scott James Remnant (Canonical) (canonical-scott) wrote : | #111 |
On Mon, 2010-04-26 at 00:01 +0000, Ali Onur Uyar wrote:
> Thanks Scott, for the detailed explanation. Even though I've been using
> Ubuntu for the last few years, I am quite new to launchpad.
>
It's not really a Launchpad thing.
The confusion arises because there's a tendency for users to classify
bugs by their symptoms ("black screen", "big loud noise", etc.),
whereas developers classify bugs by their cause.
While these two bugs have the same apparent symptom, the cause is
actually quite different.
In fact, I'm now convinced there are *three* bugs; two of which have
been fixed. You have the third.
Scott
--
Scott James Remnant
<email address hidden>
Scott James Remnant (Canonical) (canonical-scott) wrote : | #112 |
For those still experiencing problems, and not yet subscribed to bug #561390, in my PPA you'll find a new dmsetup package, could you try it out and see whether it makes things better or worse?
sudo add-apt-repository ppa:scott/ppa
sudo apt-get update
sudo apt-get upgrade
Check you have dmsetup 2.02.54-
dpkg-query -W dmsetup
Then reboot.
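If the PPA packages make things worse, a possible way back is ppa-purge, which downgrades a PPA's packages to the versions in the official archive. This is a general suggestion, not something Scott's instructions cover, so treat it as an assumption:

```shell
# Install ppa-purge, then revert every package installed from the PPA
# back to the version shipped in the Ubuntu archive.
sudo apt-get install ppa-purge
sudo ppa-purge ppa:scott/ppa
```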
Thank you for taking the time to report this bug and helping to make Ubuntu better. This bug did not have a package associated with it, which is important for ensuring that it gets looked at by the proper developers. You can learn more about finding the right package at https://wiki.ubuntu.com/Bugs/FindRightPackage. I have classified this bug as a bug in lvm2.
When reporting bugs in the future please use apport, either via the appropriate application's "Help -> Report a Problem" menu or using 'ubuntu-bug' and the name of the package affected. You can learn more about this functionality at https://wiki.ubuntu.com/ReportingBugs.