18.04: Drive mounted into a directory inside a ZFS pool, now the pool is degraded

I'll start by saying all my data is backed up elsewhere, so all is not lost (it's just annoying and I'd like to learn from my mistakes).

I've had a two-drive ZFS mirror pool set up for several months with no problems. I recently added an older 2 TB drive to record my CCTV footage to. To avoid setting up a second Samba share, I wanted to mount the drive under the existing path.

I formatted the 2 TB drive with ext4 and then manually mounted it where I wanted it. Everything seemed fine: the original directory tree was all there, and the new drive's lost+found folder appeared in the new path too. So far, so good!
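
Roughly, the manual steps looked like this (I'm reconstructing the commands from the label and device names in the output further down, so treat them as approximate):

# format the new 2 TB partition (label as it appears in blkid below)
mkfs.ext4 -L CCTVPartition /dev/sdc1
# mount it over the existing CCTV directory inside the pool
mount /dev/sdc1 /zpool_primary/Media/CCTV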

I then went into fstab to make the mount permanent (second line):

UUID=e82b3fae-dce5-4b41-bd87-1f7bbd5f8039 /               ext4    errors=remount-ro 0       1
UUID=6b35ec61-13aa-46f9-b6b7-dfd4b264318f      /zpool_primary/Media/CCTV       ext4    defaults        0       0
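
In hindsight, I'm wondering whether ordering matters here: as far as I understand, ZFS mounts its datasets through zfs-mount.service rather than fstab, so a plain fstab entry like this could be mounted before the pool's path actually exists. If that turns out to be the issue, I believe the entry can be told to wait for ZFS with something like the below (the x-systemd.requires option and nofail are my guess at the right fix, not something I've tested):

UUID=6b35ec61-13aa-46f9-b6b7-dfd4b264318f      /zpool_primary/Media/CCTV       ext4    defaults,x-systemd.requires=zfs-mount.service,nofail        0       0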

I restarted, the server came back up, and I went to bed (I can't be certain whether I checked the path / network share).

This evening, I've found that the ONLY directory left under the pool's path is the CCTV folder.

My initial assumption was that I broke the pool when I mounted the drive, but if you look at the output below, it's reporting that one of the drives used to be attached as a different device:

root@gomez:/zpool_primary# zpool status
  pool: zpool_primary
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        zpool_primary             DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            sdb                   ONLINE       0     0     0
            15142782844563214281  UNAVAIL      0     0     0  was /dev/sdf1

errors: No known data errors
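
One thing that stands out to me: the GUID of the UNAVAIL device (15142782844563214281) matches the UUID_SUB reported for /dev/sdd1 in the blkid output further down, so the disk itself still seems to be attached, just no longer at /dev/sdf1. If it helps with diagnosis, I understand the ZFS label on that partition can be dumped directly with something like this (I believe these commands are right, but corrections welcome):

# list the pool members with their full device paths
zpool status -P zpool_primary
# dump the ZFS label from the partition I suspect is the "missing" one
zdb -l /dev/sdd1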

The results of lsblk:

root@gomez:/home/nick# lsblk -l
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda    8:0    0 111.8G  0 disk
sda1   8:1    0 111.8G  0 part /
sdb    8:16   0   7.3T  0 disk
sdb1   8:17   0   7.3T  0 part
sdb9   8:25   0     8M  0 part
sdc    8:32   1   1.8T  0 disk
sdc1   8:33   1   1.8T  0 part /zpool_primary/Media/CCTV
sdd    8:48   1   7.3T  0 disk
sdd1   8:49   1   7.3T  0 part
sdd9   8:57   1     8M  0 part

Before I start poking around and making things worse, can anyone suggest how I get the ZFS pool back? I've already tried commenting out the fstab line and restarting, but I'm just left with the path (with nothing in it).
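
One approach I've seen suggested for this kind of device-renaming situation is to export the pool and re-import it using the stable /dev/disk/by-id names, so ZFS stops looking for the old /dev/sdf1 path. I haven't run any of this yet and would appreciate confirmation that it's safe here:

# export the pool, then re-import it searching the stable by-id paths
zpool export zpool_primary
zpool import -d /dev/disk/by-id zpool_primary
# or, alternatively, try bringing the lost member back online by its GUID
# zpool online zpool_primary 15142782844563214281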

As a side note, I was under the impression that even with one drive missing, a degraded mirror should keep functioning, which doesn't appear to be the case here.
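
Related to that: since zpool status reports no data errors, I'm also wondering whether the datasets simply aren't mounted (or were shadowed by the CCTV mount), rather than the data actually being gone. If I've understood the tooling correctly, that should be checkable with something like:

# check whether the pool's datasets are actually mounted
zfs list -o name,mounted,mountpoint
# if not, try mounting them all
zfs mount -a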

For posterity, here are the results of blkid:

root@gomez:/zpool_primary# blkid
/dev/sda1: LABEL="OS" UUID="e82b3fae-dce5-4b41-bd87-1f7bbd5f8039" TYPE="ext4" PARTUUID="5805358e-01"
/dev/sdb1: LABEL="zpool_primary" UUID="9579775147971336578" UUID_SUB="6175940412684032547" TYPE="zfs_member" PARTLABEL="zfs-efd142ee34d8cfea" PARTUUID="018883e2-0067-ac4b-8126-a2c02d0cfa45"
/dev/sdc1: LABEL="CCTVPartition" UUID="6b35ec61-13aa-46f9-b6b7-dfd4b264318f" TYPE="ext4" PARTLABEL="primary" PARTUUID="a31f929b-0989-4baa-8faf-082be6fca607"
/dev/sdd1: LABEL="zpool_primary" UUID="9579775147971336578" UUID_SUB="15142782844563214281" TYPE="zfs_member" PARTLABEL="zfs-693944edaab7d9e6" PARTUUID="91203b94-8387-1e4e-8646-5ba2cb6c461f"
/dev/sdb9: PARTUUID="e18a4bf7-b4c9-2149-bcd2-19ab9ba2182c"
/dev/sdd9: PARTUUID="009580ae-4c61-ab4d-a923-b7dc6ba14faa"

fdisk:

root@gomez:/zpool_primary# fdisk -l
Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5805358e

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1  *     2048 234440703 234438656 111.8G 83 Linux


Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7FB97DF9-6B2B-4D4A-A086-4B53CBA65C6F

Device           Start         End     Sectors  Size Type
/dev/sdb1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sdb9  15628036096 15628052479       16384    8M Solaris reserved 1


Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 205036C8-9CE4-4C1F-9FC5-0D8BF4B6EBB8

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 3907028991 3907026944  1.8T Linux filesystem


Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA0DD0BF-B55E-4B4E-BA33-C74AAB6922F9

Device           Start         End     Sectors  Size Type
/dev/sdd1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sdd9  15628036096 15628052479       16384    8M Solaris reserved 1
