Even more disappointing, the drive /dev/md1 I had just created now showed up as a strange /dev/md_d1, with a slew of other devices named /dev/md_d1p1 etc. Horror. A bit of forum post reading later, I had found out that my process had not added my new RAID1 to my /etc/mdadm/mdadm.conf file, which I fixed by adding: ARRAY /dev/md1 level=raid1 num-devices=2 UUID=3eaf73fc:0559f59a:e7cc9877:xxxxx This is effectively the output of "sudo mdadm --detail --scan", so just issue that command and add the output to your mdadm.conf file. (I think there is a wizard on every system which creates this file; it should come up when you run "sudo dpkg-reconfigure mdadm".) After that, another reboot added /dev/md0 and /dev/md1 properly to my system. Use the command "cat /proc/mdstat" to see your RAIDs. The other thing was my fstab entry for the file system which I had layered on top of my logical volume /dev/storagevg/onelv. I reverted to the old format of giving the /dev path in fstab, and it seems to work. (The commented-out line is the newer UUID-based format, which newer Debians/Ubuntus should use.)
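The mdadm.conf fix described above can be sketched as the following command fragment. The paths are the Debian/Ubuntu ones from this post; the update-initramfs step is my addition, based on the assumption that the boot image should also pick up the new config, and everything needs root:

```shell
# Append the current array definitions (ARRAY lines) to mdadm's config,
# so the arrays are assembled under their proper names at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the updated mdadm.conf is used at boot time.
sudo update-initramfs -u

# Sanity check: the kernel's view of the running arrays.
cat /proc/mdstat
```

Note that tee -a appends rather than overwrites, so duplicate ARRAY lines from earlier attempts should be cleaned out of the file by hand.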
What looked like an easy and problem-free process turned out to have a few surprises in store. The trick to adding drives/'physical volumes' to a 'volume group' is vgextend. (In the previous post we used vgcreate to init a volume group.) Find out the name of the volume group you'd like to extend with vgdisplay, then use this to build your vgextend command. After we have added our RAID1 disk md1 to our vg, its storage space is ready to be allocated to a logical volume. In lvm-speak, a 'logical volume' is the disk lvm exposes to the system; this is independent from how you intend to format it. Since we are in effect growing the size of the "disk" lvm exposes to the system - so far on the physical layer by installing disks, and on the lvm-physical layer by adding 'physical volumes' to our 'volume group' - we now need to tell lvm to use the additional storage space to grow the exposed drive. To do this we run the lvextend tool, providing the size by which we wish to extend the volume. Use lvdisplay to get the path of the 'logical volume' we want to grow inside the 'volume group'. We want to extend the logical volume by 100% of the newly added storage space, and here we can learn from the man page of lvextend: "lvextend /dev/vg01/lvol01 /dev/sdk3" tries to extend the size of that logical volume by the amount of free space on physical volume /dev/sdk3. This is equivalent to specifying "-l +100%PVS" on the command line. So: sudo lvextend /dev/storagevg/onelv /dev/md1 Just for the record, the command "sudo lvextend /dev/storagevg/onelv -l +100%PVS" gave me a "segmentation fault" error, thus the above equivalent. The last step in the process is to resize the file system residing on the logical volume /dev/storagevg/onelv so that it uses the additional space. In my case it is an Ext3 file system, thus I am using the resize2fs command. A previous e2fsck might be needed to make sure everything in this fs is okay.
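The grow-and-resize steps above can be sketched as one command fragment. The device and volume names are the ones from this post; the mount point /mnt/storage is a placeholder I made up, and the whole sequence needs root:

```shell
# Grow the logical volume by all free space on the new PV /dev/md1
# (equivalent to "-l +100%PVS", which segfaulted for the author).
sudo lvextend /dev/storagevg/onelv /dev/md1

# Check the file system before resizing; e2fsck wants it unmounted.
sudo umount /dev/storagevg/onelv
sudo e2fsck -f /dev/storagevg/onelv

# Grow the ext3 file system to fill the enlarged logical volume,
# then mount it again (mount point is a placeholder).
sudo resize2fs /dev/storagevg/onelv
sudo mount /dev/storagevg/onelv /mnt/storage
```

With no size argument, resize2fs grows the file system to the full size of the underlying device, which is exactly what we want here.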
As a quick glance at the lvm scheme diagram above explains, the volume group is the virtual "disk" lvm exposes to the system, while the actual physical drives combined by lvm are the underlying storage hardware. With the physical volume created, we now need to add this new pv to the volume group (vg) using the vgextend command.
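A short command fragment for this step, assuming the volume group name storagevg from this post (check yours with vgdisplay) and root privileges:

```shell
# Find the name of the existing volume group (here: storagevg).
sudo vgdisplay

# Add the new physical volume /dev/md1 to that volume group.
sudo vgextend storagevg /dev/md1

# Verify: the vg should now report the additional free space
# under "Free  PE / Size".
sudo vgdisplay storagevg
```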
So the first step after installing our new drives is to marry them together in a Linux md drive, a RAID array. Use any tool you like to find out under which device names your system registered the physical drives, then execute the mdadm command with these values to create the RAID "disk", for example: sudo mdadm --create /dev/md1 -l 1 -n 2 /dev/sdb /dev/sdc Here --create creates a new array, after it follows the desired device name for the RAID "disk", -l specifies the RAID level to use (RAID 1 here), and -n tells mdadm how many drives to add, which follow after that. If you have, for example, the Red Hat logical volume manager GUI running, you should see the new RAID disk in the listing immediately after issuing the command. Next, we need to add the new "disk" md1 to our set of physical volumes, although we are dealing with a "not so physical" volume here. This is called 'converting a disk to a physical volume' in lvm terms and can be achieved with pvcreate: we activate our virtual/RAID1 "disk" as a physical volume.
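The array creation and pvcreate steps as a command fragment, using the device names from this post (/dev/sdb and /dev/sdc stand in for whatever names your system assigned the new drives); run as root:

```shell
# Bundle the two new drives into a RAID1 array exposed as /dev/md1.
sudo mdadm --create /dev/md1 -l 1 -n 2 /dev/sdb /dev/sdc

# Watch the initial sync of the mirror.
cat /proc/mdstat

# Mark the finished array as an lvm physical volume
# and verify that lvm now knows about it.
sudo pvcreate /dev/md1
sudo pvdisplay /dev/md1
```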
What makes the process described here unique is that the "disk" we'd like to add to our volume group is itself an underlying set of two disks bundled as RAID 1. The basic steps of adding a plain disk to a volume group are explained, for example, here.
This is a follow up post to Creating a software RAID 1 as basis for an LVM drive.