Linux LVM (Logical Volume Manager) transforms static partitions into a flexible, portable, and recoverable storage layer. Beyond simple resizing, LVM enables migrations, RAID mirroring, disaster recovery, and SAN integrations (like NetApp).
This post takes you from fundamentals to deep operational concepts — including vgexport, vgimport, vgchange, vgrename, metadata recovery, RAID, and safe PV resizing practices.
🧩 1. LVM Building Blocks
Component | Purpose |
---|---|
PV (Physical Volume) | Disk or partition initialized for LVM |
VG (Volume Group) | Pool combining PVs into one logical space |
LV (Logical Volume) | Virtual partition carved from VG |
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -n lv_app -L 50G vg_data
mkfs.ext4 /dev/vg_data/lv_app
mount /dev/vg_data/lv_app /mnt/app
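Once the stack is built, it helps to verify each layer. A minimal check, assuming the vg_data/lv_app layout created above:
pvs                                      # Physical volumes and the VG that owns them
vgs vg_data                              # VG size and free extents
lvs -o lv_name,lv_size,devices vg_data   # LVs and the PVs they sit on
lsblk /dev/sdb                           # Kernel view of the resulting device stack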
⚙️ 2. Extending, Resizing & Removing Storage
🧱 Two Ways to Grow Storage — and Why One is Riskier
You can expand LVM capacity by either resizing a disk (existing PV) or adding a new disk (new PV) to your VG.
Scenario 1: Extending an Existing PV (Riskier)
If your underlying LUN or disk was expanded (say from 100 GB → 200 GB):
- Rescan the device:
echo 1 > /sys/class/block/sdb/device/rescan
fdisk -l /dev/sdb
If the LVM PV you want to grow is a partition rather than a whole disk, you may also need to enlarge the partition with parted's resizepart first.
Start parted and print the partition table:
parted /dev/sdb
(parted) print
This shows all partitions and their numbers.
Resize the partition:
(parted) resizepart 2 100%
Replace 2 with your partition number (e.g., /dev/sdb2). 100% tells parted to extend the partition to use all available free space at the end of the disk. This operation does not delete or re-create the partition as long as the unallocated space sits directly after it.
Exit parted:
(parted) quit
- Resize the PV:
pvresize /dev/sdb
- Validate the VG:
pvs
vgdisplay vg_data
Now your VG reflects additional Free PE space.
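With the VG now showing free extents, you would typically grow the LV and its filesystem as well. A minimal sketch, assuming the ext4-backed lv_app from section 1 should take all of the new space:
lvextend -l +100%FREE /dev/vg_data/lv_app   # Give the LV every free extent in the VG
resize2fs /dev/vg_data/lv_app               # Grow ext4 online to match the new LV size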
⚠️ Risks:
- Rescan failures or cached geometry can corrupt LVM metadata.
- Multipath or clustered systems may see inconsistent disk layouts.
- If expansion fails mid-process, recovery can be tricky.
✅ Before resizing, take an LVM metadata backup:
vgcfgbackup vg_data
If corruption happens:
vgcfgrestore vg_data
(Only restores structure, not the actual data.)
Scenario 2: Adding a New Disk (Safer)
Instead of resizing an existing PV, add a new disk:
pvcreate /dev/sdc
vgextend vg_data /dev/sdc
Then expand an LV:
lvextend -L +50G /dev/vg_data/lv_app
resize2fs /dev/vg_data/lv_app
✅ Recommended for SAN/NetApp/Production systems
✅ No dependency on device rescans or geometry changes
✅ Easy rollback
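Newer LVM builds can also grow the filesystem in the same step. A hedged alternative to the two commands above, assuming your lvextend supports -r/--resizefs:
lvextend -r -L +50G /dev/vg_data/lv_app   # Extends the LV, then calls fsadm to grow the filesystem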
Method | Description | Risk | Use Case |
---|---|---|---|
pvresize | Reclaims resized disk space | ⚠️ High | Virtual/Dev Environments |
vgextend | Adds new PV to VG | ✅ Low | SAN/Physical Servers |
🟢 pvmove — Safely Move Data Between Disks
pvmove allows you to migrate data from one physical volume (PV) to another within the same volume group (VG). It’s essential when replacing disks or redistributing space.
Example:
vgextend vg_data /dev/sdd1 # Add a new PV to VG
pvmove /dev/sdb1 /dev/sdd1 # Move data off old PV
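pvmove can also be scoped to a single logical volume rather than the whole PV, and can report progress while it runs. A small sketch, assuming lv_app is the volume you want relocated:
pvmove -n lv_app /dev/sdb1 /dev/sdd1   # Move only lv_app's extents from sdb1 to sdd1
pvmove -i 10 /dev/sdb1                 # Move everything else, reporting progress every 10 seconds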
⚠️ Failure Scenarios & Safety
Not enough free space in the VG
pvmove requires free extents on another PV or a newly added disk.
If there is insufficient space, the operation fails cleanly with:
No extents available for allocation
Interrupted move (system crash or power loss)
Temporary metadata tracks progress.
You can safely resume or abort:
pvmove --continue /dev/sdb1
pvmove --abort /dev/sdb1
Incorrect target VG
pvmove works within a single VG only.
Moving across VGs requires vgextend + vgreduce, which is more complex.
Key Points:
- Non-destructive: data is copied and verified before the metadata is updated.
- Requires enough free space on the target PV or in the VG.
- Can pause, resume, or abort using:
pvmove --abort /dev/sdb1
pvmove --continue /dev/sdb1
- After migration, old PVs can be safely removed with vgreduce.
vgreduce
Purpose: Remove a PV from a VG.
Behavior:
Non-destructive if the PV is empty (no logical volumes or extents allocated). It just updates the VG metadata to forget the PV.
Destructive if the PV still contains data — LVM will refuse to remove it, but if you force it (with --force), you can destroy data.
Example (safe usage):
vgreduce vg_data /dev/sdb1
Only works if /dev/sdb1 has no allocated extents (moved away via pvmove).
Example (unsafe usage):
vgreduce --force vg_data /dev/sdb1
Forces removal even if data exists — can destroy all data on that PV.
✅ Rule of Thumb:
Always check PV usage first:
pvs -o+pv_used
lvs -a -o+devices
Only remove PVs that are completely free.
If data exists, first use pvmove to migrate it, then run vgreduce.
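Putting it together, a typical decommissioning sequence might look like the following sketch (device names are illustrative):
pvs -o+pv_used /dev/sdb1    # Confirm how much data still lives on the PV
pvmove /dev/sdb1            # Relocate its extents to free space elsewhere in the VG
vgreduce vg_data /dev/sdb1  # Drop the now-empty PV from the VG
pvremove /dev/sdb1          # Wipe the LVM label so the disk can be reused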
🟢 lvresize — Expand or Reduce Logical Volumes
lvresize changes the size of a logical volume (LV). It works both ways: increasing or decreasing the LV size.
Increase LV Size Example:
lvresize -L +20G /dev/vg_data/lv_home
Then resize the filesystem (XFS example):
xfs_growfs /dev/vg_data/lv_home
Reduce LV Size Example (Caution!):
resize2fs /dev/vg_data/lv_home 50G   # Shrink the ext4 filesystem first (offline) to the target size
lvresize -L 50G /dev/vg_data/lv_home # Then reduce the LV to a total of 50G
Important Notes When Reducing:
Always shrink the filesystem first; reducing the LV below the filesystem size will corrupt data.
ext4 can only be shrunk offline: unmount the filesystem and run e2fsck -f before resize2fs.
XFS cannot be shrunk; it can only be grown.
Ensure the data fits within the reduced size; check usage with df or lvs.
Consider a backup before reducing — it’s destructive if done incorrectly.
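If your lvresize build supports -r/--resizefs, it can drive the filesystem shrink for you via fsadm, which removes the risk of running the two steps in the wrong order. A sketch, assuming an ext4 lv_home mounted at a hypothetical /mnt/home:
umount /mnt/home                         # ext4 must be offline to shrink (mount point is illustrative)
lvresize -r -L 50G /dev/vg_data/lv_home  # fsadm checks and shrinks the filesystem, then reduces the LV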
🚀 3. Advanced LVM Operations
🔹 vgexport & vgimport
Used to migrate or clone VGs between systems without copying data.
vgexport
vgexport vg_data
- Marks VG as “exported” (hidden from local LVM)
- Does not delete data
- Safe before unmapping or SAN snapshot
vgimport
vgimport vg_data
vgchange -ay vg_data
- Reads PV headers
- Re-registers VG
- Clears export flag
Check usage:
pvs
PV VG Fmt Attr PSize PFree
/dev/sdb vg_data lvm2 x-- 100.00g 0
/dev/sdc vg_data lvm2 a-- 200.00g 50.00g
(x → exported, a → active)
🔹 vgrename
Used to rename a VG (especially useful after importing a clone).
vgrename vg_data vg_data_clone
vgscan --cache
lvs
Avoids duplicate VG name conflicts during SAN clone imports.
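If the clone's VG name collides with one already active on the host, vgrename also accepts the VG UUID, so you can rename the duplicate without touching the original. A sketch with a placeholder UUID:
vgs -o vg_name,vg_uuid                                        # Find the UUID of the duplicate VG
vgrename AbCdEf-1234-5678-9abc-defg-hijk-lmnopq vg_data_clone # Rename by UUID (placeholder value shown)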
🔹 vgchange
Activate or deactivate a VG:
vgchange -ay vg_data # Activate
vgchange -an vg_data # Deactivate
Commonly used after imports or before unmounting for maintenance.
🧱 4. LVM Metadata Backup & Restore
LVM automatically stores metadata backups under /etc/lvm/backup/.
Manual backup:
vgcfgbackup vg_data
Restore only structure, not data:
vgcfgrestore vg_data
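Both commands can also work against an explicit file, and vgcfgrestore can list the automatic archives LVM keeps under /etc/lvm/archive/. A sketch, using an illustrative backup path:
vgcfgbackup -f /root/vg_data.cfg vg_data    # Write the metadata backup to a file of your choice
vgcfgrestore --list vg_data                 # Show the automatic archives available for this VG
vgcfgrestore -f /root/vg_data.cfg vg_data   # Restore the layout from the chosen file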
🧩 Use Cases
- VG corruption
- Accidental LV deletion
- Disk failure recovery
Remember:
🧠 Metadata backups restore the layout, not user data.
You’ll still need file-level recovery for lost contents.
💾 5. LVM RAID & Mirroring (Modern Way)
LVM supports software RAID natively with --type raidX.
Avoid the old -m mirror syntax (the legacy mirror segment type) except on legacy systems.
RAID Type | Command Example | Description |
---|---|---|
RAID 0 | lvcreate -L 200G --type raid0 -i2 -n lv_raid0 vg_data /dev/sdb /dev/sdc | Striping only |
RAID 1 | lvcreate -L 100G --type raid1 -m1 -n lv_raid1 vg_data /dev/sdb /dev/sdc | Mirroring |
RAID 5 | lvcreate -L 500G --type raid5 -i2 -n lv_raid5 vg_data /dev/sdb /dev/sdc /dev/sdd | Striping + parity |
RAID 6 | lvcreate -L 600G --type raid6 -i2 -n lv_raid6 vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde | Double parity |
RAID 10 | lvcreate -L 400G --type raid10 -i2 -m1 -n lv_raid10 vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde | Mirrored stripes |
(-i counts data stripes only, so raid5 needs stripes+1 devices and raid6 needs stripes+2.)
View RAID info:
lvs -a -o +devices,raid_sync_action
Convert an existing LV:
lvconvert --type raid1 -m1 vg_data/lv_app /dev/sdc
Repair or replace a failed disk:
pvmove /dev/sdb /dev/sdf            # Evacuate extents from the failing PV
vgreduce vg_data /dev/sdb           # Remove the old PV from the VG
lvconvert --repair vg_data/lv_raid1 # Rebuild the degraded RAID LV
🧠 RAID Parity & Mirroring Notes:
- --type raid1 → Mirrors data across devices
- --type raid5/6 → Adds parity redundancy (raid10 mirrors striped data, with no parity)
- Modern kernels auto-sync during rebuilds
- Always prefer hardware RAID (NetApp, etc.) in enterprise setups
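RAID LVs can also be scrubbed periodically to catch silent mismatches. A minimal sketch against the lv_raid1 example above:
lvchange --syncaction check vg_data/lv_raid1                   # Kick off a background scrub
lvs -o+raid_sync_action,raid_mismatch_count vg_data/lv_raid1   # Watch progress and the mismatch count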
🧭 6. Using LVM with NetApp Snapshots
If your backend storage is NetApp, and you share a snapshot clone as a new LUN, you can:
- Export the VG from the source:
vgexport vg_data
- Map the snapshot clone to a new host.
- On the new host:
vgimport vg_data
vgrename vg_data vg_data_clone
vgchange -ay vg_data_clone
✅ Safely mount and test the clone.
No data copy. No downtime.
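On the target host, the clone behaves like any other VG, so a quick mount-and-check is usually enough to validate it. A sketch, assuming the clone carries the lv_app volume and /mnt/clone is just an illustrative mount point:
mkdir -p /mnt/clone
mount -o ro /dev/vg_data_clone/lv_app /mnt/clone   # Mount read-only first for a safe inspection
df -h /mnt/clone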
🧮 7. Migration Without Rsync
When moving volumes between servers:
vgexport vg_data
# Move or reattach disks/SAN
vgimport vg_data
vgchange -ay vg_data
Optionally rename VG:
vgrename vg_data vg_clone
✅ No rsync required — data stays on the same blocks.
✅ Ideal for SAN or virtualized migrations.
🧩 8. Quick Reference
Task | Command | Purpose |
---|---|---|
Extend VG | vgextend vg_data /dev/sdc | Safely add new disk |
Extend PV | pvresize /dev/sdb | Use resized disk space |
Backup Metadata | vgcfgbackup vg_data | Save structure info |
Restore Metadata | vgcfgrestore vg_data | Restore LVM layout |
Export/Import | vgexport / vgimport | Migrate VG across systems |
Rename VG | vgrename vg_data vg_clone | Avoid name conflicts |
Create RAID | lvcreate --type raid1/5/6/10 ... | Software RAID |
Repair RAID | lvconvert --repair | Fix degraded array |
🧠 Final Thoughts
- Always prefer adding new PVs over resizing existing ones.
- pvresize is convenient but riskier for production SANs.
- vgexport / vgimport make migrations and SAN snapshot reuse instant.
- LVM metadata backups restore structure only, not content.
- Modern LVM RAID offers software redundancy, but hardware RAID or NetApp mirrors are better for critical workloads.
LVM remains one of the most powerful abstractions in Linux storage, bridging raw disks, SANs, and enterprise reliability into one logical framework.