Exploring LVM Snapshots Part 1

Author: Jeff Propes
Date: Aug 18, 2016
Copyright: © 2016 Jeff Propes. All rights reserved.

Today I want to explore LVM snapshots and how to deploy and interact with them. I had a lot of questions and some confusion surrounding snapshots for a long time. But through this process, I learned that they're very easy to make, use, and even abuse.

This experiment got pretty interesting once I started wondering about the dangerous side of snapshots. In part 2, I explore what happens if you write more data to a snapshot than you provisioned for it, if you are interested.

Before We Begin

Here's what I'll be doing in this document:

  1. Create a new logical volume (hereafter LV) and fill it with some sample data.
  2. Create a snapshot of the new LV.
  3. Simulate continued writes to the LV while confirming the snapshot remains static.
  4. Take a backup of the static snapshot data.

I'll use an existing server of mine running CentOS 7 and LVM2. This server happens to host my Spacewalk service, and it is named quite uncreatively: spacewalk.universe. It was selected solely because it has some uncommitted disk space in its volume group.
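
If you want to check whether your own volume group has enough uncommitted space for a similar run, vgs will show it. A minimal check, assuming the VG is named centos as it is on this box; substitute your own:

# How much unallocated space is left in the volume group before we carve
# out the 5GB test LV and, later, its 2GB snapshot.
vgs centos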

Target Audience

This document is targeted at the intermediate-level Linux sysadmin. You should already be familiar with what LVM is and roughly what it can do. Numerous and sundry docs already exist which can fill in any knowledge gaps you have. Don't worry, I'll still be here when you return.

Nor will I be covering how to operate any of the commands listed here. There are quality man pages for everything, and --help usage is built into all of the LVM binaries.

With that being said, let's get started!

Step 1: Create a New Logical Volume

I'm making a 5GB logical volume called snaptest to test against. It will be mounted to /snaptest. Hey ... don't judge me for not using /mnt.

[root@spacewalk ~]# lvcreate -L5G -n snaptest centos
  Logical volume "snaptest" created.
[root@spacewalk ~]# mkfs.ext4 /dev/mapper/centos-snaptest
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@spacewalk ~]# mkdir -p /snaptest
[root@spacewalk ~]# mount /dev/mapper/centos-snaptest /snaptest

Easy enough.
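
If you want to double-check the mount before moving on, findmnt and lsblk make a quick sanity check (not something I bothered with in the transcript above):

# Confirm the new filesystem is mounted where we expect it.
findmnt /snaptest
lsblk /dev/mapper/centos-snaptest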

Step 2: Create Sample Data

It doesn't really matter what sample data I use, but to prevent unrelated topics like deduplication and caching from intruding into this experiment, I'll use random data sourced from /dev/urandom. For fun and realism, I'll vary the size of the files so they're between 10 and 30MB. I'm also going to let fate decide how many files we're working with today: anywhere up to 29 of them.

[root@spacewalk ~]# FILECOUNT="$(($RANDOM % 30))"
[root@spacewalk ~]# for (( i=1; i<=FILECOUNT; i++ )); do dd if=/dev/urandom of=/snaptest/randfile${i} bs=1M count=$(($RANDOM % 20 + 10)) >/dev/null 2>&1; echo -n "${i}. "; done; echo "done"
1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. done

I guess we ended up with 27 files. Here's what the LV looks like now:

[root@spacewalk ~]# df -h |awk '{if (NR == 1) {print $0}} /snaptest/ {print $0}'
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/centos-snaptest   4.8G  533M  4.1G  12% /snaptest
[root@spacewalk ~]# ls -l /snaptest
total 525312
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile1
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile10
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile11
-rw-r--r--. 1 root root 12582912 Aug 18 14:52 randfile12
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile13
-rw-r--r--. 1 root root 27262976 Aug 18 14:52 randfile14
-rw-r--r--. 1 root root 18874368 Aug 18 14:52 randfile15
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile16
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile17
-rw-r--r--. 1 root root 30408704 Aug 18 14:52 randfile18
-rw-r--r--. 1 root root 23068672 Aug 18 14:52 randfile19
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile2
-rw-r--r--. 1 root root 24117248 Aug 18 14:53 randfile20
-rw-r--r--. 1 root root 17825792 Aug 18 14:53 randfile21
-rw-r--r--. 1 root root 14680064 Aug 18 14:53 randfile22
-rw-r--r--. 1 root root 12582912 Aug 18 14:53 randfile23
-rw-r--r--. 1 root root 16777216 Aug 18 14:53 randfile24
-rw-r--r--. 1 root root 28311552 Aug 18 14:53 randfile25
-rw-r--r--. 1 root root 22020096 Aug 18 14:53 randfile26
-rw-r--r--. 1 root root 30408704 Aug 18 14:53 randfile27
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile3
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile4
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile5
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile6
-rw-r--r--. 1 root root 14680064 Aug 18 14:52 randfile7
-rw-r--r--. 1 root root 19922944 Aug 18 14:52 randfile8
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile9
[root@spacewalk ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   49.51g 40.00m
  /dev/sdb   centos lvm2 a--  221.00g 17.09g

Great! We have sample data now.
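
If you'd like a stronger comparison later than eyeballing sizes and timestamps, this is a good moment to record checksums of the sample data. An optional extra I didn't do in the original run:

# Record checksums of the sample data so the snapshot's contents can be
# compared byte-for-byte later, not just by size and mtime.
cd /snaptest && sha256sum randfile* > /root/snaptest.sha256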

Step 3: Create Snapshot

I'll stick with my previous pattern of creativity and name my snapshot snaptest_ss. I'll provision it with 2GB of space to absorb changes made to /snaptest while this snapshot is active. More on that later.

[root@spacewalk ~]# lvcreate -L2G -s -n snaptest_ss /dev/mapper/centos-snaptest
  Logical volume "snaptest_ss" created.
[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-a-s---   2.00g      snaptest 0.00
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g
[root@spacewalk ~]# lvdisplay /dev/mapper/centos-snaptest_ss
  --- Logical volume ---
  LV Path                /dev/centos/snaptest_ss
  LV Name                snaptest_ss
  VG Name                centos
  LV UUID                zb6kFd-gL7G-s6eU-tENU-4n6l-M08U-JnAe3Q
  LV Write Access        read/write
  LV Creation host, time spacewalk.universe, 2016-08-18 15:29:29 -0500
  LV snapshot status     active destination for snaptest
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Current LE             1280
  COW-table size         2.00 GiB
  COW-table LE           512
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

[root@spacewalk ~]# lvdisplay /dev/mapper/centos-snaptest
  --- Logical volume ---
  LV Path                /dev/centos/snaptest
  LV Name                snaptest
  VG Name                centos
  LV UUID                mxl7fi-WjPb-dTcs-J3TS-orac-RoJD-tyWLKt
  LV Write Access        read/write
  LV Creation host, time spacewalk.universe, 2016-08-18 14:45:28 -0500
  LV snapshot status     source of
                         snaptest_ss [active]
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Looks good so far. From the available data, it looks like a snapshot is almost identical to a regular logical volume, only it has a parent.
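
If you prefer something terser than lvdisplay, lvs can report just the snapshot-related fields (the field names come from lvs -o help):

# A compact view of the snapshot: its origin, size, and how full its CoW area is.
lvs -o lv_name,origin,lv_size,data_percent centos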

Step 4: Verification

Now we mount the snapshot as if it were a real volume. We don't have to format it because it's a "live clone" of its parent volume, which is already formatted. Unsurprisingly, I'm mounting it to /snaptest_ss.

Hey, stop that. I can feel your judgements from afar.

[root@spacewalk ~]# mkdir /snaptest_ss
[root@spacewalk ~]# mount /dev/mapper/centos-snaptest_ss /snaptest_ss

I should be able to write new things to /snaptest and the mounted snapshot on /snaptest_ss should be unaffected.

[root@spacewalk ~]# dd if=/dev/urandom of=/snaptest/post1 bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 4.3403 s, 12.1 MB/s
[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-aos---   2.00g      snaptest 2.45
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g
[root@spacewalk ~]# ls -l /snaptest/post1
-rw-r--r--. 1 root root 52428800 Aug 18 15:32 /snaptest/post1
[root@spacewalk ~]# ls -l /snaptest_ss/post1
ls: cannot access /snaptest_ss/post1: No such file or directory

Sweet! It worked. For posterity, here's the full contents of the snapshot volume so you can compare sizes, times, etc., and verify they're identical (a checksum comparison follows the listing).

[root@spacewalk ~]# ls -l /snaptest_ss/
total 525312
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile1
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile10
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile11
-rw-r--r--. 1 root root 12582912 Aug 18 14:52 randfile12
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile13
-rw-r--r--. 1 root root 27262976 Aug 18 14:52 randfile14
-rw-r--r--. 1 root root 18874368 Aug 18 14:52 randfile15
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile16
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile17
-rw-r--r--. 1 root root 30408704 Aug 18 14:52 randfile18
-rw-r--r--. 1 root root 23068672 Aug 18 14:52 randfile19
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile2
-rw-r--r--. 1 root root 24117248 Aug 18 14:53 randfile20
-rw-r--r--. 1 root root 17825792 Aug 18 14:53 randfile21
-rw-r--r--. 1 root root 14680064 Aug 18 14:53 randfile22
-rw-r--r--. 1 root root 12582912 Aug 18 14:53 randfile23
-rw-r--r--. 1 root root 16777216 Aug 18 14:53 randfile24
-rw-r--r--. 1 root root 28311552 Aug 18 14:53 randfile25
-rw-r--r--. 1 root root 22020096 Aug 18 14:53 randfile26
-rw-r--r--. 1 root root 30408704 Aug 18 14:53 randfile27
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile3
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile4
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile5
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile6
-rw-r--r--. 1 root root 14680064 Aug 18 14:52 randfile7
-rw-r--r--. 1 root root 19922944 Aug 18 14:52 randfile8
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile9
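
And if you recorded checksums back in Step 2, you can go one step further than sizes and timestamps. This assumes the optional /root/snaptest.sha256 file from that step exists:

# Verify the snapshot's copies byte-for-byte against the checksums
# recorded from /snaptest before the snapshot was taken.
cd /snaptest_ss && sha256sum -c /root/snaptest.sha256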

Step 5: Simulated Workload

So now we are getting to the heart of the matter. Imagine that I/O activity is continuing on the original LV. The snapshot, however, remains static, and I can do what I wish to it without stopping the I/O on the LV.

I am going to simulate continual I/O by reading and writing more files to /snaptest. In my little script here, there's a 2 in 3 chance that the file will be deleted after being written, simulating a database-like operation that's growing with time via inserts and deletes. A database with really big inserts. Huge writes. Huuuuugggeee ... tracts of land.

[root@spacewalk ~]# COUNT=0; while :; do dd if=/dev/urandom of=/snaptest/loadtest${COUNT} bs=1M count=$(($RANDOM % 20 + 10)) >/dev/null 2>&1; [ $(($RANDOM % 3)) -eq 0 ] || rm -f /snaptest/loadtest${COUNT}; COUNT=$((COUNT + 1)); done &
[1] 30970

With that running in the background, we can watch the volume usage grow over time (the second column of numbers, Used). These four listings were taken about 5 seconds apart.

[root@spacewalk ~]# df -h |grep snaptest
/dev/mapper/centos-snaptest     4.8G  601M  4.0G  13% /snaptest
/dev/mapper/centos-snaptest_ss  4.8G  533M  4.1G  12% /snaptest_ss
...
[root@spacewalk ~]# df -h |grep snaptest
/dev/mapper/centos-snaptest     4.8G  643M  4.0G  14% /snaptest
/dev/mapper/centos-snaptest_ss  4.8G  533M  4.1G  12% /snaptest_ss
...
[root@spacewalk ~]# df -h |grep snaptest
/dev/mapper/centos-snaptest     4.8G  692M  3.9G  15% /snaptest
/dev/mapper/centos-snaptest_ss  4.8G  533M  4.1G  12% /snaptest_ss
...
[root@spacewalk ~]# df -h |grep snaptest
/dev/mapper/centos-snaptest     4.8G  731M  3.9G  16% /snaptest
/dev/mapper/centos-snaptest_ss  4.8G  533M  4.1G  12% /snaptest_ss

And you can also see that lvs is reporting how full the snapshot is:

[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-aos---   2.00g      snaptest 9.62
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g
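
Rather than re-running df and lvs by hand every few seconds, watch can poll them both. Purely a convenience; the listings above were taken manually:

# Poll filesystem usage and snapshot CoW usage every 5 seconds.
watch -n 5 'df -h | grep snaptest; lvs centos/snaptest centos/snaptest_ss'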

Explanation

While the snapshot exists, every write to the parent LV first triggers a copy of the original data into a Copy-On-Write (hereafter CoW) area belonging to the snapshot; only then does the new data land on the origin. Recall we set the size of the snapshot to 2GB. That's the maximum amount of changed origin data the CoW area can absorb before the snapshot gets "full" and bad things happen. And that 2GB isn't just file contents: metadata changes such as inode updates and access times count against it too.

What's really interesting is what happens when we mount the snapshot: blocks that have changed since the snapshot was taken are read from the CoW area, while unchanged blocks are read straight from the origin, so we keep seeing the original data. The new data actually lives on the parent LV; it's the old data that gets stashed in the snapshot. It's backwards from how I thought it would be.
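
A quick sanity check on those numbers: after the 50MB post1 write earlier, lvs reported the snapshot at 2.45%, and 2.45% of the 2GB CoW table works out to roughly the 50MB of origin data that had to be copied aside.

# 2.45% of the 2 GiB CoW table, expressed in MiB: roughly the 50 MiB
# worth of origin blocks displaced by writing post1.
echo "0.0245 * 2 * 1024" | bc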

Step 6: Backup the snapshot

With the test load I/O ongoing, let's take a backup of the contents of /snaptest using the snapshot.

[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-aos---   2.00g      snaptest 73.49
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g

Ooh ... maybe I waited too long and let that get a little close to being full. Better hurry.

[root@spacewalk ~]# tar czf ~/snaptest_backup_from_snapshot.tar.gz /snaptest_ss/*
tar: Removing leading `/' from member names
[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-aos---   2.00g      snaptest 76.80
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g

I forgot to wrap the tar command with time, but it took about 90-100 seconds total. Longer than I thought, but then this is pseudorandom data that's largely incompressible.
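
For the record, timing it would just have meant wrapping the same command in time:

# What I meant to run: the same backup of the snapshot's contents, but timed.
time tar czf ~/snaptest_backup_from_snapshot.tar.gz /snaptest_ss/*

Oh, right, we should probably stop the test load I/O now.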

[root@spacewalk ~]# fg 1
while :; do
    dd if=/dev/urandom of=/snaptest/loadtest${COUNT} bs=1M count=$(($RANDOM % 20 + 10)) > /dev/null 2>&1; [ $(($RANDOM % 3)) -eq 0 ] || rm -i -f /snaptest/loadtest${COUNT}; COUNT=$((COUNT + 1));
done
^C

That's better. Now, how'd we do?

Step 7: Verification

There should be only files named randfileXX in the tarball if the snapshot worked.

[root@spacewalk ~]# tar tvf ~/snaptest_backup_from_snapshot.tar.gz
-rw-r--r-- root/root 521164800 2016-08-18 15:36 snaptest_ss/randfile1
-rw-r--r-- root/root  28311552 2016-08-18 14:52 snaptest_ss/randfile10
-rw-r--r-- root/root  10485760 2016-08-18 14:52 snaptest_ss/randfile11
-rw-r--r-- root/root  12582912 2016-08-18 14:52 snaptest_ss/randfile12
-rw-r--r-- root/root  13631488 2016-08-18 14:52 snaptest_ss/randfile13
-rw-r--r-- root/root  27262976 2016-08-18 14:52 snaptest_ss/randfile14
-rw-r--r-- root/root  18874368 2016-08-18 14:52 snaptest_ss/randfile15
-rw-r--r-- root/root  28311552 2016-08-18 14:52 snaptest_ss/randfile16
-rw-r--r-- root/root  13631488 2016-08-18 14:52 snaptest_ss/randfile17
-rw-r--r-- root/root  30408704 2016-08-18 14:52 snaptest_ss/randfile18
-rw-r--r-- root/root  23068672 2016-08-18 14:52 snaptest_ss/randfile19
-rw-r--r-- root/root  29360128 2016-08-18 14:52 snaptest_ss/randfile2
-rw-r--r-- root/root  24117248 2016-08-18 14:53 snaptest_ss/randfile20
-rw-r--r-- root/root  17825792 2016-08-18 14:53 snaptest_ss/randfile21
-rw-r--r-- root/root  14680064 2016-08-18 14:53 snaptest_ss/randfile22
-rw-r--r-- root/root  12582912 2016-08-18 14:53 snaptest_ss/randfile23
-rw-r--r-- root/root  16777216 2016-08-18 14:53 snaptest_ss/randfile24
-rw-r--r-- root/root  28311552 2016-08-18 14:53 snaptest_ss/randfile25
-rw-r--r-- root/root  22020096 2016-08-18 14:53 snaptest_ss/randfile26
-rw-r--r-- root/root  30408704 2016-08-18 14:53 snaptest_ss/randfile27
-rw-r--r-- root/root  13631488 2016-08-18 14:52 snaptest_ss/randfile3
-rw-r--r-- root/root  10485760 2016-08-18 14:52 snaptest_ss/randfile4
-rw-r--r-- root/root  13631488 2016-08-18 14:52 snaptest_ss/randfile5
-rw-r--r-- root/root  29360128 2016-08-18 14:52 snaptest_ss/randfile6
-rw-r--r-- root/root  14680064 2016-08-18 14:52 snaptest_ss/randfile7
-rw-r--r-- root/root  19922944 2016-08-18 14:52 snaptest_ss/randfile8
-rw-r--r-- root/root  16777216 2016-08-18 14:52 snaptest_ss/randfile9

Success!! And in /snaptest, there are also the surviving loadtest files and post1 ...

[root@spacewalk ~]# ls -l /snaptest/
total 1502208
-rw-r--r--. 1 root root 28311552 Aug 18 15:33 loadtest1
-rw-r--r--. 1 root root 11534336 Aug 18 15:34 loadtest10
-rw-r--r--. 1 root root 15728640 Aug 18 15:36 loadtest102
-rw-r--r--. 1 root root 27262976 Aug 18 15:36 loadtest104
-rw-r--r--. 1 root root 20971520 Aug 18 15:37 loadtest109
-rw-r--r--. 1 root root 10485760 Aug 18 15:34 loadtest11
-rw-r--r--. 1 root root 20971520 Aug 18 15:37 loadtest111
-rw-r--r--. 1 root root 12582912 Aug 18 15:37 loadtest113
-rw-r--r--. 1 root root 22020096 Aug 18 15:37 loadtest119
-rw-r--r--. 1 root root 27262976 Aug 18 15:34 loadtest12
-rw-r--r--. 1 root root 25165824 Aug 18 15:37 loadtest121
-rw-r--r--. 1 root root 19922944 Aug 18 15:37 loadtest127
-rw-r--r--. 1 root root 12582912 Aug 18 15:37 loadtest129
-rw-r--r--. 1 root root 10485760 Aug 18 15:34 loadtest13
-rw-r--r--. 1 root root  2097152 Aug 18 15:37 loadtest132
-rw-r--r--. 1 root root 19922944 Aug 18 15:34 loadtest20
-rw-r--r--. 1 root root 26214400 Aug 18 15:34 loadtest24
-rw-r--r--. 1 root root 15728640 Aug 18 15:34 loadtest26
-rw-r--r--. 1 root root 29360128 Aug 18 15:34 loadtest30
-rw-r--r--. 1 root root 24117248 Aug 18 15:35 loadtest43
-rw-r--r--. 1 root root 30408704 Aug 18 15:35 loadtest44
-rw-r--r--. 1 root root 27262976 Aug 18 15:35 loadtest45
-rw-r--r--. 1 root root 27262976 Aug 18 15:35 loadtest48
-rw-r--r--. 1 root root 11534336 Aug 18 15:34 loadtest5
-rw-r--r--. 1 root root 22020096 Aug 18 15:35 loadtest50
-rw-r--r--. 1 root root 20971520 Aug 18 15:35 loadtest51
-rw-r--r--. 1 root root 28311552 Aug 18 15:35 loadtest52
-rw-r--r--. 1 root root 25165824 Aug 18 15:35 loadtest53
-rw-r--r--. 1 root root 22020096 Aug 18 15:35 loadtest56
-rw-r--r--. 1 root root 10485760 Aug 18 15:35 loadtest58
-rw-r--r--. 1 root root 23068672 Aug 18 15:34 loadtest6
-rw-r--r--. 1 root root 24117248 Aug 18 15:35 loadtest61
-rw-r--r--. 1 root root 17825792 Aug 18 15:35 loadtest62
-rw-r--r--. 1 root root 14680064 Aug 18 15:35 loadtest66
-rw-r--r--. 1 root root 19922944 Aug 18 15:35 loadtest68
-rw-r--r--. 1 root root 16777216 Aug 18 15:35 loadtest69
-rw-r--r--. 1 root root 30408704 Aug 18 15:34 loadtest7
-rw-r--r--. 1 root root 24117248 Aug 18 15:35 loadtest70
-rw-r--r--. 1 root root 12582912 Aug 18 15:35 loadtest71
-rw-r--r--. 1 root root 17825792 Aug 18 15:36 loadtest75
-rw-r--r--. 1 root root 29360128 Aug 18 15:36 loadtest78
-rw-r--r--. 1 root root 15728640 Aug 18 15:36 loadtest82
-rw-r--r--. 1 root root 17825792 Aug 18 15:36 loadtest85
-rw-r--r--. 1 root root 24117248 Aug 18 15:36 loadtest88
-rw-r--r--. 1 root root 22020096 Aug 18 15:36 loadtest92
-rw-r--r--. 1 root root 13631488 Aug 18 15:36 loadtest98
-rw-r--r--. 1 root root 15728640 Aug 18 15:36 loadtest99
-rw-r--r--. 1 root root 52428800 Aug 18 15:32 post1
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile1
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile10
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile11
-rw-r--r--. 1 root root 12582912 Aug 18 14:52 randfile12
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile13
-rw-r--r--. 1 root root 27262976 Aug 18 14:52 randfile14
-rw-r--r--. 1 root root 18874368 Aug 18 14:52 randfile15
-rw-r--r--. 1 root root 28311552 Aug 18 14:52 randfile16
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile17
-rw-r--r--. 1 root root 30408704 Aug 18 14:52 randfile18
-rw-r--r--. 1 root root 23068672 Aug 18 14:52 randfile19
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile2
-rw-r--r--. 1 root root 24117248 Aug 18 14:53 randfile20
-rw-r--r--. 1 root root 17825792 Aug 18 14:53 randfile21
-rw-r--r--. 1 root root 14680064 Aug 18 14:53 randfile22
-rw-r--r--. 1 root root 12582912 Aug 18 14:53 randfile23
-rw-r--r--. 1 root root 16777216 Aug 18 14:53 randfile24
-rw-r--r--. 1 root root 28311552 Aug 18 14:53 randfile25
-rw-r--r--. 1 root root 22020096 Aug 18 14:53 randfile26
-rw-r--r--. 1 root root 30408704 Aug 18 14:53 randfile27
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile3
-rw-r--r--. 1 root root 10485760 Aug 18 14:52 randfile4
-rw-r--r--. 1 root root 13631488 Aug 18 14:52 randfile5
-rw-r--r--. 1 root root 29360128 Aug 18 14:52 randfile6
-rw-r--r--. 1 root root 14680064 Aug 18 14:52 randfile7
-rw-r--r--. 1 root root 19922944 Aug 18 14:52 randfile8
-rw-r--r--. 1 root root 16777216 Aug 18 14:52 randfile9

Ok, great, everything's worked as expected.
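
If you want to be extra sure the backup matches the pre-snapshot data, extract it to a scratch directory and check it against the checksums from Step 2 (again assuming that optional /root/snaptest.sha256 file exists):

# Extract the backup somewhere harmless and verify it against the Step 2 checksums.
mkdir -p /tmp/restore
tar xzf ~/snaptest_backup_from_snapshot.tar.gz -C /tmp/restore
(cd /tmp/restore/snaptest_ss && sha256sum -c /root/snaptest.sha256)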

Step 8: Toying With the Snapshot

Now I'm curious what happens if I try to write to the snapshot itself. It is mounted read-write after all ...

[root@spacewalk ~]# mount |grep snaptest_ss
/dev/mapper/centos-snaptest_ss on /snaptest_ss type ext4 (rw,relatime,seclabel,data=ordered)

So let's see what happens, shall we?

[root@spacewalk ~]# dd if=/dev/urandom of=/snaptest_ss/backwards1 bs=1M count=25
25+0 records in
25+0 records out
26214400 bytes (26 MB) copied, 2.15702 s, 12.2 MB/s

... huh, nothing exploded.

[root@spacewalk ~]# ls -l /snaptest_ss/backwards1
-rw-r--r--. 1 root root 26214400 Aug 18 15:58 /snaptest_ss/backwards1
[root@spacewalk ~]# ls -l /snaptest/backwards1
ls: cannot access /snaptest/backwards1: No such file or directory

So it didn't leak backwards into the parent LV. As near as I can tell, the snapshot is now diverged from what /snaptest had in it at the time of the snapshot's creation, and that's apparently OK. I could see this feature being super useful, actually.
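
If you want to see the divergence at a glance, a quick diff of the two directory listings shows what now exists on one side but not the other:

# Compare what the origin and the snapshot each contain now that they've diverged.
diff <(ls /snaptest) <(ls /snaptest_ss)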

[root@spacewalk ~]# lvs
  LV          VG     Attr       LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao----  44.47g
  snaptest    centos owi-aos---   5.00g
  snaptest_ss centos swi-aos---   2.00g      snaptest 77.35
  spacewalk   centos -wi-ao---- 198.90g
  swap        centos -wi-ao----   5.00g
[root@spacewalk ~]# lvdisplay /dev/mapper/centos-snaptest
  --- Logical volume ---
  LV Path                /dev/centos/snaptest
  LV Name                snaptest
  VG Name                centos
  LV UUID                mxl7fi-WjPb-dTcs-J3TS-orac-RoJD-tyWLKt
  LV Write Access        read/write
  LV Creation host, time spacewalk.universe, 2016-08-18 14:45:28 -0500
  LV snapshot status     source of
                         snaptest_ss [active]
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

[root@spacewalk ~]# lvdisplay /dev/mapper/centos-snaptest_ss
  --- Logical volume ---
  LV Path                /dev/centos/snaptest_ss
  LV Name                snaptest_ss
  VG Name                centos
  LV UUID                zb6kFd-gL7G-s6eU-tENU-4n6l-M08U-JnAe3Q
  LV Write Access        read/write
  LV Creation host, time spacewalk.universe, 2016-08-18 15:29:29 -0500
  LV snapshot status     active destination for snaptest
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  COW-table size         2.00 GiB
  COW-table LE           512
  Allocated to snapshot  77.35%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

All this looks as expected with no complaints.

Step 9: Eliminate the Snapshot

There should be no ill effect on /snaptest from removing the snapshot.

[root@spacewalk ~]# umount /snaptest_ss
[root@spacewalk ~]# lvremove /dev/mapper/centos-snaptest_ss
Do you really want to remove active logical volume snaptest_ss? [y/n]: y
  Logical volume "snaptest_ss" successfully removed
[root@spacewalk ~]# lvs
  LV        VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root      centos -wi-ao----  44.47g
  snaptest  centos -wi-ao----   5.00g
  spacewalk centos -wi-ao---- 198.90g
  swap      centos -wi-ao----   5.00g
[root@spacewalk ~]#

I am curious to see if the new backwards1 file showed up in /snaptest when we eliminated the snapshot.

[root@spacewalk ~]# ls -l /snaptest/backwards1
ls: cannot access /snaptest/backwards1: No such file or directory

Guess not. Good to know.
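
And if you're done with the experiment entirely, tearing down the test LV is the same dance in reverse (only do this if you no longer need /snaptest, obviously):

# Tear down the rest of the experiment: unmount and remove the test LV too.
umount /snaptest
lvremove /dev/mapper/centos-snaptest
rmdir /snaptest /snaptest_ss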

Conclusions

LVM snapshots just work, and work quite well. I can see myself using this all the time for lots of things such as backups, taking statistics of a very dynamic I/O operation, etc. The only real downfall I see is that you have to guess up front how much CoW space to provision.

What happens if you don't size the snapshot large enough, you ask? I explore that very thing in part 2 of my LVM Snapshot series.