author    Karol Lewandowski <k.lewandowsk@samsung.com>  2024-01-23 12:58:00 +0100
committer Karol Lewandowski <k.lewandowsk@samsung.com>  2024-01-23 12:58:00 +0100
commit    cbab226a74fbaaa43220dee80e8435555c6506ce (patch)
tree      1bbd14ec625ea85d0bcc32232d51c1f71e2604d2 /man/lvmvdo.7_main
parent    44a3c2255bc480c82f34db156553a595606d8a0b (diff)
Diffstat (limited to 'man/lvmvdo.7_main')
-rw-r--r--  man/lvmvdo.7_main  448
1 file changed, 448 insertions(+), 0 deletions(-)
diff --git a/man/lvmvdo.7_main b/man/lvmvdo.7_main
new file mode 100644
index 0000000..931c461
--- /dev/null
+++ b/man/lvmvdo.7_main
@@ -0,0 +1,448 @@
+.TH "LVMVDO" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
+.
+.SH NAME
+.
+lvmvdo \(em Support for Virtual Data Optimizer in LVM
+.
+.SH DESCRIPTION
+.
+VDO is software that provides inline
+block-level deduplication, compression, and thin provisioning capabilities
+for primary storage.
+.P
+Deduplication is a technique for reducing the consumption of storage
+resources by eliminating multiple copies of duplicate blocks. Compression
+takes the individual unique blocks and shrinks them.
+These reduced blocks are then efficiently packed together into
+physical blocks. Thin provisioning manages the mapping from logical blocks
+presented by VDO to where the data has actually been physically stored,
+and also eliminates any blocks of all zeroes.
+.P
+With deduplication, instead of writing the same data more than once, VDO detects and records each
+duplicate block as a reference to the original
+block. VDO maintains a mapping from Logical Block Addresses (LBA) (used by the
+storage layer above VDO) to physical block addresses (used by the storage
+layer under VDO). After deduplication, multiple logical block addresses
+may be mapped to the same physical block address; these are called shared
+blocks and are reference-counted by the software.
+.P
+With compression, VDO compresses multiple blocks (or shared blocks)
+with the fast LZ4 algorithm, and bins them together where possible so that
+multiple compressed blocks fit within a 4 KB block on the underlying
+storage. Each LBA is mapped to a physical block address plus an index
+within it that locates the desired compressed data. All compressed blocks
+are individually reference counted for correctness.
+.P
+Block sharing and block compression are invisible to applications using
+the storage, which read and write blocks as they would if VDO were not
+present. When a shared block is overwritten, a new physical block is
+allocated for storing the new block data to ensure that other logical
+block addresses that are mapped to the shared physical block are not
+modified.
+.P
+To use VDO with \fBlvm\fP(8), you must install the standard VDO user-space tools
+\fBvdoformat\fP(8) and the currently non-standard kernel VDO module
+"\fIkvdo\fP".
+.P
+The "\fIkvdo\fP" module implements fine-grained storage virtualization,
+thin provisioning, block sharing, and compression.
+The "\fIuds\fP" module provides memory-efficient duplicate
+identification. The user-space tools include \fBvdostats\fP(8)
+for extracting statistics from VDO volumes.
+.
+.SH VDO TERMS
+.
+.TP
+VDODataLV
+.br
+VDO data LV
+.br
+A large hidden LV with the _vdata suffix. It is created in a VG
+and used by the VDO kernel target to store all data and metadata blocks.
+.
+.TP
+VDOPoolLV
+.br
+VDO pool LV
+.br
+A pool for virtual VDOLV(s); its size matches that of the used VDODataLV.
+.br
+Only a single VDOLV is currently supported.
+.
+.TP
+VDOLV
+.br
+VDO LV
+.br
+Created from VDOPoolLV.
+.br
+Appears blank after creation.
+.
+.SH VDO USAGE
+.
+The primary methods for using VDO with lvm2:
+.nr step 1 1
+.
+.SS \n[step]. Create a VDOPoolLV and a VDOLV
+.
+Create a VDOPoolLV that will hold VDO data, together with a
+virtual size VDOLV that the user can use. If you do not specify the virtual size,
+then the VDOLV is created with the maximum size that
+always fits into the data volume even if no
+deduplication or compression can happen
+(i.e. it can hold even the incompressible content of /dev/urandom).
+If you do not specify the name of the VDOPoolLV, it is taken from
+the sequence vpool0, vpool1, ...
+.P
+Note: The performance of TRIM/Discard operations is slow for large
+volumes of VDO type. Avoid sending discard requests unless necessary,
+because it might take a considerable amount of time to finish the
+discard operation.
+.P
+.nf
+.B lvcreate --type vdo -n VDOLV -L DataSize -V LargeVirtualSize VG/VDOPoolLV
+.B lvcreate --vdo -L DataSize VG
+.fi
+.P
+.I Example
+.nf
+# lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
+# mkfs.ext4 -E nodiscard /dev/vg/vdo0
+.fi
+.
+.SS \n+[step]. Convert an existing LV into VDOPoolLV
+.
+Convert an existing LV into a VDOPoolLV, which is a volume
+that can hold data and metadata.
+You will be prompted to confirm such a conversion because it \fBIRREVERSIBLY
+DESTROYS\fP the content of the volume, which is immediately
+formatted by \fBvdoformat\fP(8) as a VDO pool data volume. You can
+specify the virtual size of the VDOLV associated with this VDOPoolLV.
+If you do not specify the virtual size, it will be set to the maximum size
+that can hold 100% incompressible data.
+.P
+.nf
+.B lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
+.B lvconvert --vdopool VG/VDOPoolLV
+.fi
+.P
+.I Example
+.nf
+# lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV
+.fi
+.
+.SS \n+[step]. Change the compression and deduplication of a VDOPoolLV
+.
+Disable or enable the compression and deduplication for VDOPoolLV
+(the volume that maintains all VDO LV(s) associated with it).
+.P
+.B lvchange --compression y|n --deduplication y|n VG/VDOPoolLV
+.P
+.I Example
+.nf
+# lvchange --compression n vg/vdopool0
+# lvchange --deduplication y vg/vdopool1
+.fi
+.
+.SS \n+[step]. Change the default settings used for creating a VDOPoolLV
+.
+VDO allows you to set a large variety of options. Many of these settings
+can be specified in lvm.conf or in profile settings. You can prepare
+a number of different profiles in the \fI#DEFAULT_SYS_DIR#/profile\fP directory
+and just specify the profile file name.
+Check the output of \fBlvmconfig --type default --withcomments\fP
+for a detailed description of all individual VDO settings.
+.P
+.I Example
+.nf
+# cat <<EOF > #DEFAULT_SYS_DIR#/profile/vdo_create.profile
+allocation {
+.RS
+vdo_use_compression=1
+vdo_use_deduplication=1
+vdo_use_metadata_hints=1
+vdo_minimum_io_size=4096
+vdo_block_map_cache_size_mb=128
+vdo_block_map_period=16380
+vdo_use_sparse_index=0
+vdo_index_memory_size_mb=256
+vdo_slab_size_mb=2048
+vdo_ack_threads=1
+vdo_bio_threads=1
+vdo_bio_rotation=64
+vdo_cpu_threads=2
+vdo_hash_zone_threads=1
+vdo_logical_threads=1
+vdo_physical_threads=1
+vdo_write_policy="auto"
+vdo_max_discard=1
+.RE
+}
+EOF
+.P
+# lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
+# lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1
+.fi
+.
+.SS \n+[step]. Set or change VDO settings with option --vdosettings
+.
+Use the form 'option=value' or 'option1=value option2=value',
+or repeat --vdosettings for each option being set.
+Options are listed in the Example section above; for the full description, see
+.BR lvm.conf (5).
+Option names can omit the 'vdo_' and 'vdo_use_' prefixes and all underscores,
+so e.g. vdo_use_metadata_hints=1 and metadatahints=1 are equivalent.
+To change an option for an already existing VDOPoolLV, use the
+.BR lvchange (8)
+command. However, not all options can be changed.
+Only the compression and deduplication options can also be changed
+for an active VDO LV.
+Options specified in the configuration file have the lowest priority,
+followed by options given with --vdosettings; the explicit options
+--compression and --deduplication have the highest priority.
+.P
+.I Example
+.P
+.nf
+# lvcreate --vdo -L10G --vdosettings 'ack_threads=1 hash_zone_threads=2' vg/vdopool0
+# lvchange --vdosettings 'bio_threads=2 deduplication=1' vg/vdopool0
+.fi
+.
+.SS \n+[step]. Checking the usage of VDOPoolLV
+.
+To quickly check how much data on a VDOPoolLV is already consumed,
+use \fBlvs\fP(8). For the VDOLV, the Data% field reports how much of
+the virtual content is occupied; for the VDOPoolLV, it reports how much
+space is already consumed by all the data and metadata blocks.
+For a detailed description, use the \fBvdostats\fP(8) command.
+.P
+Note: \fBvdostats\fP(8) currently understands only \fI/dev/mapper\fP device names.
+.P
+.I Example
+.nf
+# lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
+# mkfs.ext4 -E nodiscard /dev/vg/vdo0
+# lvs -a vg
+.P
+ LV VG Attr LSize Pool Origin Data%
+ vdo0 vg vwi-a-v--- 20.00g vdopool0 0.01
+ vdopool0 vg dwi-ao---- 10.00g 30.16
+ [vdopool0_vdata] vg Dwi-ao---- 10.00g
+.P
+# vdostats --all /dev/mapper/vg-vdopool0-vpool
+/dev/mapper/vg-vdopool0-vpool :
+ version : 30
+ release version : 133524
+ data blocks used : 79
+ ...
+.fi
+.
+.SS \n+[step]. Extending the VDOPoolLV size
+.
+You can add more space to hold VDO data and metadata by
+extending the VDODataLV using the commands
+\fBlvresize\fP(8) and \fBlvextend\fP(8).
+The extension needs to add at least one new VDO slab. You can configure
+the slab size with the \fB\%allocation/\:vdo_slab_size_mb\fP setting.
+.P
+You can also enable automatic size extension of a monitored VDOPoolLV
+with the \fBactivation/vdo_pool_autoextend_percent\fP and
+\fB\%activation/\:vdo_pool_autoextend_threshold\fP settings.
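+.P
+For illustration, a profile or lvm.conf fragment like the following (a sketch;
+the threshold and percent values are arbitrary) would grow a monitored
+VDOPoolLV by 20% whenever its usage crosses 70%:
+.P
+.nf
+activation {
+.RS
+vdo_pool_autoextend_threshold = 70
+vdo_pool_autoextend_percent = 20
+.RE
+}
+.fi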
+.P
+Note: You cannot reduce the size of a VDOPoolLV.
+.P
+.B lvextend -L+AddingSize VG/VDOPoolLV
+.P
+.I Example
+.nf
+# lvextend -L+50G vg/vdopool0
+# lvresize -L300G vg/vdopool1
+.fi
+.
+.SS \n+[step]. Extending or reducing the VDOLV size
+.
+You can extend or reduce a virtual VDO LV as a standard LV with the
+\fBlvresize\fP(8), \fBlvextend\fP(8), and \fBlvreduce\fP(8) commands.
+.P
+Note: The reduction needs to process TRIM for the reduced disk area
+to unmap used data blocks from the VDOPoolLV, which might take
+a long time.
+.P
+.B lvextend -L+AddingSize VG/VDOLV
+.br
+.B lvreduce -L-ReducingSize VG/VDOLV
+.P
+.I Example
+.nf
+# lvextend -L+50G vg/vdo0
+# lvreduce -L-50G vg/vdo1
+# lvresize -L200G vg/vdo2
+.fi
+.
+.SS \n+[step]. Component activation of a VDODataLV
+.
+You can activate a VDODataLV separately as a component LV for examination
+purposes. A component activation makes the data LV available in read-only
+mode, so its content cannot be modified.
+While the VDODataLV is active as a component, any upper LV using this volume
+CANNOT be activated. You have to deactivate the VDODataLV first to continue
+using the VDOPoolLV.
+.P
+.I Example
+.nf
+# lvchange -ay vg/vpool0_vdata
+# lvchange -an vg/vpool0_vdata
+.fi
+.
+.SH VDO TOPICS
+.
+.nr step 1 1
+.
+.SS \n[step]. Stacking VDO
+.
+You can convert or stack a VDOPoolLV with these currently supported
+volume types: linear, stripe, raid, and cache with cachepool.
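+.P
+For illustration, a sketch of placing a VDOPoolLV on a striped data volume
+(assuming the VG has at least two PVs; see the raid variant in the topic
+below):
+.P
+.nf
+# lvcreate --type striped -i 2 -L 5G -n vdopool vg
+# lvconvert --type vdo-pool -V 10G vg/vdopool
+.fi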
+.
+.SS \n+[step]. Using multiple volumes with the same VDOPoolLV
+.
+You can convert an existing VDO LV into a thin volume. After this conversion,
+you can create thin snapshots or add more thin volumes
+to the thin pool, which is named after the original LV with
+the suffix _tpool0 (e.g. vdo1_tpool0).
+.P
+.I Example
+.nf
+# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
+# lvconvert --type thin vg/vdo1
+# lvcreate -V20 vg/vdo1_tpool0
+.fi
+.
+.SS \n+[step]. VDOPoolLV on top of raid
+.
+You can use a raid type LV as the VDODataLV.
+.P
+.I Example
+.nf
+# lvcreate --type raid1 -L 5G -n vdopool vg
+# lvconvert --type vdo-pool -V 10G vg/vdopool
+.fi
+.
+.SS \n+[step]. Caching a VDOPoolLV
+.
+Caching a VDOPoolLV (the command also accepts the VDODataLV volume name)
+provides a mechanism to accelerate reads and writes of already compressed
+and deduplicated data blocks together with VDO metadata.
+.P
+.I Example
+.nf
+# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
+# lvcreate --type cache-pool -L 1G -n cachepool vg
+# lvconvert --cache --cachepool vg/cachepool vg/vdopool
+# lvconvert --uncache vg/vdopool
+.fi
+.
+.SS \n+[step]. Caching a VDOLV
+.
+Caching a VDO LV allows you to cache the device for better performance
+before requests reach the processing of the VDO Pool LV layer.
+.P
+.I Example
+.nf
+# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
+# lvcreate --type cache-pool -L 1G -n cachepool vg
+# lvconvert --cache --cachepool vg/cachepool vg/vdo1
+# lvconvert --uncache vg/vdo1
+.fi
+.
+.SS \n+[step]. Usage of Discard/TRIM with a VDOLV
+.
+You can discard data on a VDO LV and reduce used blocks on a VDOPoolLV.
+However, discard operations currently perform suboptimally and can take
+a considerable amount of time and CPU.
+Unless you really need it, you should avoid using discard.
+.P
+When a block device is going to be rewritten,
+its blocks will be automatically reused for new data.
+Discard is useful in situations where the user knows that a given portion
+of a VDO LV is not going to be used, so the discarded space can be used
+for block provisioning in other regions of the VDO LV.
+For the same reason, you should avoid using mkfs with discard for
+a freshly created VDO LV: the device is already expected to be empty,
+so discarding it only wastes the considerable time the operation takes.
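+.P
+If you do need to return unused space to the VDOPoolLV, one approach is to
+run \fBfstrim\fP(8) on a filesystem mounted from the VDO LV (a sketch; the
+mount point /mnt/vdo is hypothetical):
+.P
+.nf
+# mount /dev/vg/vdo0 /mnt/vdo
+# fstrim /mnt/vdo
+.fi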
+.
+.SS \n+[step]. Memory usage
+.
+The VDO target requires 38 MiB of RAM and several variable amounts:
+.IP \(bu 2
+1.15 MiB of RAM for each 1 MiB of configured block map cache size.
+The block map cache requires a minimum of 150 MiB RAM.
+.br
+.IP \(bu
+1.6 MiB of RAM for each 1 TiB of logical space.
+.br
+.IP \(bu
+268 MiB of RAM for each 1 TiB of physical storage managed by the volume.
+.br
+.P
+UDS requires a minimum of 250 MiB of RAM,
+which is also the default amount that deduplication uses.
+.P
+The memory required for the UDS index is determined by the index type
+and the required size of the deduplication window and
+is controlled by the \fBallocation/vdo_use_sparse_index\fP setting.
+.P
+With UDS sparse indexing enabled, the index relies on the temporal locality
+of data and attempts to retain only the most relevant index entries in
+memory; it can maintain a deduplication window that is ten times larger
+than the dense index while using the same amount of memory.
+.P
+Although the sparse index provides the greatest coverage,
+the dense index provides more deduplication advice.
+For most workloads, given the same amount of memory,
+the difference in deduplication rates between dense
+and sparse indexes is negligible.
+.P
+A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window,
+while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window.
+In general, 1 GiB is sufficient for 4 TiB of physical space with
+a dense index and 40 TiB with a sparse index.
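+.P
+For illustration, combining the figures above for a hypothetical VDO volume
+with 4 TiB of physical storage, 10 TiB of logical size, a 150 MiB block map
+cache, and the default 250 MiB dense index:
+.P
+.nf
+38 + (1.15 * 150) + (1.6 * 10) + (268 * 4) + 250 = ~1549 MiB of RAM
+.fi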
+.
+.SS \n+[step]. Storage space requirements
+.
+You can configure a VDOPoolLV to use up to 256 TiB of physical storage.
+Only a certain part of the physical storage is usable to store data.
+This section provides the calculations to determine the usable size
+of a VDO-managed volume.
+.P
+The VDO target requires storage for two types of VDO metadata and for the UDS index:
+.IP \(bu 2
+The first type of VDO metadata uses approximately 1 MiB for each 4 GiB
+of physical storage plus an additional 1 MiB per slab.
+.IP \(bu
+The second type of VDO metadata consumes approximately 1.25 MiB
+for each 1 GiB of logical storage, rounded up to the nearest slab.
+.IP \(bu
+The amount of storage required for the UDS index depends on the type of index
+and the amount of RAM allocated to the index. For each 1 GiB of RAM,
+a dense UDS index uses 17 GiB of storage, and a sparse UDS index uses
+170 GiB of storage.
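+.P
+For illustration, a hypothetical 1 TiB (1024 GiB) VDOPoolLV with a 10 TiB
+logical size, the default 2 GiB slab size (512 slabs), and a dense index
+using 0.25 GiB of RAM needs approximately:
+.P
+.nf
+first VDO metadata   1024/4 + 512 * 1 =  768 MiB
+second VDO metadata  1.25 * 10240     = 12.5 GiB
+dense UDS index      17 * 0.25        = 4.25 GiB
+.fi
+.P
+which leaves roughly 1006 GiB of the 1 TiB usable for storing data.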
+.
+.SH SEE ALSO
+.
+.nh
+.ad l
+.BR lvm (8),
+.BR lvm.conf (5),
+.BR lvmconfig (8),
+.BR lvcreate (8),
+.BR lvconvert (8),
+.BR lvchange (8),
+.BR lvextend (8),
+.BR lvreduce (8),
+.BR lvresize (8),
+.BR lvremove (8),
+.BR lvs (8),
+.P
+.BR vdo (8),
+.BR vdoformat (8),
+.BR vdostats (8),
+.P
+.BR mkfs (8)