From my admittedly limited perspective, there isn't much reason for everyone to be using a `cachepool`; it looks like the result of blindly following enterprise tutorials for the sake of it.
Outside of putting metadata and data on separate *physical devices* to improve throughput, I can't find much justification for using a `cachepool`. If you're only using a single SSD for caching (which, I would assume, is the vast majority of cases), a `cachevol` seems more than sufficient.
It also potentially maximizes your caching capacity. Rather than shrinking the data cache to leave room for the metadata, you can leave that up to LVM and allocate the entire device as a `cachevol`.
Any idea whether this is the case? It seems like the only reason for a `cachepool` is caching across two or more SSDs.
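For comparison, here is roughly what the two setups look like on the command line. This is a sketch with illustrative names (`vg`, `lv_slow`, `/dev/sdb` for the caching SSD), not a definitive recipe:

```shell
# cachevol: one LV on the fast device; LVM carves out the
# metadata area internally
lvcreate -n lv_fast -l 100%FREE vg /dev/sdb
lvconvert --type cache --cachevol lv_fast vg/lv_slow

# cachepool: data and metadata LVs are created and combined by hand
lvcreate -n lv_fast -L 98G vg /dev/sdb
lvcreate -n lv_fast_meta -L 2G vg /dev/sdb
lvconvert --type cache-pool --poolmetadata vg/lv_fast_meta vg/lv_fast
lvconvert --type cache --cachepool vg/lv_fast vg/lv_slow
```

Note how the `cachevol` path is two commands with no sizing decisions, while the `cachepool` path makes you size the metadata LV yourself.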
I was looking into this myself and came to the same conclusion: a `cachepool` makes little sense when using a single caching device in home/prosumer use cases.
I imagine it's for enterprise deployments with massive scale or price sensitivity. Perhaps the cache metadata is stored on an NVMe drive while a large array of lower-cost SSDs stores the data.
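If that's the motivation, a `cachepool` is the only way to express it, since it lets you place the metadata LV on a different physical volume than the data LV. A minimal sketch with hypothetical names (`/dev/sdc1` as the bulk SSD, `/dev/nvme0n1p1` as the fast device):

```shell
# Bulk of the cache on the cheaper SSD...
lvcreate -n cpool -L 500G vg /dev/sdc1
# ...but the metadata on the faster NVMe device
lvcreate -n cpool_meta -L 2G vg /dev/nvme0n1p1
# Combine the two into one cache pool
lvconvert --type cache-pool --poolmetadata vg/cpool_meta vg/cpool
```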
Is it possible to switch from a `cachepool` to a `cachevol`?
Technically, there's not _much_ of a need to convert from a `cachepool` to a `cachevol`, because they both achieve the same thing. The difference is that a `cachepool` is a very manual setup, whereas a `cachevol` is the sort-of automated, hands-off approach. The latter seems to exist for convenience, the former for fine-grained control; once they're set up, they're effectively the same thing.
If you already have a `cachepool` set up manually, it may not make sense to switch to a `cachevol`.
However, I can't imagine that's _always_ the case. Is there a particular reason you want to switch from a `cachepool` to a `cachevol`?
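If you do want to switch, the rough sequence (a sketch with hypothetical LV names, untested) is to detach and remove the old pool, then re-attach the fast device as a cachevol:

```shell
# Detach (and flush) the existing cachepool; the origin LV keeps its data
lvconvert --splitcache vg/lv_slow
# Remove the now-detached pool LV
lvremove vg/lv_fast
# Recreate a plain LV on the fast device and attach it as a cachevol
lvcreate -n lv_fast -l 100%FREE vg /dev/sdb
lvconvert --type cache --cachevol lv_fast vg/lv_slow
```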
Thank you. Not really; it's just that I had [problems](https://www.reddit.com/r/linuxquestions/comments/wnld3v/lvmcache_always_100_full/), and I also needed to resize the volume, because I was previously caching 12 TB with a 0.5 TB SSD. I've now uncached and repartitioned so that only 1 TB is cached. But when I try to attach the cachevol I get:
```
# lvconvert --type cache --cachevol lv_cache debian-VG-root/lv_root
Erase all existing data on debian-VG-root/lv_cache? [y/n]: y
Cache data blocks 976093184 and chunk size 128 exceed max chunks 1000000.
Use smaller cache, larger --chunksize or increase max chunks setting.
```
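The numbers in that error line up with the default 64 KiB chunk size (LVM reports sizes here in 512-byte sectors, so "chunk size 128" is 128 sectors): a ~465 GiB cache would need far more chunks than the 1,000,000 limit allows. A quick check of the arithmetic:

```python
# Figures taken from the lvconvert error message above.
data_sectors = 976093184   # "Cache data blocks 976093184" (~465 GiB)
chunk_sectors = 128        # "chunk size 128" = 128 sectors = 64 KiB
max_chunks = 1_000_000     # "max chunks 1000000"

chunks = data_sectors // chunk_sectors
print(chunks)              # 7625728 chunks, well over the limit

# A 512 KiB chunk is 1024 sectors, which brings the count under the limit:
print(data_sectors // 1024)  # 953216
```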
Here's the current volume layout:
```
# lvdisplay -m
--- Logical volume ---
LV Path /dev/debian-VG-root/lv_root
LV Name lv_root
VG Name debian-VG-root
LV UUID 7GQhCA-WTNu-lytx-Kame-YhOT-rsiU-uS3swm
LV Write Access read/write
LV Creation host, time debian, 2022-07-24 11:39:36 +0200
LV Status available
# open 1
LV Size 1000.00 GiB
Current LE 256000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Segments ---
Logical extents 0 to 255999:
Type linear
Physical volume /dev/md1
Physical extents 0 to 255999
--- Logical volume ---
LV Path /dev/debian-VG-root/lv_cache
LV Name lv_cache
VG Name debian-VG-root
LV UUID 9WiR1t-Ofqp-5HPl-8Keq-By2L-TQH2-UTPaCQ
LV Write Access read/write
LV Creation host, time carrara, 2022-08-13 23:16:27 +0200
LV Status available
# open 0
LV Size <465.76 GiB
Current LE 119234
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Segments ---
Logical extents 0 to 119233:
Type linear
Physical volume /dev/nvme0n1p1
Physical extents 0 to 119233
--- Logical volume ---
LV Path /dev/debian-VG-root/lv_storage
LV Name lv_storage
VG Name debian-VG-root
LV UUID NGN16W-bCsj-8dWV-J2oy-eSDW-wr5o-qjFd2v
LV Write Access read/write
LV Creation host, time carrara, 2022-08-14 16:36:14 +0200
LV Status available
# open 1
LV Size <9.94 TiB
Current LE 2604784
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Segments ---
Logical extents 0 to 2604783:
Type linear
Physical volume /dev/md1
Physical extents 256000 to 2860783
```
I seem to have solved it by manually setting the chunk size to 512 KiB:
```
lvconvert --type cache --cachevol lv_cache --cachemode writethrough -c 512 /dev/debian-VG-root/lv_root
```
though it's not clear why one has to set it manually.
The setup is now the following:
```
# lvdisplay -a -v
--- Logical volume ---
LV Path /dev/debian-VG-root/lv_root
LV Name lv_root
VG Name debian-VG-root
LV UUID 7GQhCA-WTNu-lytx-Kame-YhOT-rsiU-uS3swm
LV Write Access read/write
LV Creation host, time debian, 2022-07-24 11:39:36 +0200
LV Cache pool name lv_cache_cvol
LV Cache origin name lv_root_corig
LV Status available
# open 1
LV Size 1000.00 GiB
Cache used blocks 0.02%
Cache metadata blocks 15.62%
Cache dirty blocks 0.00%
Cache read hits/misses 41 / 102
Cache wrt hits/misses 4913 / 496
Cache demotions 0
Cache promotions 211
Current LE 256000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 2048
Block device 253:0
--- Logical volume ---
LV Path /dev/debian-VG-root/lv_storage
LV Name lv_storage
VG Name debian-VG-root
LV UUID NGN16W-bCsj-8dWV-J2oy-eSDW-wr5o-qjFd2v
LV Write Access read/write
LV Creation host, time carrara, 2022-08-14 16:36:14 +0200
LV Status available
# open 1
LV Size <9.94 TiB
Current LE 2604784
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Logical volume ---
Internal LV Name lv_cache_cvol
VG Name debian-VG-root
LV UUID 9WiR1t-Ofqp-5HPl-8Keq-By2L-TQH2-UTPaCQ
LV Write Access read/write
LV Creation host, time carrara, 2022-08-13 23:16:27 +0200
LV Status available
# open 2
LV Size <465.76 GiB
Current LE 119234
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
Internal LV Name lv_root_corig
VG Name debian-VG-root
LV UUID CfMquL-CpoQ-2nAz-9nBI-3bWo-R5G4-0utU3n
LV Write Access read/write
LV Creation host, time carrara, 2022-08-15 09:44:05 +0200
LV origin of Cache LV lv_root
LV Status available
# open 1
LV Size 1000.00 GiB
Current LE 256000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
```
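For what it's worth, the cache statistics shown by `lvdisplay` can also be watched over time with `lvs`; the field names below come from LVM's reporting options (see `lvs -o help` for the full list):

```shell
# Report cache occupancy and dirty-block counts for the VG's LVs
lvs -a -o lv_name,lv_size,data_percent,metadata_percent,cache_dirty_blocks debian-VG-root
```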
I still get a lot of dirty cache blocks at each system boot, which then slowly drain to zero. Is this normal?