# Benchmarking GFS2/DLM vs. Thin/DRBD

## Introduction

I ran some benchmarks comparing sparse files on [GFS2](https://pub.nethence.com/storage/gfs2) against [DRBD _on top of_ LVM2 thin-provisioning](https://pub.nethence.com/storage/drbd-thin). I expected that setting up DRBD mirrors **_on top of LVM2_** volumes would gain some performance over a shared-disk file-system, but I could not scale the machines up to the point where a difference became noticeable. In what I have tested so far, even though there is still an additional file-system layer, GFS2 scales as well as, if not better than, a thin-provisioned LV.

## Setup

See [Bonnie++ on steroids](https://pub.nethence.com/storage/bonnie-steroids). Once set up, `drbdtop` shows:

```
DRBDTOP (kernel: 9.0.24; utils: 9.13.0; host: pro5s1)
┌◉ (LIVE UPDATING) Resource List
│ Name       | Role      | Disks | Peer Disks | Connections | Overall | Quorum
│ res-data   | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack1 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack2 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack3 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack4 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack5 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack6 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack7 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack8 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack9 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓

DRBDTOP (kernel: 9.0.24; utils: 9.13.0; host: pro5s2)
┌◉ (LIVE UPDATING) Resource List
│ Name       | Role      | Disks | Peer Disks | Connections | Overall | Quorum
│ res-data   | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack1 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack2 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack3 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack4 | Primary   | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack5 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack6 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack7 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack8 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
│ res-slack9 | Secondary | ✓     | ✓          | ✓           | ✓       | ✓
```

## Run 1

A single LUN, either as a sparse file on GFS2 or as a thin-provisioned LV.
The tests were run one after the other.

![Run 1 results](2020-08/run1.png)

See the raw results [as text](2020-08/run1.txt) or [html](2020-08/run1.html).

## Run 2

Same as run 1, but this time the tests were run concurrently (GFS2 and thin/DRBD at the same time), against 9 mount points each.

![Run 2 results](2020-08/run2.png)

See the raw results [as text](2020-08/run2.txt) or [html](2020-08/run2.html).

## Run 3

Same as run 2, but every bonnie++ test was repeated 5 times (`-x 5`).

![Run 3 results](2020-08/run3.png)

See the raw results [as text](2020-08/run3.txt) or [html](2020-08/run3.html).

## Conclusion

As it is tremendously easier to set up and maintain, it is clear to me that GFS2 is the way to go for vdisks, just as VMware has been doing for ages with VMFS. Unfortunately, their EULA forbids benchmarking their products, not to mention that it is proprietary software anyway. So next time we will play with GFS2 vs. OCFS2.
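For the record, the benchmark driver for runs 2 and 3 (concurrent bonnie++ jobs against 9 mount points of each kind, each pass repeated 5 times with `-x 5`) can be sketched roughly as follows. The mount-point paths `/mnt/gfs2N` and `/mnt/thindrbdN` are hypothetical placeholders, not the actual layout from the setup above; the sketch only prints the command lines as a dry run, so they can be reviewed before being piped to a shell.

```shell
# Sketch of the concurrent benchmark launcher used in runs 2 and 3.
# Assumptions: nine GFS2 mount points /mnt/gfs2{1..9} and nine
# thin/DRBD mount points /mnt/thindrbd{1..9}.
gen_bench_cmds() {
    for i in $(seq 1 9); do
        for fs in gfs2 thindrbd; do
            # -d: target directory, -u: user to run as,
            # -x 5: repeat each bonnie++ pass five times (run 3);
            # the trailing '&' makes all jobs run concurrently (runs 2 and 3)
            echo "bonnie++ -d /mnt/${fs}${i} -u root -x 5 &"
        done
    done
    # wait for all backgrounded jobs to finish
    echo "wait"
}

gen_bench_cmds   # dry run; pipe to 'sh' to actually launch
```

Printing the command lines first makes it easy to sanity-check the 18 concurrent jobs before committing several hours of disk I/O to them.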