Raspberry Pi 5 vs refurbished Mini PC for a small k8s cluster


I wanted to experiment with a Kubernetes cluster: partly to refresh my knowledge of k8s, since I had lost touch with it over the last few years, but also for a more pragmatic reason. I mainly use Docker to self-host applications, but k8s is more resilient. My main concern was storage resilience. When using Docker you either have to:

  • constantly back up the volumes (the backup can be automated; I use this tool)
  • buy or build a NAS and mount your volumes over NFS from it
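The first option boils down to archiving the volume's data directory on a schedule. A minimal sketch of that, assuming the volume's data dir is readable (in a real setup the source would be `/var/lib/docker/volumes/<name>/_data`; a throwaway directory stands in here so the snippet runs anywhere):

```shell
# Sketch of a manual volume backup with tar. "demo-volume" is a stand-in
# for a real Docker volume data directory.
SRC="./demo-volume"
DEST="./backups"
mkdir -p "$SRC" "$DEST"
echo "example" > "$SRC/app.db"                   # stand-in for real data
tar czf "$DEST/volume-backup.tar.gz" -C "$SRC" .  # snapshot the directory
tar tzf "$DEST/volume-backup.tar.gz"              # list contents to verify
```

A real setup would add a date stamp to the filename and ship the archive off the host.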

However, for k8s there are solutions like Longhorn or MicroCeph with resilience built in. I will get into this more when I write an article about my k8s setup. And I know resilience is not a full replacement for backups, but if a drive fails, your life is much easier with such a solution in place. And you can still do backups on top.

The point of this article is just to compare the hardware options.

Hardware options and cost

I knew going into this that I wanted a 3-node, HA (highly available) k8s cluster, running either k3s + Longhorn or MicroK8s + MicroCeph. So a lightweight k8s distribution, but with multi-node HA capability.

My first thought was to use Raspberry Pi 5s. My main concern with this was storage: micro-SD cards don't have great performance, and adding an NVMe M.2 HAT would increase the cost.

So here are the options for me:

  • Raspberry Pi 5, bought new with the original power adapter and active cooler: ~120 USD (plus the price of the micro-SD card, which is negligible)
  • Lenovo ThinkCentre M710q Tiny (i3-6100T), refurbished and upgraded to 16GB single-channel RAM: ~90 USD
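For the full 3-node cluster, the per-unit prices above work out to:

```shell
# Three nodes of each option, using the per-unit prices listed above
awk 'BEGIN { printf "Pi cluster: %d USD, mini PC cluster: %d USD\n", 3 * 120, 3 * 90 }'
```

So the refurbished route saves roughly 90 USD across the cluster before any storage upgrades.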

CPU

The Raspberry Pi 5 has a 4-core ARM64 CPU. The only extra I added is the official active cooler. Let's run the benchmark:

$ sysbench --threads="$(nproc)" cpu run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 4
Initializing random number generator from current time


Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second: 10831.94

General statistics:
    total time:                          10.0004s
    total number of events:              108341

Latency (ms):
         min:                                    0.37
         avg:                                    0.37
         max:                                   11.06
         95th percentile:                        0.37
         sum:                                39980.33

Threads fairness:
    events (avg/stddev):           27085.2500/68.69
    execution time (avg/stddev):   9.9951/0.00

The Lenovo mini PC has an i3-6100T with 2 cores and 4 threads. Let's run the same benchmark:

$ sysbench --threads="$(nproc)" cpu run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 4
Initializing random number generator from current time


Prime numbers limit: 10000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:  3377.09

General statistics:
    total time:                          10.0011s
    total number of events:              33780

Latency (ms):
         min:                                    1.17
         avg:                                    1.18
         max:                                   13.26
         95th percentile:                        1.18
         sum:                                39996.56

Threads fairness:
    events (avg/stddev):           8445.0000/8.86
    execution time (avg/stddev):   9.9991/0.00

So the important data is:

  • Pi 5: 10831.94 events/sec
  • i3-6100T: 3377.09 events/sec
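As a quick sanity check, the ratio of the two sysbench scores:

```shell
# Ratio of the two events/sec figures copied from the runs above
awk 'BEGIN { printf "Pi 5 / i3-6100T: %.1fx\n", 10831.94 / 3377.09 }'
```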

Conclusion: the Pi 5's CPU is ~3x faster. That's a huge difference, given that the Pi's power consumption is also a bit lower.

Note: The i3 runs ~10°C cooler than the Pi 5 in the same environment.

Memory

The Raspberry Pi 5 has 8GB of RAM, which is not upgradeable. Let's run a memory benchmark:

$ sysbench memory --memory-block-size=1M --memory-total-size=4G run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time


Running memory speed test with the following options:
  block size: 1024KiB
  total size: 4096MiB
  operation: write
  scope: global

Initializing worker threads...

Threads started!

Total operations: 4096 (12108.29 per second)

4096.00 MiB transferred (12108.29 MiB/sec)


General statistics:
    total time:                          0.3365s
    total number of events:              4096

Latency (ms):
         min:                                    0.08
         avg:                                    0.08
         max:                                    0.12
         95th percentile:                        0.09
         sum:                                  335.27

Threads fairness:
    events (avg/stddev):           4096.0000/0.00
    execution time (avg/stddev):   0.3353/0.00

The Lenovo came upgraded to 16GB of single-channel RAM (included in the price mentioned above); it has a second slot, so it should be upgradeable to 32GB. Let's run the same benchmark:

$ sysbench memory --memory-block-size=1M --memory-total-size=4G run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time


Running memory speed test with the following options:
  block size: 1024KiB
  total size: 4096MiB
  operation: write
  scope: global

Initializing worker threads...

Threads started!

Total operations: 4096 (18455.30 per second)

4096.00 MiB transferred (18455.30 MiB/sec)


General statistics:
    total time:                          0.2203s
    total number of events:              4096

Latency (ms):
         min:                                    0.05
         avg:                                    0.05
         max:                                    0.11
         95th percentile:                        0.06
         sum:                                  218.99

Threads fairness:
    events (avg/stddev):           4096.0000/0.00
    execution time (avg/stddev):   0.2190/0.00

The important stuff:

  • Pi 5: 12108.29 MiB/sec
  • Lenovo ThinkCentre: 18455.30 MiB/sec

Conclusion: The Lenovo ThinkCentre’s RAM is ~1.5x faster (and it has 2x more).

Storage

The Pi 5 has a 128GB Kingston micro-SD card.

Write benchmark:

$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][11.7%][w=26.0MiB/s][w=26 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [W(1)][21.7%][eta 00m:47s]                        
Jobs: 1 (f=1): [W(1)][31.7%][w=9225KiB/s][w=9 IOPS][eta 00m:41s] 
Jobs: 1 (f=1): [W(1)][41.7%][eta 00m:35s]                        
Jobs: 1 (f=1): [W(1)][51.7%][eta 00m:29s]                         
Jobs: 1 (f=1): [W(1)][61.7%][w=31.0MiB/s][w=31 IOPS][eta 00m:23s] 
Jobs: 1 (f=1): [W(1)][71.7%][w=35.0MiB/s][w=35 IOPS][eta 00m:17s] 
Jobs: 1 (f=1): [W(1)][81.7%][w=6150KiB/s][w=6 IOPS][eta 00m:11s]  
Jobs: 1 (f=1): [W(1)][91.7%][eta 00m:05s]                        
Jobs: 1 (f=1): [W(1)][7.6%][w=5125KiB/s][w=5 IOPS][eta 12m:21s]  
TEST: (groupid=0, jobs=1): err= 0: pid=21606: Sun Mar 23 15:53:24 2025
  write: IOPS=13, BW=13.4MiB/s (14.1MB/s)(810MiB/60247msec); 0 zone resets
    slat (usec): min=105, max=5898.0k, avg=38547.12, stdev=243584.33
    clat (msec): min=24, max=12437, avg=2322.20, stdev=2525.49
     lat (msec): min=52, max=12437, avg=2360.74, stdev=2525.36
    clat percentiles (msec):
     |  1.00th=[  115],  5.00th=[  498], 10.00th=[  894], 20.00th=[  995],
     | 30.00th=[ 1003], 40.00th=[ 1011], 50.00th=[ 1045], 60.00th=[ 1062],
     | 70.00th=[ 2299], 80.00th=[ 3373], 90.00th=[ 5805], 95.00th=[ 8490],
     | 99.00th=[11208], 99.50th=[11342], 99.90th=[12416], 99.95th=[12416],
     | 99.99th=[12416]
   bw (  KiB/s): min= 4096, max=94208, per=100.00%, avg=20965.05, stdev=18059.19, samples=76
   iops        : min=    4, max=   92, avg=20.47, stdev=17.64, samples=76
  lat (msec)   : 50=0.37%, 100=0.49%, 250=1.36%, 500=2.84%, 750=2.72%
  lat (msec)   : 1000=21.23%, 2000=38.27%, >=2000=32.72%
  cpu          : usr=0.11%, sys=0.29%, ctx=1879, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,810,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=13.4MiB/s (14.1MB/s), 13.4MiB/s-13.4MiB/s (14.1MB/s-14.1MB/s), io=810MiB (849MB), run=60247-60247msec

Disk stats (read/write):
  mmcblk0: ios=0/1824, sectors=0/1760848, merge=0/129, ticks=0/2678795, in_queue=2678795, util=99.30%

Read benchmark:

$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [R(1)][11.7%][r=86.1MiB/s][r=86 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [R(1)][23.2%][r=86.0MiB/s][r=86 IOPS][eta 00m:43s]  
Jobs: 1 (f=1): [R(1)][40.4%][r=1295MiB/s][r=1295 IOPS][eta 00m:28s]
Jobs: 1 (f=1): [R(1)][45.5%][r=86.0MiB/s][r=86 IOPS][eta 00m:30s] 
Jobs: 1 (f=1): [R(1)][63.3%][r=85.0MiB/s][r=85 IOPS][eta 00m:18s]  
Jobs: 1 (f=1): [R(1)][67.3%][r=86.0MiB/s][r=86 IOPS][eta 00m:18s] 
Jobs: 1 (f=1): [R(1)][84.3%][r=87.1MiB/s][r=87 IOPS][eta 00m:08s]   
Jobs: 1 (f=1): [R(1)][88.7%][r=83.0MiB/s][r=83 IOPS][eta 00m:06s] 
TEST: (groupid=0, jobs=1): err= 0: pid=21658: Sun Mar 23 15:56:06 2025
  read: IOPS=216, BW=216MiB/s (227MB/s)(10.0GiB/47311msec)
    slat (usec): min=31, max=60958, avg=2279.51, stdev=4576.92
    clat (nsec): min=944, max=753402k, avg=142677458.17, stdev=179415831.84
     lat (usec): min=70, max=753452, avg=144956.96, stdev=181951.15
    clat percentiles (nsec):
     |  1.00th=[     1160],  5.00th=[     1416], 10.00th=[    84480],
     | 20.00th=[   199680], 30.00th=[   313344], 40.00th=[   444416],
     | 50.00th=[   602112], 60.00th=[   864256], 70.00th=[358612992],
     | 80.00th=[362807296], 90.00th=[371195904], 95.00th=[371195904],
     | 99.00th=[425721856], 99.50th=[541065216], 99.90th=[633339904],
     | 99.95th=[692060160], 99.99th=[742391808]
   bw (  KiB/s): min=22528, max=2609152, per=97.27%, avg=215584.68, stdev=496355.55, samples=94
   iops        : min=   22, max= 2548, avg=210.53, stdev=484.72, samples=94
  lat (nsec)   : 1000=0.19%
  lat (usec)   : 2=5.36%, 4=0.15%, 10=0.02%, 100=6.04%, 250=13.22%
  lat (usec)   : 500=18.17%, 750=15.24%, 1000=1.63%
  lat (msec)   : 10=0.01%, 20=0.08%, 50=0.24%, 100=0.39%, 250=1.62%
  lat (msec)   : 500=36.96%, 750=0.65%, 1000=0.01%
  cpu          : usr=0.05%, sys=2.36%, ctx=5946, majf=0, minf=8205
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=216MiB/s (227MB/s), 216MiB/s-216MiB/s (227MB/s-227MB/s), io=10.0GiB (10.7GB), run=47311-47311msec

Disk stats (read/write):
  mmcblk0: ios=8071/21, sectors=8264704/360, merge=0/18, ticks=1558793/3924, in_queue=1562718, util=99.21%

The Lenovo has a 128GB SSD.

Write benchmark:

$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][11.7%][w=84.1MiB/s][w=84 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [W(1)][21.7%][w=29.0MiB/s][w=29 IOPS][eta 00m:47s] 
Jobs: 1 (f=1): [W(1)][31.7%][w=97.0MiB/s][w=97 IOPS][eta 00m:41s] 
Jobs: 1 (f=1): [W(1)][41.7%][w=103MiB/s][w=103 IOPS][eta 00m:35s] 
Jobs: 1 (f=1): [W(1)][51.7%][w=103MiB/s][w=103 IOPS][eta 00m:29s] 
Jobs: 1 (f=1): [W(1)][61.7%][w=88.1MiB/s][w=88 IOPS][eta 00m:23s] 
Jobs: 1 (f=1): [W(1)][71.7%][w=68.1MiB/s][w=68 IOPS][eta 00m:17s] 
Jobs: 1 (f=1): [W(1)][81.7%][w=74.1MiB/s][w=74 IOPS][eta 00m:11s] 
Jobs: 1 (f=1): [W(1)][91.7%][w=107MiB/s][w=107 IOPS][eta 00m:05s] 
Jobs: 1 (f=1): [W(1)][100.0%][w=64.0MiB/s][w=64 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=29425: Sun Mar 23 13:55:21 2025
  write: IOPS=77, BW=77.2MiB/s (80.9MB/s)(4664MiB/60441msec); 0 zone resets
    slat (usec): min=59, max=787999, avg=1247.60, stdev=22190.03
    clat (msec): min=2, max=2499, avg=413.24, stdev=378.96
     lat (msec): min=3, max=2499, avg=414.49, stdev=378.77
    clat percentiles (msec):
     |  1.00th=[   22],  5.00th=[   83], 10.00th=[  136], 20.00th=[  197],
     | 30.00th=[  234], 40.00th=[  279], 50.00th=[  313], 60.00th=[  338],
     | 70.00th=[  397], 80.00th=[  485], 90.00th=[  818], 95.00th=[ 1301],
     | 99.00th=[ 2089], 99.50th=[ 2232], 99.90th=[ 2433], 99.95th=[ 2467],
     | 99.99th=[ 2500]
   bw (  KiB/s): min= 4087, max=206435, per=100.00%, avg=79625.71, stdev=37290.76, samples=119
   iops        : min=    3, max=  201, avg=77.13, stdev=36.49, samples=119
  lat (msec)   : 4=0.04%, 10=0.34%, 20=0.54%, 50=1.14%, 100=4.59%
  lat (msec)   : 250=26.95%, 500=47.32%, 750=8.30%, 1000=2.44%, 2000=7.14%
  lat (msec)   : >=2000=1.20%
  cpu          : usr=0.81%, sys=1.24%, ctx=4246, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=98.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4664,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=4664MiB (4891MB), run=60441-60441msec

Disk stats (read/write):
  sda: ios=0/5385, sectors=0/9655456, merge=0/172, ticks=0/2255458, in_queue=2263163, util=96.15%

Read benchmark:

$ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [R(1)][31.8%][r=456MiB/s][r=456 IOPS][eta 00m:15s]
Jobs: 1 (f=1): [R(1)][59.1%][r=458MiB/s][r=458 IOPS][eta 00m:09s] 
Jobs: 1 (f=1): [R(1)][86.4%][r=456MiB/s][r=456 IOPS][eta 00m:03s] 
Jobs: 1 (f=1): [R(1)][100.0%][r=460MiB/s][r=460 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=29469: Sun Mar 23 13:56:57 2025
  read: IOPS=458, BW=459MiB/s (481MB/s)(10.0GiB/22333msec)
    slat (usec): min=44, max=1258, avg=171.06, stdev=31.95
    clat (msec): min=4, max=165, avg=69.52, stdev=24.34
     lat (msec): min=4, max=165, avg=69.69, stdev=24.34
    clat percentiles (msec):
     |  1.00th=[   26],  5.00th=[   30], 10.00th=[   32], 20.00th=[   35],
     | 30.00th=[   69], 40.00th=[   77], 50.00th=[   80], 60.00th=[   83],
     | 70.00th=[   85], 80.00th=[   89], 90.00th=[   93], 95.00th=[   96],
     | 99.00th=[  105], 99.50th=[  120], 99.90th=[  140], 99.95th=[  146],
     | 99.99th=[  165]
   bw (  KiB/s): min=456704, max=479232, per=100.00%, avg=470091.95, stdev=4267.86, samples=44
   iops        : min=  446, max=  468, avg=459.07, stdev= 4.17, samples=44
  lat (msec)   : 10=0.14%, 20=0.23%, 50=26.94%, 100=70.65%, 250=2.03%
  cpu          : usr=1.48%, sys=9.19%, ctx=9879, majf=0, minf=8204
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=459MiB/s (481MB/s), 459MiB/s-459MiB/s (481MB/s-481MB/s), io=10.0GiB (10.7GB), run=22333-22333msec

Disk stats (read/write):
  sda: ios=15878/4, sectors=20791296/40, merge=215/1, ticks=1064517/260, in_queue=1064863, util=90.90%

The important stuff:

  • Pi 5 write: 13.4MiB/s, read: 216MiB/s
  • ThinkCentre write: 77.2MiB/s, read: 459MiB/s
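For reference, the ratios from the two fio runs, computed from the MiB/s figures above:

```shell
# Write and read speedups of the mini PC's SSD over the Pi's micro-SD
awk 'BEGIN { printf "write: %.1fx  read: %.1fx\n", 77.2 / 13.4, 459 / 216 }'
```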

Conclusion: the mini PC's write speed is ~5.8x higher and its read speed ~2x higher. That was to be expected, given the Pi 5 runs off a micro-SD card.

Note: The ThinkCentre also supports NVMe M.2 storage, which would widen the gap further. This is something I ended up doing for the Longhorn volumes.

Power draw

Based on my (admittedly inaccurate) measurements, the Pi 5 draws ~2W at idle while the mini PC draws ~3.5W at idle, both running Ubuntu Server. This is NOT the power draw under load, and I didn't install k8s on both to measure that, so take these numbers with a grain of salt.
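A back-of-envelope calculation of what those idle figures mean over a year for a 3-node cluster (idle only; consumption under load would be higher for both):

```shell
# Yearly idle energy for three nodes at the measured draws
awk 'BEGIN {
  pi = 3 * 2.0 * 24 * 365 / 1000    # kWh/year for three Pi 5s at ~2 W
  mp = 3 * 3.5 * 24 * 365 / 1000    # kWh/year for three mini PCs at ~3.5 W
  printf "Pi cluster: ~%.0f kWh/yr, mini PC cluster: ~%.0f kWh/yr\n", pi, mp
}'
```

At typical residential electricity prices the difference is a handful of dollars per year, so it wasn't a deciding factor for me.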

Conclusion

So the TLDR is: the Raspberry Pi 5 has a ~3x faster CPU, while the mini PC has ~5.8x faster storage writes, ~2x faster storage reads, and ~1.5x faster RAM (with twice the capacity).

What I will say is that you should think about what you plan to run on the cluster. Will you have CPU-intensive workloads? For me the answer is no; I'm basically the only user of the things I host. Storage performance and reliability are much more important to me, so I ended up using mini PCs.

Hope this helps, have fun clickity-clacking.
