<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>homelab &#8212; Shine&#39;s lair</title>
    <link>https://blog.churanova.eu/tag:homelab</link>
    <description></description>
    <pubDate>Fri, 01 May 2026 11:21:01 +0200</pubDate>
    <item>
      <title>Homelab k8s cluster part 1 - what and why</title>
      <link>https://blog.churanova.eu/homelab-k8s-cluster-part-1-what-and-why</link>
      <description>So, I decided to build a homelab Kubernetes cluster. Reasons? Other than boredom, an ADHD-driven hobby collection and impulse buying, I really wanted to get better at the Ops side of things.</description>
      <content:encoded><![CDATA[<p>So, I decided to build a homelab Kubernetes cluster. Reasons? Other than boredom, an ADHD-driven hobby collection and impulse buying, I really wanted to get better at the Ops side of things.</p>

<p><a href="https://blog.churanova.eu/tag:selfhosting" class="hashtag"><span>#</span><span class="p-category">selfhosting</span></a> <a href="https://blog.churanova.eu/tag:homelab" class="hashtag"><span>#</span><span class="p-category">homelab</span></a> <a href="https://blog.churanova.eu/tag:kubernetes" class="hashtag"><span>#</span><span class="p-category">kubernetes</span></a></p>



<p><em>This is in no way a guide to correctly deploying such a solution. I&#39;m learning as I go, and my setup might include mistakes I&#39;m not aware of that could make it insecure and dangerous in a production environment. If you spot such mistakes, feel free to correct me.</em></p>

<h2 id="goals">Goals</h2>

<p>The main goal is to build a reliable k8s cluster using industry-standard technologies like RHEL and Ansible, and to set everything up declaratively, so that bootstrapping new nodes or rebuilding the whole cluster is repeatable and requires as little manual interaction as possible.</p>

<p>In the end, I&#39;d like to use this cluster to deploy and run some services we want in our house, and to have an environment to locally test my projects.</p>

<p>I also want to provide infra to my DevOps housemate and friend, who wants a test environment for learning.</p>

<h2 id="hardware">Hardware</h2>

<p>Among the most common hardware choices in this space are refurbished thin clients and similar miniature PCs, and ARM boards like the Raspberry Pi.</p>

<p>Another interesting option is one of the many Intel N100 and similar mini PCs that have popped up lately.</p>

<p>My deciding factors were:</p>

<ul>
<li>high enough performance to run everything comfortably</li>
<li>price</li>
<li>power consumption</li>
</ul>

<p>Refurbished thin clients don&#39;t generally excel in power consumption (although it&#39;s not terrible), and the Raspberry Pi and similar boards lack performance. I ended up going for an Intel N100, specifically the <a href="https://firebat.net/firebat-t8-pro-plus-mini-pc-intel-celeron-n5095-n100-desktop-gaming-computer-8gb-16gb-256gb-512gb-ddr4-ddr5-wifi5-bt4-2/">Firebat T8 Pro Plus</a>. It&#39;s reasonably priced, small, and it has bat in its name. I like bats, bats are cute.
<img src="https://cloud.shine.horse/s/xLtmgd5ojQBLyrj/download/fruit_bat.png" alt="Black fruit bat with bat shaped pacifier">
<a href="https://www.youtube.com/@BatzillatheBat" target="_blank"><em>Source of bat photo: screenshot from a Batzilla the Bat video</em></a></p>

<h2 id="software">Software</h2>

<p><strong>RHEL</strong>
I wanted to use a Linux distribution that is an industry standard and likely to be found in a professional setting. Candidates were NixOS, Debian, SUSE, CentOS, Rocky, Alpine and RHEL.</p>

<p>As much as NixOS is my preferred OS, its steep learning curve would certainly not benefit my friend, and I already run NixOS for my public-facing infrastructure like this blog, so it wouldn&#39;t teach me much more.</p>

<p>I don&#39;t like Debian, and I&#39;m used to RHEL-based ecosystems, so the final decision was between CentOS, Rocky, Alpine and RHEL. And since Red Hat provides a reasonable number of free licenses for purposes like this, I just went with RHEL; it&#39;s a chance to try it.</p>

<p><strong>Ansible</strong>
I don&#39;t think there&#39;s much to say here. Ansible is at home with RHEL, and while I used SaltStack in the past, it and other competing projects are not as widespread anymore. A notable exception is Terraform / OpenTofu, but its strength lies more in infrastructure provisioning; even if I used it, it would likely still be in tandem with Ansible for configuration management.</p>
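
<p>As a taste of what the declarative setup can look like, here is a minimal, hypothetical play for preparing a node; the host group and package list are illustrative, not my actual setup:</p>

<pre><code># hypothetical bootstrap play; group name and packages are illustrative
- name: Prepare cluster nodes
  hosts: k8s_nodes
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.dnf:
        name:
          - container-selinux
          - iscsi-initiator-utils   # open-iscsi tooling, needed by Longhorn
        state: present

    - name: Disable swap for this boot (Kubernetes expects swap off)
      ansible.builtin.command: swapoff -a
      changed_when: false
</code></pre>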

<p><a href="https://docs.rke2.io/" target="_blank"><strong>RKE2</strong></a>
The choice of k8s distribution is the first step I&#39;m taking out of the Red Hat ecosystem. Indeed, OKD would be more at home there, but I wanted a cleaner k8s experience while keeping some of the convenience of a prepared solution.</p>
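
<p>For illustration: an RKE2 server reads its configuration from <code>/etc/rancher/rke2/config.yaml</code>. A minimal sketch for a first server node might look like this; the token and hostname are placeholders, not my real values:</p>

<pre><code># /etc/rancher/rke2/config.yaml on the first server node
# token and tls-san are placeholders
token: REPLACE_WITH_SHARED_SECRET
tls-san:
  - rke2.internal
</code></pre>

<p>Additional servers and agents then join by pointing <code>server:</code> at the first node&#39;s supervisor port, e.g. <code>https://rke2.internal:9345</code>, with the same token.</p>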

<p><a href="https://argo-cd.readthedocs.io" target="_blank"><strong>ArgoCD</strong></a>
From what I found, the two biggest players in the CD space are ArgoCD and Flux. I went mostly with feelings here; ArgoCD just looked cooler, although Flux might overall be a better fit for small deployments like this.</p>
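
<p>The core idea in ArgoCD is that each deployment is described by an <code>Application</code> resource pointing at a Git repo. A hedged sketch (the repo URL and path are made up):</p>

<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/homelab-gitops.git  # hypothetical repo
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
</code></pre>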

<p><a href="https://longhorn.io/" target="_blank"><strong>Longhorn</strong></a>
Distributed block storage for my persistent volume needs. There are other competing solutions in this space, GlusterFS and Ceph being among the well-known ones, but I went with Longhorn for its ease of use and deployment.</p>
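
<p>Once installed, Longhorn registers a <code>longhorn</code> StorageClass, so claiming replicated storage is just an ordinary PersistentVolumeClaim:</p>

<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # backed by Longhorn replicas
  resources:
    requests:
      storage: 5Gi
</code></pre>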

<p><a href="https://www.isc.org/bind/" target="_blank"><strong>Bind 9</strong></a>
We need to be able to easily access deployments on our cluster from our home network. Bind 9 acting as the authoritative server for an <em>.internal</em> TLD, in combination with <a href="https://github.com/kubernetes-sigs/external-dns" target="_blank"><strong>External DNS</strong></a>, solves this.</p>
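
<p>External DNS can push records into Bind 9 through RFC 2136 dynamic updates secured with a TSIG key. A sketch of the relevant <code>named.conf</code> fragment; the key name and secret are placeholders:</p>

<pre><code>// named.conf fragment; key name and secret are placeholders
key "externaldns" {
    algorithm hmac-sha256;
    secret "REPLACE_WITH_BASE64_SECRET";
};

zone "internal" IN {
    type master;
    file "internal.db";
    allow-update { key "externaldns"; };
};
</code></pre>

<p>External DNS is then run with its <code>rfc2136</code> provider (<code>--provider=rfc2136</code>) and matching TSIG flags so it can write records for the <em>.internal</em> zone.</p>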

<p>In my next post, I will go into the details of my Ansible setup to configure and spin up the RKE2 cluster :)</p>
]]></content:encoded>
      <guid>https://blog.churanova.eu/homelab-k8s-cluster-part-1-what-and-why</guid>
      <pubDate>Thu, 13 Mar 2025 11:13:28 +0000</pubDate>
    </item>
  </channel>
</rss>