<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Crypt0s.com</title><link href="https://crypt0s.com/" rel="alternate"/><link href="https://crypt0s.com/feeds/all.atom.xml" rel="self"/><id>https://crypt0s.com/</id><updated>2026-04-12T12:00:00+02:00</updated><entry><title>My First Post</title><link href="https://crypt0s.com/my-first-post.html" rel="alternate"/><published>2026-04-12T12:00:00+02:00</published><updated>2026-04-12T12:00:00+02:00</updated><author><name>Crypt0s</name></author><id>tag:crypt0s.com,2026-04-12:/my-first-post.html</id><summary type="html">&lt;p&gt;I've migrated this blog into the mono-repo that controls my homelab.  The repo runs GitHub Actions that automatically render modified blog posts with Pelican, a Python-based static site generator.&lt;/p&gt;
&lt;p&gt;In order to stand out a bit, I created a custom …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've migrated this blog into the mono-repo that controls my homelab.  The repo runs GitHub Actions that automatically render modified blog posts with Pelican, a Python-based static site generator.&lt;/p&gt;
&lt;p&gt;In order to stand out a bit, I created a custom Pelican theme using Claude Code so the blog looks a bit better.  My days of dealing with CSS are long behind me and I'm not a great designer, so I'm happy to use Claude for these types of tasks.&lt;/p&gt;
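&lt;p&gt;For illustration, the render-and-publish step of such a pipeline can be sketched as a single GitHub Actions workflow.  The repo paths, output directory, and deploy action below are assumptions for the sketch, not my actual configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# .github/workflows/blog.yml -- hypothetical layout
name: publish-blog
on:
  push:
    paths:
      - "blog/content/**"   # only rebuild when posts change (assumed path)
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Install Pelican with markdown support, then render the site
      - run: pip install "pelican[markdown]"
      - run: pelican blog/content -o blog/output -s blog/publishconf.py
      # Publish the rendered HTML (deploy target is an assumption)
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: blog/output
&lt;/code&gt;&lt;/pre&gt;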
&lt;p&gt;With this better workflow, I hope to be more motivated to create blog posts about my homelab this year!&lt;/p&gt;</content><category term="Misc"/></entry><entry><title>My Home Lab v1.0/v1.5</title><link href="https://crypt0s.com/my-home-lab-v10v15.html" rel="alternate"/><published>2026-04-12T12:00:00+02:00</published><updated>2026-04-12T12:00:00+02:00</updated><author><name>Crypt0s</name></author><id>tag:crypt0s.com,2026-04-12:/my-home-lab-v10v15.html</id><summary type="html">&lt;p&gt;In 2024, I was generally ignorant of Kubernetes from a systems administration standpoint.  Several years ago I resolved to increase my understanding of K8s by deploying my own bare-metal cluster onto my homelab from scratch (meaning: without kubernetes-in-a-box solutions like k3s, Docker Desktop, etc.).  In this blog post, I will …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In 2024, I was generally ignorant of Kubernetes from a systems administration standpoint.  Several years ago I resolved to increase my understanding of K8s by deploying my own bare-metal cluster onto my homelab from scratch (meaning: without kubernetes-in-a-box solutions like k3s, Docker Desktop, etc.).  In this blog post, I will share the lessons that I have learned over the past few years and demonstrate how seemingly minor decisions made when first starting out can have outsized impacts on how your cluster works.&lt;/p&gt;
&lt;h3&gt;Initial Hardware Overview&lt;/h3&gt;
&lt;p&gt;My first crack at administering K8s came when I took the plunge and converted all the VMs and applications on my Proxmox server into K8s workloads.  At the time, my server setup was:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OS&lt;/td&gt;
&lt;td&gt;Proxmox&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;Intel i5-10400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;32 GB DDR4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Primary Storage&lt;/td&gt;
&lt;td&gt;1 TB PCIe 3.0 NVMe&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Secondary Storage&lt;/td&gt;
&lt;td&gt;4x 2 TB WD Red NAS HDDs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;K8s Cluster v0.1&lt;/h3&gt;
&lt;p&gt;The Proxmox server was turned into a 1-node K8s cluster running the following stack:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Container Runtime&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/opencontainers/runc"&gt;runc&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container Network&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/tigera/operator"&gt;Calico&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container Storage&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner"&gt;local-provisioner&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Load Balancing&lt;/td&gt;
&lt;td&gt;&lt;a href="https://metallb.io/installation/"&gt;MetalLB&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ingress&lt;/td&gt;
&lt;td&gt;&lt;a href="https://docs.nginx.com/nginx-ingress-controller/install/helm"&gt;ingress-nginx&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This, I felt, was a fairly simple yet complete approach.  While I had no "real" container storage interface, a one-node Kubernetes cluster has little need for a more complex storage solution, and I was happy just running MetalLB-backed services that exposed things to my local network over BGP with my &lt;a href="https://store.ui.com/us/en/products/er-12"&gt;Ubiquiti EdgeRouter 12&lt;/a&gt;.&lt;/p&gt;
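&lt;p&gt;As a sketch of that MetalLB setup: with the CRD-based configuration (MetalLB v0.13+), BGP peering with the router plus an address pool look roughly like the following.  The ASNs and addresses are illustrative, not my actual network:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# BGP peer pointing at the upstream router (addresses/ASNs illustrative)
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: edgerouter
  namespace: metallb-system
spec:
  myASN: 64512
  peerASN: 64513
  peerAddress: 10.0.0.1
---
# Pool of addresses MetalLB may hand out to LoadBalancer Services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services
  namespace: metallb-system
spec:
  addresses:
    - 10.0.10.0/24
---
# Advertise that pool to the BGP peer above
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: services
  namespace: metallb-system
spec:
  ipAddressPools:
    - services
&lt;/code&gt;&lt;/pre&gt;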
&lt;p&gt;While this was a serviceable way of running my very first workloads and services inside of Kubernetes, I quickly outgrew it: a priority of mine was to hew as closely as reasonably possible to the technology stack at CoreWeave.&lt;/p&gt;
&lt;p&gt;I purchased several additional nodes between June and December and began constructing what would end up being a five-node Kubernetes cluster.&lt;/p&gt;
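&lt;p&gt;Assuming a kubeadm-based install, which a "from scratch" bare-metal cluster typically implies, growing from one node to five is mostly a matter of joining each new machine.  A rough sketch (the address, token, and hash are illustrative placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# On the existing control-plane node: generate a fresh join command
kubeadm token create --print-join-command

# On each new node, run the printed command, e.g.:
sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0123abcd...
&lt;/code&gt;&lt;/pre&gt;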
&lt;h3&gt;Getting to Infrastructure as Code&lt;/h3&gt;
&lt;p&gt;(future)&lt;/p&gt;
&lt;h3&gt;40Gbps of &lt;del&gt;InfiniBand&lt;/del&gt; Ethernet&lt;/h3&gt;
&lt;p&gt;Towards the back half of 2024 I began purchasing Mellanox ConnectX-3 cards, specifically the MCX354A-FCBT.  They are available cheaply: ~$15 for the card and another ~$15 for a Direct Attach Copper (DAC) cable gets you a functional 40Gbps Ethernet card, provided you have a managed InfiniBand switch like the Mellanox SX3036 (~$160).  "Simply":&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;add the Ethernet gateway license to the SX3036&lt;/li&gt;
&lt;li&gt;switch the SX3036 host channel ports into Ethernet mode&lt;/li&gt;
&lt;li&gt;purchase a breakout cable that splits the 40Gbps port into four 10Gbps channels&lt;/li&gt;
&lt;li&gt;connect it to your 10Gbps router&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and you'll have 40Gbps Ethernet for less than the aggregate cost of converting several hosts to 10Gbps Ethernet equipment.&lt;/p&gt;
&lt;h3&gt;Adding GPU Nodes&lt;/h3&gt;
&lt;p&gt;(future)&lt;/p&gt;
&lt;h3&gt;Hyperconverged Ceph&lt;/h3&gt;
&lt;p&gt;I strongly recommend that you not use Rook to deploy a Ceph cluster on the same infrastructure you are using to run Kubernetes.  Kubernetes nodes are (supposed to be) fungible in a way that Ceph does not handle well unless you're operating at scales far beyond most homelab budgets.&lt;/p&gt;
&lt;p&gt;My stability experience with Ceph was rather miserable: a host would go down, perhaps from out-of-memory issues caused by a poorly coded or oversized workload.  If the outage was prolonged, Ceph would re-balance data across the remaining disks; then the host would come back, and the cluster would re-balance itself all over again.  In an early homelab cluster with non-homogeneous equipment, this back-and-forth led to overworked disks on consumer-grade systems.  Combined with early failures of older equipment, it caused even more stability issues, which eventually led me to migrate off of Ceph as my Container Storage Interface.  Services on my network were down too regularly, and maintaining Ceph was distracting from my core goal of learning practical Kubernetes administration.&lt;/p&gt;
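&lt;p&gt;One operational note for anyone repeating this: for planned downtime, Ceph can be told not to re-balance while a host is briefly away.  It wouldn't have saved me from genuine crashes, but it avoids the re-balance/re-re-balance churn for deliberate reboots:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Before taking a node down: stop Ceph from marking its OSDs "out"
ceph osd set noout

# ...perform the maintenance and bring the node back...

# Re-enable normal recovery behavior afterwards
ceph osd unset noout
&lt;/code&gt;&lt;/pre&gt;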
&lt;h3&gt;Migrating to NFS&lt;/h3&gt;
&lt;p&gt;When I decided to leave Ceph, I was leaving a rather featureful ecosystem where adding disks was as easy as finding an open bay and SATA port on one of five desktops.  Adding disks scaled IOPS as well as storage, and the only growth boundary was my power budget, since surplus desktops and hardware were cheap into late 2024 and early 2025.&lt;/p&gt;
&lt;p&gt;In leaving Ceph, I could have gone to another clustered storage system, but as mentioned: hyperconvergence in the homelab is more of a liability than an asset.  I resolved to migrate to either a single non-Kubernetes storage mule like a disk shelf, or an all-in-one solution.  I eventually chose the latter, opting for a Frankenstein's-monster build of an old Supermicro disk shelf with a newer Supermicro motherboard.&lt;/p&gt;
&lt;p&gt;Interestingly, Supermicro chassis are ATX standards-compliant and the standoff layout is easily user-changeable.  I was able to place a dual-CPU, 256 GB DDR4 system in the old chassis, reusing the same power supplies, with zero pain.&lt;/p&gt;
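&lt;p&gt;On the Kubernetes side, the endgame of this NFS migration largely comes down to a single StorageClass for the upstream &lt;a href="https://github.com/kubernetes-csi/csi-driver-nfs"&gt;csi-driver-nfs&lt;/a&gt;.  A sketch, where the server address, export path, and mount options are illustrative rather than my actual values:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-zfs
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.20   # illustrative NFS server address
  share: /tank/k8s    # illustrative ZFS-backed export
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
&lt;/code&gt;&lt;/pre&gt;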
&lt;p&gt;I installed Proxmox, shoved in some surplus 12 TB SAS drives, made ZFS pools, installed NFS, exported it to the cluster, installed the NFS container storage interface, and was off to the races.  That is, until I had to migrate the existing Persistent Volumes to the new StorageClass.&lt;/p&gt;</content><category term="Homelab"/></entry></feed>