About six months ago, I learned about a new project called Harvester, our open source hyperconverged infrastructure (HCI) software built using Kubernetes, libvirt, KubeVirt, Longhorn and MinIO.

At first, the idea of managing VMs via Kubernetes did not seem very exciting. "Why would I not just containerize the workloads or orchestrate the VMs natively via KVM, Xen or my hypervisor of choice?" That approach makes a lot of sense except for one thing: the edge. At the edge, Harvester provides a solution for a nightmarish technical challenge: when one host must run both dreaded Windows legacy applications and modern containerized microservices.

In this blog and the following tutorials, I'll map out an edge stack, then set up and install Harvester. Later, I'll use Fleet to orchestrate the entire host, including OS and Kubernetes updates. We'll then deploy the whole thing with a bit of Terraform, completing the solution.

At the edge, we often lack necessities such as a cloud or even spare hardware. Running Windows VMs alongside your Linux containers provides much-needed flexibility, while using the Kubernetes API to manage the entire deployment brings welcome simplicity and control.

At the host, we start with SLES and Ubuntu. With K3s and Harvester (in app mode), you can maximize your edge node's utility by allowing it to run Linux containers and Windows VMs, all the way down to the host OS, orchestrated via Rancher's Continuous Delivery (Fleet) GitOps deployment tool.
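To give a flavor of the Fleet-based GitOps flow mentioned above, here is a minimal sketch of a Fleet `GitRepo` resource that tells Rancher's Continuous Delivery to sync manifests from a Git repository onto target clusters. The repository URL, path, and names are hypothetical placeholders, not part of the original post:

```yaml
# Hypothetical example: point Fleet at a Git repo holding the edge stack's manifests.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-stack          # placeholder name
  namespace: fleet-default  # default namespace for downstream clusters in Rancher
spec:
  repo: https://github.com/example/edge-stack   # placeholder repository
  branch: main
  paths:
    - manifests             # placeholder path containing Kubernetes YAML or Helm charts
  targets:
    - name: edge-nodes
      clusterSelector: {}   # empty selector matches all clusters in the workspace
```

Applied with `kubectl apply -f` against the Rancher management cluster, Fleet then watches the repo and reconciles the selected clusters whenever commits land, which is the mechanism the later tutorials lean on for host and Kubernetes updates.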