enrichman 1 day ago

Hi everyone! I’m one of the maintainers of K3k at SUSE.

It’s really exciting to see this on the front page. The project actually started during a SUSE Hackweek by my colleague Hussein. It was initially envisioned as a "Kubernetes version of k3d," but it evolved into something more ambitious and eventually became a real product. We’ve always been big believers in the power of open source. For the current default "shared" mode, we even experimented with Virtual Kubelet, another CNCF project, during our development process.

I’ll be hanging around the thread today, so if you have any questions about the history, the tech stack, or where we're headed next, feel free to ask!

  • nickgerace 18 hours ago

    Great project! Always wondered if/when it would be done at Rancher. Really cool, can’t wait to try it.

teleforce 23 hours ago

Why stop at K3k? Shouldn't it be named K3k3k, to capture the truly recursive and nested nature of the container-in-container system?

Joking aside, I think this can be a great tool in the Kubernetes and container ecosystem.

One of the sibling comments claimed that this is a very niche application and that 99.9% of deployments will never use this nested feature. I beg to differ.

Apart from testing with container-in-container arrangements, it can be a killer application for realistic simulation of network elements, as has been done in many network simulators, including ComNetsEmu and others [1],[2],[3],[4].

[1] Chapter 13 - ComNetsEmu: a lightweight emulator:

https://www.sciencedirect.com/science/chapter/edited-volume/...

[2] ViPMesh: A virtual prototyping framework for IEEE 802.11s wireless mesh networks:

https://ieeexplore.ieee.org/document/7763263

[3] NestedNet: A Container-based Prototyping Tool for Hierarchical Software Defined Networks:

https://ieeexplore.ieee.org/document/9244858

[4] Network Virtualization and Emulation using Docker, OpenvSwitch and Mininet-based Link Emulation:

https://scholarworks.umass.edu/masters_theses_2/985/

matt123456789 1 day ago

This is, if I had to guess, a monument to a small team's stubborn insistence that such a thing could be done at all. If I can hope for a reward for them, may it be that they are allowed to hand off maintaining it to another team.

randomtoast 1 day ago

This type of approach carries a significantly higher operational risk compared to operating multiple Kubernetes clusters on separate VMs or physical hardware. If you eventually update the main Kubernetes cluster that manages the virtual clusters and something goes wrong, you could potentially bring down your entire fleet of Kubernetes clusters all at once.

  • lateral_cloud 1 day ago

    I don't think this is intended for production

    • rootnod3 20 hours ago

      Then why would SUSE spend money on it?

ssousa666 21 hours ago

My team runs several HarvesterHCI/RKE2 clusters, edge deployments of our validation, simulation and fleet management tools for autonomous vehicles. The Rancher ecosystem has really been a godsend for us.

Excited to experiment with k3k, but worried that I won't have the language to accurately describe the third layer of kubernetes in the stack. Host cluster -> Guest Host Cluster -> Guest Cluster? Host Cluster -> Guest Cluster -> Guest Guest Cluster?

rjzzleep 1 day ago

Do Rancher side products generally make it into a stable state such that you would want to run mission-critical systems on?

  • sofixa 1 day ago

    RKE (their Kubernetes deployment and management platform, mostly for various flavours of self-managed environments) is pretty popular with the self-managed crowd that needs something to manage their on-prem Kubernetes clusters.

    • rjzzleep 1 day ago

      That's why I wrote Rancher side products.

  • V99 1 day ago

    (Former employee) They tend to either get enough traction very quickly and be supported for years, or not and be abandoned in weeks/months.

  • enrichman 15 hours ago

    This is not a side product; it's currently GA and part of the Rancher Prime offering. :)

weitzj 1 day ago

I don’t understand how they are separating security in the virtual mode as they only mention pods. It seems every workload still shares the underlying node, even when in virtual mode. Take for example the OCI cache on the nodes. What about cache poisoning?

  • ithkuil 1 day ago

    Aren't OCI caches content addressed?

    • weitzj 1 day ago

      I was thinking that if people were to use an image…:$my_tag on the host cluster, and some rogue pod on the child cluster (but on the same underlying physical nodes) somehow overwrote the locally cached :$my_tag, you could do something on the parent cluster.

      But I don’t fully understand what you meant by content addressed :)

      Maybe one has to ensure in the host cluster that the image pull policy is set to Always, or that all image references are pinned by digest (sha256) rather than by tag.
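      Pinning by digest along those lines might look like this (a sketch; the image name and digest are made up):

      ```yaml
      # Hypothetical pod spec: referencing the image by digest instead of a
      # mutable tag, so a poisoned cache entry for a tag can't be swapped in.
      apiVersion: v1
      kind: Pod
      metadata:
        name: pinned-example
      spec:
        containers:
          - name: app
            # Digest references are content-addressed and immutable
            image: registry.example.com/app@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
            # Alternatively, force a registry check on every container start
            imagePullPolicy: Always
      ```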

  • enrichman 1 day ago

    In virtual mode, the only pods running directly on the host are the K3s servers and agents. All "virtual cluster pods" run within these components, meaning they do not appear as individual pods on the host cluster.

    The only trade-off is that K3s currently requires privileged mode to operate. We are actively exploring ways to address this limitation and improve security, such as implementing user namespaces or microVMs.
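    For reference, upstream Kubernetes already has a pod-level opt-in for user namespaces (beta in recent releases, and it needs container runtime support); a sketch of what that looks like, with an illustrative image name:

    ```yaml
    # Illustrative: asking Kubernetes to run this pod in a user namespace,
    # so in-container UIDs map to unprivileged UIDs on the host.
    apiVersion: v1
    kind: Pod
    metadata:
      name: userns-example
    spec:
      hostUsers: false   # opt out of the host user namespace
      containers:
        - name: app
          image: registry.example.com/app:latest
    ```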

    • weitzj 1 day ago

      Thank you for your feedback.

      I understood that from the host cluster perspective you won’t see the child cluster pods. But what about the node perspective?

      Can you have, say, a host cluster running on host nodes, where that cluster controls spawning separate physical nodes that contain the child cluster (API server) plus its workload pods?

      • enrichman 1 day ago

        As I understand it, the virtual cluster pods are treated as standard workloads by the host. This means if you scale the nodes up or down, they will be rescheduled accordingly. You can currently use node selectors to manage this behavior, though we are developing a more flexible approach using affinity rules.
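        Roughly, the node selector lives on the Cluster resource; something like the following (the exact API group/version and field names here are illustrative, so check the CRD reference before copying):

        ```yaml
        # Illustrative k3k Cluster manifest: constraining where the virtual
        # cluster's K3s pods get scheduled on the host.
        apiVersion: k3k.io/v1alpha1
        kind: Cluster
        metadata:
          name: dev-cluster
        spec:
          mode: virtual      # full K3s server/agent running inside host pods
          servers: 1
          agents: 2
          nodeSelector:
            node-role.example.com/sandbox: "true"
        ```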

ohnei 1 day ago

It doesn't seem like it sits at a deep enough layer to be used for testing updates to Kubernetes and CRDs against a cluster that isn't yet updated?

nonameiguess 1 day ago

Hacker News sure does love posting links to random Github repos with no context for why it was posted, then a bunch of comments come along and basically ask why.

Since I do have context: the original Rancher Labs CTO created k3s, one of the earliest severely stripped-down versions of Kubernetes, which bundles all of the required executables into a single multi-call binary in order to run Kubernetes on a Raspberry Pi. Along the lines of kind, k3d was released to run k3s in Docker containers instead of full Linux hosts. The main use case is testing. We used it extensively in the early days of Air Force and IC cloud migrations that insisted all systems be rehosted in Kubernetes, so developers could have local targets to work with. Rancher eventually rebuilt its Kubernetes engine when Docker fell out of favor and based rke2 on k3s, but with the Kubernetes components as static pods instead of embedded multi-call binaries, and with kubelet and containerd extracted from an embedded virtual filesystem to the host when rke2 is first run.

When KubeVirt came out, Rancher also released an HCI product that uses it, Harvester, running on top of rke2 and Rancher's storage project Longhorn. This runs a full virtual machine manager with virtualized networking and storage, a la something like ESXI, vSAN, and vSphere, with Multus and the bridge CNI plugin providing the networking (it now has KubeOVN as well).

Harvester relies on being imported into and managed by Rancher to get things like SSO, Rancher's multi-cluster RBAC, and the node provisioners Harvester needs to run guest clusters. A whole lot of customers migrating off of VMware since the Broadcom acquisition want all of that, but without necessarily running an external Rancher. Early on, Harvester offered an experimental vCluster addon that created a guest cluster with Rancher installed on it, which then automatically managed Harvester.

This had a lot of problems. I'm not going to rehash them because I don't want to come across as bashing vCluster, but it was not a tenable long-term option, and it crashed hard for most who tried to use it. Since Rancher already had k3d, it was a pretty natural step to create their own virtualized Kubernetes that runs in Kubernetes by adapting k3d into k3k, which runs k3s in Kubernetes rather than in Docker. Now you can get a guest cluster to install Rancher onto, with the full suite of Rancher features and a much better experience than the bare Harvester UI, without needing to run full VMs.

Why not just install Rancher directly onto the same rke2 cluster that is running Harvester itself? Because it already has one, but that instance was considered an implementation detail, used by developers to bootstrap without duplicating work that was already done, and never meant to be exposed to users. If you try to install a second Rancher to actually use, it will conflict with a whole bunch of resources that already exist, and it won't work.

It's a tangled mess of confusing layers, but that's the world we live in. It's why we still have IPv4, VLAN, VXLAN, virtual terminals, discretionary access control for Linux. We build on top of what is already there instead of rebuilding from scratch in a saner way. This isn't just how software works. It's why city designs rarely make sense. It's why life itself has vestigial anti-features. Cruft rarely disappears. It just gets buried underneath whatever comes next.

2ndorderthought 1 day ago

Can someone explain what this even means? Explain it like I am a software engineer with 20 years experience who has not yet found a strong use case for running kubernetes outside of hand holding cloud provider options

  • geoffbp 1 day ago

    Send the link to AI and ask :)

    • 2ndorderthought 1 day ago

      I have found I learn more when I talk to people who are really interested in a topic.

  • mystifyingpoi 1 day ago

    This is extremely niche. 99.9% of Kubernetes deployments will never need such nesting. It could be useful for testing tooling (I guess maybe operators?) without recreating the "top-level" cluster all the time.

    Also it's a fun idea. Sandbox in a sandbox.

    • dboreham 1 day ago

      I've seen many bugs get to production for the lack of such testing.

      • never_inline 17 hours ago

        Or, you know, you can architect for testability from the beginning, so that multiple branches/instances of the same application can run in the same cluster, in different namespaces.

  • phrotoma 1 day ago

    K8s encourages thinking about workloads as "cattle not pets". App running in K8s falls over? Blow it away and let K8s recreate it, etc.

    However, clusters themselves often become the new pets. Many orgs never reach a level of operational maturity where they can blow away and recreate whole clusters without downtime and toil.

    A meta-pattern has emerged where higher-order tooling manages a whole fleet of clusters. This is an implementation of that meta-pattern, one which uses K8s itself as the higher-order tool to manage other clusters.

    It's not a new idea, just a new implementation of the pattern.

    • 2ndorderthought 1 day ago

      Thank you. Wow I had no idea this was a problem. Seems kind of nightmare territory. In a weird way it makes me respect elixir/erlang even more. It's not the exact same problem obviously but really had me thinking about beam etc

      • dboreham 1 day ago

        Imagine you are the developer of k8s-hosted systems. Now imagine you want to test your systems in a repeatable fashion. You'd need some way to spin up a test k8s cluster, deploy your application, and subject it to a test workload. That's simple and easy if you only need one physical cluster node: you can use k3s or perhaps kind. But if you want multiple physical nodes, not so easy. This solves that problem by leveraging an existing k8s cluster, which is a standard thing easily obtained. You might now ask why not just use that cluster directly (why the turducken?). Answer: cost, time, hassle, and wanting a different version of k8s than the hosting provider gives you.

bloppe 1 day ago

What does k3k stand for? Can we just put whatever number we want between 2 letters now?

  • BurpyDave 1 day ago

    I suspect it’s ‘kubernetes in kubernetes’

  • stingraycharles 1 day ago

    I suspect it's a play on another kubernetes variant, `k3s` ?

  • olblak 1 day ago

    Disclosure: I work for SUSE on Rancher.

    It's Kubernetes in Kubernetes, and a reference to k3s, which is also a project we contribute heavily to at SUSE.

  • nextaccountic 9 hours ago

    https://github.com/k3s-io/k3s#whats-with-the-name

    > What's with the name?

    > We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10 letter word stylized as k8s. So something half as big as Kubernetes would be a 5 letter word stylized as K3s. A '3' is also an '8' cut in half vertically. There is neither a long-form of K3s nor official pronunciation.

    k3k is a play on k3s

    but k3s is itself a play on words (k3s is supposed to be half the size of k8s, which stands for kubernetes)

rootnod3 20 hours ago

Cool, one more layer of indirection and abstraction. May I ask why? I fail to see the point, but I might just be grumpy.

madduci 1 day ago

Nice, now we need K3Kind

freakynit 1 day ago

Can we go deeper than two level? (inception vibes..)