Reimagining Application Definition in Kubernetes with Custom Resource Definitions
Kubernetes users frequently struggle to get a holistic view of the applications running in their clusters. The ubiquitous kubectl get all command, contrary to its name, provides an incomplete and often misleading overview, omitting critical resources such as Ingresses and PersistentVolumeClaims. This gap extends to defining what constitutes a complete application or logical system, because Kubernetes has no built-in mechanism for grouping related resources beyond implicit ownerReferences. ownerReferences do establish important parent-child relationships for lifecycle management and garbage collection: a Deployment owns the ReplicaSets it creates, which in turn own their Pods, and custom operators do the same for the children they create from their custom resources. However, these links only offer a bottom-up view. Tools like kubectl tree can visualize such hierarchies, but they depend on knowing the top-level owner and cannot include resources that are logically part of an application yet not owned by a single parent, leaving a fragmented understanding of the system's components and their collective health.
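To make the bottom-up relationship concrete, the sketch below shows how a child resource records its parent through the standard metadata.ownerReferences structure; the resource names and the UID are placeholders chosen purely for illustration.

# A ReplicaSet created by a Deployment: the ownerReferences entry records the
# parent and drives garbage collection when the Deployment is deleted.
# All names and the uid are placeholders.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-frontend-5d8c7f9b6
  namespace: demo
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: web-frontend              # the owning parent
      uid: 0f6f1b2a-3c4d-4e6f-9a8b-9c0d1e2f3a4b   # placeholder UID
      controller: true                # marks the managing controller
      blockOwnerDeletion: true        # foreground deletion of the owner waits for this child
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.27

Tools such as kubectl tree walk exactly these references, which is why they can only descend from a known owner rather than assemble an application from its parts.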
To address this pervasive issue, a custom resource definition (CRD) has been developed that provides a structured way to define logical groups of Kubernetes resources. Each instance of the custom resource captures not only the list of top-level resources that make up an application, but also context, intent, and consolidated status information. By creating such an instance, users can explicitly declare which disparate components (for example a PostgreSQL cluster, an application Deployment, Ingresses, and PVCs) collectively form a coherent system. This top-down grouping enables kubectl tree to present a complete, hierarchical overview of an entire application, making the cluster easier to navigate and resource interdependencies and overall system health easier to understand. The CRD aims to give developers and operators a clearer, more consistent framework for managing and observing complex applications in Kubernetes, a benefit that extends to AI-driven cluster management solutions, which likewise need a structured understanding of application topology.
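A minimal sketch of what one instance of such a grouping resource could look like follows. The API group, the Application kind, and the field names (description, components, readyComponents, totalComponents, phase) are assumptions for illustration only and do not reflect the CRD's actual schema, which is not shown here.

# Hypothetical instance of the grouping custom resource described above.
# Kind, API group, and all field names are illustrative assumptions.
apiVersion: apps.example.com/v1alpha1
kind: Application
metadata:
  name: web-shop
  namespace: demo
spec:
  description: "Web shop frontend, its PostgreSQL backend, and supporting networking and storage"
  # Explicit, top-down list of the top-level resources that form this application.
  components:
    - apiVersion: postgresql.example.org/v1   # placeholder group for a PostgreSQL operator
      kind: PostgresCluster
      name: web-shop-db
    - apiVersion: apps/v1
      kind: Deployment
      name: web-shop-frontend
    - apiVersion: networking.k8s.io/v1
      kind: Ingress
      name: web-shop
    - apiVersion: v1
      kind: PersistentVolumeClaim
      name: web-shop-uploads
status:
  # Consolidated status a controller could roll up from the listed components (illustrative shape).
  readyComponents: 4
  totalComponents: 4
  phase: Healthy

With an object like this in place, the whole group can be surfaced at once, starting from the application instance, rather than requiring the user to already know every top-level owner in the cluster.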