Dev.to: 10 kubectl Plugins That Help Make You the Most Valuable Kubernetes Engineer in the Room

Kubernetes is insanely powerful and becomes much easier to manage when you extend kubectl with plugins. Thanks to the open-source community (and the Krew plugin manager), you can add tons of new subcommands to kubectl that streamline tasks and make cluster management easier.
But with hundreds of available plugins, how do you decide which to try?
We're sharing our favorites. All the plugins that made our list are actively maintained and compatible with recent Kubernetes versions (think 1.30+).
Let’s dive in!
1. Preq – Detect Reliability Issues Early
What is it:
preq (pronounced “preek”) is a reliability problem detector that looks for common problems in your application before they cause outages. It is powered by a community-driven catalog of failure patterns (sort of like a vulnerability database, but for reliability).
With preq, you can run checks against your running cluster and get alerted to bugs, misconfigurations, or anti-patterns that others have already identified.
Why it’s useful:
If you’ve ever been blindsided by a production incident, preq can be a lifesaver. It hunts through events and configurations looking for sequences that match known failure patterns.
When it finds something, it provides a detailed report (with a recommended fix) so you can act fast. In short, it brings SRE expertise to your fingertips, helping teams pinpoint and mitigate problems before they escalate. It’s like having someone check your cluster 24/7 (and it’s free and open source!).
This is an exciting new project; you can find and ⭐ star the repo here: https://github.com/prequel-dev/preq
Installation:
kubectl krew install preq
Pro-tip: Install the Krew plugin manager first
Example usage:
Once installed, you can run preq on a specific workload or pod. For instance, to run reliability checks on a PostgreSQL pod:
kubectl preq pg17-postgresql-0
This will scan the pod’s logs and events against the library of Common Reliability Enumeration (CRE) rules. If any known issues are detected, you’ll get an output detailing the problem and how to fix it (with references to documentation).
You can even schedule preq as a CronJob in your cluster to continuously monitor and push alerts (e.g. to Slack) when something’s amiss. In short, preq gives you proactive reliability insights that help stop outages before they happen.
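As a rough sketch of the CronJob idea, something like the manifest below would run checks on a schedule. Everything here is a placeholder (the image name, schedule, service account, and args are illustrative, not official preq artifacts); consult the preq repo for a supported in-cluster setup:

```yaml
# Illustrative sketch only: run preq checks every 30 minutes inside the cluster.
# The image, service account, and target pod name are hypothetical placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: preq-checks
spec:
  schedule: "*/30 * * * *"          # every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: preq-runner   # would need read access to pods/logs
          restartPolicy: Never
          containers:
            - name: preq
              image: example.com/preq:latest   # placeholder image
              args: ["pg17-postgresql-0"]      # the workload to check
```

From there, the job's output could be forwarded to Slack or another alerting channel by whatever notification mechanism your team already uses.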
2. Neat – Clean Up Verbose YAML Output
What is it:
kubectl-neat does exactly what it sounds like – it neatens up Kubernetes YAML output by removing all the clutter. When you do kubectl get ... -o yaml, the output is often filled with extra fields (status, managedFields, selfLink, etc.) that make it hard to focus on the spec. neat strips out those noisy fields and default values, leaving you with a clean manifest that’s much easier to read.
Why it’s useful:
If you find your eyes glazing over from endless autogenerated metadata, this plugin is for you. It omits fields like managedFields, status, creationTimestamp, resourceVersion, and other boilerplate that Kubernetes injects. The result is a tidy view of the resource’s actual configuration.
This is super helpful for troubleshooting or comparing manifests – you can see just the fields you or your tools defined, without the Kubernetes-added noise. In short, neat makes YAML outputs concise and readable.
Installation:
kubectl krew install neat
Example usage:
Simply pipe any verbose output into kubectl neat. For example:
kubectl get pod nginx-abc123 -o yaml | kubectl neat
This will output the YAML for pod nginx-abc123 but without all the junk (owner references, timestamps, default values, etc.).
You can then easily diff or inspect this trimmed manifest. It’s a huge timesaver when you want to quickly understand a resource’s config without wading through Kubernetes-added fields.
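To picture the effect, here is an illustrative before-and-after for a small ConfigMap. The metadata field names are real Kubernetes fields, but the values are made up:

```yaml
# Before: typical `kubectl get configmap app-config -o yaml` output (abridged)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  creationTimestamp: "2025-01-01T00:00:00Z"
  resourceVersion: "123456"
  uid: 0a1b2c3d-0000-0000-0000-000000000000
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
data:
  LOG_LEVEL: debug
---
# After piping through `kubectl neat`: only the fields you actually defined
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
```

The second document is what you would actually want to commit to git or diff against another environment.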
3. View-Secret – Decode Secrets on the Fly
What is it:
No more manual Base64 decoding! The kubectl-view-secret plugin makes it effortless to view Kubernetes Secrets in plain text. Normally, if you do kubectl get secret my-secret -o yaml, you'll see Base64-encoded content for each key. With view-secret, those values are decoded to human-readable strings automatically.
Why it's useful:
Ever needed to quickly check what value was stored in a Secret? This plugin saves you tons of time. Instead of copying the encoded string and running echo ... | base64 -d for each key, you just run one command and see the actual secret values. This is especially handy for Secrets with multiple keys, like TLS certs or app configs. It's also great for verifying that your Secret data is correct (in dev/test environments) without the hassle of decoding. Essentially, view-secret eliminates the manual steps when managing Secrets.
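For comparison, this is the manual round-trip that view-secret automates per key (using a made-up value rather than a real Secret):

```shell
# Encode a value the way Kubernetes stores it inside a Secret's data map...
encoded=$(printf '%s' 'supersecret' | base64)
echo "$encoded"   # this Base64 text is what `kubectl get secret -o yaml` shows

# ...then decode it by hand -- the tedious step view-secret does for you
printf '%s' "$encoded" | base64 -d
echo
```

Multiply that by every key in the Secret and the appeal of a single command becomes obvious.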
Installation:
kubectl krew install view-secret
Example usage:
To view a secret named my-secret in the default namespace, just run:
kubectl view-secret my-secret --all
Adding the --all flag shows all key-value pairs. For example, you might get:
key1=supersecret
key2=topsecret
as the output. No more copy-pasting and decoding – you immediately see that key1 is "supersecret" and key2 is "topsecret". In one quick command, you have your secret values at hand (be careful where you run this though, as it will expose sensitive info in your terminal).
4. Tree – Visualize Resource Ownership
What is it:
The kubectl-tree plugin displays ownership hierarchy in your cluster in a nice tree format. Kubernetes objects often own or control other objects (e.g., a Deployment owns ReplicaSets which own Pods). tree lets you pick a top-level resource and see all its descendants laid out as an ASCII tree.
Why it's useful:
This is fantastic for understanding which resources are linked together. For example, if you have a complex app, you can run kubectl tree on a Deployment or on a CustomResource and instantly see the chain of owned objects beneath it. It's especially useful with CRDs, where the relationships might not be obvious. Instead of manually cross-referencing owners, you get a clear picture: e.g., a StatefulSet -> Pods -> PVCs, etc. This helps in cleanup (to ensure you delete dependents) and in troubleshooting cascading issues. In short, tree gives you a birds-eye view of how Kubernetes controllers have orchestrated your resources in a hierarchy.
Installation:
kubectl krew install tree
Example usage:
To see what a particular resource owns, run:
kubectl tree deploy my-app
This might output a tree of all objects created by the Deployment my-app, such as ReplicaSets and Pods. For instance:
Deployment/my-app
└─ ReplicaSet/my-app-5fd76f7d5c
├─ Pod/my-app-5fd76f7d5c-abcde
└─ Pod/my-app-5fd76f7d5c-fghij
This tells you the Deployment owns a ReplicaSet, which in turn has two Pods. You can use kubectl tree on other high-level resources too (like an Ingress or a CRD instance) to reveal what's underneath. It's a superb way to navigate complex deployments and ensure you understand resource dependencies.
5. Tail – Stream Logs from Multiple Pods
What is it:
kubectl-tail (the Krew plugin name is just tail) is a handy plugin for tailing logs from multiple pods in real time. It's like a supercharged version of kubectl logs -f, allowing you to aggregate logs across several pods or even an entire label selector. Under the hood, this plugin is based on Kail, providing Stern-like functionality directly as a kubectl plugin.
Why it's useful:
When debugging an app that's distributed across many pods (say a deployment with replicas or a microservice with multiple components), it's inconvenient to open separate log streams for each pod. kubectl tail solves this by letting you target multiple pods at once – for example, by deployment name, service name, or label selector. The logs from all matching pods are merged and streamed to your terminal. You can even filter by timeframe (--since) or specific pods. As Alex Moss noted, one great feature is targeting a higher-level resource: e.g. kubectl tail --svc=my-service to see logs from all pods behind that Service. This plugin simplifies multi-pod debugging and gives you a consolidated, live view of what's happening across your application.
Installation:
kubectl krew install tail
Example usage:
Here are a few common ways to use kubectl tail:
By namespace: View logs from all pods in a namespace (e.g. default):
kubectl tail --ns default
By label selector: Stream logs from pods matching a label, e.g. all pods with app=web:
kubectl tail -l app=web --since=10m
(This would show the last 10 minutes of logs from every pod labeled app=web, and then continue streaming.)
Multiple specific pods: If you want to tail two particular pods:
kubectl tail --pod web-abcde --pod web-fghij --since=1h
In each case, logs from all targeted pods will stream live to your terminal. The plugin even color-codes logs by pod, making it easier to distinguish sources. It's a simple but powerful way to debug issues that span many pods without juggling multiple kubectl logs commands.
6. Who-Can – Investigate RBAC Permissions
What is it:
kubectl-who-can helps you answer the question: "Who can do X in my cluster?" It's an RBAC permissions investigator. You give it an action (verb) and resource, and it tells you which users, service accounts, or roles are allowed to perform that action. Essentially, it wraps kubectl auth can-i --list logic into an easy query tool.
Why it's useful:
Kubernetes RBAC can get complicated. If you're debugging a "permission denied" error or just auditing access, this plugin is gold. Instead of manually inspecting ClusterRoleBindings, you can simply ask "who can delete pods in this namespace?" and get an immediate answer. It's particularly useful for debugging RBAC issues and ensuring your policies are set correctly. For example, if a deployment failed because it couldn't list Secrets, who-can will show you which account needs a role update. In short, it provides quick visibility into who has access to what, saving you from hunting through YAML and docs.
Installation:
kubectl krew install who-can
Example usage:
Say you want to find out who can delete pods in namespace foo:
kubectl who-can delete pods --namespace foo
The plugin will return a list of subjects (users, groups, service accounts) that have that capability. For instance, you might see that a certain RoleBinding gives the "deploy-bot" service account the delete permission on pods, and maybe cluster admins can too. You can also run broader queries, like:
kubectl who-can get secret/db-password
to see who can read the db-password secret. This is incredibly useful for security audits—quickly verify that only the intended identities have access. In summary, who-can turns RBAC from a mystery into an answerable question, helping you secure and troubleshoot your cluster's access control.
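As an illustration of what who-can is searching through, an RBAC pairing like the following (all names made up) is the kind of thing that would put "deploy-bot" in the answer to the delete-pods query above:

```yaml
# Made-up example: grants the deploy-bot service account pod deletion in "foo".
# who-can walks bindings like this so you don't have to grep them yourself.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-deleter
  namespace: foo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-bot-can-delete-pods
  namespace: foo
subjects:
  - kind: ServiceAccount
    name: deploy-bot
    namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-deleter
```

Chasing a chain like Role → RoleBinding → subject by hand is exactly the tedium the plugin removes.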
7. kubectx – Swiftly Switch Contexts
What is it:
kubectx is a popular command-line tool (and kubectl plugin) for fast context switching. In Kubernetes, a "context" is essentially which cluster + user you're currently using. If you work with multiple clusters (dev, staging, prod, or multiple cloud providers), kubectx lets you flip between them with a single short command, instead of typing kubectl config use-context ... each time.
Why it's useful:
Working in the wrong cluster can be disastrous ("oops, I just ran that in prod!"). kubectx makes it trivial to see your available contexts and switch in a heartbeat. It increases productivity for those managing multiple clusters by removing friction from context changes. It also supports tab-completion, so you can quickly auto-complete context names. Both beginners and pros love this because it simplifies multi-cluster workflows—keeping you from accidentally operating in the wrong environment and saving you time every day.
Installation:
You can install it via Krew (the plugins are named ctx and ns, for kubectx and kubens respectively). To install kubectx:
kubectl krew install ctx
Example usage:
After installation, switching contexts is as easy as:
kubectx prod-cluster
This will switch your current kubectl context to "prod-cluster" (whatever name you have in your kubeconfig). To list all contexts, just run kubectx with no arguments and it will show an interactive list. You can also shorten context names or set up aliases for convenience. With this, managing multiple clusters (like bouncing between dev → staging → prod) becomes a breeze. Pair it with kubens (below) for full power.
8. kubens – Speedy Namespace Switching
What is it:
A perfect companion to kubectx, kubens lets you quickly switch between Kubernetes namespaces. Rather than typing -n <namespace> every time or editing your context, you just run kubens <name> and it changes the current namespace in your kubeconfig context.
Why it's useful:
Kubernetes namespaces help divide resources, but it's tedious to constantly specify -n or modify context YAML by hand. kubens streamlines this by making namespace changes a single short command. This is great for both beginners (who might forget to target the right namespace) and advanced users managing multi-tenant clusters. It prevents mistakes like deploying to the default namespace unintentionally. Combined with kubectx, you can navigate clusters and namespaces with ease. It's all about efficiency: less typing, less context switching in your head, and more focus on what you're deploying or debugging.
Installation:
kubectl krew install ns
Example usage:
To switch to the kube-system namespace (in your current context):
kubens kube-system
Your default namespace for kubectl commands is now kube-system until you switch again. Running kubens with no arguments lists all namespaces in the current context, so you can pick one interactively. For instance, if you're juggling multiple projects in a cluster, kubens lets you jump between dev, test, and prod namespaces instantly. No more forgetting to add -n and wondering "why can't it find my pods?"—this tool keeps your namespace context correct at all times.
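Under the hood, kubens simply rewrites the namespace field of your active context in your kubeconfig, roughly like this (cluster and user names here are placeholders):

```yaml
# Abridged kubeconfig: kubens edits the `namespace` field of the
# current context. The names in this snippet are made up.
contexts:
  - name: dev-cluster
    context:
      cluster: dev-cluster
      user: dev-admin
      namespace: kube-system   # <- what `kubens kube-system` sets
current-context: dev-cluster
```

The long-hand equivalent is kubectl config set-context --current --namespace=kube-system, which is exactly the typing kubens saves you.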
9. Kube-Score – Lint Your Kubernetes Manifests
What is it: kube-score (available via the Krew plugin score) is a static code analysis tool for Kubernetes YAML resources. In simpler terms, it's like a linter or "config validator" for your deployment files. You run it on your manifests (deployments, services, etc.), and it scores them and gives recommendations for improvements.
Why it's useful:
Kubernetes will happily accept configs that are syntactically valid but follow bad practices. kube-score flags those issues before you apply them. It checks for things like missing resource limits, improper health checks, deprecated API versions, and many other best-practice violations. The output is a list of suggestions on what to improve for better reliability and security. This is extremely useful for validating configuration files – you catch mistakes or omissions early, in CI/CD or during development. Essentially, kube-score acts as a quality gate for your Kubernetes manifests, ensuring they adhere to recommended standards (so you don't get surprises at runtime).
Installation:
kubectl krew install score
(This installs the kubectl score plugin, which internally uses the kube-score tool.)
Example usage:
To analyze a file (or directory of YAMLs) for issues, run:
kubectl score -f my-app.yaml
The plugin will output a report, for example:
[WARNING] Deployment my-app: Container without resource limits
↳ It's recommended to set resource limits for containers to avoid resource hogging.
[CRITICAL] Service my-app-service: Uses targetPort name that doesn't match any container port
↳ The targetPort "http" is not found in any container of the associated pods.
Each finding comes with a severity and an explanation. You'd then go back and fix those in your YAML. You can also run kubectl score -f <folder> to scan multiple manifests at once. This plugin is perfect for validating Kubernetes config files as part of code reviews or CI pipelines. It helps both newbies and experts by pointing out potential misconfigurations (before they hit your cluster).
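Acting on a finding like the resource-limits warning above usually means a manifest change along these lines (the deployment and the request/limit values are illustrative, not recommendations for your workload):

```yaml
# Illustrative fix for "Container without resource limits":
# add resource requests and limits to the container spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

Re-running the scan after a change like this is a quick way to confirm the finding is resolved before the manifest ever reaches a cluster.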
10. Sniff – Capture Pod Traffic Like a Pro
What is it:
Ever wished you could tcpdump inside a Kubernetes pod? kubectl sniff (a.k.a. ksniff) is a plugin that makes that possible. It uses tcpdump and Wireshark under the hood to capture network traffic from any pod in your cluster. With one command, it starts a remote packet capture on the target pod and streams the data for you to open in Wireshark or analyze with other tools.
Why it's useful:
Debugging network issues in Kubernetes can be tricky – you often need to see the raw traffic. sniff automates the heavy lifting of deploying a capture container alongside your pod and piping the output to your workstation. You get the full power of Wireshark for inspecting packets, with minimal impact on the running pod. This is incredibly useful for advanced troubleshooting: e.g., investigating why a service isn't responding, checking if a pod is actually making calls to an external API, or diagnosing weird networking behavior. Instead of crafting tcpdump commands on a node or modifying the pod, you run one plugin and get a pcap of what's happening. It's a game-changer for network debugging in Kubernetes clusters.
Installation:
kubectl krew install sniff
(Note: You'll need Wireshark installed locally for live capture, or you can output to a pcap file and open it later. Also, the target pod's node needs to allow running the capture – the plugin can handle many scenarios including non-privileged containers by using a helper Pod.)
Example usage:
To start capturing traffic from a pod my-pod in namespace default:
kubectl sniff my-pod -n default -c main-container
Here -c specifies the container (if omitted, it defaults to the first container in the pod). By default, sniff will launch Wireshark on your machine showing the live traffic from my-pod. You can apply capture filters with -f, for example -f "port 80" to only capture web traffic. If you prefer to save to file instead of live view, use the -o flag to write a pcap:
kubectl sniff my-pod -n default -o output.pcap
After running this, you'll have output.pcap with all packets captured from my-pod's network interface. Open that in Wireshark and you can dissect the traffic at your leisure. kubectl sniff brings deep network insight to Kubernetes – previously, you might have had to exec into the node or use complex setups, but now it's one simple command. It's an advanced tool that becomes surprisingly approachable thanks to this plugin.
Conclusion:
These kubectl plugins can dramatically enhance your productivity and capabilities with Kubernetes. From validating your configs to debugging live clusters, they fill in gaps that the default kubectl doesn't cover. Best of all, they integrate seamlessly – you invoke them as if they were native kubectl commands. Go ahead and try installing a few that pique your interest (via Krew), and you'll wonder how you managed Kubernetes without them!
Enjoy! (And let us know in the comments which kubectl plugin is your favorite, or if we missed one that you think should have been in our top 10.)