
[SOLVED] Missing required field "selector" in Kubernetes

         
 JULY 24, 2020   by iamabhishek

The missing required field "selector" error arises in Kubernetes resources after upgrading Kubernetes from version 1.15 to 1.16 or newer. You can hit this issue with a Kubernetes Deployment, DaemonSet, or other workload resources when you move manifests from an old version to a newer one.

The fix is simple: add the spec.selector field to your YAML if it is missing, or give it a proper value if it is empty.

Let's take an example to understand this. Below we have an old YAML file which used to work fine on older Kubernetes versions, because back then a default value was automatically set for the spec.selector field. That is no longer the case: spec.selector no longer defaults to .spec.template.metadata.labels and must be set explicitly.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app: fluent-bit
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
...
...

Apart from the missing spec.selector field, the above YAML file also uses the extensions/v1beta1 API version, which is no longer served in recent Kubernetes versions (we have a separate post covering that).
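For reference, on Kubernetes 1.16+ a DaemonSet should use the apps/v1 API version instead. The top of the manifest would then look like this (same metadata as in our example):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
```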

Let's focus on the spec.selector field. The above YAML file creates a Kubernetes DaemonSet. You may get this issue with a Kubernetes Deployment too, but the solution is the same.

We will change the above YAML to:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app: fluent-bit
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
...
...

Notice the following section added under the spec field in the above YAML:

selector:
  matchLabels:
    app: fluent-bit

That is all that is required to solve this issue. The matchLabels field should contain a key-value pair that also appears in the template's labels. If your labels are, say, component: serviceName or k8s-app: serviceName, then those should be provided in the matchLabels field under spec.selector.
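To see why the key-value pair must match, here is a toy illustration (not actual Kubernetes code): matchLabels selects a Pod only when every key-value pair in the selector also appears in the Pod's labels.

```python
# Toy sketch of matchLabels semantics, using the labels from our
# fluent-bit example. This is an illustration, not Kubernetes source code.

def match_labels(selector: dict, pod_labels: dict) -> bool:
    """Return True if every selector key-value pair is present in pod_labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

pod_labels = {
    "app": "fluent-bit",
    "version": "v1",
    "kubernetes.io/cluster-service": "true",
}

print(match_labels({"app": "fluent-bit"}, pod_labels))  # True: pair is present
print(match_labels({"app": "nginx"}, pod_labels))       # False: value differs
```

The Pod may carry extra labels (like version: v1 above); the selector only needs to be a subset of them.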

The selector field defines how the DaemonSet or Deployment finds which Pods to manage. In the above YAML, we simply used a label that is defined in the Pod template (app: fluent-bit). But more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
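As an example of a more sophisticated rule, Kubernetes selectors also support set-based matching via matchExpressions. A sketch, using the same fluent-bit label as above:

```yaml
spec:
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
```

Valid operators are In, NotIn, Exists, and DoesNotExist; if you specify both matchLabels and matchExpressions, a Pod must satisfy all of them.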

Conclusion:

As Kubernetes is still evolving, such issues will keep appearing because the Kubernetes team is constantly improving it. We will keep posting solutions to problems developers face while using Kubernetes.

Subscribe to our Newsletter to get all our new articles directly into your mailbox.

