
ndots : The Hidden DNS Logic Behind resolving Kubernetes Service Names

DNS Search Domains, ndots, and Kubernetes Resolver Behavior


Hey, Y'all 👋 Kratik Here :)

Hailing from The City of Lakes, India.

I love to learn and share stuff about DevOps, SRE, Security, Cost Optimization, etc.

Join me on this journey where we will learn together with the help of a bunch of interesting blogs.

Ever wondered how Kubernetes resolves a service name like my-service without any domain attached?
In this blog, we’ll dive into the DNS concept of ndots, understand how it works, and see how it plays a crucial role in Kubernetes DNS resolution.

Before we dive deeper, let’s take a step back and understand how a basic DNS query and resolution actually work.


Working of a DNS Query

For example, let’s understand what happens when you enter blogs.kratik.dev in your browser.

(I hope somewhere someone is actually entering this to read my blogs 🫠 )

Your browser tries to resolve the IP of the domain in the following order:

  1. Check if the domain exists in the /etc/hosts file.

  2. If the domain is not found there, the OS sends the request to the DNS servers configured on the device (from Wi-Fi, Ethernet, mobile data, or VPN).

On Linux, these DNS server settings are exposed through /etc/resolv.conf;
on other systems, they are managed internally, but the behaviour is the same.

What is the /etc/hosts file?

/etc/hosts is a local, static hostname-to-IP mapping file used by the operating system before DNS is consulted. It allows you to manually define how hostnames resolve without querying any DNS server.

On my machine, it looks something like this:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1    localhost
255.255.255.255    broadcasthost
::1             localhost

# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal

127.0.0.1 kratik.servers
# End of section

💡- now you also know how localhost IP resolution works :)

So let’s test this with a made-up domain, kratik.awesome - it does not exist.

let’s verify it first

curl

browser

Okay, obviously it does not exist.

(I wish it did!)

Let’s add this domain in our /etc/hosts file

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1    localhost
255.255.255.255    broadcasthost
::1             localhost

# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal

127.0.0.1 kratik.servers

127.0.0.1 kratik.awesome
# End of section

Now let’s quickly spin up a test server in Python 😎

(basically, this command runs a Python HTTP server which serves files from the specified directory)

mkdir -p /tmp/sample

cat <<EOF >/tmp/sample/index.html
<h1>Hello from DNS Blog!</h1>
EOF

python3 -m http.server 80 --directory /tmp/sample

Okay, let’s check the browser again.

Now you can see the webpage above, because:

  1. the browser requested the IP of the domain kratik.awesome

  2. /etc/hosts had an entry for that domain, so the OS replied instantly with the IP, which was 127.0.0.1

  3. we were already running a Python HTTP server on port 80, so it served our HTML file.
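If you want to see this hostname → IP → HTTP server flow end to end without editing system files, here is a small Python sketch. The hosts dict here is an illustrative stand-in for /etc/hosts, not how the OS actually consults it:

```python
import http.server
import threading
import urllib.request

# Illustrative stand-in for /etc/hosts: consulted before any DNS server.
hosts = {"kratik.awesome": "127.0.0.1"}

# Serve a small page, like the python3 -m http.server command above.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello from DNS Blog!</h1>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo output quiet
        pass

# Port 0 lets the OS pick any free port, so no sudo is needed.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Resolve" via our hosts mapping, then talk plain HTTP to the resolved IP.
ip = hosts["kratik.awesome"]
port = server.server_address[1]
with urllib.request.urlopen(f"http://{ip}:{port}/") as resp:
    page = resp.read().decode()

print(page)  # <h1>Hello from DNS Blog!</h1>
server.shutdown()
```

The key point it demonstrates: once a name maps to 127.0.0.1, the browser (here, urllib) simply connects to whatever server is listening locally.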

I hope that clears up the basics of the /etc/hosts file. Next, let’s move on to the resolv.conf file.

What is the /etc/resolv.conf file?

/etc/resolv.conf is a system configuration file that tells the operating system which DNS servers to use when resolving domain names.

It looks something like this on a Linux machine (this is an AWS EC2 instance, btw):

# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 172.22.0.2
search ap-south-1.compute.internal

A basic explanation of the fields in the resolver file:

  1. nameserver - the server to query for DNS lookups; you can define more servers by adding another nameserver line in the same format.

  2. search - tells the system which domain names to automatically append when you try to resolve a short (unqualified) hostname.

  3. options - (not present in the file above) - configures the behaviour of DNS lookups, for example how long the resolver waits, how many times it retries, and when it treats a name as fully qualified - we will understand this better further on.

    Commonly used options are ndots, timeout, and attempts.
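For illustration only (the option values below are made up, not taken from the file above), a resolv.conf combining all three options could look like this:

```
nameserver 172.22.0.2
search ap-south-1.compute.internal
options ndots:2 timeout:3 attempts:2
```

With this configuration the resolver would treat any name with 2 or more dots as fully qualified, wait up to 3 seconds for each query, and retry its nameservers twice before giving up.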

Now let’s go deeper and check the resolver file in a Kubernetes environment.

/etc/resolv.conf file in Kubernetes Pods

If you look at the same file inside a pod in a Kubernetes cluster, it will look something like this:

search kube-system.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

This is where DNS resolution gets interesting: search domains and ndots

Let’s understand them one by one.

search kube-system.svc.cluster.local svc.cluster.local cluster.local

What it does : If a hostname is not fully qualified, the resolver tries appending each of these domains in order until one resolves.

ndots

options ndots:5

This option counts the dots (.) in our DNS query. For example, api.github.com has 2 dots, and my-service.my-namespace.svc.cluster.local has 4 dots.

In short: if a name contains fewer than 5 dots, it is not considered a fully qualified domain name, and the resolver appends each of the domains listed in `search` to the lookup, one by one (the bare name is tried last).
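To make the rule concrete, here is a small Python sketch of the resolver’s search logic (simplified; real glibc/musl resolvers have more edge cases than this):

```python
# Simplified sketch of the resolver's search logic: if a name has fewer
# than ndots dots and no trailing dot, the search domains are tried first
# and the bare name is tried last; otherwise the name is tried as-is first.
def dns_candidates(name, search, ndots=5):
    if name.endswith("."):          # trailing dot: already fully qualified
        return [name]
    absolute_first = name.count(".") >= ndots
    candidates = []
    if absolute_first:
        candidates.append(name)
    candidates += [f"{name}.{domain}" for domain in search]
    if not absolute_first:
        candidates.append(name)     # the bare name is tried last
    return candidates

search = ["my-namespace.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for candidate in dns_candidates("my-service", search):
    print(candidate)
```

For my-service this prints the three search-domain expansions followed by the bare name, which is exactly the query order we’ll see in the CoreDNS logs later.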

Let’s see this in action!

I will do a DNS query and also show you the logs of the DNS server (CoreDNS) in k8s.

Setup of the k8s cluster for this test

We have a Kubernetes cluster with two namespaces:

  • my-namespace

    • we also create an nginx service here, named my-service
  • my-other-namespace

Each namespace has its own DNS search domain:

  • my-namespace.svc.cluster.local

  • my-other-namespace.svc.cluster.local

Kubernetes configures pods so that DNS lookups first try services in the same namespace, and then fall back to cluster-wide service discovery.

I will create netshoot pods to do DNS lookups in each of these namespaces.
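In case you want to follow along, here is a sketch of the setup commands (standard kubectl subcommands; the nicolaka/netshoot image is a common choice for this kind of network debugging). These need a running cluster, so treat them as a recipe rather than a copy-paste script:

```shell
# Two namespaces
kubectl create namespace my-namespace
kubectl create namespace my-other-namespace

# An nginx Deployment + Service named my-service in the first namespace
kubectl -n my-namespace create deployment my-service --image=nginx
kubectl -n my-namespace expose deployment my-service --port=80

# A netshoot pod in each namespace to run DNS lookups from
kubectl -n my-namespace run netshoot --image=nicolaka/netshoot -- sleep infinity
kubectl -n my-other-namespace run netshoot --image=nicolaka/netshoot -- sleep infinity

# Example lookup from the first namespace
kubectl -n my-namespace exec -it netshoot -- nslookup my-service
```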

Case 1: Lookup from the same namespace (my-namespace)

cat /etc/resolv.conf file in 1st namespace

Command

CoreDNS logs

"A IN my-service.my-namespace.svc.cluster.local." NOERROR
"AAAA IN my-service.my-namespace.svc.cluster.local." NOERROR

What happened

  • The pod’s resolver has this search list:

      search my-namespace.svc.cluster.local svc.cluster.local cluster.local
    
  • Since my-service is a short name, the resolver appends the first search domain.

  • The first attempt is:

      my-service.my-namespace.svc.cluster.local
    
  • This service exists, so CoreDNS returns NOERROR.

  • Resolution stops immediately.

✅ Result: my-service resolves successfully inside its own namespace.


Case 2: Lookup from another namespace (my-other-namespace)

cat /etc/resolv.conf file in 2nd namespace

Command

CoreDNS logs (in order)

"A IN my-service.my-other-namespace.svc.cluster.local." NXDOMAIN
"A IN my-service.svc.cluster.local." NXDOMAIN
"A IN my-service.cluster.local." NXDOMAIN
"A IN my-service." NXDOMAIN

What happened

From my-other-namespace, the resolver search list is:

search my-other-namespace.svc.cluster.local svc.cluster.local cluster.local

The resolver tries each option one by one:

  1. my-service.my-other-namespace.svc.cluster.local
    Service does not exist → NXDOMAIN

  2. my-service.svc.cluster.local
    No service with that name across namespaces → NXDOMAIN

  3. my-service.cluster.local
    Not a valid service zone → NXDOMAIN

  4. my-service (as-is)
    No global DNS record → NXDOMAIN

After all attempts fail, the lookup fails.

❌ Result: my-service does not resolve from another namespace.

Additionally, as soon as I add the namespace name to my lookup (my-service.my-namespace), it works, even from the other namespace. Why? Because now one of the search domains leads to a valid fully qualified domain name: my-service.my-namespace.svc.cluster.local.

CoreDNS logs


In this blog, we walked through how DNS resolution works at a basic level and then explored how Kubernetes builds on top of it using search domains and ndots.

By understanding how these pieces fit together, you can better debug DNS issues, avoid unnecessary lookups, and design more predictable service communication inside your cluster.

💡 - Default value of ndots in Linux (glibc) is 1 and in k8s environments it’s 5


Bonus Tip

How to skip adding search domains to your DNS query?

You can do this by adding a dot (.) at the end of the hostname.

for example :

nslookup my-service.
curl my-service.

What the trailing dot does

A trailing dot tells the resolver:
“This name is already fully qualified — do not append search domains.”
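In resolver terms, here is a tiny hypothetical helper mirroring the search-list logic from earlier (it ignores ndots for simplicity, which is fine for a short name like my-service under ndots:5):

```python
# Hypothetical helper mirroring the resolver's search-list behaviour:
# a trailing dot marks the name as absolute, so the search list is skipped.
def expand(name, search):
    if name.endswith("."):
        return [name]                      # FQDN: queried exactly as given
    return [f"{name}.{domain}" for domain in search] + [name]

search = ["my-namespace.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(expand("my-service", search))   # four candidate lookups
print(expand("my-service.", search))  # ['my-service.']
```

One trailing dot, one DNS query instead of four.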

Where is the trailing dot useful?

Adding a trailing dot (.) is useful whenever you want full control over DNS resolution and want to avoid resolver-side surprises.


Hope you liked this knowledge byte.

Tune in for more blogs. Time to clear out the year-old drafts.

see you in the next one!