6 Kubernetes Ports: A Definitive Look - Expose, NodePort, TargetPort, & More
Ever felt confused about server, Docker, service, container, target, or node ports in Kubernetes? This article breaks it down for you, explaining every port in your workflow, from development to deployment. Dive in and simplify the complexity today!
Recently I was trying to set up a deployment pipeline on top of our Kubernetes infrastructure.
I was searching for a proper guide on the types of ports and how traffic is navigated between them, and I couldn’t find any ready-made solutions.
After learning about it and solving my problem, I have written this article to help you gain clarity on ports in a simple way and foster discussions. Perfect for both self-study and helping out a friend!
How the Ports Communicate:
In the walkthrough below, I'm using NodePort, a Service type in Kubernetes, to demonstrate how traffic flows between an Application Server and a Web Server.
This article is focused on providing conceptual clarity about ports in Kubernetes.
1. Application Server Port (8001)
App Server Port
You probably know this already.
You write your code in your framework of choice.
Either Django, or Node, or Gin, or any of the other options.
These frameworks have their respective run commands.
For example, in Django: python manage.py runserver.
And we see the Django app being accessible at port 8001.
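Django's development server defaults to port 8000, so to match this example you would pass the address and port explicitly (the 0.0.0.0 bind address just makes it reachable beyond localhost):

python manage.py runserver 0.0.0.0:8001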
2. Container Port (8001)
App Server Port -> Container Port
You probably know this as well.
In Kubernetes, a "container" is like a compact and portable package that holds everything your application needs to run. Imagine it as a virtual box that contains your app, its dependencies, and even the environment it requires.
Now, let's talk about ports. Think of them as doors or entry points to your application. When we create a Docker image (a snapshot of your app and its environment), we also decide which port our application should use. In our example the app runs on port 8001, so Docker exposes that same port.
When we start the Docker image, it transforms into a "container" - a running instance of your application.
Since we've already exposed a port, the container is ready to accept the incoming traffic and forward it to the Application inside.
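In Kubernetes terms, this is the containerPort you declare in the Pod (or Deployment) spec. A minimal sketch, assuming a hypothetical image named fb-backends:latest:

apiVersion: v1
kind: Pod
metadata:
  name: fb-backend
  labels:
    app: fb-backends
spec:
  containers:
    - name: fb-backend
      image: fb-backends:latest   # hypothetical image name
      ports:
        - containerPort: 8001     # the port the app server listens on

Declaring containerPort is largely informational, but it documents which port the app listens on and lets a Service reference it by name.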
3. Target Port (8001)
App Server Port -> Container Port -> Target Port
The Target Port refers to the port on your Pod which forwards the traffic to the Container Port.
The Target Port is highlighted in red colour below.
The Service forwards the traffic from the Internal Service Port to the Target Port on the Pod.
The App Server Port, Container Port and Target Port were pretty straightforward, as they are all meant to be the same. That means the Service redirects traffic to the Target Port, which reaches the App Server.
Extras
You can view the Pod's description by using the kubectl command:

kubectl get pods
kubectl describe pod <pod-name>
The Target Port can be set in the service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: fb-backends
spec:
  type: NodePort
  ports:
    - targetPort: 8001
      protocol: TCP
  selector:
    app: fb-backends
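As a side note, targetPort can also reference a container port by name instead of by number. A small sketch, assuming the container declares a port named app-port (a hypothetical name):

# In the Pod/Deployment container spec:
ports:
  - name: app-port
    containerPort: 8001

# In the Service spec, the name replaces the number:
ports:
  - targetPort: app-port
    protocol: TCP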
4. Internal Service Port (5001)
App Server Port -> Container Port -> Target Port -> Internal Service Port
The Internal Service Port by default will be 80 and is commonly referred to as the Port of the Service.
ubuntu@master:~$ kubectl get svc karma-daemon
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
karma-daemon   NodePort   10.106.199.236   <none>        80:32488/TCP,2121:32461/TCP
The Service uses the Internal Service Port to route traffic to the Pod it is responsible for.
In my example, I have specified the name for the Internal Service Port as ka-port in the service.yaml and used 5001 for clarity.
apiVersion: v1
kind: Service
metadata:
  name: fb-backends
spec:
  type: NodePort
  ports:
    - name: ka-port
      port: 5001
      targetPort: 8001
      protocol: TCP
  selector:
    app: fb-backends
ubuntu@master:~$ kubectl get svc fb-backends
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
fb-backends   NodePort   10.101.234.168   <none>        5001:30904/TCP
The red colour highlight shows the Internal Service Port of the Service.
The Internal Service Port is made available only within the Kubernetes Cluster and not outside.
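If you ever need to reach it from your workstation for debugging, kubectl port-forward can bridge that gap temporarily. A quick sketch, forwarding local port 5001 to the Service's port:

kubectl port-forward svc/fb-backends 5001:5001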
Example of making a request to the Internal Service Port within the Node:

<Cluster IP>:<Internal Service Port>
ubuntu@master:~$ curl 10.101.234.168:5001
{"success": false, "message": "Please refresh the page, if the problem persists, retry login.", "error": "Couldn't find the JWT in the 'Authorization' header."}
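Within the cluster, the Service can also be reached by its DNS name instead of the Cluster IP; assuming the default namespace and the default cluster.local domain, that looks like:

curl fb-backends.default.svc.cluster.local:5001

(or simply curl fb-backends:5001 from a Pod in the same namespace).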
If I try to make a request to the Cluster IP using the Target Port or Node Port, it does not work.

ubuntu@master:~$ curl 10.101.234.168:8001
^C
ubuntu@master:~$ curl 10.101.234.168:30904
^C
The Target Port is designed for internal redirection within the Pod, directing traffic to the container's specific port. On the other hand, the Node Port serves as the Service's externally exposed port, accessible on all Nodes of the Cluster.
Directly accessing the Cluster IP with the Target Port or Node Port bypasses the internal routing logic established by the Internal Service Port (5001), leading to connection failures.
The conclusion is that when traffic reaches the Node Port (30904), it is redirected to the Internal Service Port (5001), which then redirects the traffic to the Target Port (8001).
5. Node Port (30904)
App Server Port -> Container Port -> Target Port -> Internal Service Port -> Node Port
The Node Port is the external port through which the App Server is accessible outside the Cluster.
In the case of a NodePort type of Service, by default, Kubernetes allocates a unique Node Port for each Service, ranging from 30000-32767.
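If you want the Node Port to stay fixed, for example so an external proxy can always point at it, you can pin it explicitly in the Service spec. A sketch based on the earlier example, assuming 30904 is free and within the allowed range:

apiVersion: v1
kind: Service
metadata:
  name: fb-backends
spec:
  type: NodePort
  ports:
    - name: ka-port
      port: 5001
      targetPort: 8001
      nodePort: 30904   # pinned instead of auto-allocated
      protocol: TCP
  selector:
    app: fb-backends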
The red colour highlight shows the Node Port made available for communication.
The Node Port is constant across all your Nodes. You can simply access your application using the public Node IP:

curl <Node IP>:<Node Port>
Extras
There are other types of Services as well, like ClusterIP, LoadBalancer and ExternalName, each serving different purposes.
6. Web Server or Load Balancer Port (80/443)
App Server Port -> Container Port -> Target Port -> Internal Service Port -> Node Port -> Web Server Port
This is the port where traffic reaches the hosted server, either directly or via a Load Balancer such as Azure LB or AWS ELB.
It's the port where the Ingress controller, like the NGINX Ingress controller, listens for incoming traffic. By default, these ports are 80 for HTTP and 443 for HTTPS.
The Ingress controller uses these ports to route incoming traffic to the appropriate Services within the Kubernetes cluster based on its configuration rules.
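Such a rule is typically expressed as an Ingress resource. A minimal sketch, assuming an NGINX Ingress controller is installed and api.example.com is a hypothetical domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fb-backends-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fb-backends
                port:
                  number: 5001     # the Internal Service Port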
The yellow coloured highlights are the ports which are Web Server ports, and the incoming traffic is being redirected to the Node Port 30904.
For example, if you have already purchased the domain for your backend and set up the routing in your cloud provider, you can create a proxy server for the Kubernetes cluster. When a request reaches your server, you can have a rule that redirects it to a specific Node Port.
The Flow of an API Request
External Traffic -> Web Server Port (80/443) -> Node Port (30904) -> Internal Service Port (5001) -> Target Port (8001) -> Container Port (8001) -> App Server Port (8001)
External Traffic: The journey begins with external traffic directed towards the Web Server Port (80/443), where the API is hosted.
Web Server Port (80/443): The Web Server, often an Nginx instance, listens on ports 80 and 443. These ports act as the entry points for incoming requests.
Node Port (30904): The external traffic is then routed to the Node Port (30904). Node Ports are accessible on every node in the Kubernetes cluster, providing a consistent entry point.
Internal Service Port (5001): The request progresses through the Internal Service Port, acting as a gateway within the cluster, directing traffic to the intended service.
Target Port (8001): The Service forwards the request to the Target Port, specifying the port on which the application service is exposed.
Container Port (8001): The Target Port redirects the request to the Docker Container Port. Within the container, the application server is configured to listen on this specific port.
App Server Port (8001): The journey concludes as the request reaches the Application Server Port (8001) inside the Docker container, where the application processes the request.
Conclusion
The ports mentioned in the article are examples to show how port communication works; you can adapt them to your own needs.
The article uses snapshots from k9s, a tool for looking at your Kubernetes cluster in real-time, to help you better understand and get practical insights.
Want to learn more about Kubernetes, AI/ML and other interesting tech topics?
Subscribe to our newsletter for insightful content about Helm Templates for Kubernetes in the upcoming weeks.
Additionally, don't miss our related article on IP Tables in Kubernetes. Stay informed, stay ahead!