Getting started with Azure Kubernetes Services - Step by Step - 3

In this part, you will learn about working with Helm and Tiller, and how to create an Nginx ingress controller.
Helm
Helm is the best way to find, share, and use software built for Kubernetes. You can install Helm for your OS easily. Do it, and proceed.
Tiller
Simply run helm init to install Tiller on the Kubernetes cluster you are connected to. This will validate that Helm’s local environment is set up correctly. Then it will connect to whatever cluster kubectl connects to by default. After connecting, it will install Tiller into the kube-system namespace.
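For reference, that is the whole command; the helm version check below then confirms that both the client and Tiller respond:
$ helm init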
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Install nginx-ingress
To install an ingress use the following:
$ helm install stable/nginx-ingress
== NOTE: == If you get an error that says Error: no available release name found, you can try the following commands:
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ helm init --service-account tiller --upgrade
$ helm install stable/nginx-ingress
AKS doesn't give Tiller the cluster-admin role by default, so you have to create the service account and role binding yourself, which is what the commands above do.
Try running the following to see your Helm releases:
$ helm list
NAME                REVISION  UPDATED                  STATUS    CHART                 APP VERSION  NAMESPACE
braided-salamander  1         Tue Sep 3 20:36:26 2019  DEPLOYED  nginx-ingress-1.17.1  0.25.1       default
You can see the name is cool and all, but you might want more control over it. Let's delete it with $ helm delete braided-salamander and use the --name switch:
$ helm install stable/nginx-ingress --name <MY_NAME>
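A quick sketch of that flow, assuming my-ingress is the name you pick (any name works), with --purge added so that Helm 2 removes the old auto-generated release completely and frees its name:
$ helm delete braided-salamander --purge
$ helm install stable/nginx-ingress --name my-ingress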
One command, and it creates the following:
### Services
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.0.148.108 <pending> 80:32163/TCP,443:31476/TCP 4s
nginx-ingress-default-backend ClusterIP 10.0.250.36 <none> 80/TCP 4s
### Deployments
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-ingress-controller 0/1 1 0 12s
nginx-ingress-default-backend 1/1 1 1 12s
### Pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-857bc4f987-pzpd7 0/1 Running 0 14s
nginx-ingress-default-backend-77c7c664bb-6cv9w 1/1 Running 0 14s
Cool! Huh?
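One thing to note in the output above: the EXTERNAL-IP of the LoadBalancer service shows <pending> at first, and Azure can take a minute or two to assign a public IP. One way to wait for it (the exact service name may carry your release name as a prefix, depending on how you installed the chart):
$ kubectl get service nginx-ingress-controller --watch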
Let's host some code
Clone the following repository:
$ git clone https://github.com/attosol-samples/profile-server-front.git
Let's build the image, tag it, and push it to your container registry:
$ docker-compose build
$ docker tag profile-server-front <ACR_NAME_ONLY>.azurecr.io/profile-server-front:v1
$ docker push <ACR_NAME_ONLY>.azurecr.io/profile-server-front:v1
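If the push gets rejected with an authentication error, your Docker client is probably not logged in to the registry. Assuming you have the Azure CLI configured as in the earlier parts, something like this should fix it:
$ az acr login --name <ACR_NAME_ONLY>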
Time to update the K8s cluster. Let's execute this from inside the profile-server-front directory...
$ kubectl apply -f k8s.yaml
You will get an output similar to the following:
$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
profile-server-back      1/1     1            1           5m
profile-server-front-1   1/1     1            1           5m
profile-server-front-2   1/1     1            1           5m
$ kubectl get services
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.0.148.108   104.215.194.225   80:32163/TCP,443:31476/TCP   1h
nginx-ingress-default-backend   ClusterIP      10.0.250.36    <none>            80/TCP                       1h
profile-server-back             NodePort       10.0.128.183   <none>            6379:32183/TCP               5m
profile-server-front-1          NodePort       10.0.245.6     <none>            9000:32326/TCP               5m
profile-server-front-2          NodePort       10.0.39.107    <none>            9000:31751/TCP               5m
$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-857bc4f987-pzpd7        1/1     Running   0          1h
nginx-ingress-default-backend-77c7c664bb-6cv9w   1/1     Running   0          1h
profile-server-back-6644f7f54-hlz8j              1/1     Running   0          5m
profile-server-front-1-7c7d7b9f84-fhvv9          1/1     Running   0          5m
profile-server-front-2-6bd4f99cd6-dj9qz          1/1     Running   0          5m
If you are getting a similar output, you are good! It means all your services, deployments, and the ingress are working as expected. Now let's open the browser and hit http://104.215.194.225/one/profile. This is the external IP listed above for nginx-ingress-controller. You should be able to hit http://104.215.194.225/two/profile as well.
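You can also test this from the terminal; a quick sketch with curl (substitute your own external IP):
$ curl http://104.215.194.225/one/profile
$ curl http://104.215.194.225/two/profile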
Notice that Server: one and Server: two will render based on the URL you hit. To understand it better, take a look at the k8s.yaml file. I will break it down so it is easy for you to understand.
Ingress
The Nginx ingress, as you can see, has two different backends, profile-server-front-1 and profile-server-front-2, both listening on port 9000. The paths are /one and /two respectively.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: profile-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: profile-server-front-1
          servicePort: 9000
        path: /one
        # path: /one(/|$)(.*)
      - backend:
          serviceName: profile-server-front-2
          servicePort: 9000
        path: /two
        # path: /two(/|$)(.*)
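The commented-out regex paths hint at an alternative setup: if you wanted /one/profile to reach the backend as just /profile, you would switch to those capture-group paths and add a rewrite annotation. A sketch of the extra annotation (not part of the original manifest; capture-group rewrites need nginx-ingress 0.22 or later):
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2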
Deployment for 1st Front End Server
This deployment is responsible for spinning up pods based on your desired state. It says 1 replica needs to be created using the image <ACR_NAME_ONLY>.azurecr.io/profile-server-front:v1. The Node.js code is written such that it respects the HOSTING_ENDPOINT environment variable, which here is set to one.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: profile-server-front-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: profile-server-front-1
    spec:
      containers:
      - name: profile-server-front-1
        image: <ACR_NAME_ONLY>.azurecr.io/profile-server-front:v1
        ports:
        - containerPort: 9000
        env:
        - name: HOSTING_ENDPOINT
          value: "one"
        - name: NODE_ENV
          value: "production"
Deployment for 2nd Front End Server
It is the same as the first, except for the name and the HOSTING_ENDPOINT environment variable, which is set to two.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: profile-server-front-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: profile-server-front-2
    spec:
      containers:
      - name: profile-server-front-2
        image: <ACR_NAME_ONLY>.azurecr.io/profile-server-front:v1
        ports:
        - containerPort: 9000
        env:
        - name: HOSTING_ENDPOINT
          value: "two"
        - name: NODE_ENV
          value: "production"
The Redis Backend Deployment
This one spins up a pod for the Redis back-end server. The name profile-server-back is important, since the Node.js code connects to this server by that name. Ideally, it should also have been a part of an environment variable, but for now... it is ok.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: profile-server-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: profile-server-back
    spec:
      containers:
      - name: profile-server-back
        image: redislabs/rejson
        ports:
        - containerPort: 6379
The Redis Backend Service
This service ensures that the appropriate port is open and that the front end can connect easily using the name provided.
apiVersion: v1
kind: Service
metadata:
  name: profile-server-back
spec:
  selector:
    name: profile-server-back
  type: NodePort
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
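Since the Service is called profile-server-back, the front-end pods can reach Redis at that DNS name inside the cluster. A quick sanity check, assuming redis-cli is available inside the redislabs/rejson image (the pod name is illustrative):
$ kubectl exec -it <REDIS_POD_NAME> -- redis-cli -h profile-server-back ping
# should print PONG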
Services for Front End 1 & 2
They are needed so the Nginx ingress controller can route the traffic accordingly.
apiVersion: v1
kind: Service
metadata:
  name: profile-server-front-1
spec:
  selector:
    name: profile-server-front-1
  type: NodePort
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: profile-server-front-2
spec:
  selector:
    name: profile-server-front-2
  type: NodePort
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
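To confirm that the selectors of these services actually match the pods, you can inspect the endpoints; each entry should list one pod IP on port 9000:
$ kubectl get endpoints profile-server-front-1 profile-server-front-2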
As you can see, working with Nginx ingress and K8s is not really hard. It does have its own sweet little learning curve, but once you understand the stuff, it is really easy to work with.
Set up SSL
To set up SSL, use the following commands (taken from the cert-manager installation docs):
# Create the namespace for cert-manager
kubectl create namespace cert-manager
# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
# Install the CustomResourceDefinitions and cert-manager itself
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.10.1/cert-manager.yaml
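Before moving on, it is worth checking that the cert-manager pods came up:
# Verify that the cert-manager pods are running
kubectl get pods --namespace cert-manager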
== Issuers ==
Before you can begin issuing certificates, you must configure at least one Issuer or ClusterIssuer resource in your cluster. An Issuer is scoped to a single namespace, and can only fulfill Certificate resources within its own namespace. This is useful in a multi-tenant environment where multiple teams or independent parties operate within a single cluster. On the other hand, a ClusterIssuer is a cluster wide version of an Issuer. It is able to be referenced by Certificate resources in any namespace.
Create a file called letsencrypt-prod.yaml and paste the following content (replace REAL_EMAIL_ID with a valid email address):
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: REAL_EMAIL_ID
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certificate-name
  namespace: dev
spec:
  secretName: certificate-name-tls
  commonName: 'www.site.com'
  dnsNames:
  - www.site.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - www.site.com
    secretName: certificate-name-tls
  rules:
  # - host: www.site.com
  - http:
      paths:
      - backend:
          serviceName: some-service
          servicePort: 80
        path: /service
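Assuming you saved the manifest as letsencrypt-prod.yaml as above, apply it and then keep an eye on the certificate; once the HTTP-01 challenge completes, the describe output should report it as issued (names and namespace as in the manifest):
$ kubectl apply -f letsencrypt-prod.yaml
$ kubectl get certificate --namespace dev
$ kubectl describe certificate certificate-name --namespace dev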
What next?
Well, stay tuned for upcoming articles. You may contact us at contact@attosol.com for your software and consultancy requirements.