
Routing traffic (aka user requests) from the outside in (to a RHEL Atomic Host service)


The Application

As a basic application I will deploy an HTTP pod (based on fedora/apache) and a service replicated onto two hosts. This fulfils the requirement of a highly available service for the end user.

Amazon Web Services (AWS)

AWS provides three major tools to implement our requirements:

Route 53 is AWS’s DNS service. Usage and configuration are straightforward. In addition to the well-known DNS resource records (like A or AAAA), Route 53 is integrated with other AWS services like Elastic Load Balancer.

Whereas an Elastic IP is a permanently allocated IP address that can be dynamically assigned to a given EC2 instance and used with Route 53, an Elastic Load Balancer provides a permanently allocated DNS name that can be referenced within Route 53 (as a kind of virtual DNS RR).

Using only Elastic IPs requires handling load balancing in the DNS system (Route 53), either by manual configuration or by automated processes external to the AWS services. If an Elastic Load Balancer is used, its integration with Route 53 handles most of the internals, such as health checking the service, reconfiguring DNS, etc.

I have chosen to use the Elastic Load Balancer. The assignment of one Elastic IP is only for convenience, so that I have a fixed entry point into my Virtual Private Cloud (VPC, as AWS calls it).
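For reference, a sketch of the same Elastic IP allocation with the AWS CLI; the instance and allocation IDs below are placeholders for the master instance:

# Allocate a new Elastic IP in the VPC; this prints the public IP and
# its allocation ID (placeholder values shown in the next command).
$ aws ec2 allocate-address --domain vpc
# Attach the allocation to the master instance.
$ aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0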

[Figure: AtomicOnAws – overview of the Atomic Host setup on AWS]

AWS Setup

I created three instances on AWS running RHEL Atomic Host. 54.93.167.95 (Elastic IP, internal: 172.31.15.86) is the Kubernetes master, and 172.31.8.233 and 172.31.8.18 are the two Kubernetes minions.
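For completeness, launching one such instance could look like this with the AWS CLI; AMI ID, instance type, key pair, security group, and subnet are placeholders:

# Sketch: launch one RHEL Atomic Host instance into the VPC.
$ aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 \
    --instance-type t2.medium --key-name my-key \
    --security-group-ids sg-xxxxxxxx --subnet-id subnet-xxxxxxxx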

As a result of this first step I get:

[cloud-user@localhost ~]$ sudo kubectl get -o json minions
{
   "metadata": {
       "selfLink": "/api/v1beta1/minions"
   },
   "items": [
       {
           "metadata": {
               "name": "172.31.8.233",
               "resourceVersion": "12",
               "creationTimestamp": null
           },
           "resources": {
               "capacity": {
                   "cpu": 1000,
                   "memory": 3221225472
               }
           }
       },
       {
           "metadata": {
               "name": "172.31.8.18",
               "resourceVersion": "13",
               "creationTimestamp": null
           },
           "resources": {
               "capacity": {
                   "cpu": 1000,
                   "memory": 3221225472
               }
           }
       }
   ]
}

As a service I created a Kubernetes replication controller running two pods that provide HTTP on port 80. It’s just a simple web server based on the Docker image fedora/apache.

[cloud-user@master ~]$ cat >replica.yaml <<EOT
id: httpController
apiVersion: v1beta1
kind: ReplicationController
desiredState:
 replicas: 2
 replicaSelector:
   name: http
 podTemplate:
   desiredState:
     manifest:
       version: v1beta1
       id: http
       containers:
         - name: http
           image: fedora/apache
           ports:
             - containerPort: 80
               hostPort: 80
               protocol: TCP
   labels:
     name: http
EOT
[cloud-user@master ~]$ sudo kubectl create -f replica.yaml
[cloud-user@master ~]$ sudo kubectl get pods
NAME                                   IMAGE(S)        HOST            LABELS      STATUS
d5d751b7-70a4-11e4-b4c4-063194b910ce   fedora/apache   172.31.8.233/   name=http   Running
d5d73ed6-70a4-11e4-b4c4-063194b910ce   fedora/apache   172.31.8.18/    name=http   Running
[cloud-user@master ~]$
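As a quick sanity check (a sketch, assuming the master can reach the minions’ VPC addresses), each minion should answer on 80/tcp:

[cloud-user@master ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://172.31.8.233/
[cloud-user@master ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://172.31.8.18/

Both commands should print 200 once the pods are running.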

On both minions an HTTP server is now serving port 80/tcp. Port 80 is bound to the VPC IP address:

[cloud-user@minion1 ~]$ sudo docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                NAMES
45b987434700        fedora/apache:latest      "/run-apache.sh"    About an hour ago   Up About an hour                         k8s_http.6ac97f65_d5d751b7-70a4-11e4-b4c4-063194b910ce.default.etcd_1416481399_aee49744
747eb021b0e0        kubernetes/pause:latest   "/pause"            About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   k8s_net.e9a68336_d5d751b7-70a4-11e4-b4c4-063194b910ce.default.etcd_1416481399_53c55094
[cloud-user@minion1 ~]$ sudo netstat -tanp | grep :80
tcp6       0      0 :::80                   :::*                    LISTEN      1774/docker-proxy
[cloud-user@minion1 ~]$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
    link/ether 06:03:82:79:3a:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.31.8.233/20 brd 172.31.15.255 scope global dynamic eth0
       valid_lft 3015sec preferred_lft 3015sec
    inet6 fe80::403:82ff:fe79:3af2/64 scope link
       valid_lft forever preferred_lft forever
[cloud-user@minion1 ~]$

Using Amazon’s EC2 Management Console I configured an Elastic Load Balancer (ELB) so that port 80 of the load balancer is forwarded to both AWS EC2 instances (aka the two minions). The Elastic Load Balancer gets a DNS hostname assigned that can be accessed from the outside: web-lb-1600167574.eu-central-1.elb.amazonaws.com. As the Elastic Load Balancer is integrated with Route 53, the ELB updates the DNS zone used to access our service: both minions are included in the zone with A RRs, and the ELB also points to both minions.
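The same ELB setup can be sketched with the AWS CLI; the load balancer name, subnet, and instance IDs are placeholders for my setup:

# Create a classic ELB that forwards port 80 to port 80 of the instances.
$ aws elb create-load-balancer --load-balancer-name web-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-xxxxxxxx
# Register both minion instances with the load balancer.
$ aws elb register-instances-with-load-balancer --load-balancer-name web-lb \
    --instances i-minion1 i-minion2
# Health check: does port 80 answer?
$ aws elb configure-health-check --load-balancer-name web-lb \
    --health-check Target=TCP:80,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2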

In my example I use the domain haslohas.com with Route 53: the hostname web.public.haslohas.com is configured to point to the ELB.
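A sketch of the corresponding Route 53 change; both hosted zone IDs are placeholders (the AliasTarget zone ID must be the ELB’s canonical hosted zone ID for the region):

$ cat >alias.json <<EOT
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "web.public.haslohas.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ZELBZONEXXXXXX",
        "DNSName": "web-lb-1600167574.eu-central-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
EOT
$ aws route53 change-resource-record-sets --hosted-zone-id ZMYZONEXXXXXX \
    --change-batch file://alias.json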

Having a client request some service from web.public.haslohas.com results in the following sequence of steps (a quick client-side check follows the list):

  1. DNS query for web.public.haslohas.com
    1. the dynamic IP addresses of both minions are returned
    2. the client selects the first IP address
  2. HTTP request to 80/tcp
    1. the client sends the request to 80/tcp of minion2
    2. minion2 receives the request on 80/tcp on the public dynamic IP address of the EC2 instance
    3. the request is forwarded by the docker-proxy process to 80/tcp of the pod/Docker container
    4. the httpd process within the container receives the request and answers it
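The DNS and HTTP steps can be verified from any client:

$ dig +short web.public.haslohas.com
$ curl -sI http://web.public.haslohas.com/ | head -n 1

dig lists the A records currently served for the name, and curl should return an HTTP status line answered by one of the minions.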

If one minion goes down, or the pod is stopped on one minion, the ELB health check (does port 80 answer?) will notice and take the EC2 instance out of service.
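The health state can be inspected with the AWS CLI (load balancer name as in the placeholder setup above):

$ aws elb describe-instance-health --load-balancer-name web-lb

Healthy minions report a state of InService; a failed one drops to OutOfService until it answers on port 80 again.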

Summary

AWS provides the services to run RHEL Atomic Host in a multi-host setup with a Kubernetes master and 1 to n minions. On top of that, EC2 and ELB provide a load-balancing service, and the ELB integration with Route 53 automates DNS reconfiguration on changes within EC2 or Kubernetes.

