DOCKER

INSTALL DOCKER-CE (Ubuntu 22.04)

Prerequisites

  • A server running Ubuntu 22.04

  • User privileges: root, or a non-root user with sudo privileges

Step 1. Update the System

  • sudo apt update -y && sudo apt upgrade -y

Step 2. Install Docker Dependencies

  • sudo apt install apt-transport-https curl gnupg-agent ca-certificates software-properties-common -y

Step 3. Add Docker Key and Repo

  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable" (jammy is the codename for 22.04 - use the codename for your release)

Step 4. Install Docker

  • sudo apt install docker-ce docker-ce-cli containerd.io -y

  • sudo systemctl enable docker && sudo systemctl start docker

  • sudo systemctl status docker
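As an optional sanity check (not one of the original steps), you can confirm the engine works end to end by running Docker's hello-world test image:

  • sudo docker run --rm hello-world    (pulls the test image and prints a confirmation message)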

INSTALL DOCKER-CE (Ubuntu 22.04) - RASPBERRY PI 4

 
sudo apt install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 echo \
  "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update && sudo apt install docker-ce docker-ce-cli containerd.io -y
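As an optional check on the Pi (assumed here, not part of the steps above), confirm the arm64 packages were picked up and that the daemon is enabled and running:

dpkg --print-architecture    (should print arm64 on a 64-bit OS)
sudo systemctl enable --now docker
sudo docker version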

Docker Secrets

Docker includes a secrets management solution, but it doesn't work with standalone containers - it is designed for Docker Swarm. Docker Compose, however, has its own secrets support, which is the most accessible option for everyday use and your only option if you're not running Swarm. Secrets are defined in the compose file under the top-level secrets field, and each named secret references a file in your working directory. When you run docker compose up, Compose automatically mounts that file into the container.

Secrets are mounted at a predictable path inside the container: /run/secrets/<secret_name>

version: "3"
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
    secrets:
      - mysql_root_password
secrets:
  mysql_root_password:
    file: /opt/docker/password.txt

Let’s analyze what’s happening in this file

  • The secrets section defines a single secret called mysql_root_password.

  • The secret’s value is read from /opt/docker/password.txt on the host.

  • The mysql service references the secret within its own secrets field.

  • When the container starts, the contents of password.txt will be read and mounted to /run/secrets/mysql_root_password (the name of the secret) inside the container.

  • The MYSQL_ROOT_PASSWORD_FILE environment variable instructs the official MySQL Docker image to read its password from the mounted file.

Testing Docker Compose Secrets

To test this example, first create the password file in /opt/docker/ (or wherever you pointed the secret):

$ echo foobar > /opt/docker/password.txt

You can now use Docker Compose to bring up your container:

$ docker compose up -d

Inspecting the /run/secrets directory inside the container will confirm the secret’s existence:

$ docker compose exec -it mysql bash

bash-4.4# ls /run/secrets
mysql_root_password

bash-4.4# cat /run/secrets/mysql_root_password
foobar

The value can’t be directly accessed from outside the container. The output from docker inspect will show that password.txt is mounted to /run/secrets/mysql_root_password, but its content won’t be displayed.

This demonstrates how Compose implements its secrets management: the secret is injected into the container using a bind mount from the file on the host (/opt/docker/ in this example).
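A quick way to see this for yourself is to list the container's mounts with docker inspect (the container name below is a placeholder - check docker ps for the name Compose generated on your host):

$ docker inspect <container_name> --format '{{ json .Mounts }}'

The fuller example below extends the same idea to a MySQL + WordPress stack sharing two secrets.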

services:
   db:
     image: mysql:latest
     volumes:
       - db_data:/var/lib/mysql
     environment:
       MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD_FILE: /run/secrets/db_password
     secrets:
       - db_root_password
       - db_password

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password
     secrets:
       - db_password


secrets:
   db_password:
     file: db_password.txt
   db_root_password:
     file: db_root_password.txt

volumes:
    db_data:
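To try this stack, create the two secret files next to the compose file (the values below are just placeholders) and bring it up:

$ echo examplerootpw > db_root_password.txt
$ echo examplewppw > db_password.txt
$ docker compose up -d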

TL;DR

docker network ls
docker network create asgard
docker network rm asgard
docker network disconnect bridge thor
docker network connect asgard thor
docker network create -d macvlan --subnet 10.0.30.0/24 --gateway 10.0.30.1 -o parent=eth0 newasgard

docker inspect <container_name> -f "{{json .NetworkSettings.Networks }}"

docker run -itd --rm --name thor nginx
docker run -itd --rm -p 8080:80 --name thor nginx
docker run -itd --rm --network asgard --name loki alpine

docker stop loki

bridge link

docker inspect bridge/asgard/none etc

docker exec -it thor sh

docker ps
docker ps -a

docker run -itd --rm --network newasgard --ip 10.0.30.11 --name thor busybox

You can find unused images using the command: docker images -f dangling=true

and just a list of their IDs: docker images -q -f dangling=true

In case you want to delete them: docker rmi $(docker images -q -f dangling=true)
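Alternatively, you can prune them in one step: docker image prune -f    (-f skips the confirmation prompt)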

Docker Compose

docker compose down
docker compose pull
docker compose up -d

DOCKER NETWORKING - 7 Different Types

Install docker:
sudo apt update
sudo apt install docker.io -y

Add your user to the docker group: sudo usermod -aG docker $USER

1 - [The Default Bridge]

When you run "ip a" you will see new virtual BRIDGE interface (docker0) - auto IP = 172.17.0.1/16

DRIVER = NETWORK TYPE

ip a
sudo docker network ls    (DRIVER = network type; you will see bridge, host and none)
sudo docker run -itd --rm --name thor alpine    (-itd = interactive + detached; --rm = clean up the container and remove its file system when it exits)

sudo docker run -itd --rm --name odin nginx
sudo docker run -itd --rm --name loki alpine

bridge link    (shows the containers' virtual interfaces attached to the docker0 bridge)

sudo docker inspect bridge    (you can see the 3 containers with their IP addresses: 172.17.0.2/16, 172.17.0.3/16 and 172.17.0.4/16)

sudo docker exec -it thor sh    (drops you into a shell inside the container)
From inside, ping the other containers and the internet (1.1.1.1) - all good. That is the bridged network: the containers sit on their own subnet, but thanks to NAT MASQUERADE (many-to-one) they can reach each other as well as the internet.

BUT you can't connect to them from outside - they are in their own isolated area. To reach containers on a bridged network you HAVE TO EXPOSE PORTS.

So, odin is running nginx (a web server) but we can't get to it because of that isolation, so we have to publish it on port 8080. Redeploy odin:
sudo docker stop odin
sudo docker run -itd --rm -p 8080:80 --name odin nginx
sudo docker ps

Now browsing to the Docker host's address on the published port (10.0.30.2:8080 in this setup) takes you straight to odin's nginx welcome page.
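You can also test the published port from another machine on the LAN with curl (substitute your own Docker host's IP for the one used here):

curl -I http://10.0.30.2:8080    (should come back with HTTP/1.1 200 OK from odin's nginx)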

2 - [User-Defined Bridge] -Built in DNS RESOLUTION

Isolated Networks - odin/loki cannot ping thor, stormbreaker, etc

Just about same as default bridge, but you create your own networks

sudo docker network create asgard ---- creates a network asgard

ip a    (you will see a new bridge interface, br-e1xxxx, with its own address, alongside the default docker0 bridge at 172.17.0.1/16)

sudo docker network ls    (you will see the network "asgard" with its network type (DRIVER) listed as "bridge")

Let's put 2 containers into the asgard network:

sudo docker run -itd --rm --network asgard --name loki alpine
sudo docker run -itd --rm --network asgard --name odin alpine

Or you can move a container from one network to another (except if it's already on the 'host' network):

docker network disconnect bridge thor
docker network connect asgard thor

ip a ---------- see new virtual interfaces created

bridge link --- will see new virtual interfaces "attached to" virtual bridge created for asgard br-e1xxx

sudo docker network inspect asgard    (you will see the 2 containers loki and odin with their new asgard IPs, 172.18.0.2/16 and 172.18.0.3/16)
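The user-defined bridge also gives you the built-in DNS resolution mentioned in the heading, so containers can reach each other by name - a quick check, assuming loki and odin are both on asgard:

sudo docker exec -it loki sh
/ # ping -c 3 odin    (the name resolves via Docker's embedded DNS, which only works on user-defined networks)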

3- [HOST NETWORK]

STORMBREAKER SITS ON THE HOST AND USES THE HOST'S IP ADDRESS

One of the built-in default networks (see 'docker network ls'). The container "lives directly on the host", so you don't need to expose ports. If you browse to the IP of the host (no port needed), you get the nginx page.

Let's create a container 'freya' and attach it to the built-in "host" network:
docker run -itd --rm --network host --name freya nginx
Go to the host IP and you will see the NGINX welcome page.
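A quick way to confirm freya is bound straight onto the host's own port 80 (assuming the container above is running):

curl -I http://localhost    (nginx answers on the host's own address, no -p mapping involved)
sudo ss -tlnp | grep ':80'    (the listener shows up in the host's network namespace)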

4- [MACVLAN NETWORK] - Two Modes: Bridge & 802.1q

BRIDGE MODE :

All the benefits of a bridge network, except the containers are connected directly to the network. MACVLAN connects the containers to the PHYSICAL NETWORK as if their network interfaces were plugged straight into the switch, acting like PHYSICAL MACHINES. They even get their own MAC addresses as well as their own IP addresses from the LAN. So far we have been using the built-in network types (the DRIVER column in 'docker network ls'); with MACVLAN we have to specify the driver ourselves using the "-d" flag.

CREATE MACVLAN NETWORK

docker network create -d macvlan \
    --subnet 10.0.30.0/24 \
    --gateway 10.0.30.1 \
    -o parent=ens18 \
    newasgard

The backslashes continue the command over several lines. The subnet and gateway here are my home IoT network (for other networks see 802.1q MODE below). The -o parent= option binds the network to the host's physical interface, and newasgard is the name given to the new network.

OR: docker network create -d macvlan --subnet 10.0.30.0/24 --gateway 10.0.30.1 -o parent=eth0 newasgard

ADDING CONTAINERS TO MACVLAN NETWORK

STATIC IPs

IP            CONTAINER    IMAGE
10.0.30.11    thor         busybox
10.0.30.12    odin         nginx
10.0.30.13    loki         alpine
10.0.30.14    freya        nginx

docker run -itd --rm --network newasgard --ip 10.0.30.11 --name thor busybox
docker run -itd --rm --network newasgard --ip 10.0.30.12 --name odin nginx
docker run -itd --rm --network newasgard --ip 10.0.30.13 --name loki alpine
docker run -itd --rm --network newasgard --ip 10.0.30.14 --name freya nginx
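A quick check from inside thor that it really sits on the LAN (the addresses follow the table above - adjust them for your own network):

docker exec -it thor sh
/ # ip addr show eth0    (shows 10.0.30.11/24 and the container's own MAC address)
/ # ping -c 3 10.0.30.1    (the LAN gateway answers directly, no NAT involved)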

MACVLAN - 802.1q Mode

Let's assume:
VLAN 20 = 192.168.20.0/24
VLAN 30 = 192.168.30.0/24

Delete any existing MACVLAN network - remember to stop and remove any containers attached to it first:
docker stop thor odin loki
docker network rm newasgard    (or whatever you named your macvlan network)

Recreate the macvlan network on sub-interface .20:
docker network create -d macvlan --subnet 10.0.50.0/24 --gateway 10.0.50.1 -o parent=ens18.20 macvlan20

Do the same for the .30 network:
docker network create -d macvlan --subnet 10.0.40.0/24 --gateway 10.0.40.1 -o parent=ens18.30 macvlan30
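Containers can then be attached to each VLAN-tagged network just as before (the names and addresses here are only illustrative, following the subnets used above):

docker run -itd --rm --network macvlan20 --ip 10.0.50.11 --name thor busybox
docker run -itd --rm --network macvlan30 --ip 10.0.40.11 --name odin nginx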

5 - [IPVLAN (L2)]

The containers will share the exact same MAC address as the physical network card (ens18).

docker network create -d ipvlan \
    --subnet 10.0.30.0/24 \
    --gateway 10.0.30.1 \
    -o parent=ens18 \
    verynewasgard

(You don't have to specify that it's L2 - that is the default mode.)

docker run -itd --rm --network verynewasgard --ip 10.0.30.12 --name loki nginx

docker exec -it loki sh
Check that it can ping locally and out to the internet.
ip a - check the MAC address of the container ("b6:74:cb:74:15:e0").
Exit, then check the MAC address of ens18 on the host (ip a): b6:74:cb:74:15:e0 - the SAME!

6 - [IPVLAN (L3)]

Pure layer 3: IP addresses, routes and routing - no more ARP, no MAC addresses, no broadcast traffic!

Create an IPVLAN L3 network with 2 subnets (giving control and isolation of the networks): 192.168.94.0/24 and 192.168.95.0/24.

Delete the IPVLAN network from the previous example - you can't have more than one type of IPVLAN per interface card:

docker stop loki
docker network rm verynewasgard

Create the IPvlan L3 network

$ docker network create -d ipvlan \
>   --subnet=192.168.94.0/24 \
>   --subnet=192.168.95.0/24 \
>   -o parent=ens18 -o ipvlan_mode=l3 \
>   l3_farms

docker run -itd --rm --network l3_farms --ip 192.168.94.7 --name loki busybox

docker run -itd --rm --network l3_farms --ip 192.168.94.8 --name odin busybox

docker run -itd --rm --network l3_farms --ip 192.168.95.7 --name trelowia busybox

docker run -itd --rm --network l3_farms --ip 192.168.95.8 --name hawkstone busybox sh

With the "parent"option all networks connected to the ens18 interface can ping each other without a router. I had to add static routes on the cisco l3 switch as well as the mikrotik router pointing to 10.0.30.2 (the vm on proxmox running docker and the containers)

docker exec -it loki sh
From inside loki: ping odin, ping trelowia, ping hawkstone, ping 1.1.1.1, ping www.bbc.co.uk - ALL WORK!

Ping from a laptop to 192.168.94.7, 192.168.94.8, 192.168.95.7 and 192.168.95.8 - all OK :)
