Docker! Have you heard of it before? Either way, after this article you will have learned something new about Docker or revised concepts you already know. It's okay if you are not familiar with software concepts, how companies work, and so on. Not everyone knows everything; we all learn as we evolve, and this is the perfect time to start your evolution. No prerequisites are required, all you need is the determination to learn a new tool today. Without any delay, let's get started!
Here's a tip for learning something new without forgetting it: ask yourself these questions about it.
Overview of Workshop
What came before it?
Why use it?
How do you use it?
Yes, it's that simple, but it really helps you learn technologies and tools in a short time. In this article I'll teach Docker in this format, because I believe it will help you understand it better.
What’s before docker?
Before Docker came into the picture, most companies used one of two methodologies to deploy their applications. (Deploying simply means making your application live for a general audience like you and me.)
Traditional Bare Metal Services
Virtual Machines
In the first methodology, bare metal services, companies maintained a separate server (the machine where you deploy your application) for each application. Can you guess what problems this led to?
- Managing multiple servers was hectic
- Underutilization of resources: for example, if three applications each need only a small share of a server's hardware (RAM, storage, processors, etc.), running all three on a single highly configured server is better than buying and maintaining a separate server for each
- Expensive
- Limited scalability
- Maintenance overhead
- Resource allocation is a challenge
- Isolation and security concerns
- Complex disaster recovery
- Limited automation
Virtual Machines
VMs became the next widely used methodology for deploying applications. Imagine running multiple computers on the same hardware: that is the concept of a virtual machine. You have a server, you install virtual machines on top of it, and you run applications inside them.
Take the same example from bare metal services: instead of multiple servers, you can deploy all three applications on a single server using virtual machines. This provides isolation between applications, reduces cost, maximizes resource utilization, and so on. Even so, VMs are not an ideal solution, because:
Virtual machines take more time to start and stop
Each virtual machine requires its own separate operating system
OS licensing costs add up
Managing many virtual machines is difficult
Because each VM gets its own OS and its own slice of memory, the same underutilization problem seen with bare metal servers can reappear inside the VMs
So, to overcome all these drawbacks and provide services that are cheaper, more reliable, and faster, we have something known as Containers!
What is Docker?
Docker is a tool that helps us containerize applications. But what do we mean by containers?
Containers - An isolated box in which you run your application with all its dependencies, code files, and other requirements. Containers are created from Images of an application.
Containers don't each require a separate OS to run; they share the host OS, which is typically the OS of the server.
So they let you utilize the hardware efficiently, without worrying about OS licensing.
You can store data dynamically in containers
They provide isolation, though not as strong as virtual machines do
Containers can start and stop within seconds
It is easy to deploy and manage large numbers of containers with a few commands
Image - An image contains the same things a container does: all the dependencies and code files required to run the application. So what is the difference between containers and images?
Imagine you've prepared your favorite dish and you really love it. You want to share it with a friend who lives in another city, miles away from your place. What can you do? Do you parcel the dish to them, or do you just share the recipe?
Obviously, you'll share the recipe, so they can prepare the dish with the same measurements, ingredients, and tips, and get the same taste and flavors you experienced, right?
In the same way, instead of sending the whole application to a teammate or anyone else, you can send the image, which contains the dependencies, requirements, and code needed to run the application.
Don't overthink it:
Containers —> the active form of an application
Images —> the passive form of an application; by running an image you get a container.
In other words, while it sits on disk it is in the passive state and is called an image; when you run it, it becomes the active form and is called a container.
If you are a developer, you may have faced the situation where you say "This works on my machine!" and your operations team asks "Then why doesn't it work on mine?". Docker solves this problem: once you have an image containing all the dependencies, code files, and requirements, you can run the application on any platform, so sharing code no longer causes this issue.
How does Docker work?
When you work with Docker, it's important to know its architecture. This knowledge helps you debug while working with it.
The architecture of Docker is divided into 6 components - Containers, Images, Registry, Daemon, Client and Repository.
You already know about containers, so let us look at images more closely. As mentioned before, we include dependencies in the image, right? Rather than bundling every dependency file you would normally install on a development machine, you build your custom image on top of existing images that already provide those dependencies.
Organizations like Python and NGINX publish their own official images that act as dependencies; using these as base images, you can create your own image.
For example, if you've written code using Python and a database, you can include those base images in your own Docker image and build on top of them. Don't worry if this doesn't click yet; you'll get it in the hands-on section later in this article.
Don't assume the article is poorly documented; DevOps is full of tools that are confusing at first, but with practice you'll use them seamlessly and without hesitation.
To see which images are available, you can go to Docker Hub and search, or use the commands we'll discuss shortly.
Basically, the terminal on your system acts as the client through which you use Docker. When you give a command, the client talks to the Docker Daemon over APIs, and the Daemon acts accordingly.
If you ask for an image from the registry, the daemon pulls it from the registry to your local system. If you ask it to push, it looks up the image on your system and processes the request accordingly.
The daemon is the central hub of Docker: it processes requests and responds according to the commands you give.
What is Registry then?
A registry is a place where you store all your images; Docker Hub is the most widely used public registry.
What is Docker Hub then?
Just like the Git and GitHub relationship, Docker and Docker Hub are related. With Docker you create images, run containers, and manage them; to store those images online and use them whenever you need, you can use Docker Hub, where hosting them is free.
Within a registry, images are grouped into repositories. Just as you organize files into folders, repositories help you organize your images on Docker Hub.
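To make the username/repository:tag naming concrete, here is a small shell sketch that splits a full image reference into its parts. The reference below reuses the example repo from this article; the variable names are mine, not anything Docker defines:

```shell
# A full image reference: Docker Hub username, repository (image) name, and tag
ref="manju2033/test:latest"

# Split it using plain shell parameter expansion
repo="${ref%%:*}"    # part before the ":"  -> manju2033/test
tag="${ref##*:}"     # part after the ":"   -> latest
user="${repo%%/*}"   # Docker Hub username  -> manju2033
image="${repo##*/}"  # repository name      -> test

echo "user=$user image=$image tag=$tag"
```

When the tag is omitted (as in `docker pull hello-world`), Docker assumes `latest`.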
By now you've learned about Docker, Docker Hub, and registries. Now we'll dive into hands-on practice with Docker.
For installation, you can refer to this blog, where the steps are clearly explained.
Click here for the Docker installation guide
docker -v
#Checks the Docker version you've installed. Whatever the version, what matters is installing it and practicing.
docker images
# This command helps you to list all the images that you've pulled from the Registry.
docker pull hello-world
# This pulls the image of Hello-world from the registry of Docker
docker run <image-id>
# You can run a container not only by image name but also by its local image ID.
docker ps
#To list all the containers which are currently running
sudo docker run ubuntu:latest
# Docker first searches for the image locally; if it's available, it runs it directly.
# Otherwise, it pulls the image from Docker Hub automatically and then runs it.
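The lookup-then-pull behaviour of `docker run` can be sketched in plain shell. This is only a toy model with stub functions standing in for the real image store and registry (none of these names are actual Docker internals); it just shows the decision Docker makes:

```shell
# Stub "local image store": a plain list of image names
local_images="ubuntu:latest alpine:3.19"

image_exists_locally() {
    # Check whether the name appears in our stub store
    case " $local_images " in
        *" $1 "*) return 0 ;;
        *) return 1 ;;
    esac
}

pull_image() {
    # Stand-in for a registry pull
    echo "Pulling $1 from Docker Hub..."
    local_images="$local_images $1"
}

run_image() {
    # docker run: use the local copy if present, otherwise pull first
    if image_exists_locally "$1"; then
        echo "Found $1 locally"
    else
        pull_image "$1"
    fi
    echo "Starting container from $1"
}

run_image "ubuntu:latest"   # already local: runs straight away
run_image "nginx:latest"    # not local: pulled first, then run
```

This is why the first `docker run ubuntu:latest` takes a while (download) and later runs are nearly instant.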
docker ps
docker stop <container_id> <container_id>
#Stops the Ubuntu container; you can also stop multiple containers at once by giving several IDs.
docker run -d -it ubuntu
# -d runs Ubuntu in the background, i.e., detached (daemon) mode; -it lets you interact with the Ubuntu shell.
docker ps -a
# Lists all containers, including stopped ones
docker run -h GDSC_PU -it ubuntu:latest /bin/bash
# -h sets the hostname inside the container
docker run -p 8000:80 manju2033/test
#Pulls the image I've already pushed to my repository on Docker Hub and maps the container's port 80 to port 8000 on the host
docker rm <container_id>
#removes the container with the ID
sudo docker rm -f <container_id>
#Removes a container forcefully, even if it is running
sudo docker rmi <image_id> or <image_name>
#Removes an image by its ID or name
sudo docker inspect <container_id>
sudo docker inspect <container_id> | grep IPAddress
# inspect gives detailed metadata about a container; piping through grep filters out the IP address
sudo docker rm $(docker ps -aq)
#Removes all containers at once (they must be stopped first, or add -f to force)
docker rm -v $(docker ps -aq -f status=exited)
# Removes all containers that have exited, along with their anonymous volumes (-v)
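The `$( ... )` in these commands is plain shell command substitution, not a Docker feature: the inner command's output becomes the outer command's argument list. Here is the same pattern with ordinary files (the temp paths are my own example), so you can see how it composes:

```shell
# Stand-in "containers": a few plain files in a temp directory
mkdir -p /tmp/demo_containers
touch /tmp/demo_containers/c1 /tmp/demo_containers/c2 /tmp/demo_containers/c3

# The inner command produces a list; the shell substitutes that list as
# arguments to the outer command, exactly as in `docker rm $(docker ps -aq)`
rm $(ls -d /tmp/demo_containers/*)

# All three "containers" are gone
ls -A /tmp/demo_containers
```

The `-f status=exited` filter in the Docker version simply narrows the inner list before the substitution happens.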
sudo docker history <image_id>
# To get the history (layers) of an image
Dockerfile -
Let us create our own Dockerfile. First, create a directory as shown below.
mkdir first_image
cd first_image
touch myscript.sh
nano myscript.sh
#in myscript.sh file, include these lines of code--
#!/bin/bash
rows=5
for ((i=1; i<=rows; i++))
do
for ((j=1; j<=i; j++))
do
echo -n "* "
done
echo
done
echo "Google Developers Student Club Parul University"
Now, in the same directory, create a file named Dockerfile with the following content:
# Use the official Ubuntu base image
FROM ubuntu:latest
# Set the working directory inside the container
WORKDIR /app
# Copy the shell script into the container
COPY myscript.sh .
# Make the script executable
RUN chmod +x myscript.sh
# Run the shell script when the container starts
CMD ["./myscript.sh"]
Note: These two files should be in the same directory.
sudo docker build -t first_image .
Now, after writing and saving the Dockerfile, run this command in the directory that contains the Dockerfile.
-t stands for tag; it gives the image a name by which we can identify it.
The . at the end is the build context; it tells Docker to use the current directory and its files for the build.
To push this to your docker hub,
docker push [USERNAME/]IMAGE_NAME[:TAG]
Dockerfile Instructions
CMD - Specifies the default command to run when a container is started.
COPY - Copies files or directories from the host system to the container's filesystem.
ENTRYPOINT - Configures a container to run as an executable, setting the default application.
ENV - Sets environment variables in the container.
EXPOSE - Documents the network ports the container listens on at runtime.
FROM - Sets the base image for subsequent instructions in the Dockerfile.
RUN - Executes a command in a new layer on top of the current image and commits the result.
VOLUME - Creates a mount point and/or declares a volume for a container.
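Putting several of these together, here is a small illustrative Dockerfile for a hypothetical Python app (the file names and port are made up for the example, not part of this article's project):

```dockerfile
# FROM: base image for everything that follows
FROM python:3.12-slim
# ENV: set an environment variable inside the container
ENV APP_ENV=production
# COPY: bring a file from the host into the image
COPY app.py /app/app.py
# RUN: execute a build-time command, committed as a new layer
RUN chmod +x /app/app.py
# VOLUME: declare a mount point for persistent data
VOLUME /app/data
# EXPOSE: document the port the application listens on
EXPOSE 8000
# ENTRYPOINT + CMD: the executable and its default argument
ENTRYPOINT ["python"]
CMD ["/app/app.py"]
```

Note that EXPOSE only documents the port; you still need `-p` at run time to actually publish it to the host.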
DIY - Try out the commands below; they help you manage containers and get meta information about Docker.
Docker managing commands --
sudo docker create <image_id>
sudo docker start <container_id>
sudo docker exec <container_id> <command>
sudo docker kill <container_id>
sudo docker restart <container_id>
sudo docker pause <container_id>
sudo docker unpause <container_id>
#docker info commands
docker info
Prints various information about the Docker system and host.
docker help <command name>
Prints usage and help information for the given subcommand. Identical to running a command with the --help flag.
docker version
Prints Docker version information for client and server as well as the version of Go used in compilation.
#Docker registry commands
docker login - Logs in to a Docker registry such as Docker Hub.
docker pull - Downloads an image from a registry to your local system.
docker push - Uploads a local image to a registry.
docker logout - Logs out of the registry.
docker search - Searches Docker Hub for images matching the given term.
Resources
Here are some great resources for you to learn about Docker and implement Microservices
You can get these books from my GitHub repo. I suggest reading books rather than watching YouTube videos or enrolling in a course; these books will help you master Docker.
For reference, you can watch these videos on the YouTube channel, to get to know more stuff about DevOps.
These are some of the best videos that I’ve ever watched on YouTube about Docker
It's okay if you feel demotivated while getting into DevOps; give it some time and learn from the best resources. Take a deep breath, take a few minutes' break, and restart! That's all you need to do when you feel stuck while learning DevOps. Ping me if you ever have doubts or need any help.
I am Manjunath, I am a DevOps and Open-Source Lead at the Google Developers Student Club, at Parul University.