Docker - Knowledge Base Archives - Hivelocity Hosting
https://www.hivelocity.net/kb/tag/docker/
Wed, 06 Dec 2023

Working with Docker: Must-Know Docker Commands
https://www.hivelocity.net/kb/working-with-docker-must-know-docker-commands/
Tue, 30 May 2023
Hero image of the Docker logo with text reading "Part 5: Working with Docker"

Welcome back to our series covering all things Docker. In our previous article, Docker Advanced: Deploying with Docker-Compose, we covered the basics of Docker-Compose, how to create a docker-compose.yaml file, and the structure of commands within Docker-Compose. In this article, we'll cover a selection of basic Docker commands which are essential for any system administrator managing containers within Docker.

Looking for information on a specific command? Use the table of contents below to jump to any section you're specifically looking for, or read on for a general overview of the six commands you most need to know.

docker ps Command

So let's say you've built a container or two in Docker. First things first: how can we tell which containers are actively running? We can get this information using the command docker ps, which displays a list of running containers. This is the Docker counterpart to the ps command in Linux.

Running the command in our terminal we see:

[hivelocity@fedora Docker-Article]$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS       PORTS                                       NAMES
dccb3084edee   ghost:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours   0.0.0.0:8080->2368/tcp, :::8080->2368/tcp   docker-article_ghost_1
3532dab3e08f   mysql:8.0      "docker-entrypoint.s…"   4 hours ago   Up 4 hours   3306/tcp, 33060/tcp                         docker-article_db_1
[hivelocity@fedora Docker-Article]$

This shows us a plethora of useful information on the running state of our containers. This command is useful for checking if a container deployed correctly, verifying container/host ports, and much more.
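A couple of commonly used variations of docker ps are worth knowing as well. As a sketch (the --format template below is just one illustrative column layout):

```shell
# Include stopped and exited containers, not just running ones
docker ps -a

# Print only container IDs (handy for scripting)
docker ps -q

# Customize the columns shown; this template is one example layout
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```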

docker logs Command

So now that we know our containers are running, we can use the docker logs command to display the logging output of any running container. Passing the -n (or --tail) option limits the output to the last N lines.
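For example, assuming the ghost container from our compose file is running, these variants are often handy:

```shell
# Show only the last 50 lines of the ghost container's logs
docker logs --tail 50 ghost

# Follow the log output live, similar to tail -f (Ctrl+C to stop)
docker logs -f ghost

# Prefix each log line with a timestamp
docker logs -t ghost
```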

But when would this command be useful?

Well, there are many instances where an application will refuse to deploy correctly, or its networking may be nonfunctional. It may not be apparent at a glance what the problem is, but thankfully, Docker makes retrieving logs easy with just a single command.

For instance, I wonder what our ghost container will say if I kill the db container?

[hivelocity@fedora Docker-Article]$ docker kill db
db
[hivelocity@fedora Docker-Article]$ docker logs ghost
[2022-11-30 05:45:33] INFO Ghost is running in production...
[2022-11-30 05:45:33] INFO Your site is now available on http://localhost:8080/
[2022-11-30 05:45:33] INFO Ctrl+C to shut down
[2022-11-30 05:45:33] INFO Ghost server started in 2.63s
[2022-11-30 05:45:34] INFO Database is in a ready state.
[2022-11-30 05:45:34] INFO Ghost database ready in 3.601s
[2022-11-30 05:45:41] INFO Ghost URL Service Ready in 10.157s
[2022-11-30 05:45:41] INFO Adding offloaded job to the queue
[2022-11-30 05:45:41] INFO Scheduling job clean-expired-comped at 59 5 0 * * *. Next run on: Thu Dec 01 2022 00:05:59 GMT+0000 (Coordinated Universal Time)
[2022-11-30 05:45:41] INFO Ghost booted in 10.353s
[2022-11-30 05:45:41] INFO Adding offloaded job to the queue
[2022-11-30 05:45:41] INFO Scheduling job update-check at 44 19 0 * * *. Next run on: Thu Dec 01 2022 00:19:44 GMT+0000 (Coordinated Universal Time)
Connection Error: Error: Connection lost: The server closed the connection.
Connection Error: Error: Connection lost: The server closed the connection.
[2022-11-30 05:50:01] ERROR Unhandled rejection: getaddrinfo EAI_AGAIN db

getaddrinfo EAI_AGAIN db
Error Code:
    EAI_AGAIN
----------------------------------------
Error: getaddrinfo EAI_AGAIN db
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)

[2022-11-30 05:50:38] WARN Ghost is shutting down
As you can see above, thanks to the Docker logs, we can see exactly what the problem is. If we want to investigate the db further (say we didn’t already know we killed it), we can run docker logs db for logs on the database container too.

docker restart Command

This next command, docker restart, is rather straightforward. It restarts a Docker container. This command is useful when changes are made to configuration files, or for troubleshooting purposes.
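As a quick sketch, using the ghost container from our compose file:

```shell
# Restart a single container
docker restart ghost

# Allow up to 30 seconds for a graceful stop before the restart forces it
docker restart -t 30 ghost
```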

docker network ls Command

The command docker network ls lists out all available networks within Docker. For example, running this command, we get the following output:

[hivelocity@fedora Docker-Article]$ docker network ls
NETWORK ID     NAME                       DRIVER    SCOPE
0ccfeda06a30   bridge                     bridge    local
ddc14603bb6e   docker-article_ghost-net   bridge    local
5e9387d96b95   host                       host      local
902ca789121a   none                       null      local

Here we can see that we have four networks. Three are created by default by Docker, and the docker-article_ghost-net is the one we defined in our Docker-Compose file from the previous article. There isn’t much to be done with the three networks created by default, as most administration is done with the networks we define with our containers.

docker network inspect Command

The docker network inspect command allows us to inspect networks and view the gateway, subnet, and containers inside a network.

[hivelocity@fedora Docker-Article]$ docker network inspect docker-article_ghost-net
[
    {
        "Name": "docker-article_ghost-net",
        "Id": "ddc14603bb6ec254607379bd97fd712eafe93ffc9dee63025a70a03d42c08efb",
        "Created": "2022-11-29T19:42:28.49966953-06:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "6933100da8ea8d61f64f19dc283fb2bda800adf3ece6c623a66c8b9e9629b3fa": {
                "Name": "ghost",
                "EndpointID": "dd4c19f1c658a51c8e97e3d59986c07838ecb5354ce70b9ff6d73dca39517816",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "ghost-net",
            "com.docker.compose.project": "docker-article",
            "com.docker.compose.version": "1.29.2"
        }
    }
]

Here we can see some useful metadata along with the IP addresses for our individual containers. These IP addresses exist within Docker, so they will not be accessible outside the host machine. All Docker network addresses exist within private IP address space, which can be customized within Docker. 
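When you only need a single field out of that JSON, docker network inspect also accepts a Go template via the --format flag, which saves piping through other tools. A sketch using our network:

```shell
# Print just the subnet(s) defined for the network
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' docker-article_ghost-net

# Print each attached container's name and internal IP address
docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' docker-article_ghost-net
```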

docker exec Command

The docker exec command allows us to run new commands inside running containers. Most commonly it is called with the -it flags: -i (interactive) keeps STDIN open, and -t allocates a pseudo-TTY. Together, they give us a working terminal inside the container.

For instance we can run docker exec -it ghost bash to start a bash shell within our Ghost container.

[hivelocity@fedora Docker-Article]$ docker exec -it ghost bash
root@6933100da8ea:/var/lib/ghost# ls
config.development.json  config.production.json  content  content.orig  current  versions

This command is extremely useful for debugging containers or for learning more about how they work.
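You don't always need a full shell; docker exec can also run a single command and exit. A couple of examples based on our ghost and db containers (the paths and credentials come from the compose file earlier in this series):

```shell
# List Ghost's content directory without opening an interactive shell
docker exec ghost ls /var/lib/ghost/content

# Open a MySQL prompt inside the db container (prompts for the root password)
docker exec -it db mysql -u root -p
```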


Other Common Docker Commands

Although this article only covers six of the most essential Docker commands, there are many more useful commands available for working with containers. Thankfully, all of these commands can be found whenever needed in the official Docker CLI reference.

So, there you have it. We’ve learned what Docker is, how containers run, how to make Docker-Compose files, and some examples of commonly used Docker commands. 

Now what?

Time to Have Fun with It

From here, the best way to learn Docker better is to deploy containers yourself! That can mean hosting services with containers, deploying containers for test use as a lightweight alternative to VMs, and much, much more.

Once you become familiar with containers, I highly doubt you will go back to traditional deployments. Docker makes everything simpler, allowing organizations to focus on growing their business instead of worrying about the operational overhead of their deployments. If you find that Docker is improving development for your organization, look into Kubernetes as well. In the end, the best solution is whichever one gives the best results for your specific use case, and the only way to find out which tools will serve you best is to try them and see. 

We hope that you now have a better understanding of Docker and how it can revolutionize the way your digital infrastructure develops and grows. Thanks for reading, and be sure to check out our blog and knowledge base for all the latest content and industry insights from the hosting experts at Hivelocity.

– written by Eric Lewellen

Docker Advanced: Deploying with Docker-Compose
https://www.hivelocity.net/kb/docker-advanced-deploying-with-docker-compose/
Tue, 30 May 2023
Hero image of the Docker logo with text reading "Part 4: Docker Advanced"

Welcome back to our series covering all things Docker. In our previous article, Docker for Beginners: Deploying Containers & the Anatomy of Commands, we covered a basic test example of deploying a Ghost webpage inside a Docker container, as well as analyzing the components that make up the command we used to execute it. In this article, we’ll cover Docker-Compose, an addon to Docker which makes managing and replicating your containers even easier. We’ll also recreate our Ghost website test page from the previous article, but using Docker-Compose.

So, to get started, what is Docker-Compose? 

Introducing Docker-Compose

Remember how in the previous article we mentioned that docker run isn’t typically used for production deployments? This is because all the variables and options for our deployment are merely flags in a command. This isn’t ideal because there could be multiple environment variables, port mappings, volumes, and more. 

For instance, what if we want to have a service that has multiple containers working in tandem? We would have to type docker run multiple times, which sounds quite silly for managing large deployments. 

Or, what about scenarios where we need multiple containers to interact in a reproducible and consistent manner?

This is where Docker-Compose comes in. Docker-Compose is a tool built on top of Docker that allows one to manage entire Docker deployments within a docker-compose.yaml file. 

But what does that look like?

Let’s take the same Ghost service we used in the previous article and modify it for a typical production scenario with Docker-Compose.

Docker-Compose File for Running a Ghost Website

Here is our Docker-Compose file for converting our test instance of Ghost from the previous article to a production instance.

version: '3.8'

services:
  ghost:
    container_name: ghost
    image: ghost:latest
    restart: always
    ports:
      - 8080:2368
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost
      url: http://localhost:8080
    networks:
      - ghost-net

  db:
    container_name: db
    image: mysql:8.0
    restart: always
    volumes:
      - ghost-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - ghost-net

networks:
  ghost-net:

volumes:
  ghost-data:

Now, before diving into the structure of a Docker-Compose file, we need to know a little about YAML. All Docker-Compose files are in the .yaml format. In this format we don't use brackets, semicolons, or tabs; instead, .yaml files are structured with spacing and colons. It's important that each nested level is indented by a consistent number of spaces (two in the example above) and that tabs are never used for indentation.

Every Docker-Compose file created should be named either docker-compose.yaml or docker-compose.yml. Otherwise, the filename will need to be stated explicitly (via the -f flag) when using Docker-Compose CLI commands.

In the example above, I have created a docker-compose.yaml file for our Ghost website. Notice there are two containers defined, not one. We have added a MySQL container so our production Ghost instance will have a database with better performance. 

Docker-Compose Structure

In the Docker-Compose file example above, we see there are several properties followed by colons, then followed by a value on each line. Let’s run through these individual values to get a better idea of how a compose file works.

version – The version of the Docker-Compose file-format. This dictates which compose parameters (properties) and types of syntax are available to use. I recommend Docker-Compose version 3.8 as that has the most features and syntax available. Be mindful however that some Docker-Compose files on the internet will have certain features and/or syntax that work best with an earlier version.

services – A property that is always placed in a Docker-Compose file before the individual services. A service is the definition for a single container in the compose stack along with all its properties and values.

ghost – The name for our service containing the Ghost image. Further below, db is the name of the service for our MySQL container. We see our service has a single image, along with container_name, restart, ports, environment, and networks properties.

container_name – Simply a property that tells Docker to name the container with the given value. This is useful because Docker will otherwise add a project prefix to the container name. This value forces Docker to use ghost as the container name with nothing tacked on.

image – The property that tells our service which container image to use. Here we request the latest tag of the Ghost image. Notice that for MySQL below, we pinned a specific version, 8.0. Stating the version explicitly is often safer, in case an update introduces unwanted changes.

restart – Restart is a policy that determines when a container should be restarted in case it stops. In this case, always states it should always be restarted unless the container is removed.

ports – The property that determines the container's port mappings. The left-side value is the port exposed on the host; the right-side value is the port the service listens on inside the container. We can think of the left value as the publicly reachable port and the right value as the port used to communicate with other containers within Docker.

environment – The property for any environment variables for the Docker image being used. These can vary per image. For Ghost we have variables that allow our Ghost image to interface with our MySQL container.

networks – Defines what Docker networks our containers will be using. For two containers to communicate with one another, they will need to reside in the same network. You can think of a Docker network as a VLAN in Docker terms.

volumes – Defines a volume for a Docker container. For our MySQL container we define a volume named ghost-data that maps to /var/lib/mysql in the container. 

The /var/lib/mysql directory is where the MySQL container stores its databases. So when we map our volume to that directory, we declare the database for the container as persistent. 

If we decide to kill and remove both containers, our database is still intact because of our volume declaration.
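We can sketch that persistence guarantee as a quick terminal experiment, assuming the compose file from this series:

```shell
# Tear down the stack: the containers and the network are removed,
# but the named volume ghost-data is left untouched
docker-compose down

# The volume is still listed
docker volume ls

# Recreate the stack; MySQL reattaches ghost-data and finds its databases intact
docker-compose up -d
```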

At the very bottom we declare our volumes and networks again. This is because we might share volumes or networks across containers in our compose stack. Because of this we also state them at the bottom so Docker knows which volumes and networks to create. 

They can be declared with no options like above, or they can have special properties like driver and external. For instance, if we desire a network to be available to containers created in different compose files, we configure our network as such:

networks:
  ghost-net:
    external: true

Now our ghost-net network is available to containers running outside of our compose file.
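For instance, a second, hypothetical compose file could attach a Prometheus container to that pre-existing network like so (the metrics service name and image tag here are illustrative, not part of our stack):

```yaml
version: '3.8'

services:
  metrics:
    container_name: metrics
    image: prom/prometheus:latest
    networks:
      - ghost-net

networks:
  ghost-net:
    # Use the network created elsewhere instead of creating a new one
    external: true
```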

To learn more about how compose files work, I recommend visiting the official compose file reference:

Compose file specification

Deploying with Docker-Compose

We now have a docker-compose.yaml file that can be used for a production instance of our Ghost site.

Before deploying, install Docker-Compose with the appropriate command for your distribution:

Debian/Ubuntu:

sudo apt install docker-compose

RHEL/CentOS/Fedora/Almalinux:

sudo dnf install docker-compose

Or, on older versions of CentOS:

sudo yum install docker-compose

Now, while inside the same directory as our compose file, run docker-compose up -d. This command deploys all the containers in a compose file, and the -d stands for detached. To bring down the entire compose stack, killing and then removing its containers, run docker-compose down.
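A few other everyday docker-compose subcommands, run from the same directory as the compose file:

```shell
# Show the status of the services defined in this compose file
docker-compose ps

# Follow the logs of a single service
docker-compose logs -f db

# Restart one service without touching the rest of the stack
docker-compose restart ghost

# Pull newer images, then recreate only the containers that changed
docker-compose pull
docker-compose up -d
```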

Docker will then pull the images, extract them, and start the containers. Visit the host's IP address on port 8080 and the Ghost site will be present again.

With this compose file, it's possible to migrate or back up data with our volume, edit our environment variables, change the network configuration, and more, all within one file.

Working with Docker

So, that wraps up a lot of useful information on what Docker is, the advantages to using Docker, and how we can deploy services with Docker and Docker-Compose. But now, we need to understand how to work with Docker to accomplish tasks. 

In our next article in this series, Working with Docker: Must-Know Docker Commands, we’ll cover some essential commands that will help any sysadmin utilizing Docker, and provide examples using the ghost and db container examples from this Docker-Compose article as a basis for creating more complex networks of containers. Read on to learn more or check out our blog and knowledge base for more great content and industry insights from the hosting experts at Hivelocity.

– written by Eric Lewellen

Docker for Beginners: Deploying Containers & the Anatomy of Commands
https://www.hivelocity.net/kb/docker-for-beginners-deploying-containers-the-anatomy-of-commands/
Tue, 30 May 2023
Hero image of the Docker logo with text reading "Part 3: Docker for Beginners"

Welcome back to our series covering all things Docker. In our previous article, Getting Started with Docker: Installation & Basics, we covered key Docker components including storage and networking as well as the basics of installing Docker onto your Linux system. 

But now that you have Docker installed, what can you do with it? 

In this article we’ll cover the basics of container deployment alongside a step-by-step example deployment using Ghost. In addition, we’ll break down the anatomy of a Docker command, highlighting what each component designates.

To start, let’s deploy our first container with Docker.

Learning the Basics of Docker with Ghost 

Deploying containers with Docker is very quick, and it's possible to spin up an entire website with just a single command. To learn the basics, let's spin up a simple blog website as a test.

Ghost is an alternative to WordPress that deploys easily with Docker. That’s not to say that WordPress won’t work with Docker, but because WP’s image size is larger, it’ll mean a longer download time, and for the purposes of our experiment, it’s not the particular service we use that matters, just the container we’re going to be putting it in. 

To deploy Ghost with Docker, run the following command:

docker run -d --name ghost-test -e NODE_ENV=development -e url=http://localhost:3001 -p 3001:2368 ghost

With the command properly entered, Docker will proceed to download the Ghost image from Docker Hub, extract it, then build the container and run it. In this particular example, I am using Docker within a VM on a private subnet, with a private IP of 10.0.9.2.

Now, by visiting the server IP on port 3001, we see Ghost is building the site:

Screenshot stating "We'll be right back. We're busy updating our site"

After a minute or so, refreshing that page will present us with a blog website!

Screenshot of Ghost site with Coming Soon messaging

Now what exactly happened here? During this process Docker installed all of the necessary dependencies, spun up a webserver, and exposed the necessary ports to have a live website. 

Pretty cool, right? We just deployed an entire website with a single command. But how is this possible? 

The Anatomy of Docker Commands

Let’s take a look at the command we ran above:

docker run -d --name ghost-test -e NODE_ENV=development -e url=http://localhost:3001 -p 3001:2368 ghost

This is a command that deploys a Docker container according to specified environment variables and options. But let’s take a deeper look at what each piece of this command is designating. 

  • docker run – The command used to deploy single-stack Docker containers. In practice, this command is seldom used for production deployments, as Docker-Compose is preferred. More on that in our next article, Docker Advanced: Deploying with Docker-Compose.
  • -d – This option stands for detached. What it does is leave the container running in the background so our terminal is still usable. 
  • --name – This states the name of the container. This might seem like a trivial option, but container names are also the value for container hostnames! This will be relevant when dealing with networking in Docker, specifically reverse-proxies.
  • -e NODE_ENV=development -e url=http://localhost:3001 – The -e flag is used for loading specific environment variables into the container. In this case, the Ghost instance is set to development. Development configurations are usually for testing and lack certain production requirements, such as a separate database or SSL. The url variable states that the site’s address is localhost, so you would visit this Ghost site via the IP of the server. 
  • -p 3001:2368 – Publishes a port mapping: requests to port 3001 on the host are forwarded to port 2368 inside the container, the port Ghost listens on.
  • ghost – The last value we have, which is the image we want to deploy. This value is positional and required, so it doesn't need a flag.
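When you're finished experimenting, the test container can be cleaned up with a couple of equally short commands:

```shell
# Stop the container gracefully, then remove it
docker stop ghost-test
docker rm ghost-test

# Or force-remove a running container in one step
docker rm -f ghost-test
```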

Looking for a full breakdown of Docker syntax? The best place for finding proper syntax for Docker commands is the official Docker CLI reference.

Understanding Docker-Compose

Now that we know how to deploy a service into a container and understand the anatomy of a basic Docker command, it’s time to look at one of the more advanced elements of the Docker platform, Docker-Compose. By simplifying and unifying your entire Docker deployment into a single, editable file, Docker-Compose allows for more easily reproducible containers and reduced manual input.

In our next article in this series, Docker Advanced: Deploying with Docker-Compose, we’ll cover the basics of Docker-Compose by recreating our Ghost test container from this article, and analyze the ways in which Docker-Compose can help speed up and regulate your container deployments. Read on to learn more or check out our blog and knowledge base for more great content and industry insights from the hosting experts at Hivelocity.

– written by Eric Lewellen

Getting Started with Docker: Installation & Basics
https://www.hivelocity.net/kb/getting-started-with-docker-installation-basics/
Tue, 30 May 2023
Hero image of the Docker logo with text reading "Part 2: Getting Started with Docker"

Welcome back to our series covering all things Docker. In our previous article, An Introduction to Docker: What is Docker & What Are Its Benefits?, we covered the basics of what the Docker platform is as well as some of the advantages containers provide over traditional deployment methods. But now that you know that your organization would likely benefit from using a containerization solution like Docker, how do you get started?

Well, first we need to understand a few key components of Docker, specifically Docker Volumes and Docker Networking, and how these help optimize your containers system-wide. 

Eager to start experimenting with Docker? For those ready to install Docker on their system, feel free to jump ahead to the Installing Docker section at the bottom of this article, for an overview of how to easily install Docker onto your preferred Linux operating systems.

Docker and Persistent Data

One of the key differences between Docker and traditional deployment methods is in how data is stored. Containers are considered to be “ephemeral”. This means that containers do not persistently store any data. This is a key aspect of stateless architecture, which means that referencing previous data (state) is not tied directly to running instances of a server. 

Stateless architecture provides a few key advantages, including easier application scaling, simpler failover, and increased flexibility between services.

For example, if you have a currently running container containing a service and you enter the command docker kill container-name (replacing “container-name” with the name of your designated container) followed by docker rm container-name, Docker will completely remove the container along with any changes made to the service inside the container. Re-running the docker run command would then provide a fresh instance of the container, with no changes surviving from the previous instance.

So How Does Docker Persistently Store Data?

Docker stores lasting data in something called volumes or bind mounts.

A volume is a store for data that is managed by Docker. You simply name your volume and provide it a directory within the container to map to. Volumes are the preferred choice for storage in Docker. This is because volumes can be shared between containers and easily backed up or migrated to a different container, in addition to other Docker storage features. Docker volumes can be managed with ease via the Docker API or CLI tools. 
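As a sketch of that CLI management (the volume name my-data is hypothetical):

```shell
# Create a named volume managed by Docker
docker volume create my-data

# List all volumes on the host
docker volume ls

# Show the volume's driver and its mountpoint on the host
docker volume inspect my-data

# Remove it once no container is using it
docker volume rm my-data
```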

A bind mount is a mapping from the host directory to a directory within the container. This is accomplished by stating a location on the host, say /home/user/application, and mapping it to a directory on the container. This is useful if there are config files that need to be edited frequently, as this can be done conveniently using a cli text editor on the host. 
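The difference between the two shows up directly in the -v flag of docker run. In this sketch, the names app, app2, app-data, and some-image are placeholders:

```shell
# Bind mount: an absolute host path on the left maps into the container
docker run -d --name app -v /home/user/application:/app some-image

# Named volume: a bare name on the left tells Docker to manage the storage itself
docker run -d --name app2 -v app-data:/app some-image
```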

To help illustrate these differences, I’ve included a diagram below with a bind mount and volume configured as persistent storage for a MySQL container alongside Ghost (an alternative to WordPress). In the diagram, our persistent storage is mapped to the /var/lib/mysql container side. 

Chart showing the interconnectedness of a Docker container and the root filesystem of the Docker Host OS

In the diagram above, we can see the trade-off: files are more easily accessed with a bind mount, but more secure and manageable with a volume.

So, Should I Use a Volume or a Bind Mount?

In general, it's best to use volumes for storage in Docker. Volumes allow for greater flexibility within deployments and unlock more Docker features. Another key advantage of volumes is that they are inherently more secure. Since the data stored for each container is managed entirely by Docker, files written by a compromised container are confined to Docker-managed storage, whereas with a bind mount, any files the container writes exist directly on the host filesystem.

In a Docker host with potentially hundreds of containers running, allowing any single container to breach the host is not ideal.

Docker Networking

Now that we understand how storage works in Docker, it’s time to understand how Docker containers are interlinked. Docker networking is the component of Docker that allows for complex networks connecting groups of containers all on a single operating system. 

You can think of Docker networks as individual LAN networks that only exist inside Docker for use with containers.
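Creating one of these Docker-internal networks takes a single command. The network and container names below are illustrative:

```shell
# Create an isolated bridge network
docker network create backend-net

# Attach a container to it at creation time
docker run -d --name cache --network backend-net redis:latest

# Or connect an already-running container
docker network connect backend-net ghost
```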

A Visual Representation of Docker Networking

In the diagram below, we’ve illustrated how Docker containers connect with each bubble representing a container and the lines connecting them serving as network mappings.

Chart showing the interconnectedness of multiple networks of containers

One of the first things you’ll notice in the diagram above is how we have our red, green, and yellow internal networks isolated from each other. These networks are intentionally set up so that they are only able to communicate with other containers in the same network. The purple network is the only network that hits the gateway container, exposing the services to the internet.

We can use these mappings to join containers to shared networks to allow them to communicate with one another or isolate containers that we want secured. Think of the division between these internal and public networks as VLANs defined within Docker.
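A minimal sketch of this kind of segmentation in a docker-compose.yaml (the service names, images, and network names here are hypothetical, not taken from the diagram) might look like:

```yaml
# Hypothetical docker-compose.yaml fragment -- names and images are illustrative
services:
  gateway:
    image: traefik:latest
    ports:
      - "443:443"               # only the gateway is exposed to the internet
    networks: [public]
  blog:
    image: ghost:latest
    networks: [public, green]   # reachable by the gateway and by its database
  blog-db:
    image: mysql:8
    networks: [green]           # only containers on "green" can reach it

networks:
  public:
  green:
    internal: true              # no route out of Docker for this network
```

Here the database can talk to the blog but has no path to the gateway or the outside world, mirroring the isolated colored networks in the diagram.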

Also note that we have a metrics service defined via Prometheus and Grafana, which can monitor our public gateway network via Traefik metrics, watching for suspicious activity and gathering statistics.

A key advantage of the Docker networking model is that since all the networks are defined in Docker, we could export our entire deployment in the blue box above to a completely different machine with ease. We wouldn’t need to configure any physical routers, switches, or firewalls.  All we would need to do is copy over our Docker configuration files and volumes.

Useful Facts About Docker Networking

When working with Docker networking, it’s important to remember that every container uses its service name as its hostname, unless explicitly overridden with the container_name: parameter. These hostnames are only resolvable by other Docker containers on the same network; even the host OS has no knowledge of them outside of Docker.

However, these Docker hostnames are still useful within Docker when we want to connect our containers to create complex services. We can define a cache such as Redis, a database, a storage bucket (such as MinIO), and a webserver that all communicate within Docker.
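For example (the images and the environment variable here are assumptions for illustration), a web service can reach its cache simply by using the service name as a hostname:

```yaml
# Hypothetical docker-compose.yaml fragment -- names are illustrative
services:
  cache:
    image: redis:7
  web:
    image: example/web-app:latest   # stand-in for your own application image
    environment:
      # Docker's embedded DNS resolves the service name "cache" to its container
      - REDIS_URL=redis://cache:6379
```

No IP addresses appear anywhere in the configuration, which is part of what makes these deployments portable between hosts.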

Since Docker containers have their own ports and hostnames that don’t conflict with our host OS, deployments can be reproduced across infrastructure without modifying Docker configurations. It also means that large and complex networks are not accessible outside of the host operating system, leaving fewer attack vectors for malicious activity.

We will learn how to define Docker networks using Docker-Compose in a later article in this series, Docker Advanced: Deploying with Docker-Compose.

Installing Docker

Now that you understand how Docker’s containers work, the next step is to install Docker onto your system and start experimenting with it. Assuming you have a Linux test server or Linux computer, you can install Docker quickly and easily by typing the following commands into your terminal.

For Debian/Ubuntu:

sudo apt install docker.io

For RHEL/CentOS/Fedora/Almalinux:

sudo dnf install docker (or, on older versions of CentOS: sudo yum install docker)

Once Docker is installed, unless you’re working as root (which is not recommended in production environments), add your user to the docker group to allow running the docker command without sudo privileges. To accomplish this, enter the following:

sudo usermod -aG docker $USER 

Then, just log out and log back in for the changes to take effect.

Working with Docker

Now that you have Docker installed, it’s time to start deploying your first containers and seeing firsthand the ways in which Docker can help your system become more efficient. 

In our next article in this series, Docker for Beginners: Deploying Containers & the Anatomy of Commands, we’ll cover the step-by-step instructions for deploying a basic container as well as breaking down the anatomy of commands essential for working with Docker. Read on to learn more or check out our blog and knowledge base for more great content and industry insights from the hosting experts at Hivelocity.

– written by Eric Lewellen

The post Getting Started with Docker: Installation & Basics appeared first on Hivelocity Hosting.

An Introduction to Docker: What is Docker & What Are Its Benefits? https://www.hivelocity.net/kb/an-introduction-to-docker-what-is-docker-what-are-its-benefits/ Tue, 30 May 2023 14:05:30 +0000
Hero image of the Docker logo with text reading "Part 1: An Introduction to Docker"

Containerization platforms like Docker have revolutionized the way businesses think about their data and digital infrastructure. By emphasizing optimal resource usage and scaling, Docker allows system admins to build focused, precise systems with minimal overlap, resulting in platforms and services which are more reliable, reproducible, and efficient. With its ability to standardize software deployments into single deployment solutions that work across all environments, it’s little wonder why containerization solutions like Docker have become the preferred alternative to more traditional deployment methods.

But what is Docker and what can its benefits mean for you? Is containerization the right solution for your organization? 

In this series of five articles, we’ll provide an overview of Docker and its advantages, explain how its components work, walk through the basic installation process, deploy our first container, discuss advanced solutions like Docker-Compose which allow for system-wide control via a single, modifiable document, and cover a selection of need-to-know commands for working with Docker. By the end, you’ll have a thorough understanding of Docker, know the basics of deploying containers, and have seen some examples of the ways Docker can help revolutionize your organization’s systems.

Prefer a more visual format? Check out our Quick Overview of Docker: The Benefits of Containerization section at the bottom of this article for a video summary of the concepts we cover here as well as charts and diagrams designed to help visualize the differences between Docker and more traditional deployment methods.

But first, what is Docker?

What Is Docker?

Released in 2013, Docker is a free-to-use containerization platform designed to offer a lightweight alternative to traditional virtual machines. Unlike VMs, Docker’s containers don’t require their own kernel, virtualized hardware, or dedicated resources. This allows containers to run with the minimum amount of dependencies, resources, and disk space needed to run an application.

By splitting complex systems into smaller services, containerization allows for higher resource efficiency with consistent output results. This improves performance by isolating workflows and reducing resource overlap, allowing more processes to work in unison with fewer bottlenecks, and allowing entire services to ship as singular deployments. 

Docker is also preferable to VM or bare metal methods of running processes because the monotonous task of micromanaging an operating system is passed off to Docker. This means that developers can focus on writing code instead of managing multiple releases. Additionally, system administrators can leverage Docker to reduce downtime, save resources, and increase consistency across their infrastructure.

So in the end, Docker is similar to a virtual machine, but minus the overhead and hardware virtualization. 

Docker logo

What Can Docker Do?

Docker can run anything you would want to deploy on a bare metal server or virtual machine, but inside of a container instead. This can be done by taking the code or program of the intended service, stating its dependencies, and specifying an OS layer for it to run on in something called a Dockerfile. A Dockerfile is the blueprint of a Docker image, which in turn is used to run containers.
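As a minimal sketch of such a blueprint (the base image, file names, and commands are illustrative, not taken from this article), a Dockerfile might look like:

```dockerfile
# Hypothetical Dockerfile -- base image and file names are illustrative
FROM python:3.12-slim                 # the OS layer the service runs on
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # state and install dependencies
COPY . .
CMD ["python", "app.py"]              # the service the container will run
```

Running docker build against this file produces an image, and each docker run of that image starts a fresh container from the same blueprint.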

A large variety of software can be run inside containers, including operating systems, webservers, and much more.

What Makes Docker Preferable Over Traditional Deployments?

Size: Containers only take up a small amount of space, usually 100MB – 500MB per image. Compare this to a VM, which would require multiple GB of disk space for the OS alone.

Resources: Resources are another inherent benefit of containers. With VMs, the RAM allocated to one VM is unavailable to other VMs in the system. With Docker, a container only uses the amount of RAM it needs for its current process, leaving the rest available for other containers to use. The same holds true for CPU allocation. Many deployments have seen massive efficiency gains upon moving to a containerized deployment environment.

Flexibility/Management: Flexibility and management of containers is where Docker wins the most. Take, for example, a server binary: for traditional deployments like .exe, .deb, and .rpm, separate binaries need to be developed and maintained for the service to be available to the majority of administrators. This takes effort and resources, as different codebases need to be updated and monitored. Any dependencies also need to be installed in addition to the base binary, which increases deployment time and size.

Unlike traditional deployments, Docker only requires one deployment method that will work on almost any operating system, with all the dependencies shipped and configured by default. With a traditional deployment you port the process only, but with Docker you ship everything needed to run the service. This includes the operating system, dependencies, and full environment, all neatly bundled into the aforementioned container. 

There are many other benefits to containers as well, but the above mentioned advantages are reason enough to consider Docker.

Still unsure if Docker is the right solution for your organization? Watch the video below for a quick visual overview of why Docker is useful, how it compares to traditional virtual machines, and some real life examples of what a Docker deployment might look like.

A Quick Overview of Docker: The Benefits of Containerization

A Note on Kubernetes

I would be remiss at this point to not mention another very prominent technology related to containers: Kubernetes. Developed by Google and released as an open source solution in 2014, Kubernetes (or K8s) has taken the next major step with containers by orchestrating and automating them. This allows for large, complex platforms built from microservices, with auto-healing, auto-failover, and auto-scaling. Basically, Kubernetes takes Docker and clusters it across multiple instances, allowing your business and services to scale horizontally with the greatest efficiency possible.

Unfortunately, Kubernetes has a much steeper learning curve than Docker. As both platforms operate on similar principles, before attempting to use Kubernetes, I highly recommend obtaining a working knowledge of Docker first. 

Getting Started with Docker

So, now that we have a grasp of how Docker works and the benefits it can bring to your digital infrastructure, how do we actually use and deploy Docker containers in a real environment?

To understand how to work with Docker, we first need to understand several components necessary for using Docker successfully in a production environment. These components are Docker Volumes, Docker Networking, Docker-Compose, and a working knowledge of the Docker CLI client.

In our next article, Getting Started with Docker: Installation & Basics, we’ll cover these key components of Docker as well as provide basic instructions for installing Docker onto your system so you can start working with the platform. Read on to learn more or check out our blog and knowledge base for more great content and industry insights from the hosting experts at Hivelocity.

– written by Eric Lewellen

What is Docker and What are Its Features? https://www.hivelocity.net/kb/what-is-docker/ Fri, 05 Jun 2015 20:05:52 +0000
What is Docker?

Docker is an open source platform for developers and system administrators to build, ship, and run distributed applications based on Linux containers.

At its core, Docker is a container engine which uses Linux kernel features such as namespaces and control groups. These allow it to create containers on top of an operating system and automate application deployment within those containers.

In addition to providing a lightweight environment to run application code, the use of containers allows users to package up an application with all of the parts it needs to operate correctly. By including libraries and other dependencies, applications can be transferred from one machine and easily run on another.

Because Docker makes use of the Linux kernel on the machine it’s running on, your applications will run on any Linux machine regardless of differences or customized settings, so long as any non-native elements are included within the package. This means developers can focus on coding without having to build around a specific system.

 

What are Docker’s Features?

The following is a list of features that make Docker unique:

Features

  1. An isolated, rapid framework
  2. An open source solution
  3. Cross-cloud infrastructure
  4. Minimal CPU/memory overhead
  5. Fast reboot

Components

Docker is made up of the following major components:

1) Docker Daemon

The Docker daemon is a service that runs on a host machine and acts as the brains of the system. Users don’t interact with the daemon directly; instead, commands entered into the Docker client are translated and sent to the daemon, which executes them.

2) Docker Client

The Docker client is the primary user interface for interacting with the Docker daemon. It processes commands from the user and communicates back and forth with the daemon in order to execute those commands.

3) Docker Images

These are read-only templates used to launch Docker containers. A Docker image might consist of a CentOS operating system with Apache and your web application installed. These images are then used to create Docker containers. Docker allows users to build new images or simply edit and update existing ones.

4) Docker Registries

Docker registries hold Docker images. These registries are either public or private stores from which you upload or download images. The public Docker registry, also called Docker Hub, provides a huge collection of existing images for use. You can easily edit and update these images to suit your requirements and upload them to other registries.

5) Docker Containers

Each Docker container is an isolated and secured application platform which holds everything needed for an application to run. You can run, start, stop, migrate, and delete a Docker container.

 

Popular Links

Looking for more information on Docker? Search our Knowledge Base!

Interested in more articles about Virtualization? Navigate to our Categories page using the bar on the left or check out these popular articles:

Popular tags within this category include: Proxmox, OpenStack, Cloud Storage, and more.

Don’t see what you’re looking for? Use the search bar at the top to search our entire Knowledge Base.

 

The Hivelocity Difference

Seeking a better Dedicated Server solution? In the market for Private Cloud or Colocation services? Check out Hivelocity’s extensive list of products for great deals and offers.

With best-in-class customer service, affordable pricing, a wide range of fully customizable options, and a network like no other, Hivelocity is the hosting solution you’ve been waiting for.

Unsure which of our services is best for your particular needs? Call or live chat with one of our sales agents today and see the difference Hivelocity can make for you.
