Software for deploying containerized applications
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.[5] The service has both free and premium tiers. The software that hosts the containers is called Docker Engine.[6] It was first released in 2013 and is developed by Docker, Inc.[7]
| Original author(s) | Solomon Hykes |
| --- | --- |
| Developer(s) | Docker, Inc. |
| Initial release | March 20, 2013[1] |
| Stable release | |
| Repository | |
| Written in | Go[3] |
| Operating system | Linux, Windows, macOS |
| Platform | x86-64, ARM, s390x, ppc64le |
| Type | OS-level virtualization |
| License | |
| Website | docker.com |
Docker is a tool used to automate the deployment of applications in lightweight containers, so that applications can run efficiently, and in isolation, across different environments.
Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.[8] Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.[6]
Docker can package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer. This enables the application to run in a variety of locations, such as on-premises or in a public or private cloud (see decentralized computing, distributed computing, and cloud computing).[10] When running on Linux, Docker uses the resource isolation features of the Linux kernel (such as cgroups and kernel namespaces) and a union-capable file system (such as OverlayFS)[11] to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.[12] Docker on macOS uses a Linux virtual machine to run the containers.[13]
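As a brief illustration (not drawn from the cited sources), the storage driver in use and the layered image filesystem can be inspected from the Docker CLI on a Linux host; the image name below is only an example:

# 'docker info' reports, among other details, the storage driver in use
# (typically a union-capable driver such as overlay2 on modern Linux hosts).
docker info
# List the filesystem layers that make up an image.
docker history ubuntu:latest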
Because Docker containers are lightweight, a single server or virtual machine can run several containers simultaneously.[14] A 2018 analysis found that a typical Docker use case involves running eight containers per host, and that a quarter of analyzed organizations run 18 or more per host.[15] It can also be installed on a single board computer like the Raspberry Pi.[16]
The Linux kernel's support for namespaces mostly[17] isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting for memory and CPU.[18] Since version 0.9, Docker includes its own component (called libcontainer) to use virtualization facilities provided directly by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC and systemd-nspawn.[19][9][10][20]
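A minimal sketch (the flag values are illustrative, not taken from the cited sources) of how these kernel facilities surface in the CLI: the limits below are enforced through cgroups, while the container's isolated view of processes, mounts, and the network comes from namespaces.

# Run a container with cgroup-enforced resource limits (256 MB of memory, at most 1.5 CPUs).
# The container also gets its own namespaces (process IDs, network, mounts, etc.) by default.
docker run --rm --memory=256m --cpus=1.5 ubuntu:latest cat /proc/self/cgroup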
Docker implements a high-level API to provide lightweight containers that run processes in isolation.[21]
The Docker software as a service offering consists of three components: the software (the daemon and the client), objects (entities such as containers, images, and services), and registries (repositories for Docker images). The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API.[23][24] The Docker client program, called docker, provides a command-line interface (CLI) that allows users to interact with Docker daemons.[23][25]
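As an illustrative sketch (the socket path and API version are typical defaults rather than values from the cited sources), the daemon can be queried either through the docker client or directly over the Engine API's Unix socket:

# Ask dockerd for the list of running containers through the CLI client.
docker ps
# The same request sent directly to the Docker Engine API over the daemon's local Unix socket
# (the API version in the path varies by Engine release).
curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json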
An example of a Dockerfile:[29]
# Build argument with a default value, usable in the FROM instruction below
ARG CODE_VERSION=latest
# Base image for the build
FROM ubuntu:${CODE_VERSION}
# Copy a file from the build context into the image
COPY ./examplefile.txt /examplefile.txt
# Set an environment variable inside the image
ENV MY_ENV_VARIABLE="example_value"
# Run a command while building the image
RUN apt-get update
# Declare a mount point for a Docker volume
# Note: the volume or host directory to mount is usually specified in the 'docker run' command.
VOLUME ["/myvolume"]
# Document that the container listens on a port (22 for SSH)
EXPOSE 22
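A brief usage sketch for the Dockerfile above (the image tag example-image is an arbitrary placeholder):

# Build an image from the Dockerfile in the current directory, which must also contain examplefile.txt.
docker build -t example-image .
# Start a container from that image; --rm removes the container when it exits.
docker run --rm example-image cat /examplefile.txt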
The docker-compose CLI utility allows users to run commands on multiple containers at once, for example building images, scaling containers, running containers that were stopped, and more.[31] Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address a single container.[32] The docker-compose.yml file is used to define an application's services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, and the command option allows one to override the default Docker commands.[33] The first public beta version of Docker Compose (version 0.0.1) was released on December 21, 2013.[34] The first production-ready version (1.0) was made available on October 16, 2014.[35]
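A short usage sketch (assuming a docker-compose.yml in the current directory that defines a service named web; the service name is a placeholder):

# Build the images for every service defined in docker-compose.yml.
docker-compose build
# Create and start all services in the background, scaling the web service to three containers.
docker-compose up -d --scale web=3
# Start service containers that were previously stopped.
docker-compose start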
The docker swarm CLI[38] utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more.[39] The docker node CLI utility allows users to run various commands to manage nodes in a swarm, for example listing the nodes in a swarm, updating nodes, and removing nodes from the swarm.[40] Docker manages swarms using the Raft consensus algorithm; for an update to be performed, the majority of Swarm nodes need to agree on it.[41][42]
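An illustrative command sequence (the node name worker1 is a placeholder, not taken from the cited sources):

# Turn the current Docker host into the first manager node of a new swarm.
docker swarm init
# Print the join command (including its token) that a worker would run to join the swarm.
docker swarm join-token worker
# List the swarm's nodes, then drain one of them before taking it out of service.
docker node ls
docker node update --availability drain worker1
# Remove a node that has already left the swarm (or force its removal).
docker node rm worker1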
dotCloud Inc. was founded by Kamel Founadi, Solomon Hykes, and Sebastien Pahl[44] during the Y Combinator Summer 2010 startup incubator group, launched in 2011, and was renamed Docker, Inc. in 2013.[45] The startup was also one of the 12 startups in Founder's Den's first cohort.[46] Hykes started the Docker project in France as an internal project within dotCloud, a platform-as-a-service company.[47]

Docker debuted to the public in Santa Clara at PyCon in 2013.[48] It was released as open-source in March 2013.[21] At the time, it used LXC as its default execution environment. One year later, with the release of version 0.9, Docker replaced LXC with its own component, libcontainer, which was written in the Go programming language.[19][49]
In 2017, Docker created the Moby project for open research and development.[50]