In this blog post, How to Share Volumes Between Docker Containers Safely and Reliably, we will walk through the cleanest ways to share files and state between containers, explain why they work, and show how to avoid common pitfalls.
At a high level, Docker volumes let multiple containers read and write the same data on the host, without baking that data into any image. Think of a volume as a managed, persistent folder that containers can attach to. Sharing a volume is how you let a build container produce artifacts for a web server, a worker process read jobs created by an API, or multiple services access a cache. The trick is choosing the right type of volume, mounting it correctly, and setting permissions so everything “just works.”
The technology behind Docker volumes
Containers use a union filesystem (such as overlay2) for their read-only image layers plus a thin writable layer. That writable layer is copy-on-write and tied to the container lifecycle. Volumes, however, bypass that layer entirely. They mount a host-managed directory directly into the container’s filesystem. This separation provides predictable performance, persistence across container restarts, and easy sharing.
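You can see the host-managed directory behind a volume by inspecting its Mountpoint. On a typical Linux host it lives under /var/lib/docker/volumes; the exact path varies by platform and setup:
# Create a throwaway volume and find where Docker stores it on the host
$ docker volume create demo_vol
$ docker volume inspect --format '{{ .Mountpoint }}' demo_vol
/var/lib/docker/volumes/demo_vol/_data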
There are two primary ways to mount host storage:
- Named volumes: Managed by Docker. Portable, fast on all platforms, and ideal for sharing between containers. Backed by the local volume driver by default, but can use others (e.g., NFS) if configured.
- Bind mounts: Map an exact host path into the container. Great for local development when you need to edit files on your machine and see changes in the container immediately.
For production and container-to-container sharing, named volumes are the most reliable starting point.
Common use cases for shared volumes
- Build artifacts: One container compiles assets to a shared volume; another (like NGINX) serves them.
- Queues and scratch space: An API writes job files; workers consume them from the same shared folder.
- Caching: Multiple services share a dependency cache to speed up builds or installs.
- Logs: Sidecar containers collect and ship logs from application containers via a shared directory.
Quick start with named volumes
1) Create a named volume
# Create a Docker-managed volume
$ docker volume create shared_data
# Inspect it if you like
$ docker volume inspect shared_data
2) Mount it into multiple containers
Use the modern --mount syntax for clarity.
# Producer writes files into /data
$ docker run -d --name producer \
--mount source=shared_data,target=/data \
alpine sh -c "sh -c 'while true; do date >> /data/ticks.log; sleep 2; done'"
# Consumer reads the same files from the volume (read-only for safety)
$ docker run -it --name consumer \
--mount source=shared_data,target=/input,readonly \
alpine sh -c "tail -f /input/ticks.log"
Both containers see the same files because they are attached to the same Docker-managed volume.
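You can double-check from a third, short-lived container attached to the same volume:
# Any container mounting shared_data sees the same files
$ docker run --rm --mount source=shared_data,target=/check,readonly \
  alpine tail -n 3 /check/ticks.log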
Sharing volumes with Docker Compose
Compose makes multi-container sharing declarative and repeatable.
version: "3.9"
services:
producer:
image: alpine
command: sh -c "sh -c 'while true; do date >> /data/ticks.log; sleep 2; done'"
volumes:
- shared_data:/data
consumer:
image: alpine
command: sh -c "tail -f /input/ticks.log"
volumes:
- type: volume
source: shared_data
target: /input
read_only: true
volumes:
shared_data:
Bring it up with docker compose up -d. Note that named volumes defined under the top-level volumes: key persist even if services are removed. Use docker compose down -v to also remove them.
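To verify that sharing works end to end, follow the consumer's output; it should print a fresh timestamp roughly every two seconds:
# Tail the consumer service's logs
$ docker compose logs -f consumer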
Permissions that “just work”
Permissions are the number one source of “it doesn’t work” when sharing volumes. Strategies:
- Align container user IDs: Run containers with a user that owns the shared paths.
- Initialize ownership: Chown the directory once before normal workloads start.
- Use read-only mounts where possible for consumers.
# Option A: Initialize ownership once
$ docker run --rm \
--mount source=shared_data,target=/data \
alpine chown -R 1000:1000 /data
# Option B: Run containers as a matching user
$ docker run -d --name app \
--user 1000:1000 \
--mount source=shared_data,target=/data \
myapp:latest
On SELinux-enabled hosts (RHEL/Fedora), add labels to bind mounts (not needed for named volumes):
# Shared (multi-container) access
-v /srv/shared:/data:z
# Private (only this container) access
-v /srv/private:/data:Z
Alternative: bind mounts for exact host paths
Bind mounts map a specific host directory into the container. They are ideal for local development, but keep in mind that bind-mount I/O on macOS/Windows (due to virtualization) can be slower than named volumes.
# Create a host directory and share it
$ mkdir -p /srv/shared
$ docker run -d --name producer \
-v /srv/shared:/data \
alpine sh -c "echo hello > /data/hello.txt & sleep infinity"
$ docker run --rm -it -v /srv/shared:/input:ro alpine cat /input/hello.txt
On Windows, use paths like //c/Users/you/shared with Docker Desktop. On macOS/Windows, consider excluding heavy directories from antivirus scanning for better I/O.
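For example, a read-only bind mount on Docker Desktop for Windows (the C:\Users\you\shared folder is an assumption for illustration):
# Bind-mount a Windows folder read-only into a Linux container
$ docker run --rm -v //c/Users/you/shared:/input:ro alpine ls /input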
A third option is a tmpfs mount, which lives only in host memory: data in a tmpfs mount disappears when the container stops or the host reboots, and a tmpfs mount cannot be shared between containers.
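A minimal tmpfs sketch (the 64m size cap is illustrative):
# In-memory scratch space; contents vanish when the container stops
$ docker run --rm --mount type=tmpfs,target=/scratch,tmpfs-size=64m \
  alpine sh -c 'echo scratch > /scratch/t.txt && ls -l /scratch'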
Concurrency, consistency, and gotchas
- One writer, many readers is the simplest model. If multiple writers are unavoidable, ensure your app uses file locks or atomic renames to avoid partial reads (see the sketch after this list).
- Databases do not like shared writers across multiple containers unless designed for clustering. Don’t share one database directory between two DB instances.
- Line endings and encoding: Cross-platform teams should standardize to avoid surprises in shared files.
- Cleanup: Removing containers does not remove named volumes. Prune carefully with docker volume ls and docker volume rm.
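For the atomic-rename pattern mentioned above, a writer publishes a file in one step: write to a temporary name on the same volume, then rename it into place. A rename within one filesystem is atomic, so readers never observe a half-written file. A minimal sketch (the file names are illustrative):
# Write to a hidden temp file, then atomically publish it under its final name
$ docker run --rm --mount source=shared_data,target=/data \
  alpine sh -c 'echo "job payload" > /data/.job-123.tmp && mv /data/.job-123.tmp /data/job-123.json'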
Security tips
- Mount consumer containers as read-only where possible.
- Scope mounts to the smallest needed path (e.g., /data/output rather than /data).
- Use non-root users inside containers and align UIDs/GIDs with volume ownership.
- On SELinux, use :z or :Z for bind mounts as appropriate.
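Putting several of these together, a hardened consumer might look like the sketch below (the host path /srv/shared/output and the 1000:1000 UID/GID are assumptions for illustration):
# Non-root user plus a read-only mount scoped to one subdirectory
$ docker run -d --name report-reader \
  --user 1000:1000 \
  -v /srv/shared/output:/input:ro \
  alpine sh -c 'while true; do ls -l /input; sleep 30; done'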
Legacy and advanced options
- --volumes-from: Still supported, but mostly superseded by named volumes and Compose. It mounts all volumes from another container. Prefer explicit named volumes for clarity (a sketch follows this list).
- Remote drivers: You can back a named volume with NFS, CIFS, or cloud drivers. Useful for multi-host sharing but introduces network and locking complexity.
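For completeness, the legacy --volumes-from pattern attaches every volume mounted by another container:
# Mount all volumes from the producer container (legacy)
$ docker run --rm --volumes-from producer alpine ls /data
And as a sketch of a remote driver, the built-in local driver can back a named volume with NFS (the server address 192.168.1.10 and export path /exports/shared are assumptions):
# Named volume backed by an NFS export; containers mount it like any other
$ docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/shared \
  nfs_shared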
Operational patterns
Initialize a shared volume with starter data
# Copy files from the current directory into the volume
$ docker run --rm \
-v "$(pwd)":/seed:ro \
--mount source=shared_data,target=/data \
alpine sh -c "cp -a /seed/. /data/"
Back up and restore a volume
# Backup
$ docker run --rm \
--mount source=shared_data,target=/data \
-v "$(pwd)":/backup \
alpine sh -c "tar czf /backup/shared_data.tgz -C /data ."
# Restore
$ docker run --rm \
--mount source=shared_data,target=/data \
-v "$(pwd)":/backup \
alpine sh -c "tar xzf /backup/shared_data.tgz -C /data"
Summary
Sharing data between containers is straightforward when you pick the right tool:
- Use named volumes for reliable, portable sharing.
- Use bind mounts for local development where you need live access to host files.
- Set permissions and read-only mounts thoughtfully to keep things secure.
- Plan for backups and handle concurrency properly.
With these patterns, your containers can collaborate safely and efficiently, whether you’re building a development workflow or running production workloads.