Fantastic Containers and How to Use Them

Every developer has probably encountered this: code that runs perfectly on your machine but breaks the moment it lands on someone else's.

Fear not! We already have a solution to this problem, and it's called containers. No, not the enormous stash of plastic ones every Asian mom keeps somewhere in the kitchen.

They're meant to package up and hold your code and its dependencies, much like how the plastic ones hold your food instead of you holding it in your hands. This lets you run your applications reliably and easily in different environments. Containers are lightweight, standalone, executable packages of software that contain everything needed to run the programs you've created. Containers even have their own filesystems!

These containers have to be created from blueprints, which we call images. Images are essentially schematics that define a filesystem, the code or binary, runtimes, dependencies, and anything else required to run the code. Without an image, there is no container.
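To make the image-versus-container distinction concrete, here is a quick sketch using the Docker CLI (the `python:3.12-slim` tag is just an assumed example; any image works):

```shell
# An image is the blueprint; a container is a running instance of it.
docker pull python:3.12-slim                     # fetch the image
docker run --rm python:3.12-slim python -c "print('hello from a container')"
docker image ls                                  # lists images (blueprints)
docker ps -a                                     # lists containers (instances)
```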

Now that we know what containers are, let's see how they stack up against the alternatives.

We actually have different options for solving this problem, so let's weigh containers against one of their largest contenders: the virtual machine, or VM. Virtual machines originally took off because servers' processing power increased and ordinary applications weren't able to make full use of those resources. This, however, introduced a new issue: portability.

Some key differences between containers and virtual machines are:

The key advantage of the virtual machine is that it allows any OS to run on top of the host machine. The guest OS has virtualized access to the host's resources. This, however, means there is a lot of overhead in running a VM, and it runs slowly on lower-end machines.

Containers, on the other hand, are just discrete processes. They share the host machine's kernel and thus need to use the same OS kernel as the host. Containers are considered flexible, lightweight, portable, loosely coupled, scalable, and secure. You can build a container locally, deploy it to the cloud, and run it on any machine with Docker. Containers are also easy to scale because they are built from images, which means they can be replicated and distributed with little effort.

2. Create a Dockerfile in the directory you're working in. There are several parts to a Dockerfile.

Starting from a base image is always the suggested way of building your Docker images. More often than not, someone has already built an image with all the dependencies you need. For R, I usually use something from Rocker. For Python, I would use Jupyter if it's a notebook I'm dealing with, but most of the time I just take one of the official Docker images for Python. There are many base images out there for you to use.
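Picking a base image is a single FROM line at the top of the Dockerfile (the tag here is an assumed example; choose the version your project needs):

```dockerfile
# Pull an official Python base image so we don't install Python ourselves.
FROM python:3.12-slim
```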

The path to your code needs to be included in the Dockerfile definition. You do this with COPY commands, copying only the necessary directories. You could copy everything with COPY . ., but this is not recommended. We want to keep our image as lightweight as possible, shedding any files not needed to run the application. We can also define a .dockerignore file, which works much like a .gitignore file.
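As a sketch (requirements.txt and src/ are assumed names for this example), the copy step might look like:

```dockerfile
# Copy only what the application actually needs, not the whole build context.
COPY requirements.txt .
COPY src/ ./src/
```

And a matching .dockerignore keeps junk out of the build context even when you do copy broadly:

```
.git
__pycache__/
*.pyc
.env
```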

The final part of the Dockerfile is going to be your container start command. This could come in the form of CMD or ENTRYPOINT.

The key difference between the two is that CMD is a default command that can be overridden easily. ENTRYPOINT, on the other hand, is the choice when you want the container to always start with a specific executable. The only way to override ENTRYPOINT at runtime is to pass the --entrypoint flag.
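The two can also be combined: ENTRYPOINT fixes the executable while CMD supplies default arguments. A sketch (app.py is an assumed file name):

```dockerfile
# The container always runs app.py; CMD holds default arguments
# that `docker run <image> <args>` can replace.
ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8000"]
```

Running `docker run myimage --port 9000` would then replace only the CMD part, still launching app.py.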

A tip is to use the build cache to make building new images much faster. This is the very reason we put our code at the end of the Dockerfile: Docker reuses the build cache for each layer until it detects a change, and everything after that point is rebuilt.

Take this example. Only the first two lines will use the build cache, because our code has changed; the install step will not be cached.
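The original snippet didn't survive extraction, but an inefficient ordering along these lines (file names assumed) illustrates the point:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .                              # any code change invalidates the cache here...
RUN pip install -r requirements.txt   # ...so this slow install re-runs on every build
```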

The more efficient implementation would look like this:
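A sketch of the cache-friendly ordering, with dependencies copied and installed before the code:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .               # dependencies change rarely...
RUN pip install -r requirements.txt   # ...so this layer stays cached between builds
COPY . .                              # code changes only invalidate from here down
```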

4. Run the built image with the docker run command. We can again pass extra flags, such as --rm, which automatically removes the container on exit. You can also pass environment variables to the container at runtime with the -e or --env flag.
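Putting those flags together (myimage and APP_ENV are assumed names; substitute your own):

```shell
# Run the image, clean up the container on exit, and inject an env variable.
docker run --rm -e APP_ENV=production myimage
```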

One trick I have personally found very useful is turning all these commands into Make targets and using a standard pattern across all my projects.
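A minimal sketch of that pattern (the IMAGE name is assumed, and Make requires tab indentation in recipes):

```makefile
IMAGE := myproject

build:
	docker build -t $(IMAGE) .

run: build
	docker run --rm -e APP_ENV=dev $(IMAGE)
```

With this in place, `make run` does the same thing in every project, regardless of what the image is called.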
