Creating Your Linux Environment and Launching It on DigitalOcean



DigitalOcean's Custom Images feature lets you bring custom Linux and Unix-like virtual disk images from an on-premises environment or another cloud platform to DigitalOcean and use them to start Droplets. As described in the Custom Images documentation, the image types natively supported by the Custom Images upload tool are raw (.img), qcow2, VHDX, VDI, and VMDK; ISO images are not officially supported. If you do not have a compatible image, you can create and compress a disk image of your Linux or Unix-like system yourself, provided it has the prerequisite software and drivers installed. You begin by making sure your system matches the Custom Images requirements, configuring the system and installing some software essentials, then create the image using the dd command-line utility and compress it using gzip. After that, you upload the compressed image file to DigitalOcean Spaces, import it as a Custom Image, and finally boot up a Droplet from the uploaded image.

The Essentials:

You should use one of the DigitalOcean-provided images as a base, or an official distribution-provided cloud image like Ubuntu Cloud. You can then install software and applications on top of this base image to bake a new image, using tools like Packer and VirtualBox. Many virtualization environments and cloud providers also offer tools to export virtual disks to one of the compatible formats listed above, so, where possible, you should use these to simplify the import process. If you need to manually create a disk image of your system, you can follow the instructions in this guide. Note that these instructions have only been tested on an Ubuntu 18.04 system, and steps may vary depending on your server's OS and configuration.


What you will need:

A Linux or Unix-like system that meets all of the requirements listed in the Custom Images product documentation. For example, your boot disk must have:

  • A maximum size of 100GB
  • An MBR or GPT partition table with a grub boot loader
  • VirtIO drivers installed

You will also need:

  • A non-root user with administrative privileges available to you on the system you're imaging
  • An additional storage device used to store the disk image created in this guide, preferably at least as large as the disk being copied; this can be an attached block storage volume, an external USB drive, an additional physical disk, etc.
  • A DigitalOcean Space and the s3cmd file transfer utility configured for use with your Space. To learn how to create a Space, consult the Spaces Quickstart. To learn how to set up s3cmd for use with your Space, consult the s3cmd 2.x Setup Guide.

Installing Cloud-Init and Enabling SSH

First, you need to install the cloud-init initialization package. cloud-init is a set of scripts that runs at boot to configure certain cloud instance properties, like the default locale, hostname, SSH keys, and network devices.
Steps for installing cloud-init will vary depending on the operating system you have installed. In general, the cloud-init package should be available in your OS’s package manager, so if you’re not using a Debian-based distribution, you should substitute apt in the following steps with your distribution-specific package manager command.

Installing cloud-init

In this blog, we'll use an Ubuntu 18.04 server, and so will use apt to download and install the cloud-init package. Note that cloud-init may already be installed on your system (some Linux distributions install it by default). To check, log in to your server and run the following command: cloud-init. If you see the following output, cloud-init has already been installed on your server and you can continue on to configuring it for use with DigitalOcean:


usage: /usr/bin/cloud-init [-h] [--version] [--file FILES] [--debug] [--force]

/usr/bin/cloud-init: error: the following arguments are required: subcommand

If instead you see the following, you need to install cloud-init:


cloud-init: command not found

To install cloud-init, update your package index and then install the package using apt:

sudo apt update
sudo apt install cloud-init
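After installation (or to test for the command's presence in the first place), you can check for cloud-init without parsing its error output; a minimal sketch:

```shell
# Check whether cloud-init is on the PATH before trying to run it
if command -v cloud-init >/dev/null 2>&1; then
    echo "cloud-init is installed at: $(command -v cloud-init)"
else
    echo "cloud-init is not installed"
fi
```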

Now that we’ve installed cloud-init, we’ll configure it for use with DigitalOcean, ensuring that it uses the ConfigDrive datasource. Cloud-init datasources dictate how cloud-init will search for and update instance configuration and metadata. DigitalOcean Droplets use the ConfigDrive datasource, so we will check that it comes first in the list of datasources that cloud-init searches whenever the Droplet boots.

Reconfiguring cloud-init

By default, on Ubuntu 18.04, cloud-init configures itself to use the NoCloud datasource first. This will cause problems when running the image on DigitalOcean, so we need to reconfigure cloud-init to use the ConfigDrive datasource and ensure that cloud-init reruns when the image is launched on DigitalOcean.

From the command line, navigate to the /etc/cloud/cloud.cfg.d directory:

cd /etc/cloud/cloud.cfg.d

Use the ls command to list the cloud-init config files present in the directory:

ls


05_logging.cfg 50-curtin-networking.cfg 90_dpkg.cfg curtin-preserve-sources.cfg README

Depending on your installation, some of these files may not be present. If present, delete the 50-curtin-networking.cfg file, which configures networking interfaces for your Ubuntu server. When the image is launched on DigitalOcean, cloud-init will run and reconfigure these interfaces automatically, so this file is not necessary. If this file is not deleted, the DigitalOcean Droplet created from this Ubuntu image will have its interfaces misconfigured and won't be accessible from the internet:

sudo rm 50-curtin-networking.cfg

Next, run dpkg-reconfigure cloud-init to remove the NoCloud datasource, ensuring that cloud-init searches for and finds the ConfigDrive datasource used on DigitalOcean:

sudo dpkg-reconfigure cloud-init

You should see the following graphical menu:


The NoCloud datasource is initially highlighted. Press SPACE to unselect it, then hit ENTER. Next, navigate to /etc/netplan:

cd /etc/netplan

Then remove the 50-cloud-init.yaml file, which was generated from the cloud-init networking file we removed previously:

sudo rm 50-cloud-init.yaml

The final step is to clean up the configuration from the initial cloud-init run so that cloud-init reruns when the image is launched on DigitalOcean. To do this, run cloud-init clean:

sudo cloud-init clean

At this point you have installed and configured cloud-init for use with DigitalOcean. You can now move on to enabling SSH access to your Droplet.
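If you prefer not to use the interactive menu, the same result can be pinned with a small drop-in file. The filename below is illustrative (cloud-init reads any .cfg file in this directory); the datasource_list setting itself is standard cloud-init configuration:

```yaml
# /etc/cloud/cloud.cfg.d/99-digitalocean.cfg -- illustrative filename
# Restrict cloud-init to the ConfigDrive datasource used by DigitalOcean
datasource_list: [ ConfigDrive ]
```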

Enable SSH Access

Once you’ve installed and configured cloud-init, the next step is to ensure that you have a non-root admin user and password available to you on your machine, as outlined in the prerequisites. This step is essential to diagnose any errors that may arise after uploading your image and launching your Droplet. If a preexisting network configuration or bad cloud-init configuration renders your Droplet inaccessible over the network, you can use this user in combination with the DigitalOcean Droplet Console to access your system and diagnose any problems that may have surfaced.

Once you've set up your non-root administrative user, the final step is to ensure that you have an SSH server installed and running. SSH comes preinstalled on many popular Linux distributions. The process for checking whether a service is running will vary depending on your server's operating system. If you aren't sure how to do this, consult your OS's documentation on managing services. On Ubuntu, you can verify that SSH is up and running using the following command:

sudo service ssh status

You should see the following output:


● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2018-10-22 19:59:38 UTC; 8 days 1h ago

Docs: man:sshd(8)
Process: 1092 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 1115 (sshd)
Tasks: 1 (limit: 4915)
Memory: 9.7M
CGroup: /system.slice/ssh.service
└─1115 /usr/sbin/sshd -D

If SSH isn't installed and running, you can install it using apt (on Debian-based distributions):

sudo apt install openssh-server

By default, the SSH server will start on boot unless configured otherwise. This is desirable when running the system in the cloud, as DigitalOcean can automatically copy in your public key and grant you immediate SSH access to your Droplet after creation.
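If you're unsure whether your distribution's service manager reports SSH correctly, a distribution-agnostic sanity check is to look for a listener on the SSH port (this sketch assumes sshd runs on the default port 22):

```shell
# List listening TCP sockets and look for the default SSH port
if ss -tln 2>/dev/null | grep -q ':22 '; then
    echo "a service is listening on port 22"
else
    echo "no listener found on port 22; is sshd running?"
fi
```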

Once you’ve created a non-root administrative user, enabled SSH, and installed cloud-init, you’re ready to move on to creating an image of your boot disk.

Creating a Disk Image

In this step, we’ll create a RAW format disk image using the dd command-line utility, and compress it using gzip. We’ll then upload the image to DigitalOcean Spaces using s3cmd.

To begin, log in to your server and inspect the block device arrangement for your system using lsblk:

lsblk

You should see something like the following:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 12.7M  1 loop /snap/amazon-ssm-agent/495
loop1    7:1    0 87.9M  1 loop /snap/core/5328
vda    252:0    0   25G  0 disk
└─vda1 252:1    0   25G  0 part /
vdb    252:16   0  420K  1 disk

In this case, note that the main boot disk is /dev/vda, a 25GB disk, and the primary partition, mounted at /, is /dev/vda1. In most cases the disk containing the partition mounted at / is the source disk to image. We are going to use dd to create an image of /dev/vda.

Next, decide where you want to store the disk image. One option is to attach another block storage device, preferably at least as large as the disk you are imaging, save the image there, and then upload it to DigitalOcean Spaces. If you have physical access to the server, you can instead add an additional physical drive or attach an external USB disk. Another option, demonstrated later in this blog, is to copy the image over SSH to a local machine, from which you can upload it to Spaces.

No matter which method you follow, ensure that the storage device to which you save the compressed image has enough free space. If the disk you are imaging is mostly empty, the compressed image file will be significantly smaller than the original disk.

Finally, keep in mind that copying an actively-used disk may result in corrupted files. Before running the dd command below, be sure to halt any data-intensive operations and close as many running applications as possible.
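To gauge whether your destination has enough room, compare the used space on the source filesystem with the free space at the destination. A minimal sketch, using the current directory as a stand-in for an attached volume such as /mnt/tmp_disk:

```shell
# Used space on the filesystem being imaged (the source)
df -h /

# Free space at the destination; substitute your attached volume's mount point
df -h .
```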

1: Creating Image Locally

Here is the dd command we're going to execute:

dd if=/dev/vda bs=4M conv=sparse | pv -s 25G | gzip > /mnt/tmp_disk/ubuntu.gz

Here, we select /dev/vda as the input disk to image and set the input/output block size to 4MB (from the default 512 bytes), which generally speeds things up a bit. In addition, the conv=sparse flag minimizes the output file size by skipping over empty space. We then pipe the output to the pv pipe viewer utility so we can visually track the progress of the transfer (this pipe is optional, and requires installing pv using your package manager). If you know the size of the initial disk, you can add -s 25G to the pv command to get an ETA for when the transfer will complete. Finally, we pipe it all to gzip and save it in a file called ubuntu.gz on the temporary block storage volume attached to the server. Replace /mnt/tmp_disk with the path to the external storage device you've attached to your server.
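To see the pipeline's effect without touching a real disk, you can run the same shape of command on a small stand-in file; everything here (names and sizes) is illustrative:

```shell
# Stand-in for a real disk: a 10MB file of empty space
dd if=/dev/zero of=fake_disk.img bs=1M count=10 status=none

# Same pipeline shape as above (pv omitted); conv=sparse skips empty blocks
dd if=fake_disk.img bs=4M conv=sparse status=none | gzip > fake_disk.gz

# The compressed output is tiny because the source is mostly empty
ls -l fake_disk.img fake_disk.gz
```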

2: Creating Image over SSH

If you have enough disk space available on your local machine, you can instead execute the copy over SSH, rather than provisioning additional storage for your remote machine. Note that, depending on the bandwidth available to you, this can be slow, and you may incur additional costs for data transfer over the network. To copy and compress the disk over SSH, execute the following command on your local machine:

ssh remote_user@your_server_ip "sudo dd if=/dev/vda bs=4M conv=sparse | gzip -1 -" | dd of=ubuntu.gz

We are SSHing into our remote server, executing the dd command there, and piping the output to gzip. We then transfer the gzip output over the network and save it as ubuntu.gz locally. Ensure you have the dd utility available on your local machine before running this command:

which dd

Create the compressed image file using either of the above methods. This may take several hours, depending on the size of the disk you're imaging and the method you're using to create the image. Once you've created the compressed image file, you can move on to uploading it to your DigitalOcean Space using s3cmd.
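Before uploading, it's worth confirming that the archive isn't truncated or corrupted. A minimal sketch using a stand-in file (in practice you would run these checks against your real ubuntu.gz):

```shell
# Stand-in archive; substitute the ubuntu.gz you just created
echo "disk image contents" > sample.img
gzip -kf sample.img

# gzip -t tests the archive's integrity and exits non-zero on corruption
gzip -t sample.img.gz && echo "archive is intact"

# Record a checksum so you can verify the file again after transfer or upload
sha256sum sample.img.gz > sample.img.gz.sha256
sha256sum -c sample.img.gz.sha256
```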

Uploading Image to Spaces and Custom Images

As described in the essentials, you should have s3cmd installed and configured for use with your DigitalOcean Space on the machine that contains your compressed image. Locate the compressed image file and upload it to your Space using s3cmd:

s3cmd put /path/to/ubuntu.gz s3://your_space_name

Note that you should replace your_space_name with your Space's name and not its URL; for example, example-space-name rather than the Space's full digitaloceanspaces.com address. After the upload completes, navigate to your Space in the DigitalOcean Control Panel and locate the image in the list of files. You will have to make the image publicly accessible temporarily so that Custom Images can access it and save a copy.

On the right-hand side of the image listing, click the More drop-down menu, then click Manage Permissions:


Then click the radio button next to Public and hit Update to make the image publicly accessible. Note that during this process your image will temporarily be accessible to anyone with its Spaces path. If you'd like to avoid making your image temporarily public, you can create your Custom Image using the DigitalOcean API; just be sure to set the image back to Private using the above procedure after it has successfully been transferred to Custom Images.

Next, fetch the Spaces URL for your image by clicking the image name in the Control Panel and hitting Copy URL in the window that pops up. Then navigate to Images in the left-hand navigation bar, and then to Custom Images. Upload your image using this URL and create a Droplet from it. Keep in mind that you need to add an SSH key to the Droplet on creation.
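If you go the API route, custom images are created with a POST to the /v2/images endpoint. The sketch below only builds the request body and shows the call commented out; the token, Space URL, and image name are placeholders you must substitute:

```shell
# Placeholders -- substitute your own API token and the image's Spaces URL
DO_TOKEN="your_api_token"
IMAGE_URL="https://your-space-name.region.digitaloceanspaces.com/ubuntu.gz"

# Request body for POST /v2/images (create a custom image)
cat > payload.json <<EOF
{
  "name": "ubuntu-18-04-custom",
  "url": "${IMAGE_URL}",
  "distribution": "Ubuntu",
  "region": "nyc3"
}
EOF

# Uncomment to actually submit the request:
# curl -X POST "https://api.digitalocean.com/v2/images" \
#      -H "Content-Type: application/json" \
#      -H "Authorization: Bearer ${DO_TOKEN}" \
#      -d @payload.json
```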


If you attempt to SSH into your Droplet and are unable to connect, make sure that your image meets the listed requirements and has both cloud-init and SSH installed and properly configured. If you still can't access the Droplet, you can use the DigitalOcean Droplet Console and the non-root user you created earlier to explore the system and debug your networking, cloud-init, and SSH configurations. Another way to debug your image is to use a virtualization tool like VirtualBox to boot your disk image inside a virtual machine and debug your system's configuration from within the VM.


In this blog, you've learned how to create a disk image of an Ubuntu 18.04 system using the dd command-line utility and upload it to DigitalOcean as a Custom Image from which you can launch Droplets. The steps may vary depending on your operating system, existing hardware, and kernel configuration, but, in general, images created from popular Linux distributions should work using this method. Be sure to carefully follow the steps for installing and configuring cloud-init, and ensure that your system meets all the requirements listed in the essentials section above.



Barry Davis is a Technology Evangelist who has been with Webskitters for more than 5 years. A specialist in website design, development, and online business strategy, he is passionate about implementing new web technologies that make websites perform better.

