
Overview

Attaching a volume connects a persistent block device to a running compute instance, making it accessible as a disk inside the guest operating system. A volume with status Available can be attached to any instance in the same availability zone. After attachment, the volume appears as a new block device (e.g., /dev/vdb) inside the instance; if it is a new blank volume, you must format and mount it before use.
Prerequisites
  • A volume with status Available
  • A running compute instance in the same availability zone as the volume
  • SSH access to the instance for filesystem preparation
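If the openstack CLI is configured with credentials for your project, the prerequisites can be checked from the command line. This is a sketch; the volume and instance names are placeholders:

```shell
# Check the volume status and zone (names are example placeholders):
openstack volume show my-data-volume -c status -c availability_zone
# Check the instance status and zone:
openstack server show my-instance -c status -c OS-EXT-AZ:availability_zone
```

The volume should report available, and the two availability zones should match before you attempt the attachment.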

Attach a Volume

Attaching a newly created blank volume does not create a filesystem. After attaching, format the device inside the instance before mounting. Attaching to a running instance does not require a reboot.

Open the Volumes list

Log in to the Xloud Dashboard (https://connect.<your-domain>) and navigate to Project → Volumes → Volumes.

Attach to instance

Locate the volume with status Available. Click the Actions dropdown and select Manage Attachments. In the dialog, select the target instance from the Attach to Instance dropdown and click Attach Volume.
The Device Name field can be left blank — Xloud assigns the next available device path automatically (e.g., /dev/vdb, /dev/vdc).
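The same attachment can be performed from the openstack CLI. A sketch, assuming the client is configured and using placeholder names:

```shell
# Attach the volume to a running instance (placeholder names):
openstack server add volume my-instance my-data-volume
# Check that the status has changed to in-use:
openstack volume show my-data-volume -c status -c attachments
```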

Confirm attachment

The volume status changes to In-use and the instance name appears in the Attached To column.
Volume attached — status shows In-use.

Format and Mount (First Use)

After attaching a new blank volume, prepare the filesystem inside the instance:

Identify the new device

SSH into the instance and list block devices:
List block devices
lsblk
The new volume appears as an unformatted disk — typically /dev/vdb or the next available device letter.
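Before formatting, it is worth confirming that the device really is blank. This sketch assumes the volume appeared as /dev/vdb:

```shell
# blkid prints nothing and exits non-zero for a device with no
# filesystem signature, so it doubles as a cheap safety check:
sudo blkid /dev/vdb || echo "no filesystem signature: safe to format"
# lsblk -f shows an empty FSTYPE column for a blank device:
lsblk -f /dev/vdb
```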

Format the volume

Format with ext4
sudo mkfs.ext4 /dev/vdb
For XFS (preferred for databases and large files):
Format with XFS
sudo mkfs.xfs /dev/vdb
Only run mkfs on a new, blank volume. Running it on a volume that already contains data will permanently erase that data.

Mount the volume

Create mount point and mount
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data

Persist the mount across reboots

Add the mount to /etc/fstab so it survives instance reboots:
Add to /etc/fstab
echo '/dev/vdb /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab
Use the volume UUID instead of the device path for more reliable fstab entries — device names can change after reboot on some configurations:
Find volume UUID
sudo blkid /dev/vdb
Then use: UUID=<uuid> /mnt/data ext4 defaults 0 2
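As a sketch, the UUID-based entry can be assembled in the shell before it is appended. The UUID below is a placeholder for the value blkid reports on your system:

```shell
# Placeholder UUID; in practice: uuid=$(sudo blkid -s UUID -o value /dev/vdb)
uuid="0a1b2c3d-4e5f-6789-abcd-ef0123456789"
entry="UUID=${uuid} /mnt/data ext4 defaults 0 2"
echo "${entry}"
# Append with: echo "${entry}" | sudo tee -a /etc/fstab
# Then run 'sudo mount -a' once: it fails loudly on a malformed
# entry instead of leaving the instance unbootable after a reboot.
```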

Verify

Confirm mount
df -h /mnt/data
Device mounted and accessible — filesystem capacity matches the volume size.

Detach a Volume

Always unmount the volume inside the instance before detaching. Detaching a mounted volume without unmounting first can cause filesystem corruption and data loss.

Unmount inside the instance

SSH into the instance and unmount the volume:
Unmount the volume
sudo umount /mnt/data
Remove or comment out the corresponding line in /etc/fstab to prevent boot errors after detachment.
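One way to do this is with sed. The sketch below rehearses the edit on a throwaway copy first, since a typo in /etc/fstab can make the instance unbootable; the UUID is a placeholder:

```shell
# Rehearse on a sample file before touching the real one:
printf '%s\n' 'UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /mnt/data ext4 defaults 0 2' > /tmp/fstab.example
# Comment out any uncommented line that mounts /mnt/data:
sed -i 's|^\([^#].* /mnt/data .*\)$|#\1|' /tmp/fstab.example
cat /tmp/fstab.example
# Once satisfied, apply the same expression to the real file:
#   sudo sed -i 's|^\([^#].* /mnt/data .*\)$|#\1|' /etc/fstab
```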

Detach from the Dashboard

Navigate to Project → Volumes → Volumes. Click Actions → Manage Attachments on the target volume, then click Detach Volume.
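The detach step is also available from the openstack CLI. A sketch with placeholder names, assuming the client is configured:

```shell
# Detach the volume from the instance (placeholder names):
openstack server remove volume my-instance my-data-volume
# The status should return to available:
openstack volume show my-data-volume -c status
```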

Confirm detachment

The volume status returns to Available.
Volume detached — status shows Available and ready for reattachment.

Troubleshooting

Volume fails to attach

Cause: The volume and instance are in different availability zones. Volumes can only be attached to instances in the same zone.

Resolution:
Check volume availability zone
openstack volume show <volume-id> -c availability_zone
Check instance availability zone
openstack server show <instance-id> -c OS-EXT-AZ:availability_zone
If the zones do not match, create a new volume in the correct availability zone, or migrate the instance.
Attached device does not appear inside the instance

Cause: The guest OS did not detect the hot-plug event, or the virtio driver is not loaded.

Resolution:
Rescan SCSI bus inside instance
echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan
If the device still does not appear, verify that the instance’s kernel supports virtio block devices. Modern Linux kernels include this by default.
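A quick way to check, assuming a Linux guest:

```shell
# Check whether virtio drivers are present; a driver may not appear
# in lsmod if it is compiled directly into the kernel:
lsmod | grep virtio || echo "no virtio modules loaded (may be built-in)"
# Devices bound to the virtio bus, if any, are listed here:
ls /sys/bus/virtio/devices/ 2>/dev/null || echo "no virtio devices visible"
```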
Unmount fails because the device is busy

Cause: A process inside the instance still has files open on the mounted volume.

Resolution:
Find processes using the mount
sudo lsof /mnt/data
Stop or kill the listed processes, then retry umount. Alternatively:
Force unmount (use with caution)
sudo umount -l /mnt/data
Lazy unmount (-l) detaches the filesystem immediately but keeps references open until all processes close their file handles. Only use this when the process cannot be stopped gracefully.

Next Steps

Extend a Volume

Increase volume capacity online without detaching or stopping the instance

Volume Snapshots

Create point-in-time snapshots for fast recovery and volume cloning

Create a Volume

Provision a new volume with the appropriate size and storage tier

Volume Backups

Create full and incremental backups for long-term data retention