Adding a FileSystem/Mount Point to Ubuntu DB2 Sandbox Servers on VMs

I blogged a little while ago on how to create a basic sandbox VM for DB2. Since I’m a big fan of separating filesystems properly, I needed to learn how to add another disk and another filesystem to my sandbox VMs.

Creating a Filesystem

In production environments, we often work with System Administrators (SA) and Storage Admins to get the filesystems we need. That means that I don’t spend much time on things like creating filesystems. I am very clear with clients that I am a database expert, and there are certain tasks that I will hand off to SAs and other experts. I’m sure this gets much more complicated in other kinds of environments, but I thought I’d share how I added a filesystem on my little Ubuntu VM running on VMware Workstation 12 Player (not the fully featured VMware Workstation Pro).

I started with this article, and for the most part followed it very closely, even though that is for a more advanced implementation of VMware. The steps aren’t rocket science:

  1. Log on to your Linux VM
  2. Run this command and note the sdX entries:
    ecrooks@ubuntu:~$ ls /dev/sd*
    /dev/sda  /dev/sda1  /dev/sda2  /dev/sda5
  3. Shut down the Linux VM
  4. Go into the VMware Workstation Player, right-click on the VM, then select “Settings”:
    VMWare_settings
  5. On the hardware tab, select “Add”, and a Wizard will pop up:
    VMware_add_hardware
  6. In the Add Hardware Wizard, select “Hard Disk” and click next:
    VMware_add_hardware2
  7. Select SCSI and click next:
    VMware_add_hardware3
  8. Select “Create a new virtual disk” and click next:
    VMware_add_hardware4
  9. Select a size that makes sense both for your host and your VM and click next:
    VMware_add_hardware5
  10. Take the default and click Finish:
    VMware_add_hardware6
  11. Now boot up your Linux VM, and check the listed disks again:
    ecrooks@ubuntu:~$ ls /dev/sd*
    /dev/sda  /dev/sda1  /dev/sda2  /dev/sda5  /dev/sdb
  12. Acting as root, make the file system:
    ecrooks@ubuntu:~$ sudo mkfs -t ext3 /dev/sdb
    [sudo] password for ecrooks: 
    mke2fs 1.42.13 (17-May-2015)
    Creating filesystem with 1310720 4k blocks and 327680 inodes
    Filesystem UUID: c8fd701b-e492-4de4-a16f-c69d65466d02
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
  13. As root, use fdisk to verify your new disk is there (at the bottom):
    ecrooks@ubuntu:~$ sudo fdisk -l
     
    ... 
    
    Disk /dev/ram15: 64 MiB, 67108864 bytes, 131072 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    
    
    Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x3101965c
    
    Device     Boot    Start      End  Sectors  Size Id Type
    /dev/sda1  *        2048 39845887 39843840   19G 83 Linux
    /dev/sda2       39847934 41940991  2093058 1022M  5 Extended
    /dev/sda5       39847936 41940991  2093056 1022M 82 Linux swap / Solaris
    
    
    Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
  14. Make a directory that your new file system will be mounted on:
    ecrooks@ubuntu:~$ sudo mkdir /db2home2
  15. Make a backup of /etc/fstab (in case you mess up):
    ecrooks@ubuntu:~$ sudo cp /etc/fstab /etc/fstab.bak
  16. Edit /etc/fstab using your favorite editor to add this line (the mount point must match the directory created in step 14; the last two fields are the dump flag and the fsck pass number, and 0 and 2 are conventional for a non-root filesystem):
    /dev/sdb /db2home2 ext3 defaults 0 2
  17. Restart the Linux VM (or run sudo mount -a to mount the new entry without rebooting)
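The command-line portion of the steps above can be sketched as one small script. The mkfs/mkdir/fstab commands are straight from the steps; the fstab_line helper and the final mount are my additions, and /dev/sdb is an assumption — confirm the device name on your own VM with ls /dev/sd* first.

```shell
#!/bin/sh
# fstab_line DEVICE MOUNTPOINT -- build an /etc/fstab entry for an ext3
# filesystem: dump flag 0, fsck pass 2 (the convention for non-root
# filesystems)
fstab_line() {
    printf '%s %s ext3 defaults 0 2\n' "$1" "$2"
}

# The privileged steps, commented out so this sketch is safe to run as-is:
# sudo mkfs -t ext3 /dev/sdb                              # step 12
# sudo fdisk -l                                           # step 13
# sudo mkdir /db2home2                                    # step 14
# sudo cp /etc/fstab /etc/fstab.bak                       # step 15
# fstab_line /dev/sdb /db2home2 | sudo tee -a /etc/fstab  # step 16
# sudo mount /db2home2                                    # or reboot (step 17)

fstab_line /dev/sdb /db2home2
```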

Obviously this procedure wouldn’t work so well for a production environment. I’m sure there are online ways to do much of this, but this is just my little sandbox.
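One of those online ways, on hypervisors that support hot-adding disks, is to rescan the SCSI bus from inside the guest instead of rebooting. This is a hedged sketch: scan_targets is my own helper, not a standard tool, and it assumes the usual Linux /sys/class/scsi_host layout.

```shell
#!/bin/sh
# scan_targets SYSFS_ROOT -- list the per-adapter SCSI "scan" files under
# a sysfs tree (normally /sys); writing "- - -" to one of these asks the
# kernel to rescan all channels/targets/LUNs on that host adapter.
scan_targets() {
    for host in "$1"/class/scsi_host/host*; do
        [ -d "$host" ] && printf '%s/scan\n' "$host"
    done
}

# To actually trigger the rescan (needs root), something like:
# scan_targets /sys | while read -r f; do echo "- - -" | sudo tee "$f" >/dev/null; done
# then check `ls /dev/sd*` again for the new device.
```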

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares both posts about her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home

Articles: 544

2 Comments

  1. In a non-sandbox environment, LVM is normally used rather than just formatting the “physical” disk. ESXi-based hypervisors will support adding the disk on the fly and rescanning the bus to detect it.

    -your former systems admin
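The commenter's LVM route, sketched with made-up names (db2vg, db2home2lv, and the 4G size are illustrative, not from the post; all of these commands need root):

```shell
#!/bin/sh
# lv_device VG LV -- the device path created for a logical volume
lv_device() {
    printf '/dev/%s/%s\n' "$1" "$2"
}

# The LVM approach the comment describes, instead of formatting /dev/sdb
# directly (run as root; names are illustrative):
# sudo pvcreate /dev/sdb                   # mark the new disk as a physical volume
# sudo vgcreate db2vg /dev/sdb             # build a volume group on top of it
# sudo lvcreate -n db2home2lv -L 4G db2vg  # carve out a logical volume
# sudo mkfs -t ext3 "$(lv_device db2vg db2home2lv)"
# ...then mount via /etc/fstab as before. The benefit is that the logical
# volume can later be grown without touching a partition table.
```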
