Solaris Disk Addition

Suppose you’ve added a new disk to your Solaris system and the disk has shown up as /dev/dsk/c8d1. The first step is to label the disk and add it to a new storage pool:
solaris$ sudo zpool create zones c8d1
ZFS then labels the disk, creates the pool “zones,” creates a filesystem root inside that pool, and mounts that filesystem as /zones. The filesystem will be remounted automatically when the system boots.
solaris$ ls -a /zones
.
..
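If you want to verify what was created, zpool status and zfs list can be pointed at the new pool; the output is omitted here because it depends on the disk in question:
solaris$ zpool status zones
solaris$ zfs list zones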
Create a new filesystem
solaris$ sudo zfs create zones/new_fs
solaris$ zfs list -r zones
NAME           USED  AVAIL  REFER  MOUNTPOINT
zones          100K   488G    21K  /zones
zones/new_fs    19K   488G    19K  /zones/new_fs
The -r flag to zfs list makes it recurse through child filesystems. Most other zfs subcommands understand -r, too. Ever helpful, ZFS automounts the new filesystem as soon as we create it; this behavior is controlled by the canmount and mountpoint properties.
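You can inspect those two properties for the new filesystem with zfs get; the exact VALUE and SOURCE columns depend on your configuration, so the output is omitted here:
solaris$ zfs get canmount,mountpoint zones/new_fs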
To simulate traditional filesystems of fixed size, you can adjust the filesystem’s properties to add a “reservation” (an amount of space reserved in the storage pool for the filesystem’s use) and a quota. This adjustment of filesystem properties is one of the keys to ZFS management, and it’s something of a paradigm shift for administrators who are used to other systems. Here, we set both values to 1GB:
solaris$ sudo zfs set reservation=1g zones/new_fs
solaris$ sudo zfs set quota=1g zones/new_fs
solaris$ zfs list -r zones
NAME           USED  AVAIL  REFER  MOUNTPOINT
zones         1.00G   487G    21K  /zones
zones/new_fs    19K  1024M    19K  /zones/new_fs
The new quota is reflected in the AVAIL column for /zones/new_fs. Similarly, the reservation shows up immediately in the USED column for /zones. That’s because the reservations of /zones’s descendant filesystems are included in its size tally.
Both property changes are purely bookkeeping entries. The only change to the actual storage pool is the update of a block or two to record the new settings. No process goes out to format the 1GB of space reserved for /zones/new_fs. Most ZFS operations, including the creation of new storage pools and new filesystems, are similarly lightweight.
Using this hierarchical system of space management, you can easily group several filesystems to guarantee that their collective size will not exceed a certain threshold; you do not need to set limits on the individual filesystems (an example follows below).
You must set both the quota and reservation properties to properly emulate a traditional fixed-size filesystem. The reservation alone simply ensures that the filesystem will have enough room available to grow at least that large. The quota limits the filesystem’s maximum size without guaranteeing that space will be available for this growth; another object could snatch up all the pool’s free space, leaving no room for /zones/new_fs to expand.
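To illustrate the grouping mentioned above: a single quota on the parent filesystem caps the combined usage of everything beneath it. The 100g figure here is purely illustrative:
solaris$ sudo zfs set quota=100g zones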
Property inheritance
Many properties are naturally inherited by child filesystems. For example, if we wanted to mount the root of the zones pool in /opt/zones instead of /zones, we could simply set the root filesystem’s mountpoint property:
solaris$ sudo zfs set mountpoint=/opt/zones zones
solaris$ zfs list -r zones
NAME           USED  AVAIL  REFER  MOUNTPOINT
zones         1.00G   487G    21K  /opt/zones
zones/new_fs    19K  1024M    19K  /opt/zones/new_fs
solaris$ ls /opt/zones
new_fs
Setting the mountpoint property automatically remounts the filesystems, and the mount point change affects child filesystems in a predictable and straightforward way. The usual rules regarding filesystem activity still apply, however: a filesystem that is busy cannot be unmounted, and so cannot be moved.
Use zfs get to see the effective value of a particular property; zfs get all dumps them all. In the output, the SOURCE column tells you why each property has its particular value: local means that the property was set explicitly, and a dash (-) means that the property is read-only. If the property value is inherited from an ancestor filesystem, SOURCE shows the details of that inheritance as well.
solaris$ zfs get all zones/new_fs
NAME          PROPERTY       VALUE                  SOURCE
zones/new_fs  type           filesystem             -
zones/new_fs  creation       Wed Mar 17 17:57 2010  -
zones/new_fs  used           19K                    -
zones/new_fs  available      1024M                  -
zones/new_fs  referenced     19K                    -
zones/new_fs  compressratio  1.00x                  -
zones/new_fs  mounted        yes                    -
zones/new_fs  quota          1G                     local
zones/new_fs  reservation    1G                     local
zones/new_fs  mountpoint     /opt/zones/new_fs      inherited from zones
...
You will notice that the used, available, and referenced properties look suspiciously similar to the USED, AVAIL, and REFER columns shown by zfs list. In fact, zfs list is just a different way of displaying filesystem properties. You can specify the properties you want zfs list to show with the -o option.
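For example, to list only the name, quota, and reservation of each filesystem in the pool, pass a comma-separated property list to -o:
solaris$ zfs list -r -o name,quota,reservation zones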
You can get a more granular picture of how disk space is being used by looking at other properties such as usedbychildren and usedbysnapshots. See the zfs man page for a complete list of properties.
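For instance, the following query reports those two properties for the pool’s root filesystem (output omitted; the values depend on your data and snapshots):
solaris$ zfs get usedbychildren,usedbysnapshots zones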
Snapshots
 
On the command line, you create snapshots with zfs snapshot. For example, the following command sequence illustrates creation of a snapshot, use of the snapshot through the filesystem’s .zfs/snapshot directory, and reversion of the filesystem to its previous state.
solaris$ sudo touch /opt/zones/new_fs/now_you_see_me
solaris$ ls /opt/zones/new_fs
now_you_see_me
solaris$ sudo zfs snapshot zones/new_fs@snap1
solaris$ sudo rm /opt/zones/new_fs/now_you_see_me
solaris$ ls /opt/zones/new_fs
solaris$ ls /opt/zones/new_fs/.zfs/snapshot/snap1
now_you_see_me
solaris$ sudo zfs rollback zones/new_fs@snap1
solaris$ ls /opt/zones/new_fs
now_you_see_me
You assign a name to each snapshot at the time it’s created. As mentioned above, the complete specifier for a snapshot is usually written in the form filesystem@snapshot.
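To see which snapshots currently exist beneath a filesystem, add a type filter to zfs list; at this point in the walkthrough it should list zones/new_fs@snap1:
solaris$ zfs list -r -t snapshot zones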
Use zfs snapshot -r to create snapshots recursively. The effect is the same as executing zfs snapshot on each contained object individually: each subcomponent receives its own snapshot. All the snapshots have the same name, but they’re logically distinct.
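For example, a recursive snapshot of the whole pool might look like this (“nightly” is just an arbitrary snapshot name chosen for illustration):
solaris$ sudo zfs snapshot -r zones@nightly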
You can instantiate a snapshot as a full-fledged, writeable filesystem by “cloning” it.
solaris$ sudo zfs clone zones/new_fs@snap1 zones/subclone
solaris$ ls /opt/zones/subclone
now_you_see_me
solaris$ sudo touch /opt/zones/subclone/and_me_too
solaris$ ls /opt/zones/subclone
and_me_too now_you_see_me
The snapshot that is the basis of the clone remains undisturbed and read-only. However, the new filesystem (zones/subclone in this example) retains a link to both the snapshot and the filesystem on which it’s based, and neither of those entities can be deleted as long as the clone exists.
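The dependency is recorded in the clone’s read-only origin property; given the commands above, it should point back at the snapshot the clone was created from:
solaris$ zfs get origin zones/subclone
NAME            PROPERTY  VALUE               SOURCE
zones/subclone  origin    zones/new_fs@snap1  -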
Detaching and attaching
Sometimes disks become disconnected from a pool without an actual disk failure, for example, when a cable fails. Rejoining the pool with the same disk may require it to be detached first and then reattached.
solaris$ sudo zpool detach zones c2t1d0
solaris$ sudo zpool attach zones c2t0d0 c2t1d0
It is also worth noting that the pool’s autoreplace property should be set to on to ensure that the previous vdev configuration is kept (otherwise a previous mirror could become a stripe):
solaris$ zpool get autoreplace zones
NAME   PROPERTY     VALUE  SOURCE
zones  autoreplace  on     local
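If the property is off on your system, turn it on with zpool set:
solaris$ sudo zpool set autoreplace=on zones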
Moving disks across systems
If you have to move a pool from one system to another, the pool must be imported once the disks are installed in the new machine.
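Although not shown in the transcript below, the usual sequence is to export the pool cleanly from the old system and then ask the new system which pools it can see; both commands assume the pool is named zones1:
solaris$ sudo zpool export zones1
solaris$ sudo zpool import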
solaris$ sudo zpool import zones1
Once the import completes, zpool status shows both the original zones pool and the newly imported zones1 pool:
solaris$ zpool status
  pool: zones
 state: ONLINE
  scan: resilvered 30.5G in 0h13m with 0 errors on Sat Apr  7 14:35:35 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0

errors: No known data errors

  pool: zones1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zones1      ONLINE       0     0     0
          c3t0d0    ONLINE       0     0     0

errors: No known data errors