Supporting EVMS
The EVMS suite aims to be a complete disk management solution:
it recognises disk partitions, RAID configurations, concatenation
of disk partitions, and file systems. It does all of this using its
own plugin architecture, and is largely self-contained: in particular,
none of LVM, mdadm or libdevmapper is required. There are some external
dependencies though: EVMS uses the same kernel modules to do RAID
that other packages use, and it uses an external
mkfs command to support file systems.
What can be moved out of the kernel has been moved out: for
example, EVMS does not rely on kernel code to interpret
partition tables, so a kernel-provided partition device such as
hda1 goes unused. Instead, EVMS uses the device mapper (dm) to
present parts of a physical disk as independent block devices.
The advantage of this approach is that new partition table
formats can be supported without kernel changes.
The plugin architecture provides three different user interfaces:
command line, curses-based and GUI. There is also a
configuration and backup/restore mechanism through which plugins
can send and receive state-related data. There does not seem to
be a central state file other than basic configuration: all state
information is kept with the plugins.
Plugins are implemented as shared libraries, but the relation
between library and plugin is not simple: there is no command
to determine which plugins a library contains.
This makes it difficult to determine, in a maintainable way,
the minimal set of plugins needed to boot the system;
the current implementation makes no attempt in that direction,
and simply loads all of them.
Once the hardware is available and device drivers are loaded, the
EVMS system expects to take care of everything. This means
yaird support can be fairly simple:
once we find that a device is supported by EVMS (it is listed
by the command "evms_query volumes"), we determine the
underlying physical disks with the command "evms_query disks".
We then build a boot image that loads drivers for the physical
disks and afterwards runs the command "evms_activate", which
recreates all volumes.
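The first step above can be sketched as a simple membership test
against the tool's output. The sample volume names below are invented
for illustration; real output comes from running "evms_query volumes".

```python
def evms_supported(device, volumes_output):
    """Return True when the device appears in 'evms_query volumes' output."""
    return device in volumes_output.splitlines()

# Invented sample output, one volume name per line.
sample = "/dev/evms/lvm/root\n/dev/evms/sda5\n"
print(evms_supported("/dev/evms/lvm/root", sample))
```

A device that fails this test is simply not managed by EVMS and is
handled by the other yaird device templates instead.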
There's a twist: the volume may need RAID drivers; to accommodate
this, all RAID-related modules are inserted into the kernel before
"evms_activate" is started. A possible improvement would be to include
modprobe in the image and let EVMS load only the required
modules. This would save RAM at the expense of a somewhat larger
initial boot image.
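The resulting boot sequence can be sketched as follows: RAID modules
go in first, then the drivers for the physical disks, then
"evms_activate". The module names are assumptions for illustration;
the real set depends on the kernel and the detected hardware.

```python
# Assumed RAID personality modules; the actual list is kernel-dependent.
RAID_MODULES = ["linear", "raid0", "raid1"]

def activation_script(disk_drivers):
    """Return the ordered commands the boot image should run."""
    steps = ["modprobe %s" % m for m in RAID_MODULES + disk_drivers]
    steps.append("evms_activate")  # recreates all EVMS volumes
    return steps

print(activation_script(["sd_mod"]))
```

The suggested improvement would replace the fixed RAID_MODULES list
with a modprobe binary on the image, so that only what EVMS actually
needs gets loaded.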
Note that some devices are visible in EVMS without actually
working; these are normally shown with device number 0:0.
This seems to happen mostly with devices that are not completely
under the control of EVMS.
I'm not sure whether this is a bug or a feature; either way,
yaird will need to be aware of such
devices and the fact that they may be visible, but that they are
not bootable.
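Filtering out such placeholder devices can be sketched as below. The
(name, device number) pairs are invented examples; a real
implementation would take them from the EVMS query output.

```python
def bootable_volumes(volumes):
    """Keep only volumes whose device number is not the placeholder 0:0."""
    return [name for name, devno in volumes if devno != "0:0"]

vols = [("/dev/evms/root", "254:1"), ("/dev/evms/foreign", "0:0")]
print(bootable_volumes(vols))
```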