Saturday, August 1, 2015

Inspecting Puppet Module Metadata

Last week at #puppethack, @hunner helped me land a patch to stdlib to add a load_module_metadata function. This function came out of several Puppet module triage sessions and a patch from @raphink inspired by a conversation with @hirojin.

The load_module_metadata function is available in the master branch of puppetlabs-stdlib. Hopefully it will be wrapped up into one of the later 4.x releases, but it will almost certainly make it into 5.x.

On its own this function doesn't do much, but it is composable. Let's see some basic usage:

$: cat metadata.pp

$metadata = load_module_metadata('stdlib')

notify { $metadata['name']: }

$: puppet apply --modulepath=modules metadata.pp
Notice: Compiled catalog for hertz in environment production in 0.03 seconds
Notice: puppetlabs-stdlib
Notice: /Stage[main]/Main/Notify[puppetlabs-stdlib]/message: defined 'message' as 'puppetlabs-stdlib'
Notice: Finished catalog run in 0.03 seconds

As you can see, this isn't the most amazing thing ever. However, access to that information is very useful in cases like the following:

$apache_metadata = load_module_metadata('apache')

case $apache_metadata['name'] {
  'puppetlabs-apache': {
    # invoke apache as required by puppetlabs-apache
  }
  'example42-apache': {
    # invoke apache as required by example42-apache
  }
  default: {
    fail('Apache module author not recognized, please add it here')
  }
}
This is an example of Puppet code that can inspect the libraries loaded in the modulepath, then make intelligent decisions about how to use them. This means that module authors can support multiple versions of 'library' modules and not force their users into one or the other.

This is a real problem in Puppet right now. For every 'core' module there are multiple implementations, all with the same name. Apache, nginx, mysql, archive, wget: the list goes on. Part of this is a failure of the community to rally behind a single module, but we can't waste time finger-pointing now. The cat is out of the bag and we have to deal with it.

We've had metadata.json and dependencies for a while now. However, due to the shortcomings of the puppet module tool, most advanced users do not depend on dependency resolution from metadata.json. At my work we simply clone every module we need from git; users of r10k do much the same.
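For reference, load_module_metadata simply parses a module's metadata.json into a hash. A trimmed example looks roughly like this (the version and dependency shown are illustrative, not copied from a real module):

```json
{
  "name": "puppetlabs-stdlib",
  "version": "4.7.0",
  "dependencies": [
    {
      "name": "puppetlabs/concat",
      "version_requirement": ">= 1.0.0 < 2.0.0"
    }
  ]
}
```

The 'name', 'version', and 'dependencies' keys are the interesting ones for the tricks in this post.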

load_module_metadata enables modules to enforce that their dependencies are being met. Simply put a stanza like this in params.pp:

$unattended_upgrades_metadata = load_module_metadata('unattended_upgrades') 
$healthcheck_metadata = load_module_metadata('healthcheck')

if versioncmp($healthcheck_metadata['version'], '0.0.1') < 0 {
  fail('Puppet-healthcheck is too old to work')
}

if versioncmp($unattended_upgrades_metadata['version'], '2.0.0') < 0 {
  fail('Puppet-unattended_upgrades is too old to work')
}

As we already saw, modules can express dependencies on specific implementations and versions. They can also inspect the version available and adapt to it. This is extremely useful when building a module that depends on another module, and that module is crossing a semver major version boundary. In the past, in the OpenStack modules, we passed a parameter called 'mysql_module_version' to each class, which allowed that class to use the correct invocation of the mysql module. Now classes anywhere in your Puppet code base can inspect the mysql module directly and determine which invocation syntax to use.

$mysql_metadata = load_module_metadata('mysql')

if versioncmp($mysql_metadata['version'], '3.0.0') < 0 {
  # Use mysql 2.x syntax
} else {
  # Use mysql 3.x syntax
}
Modules can even open up their own metadata.json, and while it is clunky, it is possible to dynamically assert that dependencies are available and in the correct versions.
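As a sketch of what that could look like, here is a hypothetical module named 'mymodule' walking its own dependency list (the module name is a placeholder, and this assumes the future parser / Puppet 4 iteration syntax, which the post does not otherwise use):

```puppet
# Load our own metadata.json; 'mymodule' is a placeholder name.
$own_metadata = load_module_metadata('mymodule')

$own_metadata['dependencies'].each |$dep| {
  # Dependency names look like 'puppetlabs/stdlib'; strip the author
  # prefix to get the module name as it appears on the modulepath.
  $dep_name = regsubst($dep['name'], '^.*[/-]', '')

  # load_module_metadata fails the compile if the module (or its
  # metadata.json) is missing, so this call alone asserts presence.
  $dep_metadata = load_module_metadata($dep_name)

  notice("${dep_name} ${dep_metadata['version']} found on the modulepath")
}
```

Checking each 'version_requirement' string against the installed version is left as an exercise; versioncmp gets you most of the way there.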

I'm excited to see what other tricks people can do with this. I'm anticipating it will make collaboration easier, upgrades easier, and Puppet runs even safer. If you come up with a neat trick, please share it with the community and ping me on Twitter (@nibalizer) or IRC: nibalizer.
