24.06.2013 19:29

Hosting with Puppet - Design

Two years ago I was a small-time Cfengine user moving to Puppet on a large installation, and more specifically introducing it to a managed hosting provider (an important factor driving my whole design and decision making process later). I knew how important it was going to be to get the base design right, and I did a lot of research on Puppet infrastructure design guidelines, but with superficial results. I was disappointed; the DevOps crowd was producing tons of material on configuration management, couldn't at least a small part be applicable to large installations? I didn't see it that way then, but maybe that knowledge was being reserved for consulting gigs. After all that criticism it is only fair that I write something of my own on the subject.

First of all, a lot has happened since. Wikimedia decided to release all their Puppet code to the public. I learned a lot, even if most of it was what not to do - but that was the true knowledge to be gained. One of the most prominent Puppet Forge contributors, example42 labs, released the next generation of their Puppet modules, and the quality has increased immensely. The level of abstraction is high, and for the first time I felt the Forge could become a real provider for me. Then 8 months ago the annual PuppetConf conference hosted engineers from Mozilla and Nokia talking about the design and scaling challenges they faced running Puppet in a big enterprise. Someone with >2,000 servers sharing their experiences with you - soak it up.

* Puppet design principles


Running Puppet in a hosting operation is a very specific use case. Most resources available to you will concern running one or two web applications on a hopefully standardized software stack, across a dozen servers all managed by Puppet. But here you are a level above that, running thousands of such apps and sites across hundreds of development teams that have nothing in common. If they are developing web apps in Lisp you are there to facilitate it, not to tell stories about Python.

Some teams are heavily involved with their infrastructure, others depend entirely on you. Finally, there are "non-managed" teams which only need you to manage hardware for them, but you still want to provide them with a hosted Puppet service. All this influences my design heavily, but must not define it. If it works for 100 apps it must work for 1 just the same, so the design principles below are universal.

- Object oriented


Do not treat manifests like recipes. Stop writing node manifests. Write modules.

Huge manifests with endless instructions, if conditionals, and node (server) logic are a trap. They introduce an endless cycle of "squeezing in just one more hack" until the day you throw it all away and re-factor from scratch. This is one of the lessons I learned from Wikimedia.

Write modules (see Modular services and Module levels) that are abstracted. Within modules write small abstracted classes with inheritance in mind (see Inheritance), and write defined types (defines) for resources that have to be instantiated many times. Where possible write and distribute templates, not static files, to reduce the chances of human error, the number of files to be maintained by your team, and finally the number of files compiled into catalogs (which concerns scaling).

Here's a stripped down module sample to clarify this topic, and those discussed below:
# - modules/nfs/manifests/init.pp
class nfs (
    $args = 'UNSET'
    ){

    # Abstract package and service names, Arch, Debian, RedHat...
    package { 'portmap': ensure => 'installed', }
    service { 'portmap': ensure => 'running', }
}
# - modules/nfs/manifests/disable.pp
class nfs::disable inherits nfs {
    Service['portmap'] { ensure => 'stopped', }
}
# - modules/nfs/manifests/server.pp
class nfs::server (
    $args = 'UNSET'
    ){

    package { 'nfs-kernel-server': ensure => 'installed', }
    service { 'nfs-kernel-server': ensure => 'running', }
}
# - modules/nfs/manifests/mount.pp
define nfs::mount (
    $arg  = 'UNSET',
    $args = 'UNSET'
    ){

    mount { $arg: device => $args['foo'], }
}
# - modules/nfs/manifests/config.pp
define nfs::config (
    $args = 'UNSET'
    ){

    # configure idmapd, configure exports...
}
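To round off the example, here is how such a module could be consumed; a minimal usage sketch, where the server name, the export path and the 'foo' hash key are just placeholders:
# - usage sketch, names and paths are assumptions
include nfs
include nfs::server

nfs::mount { 'media':
    arg  => '/srv/media',
    args => { 'foo' => 'filer01:/exports/media', },
}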

- Modular services


Maintain clear roles and responsibilities between modules. Do not allow overlap.

Maybe it's true that a server will never run PHP without an accompanying web server, but that is not a good reason to bundle PHP management into the apache2 module. The same principle prevents combining mod_php and PHP-FPM management into a single module. Write php5, phpcgi and phpfpm modules, and use them with the Apache2, Lighttpd and Nginx web server modules interchangeably.
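
To illustrate, a minimal sketch; the role class names are hypothetical, the point is that any PHP module composes with any web server module:
# - hypothetical role classes
class shop::legacy {
    include ::apache2
    include ::php5    # mod_php
}
class shop::modern {
    include ::nginx
    include ::phpfpm
}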

- Module levels


Exploit modulepath support. Multiple module paths are supported, and they can greatly improve your design.

Reserve the default /etc/puppet/modules path for modules exposing the top level API (for lack of a better acronym). These modules should define your policy for all the software you standardize on, how a software distribution is installed and how it's managed: iptables, sudo, logrotate, dcron, syslog-ng, sysklogd, rsyslog, nginx, apache2, lighttpd, php5, phpcgi, phpfpm, varnish, haproxy, tomcat, fms, darwin, mysql, postgres, redis, memcached, mongodb, cassandra, supervisor, postfix, qmail, puppet itself, puppetmaster, pepuppet (enterprise edition), pepuppetmaster...
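
A minimal sketch of the matching master configuration; the teams path is an assumption matching the example below:
# - /etc/puppet/puppet.conf
[master]
modulepath = /etc/puppet/modules:/etc/puppet/teams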

Use the lower level modules for defining actual policy and configuration for development teams in organizations (or customers in the enterprise), and their servers. Here's an example:
- /etc/puppet/teams/t1000/
  |_ /etc/puppet/teams/t1000/files/
     |_ php5/
        |_ apc.ini
  |_ /etc/puppet/teams/t1000/manifests/
     |_ init.pp
     |_ services.pp
     |_ services/
        |_ encoder.pp
     |_ webserver.pp
     |_ webserver/
        |_ production.pp
     |_ users/
        |_ virtual.pp
  |_ /etc/puppet/teams/t1000/templates/
     |_ apache2/
        |_ virtualhost.conf.erb
For heavily involved teams the "services" classes are there to enable them to manage their own software, code deployments and similar tasks.
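
A minimal sketch of one such class; the package and service names are assumptions:
# - teams/t1000/manifests/services/encoder.pp
class t1000::services::encoder {
    # Hypothetical encoding service managed by the team
    package { 'ffmpeg': ensure => 'installed', }

    service { 'encoderd':
        ensure  => 'running',
        require => Package['ffmpeg'],
    }
}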

- Inheritance


Understand class inheritance, and use it to abstract your code to allow for black-sheep servers.

These servers are always present - that one server in 20 which does things "just a little differently".
# - teams/t1000/manifests/init.pp
class t1000 {
    include ::iptables

    class { '::localtime': timezone => 'Etc/UTC', }

    include t1000::users::virtual
}
# - teams/t1000/manifests/webserver.pp
class t1000::webserver inherits t1000 {
    include ::apache2

    ::apache2::config { 't1000-webcluster':
        keep_alive_timeout  => 10,
        keep_alive_requests => 300,
        name_virtual_hosts  => [ "${ipaddress_eth1}:80", ],
    }
}
# - teams/t1000/manifests/webserver/production.pp
class t1000::webserver::production inherits t1000::webserver {
    include t1000::services::encoder

    ::apache2::vhost { 'foobar.com':
        content => 't1000/apache2/virtualhost.conf.erb',
        options => {
            'listen'  => "${ipaddress_eth1}:80",
            'aliases' => [ 'prod.foobar.com', ],
        },
    }
}
Understand how resources are inherited across classes. This will not work:
# - teams/t1000/manifests/webserver/legacy.pp
class t1000::webserver::legacy inherits t1000::webserver {
    include ::nginx

    # No, you won't get away with it
    Service['apache2'] { ensure => 'stopped', }
}
Only a subclass inheriting its parent class can override resources of that parent class. But this is not a deal breaker once you understand it. Remember our "nfs::disable" class from an earlier example, which inherited its parent class "nfs" and proceeded to override a service resource?
# - teams/t1000/manifests/webserver/legacy.pp
class t1000::webserver::legacy inherits t1000::webserver {
    include ::nginx

    include ::apache2::disable
}
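For illustration, a minimal sketch of what that class could look like inside the apache2 module, mirroring nfs::disable and assuming the apache2 class declares the 'apache2' service resource:
# - modules/apache2/manifests/disable.pp
class apache2::disable inherits apache2 {
    Service['apache2'] { ensure => 'stopped', }
}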
This was the simplest scenario. Consider these as well: a legacy server needs to run MySQL v5.1 in a cluster of v5.5 nodes, a server needs h264 streaming support compiled into its nginx binary, provided by a special package, a server needs PHP 5.2 to run a legacy e-commerce system...

- Function-based classifiers


Export only the bottom level classes of the bottom level modules to the business, as node classifiers:
# - manifests/site.pp (or External Node Classifier)
node 'man0001' { include t1000::webserver::production }
This leaves system engineers free to define system policy with 100% flexibility, and allows them to handle complex infrastructure. In turn they must ensure the business is never lacking: a server either functions as a production webserver or it does not, and a node must never include top level API classes directly.
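
To make the anti-pattern explicit, this is what a node definition must never look like:
# - manifests/site.pp
# Never: a top level API class exported to the business
node 'man0002' { include ::apache2 }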

- Dynamic arguments


Do not limit your templates to a fixed number of features.

Use hashes to add support for optional, arbitrary settings that can be passed on to resources in defines. When a developer asks for a new feature there is nothing to modify and nothing to re-factor: the options hash (in the earlier "apache2::vhost" example) is extended, and the template is expanded as needed with new conditionals.
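
A minimal sketch of such a template; the hash keys are assumptions matching the earlier vhost example:
# - teams/t1000/templates/apache2/virtualhost.conf.erb
<VirtualHost <%= @options['listen'] %>>
    ServerName <%= @name %>
<% if @options.has_key?('aliases') -%>
    ServerAlias <%= @options['aliases'].join(' ') %>
<% end -%>
</VirtualHost>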

- Convergence


Embrace continuous repair. Design for it.

Is it to my benefit to go all wild on class relationships to squeeze everything into a single Puppet run? No - if just one thing changes, the whole policy breaks apart. Instead, micro-manage class dependencies and resource requirements. If a webserver refused to start because a Syslog-ng FIFO was missing, we know it will succeed on the next run. Within a few runs we can deploy whole clusters across continents.

There is however one specific here which is not universal: a hosting operation needs to keep agent run intervals frequent to keep up with an endless stream of support requests. Different types of operations can get away with 45-60 minute intervals, and sometimes use them for one reason or another (e.g. scaling issues). I followed the work of Mark Burgess (author of Cfengine) for years, and agree with Cfengine's 5 minute intervals for just about any purpose.
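
The matching agent setting, expressed in seconds:
# - /etc/puppet/puppet.conf
[agent]
runinterval = 300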

- Configuration abstraction


Know how much to abstract, and where to draw the line.

Services like Memcached and MongoDB have a small set of run-time parameters. Their respective "*::config" defines can easily abstract their whole configuration files into a dozen arguments expanded into variables of a single template. Others like Redis support hundreds of run-time parameters, but if you consider that >80% of Redis servers run in production with default parameters, even 100 arguments accepted by "redis::config" is not too much. For any given server you will provide 3-4 arguments, the rest will be filled from default values, and yet when you truly need to deploy an odd-ball Redis server the flexibility to do so is there, without the need to maintain a hundred redis.conf copies.
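
A minimal sketch of such a define; argument names, defaults and paths are assumptions:
# - modules/memcached/manifests/config.pp
define memcached::config (
    $listen = '127.0.0.1',
    $port   = '11211',
    $memory = '64'
    ){

    # $listen, $port and $memory expand inside the template
    file { "/etc/memcached_${name}.conf":
        content => template('memcached/memcached.conf.erb'),
    }
}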

Services like MySQL and Apache2 can exist in an endless number of states, which cannot be abstracted. Or to be honest they can, but you will make your team miserable when you set out to make their jobs easier. This is where you draw the line. For the most complex software distributions abstract only the fundamentals and the commonalities needed to deploy the service. Handle everything else through "*::dotconf", "*::vhost", "*::mods" and similar defines.

- Includes


Make use of includes in services which support them, and those that don't.

Includes allow us to maintain small, fundamental configuration files which pull in site-specific configuration from small snippets dropped into their conf.d directories. This is a useful feature when trying to abstract and bring together complex infrastructures.

Services which do not support includes can fake them. Have the "*::dotconf" define install configuration snippets, and then call an exec resource to assemble the primary configuration file from the individual snippets in the improvised conf.d directory (an alternative approach is provided by puppet-concat). This functionality also allows you to manage shared services across shared servers, where every team provides a custom snippet in their own repository. They all end up on the shared server (after review) without the need to manage a single file across many teams (which opens all kinds of access-control questions).
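
A minimal sketch of faked includes, using classic sysklogd which has no native include support; the improvised snippet directory is an assumption:
# - modules/sysklogd/manifests/dotconf.pp
define sysklogd::dotconf (
    $source = 'UNSET'
    ){

    file { "/etc/syslog.d/${name}.conf":
        source => $source,
        notify => Exec["syslog-assemble-${name}"],
    }

    # Re-assemble the primary file only when this snippet changes
    exec { "syslog-assemble-${name}":
        command     => '/bin/sh -c "cat /etc/syslog.d/*.conf > /etc/syslog.conf"',
        refreshonly => true,
    }
}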

- Service controls


Do not allow Puppet to become the enemy of the junior sysadmin.

Every defined type managing a service resource should include 3 mandatory arguments; let's call them onboot, autorestart, and autoreload. On clustered setups it is not considered useful to bring broken or outdated members back into the pool on boot, it is also not considered useful to automatically restart such a service if detected as "crashed" while it's actually down for maintenance, and often it is not useful to restart such a service when a configuration change is detected (and in the process flush 120GB of data from memory).
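
A minimal sketch of such a define; the service name, the defaults and the config file resource are assumptions:
# - modules/redis/manifests/server.pp
define redis::server (
    $onboot      = true,
    $autorestart = true,
    $autoreload  = false
    ){

    # Assumes File["/etc/redis/${name}.conf"] is declared elsewhere
    service { "redis-${name}":
        ensure    => $autorestart ? { true => 'running', default => undef, },
        enable    => $onboot,
        subscribe => $autoreload ? { true => File["/etc/redis/${name}.conf"], default => undef, },
    }
}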

Balance these arguments and provide sane defaults for every single service on its own merits. If you do not, downtime will occur. You will also have sysadmins stopping Puppet agents the moment they log in, naturally forgetting to start them again, and 2 weeks later you realize half of your cluster is not in compliance (Puppet monitoring is important, but it is an implementation detail).

- API documentation


Document everything. Use RDoc markup and auto-generate HTML with puppet doc.

At the top of every manifest: document every single class, every single define, every single one of their arguments, every single variable they search for or declare, and provide multiple usage examples for each class and define. Finally, include contact information, a bug tracker link and any copyright notices.
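
A minimal sketch of such a header, reusing the nfs::mount define from earlier; this markup layout is one common RDoc convention, not the only one:
# - modules/nfs/manifests/mount.pp
# == Define: nfs::mount
#
# Mounts an NFS export on the local filesystem.
#
# === Parameters
#
# [*arg*]  The local mount point.
# [*args*] Hash of mount options; 'foo' holds the remote device.
#
# === Examples
#
#  nfs::mount { 'media':
#      arg  => '/srv/media',
#      args => { 'foo' => 'filer01:/exports/media', },
#  }
#
# === Authors
#
# Your Team <ops@example.com>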

Puppet includes a tool to auto-generate documentation from these headers and comments in your code. Have it run periodically to refresh your API documentation, and export it to your operations and development teams. It's not just a nice thing to do for them, it is going to save you from re-inventing the wheel on your Wiki system. Your Wiki now only needs the theory documented: what is Puppet, what is revision control, how to commit a change... and those bring me to the topics of implementation and change management, which are beyond the scope of design.


Written by anrxc | Permalink | Filed under work, code