Vagrant-specific files
Vagrantfile
Defines how Vagrant should set up all the VirtualBox VMs required to develop a hosted application.
We define two web front-end VMs and one database VM.
Each VM has a NAT interface shared only with the host, and a second interface on a private network shared with the other VMs.
Each VM gets its ssh port forwarded to a unique localhost-only port on the host, and other service ports can optionally be forwarded to localhost-only ports as well.
Vagrant has a plugin to update each VM's `/etc/hosts` file with the other VMs' IP addresses.
Once the OS is loaded on each VM, Vagrant runs `bootstrap.sh` in each VM to download and install the Puppet agent binaries.
Once the Puppet agent binaries are available on each VM, Vagrant runs `puppet apply` to configure anything else required by the application.
Files directly referenced:
- `bootstrap.sh`
- `puppet/manifests/site.pp`
- `hiera/hiera.yaml`
- `puppet/modules` folder
- `puppet/site` folder

Files indirectly referenced:
- `Puppetfile`
File contents:
```ruby
# Vagrantfile -- creates and provisions VMs for local development environment
# From https://github.com/devopsgroup-io/vagrant-hostmanager/issues/86#issuecomment-357191881
$logger = Log4r::Logger.new('vagrantfile')

def read_ip_address(machine)
  # Effectively, this finds the IP of the second NIC,
  # since the first NIC is on a NAT-only network and is useless for
  # VM-VM communication.
  command = "ip a | grep 'inet' | grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $2 }' | cut -f1 -d\"/\""
  result = ""
  $logger.info "Processing #{ machine.name } ... "
  begin
    # sudo is needed for ifconfig
    machine.communicate.sudo(command) do |type, data|
      result << data if type == :stdout
    end
    $logger.info "Processing #{ machine.name } ... success"
  rescue
    result = "# NOT-UP"
    $logger.info "Processing #{ machine.name } ... not running"
  end
  # the second inet is more accurate
  result.chomp.split("\n").select { |hash| hash != "" }[1]
end

Vagrant.configure("2") do |config|
  # Default OS for all VMs
  config.vm.box = "bento/centos-7.7"

  # hostmanager for managing /etc/hosts on Vagrant VMs.
  # Maps hostname of each VM to its second NIC IP.
  config.hostmanager.enabled = true
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.vm.network "private_network", type: "dhcp"
  if Vagrant.has_plugin?("HostManager")
    config.hostmanager.ip_resolver = proc do |vm, resolving_vm|
      read_ip_address(vm)
    end
  end

  # Web server pool, host ports 8084, 8085, ... forwarded to each VM's port 80
  (1..2).each do |web_idx|
    config.vm.define "web0#{web_idx}" do |web|
      web.vm.hostname = "web0#{web_idx}"
      web.vm.network "forwarded_port", guest: 80, host: 8084+(web_idx-1)
    end
  end

  # DB server
  config.vm.define "db" do |db|
    db.vm.hostname = "db01"
    db.vm.network "forwarded_port", guest: 3306, host: 33306
  end

  # Install Puppet Agent on all VMs
  config.vm.provision "shell", path: "./bootstrap.sh"

  # Provision with puppet apply
  config.librarian_puppet.puppetfile_dir = "puppet"
  config.librarian_puppet.use_v1_api = '1'
  config.librarian_puppet.destructive = false
  config.vm.provision "puppet" do |puppet|
    # Define a custom fact for the application under development
    puppet.facter = {
      "application" => "hello"
    }
    # Separate Puppet Forge (modules) from local roles, profiles, components (site)
    puppet.module_path = ["puppet/modules", "puppet/site"]
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file = 'site.pp'
    # puppet.environment = 'dev' # /etc/puppetlabs/code/environments/(environment)
    # Hiera data source
    puppet.hiera_config_path = "hiera/hiera.yaml"
    puppet.options = "--verbose"
  end
end
```
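The second-NIC selection in `read_ip_address` is easy to get wrong, so here is the same logic as a standalone Ruby sketch. The sample `ip a` output is invented for illustration; the real helper runs the grep/awk pipeline inside the guest over ssh.

```ruby
# Mimics the shell pipeline in read_ip_address: collect every IPv4 "inet"
# address except 127.0.0.1, then keep the second one (the host-only NIC),
# since the first NIC is NAT-only and useless for VM-VM communication.
def second_nic_ip(ip_a_output)
  addrs = ip_a_output.lines
                     .select { |l| l.include?('inet ') }
                     .map    { |l| l[/inet (\S+)/, 1].split('/').first }
                     .reject { |a| a == '127.0.0.1' }
  addrs[1]
end

sample = <<~OUT
  1: lo: inet 127.0.0.1/8 scope host lo
  2: eth0: inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
  3: eth1: inet 172.28.128.3/24 brd 172.28.128.255 scope global eth1
OUT

puts second_nic_ip(sample) # => 172.28.128.3
```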
bootstrap.sh
Detects if Puppet is already installed, and if not, downloads and installs it. References:
puppet6-release-*.rpm
puppet6-release-*.deb
File contents:
```sh
#!/bin/sh
command -v puppet > /dev/null && { echo "Puppet is installed! skipping" ; exit 0; }
ID=$(cat /etc/os-release | awk -F= '/^ID=/{print $2}' | tr -d '"')
VERS=$(cat /etc/os-release | awk -F= '/^VERSION_ID=/{print $2}' | tr -d '"')
case "${ID}" in
  centos|rhel)
    wget https://yum.puppet.com/puppet6-release-el-${VERS}.noarch.rpm
    rpm -Uvh puppet6-release-el-${VERS}.noarch.rpm
    yum install -y puppet-agent
    ;;
  fedora)
    rpm -Uvh https://yum.puppet.com/puppet6-release-fedora-${VERS}.noarch.rpm
    yum install -y puppet-agent
    ;;
  debian|ubuntu)
    wget https://apt.puppetlabs.com/puppet6-release-$(lsb_release -cs).deb
    dpkg -i puppet6-release-$(lsb_release -cs).deb
    apt-get -qq update
    apt-get install -y puppet-agent
    ;;
  *)
    echo "Distro '${ID}' not supported" >&2
    exit 1
    ;;
esac
```
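The `awk`/`tr` pipeline above just pulls the unquoted `ID` and `VERSION_ID` fields out of `/etc/os-release`. A rough Ruby equivalent of that extraction, run against an invented CentOS 7 sample:

```ruby
# Parse ID and VERSION_ID out of /etc/os-release text, stripping double
# quotes, the same way the awk -F= / tr -d '"' pipeline does.
def os_release_fields(text)
  fields = {}
  text.each_line do |line|
    key, value = line.strip.split('=', 2)
    next unless %w[ID VERSION_ID].include?(key)
    fields[key] = value.delete('"')
  end
  fields
end

sample = <<~EOS
  NAME="CentOS Linux"
  VERSION_ID="7"
  ID="centos"
EOS

p os_release_fields(sample) # {"VERSION_ID"=>"7", "ID"=>"centos"}
```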
Puppet-specific files
puppet/Puppetfile
Defines all the modules that need to be pulled from the Puppet Forge or other Git repositories:
```ruby
forge 'https://forgeapi.puppetlabs.com'

mod 'puppetlabs-apache', '5.4.0'
mod 'puppetlabs-mysql', '10.4.0'
mod 'puppet-php', '7.0.0'
mod 'remi',
  :git => 'https://github.com/mikerenfro/puppet-remi',
  :ref => '1c5d6e7'
```
Pinning to particular versions means that we get fewer surprises and more repeatability in a test or production environment.
In the long run, since there would be a separate `Puppetfile` in each branch of a Puppet control repository, new versions can be vetted through the development and test environments before the production environment is touched.
The `remi` module is a fork of a Puppet Forge module, adjusted to allow use of current Puppet dependencies.
puppet/manifests/site.pp
Contains one `node default` definition that includes all the classes defined in Hiera. File contents:
```puppet
# puppet/manifests/site.pp -- applies to all nodes,
# node-specific items are handled through hiera class lookup
node default {
  # https://rnelson0.com/2019/12/24/updating-puppet-classification-with-hiera-to-use-the-modern-lookup-command/
  # Find the first instance of `classes` in hiera data and include unique
  # values. Does not merge results.
  $classes = lookup('classes', Variant[String,Array[String]])
  case $classes {
    String[1]: {
      include $classes
    }
    Array[String[1],1]: {
      $classes.unique.include
    }
    default: {
      fail('This node did not receive any classification')
    }
  }
}
```
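The case statement accepts either a single class name or an array of names from Hiera. To make the branching concrete, here is the same logic sketched in Ruby; `classify` returns the list of class names Puppet would `include`. This is an illustration, not Puppet's implementation.

```ruby
# Mirror site.pp: a non-empty String yields one class, an Array of
# non-empty Strings yields its unique members, and anything else
# (including empty values) is the same hard failure the manifest raises.
def classify(classes)
  case classes
  when String
    raise 'This node did not receive any classification' if classes.empty?
    [classes]
  when Array
    bad = classes.empty? || classes.any? { |c| !c.is_a?(String) || c.empty? }
    raise 'This node did not receive any classification' if bad
    classes.uniq
  else
    raise 'This node did not receive any classification'
  end
end

p classify('role::hello_web')                     # ["role::hello_web"]
p classify(['role::hello_db', 'role::hello_db'])  # ["role::hello_db"]
```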
hiera/hiera.yaml
Contains the `hiera` variable lookup hierarchy:
- nodes/(nodename).yaml
- application/(application-name).yaml
- os/(os)-(major).(minor).yaml
- os/(os)-(major).yaml
- os/(os).yaml
- osfamily/(osfamily)-(major).(minor).yaml
- osfamily/(osfamily)-(major).yaml
- osfamily/(osfamily).yaml
- (environment).yaml
- global.yaml
File contents:
```yaml
---
# hiera/hiera.yaml -- defines hierarchy for variable lookup,
# first file to define a value wins
version: 5
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{::trusted.hostname}.yaml"
  - name: "Application-specific data"
    path: "application/%{::application}.yaml"
  - name: "OS minor release data"
    path: "os/%{::facts.os.name}-%{::facts.os.release.major}.%{::facts.os.release.minor}.yaml"
  - name: "OS major release data"
    path: "os/%{::facts.os.name}-%{::facts.os.release.major}.yaml"
  - name: "OS generic data"
    path: "os/%{::facts.os.name}.yaml"
  - name: "OS family minor release data"
    path: "osfamily/%{::facts.os.family}-%{::facts.os.release.major}.%{::facts.os.release.minor}.yaml"
  - name: "OS family major release data"
    path: "osfamily/%{::facts.os.family}-%{::facts.os.release.major}.yaml"
  - name: "OS family generic data"
    path: "osfamily/%{::facts.os.family}.yaml"
  # - name: "Team data"
  #   path: "team/%{::team}.yaml"
  - name: "Environment (devel, test, production) data"
    path: "%{::environment}.yaml"
  - name: "Global settings"
    path: "global.yaml"
defaults:
  datadir: '/vagrant/hiera'
  data_hash: yaml_data
```
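Because this hierarchy uses Hiera's default first-found merge behavior, the topmost file that defines a key wins outright. That resolution can be sketched as follows (the filenames and data are invented for illustration):

```ruby
# First-found lookup: walk the hierarchy top to bottom and return the
# value from the first level that defines the key; lower levels lose.
def hiera_lookup(key, hierarchy)
  hierarchy.each do |_level, data|
    return data[key] if data.key?(key)
  end
  raise "Function lookup() did not find a value for the name '#{key}'"
end

hierarchy = [
  ['nodes/db01.yaml',        { 'mysql::server::root_password' => 'strongpassword' }],
  ['application/hello.yaml', { 'hello::db_server' => 'db01' }],
  ['global.yaml',            { 'hello::db_server' => 'unset',
                               'ntp::servers' => ['pool.ntp.org'] }],
]

puts hiera_lookup('hello::db_server', hierarchy) # db01 -- hello.yaml beats global.yaml
```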
hiera/nodes/(nodename).yaml
Contains one entry for its `classes`. For a front end web server:
```yaml
---
classes:
  - "role::%{::application}_web"
```
and for a database server:
```yaml
---
classes:
  - "role::%{::application}_db"
# classes should contain one and only one role class
# If there's any truly node-specific settings, they're defined here.
mysql::server::root_password: 'strongpassword'
```
The `application` fact is currently defined as `hello` in `Vagrantfile`.
I still need to figure out how to push that fact out for test or production: if a missing `application` fact fails gracefully through the Hiera lookups, there's no problem.
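To see how the node YAML classification resolves, note that Hiera interpolates `%{::application}` before site.pp ever sees the value: with the `application` fact set to `hello`, `role::%{::application}_web` becomes `role::hello_web`. A sketch of that substitution (the `interpolate` helper and fact hash are illustrative, not Hiera's implementation):

```ruby
# Expand Hiera-style %{::fact} tokens against a facts hash, the way
# "role::%{::application}_web" resolves once the application fact is set.
def interpolate(str, facts)
  str.gsub(/%\{::([a-z_]+)\}/) { facts.fetch(Regexp.last_match(1)) }
end

facts = { 'application' => 'hello' }
puts interpolate('role::%{::application}_web', facts) # role::hello_web
puts interpolate('role::%{::application}_db',  facts) # role::hello_db
```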
Role/Profile Layout
There are several class layers here:
- Roles
- Profiles
- Component Modules (private or public)
Roles and Profiles
See What Goes in a Puppet Role or Profile?, Intro to Roles and Profiles with Puppet and Hiera, and The roles and profiles method for some definitions, but basically:
- A node in Puppet gets assigned one role class.
- The assigned role class should include as many profile classes as required, and can define the load order of profile classes.
- Profile classes can include component modules, and can define the load order of any included modules. Profile classes can also create resources (files, templates, user-defined types).
- Component module classes can create resources or anything else required.
Front end web server files
puppet/site/role/manifests/hello_web.pp
Contains all the dependencies to provide a web server running the front end of the `hello` LAMP stack:
```puppet
class role::hello_web {
  include '::profile::apache'
  include '::profile::php_fpm'
  include '::profile::hello_web'
  Class['::profile::apache'] -> Class['::profile::hello_web']
}
```
Here, we also want to ensure that the `profile::apache` class is complete before we try to set up anything application-specific. That way, we don't try to push out web content files until we have a web server installed.
puppet/site/profile/manifests/apache.pp
Contains logic to set up a generic Apache web server.
Uses a Hiera lookup of the `apache::vhost` hash to create all needed vhosts:
```puppet
class profile::apache {
  class { 'apache':
    default_vhost => false,
  }
  $myApacheVhosts = lookup('apache::vhost', {})
  create_resources('apache::vhost', $myApacheVhosts)
}
```
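`create_resources` iterates the looked-up hash, declaring one `apache::vhost` per key with that key's value as parameters. Conceptually (a Ruby model for illustration, not Puppet's implementation):

```ruby
# Conceptual model of create_resources('apache::vhost', $myApacheVhosts):
# every hash key becomes a resource title, every value becomes that
# resource's parameter hash.
def create_resources(type, resources)
  resources.map do |title, params|
    { type: type, title: title, params: params }
  end
end

vhosts = {
  'web01' => { 'servername' => 'web01', 'port' => 80, 'docroot' => '/var/www/html' },
  'web02' => { 'servername' => 'web02', 'port' => 80, 'docroot' => '/var/www/html' },
}
declared = create_resources('apache::vhost', vhosts)
p declared.map { |r| "#{r[:type]}[#{r[:title]}]" }
# ["apache::vhost[web01]", "apache::vhost[web02]"]
```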
In this case, the `apache::vhost` hash is defined as part of the `hiera/application/hello.yaml` file.
File contents (includes more than just the virtual host definition):
```yaml
---
# hiera/application/hello.yaml -- settings specific to all servers required
# to deliver the "hello, world" LAMP application.
hello::web_pool: "web__"
hello::db_server: "db01"
mysql::db:
  "%{facts.application}":
    user: "%{facts.application}"
    password: "world"
    host: "%{lookup('hello::web_pool')}"
    grant:
      - "ALL"
    sql: "/root/%{facts.application}.sql"
apache::vhost:
  "%{facts.networking.hostname}":
    servername: "%{facts.networking.hostname}"
    serveradmin: 'renfro@tntech.edu'
    port: 80
    docroot: '/var/www/html'
    custom_fragment: 'ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1'
remi::remi_php72_enabled: 1
php::extensions:
  mysqlnd: {}
```
In the vhost hash, we've avoided hard-coding any hostname-specific information so we can easily build up a pool of web front-end servers. Each front-end server gets its server admin defined consistently, and each gets FastCGI proxied consistently.
puppet/site/profile/manifests/php_fpm.pp
Contains logic to set up a generic PHP-FPM server. We want to ensure that Apache has the ability to proxy FCGI traffic. We also ensure that the Remi repositories are enabled before we install anything PHP-related.
```puppet
class profile::php_fpm {
  include '::apache::mod::proxy'
  include '::apache::mod::proxy_fcgi'
  include '::remi'
  include '::php'
  Class['::remi'] -> Class['::php']
}
```
From `hiera/application/hello.yaml`, we define which of Remi's PHP versions we want, plus the PHP extensions required for the application (in this case, just the MySQL native driver with default settings):
```yaml
remi::remi_php72_enabled: 1
php::extensions:
  mysqlnd: {}
```
puppet/site/profile/manifests/hello_web.pp
Contains logic to get the non-Apache, non-PHP-FPM parts of the LAMP application running.
```puppet
class profile::hello_web {
  include '::mysql::client'
  include '::hello_web'
}
```
We load the MySQL client for debugging purposes; it shouldn’t be required by the regular LAMP application.
We load the `hello_web` private component to define the rest of the application resources.
puppet/site/hello_web/manifests/init.pp
Contains logic to distribute the web content required for the `hello` LAMP application.
To reduce hard-coding `hello` everywhere, we use the `application` fact wherever we can.
We also define a few convenience variables for the `hw.php` template, which simplifies the template language.
```puppet
class hello_web {
  file { "/var/www/html/index.html":
    source => "puppet:///modules/${facts['application']}_web/index.html",
    mode   => '0640',
    owner  => 'root',
    group  => lookup("php::fpm_group")
  }
  $db_server   = lookup("${application}::db_server")
  $db_user     = lookup("mysql::db")["${application}"]['user']
  $db_password = lookup("mysql::db")["${application}"]['password']
  $db          = "${application}"
  file { "/var/www/html/hw.php":
    content => template("${application}_web/hw.php.erb"),
    mode    => '0640',
    owner   => 'root',
    group   => lookup("php::fpm_group")
  }
}
```
We're also looking up what our PHP user and group are, to ensure files are readable by the web server but secured from other users.
By default, this should only differ by OS family (i.e., all releases of Red Hat and CentOS should be identical, as should all releases of Debian and Ubuntu), so we store this in an OS family Hiera file, in our case `hiera/osfamily/RedHat.yaml`.
File contents:
```yaml
---
remi::remi_safe_enabled: 1
php::ensure: latest
php::fpm: true
php::fpm_user: 'apache'
php::fpm_group: 'apache'
```
puppet/site/hello_web/files/index.html
This file is copied down without modifications. Just a basic HTML page.
puppet/site/hello_web/templates/hw.php.erb
Nothing Puppet-specific in this file except for places where we do variable interpolation for the DB host, DB credentials, and a hostname to make sure we can tell which front end host we’re using. Excerpts:
```php
$link = mysqli_connect("<%= @db_server -%>", "<%= @db_user -%>",
                       "<%= @db_password -%>", "<%= @db -%>");
```
and:
```php
echo "Web server name: ", "<%= @hostname -%>";
```
Alternatively, we might use Puppet to write a .ini or similar file elsewhere containing the credentials, rather than inserting them into the PHP file, but I didn’t want to make the LAMP application side of things any more complicated than necessary. Complete file contents:
```php
<html>
<body>
<?php
/* These parameters could be read from a config file,
   but it's the same principle for Puppet templates, regardless */
$link = mysqli_connect("<%= @db_server -%>", "<%= @db_user -%>",
                       "<%= @db_password -%>", "<%= @db -%>");

/* check connection */
if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}

/* Select queries return a resultset */
if ($result = mysqli_query($link, "SELECT c FROM test")) {
    printf("Select returned %d rows.\n", mysqli_num_rows($result));
    $array = mysqli_fetch_all($result, MYSQLI_ASSOC);
    echo "Web server name: ", "<%= @hostname -%>";
    echo "<pre>", var_dump($array), "</pre>";
    /* free result set */
    mysqli_free_result($result);
}
mysqli_close($link);
?>
</body>
</html>
```
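The substitution Puppet performs on this template can be reproduced with Ruby's standard-library ERB; the instance variables below stand in for the ones the `hello_web` class sets (the values are invented):

```ruby
require 'erb'

# Render a Puppet-style ERB fragment: instance variables set in the
# binding become the <%= @var -%> substitutions, as in hw.php.erb.
class TemplateScope
  def initialize(vars)
    vars.each { |k, v| instance_variable_set("@#{k}", v) }
  end

  def render(template)
    ERB.new(template, trim_mode: '-').result(binding)
  end
end

scope = TemplateScope.new('db_server' => 'db01', 'db_user' => 'hello')
puts scope.render('mysqli_connect("<%= @db_server -%>", "<%= @db_user -%>");')
# mysqli_connect("db01", "hello");
```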
Back end database server files
puppet/site/role/manifests/hello_db.pp
Contains all the dependencies to provide a database server running the back end of the `hello` LAMP stack:
```puppet
class role::hello_db {
  include "profile::${facts['application']}_db"
}
```
This is a much shorter set of classes than before; the one ordering constraint lives farther in, where the schema file must exist before we create any databases.
puppet/site/profile/manifests/hello_db.pp
```puppet
class profile::hello_db {
  include "::${facts['application']}_db"
}
```
puppet/site/hello_db/manifests/init.pp
This private component class does all the work. There are several references to Hiera variables to allow easier copying/pasting for other LAMP apps. Here we make sure the database schema file is available before we create any databases.
```puppet
class hello_db {
  $my_app = $facts['application']
  file { "/root/${my_app}.sql":
    source => "puppet:///modules/${my_app}_db/${my_app}.sql",
    mode   => '0600',
    owner  => 'root',
    group  => 'root',
    before => Class['::mysql::server']
  }
  $myMySqlDbs = lookup('mysql::db', {})
  create_resources('mysql::db', $myMySqlDbs)
  include '::mysql::server'
}
```
Excerpts from `hiera/application/hello.yaml` relevant to the database server:
```yaml
hello::web_pool: "web__"
hello::db_server: "db01"
mysql::db:
  "%{facts.application}":
    user: "%{facts.application}"
    password: "world"
    host: "%{lookup('hello::web_pool')}"
    grant:
      - "ALL"
    sql: "/root/%{facts.application}.sql"
```

(In MySQL account host names, `_` is a single-character wildcard, so the `web__` pool value grants access from both web01 and web02.)
puppet/site/hello_db/files/hello.sql
```sql
CREATE table test (c CHAR(255));
INSERT INTO test (c) VALUES ("hello world");
```