<h3>
<b>Testing Ansible with Inspec</b></h3>
<h3>
<b>Background Information & References</b></h3>
<br />
<a href="http://www.ansible.com/">Ansible</a> testing is something I think we're going to start seeing more of in the future. Tonight I took to testing it with inspec. <a href="https://github.com/chef/inspec">Inspec</a> is an infrastructure testing tool from <a href="https://www.chef.io/">Chef</a>. Inspec is created in the style of <a href="https://github.com/mizzy/serverspec">Serverspec</a> and even throws a shoutout to that software and to mizzy specifically in the readme. There are a bunch of other tools in this space: <a href="https://github.com/philpep/testinfra">testinfra</a> is written in python, <a href="https://github.com/puppetlabs/beaker-rspec">beaker</a> to some extent fills this role, and <a href="https://github.com/aelsabbahy/goss">goss</a> is a go version, nicely covered by <a href="http://www.unixdaemon.net/tools/testing-with-goss/">Dean Wilson</a>.<br />
<br />
At a high level, I feel strongly that these tools can be used to make systems operations code better. However, I think it is easy to fall into writing your code twice, once in Puppet and once in serverspec. Our limited use of Beaker at work started strong and has atrophied, mostly from frustration with this duplication and a shared sense that it wasn't providing value.<br />
<br />
I also worry that we will end up coding to the bugs in implementations. Puppet's core types have remained untouched for years, and their behavior at this time is essentially an API contract. Each of these tools has specific implementations that have bugs and decisions baked in, and we'll be stuck with them until someone boldly breaks compatibility or we simply move to the next tool.<br />
<br />
I was drawn to Inspec by <a href="https://twitter.com/stuartpreston">Stuart Preston's</a> <a href="http://www.slideshare.net/mpgoetz/inspec-or-how-to-translate-compliance-spreadsheets-into-code">presentation</a> at <a href="http://cfgmgmtcamp.eu/">Config Mgmt Camp 2016</a>. What drew me in further was the use of inspec on a remote host without software installed on it. This 'agentless' mode neatly pairs with the Ansible methods, so using them together seemed reasonable. <a href="https://rawgit.com/arlimus/reveal-inspec-cfgmgmtcamp2016/master/index.html#/">This presentation</a> on inspec from <a href="https://twitter.com/arlimus">Dominik Richter</a> is also exceptional.<br />
<br />
<h3>
<b>Procedure & Results</b></h3>
<br />
We can start with a simple ansible inventory file:<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">[openstack]<br />198.61.207.40 ansible_user=root</span><br />
<br />
From this we can list the hosts and run a ping.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">$: ansible --version<br /><br />ansible 2.0.0.2<br /> config file = /etc/ansible/ansible.cfg<br /> configured module search path = /usr/share/ansible<br />$: ansible -i simple_inventory all --list-hosts<br /><br /> hosts (1):<br /> 198.61.207.40<br />$: ansible -i simple_inventory all -m ping<br /><br />198.61.207.40 | SUCCESS => {<br /> "changed": false, <br /> "ping": "pong"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />}</span><br />
<br />
At this point we can set up a simple ansible playbook:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$: cat simple_ansible_playbook.yml <br />- hosts: all<br /> user: root<br /> tasks:<br /> - debug: msg="debug {{inventory_hostname}}"<br /> - apt: name=apache2 state=present</span><br />
<br />
This playbook does very little: it prints a debug message and installs the apache2 package.<br />
<br />
Then we can write the inspec tests for it. Inspec is written in a DSL; the best introductory docs seem to be <a href="https://github.com/chef/inspec/blob/master/docs/dsl_inspec.rst">here</a>. Inspec tests (as far as I can tell) must be contained in a 'control' block. See the example below:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">control "spencer" do<br /> impact 0.7<br /> title "Test some simple resources"<br /> describe package('apache2') do<br /> it { should be_installed }<br /> end<br /><br /> describe port(80) do<br /> it { should be_listening }<br /> its('processes') {should eq ['apache2']}<br /> end<br /><br />end</span><br />
<br />
This reads like any block of Puppet, Ansible, or other systems tooling that has been around, with a sprinkling of rspec or rspec-puppet. There is a long list of available <a href="https://github.com/chef/inspec/blob/master/docs/resources.rst">resources</a> in the inspec documentation.<br />
<br />
Inspec installs easily with gem (<span style="font-family: "courier new" , "courier" , monospace;">gem install inspec</span>).<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">$: inspec version<br />0.10.1</span><br />
<br />
Inspec can detect the remote machine and give you its operating system version. I don't really see the direct value in this, but it is a nice ping/pong-style subcommand to test the connection.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$: inspec detect -i ~/.ssh/insecure_key -t ssh://root@198.61.207.40<br />{"name":null,"family":"ubuntu","release":"14.04","arch":null}</span><br />
<br />
Inspec right now does not have the ability to ask my ssh-agent for permission to use my key, so I have a less-secure (though by no means totally insecure) key that I use in instances like this. The -t connection information flag has a fairly straightforward syntax. Like any agentless tool, inspec supports a number of flags allowing it to connect as an unprivileged user and to use sudo to achieve root permissions.<br />
<br />
We can now run our test (before our ansible playbook has run, to validate that we are getting failures).<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$: inspec exec -i ~/.ssh/insecure_key -t ssh://root@198.61.207.40 inspec.rb <br />FFF<br /><br />Failures:<br /><br /> 1) System Package apache2 should be installed<br /> Failure/Error: DEFAULT_FAILURE_NOTIFIER = lambda { |failure, _opts| raise failure }<br /> expected that `System Package apache2` is installed<br /> # inspec.rb:6:in `block (3 levels) in load'<br /> # /tmp/file1otIsd/gems/inspec-0.10.1/lib/inspec/runner_rspec.rb:55:in `run'<br /> # /tmp/file1otIsd/gems/inspec-0.10.1/lib/utils/base_cli.rb:52:in `run_tests'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'<br /><br /> 2) Port 80 should be listening<br /> Failure/Error: DEFAULT_FAILURE_NOTIFIER = lambda { |failure, _opts| raise failure }<br /> expected `Port 80.listening?` to return true, got false<br /> # inspec.rb:10:in `block (3 levels) in load'<br /> # /tmp/file1otIsd/gems/inspec-0.10.1/lib/inspec/runner_rspec.rb:55:in `run'<br /> # /tmp/file1otIsd/gems/inspec-0.10.1/lib/utils/base_cli.rb:52:in `run_tests'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'<br /><br /> 3) Port 80 processes should eq ["apache2"]<br /> Failure/Error: DEFAULT_FAILURE_NOTIFIER = lambda { |failure, _opts| raise failure }<br /><br /> expected: ["apache2"]<br /> got: nil<br /><br /> (compared using ==)<br /> # inspec.rb:11:in `block (3 levels) in load'<br /> # /tmp/file1otIsd/gems/inspec-0.10.1/lib/inspec/runner_rspec.rb:55:in `run'<br /> # 
/tmp/file1otIsd/gems/inspec-0.10.1/lib/utils/base_cli.rb:52:in `run_tests'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'<br /> # /tmp/file1otIsd/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'<br /><br />Finished in 0.30626 seconds (files took 3.26 seconds to load)<br />3 examples, 3 failures<br /><br />Failed examples:<br /><br />rspec # System Package apache2 should be installed<br />rspec # Port 80 should be listening<br />rspec # Port 80 processes should eq ["apache2"]</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">$: echo $?<br />1</span><br />
<br />
<br />
Seeing partial tracebacks is not great, but the output is easy enough to understand. We had three things we were asserting, and none of them were true.<br />
<br />
Let's now run our ansible playbook:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$: ansible-playbook -i simple_inventory simple_ansible_playbook.yml <br /><br /><br />PLAY</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">*******************************************************************<br /><br />TASK [setup] *******************************************************************<br />ok: [198.61.207.40]<br /><br />TASK [debug] *******************************************************************<br />ok: [198.61.207.40] => {<br /> "msg": "debug 198.61.207.40"<br />}<br /><br />TASK [apt] *********************************************************************<br />changed: [198.61.207.40]<br /><br />PLAY RECAP *********************************************************************<br />198.61.207.40 : ok=3 changed=1 unreachable=0 failed=0 </span><br />
<br />
The colors are lost here, but the 'ok' output is green and the 'changed' output is yellow.<br />
<br />
We can reasonably expect that the apache package was installed and that the service is running. I am usually tempted at this point to run the playbook again to verify idempotence, but for such a simple playbook that isn't necessary.<br />
<br />
Now we can re-run our inspec tests:<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">$: inspec exec -i ~/.ssh/insecure_key -t ssh://root@198.61.207.40 inspec.rb <br />...<br /><br />Finished in 0.46273 seconds (files took 3.37 seconds to load)<br />3 examples, 0 failures</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">$: echo $?<br />0</span><br />
<br />
A much prettier output, to be sure.<br />
<br />
<b>Conclusions</b><br />
<br />
With this we have two tools, one to make changes and the other to verify state. Unfortunately there is a lot of code duplication. We could probably write a simple tool to parse ansible and dump inspec code, but I feel like that would ultimately only be a tool that would reveal implementation differences between the two tools.<br />
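As a thought experiment, such a generator could start as small as this Python sketch. The control name, the key=value parsing, and the choice to handle only 'apt' tasks are my own assumptions; it maps the playbook above onto the inspec DSL shown earlier:

```python
def tasks_to_inspec(tasks, control_name="generated"):
    """Emit an inspec control block from parsed Ansible task dicts.

    Only 'apt' tasks in key=value form are translated; everything
    else (debug, etc.) is skipped.
    """
    checks = []
    for task in tasks:
        if "apt" not in task:
            continue
        # 'name=apache2 state=present' -> {'name': 'apache2', 'state': 'present'}
        args = dict(pair.split("=", 1) for pair in task["apt"].split())
        if args.get("state") == "present":
            checks.append(
                "  describe package('%s') do\n"
                "    it { should be_installed }\n"
                "  end" % args["name"]
            )
    return 'control "%s" do\n%s\nend' % (control_name, "\n".join(checks))


example = [
    {"debug": 'msg="debug {{inventory_hostname}}"'},
    {"apt": "name=apache2 state=present"},
]
print(tasks_to_inspec(example))
```

Even this toy version hints at the problem: the generated test only restates what the apt module claims to do, so any divergence between the two tools' implementations stays invisible.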
<br />
One issue I ran into in my brief use of this was around this snippet:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"> its('processes') {should eq ['apache2']}</span><br />
<br />
It initially read:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"> its('processes') {should include 'apache'}</span><br />
<br />
I could not get this check to pass. Eventually I dug into the code and discovered that its('processes') returns an array of strings, each string containing a process name. In unix, only one process can hold a port like that, so this is a bit of an odd choice for a return type. I had expected a single string to be returned and for 'include' to do a substring match on 'apache'. Not the least important result of this brief debugging was that inspec resources are very small and readable.<br />
<br />
Using these tools together means having both ruby and python set up. This isn't a problem for me, because both of those are already in my development and testing environments. For others, using a single runtime would be more valuable than picking up a new tool. There are a lot of tools in this space, and all seem passable.<br />
<br />
This kind of testing still requires a unix node of
some type. Both ansible and inspec can test using docker, but I have
come to prefer testing my infrastructure code on virtual machines. Part
of this is that I have almost always had a ready supply of virtual
machines to use for this testing.<br />
<br />
Inspec is flexible.
It has docker, virtual machine, and local functionality. It even has
windows support. Inspec has a long list of native resources, enabling
you to test high level features like 'postgres_config.' Because an
inspec resource is only a status check, it is infinitely easier to
write, debug and reason about.<br />
<br />
The tone at ConfigMgmtCamp 2016 was that inspec is the future and will be replacing serverspec in most uses.<br />
<br />
<br />
<b>Further Work </b><br />
<br />
I am interested to see what can be done with running inspec in CI pipelines. The simple three-command pattern of this post could easily be encoded into a CI pipeline, if the CI system has sufficient resources. Inspec has a local application mode that could be used with puppet apply. I am also curious to see if and when beaker integration will be attempted, as most of my infrastructure testing uses beaker at the moment.<br />
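The three-command pattern itself is easy to script. Below is a Python sketch; the actual ansible and inspec command lines (shown commented out) are assumptions matching the examples in this post, and the helper names are invented:

```python
import subprocess

def run(cmd, expect_failure=False):
    """Run a command and verify its exit status matches expectations."""
    rc = subprocess.run(cmd).returncode
    if (rc != 0) != expect_failure:
        raise RuntimeError("%r exited %d" % (cmd, rc))

def converge_and_verify(ansible_cmd, inspec_cmd):
    run(inspec_cmd, expect_failure=True)   # 1. tests should fail on a fresh host
    run(ansible_cmd)                       # 2. ansible converges the host
    run(inspec_cmd)                        # 3. tests should now pass

# Hypothetical invocation using the commands from this post:
# converge_and_verify(
#     ["ansible-playbook", "-i", "simple_inventory", "simple_ansible_playbook.yml"],
#     ["inspec", "exec", "-i", "/home/me/.ssh/insecure_key",
#      "-t", "ssh://root@198.61.207.40", "inspec.rb"],
# )
```

Step 1 is the useful twist: demanding a failing inspec run before converging guards against tests that vacuously pass.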
<br />
<span style="font-size: x-small;">Thanks to Bastelfreak and Bkero for editing this post prior to publication. </span><br />
<h3>
<b>FOSSETCON 2015</b></h3>
This week I attended <a href="http://www.fossetcon.org/">FOSSETCON</a> in Orlando, Florida. I had the opportunity to meet a number of free/open source software leaders, and they took the opportunity to make me feel very included. Overall I had a great time.<br />
<br />
I was able to present twice at this conference, due to a comedy of errors around scheduling. I enjoyed giving both talks immensely and I will be talking again on these subjects I am sure.<br />
<br />
On Friday I spoke on <a href="http://www.tinc-vpn.org/">Tinc</a> and <a href="https://www.consul.io/">Consul,</a> a private mesh networking tool which we then overlaid with service discovery. Using these together is a pet project some personal friends and I have been working on for some time. I was able to focus mostly on the tinc components of our infrastructure. After the talk, I was mobbed by people wanting to use tinc to flatten a network somewhere in their infrastructure. I admit I had not even considered that application! Amusingly my talk made the "news" section of the tinc website. I want to especially thank <a href="http://bke.ro/">Ben Kero </a>for stepping up to give this talk and for writing the first draft of the slides. I did a live demo of tinc providing security for NFS, then played fullscreen video over NFS over the internet on conference wifi! I had an <a href="https://twitter.com/chrisjrn/status/667738256692285440">Awesome</a> <a href="https://twitter.com/vmbrasseur/status/667737082710908928">Audience!</a> <br />
<br />
My slides can be viewed at: <a href="https://speakerdeck.com/nibalizer/secure-peer-networking-with-tinc">https://speakerdeck.com/nibalizer/secure-peer-networking-with-tinc</a><br />
And the source to generate them can be found at: <a href="https://github.com/nibalizer/tinc-presentation">https://github.com/nibalizer/tinc-presentation</a><br />
And if you don't like speakerdeck the raw pdf is here: <a href="http://spencerkrum.com/talks/tinc_presentation.pdf">http://spencerkrum.com/talks/tinc_presentation.pdf</a><br />
<br />
<br />
On Saturday I spoke on OpenStack. This was a talk I inherited from <a href="https://twitter.com/e_monty">Monty Taylor</a>, who couldn't be there due to a scheduling conflict. I spoke on how OpenStack is a functioning platform and that the success of the Infra project is evidence of that. I then talked about the rougher spots in OpenStack right now, particularly in abstractions that leak deployment details. I then introduced the <a href="http://docs.openstack.org/developer/os-client-config/">OpenStack Client-Config</a> and <a href="http://docs.openstack.org/infra/shade/">Shade</a> efforts as a way to ameliorate that.<br />
<br />
The source to generate my slides can be found at: <a href="https://github.com/nibalizer/inaugust.com/blob/master/src/talks/now-what-nibz.hbs">https://github.com/nibalizer/inaugust.com/blob/master/src/talks/now-what-nibz.hbs</a><br />
The canonical version that Monty gave and I edited slightly is viewable at: <a href="http://inaugust.com/talks/now-what.html#/">http://inaugust.com/talks/now-what.html#/</a><br />
A video of Monty giving the talk about six months ago: <a href="https://www.youtube.com/watch?v=G971S1UT0Kw&spfreload=10">https://www.youtube.com/watch?v=G971S1UT0Kw&spfreload=10</a><br />
<br />
<br />
Of the talks I saw at FOSSETCON two stand out to me. The<a href="http://www.fossetcon.org/2015/sessions/vagrant-docker-kubernetes-and-oh-my-vagrant"> first</a> was the introduction and demo of <a href="https://github.com/purpleidea/oh-my-vagrant">Oh My Vagrant</a> by <a href="https://ttboj.wordpress.com/">James (just James)</a>. In this talk, James took us through Vagrant (sneakily running through libvirt instead of virtualbox) into docker and then all the way to kubernetes. James did lose some people along this lightning ride but for those of us that kept up it was quite enjoyable and informative.<br />
<br />
The second talk I enjoyed was <a href="https://twitter.com/marinaz">Marina Zhurakhinskaya</a>'s talk on <a href="http://www.fossetcon.org/2015/sessions/effective-outreach-four-steps">diversity</a> at the closing keynote. She had some concrete advice, and I took a couple of key items away from her talk that I will be applying to the communities I have influence in. The most surprising tip to me (but not really once you think about it) was the need for there to be a room for new mothers at conferences. If we require (by law) that companies provide this resource, it makes sense to make an effort to provide it at a conference with hundreds of attendees. The slides from Marina's talk can be found <a href="https://wiki.gnome.org/Outreachy/SpreadTheWord#Presentation_Materials">here.</a><br />
<br />
Overall FOSSETCON was a great conf. I met so many new people, and I connected with people like Deb Nicholson whom I had met before but never gotten to know well. I would definitely compare it to <a href="http://seagl.org/">SeaGL</a> on the west coast. It has the same low-budget, high-community, minimal-corporate feel that makes it ok to talk about free software without a direct application to business needs. At the conf I got turned on to <a href="http://www.southeastlinuxfest.org/">SELF</a>, which I plan to apply to soon.<br />
<br />
I strongly recommend you attend FOSSETCON 2016 if you are in the central Florida area next November.<br />
<br />
<br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com10tag:blogger.com,1999:blog-1369167856034452100.post-67594584361890524812015-08-21T18:18:00.000-07:002015-08-21T18:18:47.189-07:00Upgrading to Puppetlabs-Apt 2.0The Puppetlabs Apt module went through a major change earlier this year. It crossed a semver boundary and released as 2.0. This is one of the only cases we've had as a community where a core module has moved over a major version. The initial reaction to Apt 2.0 was everyone quickly pinning their modules to use < 2.0. Morgan, Daenney and the puppetlabs modules team quickly pushed out a 2.1.0 release which is backwards compatible with some core functionality inside the Apt module. It is important to note that not everything is backwards compatible, only a few things.<br />
<br />
At OpenStack-Infra, we wanted to use the latest version of <a href="https://github.com/bfraser/puppet-grafana">bfraser's grafana module</a> but it requires apt >= 2.0. <a href="https://twitter.com/pabelanger">Paul</a> spun up a <a href="https://review.openstack.org/#/c/209195/">change</a> to our main repository and then several more changes to move to the new syntax. <a href="https://review.openstack.org/#/c/215776/">Here</a> is an example.<br />
<br />
Why does this work? Because apt::key was added back in 2.1.0 to be compatible with older apt versions. See the warning that it will generate <a href="https://github.com/puppetlabs/puppetlabs-apt/blob/master/manifests/key.pp#L17">here.</a> Because of this, you can upgrade apt in place safely, provided you are not using the gnarlier parts of the old Apt module. Notably, the unattended-upgrades subsection has been moved out into its own module.<br />
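For module maintainers, depending on the newer Apt is a small change to the module's metadata.json; a sketch (the module name here is invented, and the exact version bounds are a judgment call):

```json
{
  "name": "example-mymodule",
  "dependencies": [
    {
      "name": "puppetlabs/apt",
      "version_requirement": ">= 2.1.0 < 3.0.0"
    }
  ]
}
```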
<br />
I encourage those of you running an infrastructure to follow our lead and upgrade your Apt module. I encourage those of you maintaining and releasing modules to bump your minimum version of Apt to >= 2.1. I believe there is a requirement for some velocity in this. If we wait too long, too many new users of Puppet will be caught across a schism of the apt module. That is, unless everyone just runs RedHat anyways.<br />
<h3>
<b>Just What Is OpenStack Infra?</b></h3>
I work for HP doing two things. By day I work inside the HP firewall setting up and running a CI system for testing HP's OpenStack technology. We call this system Gozer. (By the way, we are <a href="mailto:krum.spencer+jobmebro@gmail.com">hiring</a>.) By night I work upstream (in the Open Source world) with the OpenStack Infrastructure Team setting up and running a CI system for OpenStack developers.<br />
<br />
This blog post concerns my work upstream.<br />
<br />
One of my chief initiatives since joining the team two years ago is to make the Puppet codebase used by infra more in-line with standards, more reusable, and generally better. I have never attempted to use infra as a testbed for experimental uses of Puppet, I've always tried to apply the best practices known in the community. Of course there are exceptions to this (see all the Ansible stuff). This initiative is codified in a few different specifications accepted by the team (you don't need to read these):<br />
<ul>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/server_base_template_refactor.html">Puppet codebase Refactor</a></li>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet-modules.html">Git Repo per Module</a></li>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/config-repo-split.html">Move core configuration files to their own repo</a></li>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/public_hiera.html">Create a public hiera directory in the main repo</a></li>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet_4_prelim_testing.html">Puppet 4 GO GO GO</a></li>
<li><a href="http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet-module-functional-testing.html">Functional Testing of Puppet (beaker-rspec)</a></li>
</ul>
<br />
One mark of the success of this ongoing initiative is that I am now in a place where I am recommending parts of our code to other people in my community. Those are the people for whom I intend this blog post. Someone sees a neat part of the Puppet OpenStack 'stuff' and wants to use it, but it needs a patch or a use case covered. This blog post is supposed to provide a high level overview of what we do, who 'we' are, and the bigger pieces and how they interact with each other. We'll start with a long series of names and definitions.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfzT5BQO4VNLvOty_c5DNRGiFIhUrax5SWr9KSuleG4p3P0FQ3jPIEFRjTOOt6lSW3KqnB8F66Kkp1OabTsbrFEq9IPCXOf2MHjq5ZhVQE96R0qUCyZ2A8_UxO-kHWLlWbcZBlmSPNijGM/s1600/You-Keep-Using-that-Word.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="276" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfzT5BQO4VNLvOty_c5DNRGiFIhUrax5SWr9KSuleG4p3P0FQ3jPIEFRjTOOt6lSW3KqnB8F66Kkp1OabTsbrFEq9IPCXOf2MHjq5ZhVQE96R0qUCyZ2A8_UxO-kHWLlWbcZBlmSPNijGM/s320/You-Keep-Using-that-Word.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Naming things is hard</td></tr>
</tbody></table>
<br />
So what is OpenStack? OpenStack is an Open Source software collection based around providing cloud software. The OpenStack Foundation is a nonprofit organization that provides centralized resources to support the effort; this support comes in both technical (sysadmins) and other forms (legal, conference organizing, etc.). OpenStack is made up of many components; the simplest example is that 'nova' provides a compute layer to the cloud, i.e. kvm or xen management.<br />
<br />
OpenStack can be installed with Puppet. The Puppet code that does this is called "OpenStack Puppet Modules." These modules install OpenStack services such as nova, glance, and cinder. Their source code is available by searching for <a href="https://git.openstack.org/cgit/?q=openstack%2Fpuppet">openstack/puppet-*</a>. The team that develops this code is called the OpenStack Puppet Module Team. This team uploads to the forge under the namespaces 'openstack' or 'stackforge.'<br />
<br />
I do not work with these modules on a daily basis.<br />
<br />
I work with the OpenStack Infrastructure Team. This team deploys and maintains the CI system used by OpenStack upstream developers. We have our own set of Puppet modules that are completely unrelated to the OpenStack Puppet Modules. Their source code can be found by searching for <a href="https://git.openstack.org/cgit/?q=openstack-infra%2Fpuppet">openstack-infra/puppet-*</a>. These modules are uploaded under the forge namespaces 'openstackci' and 'openstackinfra.' We use these modules to deploy services like Gerrit, Jenkins, and Drupal. We also have a number of utility modules useful for generic Linux administration. We have Precise, Trusty, Centos 6, and various Fedora flavors in our infrastructure, so our modules often have good cross-platform support.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEBzJ0k-zYE-ZRvYNNfcFq7tdYp1HqhotEOJbSYCwxN2a4mw55lLCEZIBd54LIWOceC_W0g8NIYYddwmwL_5VSKj_NsmmtdV56lb1Vtuqhf8zoZviCNe6ndprUJPVpn8DROK9qHL3F3inA/s1600/nexus.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEBzJ0k-zYE-ZRvYNNfcFq7tdYp1HqhotEOJbSYCwxN2a4mw55lLCEZIBd54LIWOceC_W0g8NIYYddwmwL_5VSKj_NsmmtdV56lb1Vtuqhf8zoZviCNe6ndprUJPVpn8DROK9qHL3F3inA/s1600/nexus.gif" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Central Nexus</td></tr>
</tbody></table>
<br />
<br />
All the openstack-infra/puppet-* modules are consumed from master by our 'central nexus' repository: <a href="https://git.openstack.org/cgit/openstack-infra/system-config/">system-config</a>. System-config uses a second repository for flat files: <a href="https://git.openstack.org/cgit/openstack-infra/project-config/">project-config</a>. System-config contains node definitions, public hiera data (soon), a few utility scripts, a modules.env file, and a single module, called 'openstack_project', in which to stick 'roles'. The more 'core' roles in openstack_project call out to another repo: <a href="https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/">puppet-openstackci</a>. The secrets are stored in a hiera directory that is not public.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQrbPTRRaIYNUWi7UN3prOlGHNAuU4Zkkbx4oV41mLlgPbtxtRaISVWVvH74ZAcSNN3fR7Znn-LH1YGk6sWbFqA_vCPp0jx0LpBPDSjgN7ns-mqxc1XQ1h4nNMg08M1f-dFsgF1pfr6H5A/s1600/puppet_infra_cropped.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="281" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQrbPTRRaIYNUWi7UN3prOlGHNAuU4Zkkbx4oV41mLlgPbtxtRaISVWVvH74ZAcSNN3fR7Znn-LH1YGk6sWbFqA_vCPp0jx0LpBPDSjgN7ns-mqxc1XQ1h4nNMg08M1f-dFsgF1pfr6H5A/s400/puppet_infra_cropped.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Crude Drawing</td></tr>
</tbody></table>
<br />
<br />
The crude drawing above shows a typical flow. A node definition lives in site.pp, which includes a role class from openstack_project, which includes a role class from the openstackci module, which then uses resources and classes from the other modules, in this case puppet-iptables.<br />
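That flow could be sketched in Puppet roughly as follows; the node, class, and parameter names here are hypothetical stand-ins, though the repository layering matches the drawing:

```puppet
# system-config site.pp: a node definition picks a role class
node 'service01.example.org' {
  include openstack_project::my_service
}

# system-config openstack_project module: thin role wrapper
class openstack_project::my_service {
  include openstackci::my_service
}

# puppet-openstackci: the 'core' role, consuming service modules
class openstackci::my_service {
  class { 'iptables':
    # parameter invented for illustration
    public_tcp_ports => [80, 443],
  }
}
```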
<br />
There are other code paths too. Sometimes, often in fact, an openstack_project role will include openstack_project::server or openstack_project::template, these classes wrap up most of the 'basics' of linux administration. Template or server will go on to include more resources.<br />
<br />
There are multiple places to integrate here. At the most basic, a Puppet user could include our puppet-iptables module in their modulepath and start using it. An individual who wants a jenkins server or another server like ours could use openstackci and its dependencies and write their own openstack_project wrapper classes to include openstackci classes.<br />
<br />
We do not encourage site.pp or openstack_project classes to be extended at this time; instead we encourage features or compatibility extensions to be put into openstackci or the service-specific modules themselves. This is a work in progress, and some important logic still lives in openstack_project that should be moved out. A stretch goal is to move to a place where all of openstack infra runs out of openstackci, providing only a hiera yaml file to set parameters.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://media.giphy.com/media/l41lXRD8SbgUsNKPS/giphy.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="168" src="https://media.giphy.com/media/l41lXRD8SbgUsNKPS/giphy.gif" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Continuous Deployment</td></tr>
</tbody></table>
<br />
A note about modules.env: OpenStack-infra has a modules.env file instead of a Puppetfile. This file contains the location, name, and ref of git repositories to put inside the modulepath on the Puppetmaster. OpenStack infra deploys all of its own Puppet modules from master, so any change to any module can break the whole system. We counteract this danger by having lots of testing and code review before any change goes through.<br />
<br />
A note about project-config: One of the patterns we use in OpenStack Infra is to push our configuration into flat files as much as possible. We have one repository, project-config, which holds files that control the behaviour of our services; Puppet's job is only to copy files out of the repo and into the correct location. This makes it easier for people to find these often-changed files, and means we can provide more people access to merge code there than we would with our system-config repository.<br />
<br />
A note about puppet agent: We run puppet-agent, but it is fired from the Puppetmaster by an ansible run. We hope to move to puppet apply triggered by ansible soon.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMhzKdGVBS90zGEnQZ7cES1tKPxueL0-mOb31DUikGTLUu9ycOG3RYoenV2smDBgT945dnQvPRM7lIGVkSCFejWpvOawAxDDkn_sVSr5OuVmla7cvAL0ld0-uGRopXTN1IxrB6UQ9G6h3M/s1600/Amazon-box-500x344.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMhzKdGVBS90zGEnQZ7cES1tKPxueL0-mOb31DUikGTLUu9ycOG3RYoenV2smDBgT945dnQvPRM7lIGVkSCFejWpvOawAxDDkn_sVSr5OuVmla7cvAL0ld0-uGRopXTN1IxrB6UQ9G6h3M/s320/Amazon-box-500x344.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The part where I give you things</td></tr>
</tbody></table>
<br />
There are two modules right now that you might be interested in using yourself. The first is our <a href="https://git.openstack.org/cgit/openstack-infra/puppet-httpd">puppet-httpd</a> module. This module was forked from <a href="https://github.com/puppetlabs/puppetlabs-apache/">puppetlabs-apache</a> at version 0.0.4. It has seen some minor improvements from us but nothing major, other than a name change from 'apache' to 'httpd'. You can see why we forked in the <a href="https://git.openstack.org/cgit/openstack-infra/puppet-httpd/tree/README.md">Readme</a> of the project, but the kicker is that this module allows you to use raw 'myhost.vhost.erb' templates with apache. You no longer need to know how to translate the apache syntax you want into puppetlabs-apache parameters. Let's see what this looks like:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><b><span style="font-family: inherit;">openstack_project/templates/status.vhost.erb:</span></b> </span><br />
<span style="font-family: "Courier New",Courier,monospace;"># ************************************<br /># Managed by Puppet<br /># ************************************<br /><br />NameVirtualHost <%= @vhost_name %>:<%= @port %><br /><VirtualHost <%= @vhost_name %>:<%= @port %>><br /> ServerName <%= @srvname %><br /><% if @serveraliases.is_a? Array -%><br /><% @serveraliases.each do |name| -%><%= " ServerAlias #{name}\n" %><% end -%><br /><% elsif @serveraliases != '' -%><br /><%= " ServerAlias #{@serveraliases}" %><br /><% end -%><br /> DocumentRoot <%= @docroot %><br /><br /> Alias /bugday /srv/static/bugdaystats<br /> <Directory /srv/static/bugdaystats><br /> AllowOverride None<br /> Order allow,deny<br /> allow from all<br /> </Directory><br /><br /> Alias /reviews /srv/static/reviewday<br /> <Directory /srv/static/reviewday><br /> AllowOverride None<br /> Order allow,deny<br /> allow from all<br /> </Directory><br /><br /> Alias /release /srv/static/release<br /><br /> <Directory <%= @docroot %>><br /> Options <%= @options %><br /> AllowOverride None<br /> Order allow,deny<br /> allow from all<br /> </Directory><br /><br /> # Sample elastic-recheck config file, adjust prefixes<br /> # per your local configuration. Because these are nested<br /> # we need the more specific one first.<br /> Alias /elastic-recheck/data /var/lib/elastic-recheck<br /> <Directory /var/lib/elastic-recheck><br /> AllowOverride None<br /> Order allow,deny<br /> allow from all<br /> </Directory><br /><br /> RedirectMatch permanent ^/rechecks(.*) /elastic-recheck<br /> Alias /elastic-recheck /usr/local/share/elastic-recheck<br /> <Directory /usr/local/share/elastic-recheck><br /> AllowOverride None<br /> Order allow,deny<br /> allow from all<br /> </Directory><br /><br /><br /> ErrorLog /var/log/apache2/<%= @name %>_error.log<br /> LogLevel warn<br /> CustomLog /var/log/apache2/<%= @name %>_access.log combined<br /> ServerSignature Off<br /></VirtualHost></span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span><span style="font-family: "Courier New",Courier,monospace;">::httpd::vhost { 'status.openstack.org':</span><br />
<span style="font-family: "Courier New",Courier,monospace;"> port => 80, <br /> priority => '50',<br /> docroot => '/srv/static/status',<br /> template => 'openstack_project/status.vhost.erb',<br /> require => File['/srv/static/status'],</span><br />
<span style="font-family: "Courier New",Courier,monospace;">}</span><br />
<br />
<br />
If you don't need a vhost and just want to serve a directory, you can:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">::httpd::vhost { 'tarballs.openstack.org':</span><br />
<span style="font-family: "Courier New",Courier,monospace;"> port => 80, <br /> priority => '50',<br /> docroot => '/srv/static/tarballs',<br /> require => File['/srv/static/tarballs'],</span><br />
<span style="font-family: "Courier New",Courier,monospace;">}</span><br />
<br />
The second is <a href="https://git.openstack.org/cgit/openstack-infra/puppet-iptables">puppet-iptables</a>, which lets you write raw iptables rules in a Puppet class and have those rules applied. You can also specify the ports to open up. Again, this is an example of weak modeling. Concat resources around specific rules are coming soon in this <a href="https://review.openstack.org/#/c/193847/">change</a>. Let's see what using the iptables module looks like:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">class { '::iptables':</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> public_tcp_ports => ['80', '443', '8080'],</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> public_udp_ports => ['2003'],</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> rules4 => ['-m state --state NEW -m tcp -p tcp --dport 8888 -s somehost.openstack.org -j ACCEPT'],</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-family: "Courier New", Courier, monospace;"> rules6 => ['-m state --state NEW -m tcp -p tcp --dport 8888 -s somehost.openstack.org -j ACCEPT'],</span></span><br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-family: "Courier New", Courier, monospace;">} </span> </span><br />
<br />
This enables you to manage iptables the way you view iptables. It is easy to debug, easy to reason about, and extensible. We think it provides a significant advantage over the puppetlabs-firewall module. Unfortunately, the puppet-iptables module is currently hardcoded to open up certain openstack hosts; that should be fixed very soon (possibly by you!). Both of these modules try to be as simple as possible.<br />
<br />
Right now, getting these modules is done through git. If you don't want to ride the 'master' train with us, you can hop in #openstack-infra on freenode and ask for a tag to be created at the revision you need. We're working on getting forge publishing into the pipeline; it's not a priority for us right now, but if you need it you can ask for it and we can see about increasing focus there.<br />
<br />
These are two generic modules coming out of OpenStack Infra that advance the puppet ecosystem, and we hope there will be more to come. If you'd like to help us develop these modules, we'd love the help. You can start learning how to contribute to OpenStack <a href="https://wiki.openstack.org/wiki/How_To_Contribute">here</a>.<br />
<br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-5299013914475213282015-08-01T14:07:00.000-07:002015-08-01T14:07:20.946-07:00Inspecting Puppet Module MetadataLast week at #puppethack, <a href="https://github.com/hunner">@hunner</a> helped me land a <a href="https://github.com/puppetlabs/puppetlabs-stdlib/pull/483">patch</a> to stdlib to add a <a href="https://github.com/puppetlabs/puppetlabs-stdlib#load_module_metadata">load_module_metadata</a> function. This function came out of several Puppet module triage sessions and a <a href="https://github.com/puppetlabs/puppetlabs-stdlib/pull/457">patch</a> from <a href="https://twitter.com/raphink">@raphink</a> inspired by a conversation with <a href="https://twitter.com/hirojin">@hirojin</a>.<br />
<br />
The load_module_metadata function is available in master of puppetlabs-stdlib; hopefully it will be wrapped up into one of the later 4.x releases, but it will almost certainly make it into 5.x.<br />
<br />
On its own this function doesn't do much, but it is composable. Let's see some basic usage:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$: cat metadata.pp <br /><br />$metadata = load_module_metadata('stdlib')<br /><br />notify { $metadata['name']: }</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$: puppet apply --modulepath=modules metadata.pp <br />Notice: Compiled catalog for hertz in environment production in 0.03 seconds<br />Notice: puppetlabs-stdlib<br />Notice: /Stage[main]/Main/Notify[puppetlabs-stdlib]/message: defined 'message' as 'puppetlabs-stdlib'<br />Notice: Finished catalog run in 0.03 seconds</span><br />
<br />
As you can see, this isn't the most amazing thing ever. However, access to that information is very useful in the following case:<br />
<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$apache_metadata = load_module_metadata('apache')<br /><br />case $apache_metadata['name'] {<br /> 'puppetlabs-apache': {<br /> # invoke apache as required by puppetlabs-apache<br /> }<br /> 'example42-apache': {<br /> # invoke apache as required by example42-apache<br /> }<br /> default: {<br /> fail("Apache module author not recognized, please add it here")<br /> }<br />}</span><br />
<br />
This is an example of Puppet code that can inspect the libraries loaded in the modulepath, then make intelligent decisions about how to use them. This means that module authors can support multiple versions of 'library' modules and not force their users into one or the other.<br />
<br />
This is a real problem in Puppet right now. For every 'core' module there are multiple implementations, all with the same name. Apache, nginx, mysql, archive, wget, the list goes on. Part of this is a failure of the community to band together behind a single module, but we can't waste time finger-pointing now. The cat is out of the bag and we have to deal with it.<br />
<br />
We've had metadata.json and dependencies for a while now. However, due to the imperfections of the puppet module tool, most advanced users do not depend on dependency resolution from metadata.json. At my work we simply clone every module we need from git; users of r10k do much the same.<br />
<br />
load_module_metadata enables modules to enforce that their dependencies are being met. Simply put a stanza like this in params.pp:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$unattended_upgrades_metadata = load_module_metadata('unattended_upgrades') </span><br />
<span style="font-family: "Courier New",Courier,monospace;">$healthcheck_metadata = load_module_metadata('healthcheck')<br /><br />if versioncmp($healthcheck_metadata['version'], '0.0.1') < 0 {<br /> fail("Puppet-healthcheck is too old to work")<br />}<br />if versioncmp($unattended_upgrades_metadata['version'], '2.0.0') < 0 {<br /> fail("Puppet-unattended_upgrades is too old to work")<br />}</span><br />
<br />
As we already saw, modules can express dependencies on specific implementations and versions. They can also inspect the version available and use that. This is extremely useful when building a module that depends on another module, and that module is crossing a semver major version boundary. In the past, in the OpenStack modules, we passed a parameter called 'mysql_module_version' to each class, which allowed that class to use the correct invocation of the mysql module. Now classes anywhere in your puppet code base can inspect the mysql module directly and determine which invocation syntax to use.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$mysql_metadata = load_module_metadata('mysql')<br /><br />if versioncmp($mysql_metadata['version'], '2.0.0') <= 0 {<br /> # Use mysql 2.0 syntax<br />} else {<br /> # Use mysql 3.0 syntax<br />}</span><br />
<br />
Modules can even open up their own metadata.json, and while it is clunky, it is possible to dynamically assert that dependencies are available and in the correct versions.<br />
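<br />
For example, a class can load its own metadata using the built-in $module_name variable. A minimal sketch, using only the load_module_metadata function described above:<br />
<br />
<pre># $module_name is a built-in Puppet variable that resolves to the name of
# the module containing the current class.
$self_metadata = load_module_metadata($module_name)

notify { "Running ${self_metadata['name']} version ${self_metadata['version']}": }
</pre>
<br />
Looking the metadata up via $module_name means the check does not hard-code the module's own name.<br />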
<br />
I'm excited to see what other tricks people can do with this. I'm anticipating it will make collaboration easier, upgrades easier, and Puppet runs even safer. If you come up with a neat trick, please share it with the community and ping me on twitter (@nibalizer) or IRC: nibalizer.<br />
<br />
<br />
<br />
blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com2tag:blogger.com,1999:blog-1369167856034452100.post-80629458863306176392015-05-10T09:46:00.001-07:002015-05-10T09:46:32.097-07:00Managing patchset stacks with git-reviewIn OpenStack, we use gerrit and git-review to propose changes to the repository. The workflow for that is pretty complicated, and downright confusing if you are coming from the github workflow.<br />
<br />
One of the places where it gets hard/complicated/annoying to use our workflow is if you have multiple dependent changes. I have a technique I use, that I will present below.<br />
<br />
The situation: You have two patches in a stack. There is a bug in the first patchset that you need to fix.<br />
<br />
The simple play: Check out the patchset with 'git review -d <review number>', amend, and git-review. The problem with this is that now you need to go rebase all dependent patchsets against this new review. Sometimes you can get away with using the 'rebase' button, but sometimes you cannot.<br />
<br />
What I do: I use 'git rebase -i HEAD~2' and use 'edit' to change the commit that needs to be changed, rebase goes ahead and auto-rebases everything above it (pausing if needed for me to fix things), then I can 'git review' and it will update all the patchsets that need to be changed.<br />
<br />
This approach works for a stack of any size, but using it on a two-patch stack is the simplest example that works.<br />
<br />
<br />
<br />
The git log before we start:<br />
<br />
<pre>commit e394aba4321f6d30131793e69a4f14b011ce5560
Author: Spencer Krum <nibz@spencerkrum.com>
Date: Wed May 6 15:43:27 2015 -0700
Move afs servers to using o_p::template
This is part of a multi-stage process to merge o_p::server and
o_p::template.
Change-Id: I3bd3242a26fe701741a7784ae4e10e4183be17cf
commit 3e592608b4d369576b88793377151b7bfaacd872
Author: Spencer Krum <nibz@spencerkrum.com>
Date: Wed May 6 15:38:23 2015 -0700
Add the ability for template to manage exim
Managing exim is the one thing sever can do that template cannot.
This is part of a multi stage process for merging server and template.
Change-Id: I354da6b5d489669b6a2fb4ae4a4a64c2d363b758
</pre>
<br />
<br />
Note that we have two commits and they depend on each other. The bug is in 3e592608b4d369576b88793377151b7bfaacd872. We start the interactive rebase below, first with a vim session then with output on the command line. The vim session:<br />
<br />
<pre>$ git rebase -i HEAD~2
1 e 3e59260 Add the ability for template to manage exim
2 pick e394aba Move afs servers to using o_p::template
3
4 # Rebase af02d02..e394aba onto af02d02
5 #
6 # Commands:
7 # p, pick = use commit
8 # r, reword = use commit, but edit the commit message
9 # e, edit = use commit, but stop for amending
10 # s, squash = use commit, but meld into previous commit
11 # f, fixup = like "squash", but discard this commit's log message
12 # x, exec = run command (the rest of the line) using shell
13 #
14 # These lines can be re-ordered; they are executed from top to bottom.
15 #
16 # If you remove a line here THAT COMMIT WILL BE LOST.
17 #
18 # However, if you remove everything, the rebase will be aborted.
19 #
20 # Note that empty commits are commented out
</pre>
<br />
Note that the 'top' commit in the rebase view is the 'bottom' commit in the git log view, because git is stupid. We change the 'pick' to 'e' for 'edit', meaning stop at that point for amending. And the shell output:<br />
<br />
<pre>Stopped at 3e592608b4d369576b88793377151b7bfaacd872... Add the ability for template to manage exim
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
(master|REBASE-i 1/2)$: git st
rebase in progress; onto af02d02
You are currently editing a commit while rebasing branch 'master' on 'af02d02'.
(use "git commit --amend" to amend the current commit)
(use "git rebase --continue" once you are satisfied with your changes)
nothing to commit, working directory clean
</pre>
<br />
Then we make changes to modules/openstack_project/manifests/template.pp (not shown) and continue the rebase:<br />
<br />
<br />
<pre> (master *|REBASE-i 1/2)$: git st
rebase in progress; onto af02d02
You are currently editing a commit while rebasing branch 'master' on 'af02d02'.
(use "git commit --amend" to amend the current commit)
(use "git rebase --continue" once you are satisfied with your changes)</pre>
<pre> </pre>
<pre>Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: modules/openstack_project/manifests/template.pp</pre>
<pre>no changes added to commit (use "git add" and/or "git commit -a")</pre>
<pre> </pre>
<pre> (master *|REBASE-i 1/2)$: git add modules/openstack_project/manifests/template.pp
(master +|REBASE-i 1/2)$: git rebase --continue
[detached HEAD 6ca26e9] Add the ability for template to manage exim
1 file changed, 7 insertions(+)
Successfully rebased and updated refs/heads/master.
</pre>
Then we publish our changes with git-review:<br />
<pre>(master u+2)$: git review
You are about to submit multiple commits. This is expected if you are
submitting a commit that is dependent on one or more in-review
commits. Otherwise you should consider squashing your changes into one
commit before submitting.
The outstanding commits are:
2bc78a8 (HEAD, master) Move afs servers to using o_p::template
6ca26e9 Add the ability for template to manage exim
Do you really want to submit the above commits?
Type 'yes' to confirm, other to cancel: yes
remote: Resolving deltas: 100% (4/4)
remote: Processing changes: updated: 2, refs: 2, done
To ssh://krum-spencer@review.openstack.org:29418/openstack-infra/system-config.git
* [new branch] HEAD -> refs/publish/master
</pre>
With that we have changed a commit deep in the stack, rebased any commits above it, and published our changes to the gerrit server. blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com43tag:blogger.com,1999:blog-1369167856034452100.post-13563037312478599702015-05-10T09:12:00.002-07:002015-05-10T09:46:52.923-07:00Overview of Puppet in OpenStack InfraLast week I gave this presentation at the PDX Puppet Users group. It is an overview of how we use Puppet in the OpenStack Infra project. There is no video or audio recording.<br />
<br />
<a href="http://docs.openstack.org/infra/publications/puppet-overview/#%281%29">Presentation</a>blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com3tag:blogger.com,1999:blog-1369167856034452100.post-33884658565149505262015-03-06T00:10:00.000-08:002015-03-06T00:14:41.570-08:00Checking out Servo<a href="https://github.com/servo/servo">Servo</a> is an experimental web browser from Mozilla. It was the impetus and driver for early development of the Rust language. I'm excited to ditch firefox because of its performance issues and I don't want to run google anything these days. Blogging on blogger, I know.<br />
<br />
I got it built from the instructions on the GitHub page; here are some screencaps of what it can do.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqj8sGLTDecxe6d6GXbbWnRzD9v29y1wssY9ghWj_ov58YXMlU6GYWq7ymUG6Cx8dRz3ljAcZqTxXjOxJU_5zVk2yVB3ydqpV8Tb8pQxo3i_oPyduuDHQDyYawrTLftgizGfdKxlG4aeS7/s1600/2015-03-06-00:14:06_1600x900_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqj8sGLTDecxe6d6GXbbWnRzD9v29y1wssY9ghWj_ov58YXMlU6GYWq7ymUG6Cx8dRz3ljAcZqTxXjOxJU_5zVk2yVB3ydqpV8Tb8pQxo3i_oPyduuDHQDyYawrTLftgizGfdKxlG4aeS7/s1600/2015-03-06-00:14:06_1600x900_scrot.png" height="225" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDMp5EDwEhcvQn90j6oml9O3M_dUKHdnG_5RN83sa72ZxA6p4CN41t8ulR1S3XQ6ffd847H9AKL9yuVk6un1Ebq4wvhQhJNaw-0iqATgg_DGIrGIjdM-ttdTr65wVsGWBLK717foU12ANS/s1600/2015-03-06-00:00:10_1600x900_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDMp5EDwEhcvQn90j6oml9O3M_dUKHdnG_5RN83sa72ZxA6p4CN41t8ulR1S3XQ6ffd847H9AKL9yuVk6un1Ebq4wvhQhJNaw-0iqATgg_DGIrGIjdM-ttdTr65wVsGWBLK717foU12ANS/s1600/2015-03-06-00:00:10_1600x900_scrot.png" height="225" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis-StX4ghd486zFbTtwIhqnxOmriAJbDxrl_NN87T1xnFEqbgyYWbNacHYNxMwLSNbRc-MMfu5NYr1-OpdfAxQxx5pLx72zgyx14GaZPARlwDMt9O3ssm6vQ40uAw74dL-TVikLG9cqYeh/s1600/2015-03-06-00:00:58_1600x900_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis-StX4ghd486zFbTtwIhqnxOmriAJbDxrl_NN87T1xnFEqbgyYWbNacHYNxMwLSNbRc-MMfu5NYr1-OpdfAxQxx5pLx72zgyx14GaZPARlwDMt9O3ssm6vQ40uAw74dL-TVikLG9cqYeh/s1600/2015-03-06-00:00:58_1600x900_scrot.png" height="225" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkiP9S0CEauddbz0NnDxEbn0uXJrfaIlSSMA36Lvcya3-PBMbsKTqAmAuHZz9HgdRtHOJFJxCarAnNY42tJjf00s5ZgR4kQImUhRCgAvOp2sMOeHPKQ13lQh3qvNYqhyphenhyphenWfICTHOcG0NuR5/s1600/2015-03-06-00:01:44_1600x900_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkiP9S0CEauddbz0NnDxEbn0uXJrfaIlSSMA36Lvcya3-PBMbsKTqAmAuHZz9HgdRtHOJFJxCarAnNY42tJjf00s5ZgR4kQImUhRCgAvOp2sMOeHPKQ13lQh3qvNYqhyphenhyphenWfICTHOcG0NuR5/s1600/2015-03-06-00:01:44_1600x900_scrot.png" height="225" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDXqKKlw4C8WuBbyWQJl4MZT3MTqGEiaaH6tWOkr288yAI94RsAn4tyTssOyVuKObKbuOSDPkf2CTFaJru_y8I8CyfpgniUyWevG2JJN6pL3wLq1z-Ovs1VjakACTxvP2azUS8Rv_Ml3nt/s1600/2015-03-06-00:03:00_1600x900_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDXqKKlw4C8WuBbyWQJl4MZT3MTqGEiaaH6tWOkr288yAI94RsAn4tyTssOyVuKObKbuOSDPkf2CTFaJru_y8I8CyfpgniUyWevG2JJN6pL3wLq1z-Ovs1VjakACTxvP2azUS8Rv_Ml3nt/s1600/2015-03-06-00:03:00_1600x900_scrot.png" height="225" width="400" /></a></div>
<br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-90521951405748023242015-02-15T18:25:00.000-08:002015-05-10T10:07:33.600-07:00EFI boot on HP EliteBook 840After entirely too long with my HP EliteBook 840, I have made it boot successful without human interaction. After installing ubuntu my typical power-on process looked like this:<br />
<br />
<ul>
<li> Power button</li>
<li> Computer tries and fails to boot, dumping to diagnostics</li>
<li> Power button</li>
<li> Interrupt the boot process at the right time with F9</li>
<li> Select 'boot from efi file'</li>
<li> Select a disk</li>
<li> Drill into the filesystem and select 'shimx64.efi'</li>
</ul>
This was super annoying. I finally got fed up and went exploring the settings. There is a section in settings for setting a custom EFI path. The interface is a bit derpy, but it's eventually possible to get to a text input box.<br />
<br />
I put in the text box:
<br />
<code>\EFI\ubuntu\shimx64.efi</code><br />
<br />
At this point, I saved and rebooted. The machine was able to come up into ubuntu with no human intervention.
My next task is to enable the 'Custom Logo at boot' component.blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com3tag:blogger.com,1999:blog-1369167856034452100.post-80230021082601739542015-02-11T05:37:00.002-08:002015-02-11T14:00:08.462-08:00Rocket: First stepsRocket is a container runtime from CoreOS. In this post we will do some basic tasks with Rocket: installing it, creating an ACI, publishing that ACI to OpenStack Swift, and pulling it down.<br />
<br />
<pre><code>wget https://github.com/coreos/rocket/releases/download/v0.3.1/rocket-v0.3.1.tar.gz
tar xzvf rocket-v0.3.1.tar.gz
cd rocket-v0.3.1
./rkt help</code></pre>
<br />
I moved 'rkt' and 'stage1.aci' to ~/bin for ease of use.<br />
We also need actool:
<br />
<pre>derp@myrkt:~$ git clone https://github.com/appc/spec.git
Cloning into 'spec'...
remote: Counting objects: 1604, done.
remote: Compressing objects: 100% (20/20), done.
Receiving objects: 100% (1604/1604), 614.19 KiB | 0 bytes/s, done.
remote: Total 1604 (delta 7), reused 1 (delta 0)
Resolving deltas: 100% (924/924), done.
Checking connectivity... done.
derp@myrkt:~$ cd spec/
derp@myrkt:~/spec$ ls
ace aci actool build CONTRIBUTING.md DCO discovery examples Godeps LICENSE pkg README.md schema SPEC.md test VERSION
derp@myrkt:~/spec$ ./build
Building actool...
go linux/amd64 not bootstrapped, not building ACE validator
derp@myrkt:~/spec$ ls bin/actool
bin/actool
derp@myrkt:~/spec$ ./bin/actool -h
Usage of actool:
-debug=false: Print verbose (debug) output
-help=false: Print usage information and exit </pre>
<pre> </pre>
<pre> </pre>
Now we need a simple go application:
<pre> </pre>
<pre>package main
import (
"log"
"net/http"
)
func main() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
log.Printf("OHAIDER: request from %v\n", r.RemoteAddr)
w.Write([]byte("hello\n"))
})
log.Fatal(http.ListenAndServe(":5000", nil))
} </pre>
<pre> </pre>
And a manifest.json file:<br />
<pre> </pre>
<pre> {
"acKind": "ImageManifest",
"acVersion": "0.2.0",
"name": "nializer/daemon",
"labels": [
{
"name": "version",
"value": "1.0.0"
},
{
"name": "arch",
"value": "amd64"
},
{
"name": "os",
"value": "linux"
}
],
"app": {
"user": "root",
"group": "root",
"exec": [
"/bin/daemon"
],
"ports": [
{
"name": "www",
"protocol": "tcp",
"port": 5000
}
]
},
"annotations": [
{
"name": "authors",
"value": "Kelsey Hightower , Spencer Krum "
},
{
"name": "created",
"value": "2014-10-27T19:32:27.67021798Z"
}
]
}</pre>
<pre> </pre>
<pre> </pre>
And a file structure:
<pre> </pre>
<pre>root@myrkt:~/app# find daemon-layout/
daemon-layout/
daemon-layout/rootfs
daemon-layout/rootfs/bin
daemon-layout/rootfs/bin/daemon
daemon-layout/manifest</pre>
<pre> </pre>
<pre> </pre>
Then we can build (and verify) this image:
<pre> </pre>
<pre>root@myrkt:~/app# find daemon-layout/
daemon-layout/
daemon-layout/rootfs
daemon-layout/rootfs/bin
daemon-layout/rootfs/bin/daemon
daemon-layout/manifest
root@myrkt:~/app# actool build daemon-layout/ daemon-static.aci
root@myrkt:~/app# actool --debug validate daemon-static.aci
daemon-static.aci: valid app container image </pre>
<pre> </pre>
<pre> </pre>
Then we can run the image (this doesn't work):<br />
<pre> </pre>
<pre>root@myrkt:~/app# rkt run daemon-static.aci
/etc/localtime is not a symlink, not updating container timezone.
Error: Unable to open "/lib64/ld-linux-x86-64.so.2": No such file or directory
Sending SIGTERM to remaining processes...
Sending SIGKILL to remaining processes...
Unmounting file systems.
Unmounting /proc/sys/kernel/random/boot_id.
All filesystems unmounted.
Halting system. </pre>
<pre> </pre>
<pre> </pre>
The issue is that our go binary is not statically compiled:
<pre> </pre>
<pre>root@myrkt:~/app# ldd daemon
linux-vdso.so.1 => (0x00007fff2d2cd000)
libgo.so.5 => /usr/lib/x86_64-linux-gnu/libgo.so.5 (0x00007ff809e45000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff809c2f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff809868000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff80aca3000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff80964a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff809344000) </pre>
<pre> </pre>
<pre> </pre>
But this is okay because we can just add these files (plus a few more) to our ACI:
<pre> </pre>
<pre>root@myrkt:~/app# find daemon-layout/
daemon-layout/
daemon-layout/rootfs
daemon-layout/rootfs/lib64
daemon-layout/rootfs/lib64/libc.so.6
daemon-layout/rootfs/lib64/ld-linux-x86-64.so.2
daemon-layout/rootfs/bin
daemon-layout/rootfs/bin/daemon
daemon-layout/rootfs/usr
daemon-layout/rootfs/usr/lib
daemon-layout/rootfs/usr/lib/x86_64-linux-gnu
daemon-layout/rootfs/usr/lib/x86_64-linux-gnu/libgo.so.5
daemon-layout/rootfs/lib
daemon-layout/rootfs/lib/x86_64-linux-gnu
daemon-layout/rootfs/lib/x86_64-linux-gnu/libpthread.so.0
daemon-layout/rootfs/lib/x86_64-linux-gnu/libc.so.6
daemon-layout/rootfs/lib/x86_64-linux-gnu/libgcc_s.so.1
daemon-layout/rootfs/lib/x86_64-linux-gnu/libm.so.6
daemon-layout/manifest </pre>
<pre> </pre>
<pre> </pre>
Now we can re-build, verify, and run:
<pre> </pre>
<pre> </pre>
<pre>root@myrkt:~/app# actool build --overwrite daemon-layout/ daemon.aci
root@myrkt:~/app# actool --debug validate daemon.aci
daemon.aci: valid app container image
root@myrkt:~/app# du -sh daemon.aci
5.6M daemon.aci
root@myrkt:~/app# rkt run daemon.aci
/etc/localtime is not a symlink, not updating container timezone.
2015/02/11 13:32:58 OHAIDER: request from 127.0.0.1:55024</pre>
<pre></pre>
This means everything is working. You can exit by pressing ^] three times.
<pre></pre>
We can then post the daemon file to swift, using the tempurl system from a previous post. Then, using a tiny url service, we can run the aci from the network:<br />
<pre></pre>
<pre></pre>
<pre>root@myrkt:~/app# rkt fetch http://l.pdx.cat/nibz_daemon.aci
rkt: fetching image from http://l.pdx.cat/nibz_daemon.aci
Downloading aci: [============================================ ] 5.81 MB/5.84 MB
Downloading signature from http://l.pdx.cat/nibz_daemon.sig
EOF
root@myrkt:~/app# rkt run nibz_daemon
rkt only supports http or https URLs (nibz_daemon)
root@myrkt:~/app# rkt run http://l.pdx.cat/nibz_daemon.aci
rkt: fetching image from http://l.pdx.cat/nibz_daemon.aci
Downloading aci: [===================================== ] 4.89 MB/5.84 MB
Downloading signature from http://l.pdx.cat/nibz_daemon.sig
EOF </pre>
<br />
Okay, so that didn't work. Maybe later on I will figure that part out. I am particularly excited to use the Swift meta tags to supply the security metadata Rocket uses for collecting signatures.<br />
<br />
blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com1tag:blogger.com,1999:blog-1369167856034452100.post-6688173583751628622015-02-11T05:06:00.003-08:002015-02-11T05:06:48.971-08:00Rocket: Containers without Docker<a href="https://github.com/coreos/rocket">Rocket</a> is a container runtime from <a href="https://coreos.com/">CoreOS.</a> It is a response to Docker's feature creep. Simultaneously with Rocket, the CoreOS team released the <a href="https://github.com/appc/spec/blob/master/SPEC.md">App Container Spec</a>, a specification of the image format consumed by a container runtime. Multiple container runtimes could then be written and could all consume the same images. In this post I will talk about my experience with it and what I like and don't like so far. Note that I don't have a ton of experience with this tool at this point.<br />
<br />
There are a couple of things inside the App Container spec/Rocket ecosystem that are just fantastic (actually, I'm pumped about basically the whole thing):<br />
<br />
<h3>
Security is a first class concern</h3>
Rocket uses GPG to verify the authenticity of App Container Images (ACIs). It does this by allowing the administrator to trust keys; images signed by those keys are then trusted. Rocket maintains its own keyring with trust levels. This borrows from the techniques used to secure Linux packaging. Rocket/ACI also use sha512 sums to uniquely identify ACIs.<br />
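Since an ACI is identified by the sha512 of the image file, you can compute an image's ID with nothing beyond the standard library. A sketch; the sha512- prefix mimics the appc image ID style:

```python
import hashlib

def aci_image_id(path, chunk_size=1 << 20):
    """Compute an appc-style image ID: sha512 over the ACI's bytes."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        # read in chunks so large images don't need to fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha512-" + h.hexdigest()
```

Two hosts that compute the same ID for an image know they hold byte-identical ACIs, independent of any registry.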
<br />
<h3>
Built on core unix utilities</h3>
The main operations (actually, all operations) involved in creating, signing, verifying, and sharing ACIs are composed out of standard unix utilities: tar, gpg, gzip. Helper utilities simply wrap these functions; <a href="https://github.com/appc/spec/tree/master/actool">actool</a> is one such utility. This keeps ACIs simple, doesn't tie anyone to custom tooling, and increases debuggability and hackability. Particularly in the signing and verification components, this means no one has to trust CoreOS or anyone else.<br />
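To drive the point home, here is a rough Python equivalent of what actool build does: a gzipped tarball containing a manifest file plus a rootfs directory, matching the layout shown in the previous post. This is an illustration of the format's simplicity, not a replacement for actool:

```python
import io
import json
import tarfile

def build_aci(rootfs_dir, manifest, out_path):
    """Pack a manifest dict and a rootfs directory into a gzipped tar,
    ACI-style: 'manifest' at the root, files under 'rootfs/'."""
    with tarfile.open(out_path, "w:gz") as tar:
        data = json.dumps(manifest).encode()
        info = tarfile.TarInfo("manifest")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
        # recursively add the rootfs tree under the 'rootfs' prefix
        tar.add(rootfs_dir, arcname="rootfs")
```

Anything that can read tar and gzip — which is everything — can unpack and inspect the result.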
<br />
<h3>
Emphasis on pushing exactly what you want into the container</h3>
With ACI, you copy the files you want into the container and stop there. This encourages small images, and encourages administrators to know exactly what is going in their images.<br />
<br />
<h3>
Networking just works</h3>
No crazy -p port:port:ipaddress nonsense: you specify the listen port in the configuration file, and boom, done. It listens on all interfaces.<br />
<br />
<h3>
Configuration file makes sense, extendable</h3>
When you build an ACI, you bake a manifest.json into it. This is a configuration file with a combination of runtime settings and overall metadata. I am already comparing and contrasting it with Puppet's metadata.json. Both of these files contain basic metadata such as authorship information, and both are young formats whose tooling and usage are still maturing. JSON's schemalessness allows users and devs to rapidly prototype and try out new information and structures in these files.<br />
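For a feel of the format, here is roughly what a minimal manifest might look like, built and serialized in Python. The field names are from memory of an early version of the spec, so treat them as illustrative; note that the ports entry is the whole networking story:

```python
import json

# Approximate shape of an early appc ImageManifest; field names are
# from memory of the 0.x spec and may not match later revisions.
manifest = {
    "acKind": "ImageManifest",
    "acVersion": "0.1.1",
    "name": "pdx.cat/nibz/daemon",
    "app": {
        "exec": ["/bin/daemon"],       # entrypoint inside the rootfs
        "user": "0",
        "group": "0",
        "ports": [
            # this is the networking configuration: name a port, done
            {"name": "http", "port": 8080, "protocol": "tcp"},
        ],
    },
}
print(json.dumps(manifest, indent=2))
```

Because it's plain JSON, extending it with experimental keys costs nothing, which is exactly the rapid-prototyping property mentioned above.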
<br />
<h3>
HTTP</h3>
HTTP is used as the primary transport. ACIs are simple files that can be pushed into web servers or S3 and pulled out with wget or the Rocket utility itself.<br />
This is a massive improvement over the current Docker Hub situation. Rocket also has some rudimentary support for inferring GitHub HTTP locations from names such as 'coreos.com/etcd'.blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-44888993152445991162015-02-08T18:57:00.002-08:002015-02-08T18:58:11.982-08:00OpenStack Swift on HP CloudOpenStack Swift is the Object Storage component of OpenStack. It is roughly analogous to Amazon S3. HP Cloud (full disclaimer: I work at HP and get cloud resources for free) has an Object Storage component. This post will be about getting basic functionality out of it.<br />
<br />
A very long document on HP's object storage can be found <a href="http://docs.hpcloud.com/publiccloud/api/object-storage/">here.</a> Reading as much of it as I have has permanently damaged my soul, so I am posting here to share my story; hopefully you won't have to spend so much time kicking it.<br />
<br />
Usually when doing things on HP Cloud's OpenStack services I just poke around the command-line utilities until I am happy. Let's look at basic tooling with the python-swiftclient tool:<br />
<br />
Check deps:<br />
<br />
<pre>$: pip install -U python-swiftclient
Requirement already up-to-date: python-swiftclient in /disk/blob/nibz/corepip/lib/python2.7/site-packages
Requirement already up-to-date: six>=1.5.2 in /disk/blob/nibz/corepip/lib/python2.7/site-packages (from python-swiftclient)
Requirement already up-to-date: futures>=2.1.3 in /disk/blob/nibz/corepip/lib/python2.7/site-packages (from python-swiftclient)
Requirement already up-to-date: requests>=1.1 in /disk/blob/nibz/corepip/lib/python2.7/site-packages (from python-swiftclient)
Requirement already up-to-date: simplejson>=2.0.9 in /disk/blob/nibz/corepip/lib/python2.7/site-packages (from python-swiftclient) </pre>
<br />
Check creds:<br />
<br />
$: [ -z $OS_PASSWORD ] && echo set password<br />
$: [ -z $OS_TENANT_NAME ] && echo set tenant name<br />
$: echo $OS_AUTH_URL<br />
https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/<br />
<br />
With that set, we can use the swift command line client to upload, list, download, and delete files. All swift objects are put in containers, the equivalent of s3 buckets.<br />
<br />
$: swift list craigslist<br />
x220t.jpg<br />
$: swift upload craigslist r61e.jpg <br />
r61e.jpg<br />
$: swift download craigslist x200t.jpg<br />
Object 'craigslist/x200t.jpg' not found<br />
$: swift list craigslist<br />
r61e.jpg<br />
x220t.jpg<br />
$: time swift list craigslist<br />
r61e.jpg<br />
x220t.jpg<br />
<br />
real 0m1.913s<br />
user 0m0.283s<br />
sys 0m0.051s<br />
$: swift download craigslist x220t.jpg<br />
x220t.jpg [auth 6.971s, headers 13.029s, total 13.314s, 0.002 MB/s]<br />
$: swift delete craigslist r61e.jpg<br />
r61e.jpg<br />
$: time swift list craigslist<br />
x220t.jpg<br />
<br />
real 0m2.330s<br />
user 0m0.299s<br />
sys 0m0.059s<br />
<br />
<br />
As you can see from the timing information, some of the operations are fast and some are slow. Swift download provides its own timing information, which is nice. All upload/download operations above are authenticated through keystone. To provide a 'fake cdn' service like Amazon S3, swift uses tempurls. This is how tempurls are typically used:<br />
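For reference, upstream Swift computes a tempurl signature as an HMAC-SHA1 over the request method, the expiry timestamp, and the object path. A sketch of that computation, using the example values from this post (the key here is made up):

```python
import hmac
from hashlib import sha1

def tempurl_signature(method, expires, path, key):
    """Upstream OpenStack Swift tempurl signature:
    HMAC-SHA1 over 'METHOD\\nexpires\\npath' keyed with the
    account's Temp-URL-Key."""
    body = "%s\n%s\n%s" % (method, expires, path)
    return hmac.new(key.encode(), body.encode(), sha1).hexdigest()

sig = tempurl_signature(
    "GET", 1423453300, "/v1/10724706841504/craigslist/x220t.jpg", "supersecret")
print("temp_url_sig=%s&temp_url_expires=%d" % (sig, 1423453300))
```

Append that query string to the object path, prepend the cluster's root URL, and anyone holding the link can fetch the object until the expiry passes.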
<br />
$: swift tempurl GET 3600 /v1/10724706841504/craigslist/x220t.jpg<br />
Usage: swift tempurl <method> <seconds> <path> <key><br />
Generates a temporary URL for a Swift object.<br />
<br />
Positions arguments:<br />
[method] An HTTP method to allow for this temporary URL.<br />
Usually 'GET' or 'PUT'.<br />
[seconds] The amount of time in seconds the temporary URL will<br />
be valid for.<br />
[path] The full path to the Swift object. Example:<br />
/v1/AUTH_account/c/o.<br />
[key] The secret temporary URL key set on the Swift cluster.<br />
To set a key, run 'swift post -m<br />
"Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"'<br />
$: swift tempurl GET 3600 /v1/10724706841504/craigslist/x220t.jpg supersecret<br />
/v1/10724706841504/craigslist/x220t.jpg?temp_url_sig=7388703d11f6a3362dff1008a2f5dd3b2fd31293&temp_url_expires=1423453300<br />
<br />
Then, given this information you prepend the root url of the swift service and curl at it (don't forget single quotes!):<br />
<br />
$: swift tempurl GET 3600 /v1/10724706841504/craigslist/x220t.jpg supersecret<br />
/v1/10724706841504/craigslist/x220t.jpg?temp_url_sig=7388703d11f6a3362dff1008a2f5dd3b2fd31293&temp_url_expires=1423453300<br />
$: keystone catalog | grep -i object<br />
Service: object-store<br />
| publicURL | https://region-b.geo-1.objects.hpcloudsvc.com/v1/10724706841504 |<br />
| versionInfo | https://region-b.geo-1.objects.hpcloudsvc.com/v1/ |<br />
| versionList | https://region-b.geo-1.objects.hpcloudsvc.com |<br />
$: curl 'https://region-b.geo-1.objects.hpcloudsvc.com/v1/10724706841504/craigslist/x220t.jpg?temp_url_sig=7388703d11f6a3362dff1008a2f5dd3b2fd31293&temp_url_expires=1423453300'<br />
401 Unauthorized: Temp URL invalid<br />
<br />
<br />
After poking this for <i>some time</i> I found this tidbit buried in the HP Cloud docs on Swift, er excuse me, Object Storage:<br />
<br />
<blockquote class="tr_bq">
<h5 id="2752-differences-between-object-storage-and-openstack-swift-tempurl-signature-generation">
2.7.5.2 Differences Between Object Storage and OpenStack Swift TempURL Signature Generation</h5>
There are two differences between Object Storage and OpenStack Swift TempURL signature generation:<br />
<ul>
<li>OpenStack Swift Temporary URLs (TempURL) required the
X-Account-Meta-Temp-URL-Key header be set on the Swift account. In
Object Storage you do not need to do this. Instead we use Access Keys to
provide similar functionality.</li>
<li>Object Storage Temporary URLs require the user's Project ID and
Access Key ID to be prepended to the signature. OpenStack Swift does
not.</li>
</ul>
</blockquote>
<br />
So this basically means you can't use the python-swiftclient tempurl command with HP Cloud Object Storage. At least nearby in the docs they provide a code snippet; I've created my own script.<br />
<br />
<br />
$: python hpcloud_swift.py GET 3600 /v1/10724706841504/craigslist/x220t.jpg $OS_ACCESS_KEY_SECRET
https://region-b.geo-1.objects.hpcloudsvc.com/v1/10724706841504/craigslist/x220t.jpg?temp_url_sig=10724706841504:KAPYAYFHCH51B3ZCCB4W:044cefcdf58d4bccea7da3a4836145488a7c8496&temp_url_expires=1423453940
<br />
<br />
<br />
This depends on setting OS_ACCESS_KEY_ID and OS_ACCESS_KEY_SECRET in your environment. These are nonstandard environment variables. Once this is set, anyone will be able to prepend the object storage root url to the front of the url output by hpcloud_swift.py and make their own janky-cdn.<br />
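Putting the documented differences together, the HP Cloud variant presumably looks something like the sketch below: the same HMAC-SHA1 body as upstream, with the project ID and access key ID prepended to the hex digest, matching the signature format in the output above. The exact bytes signed here are my guess; the authoritative version is the script in the gist.

```python
import hmac
from hashlib import sha1

def hp_tempurl_sig(method, expires, path, project_id, access_key_id, secret):
    # Guesswork based on HP's documented differences: sign the usual
    # 'METHOD\nexpires\npath' body with the access key secret, then
    # prepend project ID and access key ID to the digest.
    body = "%s\n%s\n%s" % (method, expires, path)
    digest = hmac.new(secret.encode(), body.encode(), sha1).hexdigest()
    return "%s:%s:%s" % (project_id, access_key_id, digest)
```

Note the three-part temp_url_sig this produces mirrors the 10724706841504:KAPYAYFHCH51B3ZCCB4W:... value shown in the example output.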
<br />
The script to perform the hpcloud fakery can be found <a href="https://gist.github.com/nibalizer/a8f26ba773fcd221fabe">here</a>.<br />
<br />
<br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com1tag:blogger.com,1999:blog-1369167856034452100.post-29781548442516738402014-12-15T17:52:00.002-08:002014-12-15T17:52:16.835-08:00Puppet cert inspectorToday while poking around in the puppet source code, I came across a utility in the ext/ directory called cert_inspector. This seems to be a little utility that opens up certificates and interrogates them for useful data. This is better than what I usually do, which is incanting openssl directly. It also is capable of chewing up an entire /var/lib/puppet/ssl directory and dumping information on every cert and key it finds. See the output below:<br />
<br />
<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"> (master u=)$: ./ext/cert_inspector ~/.puppet/ssl/certs/ca.pem <br />/home/nibz/.puppet/ssl/certs/ca.pem:<br /> Certificate assigning name /CN=Puppet CA: zabava.cat.pdx.edu to key</CN=Puppet CA: zabava.cat.pdx.edu><br /> serial number 1<br /> issued by /CN=Puppet CA: zabava.cat.pdx.edu<br /> signed by key</CN=Puppet CA: zabava.cat.pdx.edu><br /><br /> (master u=)$: ./ext/cert_inspector ~/.puppet/ssl/<br />WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner_what_r_u_doin.pem" could not be interpreted<br />WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner_stahp.pem" could not be interpreted<br />WARNING: file "/home/nibz/.puppet/ssl/public_keys/maxwell.hsd1.or.comcast.net.pem" could not be interpreted<br />WARNING: file "/home/nibz/.puppet/ssl/public_keys/hunner.pem" could not be interpreted<br />/home/nibz/.puppet/ssl/certs/ca.pem:<br /> Certificate assigning name /CN=Puppet CA: zabava.cat.pdx.edu to key</CN=Puppet CA: zabava.cat.pdx.edu><br /> serial number 1<br /> issued by /CN=Puppet CA: zabava.cat.pdx.edu<br /> signed by key</CN=Puppet CA: zabava.cat.pdx.edu><br /><br />/home/nibz/.puppet/ssl/certificate_requests/hunner.pem:<br /> Certificate request for /CN=hunner having key key</CN=hunner><br /> signed by key</CN=hunner><br /><br />/home/nibz/.puppet/ssl/certificate_requests/hunner_stahp.pem:<br /> Certificate request for /CN=hunner_stahp having key key</CN=hunner_stahp><br /> signed by key</CN=hunner_stahp><br /><br />/home/nibz/.puppet/ssl/certificate_requests/hunner_what_r_u_doin.pem:<br /> Certificate request for /CN=hunner_what_r_u_doin having key key</CN=hunner_what_r_u_doin><br /> signed by key</CN=hunner_what_r_u_doin><br /><br />/home/nibz/.puppet/ssl/private_keys/hunner.pem:<br /> Private key for key</CN=hunner><br /><br />/home/nibz/.puppet/ssl/private_keys/hunner_stahp.pem:<br /> Private key for key</CN=hunner_stahp><br /><br 
/>/home/nibz/.puppet/ssl/private_keys/hunner_what_r_u_doin.pem:<br /> Private key for key</CN=hunner_what_r_u_doin><br /><br />/home/nibz/.puppet/ssl/private_keys/maxwell.hsd1.or.comcast.net.pem:<br /> Private key for key</home/nibz/.puppet/ssl/private_keys/maxwell.hsd1.or.comcast.net.pem></span><br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-62926918184269786712014-12-09T15:32:00.001-08:002014-12-09T15:43:21.282-08:00Testing Puppet node definitionsSometimes Puppet node definitions get a little hairy. Here is a quick trick I use to validate them manually. This is inspired by <a href="https://review.openstack.org/#/c/140480/">this review.</a><br />
<br />
Given a regex node definition, create a test file called nodedef.pp (matching the commands below):<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">node /^git(-frontend\d+)?\.openstack\.org$/ { </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> notify { 'match': } </span><br />
<span style="font-family: "Courier New",Courier,monospace;">} </span><br />
<br />
Then, using the --certname="testnode" syntax to puppet apply, do some quick spot testing to see what happens.<br />
<br />
<pre><span style="font-family: "Courier New",Courier,monospace;">$: puppet apply nodedef.pp
Error: Could not find default node or by name with 'maxwell.pdx.edu, maxwell.pdx, maxwell' on node maxwell.pdx.edu
Error: Could not find default node or by name with 'maxwell.pdx.edu, maxwell.pdx, maxwell' on node maxwell.pdx.edu
$: puppet apply --certname='git.openstack.org' nodedef.pp
Notice: Compiled catalog for git.openstack.org in environment production in 0.02 seconds
Notice: match
Notice: /Stage[main]/Main/Node[git-frontendd.openstack.org]/Notify[match]/message: defined 'message' as 'match'
Notice: Finished catalog run in 0.03 seconds
$: puppet apply --certname='git48.openstack.org' nodedef.pp
Error: Could not find default node or by name with 'git48.openstack.org, git48.openstack, git48, maxwell.pdx.edu, maxwell.pdx, maxwell' on node git48.openstack.org
Error: Could not find default node or by name with 'git48.openstack.org, git48.openstack, git48, maxwell.pdx.edu, maxwell.pdx, maxwell' on node git48.openstack.org
$: puppet apply --certname='git-frontend01.openstack.org' nodedef.pp
Notice: Compiled catalog for git-frontend01.openstack.org in environment production in 0.02 seconds
Notice: match
Notice: /Stage[main]/Main/Node[git-frontendd.openstack.org]/Notify[match]/message: defined 'message' as 'match'
Notice: Finished catalog run in 0.03 seconds </span></pre>
<br />
This gives us the confidence to push this node definition to production without worrying about affecting existing git servers.<br />
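The same regex can also be exercised outside Puppet entirely; a quick Python check of the pattern against the hostnames used above:

```python
import re

# The regex from the node definition above, transcribed verbatim.
node_re = re.compile(r'^git(-frontend\d+)?\.openstack\.org$')

for host in ("git.openstack.org", "git-frontend01.openstack.org"):
    assert node_re.match(host), host          # these should match
for host in ("git48.openstack.org", "git-frontend.openstack.org",
             "maxwell.pdx.edu"):
    assert not node_re.match(host), host      # these should not
```

It's the same spot-check as the puppet apply runs, just without compiling a catalog each time.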
blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-63207903924824635202014-12-08T03:26:00.001-08:002014-12-08T03:26:17.121-08:00#puppethack#puppethack is the new version of the Puppet triage-a-thon. It is a decentralized hackathon for open source Puppet projects. This year I participated mostly by contributing to the <a href="https://github.com/puppetlabs/puppetlabs-rabbitmq">puppetlabs-rabbitmq</a> module. I worked closely with <a href="https://github.com/cmurphy/">Colleen Murphy</a> of Puppet Labs on this.<br />
<br />
When we started there were 31 outstanding pull requests. Now there are only 21. And five of those have been opened during or after the hackathon.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFCL1R-PK75CxKB9DNClNrBT-jETUpThAM3fLh4dzGTrxXGfZoWei6VK990SO1CXBLfhdVbzyvowaX6Ja-tD2zlSOnpku7eYSSpRDBkz-UNhHDoPFMissDXgghSzie_fIOB38FdJj_mePA/s1600/2014-12-08--1418037802_473x199_scrot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFCL1R-PK75CxKB9DNClNrBT-jETUpThAM3fLh4dzGTrxXGfZoWei6VK990SO1CXBLfhdVbzyvowaX6Ja-tD2zlSOnpku7eYSSpRDBkz-UNhHDoPFMissDXgghSzie_fIOB38FdJj_mePA/s1600/2014-12-08--1418037802_473x199_scrot.png" height="134" width="320" /></a></div>
<br />I am most proud of my beaker testing PR which added beaker (acceptance and integration) testing to the rabbitmq_user, rabbitmq_vhost, and rabbitmq_policy types.<br />
<br />
Overall #puppethack was a success and I am glad I participated. I want to thank my employer, HP, for allowing me to participate in the open source ecosystem. I am looking forward to doing it next year!blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-83412167047652215952014-12-07T21:25:00.001-08:002014-12-07T21:25:26.117-08:00Puppet Functions in stdlibYou should read up on the Puppet Functions in puppetlabs/stdlib. Seriously.<br />
<br />
If you consider yourself a serious Puppet user, i.e. you use it more than twice a month, you owe it to yourself to read through them. <a href="https://github.com/puppetlabs/puppetlabs-stdlib/blob/master/README.markdown">The README</a> has a brief description of the functions that are available. Every time I read through it, I find more useful functions have been added. And with stronger protections for function composability, there is no reason not to use functions all the time, every time.<br />
<br />
Even if all you do is learn about the existence of the validation functions, you will be able to make your code more robust and easier on users in just two lines of code.<br />
<br />
For extra credit check out <a href="https://github.com/puppet-community/puppetcommunity-extlib">the puppet-community extlib module</a> which has more functions not deemed cool enough for puppet core.<br />
<br />
To eat my own words, I'll now post some functions whose existence I did not know about:<br />
<br />
<ul>
<li>chomp</li>
<li>chop</li>
<li>defined_with_params</li>
<li>difference</li>
<li>delete_undef_values</li>
<li>empty (OMG USEFUL)</li>
<li>get_param (this changes eveeeeerything)</li>
<li>private</li>
<li>reject (duuuuuudeee)</li>
<li>squeeze</li>
</ul>
Happy hacking, fellow Puppeteers! blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-13089210420890363082014-11-29T12:28:00.001-08:002014-11-29T12:28:59.211-08:00Bashrc: GerritGerrit is the code review and git hosting tool used by OpenStack. It is common courtesy to mark a change 'work in progress' when you have submitted it but it is not ready for others to review. Others will see the work in progress bit is set and not waste time reviewing patches that are not ready yet.<br />
<br />
The 'git review' tool is good for submitting changes, but then I have to go to the web UI to mark a change as wip. I can use gertty for this, but again that means going and searching it out.<br />
<br />
I have added the following function to my .bashrc:<br />
<br /><span style="font-family: "Courier New", Courier, monospace;">gerrit () {<br /><br /> if [ "$1" = "wip" ]; then<br /> commit=`git show | grep -m1 commit | cut -d " " -f 2 2>/dev/null`<br /> if [ -z "$commit" ]; then<br /> echo "Not in git directory?"<br /> return 1<br /> fi<br /> gerrit review "$commit" --workflow -1<br /> return $?<br /> fi<br /> username=`git config gitreview.username`<br /><br /> ssh -o VisualHostKey=no -p 29418 "$username"@review.openstack.org gerrit "$@"<br />}</span><br />
<br />
This function enables some pretty cool features. It takes as arguments any arguments that the gerrit ssh command line interface takes. Meaning you can do things like these:<br />
<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">$: gerrit ls-projects | grep puppet<br />openstack-infra/puppet-apparmor<br />openstack-infra/puppet-dashboard<br />openstack-infra/puppet-github<br />openstack-infra/puppet-httpd<br />openstack-infra/puppet-jenkins<br />openstack-infra/puppet-kibana<br />openstack-infra/puppet-pip<br />openstack-infra/puppet-storyboard<br />openstack-infra/puppet-vcsrepo<br />openstack-infra/puppet-vinz<br />openstack-infra/puppet-yum<br />openstack-infra/puppet-zuul<br />openstack/tripleo-puppet-elements<br />stackforge/puppet-ceilometer<br />stackforge/puppet-ceph<br />stackforge/puppet-cinder<br />stackforge/puppet-designate<br />stackforge/puppet-glance<br />stackforge/puppet-heat<br />stackforge/puppet-horizon<br />stackforge/puppet-ironic<br />stackforge/puppet-keystone<br />stackforge/puppet-manila<br />stackforge/puppet-monasca<br />stackforge/puppet-n1k-vsm<br />stackforge/puppet-neutron<br />stackforge/puppet-nova<br />stackforge/puppet-openstack<br />stackforge/puppet-openstack-cloud<br />stackforge/puppet-openstack-specs<br />stackforge/puppet-openstack_dev_env<br />stackforge/puppet-openstack_extras<br />stackforge/puppet-openstacklib<br />stackforge/puppet-sahara<br />stackforge/puppet-swift<br />stackforge/puppet-tempest<br />stackforge/puppet-trove<br />stackforge/puppet-tuskar<br />stackforge/puppet-vswitch<br />stackforge/puppet_openstack_builder<br />$: gerrit -h<br />gerrit [COMMAND] [ARG ...] 
[--] [--help (-h)]<br /><br /> -- : end of options<br /> --help (-h) : display this help text<br /><br />Available commands of gerrit are:<br /><br /> ban-commit Ban a commit from a project's repository<br /> create-account Create a new batch/role account<br /> create-group Create a new account group<br /> create-project Create a new project and associated Git repository<br /> flush-caches Flush some/all server caches from memory<br /> gc Run Git garbage collection<br /> gsql Administrative interface to active database<br /> ls-groups List groups visible to the caller<br /> ls-members List the members of a given group<br /> ls-projects List projects visible to the caller<br /> ls-user-refs List refs visible to a specific user<br /> plugin <br /> query Query the change database<br /> receive-pack Standard Git server side command for client side git push<br /> rename-group Rename an account group<br /> review Verify, approve and/or submit one or more patch sets<br /> set-account Change an account's settings<br /> set-members Modify members of specific group or number of groups<br /> set-project Change a project's settings<br /> set-project-parent Change the project permissions are inherited from<br /> set-reviewers Add or remove reviewers on a change<br /> show-caches Display current cache statistics<br /> show-connections Display active client SSH connections<br /> show-queue Display the background work queues<br /> stream-events Monitor events occurring in real time<br /> test-submit <br /> version Display gerrit version<br /><br />See 'gerrit COMMAND --help' for more information.</span><br /><br />
We also inspect the first command to see if it is 'wip.' This allows us to create new commands to the gerrit cli without changing any code or having access to the gerrit server. What I've added is the 'wip' command which inspects the local git repository for the latest change, and marks it as wip with gerrit. This changes my workflow to look like this:<br />
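The trickiest part of the function is digging the newest sha out of git show. That step can be pulled out and tested on its own; a Python sketch that mirrors the grep/cut pipeline (slightly stricter: it anchors on lines starting with 'commit'):

```python
def latest_commit(git_show_output):
    """Return the sha from the first 'commit <sha>' line, mirroring
    `git show | grep -m1 commit | cut -d " " -f 2` from the bash
    function above."""
    for line in git_show_output.splitlines():
        if line.startswith("commit "):
            return line.split(" ", 2)[1]
    return None   # not in a git directory, or no commits
```

The grep version would also match a commit message containing the word 'commit' if it somehow appeared first; anchoring on the line prefix avoids that.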
<br />
<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">$: git review</span><br />
<span style="font-family: "Courier New", Courier, monospace;">$: gerrit wip</span><br />
<br />
This is much shorter, more unixy, and doesn't require me to hop out of the terminal. Future improvements would be to identify if you are in a stack of changes and wip all of them.blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com1tag:blogger.com,1999:blog-1369167856034452100.post-23524459861950792392014-11-22T12:30:00.000-08:002014-11-22T12:30:06.855-08:00Leaving BloggerIt's time to join the future and host my own blog. I'm also going to do the regular stuff of evaluating technologies and picking one, developing tools that other people have developed, etc.<br />
<br />
I've debated doing this many times. I've always felt that I didn't want to be that person who only blogs about blogging. I feel after a couple years of pretty consistent blogging, that I won't totally ignore my blog after putting a lot of effort into it. Plus this is an excuse to build a website, something I am embarrassingly weak in.<br />
<br />
I'm pretty sure the next location of my blog will be http://spencerkrum.com but I'm not entirely sure.<br />
<br />
Right now the plan is to move to pelican because python and restructured text are both technologies that have crossover with OpenStack, and I like that. What I may end up writing is tooling to pull my old posts out of Blogger.<br />
<br />
I have no idea if Blogger will let me put a redirect in for my subdomain on their domain. I have to think that literally no one at Google works on this now, right? After Reader went away I was sure this would get the axe, and yet it remains...blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-43545641722508051892014-11-12T00:22:00.004-08:002014-11-12T00:22:56.228-08:00Guest Post on Puppet-a-dayToday I have been honored to write a guest post on the puppet-a-day community blog. You can find my post <a href="http://puppet-a-day.com/blog/2014/11/06/linting-metadatajson/">here.</a><br />
<br />
<br />
Big thanks to @daenney for making the puppet-a-day thing go.blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-13052276302732777762014-11-05T02:30:00.002-08:002014-11-05T02:30:46.839-08:00Future posts/projectsThere are a list of projects I want to do or see get done, and posts I'd like to write. For lack of better place, I'll simply post the names here and we'll see where it goes.<br />
<br />
<br />
<ul>
<li>Puppet-kick replacement</li>
<ul>
<li>Puppet kick really is super dead </li>
<li>need new daemon/maybe new command line utility</li>
<li>some kind of daemon that listens for HTTP kicks and fires Puppet</li>
<li>could re-use the kick api, or bold new territory</li>
<ul>
<li>/api/status</li>
<ul>
<li>can return:</li>
<ul>
<li>'puppet running'</li>
<li>'puppet not running'</li>
<li>'puppet last run was: <bool: success> <bool: changes>'</li>
<li>'puppet admin disabled'</li>
</ul>
</ul>
<li>/api/run</li>
<ul>
<li>async</li>
<li>can fire a run</li>
<li>can fire a run against an environment</li>
<li>maybe noop?</li>
</ul>
</ul>
<li>Fuck no, I'm not dealing with auth.conf. Maybe just a list of DNS names or certificate fingerprints that are allowed to fire a kick?</li>
</ul>
</ul>
<ul>
<li>PuppetBoard-like web applications</li>
<ul>
<li>CA signing/revoking web application</li>
<li>DB-backed, ENC</li>
</ul>
<li>Barbican integration</li>
<ul>
<li>Use openstack-barbican</li>
<li>Secret storage as a backend for hiera</li>
<li>Certificate Authority api, possibly become ref implementation of external CA interaction for puppet</li>
</ul>
<li>Ceph Continuation</li>
<ul>
<li>continue exploration of ceph tool</li>
</ul>
<li>AFS exploration</li>
<ul>
<li>continue exploration of AFS/Kerb</li>
</ul>
<li>Terraform w/ OpenStack</li>
<ul>
<li>It is so freaking close, and yet</li>
</ul>
<li>Hodor</li>
<ul>
<li>hodor is a dumb script I wrote around nova to get work done </li>
</ul>
<li>GPG mapping</li>
<ul>
<li>games i've played with using javascript to visualize the gpg web of trust</li>
</ul>
</ul>
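To make the first item concrete, the kick-replacement daemon's API could start as a routing table this small. Everything below is invented to match the wishlist above, nothing more:

```python
def route(method, path, state):
    """Dispatch for the hypothetical kick daemon; returns (status, body).

    Endpoint names come from the wishlist above; the status strings
    and response shapes are made up for illustration.
    """
    if method == "GET" and path == "/api/status":
        return 200, {"puppet": state.get("status", "puppet not running")}
    if method == "POST" and path == "/api/run":
        # async: record that a run was fired and return immediately
        state["status"] = "puppet running"
        return 202, {"queued": True}
    return 404, {"error": "unknown endpoint"}

state = {}
print(route("GET", "/api/status", state))   # before any run
print(route("POST", "/api/run", state))     # fire a run
print(route("GET", "/api/status", state))   # after firing
```

Wrapping this in http.server and bolting on the fingerprint allow-list would be the next step; the routing logic stays this simple.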
<br />
These will be done (or not done) in any random order. If you want to see one get done, give me some feedback. blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-80714462114367671102014-10-20T23:23:00.000-07:002014-10-20T23:23:07.573-07:00Cool diagram toolSo <a href="https://github.com/tehmaze">tehmaze</a> wrote a sweet command line graphing utility. <br />
<br />
Jason Dixon retweeted this on twitter:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSR-rp9mI2zp1GGDtlPl5qvHR_qUA1vNxF4Mdxe6-hxabz-tw0zVtNlHLEw8agB0WZVZZ_ncAABvNX3Kc4-X2Zt9wG4LlPQnC0HmnHUBlmChOTmP4vpfVKxS6I1UztFPq2C9OWJpLfERAP/s1600/twitterssss.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSR-rp9mI2zp1GGDtlPl5qvHR_qUA1vNxF4Mdxe6-hxabz-tw0zVtNlHLEw8agB0WZVZZ_ncAABvNX3Kc4-X2Zt9wG4LlPQnC0HmnHUBlmChOTmP4vpfVKxS6I1UztFPq2C9OWJpLfERAP/s1600/twitterssss.png" height="148" width="400" /></a></div>
<br />
<br />
Which led me to do this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr1J9DFvxLkXlkep4q16UZsf8r_KvjDDAtOe6YxBDmsHoO1ckOzYy6M67ffh2TIj-J1FaNcdgZTkrZtd7kSRJongM-sAANgFJDMAl0GKfuMA8Aztq_Dzj85I0V5V7Kq99qu9rG1EwIFA-_/s1600/diagram_grhapite.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr1J9DFvxLkXlkep4q16UZsf8r_KvjDDAtOe6YxBDmsHoO1ckOzYy6M67ffh2TIj-J1FaNcdgZTkrZtd7kSRJongM-sAANgFJDMAl0GKfuMA8Aztq_Dzj85I0V5V7Kq99qu9rG1EwIFA-_/s1600/diagram_grhapite.png" height="236" width="640" /></a></div>
<br />
Which is a modified version of what Jason Dixon taught me in this video from PDX DevOps Days:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://vimeo.com/78759539"><img alt="" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha1URyyc0ZZ-ugAh59EVRxjnMKARl-q-iISIvbrEp3f_cjm3lGOPvyeoQfHkG1Ya-jTlOrundOzRNwdhTnoYHEsw-NKZhhTCl8z-x8zHLGJmRI8_uA0Ckeo4DkvV7V2PspJWu7EaxUzSz2/s1600/jason_dixon.png" height="203" title="" width="320" /></a><a href="https://www.blogger.com/null"></a></div>
<br />
<br />
<br />
<br />
Diagram: https://github.com/tehmaze/diagram<br />
My line: <span style="font-family: "Courier New",Courier,monospace;">curl -s 'http://graphite.openstack.org/render/?target=sum(stats_counts.gerrit.event.*)&format=json&from=-24hrs' | json_pp | grep ',' | grep -v \] | tr -d "," | diagram</span>
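For anyone who would rather not chain grep and tr, here is a small hypothetical Python sketch of the same extraction step. It assumes the standard shape of Graphite's <span style="font-family: "Courier New",Courier,monospace;">format=json</span> render output: a list of series, each with a "datapoints" list of [value, timestamp] pairs. The function name is my own, not part of any tool:<br />

```python
import json

def graphite_values(raw):
    """Pull the numeric values out of a Graphite render JSON response.

    Graphite's /render?format=json endpoint returns a list of series,
    each carrying "datapoints" as [value, timestamp] pairs; gaps in
    the data come back as null (None after parsing), which we skip.
    """
    values = []
    for series in json.loads(raw):
        for value, _timestamp in series["datapoints"]:
            if value is not None:
                values.append(value)
    return values
```

Printed one value per line, the result can be piped into diagram just like the grep/tr version.<br />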
<br />
<br />
<br />
<br />
<br />
<br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-87076764163680361542014-09-18T23:59:00.004-07:002014-09-18T23:59:57.493-07:00Puppet Conf 2014!Puppet Conf is next week. I'll be attending with Krinkle and Blkperl. We will have a table and books to sign at the 'Meet the Authors' event on the first evening.<br />
<br />
Hope to see you all there!blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-24604476418257046882014-08-01T22:17:00.001-07:002014-08-01T22:17:40.908-07:00Puppet Manifests and operating system updatesThis is a post about my opinion on how we should be using the params.pp pattern. It originated from review <a href="https://review.openstack.org/#/c/108649/">here.</a><br />
<br />
This is what the code used to look like:<br />
<span style="font-family: "Courier New",Courier,monospace;"><br />$ruby_bundler_package = 'ruby-bundler'</span><br />
<br />
It worked great on precise. It still does. However, when trusty came out, they changed the name of the package to <span style="font-family: "Courier New", Courier, monospace;">bundler</span>. This broke the Puppet code above.<br />
<br />
The fix was simple:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;"># will install ruby-bundler for all Debian distros</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># or for Ubuntu trusty</span><br />
<span style="font-family: "Courier New",Courier,monospace;">if ($::operatingsystem == 'Debian') or ($::lsbdistcodename == 'trusty') { </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> $ruby_bundler_package = 'bundler' </span><br />
<span style="font-family: "Courier New",Courier,monospace;">}</span><br />
<span style="font-family: "Courier New",Courier,monospace;">else {<br /> $ruby_bundler_package = 'ruby-bundler'</span><br />
<span style="font-family: "Courier New",Courier,monospace;">}</span><br />
<br />
<br />
This is a little more complicated because it handles Debian nodes in addition to Ubuntu nodes.<br />
<br />
However, there is a better way:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"># will install ruby-bundler for Ubuntu Precise </span><br />
<span style="font-family: "Courier New", Courier, monospace;"># and bundler for Debian or newer Ubuntu distros</span><br />
<span style="font-family: "Courier New", Courier, monospace;">if ($::lsbdistcodename == 'precise') {</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> $ruby_bundler_package = 'ruby-bundler'</span><br />
<span style="font-family: "Courier New", Courier, monospace;">} else {</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> $ruby_bundler_package = 'bundler'<br />}</span><br />
<br />
<br />
Instead of adding new names to an ever-expanding case statement, this special-cases the precise machines. In addition to being shorter, this better future-proofs the code. Inevitably this code will run on Utopic or later Ubuntu versions, and defaulting to the trusty package name means it will automatically work on those newer releases.<br />
<br />
Now, it is generally best practice for case statements to fail on the default case, and using an else branch like this violates the spirit of that rule. But if statements like this are common in params.pp and will save you time in the future.<br />
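To make the pattern concrete, here is a hedged sketch of how this if usually sits inside a params.pp class. The module name <span style="font-family: "Courier New", Courier, monospace;">mymodule</span> is hypothetical, not taken from the review above:<br />

```puppet
# Hypothetical params.pp using the "special-case the old release" pattern.
# Only Ubuntu precise still uses the ruby-bundler name; Debian and every
# newer Ubuntu release get the bundler name by default.
class mymodule::params {
  if $::lsbdistcodename == 'precise' {
    $ruby_bundler_package = 'ruby-bundler'
  } else {
    $ruby_bundler_package = 'bundler'
  }
}
```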
<br />
As a follow-up, you can ask yourself: "Where else did I special-case the new version of the operating system instead of the old one?"<br />
blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com2tag:blogger.com,1999:blog-1369167856034452100.post-18656567093827582462014-07-15T19:43:00.000-07:002014-07-15T19:43:24.699-07:00OSCON Talk: "Pro Puppet"I'm giving a talk at OSCON 2014! It's called Pro Puppet and will cover the techniques I think anyone can use to get the most out of using Puppet. To celebrate and promote, we've created a word cloud of all the words in Pro Puppet 2nd Edition. Please come watch me at 4:10 pm on Wednesday!<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://web.cecs.pdx.edu/~sage/pro_puppet_flask_cloud.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://web.cecs.pdx.edu/~sage/pro_puppet_flask_cloud.png" height="640" width="518" /></a></div>
<a href="http://web.cecs.pdx.edu/~sage/pro_puppet_flask_cloud.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></a><br />blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com0tag:blogger.com,1999:blog-1369167856034452100.post-19763463828389322492014-07-12T14:31:00.000-07:002014-07-12T14:31:37.149-07:00CephFS as a replacement for NFS: Part 1This is the first in a series of posts about CephFS. The overall goal is to evaluate and characterize the behavior of CephFS and determine if it can be a reliable replacement for NFS.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-4Ftm4tTghHE/AAAAAAAAAAI/AAAAAAAAAHY/Uzjg3YaqZ2Y/photo.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-4Ftm4tTghHE/AAAAAAAAAAI/AAAAAAAAAHY/Uzjg3YaqZ2Y/photo.jpg" height="319" width="320" /></a></div>
<br />
<br />
The current use case of NFS is 400G-1T 'stashes' shared from an NFS server to hundreds of Linux/Unix clients in an academic setting. In some cases these stashes are accessed by a single user on a single machine; in others, dozens of users access them across dozens of machines.<br />
<br />
Drawbacks to the current situation are the same as any situation involving NFS:<br />
<br />
<ul>
<li>Security is a joke</li>
<li>Single über-powerful NFS filers present a SPOF</li>
<li>Bigger and bigger filers get more and more expensive</li>
<li>Forced to use proprietary and expensive ZFS on Solaris</li>
<li>Backing up is becoming a problem as total dataset size becomes more than a tape backup system can really hold</li>
<li>No tiering of storage. The whole dataset either goes on the fast disks or the slow disks</li>
</ul>
There are also some advantages of this system:<br />
<br />
<ul>
<li>NFS is old faithful</li>
<li>Every operating system supports it, and usually pretty well</li>
<li>NFS ipv6's like a champ</li>
<li>It's already working</li>
<li>Integrates well with pam, autofs, ldap</li>
<li>Vendor, while expensive, is really good at fixing it</li>
<li>ZFS allows 'thin provisioning' so that we can oversubscribe.</li>
<li>ZFS allows full nfsv4 acls to be used (This could also go in the drawbacks section because extended acls cause much pain)</li>
</ul>
<br />
Some key advantages we hope to achieve with Ceph:<br />
<br />
<ul>
<li>Clustering</li>
<li>Replication of data at the ceph layer instead of RAID</li>
<li>Authentication</li>
<li>Tiering of disks/storage</li>
<li>Setting different replication levels for different storage sets</li>
</ul>
<br />
The CephFS remote filesystem has capabilities roughly analogous to NFS. There is a single 'volume'; it can be mounted simultaneously by multiple clients, and it respects Unix groups.<br />
<br />
In the follow-up posts we will build out a test Ceph cluster, build filesystems on it, mount them, and generally attempt to reach feature parity with an NFS system.<br />
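As a preview of the mounting step, a CephFS kernel-client mount can be expressed as an fstab entry roughly like the sketch below. The monitor address, mount point, and secret file path are all placeholders, not values from a real cluster:<br />

```
# /etc/fstab sketch for a kernel-client CephFS mount using cephx auth.
# Equivalent one-off command:
#   mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
192.0.2.10:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  2
```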
<br />
blindscientisthttp://www.blogger.com/profile/04476912519573301641noreply@blogger.com7