SIGJE

a blog on sparkly devops.

Using Test Kitchen to Create Sandboxes for Learning

Earlier in 2015, I heard about Meteor, a JavaScript app platform, when someone was describing their application stack. When I saw it offered as an introductory course on Coursera, I decided to take the class and build on my JavaScript, HTML, and CSS experience.

I’m loath to install development software directly onto my laptop when I’m first learning about it. One, I don’t know what I don’t know, and being able to quickly destroy the environment helps me clean up if there are any security issues. Two, if there are conflicts with software that I depend on in my day-to-day work, this could be a nightmare of creating extra yaks to shave just to get back to a working state.

I solve this by using Test Kitchen with Chef and Vagrant (or AWS) to quickly spin up a system that I can use to complete the coursework and experiment without impacting my system (other than system resources like diskspace, memory, cpu when using Vagrant).

The following documents a little bit about how I do this. It’s very much an iterative and incremental process that leaves me with something that lets me repeat the course as needed. If used in combination with git, I can even quickly revert to the state of a specific module within the class.

Prerequisites

Install the Chef Development Kit, which includes Test Kitchen.

Setup the Base Cookbook

First I set up a base cookbook that is essentially my class project cookbook. For the Meteor class, I called it meteor-app, which is probably not the best name to use, but it works. If I were committing my code back to GitHub to share, I’d probably be much more specific in the naming.


$ chef generate cookbook meteor-app

I edit the newly created .kitchen.yml, choosing a single platform in the platforms section. As this is for a class and testing a specific application, I’m not trying to test across all platforms. I chose to stick with CentOS, which works fine for this class, and deleted the Ubuntu platform specification.

$ kitchen list

The output of kitchen list at this point will show one instance that is CentOS-specific.

Creation of Base Image

Setting up the base image is quick with a kitchen create. If the base OS image (for this instance, https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-7.1_chef-provisionerless.box) isn’t available on the system, it will be downloaded from the internet. In this case, I’ve done this many times, so it’s already available to me and the setup is very quick. I could also modify this and specify exactly which image I want to use.

$ kitchen create

Running kitchen create will set up an instance of CentOS 7 on my local laptop based on this image. If I wanted to use AWS, I would modify the driver name from vagrant.

Installation of Chef

$ kitchen converge

Converging my node with kitchen converge will install Chef and run the default recipe found in meteor-app/recipes/default.rb (which is currently empty).

Logging In

Next I’ll log into the system and follow through the process required for the class.

$ kitchen login

If I had more than one instance, I would need to specify a specific instance with kitchen login INSTANCE. My instance is called default-centos-71, so kitchen login default-centos-71 would work.

By default I’m logging in as the user vagrant when I do kitchen login.

Within the course, the first thing they ask is to set up a working directory. It doesn’t matter as much since I’ve created a separate development instance. I make the directory with mkdir dev, and also update the default.rb recipe with a directory resource.

directory "/home/vagrant/dev/"

Now if I were to copy my meteor-app cookbook to a new system and run kitchen converge, the system that boots up will have the dev directory created for me.
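If I wanted the recipe to be more explicit, I could also declare ownership and permissions on the directory. Here’s a minimal sketch, assuming the default vagrant user and group provided by the Bento box:

directory "/home/vagrant/dev" do
  owner "vagrant" # assumes the default vagrant user on the instance
  group "vagrant"
  mode  "0755"
end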

For each change that the class asks me to make, I can try it out, then update my recipe to reflect what needs to be done. I can then commit the changes, along with documentation, to a local git repository or even GitHub as I go through each module, which allows me to go back to a specific state of the environment whenever I want to. If I want to test out something that is different from what the instructor has asked, I can, without worry of completely breaking my environment. This is especially important when I’m working with something new that I don’t have enough context about to understand how my changes impact the system.

The next step is to install the Meteor JavaScript app platform. This uses the curl-to-shell pattern: curl https://install.meteor.com/ | sh.

To translate this into Chef for my recipe, I could try out one of the community meteor cookbooks, or I can just set up the minimum needed using the shell script that is available at https://install.meteor.com. We could even store a specific version of the script within the cookbook. If we browse the script, we can examine exactly what it’s doing and plan accordingly.
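A minimal translation of that curl | sh step into the default recipe might look like the following sketch. The guard path is an assumption (that the installer creates a .meteor directory in the user’s home), and it keeps the recipe from re-running the installer on every converge:

bash "install_meteor" do
  user "vagrant"
  cwd "/home/vagrant"
  environment "HOME" => "/home/vagrant"
  code "curl https://install.meteor.com/ | sh"
  not_if { ::File.exist?("/home/vagrant/.meteor") } # assumption: installer creates ~/.meteor
end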

Next within the course, I created an app from the command line on the virtual machine with meteor create my_first_app.

Starting Up Meteor

This sets up 3 files:

 my_first_app.css
 my_first_app.html
 my_first_app.js

Next, within the my_first_app directory I start up my system with meteor.

Once I install Meteor, set up the app, and start up meteor, I realize that it is running on port 3000 by default. Since I’m running this on a virtual machine, I can’t just go directly to port 3000 from my web browser. I can fix this by updating the vagrant driver configuration with a network section that forwards the local port on my system to the port on the virtual machine.

driver:
  name: vagrant
  network:
  - ["forwarded_port", {guest: 3000, host: 3000}]

With this configuration, I can now browse to localhost:3000.

Browsing to the Virtual Machine localhost:3000

Now I have a working environment that allows me to edit my local cookbook, converge my node, and see the output of my changes from my browser without breaking anything on my base system. I can keep iterating and changing as needed. As long as my cookbook reflects the changes that I need to replicate my environment, I can quickly get back to a working state as needed.

Open Source Community for Collaboration Skill Practice

Earlier in the month, I shared some feelings about examining tools with the devops lens. In this article, let’s dig into more of the technical aspects of working with some of these tools that enable automation and give us increased understanding, transparency, and collaboration.

One of the best things about open source communities is practicing collaboration. One of the worst things is that how to successfully work with each project can be implicit.

In this example, I’ll illustrate collaboration with tools using the Chef community cookbook users open source project. The goal of the users cookbook is to distill the complexities of what is required when adding a user to a system on various platforms into an easy to use resource. This is challenging due to the differences per platform. It’s unlikely that a single person would know everything that is required for every single platform. I’m using the users cookbook as an example as even if someone doesn’t know about the intricacies of using Chef, they can understand the intent of the cookbook, and if desired they can still contribute whether through providing additional context or correcting assumptions about existing platforms.

In community cookbooks managed by Chef, a CONTRIBUTING.md doc refers to a centralized CONTRIBUTING.md doc. Including a CONTRIBUTING doc (or a reference to contributing within the README.md) is a recommended practice for open source projects. GitHub will show potential contributors a banner linking to this doc if it exists. This allows you to describe up front the ways in which you would best like to interact with contributions, and the types of contributions that you would and would not like to receive. For instance, if your project is written in Python, but you don’t care for PEP-8, you could state there that a contribution applying PEP-8 conventions would be unwelcome.

Often these contributing documents sketch out only the minimum processes to get started, but there are many workflows and branching strategies that individuals use to collaborate and resolve the conflicts that arise from different perspectives and approaches.

Many git tutorials give experience with solo git use, but leave out the complexities of collaborative tool use. One can read up on the intricacies of git usage, but without a way to practice, understanding git workflows can be difficult.

Git Configuration Files

One way to learn about some of the hidden secrets of git is to examine dotfiles available on GitHub. If something doesn’t make sense, review the git documentation. Let’s take a look at a modified example alias from Fletcher Nichol’s dotfiles.

graph = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)%Creset %C(cyan)(%an)%Creset' --date=relative

The --graph option creates a text based graphical representation of the commit history.

With the --pretty flag, you can specify a formatting string.

The format string allows us to focus on what we want to see when looking at the history of commits. In this alias, the symbols translate to showing us the following information in the specific colors.

%h: abbreviated commit hash
%d: ref names
%s: subject
%cr: committer date, relative
%an: author name

Finally --date=relative shows dates relative to the current time, e.g. “2 hours ago”.

When using git graph with this alias, it gives you output like the following:

Example of git graph in action.

This makes it easy to see recent commits. If there is a commit of interest, this view makes it really easy to run git show on the object you want to inspect.

Examining past commits helps us understand how the code is structured on a project, as well as some of the design patterns that project uses for git workflows.

Issues and Pull Requests

Looking at open issues and pull requests (“PR”) we can obtain additional information about the project and get an idea of the needs of consumers.

Travis is a hosted, distributed continuous integration service used to build and test projects. Travis integration is free for open source projects. This provides one mechanism for testing pull requests prior to integration, giving some level of confidence about risk. The .travis.yml file defines the configuration.

We can look at a sample pull request (PR), Pull Request 117 from Arnoud Vermeer. The GitHub GUI will link to a build. We can see that Pull Request 117 has a failure with the rubocop check.

Pull Request 117 rubocop failure.

When we look at a PR, we may find that there are changes we want to accept and changes that we don’t. We can pick explicitly what we want to accept with git’s cherry-pick command, or we can adopt different workflows that have a similar effect.

Examining a Pull Request - Example 1

To facilitate working with Pull Request 117, let’s incorporate another helpful git alias, git pr:

pr = "!_git_pr() { git fetch origin pull/$1/head:pr-$1 && git checkout pr-$1; }; _git_pr"

This allows us to quickly pull down and examine someone’s contributions from a PR. In this case, I want to pull down PR 117 in the users cookbook and examine it.

➜  users git:(master) git pr 117
remote: Counting objects: 8, done.
remote: Total 8 (delta 4), reused 4 (delta 4), pack-reused 4
Unpacking objects: 100% (8/8), done.
From github.com:chef-cookbooks/users
 * [new ref]         refs/pull/117/head -> pr-117
Switched to branch 'pr-117'

We can examine the commits in the pull request with git graph.

Output of git graph with alias

This shows two commits 7623e00 and bc74a45.

The main changes are in bc74a45. In this commit the contributor adds code so that, on FreeBSD platforms, it checks whether the shell specified in the data bag JSON object exists on the node as specified, or in /usr/local. If the shell isn’t available in either of these two locations, we set the shell to the FreeBSD default shell /bin/sh. This PR exposes some fragility in our current definition, as we don’t check for the existence of the shell on any other platform. Depending on our current priority and workload, we may rewrite the resource to be less fragile or accept the contributions as they are.
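The heart of the change is roughly the following (a paraphrased sketch rather than the verbatim diff, with u standing in for the data bag item being processed in the definition’s loop):

# On FreeBSD, fall back to /bin/sh when the requested shell doesn't exist
# at its stated path or under /usr/local.
shell = u['shell']
if node['platform'] == 'freebsd' &&
   !::File.exist?(shell) &&
   !::File.exist?("/usr/local#{shell}")
  shell = '/bin/sh'
end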

Examining a Pull Request - Example 2

There are additional utilities that can help us beyond the simple git aliases that we can construct. One example is hub. As a wrapper around git, hub provides some useful additions to the git client, making it easier to work with PRs. Once you’ve installed hub, you can see the project’s issues, open up a project’s wiki, and use a number of other options from the command line.

When working with a PR, you can quickly create a new branch with its contents with a simple checkout:

git checkout https://github.com/chef-cookbooks/users/pull/117

Similar to the pr alias:

➜  users git:(master) git checkout https://github.com/chef-cookbooks/users/pull/117
Updating funzoneq
remote: Counting objects: 8, done.
remote: Total 8 (delta 4), reused 4 (delta 4), pack-reused 4
Unpacking objects: 100% (8/8), done.
From git://github.com/funzoneq/users
 * [new branch]      master     -> funzoneq/master
Branch funzoneq-master set up to track remote branch master from funzoneq.

This will create an appropriately named branch, and allow you to take what you want from the PR and add any necessary changes. For example, if a PR has minor failures with any test cases, you might want to check out the PR, tweak it until the failing tests pass, and then commit the code.

After checking out the PR, the commits can be evaluated.

Squashing Commits

git rebase origin/master -i

Commits can be skipped, squashed, or edited interactively. Squashing is the process of taking one or more commits and melding them into a previous commit. This is useful to simplify the set of commits that a peer has to review. For just this reason, some projects prefer that commits be rebased or squashed prior to sending a pull request. Some organizations or teams discourage the practice of rebasing or squashing in order to preserve a verbose code history. Check the contributing documentation or talk to a team member before you adopt a specific practice.

pick bc74a45 Check if shell exists on FreeBSD. If not, fall back to /bin/sh by default. If it's a manually installed shell, then it lives in /usr/local/bin/{bash,zsh,rbash}
pick 7623e00 Make Travis CI happy

# Rebase 72d3800..7623e00 onto 72d3800
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

I can modify the second pick to s and squash this into a single commit.

Squashing creates a new commit object; after I squash, git graph shows me the new object.

Output of git graph after commit squashed.

Testing

I can now test whether the Travis issue is still a problem in the current branch by manually running rubocop, the command that was failing in the Travis build earlier.

  users git:(funzoneq-master) rubocop
Inspecting 16 files
................

With rubocop, a . represents a file without issues.

git push -fu origin funzoneq-master

This sets up a tracking branch and force pushes the edit of the history. In this example, it gives me the option to do a PR (which I did), resulting in Pull Request 123. This is possible because I have permission to commit to this repository.

Examining an Issue

Let’s take a look at a reported issue, Issue 118. In this issue, Chris Gianelloni reported a problem with the users cookbook on Mac OS X.

There is no PR in this case, so I create a branch with git checkout -b

git checkout -b issues_118

In the earlier example, I skipped over how to validate that the code actually worked on the system. We could manually test the code, if we had a Mac OS X machine, using the chef-apply command, an executable that runs a single recipe from the command line.
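For example, a hypothetical one-off recipe like the one below (the names and paths are only illustrative) could be saved as test_user.rb and run with chef-apply test_user.rb to exercise the resources by hand:

# test_user.rb -- run with: chef-apply test_user.rb
user 'test_user' do
  shell '/bin/bash'
end

directory '/Users/test_user' do
  owner 'test_user' # the attribute in question in Issue 118
end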

Examining the cookbook structure shows that there are chefspec tests, but no other tests. Inside the test directory, there is only a fixtures directory that includes sample cookbooks. This exposes the risk of making changes to code in this project.

In last year’s Sysadvent, I introduced writing custom resources and using Test Kitchen in the Baking Delicious Resources with Chef article. Test Kitchen is an implementation of sandbox automation that can run on an individual’s computer and integrates with a number of different cloud providers and virtualization technologies, including Amazon EC2, CloudStack, Digital Ocean, Rackspace, OpenStack, Vagrant, and Docker. It has a static configuration that can be easily checked into version control along with a software project.

Using Test Kitchen to Spin Up Instances

Inside the users cookbook, there is a .kitchen.yml with the vagrant driver, the chef_zero provisioner, and a number of platforms. This would allow us to test any of the platforms listed with Vagrant and VirtualBox.

Apple’s EULA has implications for Mac OS X image availability. While there are some images available on the internet, organizations (and individuals) have to determine how to meet Apple’s legal requirements. Within Chef, we use Atlas to store private images for employees’ use.

To test Mac OS X, I created a new file .kitchen.vmware.yml with the following configuration:

driver:
  name: vagrant
  provider: vmware_fusion
  customize:
    numvcpus: 2
    memsize: 2048

provisioner:
  name: chef_zero

platforms:
  - name: macosx-10.11
    driver:
      box: chef/macosx-10.11 # private

Once I do a vagrant login on the command line, I can download the image. I create a symlink from .kitchen.local.yml to .kitchen.vmware.yml:

ln -s .kitchen.vmware.yml .kitchen.local.yml

Note: it’s also possible to define the environment variable KITCHEN_LOCAL_YAML rather than creating a symlink.

I can list my instances and see the Mac OS X 10.11 images.

➜  users git:(issues_118) ✗ kitchen list
Instance               Driver   Provisioner  Verifier  Transport  Last Action
default-macosx-1011    Vagrant  ChefZero     Busser    Ssh        <Not Created>
sysadmins-macosx-1011  Vagrant  ChefZero     Busser    Ssh        <Not Created>

I converged with kitchen converge default-macosx-1011 and reproduced the issue that Chris reported.

================================================================================
           Error executing action `create` on resource 'user[test_user]'


           ArgumentError
           -------------
           can't find user for test_user

Logging into the host with kitchen login default-macosx-1011, I could check whether the user was created with the dscl command.

 vagrant$ dscl . list /Users | grep test_user
test_user

After digging a little further, and some pair code review with Nathen Harvey, we discovered that the issue was with the directory resource wanting a UID rather than a username when declaring the owner on Mac OS X.
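The change amounted to something like the following in the home directory resource (a paraphrase rather than the exact diff; u again stands for the data bag item being processed):

directory u['home'] do
  owner u['uid'] # a numeric UID satisfies the resource on Mac OS X where the username did not
  group u['gid']
end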

Switching from username to UID resolved the errors, but this only tested against Mac OS X. We should do some tests against other operating systems to make sure we haven’t broken the provider.

To speed up tests, I use Docker rather than trying to spin up that many VMs with VirtualBox or VMware. I already have docker-machine installed; if you don’t, check out this getting started guide.

➜  users git:(issues_118) ✗ docker-machine start  default
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
➜  users git:(issues_118) ✗ docker-machine env default
➜  users git:(issues_118) ✗ eval "$(docker-machine env default)"

I’m going to use someara’s kitchen-dokken plugin rather than kitchen-docker. After cleaning up my previous run with kitchen destroy, I update the symlink to point to .kitchen.dokken.yml. Now when I issue a kitchen list:

➜  users git:(issues_118) ✗ kitchen list
Instance               Driver  Provisioner  Verifier  Transport  Last Action
default-centos-6       Dokken  Dokken       Busser    Dokken     <Not Created>
default-centos-7       Dokken  Dokken       Busser    Dokken     <Not Created>
default-fedora-21      Dokken  Dokken       Busser    Dokken     <Not Created>
default-debian-7       Dokken  Dokken       Busser    Dokken     <Not Created>
default-ubuntu-1204    Dokken  Dokken       Busser    Dokken     <Not Created>
default-ubuntu-1404    Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-centos-6     Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-centos-7     Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-fedora-21    Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-debian-7     Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-ubuntu-1204  Dokken  Dokken       Busser    Dokken     <Not Created>
sysadmins-ubuntu-1404  Dokken  Dokken       Busser    Dokken     <Not Created>

A successful kitchen create and kitchen converge -c confirm that the changes work as expected. kitchen converge -c will run a converge against all matching instances concurrently.

Since there are no integration tests, we manually log in and check whether the home directory gets created as expected.

root@079f902cf103:/home/test_user# ls -al
total 24
drwxr-xr-x 3 test_user test_user 4096 Dec 20 07:02 .
drwxr-xr-x 3 root      root      4096 Dec 20 07:02 ..
-rw-r--r-- 1 test_user test_user  220 Apr  9  2014 .bash_logout
-rw-r--r-- 1 test_user test_user 3637 Apr  9  2014 .bashrc
-rw-r--r-- 1 test_user test_user  675 Apr  9  2014 .profile
drwx------ 2 test_user root      4096 Dec 20 07:02 .ssh

After manual verification, I committed the code and created a PR.

More Than Code

The writing process I follow with sysadvent uses some of these collaborative tips. I post my article up on GitHub in a private repository, and then invite my peer reviewers to the repository.

While collaboratively writing Effective Devops with Katherine Daniels, we used git, AsciiDoc, and O’Reilly’s Atlas, a git-backed, web-based platform for publishing books.

The lightweight formats of Markdown and AsciiDoc can be collaborative with the use of git. The limitations I find compared to more traditional writing tools are around GUI formatting within the editors I use. I regularly find myself having to make a few extra commits to the repository to check what the browser view or generated PDF looks like with included images. Overall, this is a small price to pay for the benefit of a stronger piece through collaboration. Some of these limitations may be overcome with the use of extensions available in specific editors.

Summary

Using Test Kitchen allows me to quickly change which set of tools I want to use - Docker, Vagrant with VirtualBox, or Vagrant with VMware - depending on the current need. The kitchen configuration files can be saved alongside the project, allowing anyone to quickly get going and collaborate from a consistent point.

Combined with the flexibility of Test Kitchen, Vagrant allows us to combine private and publicly available resources, so if you are working on something internal to your company while also contributing to open source, you can manage that complexity. In the above example, I’m providing my team with the knowledge of how to replicate my testing without adding initial complexity to the base .kitchen.yml. It allows me to be transparent about my process without blocking people who don’t have access to Chef’s internal images or VMware Fusion.

Additionally, local git configurations or tools like hub can simplify the collaboration process allowing us to cherry pick our commits. Talk to your team, peers in the industry, or review a project’s CONTRIBUTING.md file to discover other mechanisms that individuals use when working.

Here are a few examples of helpful git snippets that other folks shared with me via Twitter:

Kennon Kwok also shared tig, a text mode interface for Git as a useful utility.

In addition, folks mentioned Seth Vargo as being the inspiration for some common habits, and Seth has kindly shared his Git config.

Further Resources

Thank you!

Thank you H. Waldo Grunenwald, Robb Kidd, Carlos Maldonado, VM Brasseur, and Kennon Kwok for peer review and additional edits.

Thank you to the Chef Community Engineering Team, who provided answers to my questions over the last few months and inspired this article.

Thank you to Arnoud Vermeer for contributing PR 117 and Chris Gianelloni for contributing Issue 118 giving me the opportunity to add context to talking about collaboration with reported issues and pull requests. Your continued contributions to the Chef community are valued and appreciated!

Adventures in Django With OpsWorks and Chef

Recently I had the opportunity to work with the newly released OpsWorks Chef 12 for Linux. I wrote up a process to deploy Django with the application_python cookbook, walking through a sample deploy.

I learned quite a bit of Django and found some great resources in the process. The following is just a snippet of that information.

Django Terminology

Django is a free and open source web application framework written in Python.

Web application frameworks provide a set of components that are common across applications, allowing an individual to speed up development and deployment of a web application. Functionality like user authentication and authorization, forms, and file management are some examples of these common components. These frameworks exist to speed up delivery so that you don’t have to reinvent the wheel each time you want to create a site.

Within Django, an app is a web application that does something, for example a poll app. A project is a collection of apps and configurations. An app can be in multiple projects.

Django follows the MVC (Model View Controller) architectural pattern. In the MVC pattern, the model handles all the data and business logic, the view presents data to the user in the supported format and layout, and the controller receives requests (HTTP GET or POST, for example), coordinates, and calls the appropriate resources to carry them out.

When creating a web application, we generally create a set of controllers, models, and views. The reason that it uses this pattern is to provide some separation between the presentation (what the user sees) and the application logic.

In Django, the view pattern is implemented through an abstraction called a template and the controller pattern is implemented through an abstraction called a view.

Further Resources

Examining Tools With a Devops Lens

Please enjoy my Sysadvent offering for 2015.

Over the last few years I have had the opportunity to attend a variety of conferences, meeting people and hearing diverse stories about work environments and challenges. It is fascinating to hear the variety of explanations about devops and its ongoing impact on the industry and the workplace. As I worked to gain a better appreciation of others’ experiences, I realized the challenge of misinformation about devops. How do we expect individuals to understand and choose devops when we have so many competing themes?

Two big themes I have heard:

  • Devops is culture not tools.
  • Devops is automation.

In this article, I will tackle one aspect of the challenge focusing on examining tools within the industry with a devops lens.

Devops is culture not tools.

What is culture? Culture is the totality of learned, socially transmitted customs, knowledge, objects, and behaviors. It includes the ideas, values, customs, and artifacts of a group of people. Culture encompasses everything about how we live and work, and how life and work evolve to meet the challenges we have in our environments. It gives meaning to social, political, economic, aesthetic, and religious norms, and to modes of organization that distinguish us from others.

“We become what we behold, we shape our tools and then our tools shape us.”

Father John Culkin

What are tools? Tools are a vital component of our identity, how we interact with the world around us, how we use and conserve energy, and how we buffer ourselves from harm. Tools shape our thoughts and behaviors including our transparency, level of exertion over our environment, and conflict response. Tools are a cultural artifact.

Consider the introduction of automobiles and the changes to city landscapes that came about because of them; new cities have wider streets to accommodate vehicles with less care taken to provide pedestrians with safe walking. Automobiles impacted how we work and where we work. As commute times have increased in densely populated regions, the value of owning a car has decreased and the landscape will transform again.


Prior to the introduction of devops, an organization might incentivize and reward teams based on each team’s particular priorities: developers strove to build features quickly, operations stabilized systems, security minimized risk and controlled access, and so on across the organization. When the bonus for the VP of Operations is driven by uptime, system administrators will be incentivized to ensure stability and availability. When the bonus for the VP of Development is driven by meeting deadlines, developers will be incentivized to deliver features. This leads to a fundamental conflict over the highest priority between individuals trying to work together across organizational boundaries.

Organizations have created islands of culture based on their existing individual and collective values. For example, the roles and responsibilities of an operations engineer at one organization may differ significantly from those at another. Consequently, operations engineers at different companies often have substantially disparate skill sets. Over time, the roles and concerns within an organization change, leading to fractured capabilities even within a single organization.

As organizations grow larger, each team’s individual focus narrows further. Each team adopts a specific tool that meets their requirements, leading to multiple tools that overlap in purpose. This proliferation of tools both hinders transparency within the organization and stifles collaboration.

For example, bug tracking within development teams led to the development of tools like JIRA and Bugzilla. Ticket tracking within operations teams led to the development of tools like Request Tracker (RT). Bugs and tickets are both issues. While a single issue tracking tool can be selected at the organizational level, the tool can impact the cognitive load of teams unequally as a team tries to adopt a tool that isn’t specific to their domain.

The problem of tool proliferation continues today with the belief that a specific tool might solve all the problems an organization may have. When people say devops is culture, not tools, it is in part to focus on solving the organizational issues that cannot be solved with a change in technology alone.

What is Devops?

Devops is an ideology that seeks to change how individuals think about work, value the diversity of work done, develop deliberate acceleration of business value, and measure the effect of social and technical change.

In Effective Devops, Katherine Daniels and I introduced the 5 pillars of devops: Collaboration, Hiring, Affinity, Tools, and Scale.

These 5 pillars form the foundation for effective devops and the devops compact, the continuous building of a shared mutual understanding within a team, organization, and across the industry.

With the devops compact, we commit to working together, communicating our intentions and any issues, and adjusting our work in order to move towards shared organizational goals. We answer questions like: what is the value of what we are trying to do, how are we going to do it, and what exactly are we doing? We don’t just decide to do devops and then we’re done. The nature of doing devops means that we are committing to adjusting to change.

Devops is culture, which includes tools and their use. Tools are not devops. How tools are used, and the ease with which they can be used, impacts the acceptance and proliferation of specific aspects of culture. When we talk about devops tools, we mean the tools and the manner of their use, not fundamental characteristics of the tools themselves.

Devops tools stress “We” over “Me”; they allow teams and organizations to build mutual understanding to get work done. Your choice of tools is a choice in a common language. Is this language one that benefits your organization as a whole or merely a subset of specific teams? At times, due to the lack of availability of an equally balanced tool, a choice must be made that has a higher cognitive cost to one team over another. Be aware of the cost and empathetic to the teams impacted.

The devops culture that we embrace is one of collaboration across teams, across organizations, and across industries. When developing solutions, I think about my team instead of thinking about the easiest thing for me to do now. This sometimes means adjusting my expectations for the good of the organization and choosing something a little more difficult than my current working structure.

Tool Examination


Let’s examine a few tools with the devops lens. First we’ll look at version control systems and then infrastructure automation. Please note that this examination of tooling is not meant as a judgment of or advocacy for a specific tool in your environment. Without understanding your organization’s culture, no one can tell you the right tool to choose.

Version control systems record changes to a set of files over time. CollabNet founded the Subversion project in 2000 as an open source software versioning and revision control system that was architected to be compatible with the widely used Concurrent Versions System (CVS). Subversion 1.0 (svn) was released in February of 2004. Technology and habits at the time dictated svn’s use and features. Core to svn’s architecture is the concept of a centralized repository. This central repository allowed users to control who was and was not allowed to commit changes.

A year later, in 2005, Git was released. It’s also an open source version control system, with a focus on decentralized revision control, speed, data integrity, and support for distributed nonlinear workflows. This gives every developer full local control. While you can adopt a centralized workflow and establish a “central” repository, the processes can be flexible, allowing you to use the technology as you wish rather than having it defined for you. While the ramp-up time may be a little longer, the functionality allows for quicker organizational changes.

Andy Peatling shared the WordPress technical refresh in migrating from svn to git in 2015, and the desirable changed behaviors that included:

  • improved developer communication through code reviews and hangouts
  • improved development practices of reviewing code prior to committing to the repository because of the availability of the pull request with git
  • increased collaboration between designers and developers on a daily basis
  • greater feedback on individual work
  • customized workflows

The technical merits between git and svn aside, significant value is derived from increased cross-team collaboration.

In many organizations, system configuration is a manual process. Individuals document the process and upgrade with a checklist. A missed step can lead to systems in an unknown state requiring considerable effort to recover.

Chef is an infrastructure automation system. Infrastructure automation is creating systems that reduce the burden on people to manage services and increase the quality, accuracy, and precision of a service to the consumers of a service.

When Adam Jacob was developing Chef software, he was trying to create a solution that could work across different organizations. Chef was built to provide abstractions for configuration and management, creating a language that allows individuals to define their infrastructure and policies as code.

Trying to create a language that allows for the nuanced views of developers, system administrators, security operations, and quality assurance engineers is difficult. With Chef, rather than reusing terminology that shows preference to one role over another, we have new terminology including resources and recipes.

Resources make up the basic building blocks of our infrastructure. We can use resources as provided by Chef core libraries, use resources provided by the larger Chef community, or we can build and customize our own.

Recipes describe a specific piece of an application that we want to have running on a system. It’s the ordered set of resources and potentially additional procedural code. Just as with a recipe for baking chocolate chip or oatmeal cookies, the recipe will be specific to what we want to create.

In the simple webserver.rb recipe from the Learn Chef website, we use 3 resources: package, service, and file. These resources describe everything needed to stand up an Apache web server, have it run at boot time, and have it serve a simple “hello world” page.
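That recipe looks approximately like the following (paraphrased from the Learn Chef tutorial, so treat the exact strings as illustrative):

# webserver.rb
package 'httpd'

service 'httpd' do
  action [:enable, :start] # start Apache now and at boot time
end

file '/var/www/html/index.html' do
  content '<html><body><h1>hello world</h1></body></html>'
end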

More complex tasks like opening specific ports in a firewall or deploying Hadoop can be abstracted into a common language that allows for comprehension across teams. Because this code is stored in your standard version control, you can take advantage of the same ease of use and transparency that the rest of your source code is afforded.

Devops is not limited to infrastructure. A lot of focus is on tools that improve collaboration between development and operations teams as those teams are where the devops movement originated. As other teams begin adopting devops methodology, we’ll see tools that support the devops compact between new teams. For example, Chef has released Chef Delivery this year. Chef Delivery simplifies organizational complexity by creating a common language to describe the process of promoting products between teams. This common language allows us to codify “delivery as code”.

Devops is Automation

When people say devops is automation, it is in part because many tools in devops, while codifying understanding to bridge the chasm between teams and increase velocity, have resulted in automation of repetitive tasks. Automation is a result of improved technology focused on repeatable tasks.

While reviewing the aviation industry in 1977, the House Committee on Science and Technology identified cockpit automation as a leading safety concern. Studies of the aviation field discovered that while pilots could still fly planes, the automation in use was causing an atrophy to critical thinking skills. Pilots were losing the ability to track position without the use of a map display, decide what should be done next, or recognize instrument system failures. This is a warning for us as we implement automation within our environments. Tools change our behavior and the way that we think.

In July 2013, Asiana Airlines flight 214 struck a seawall at San Francisco International Airport, resulting in three fatalities. During the investigation, the National Transportation Safety Board (NTSB) identified a number of issues, among them that there was insufficient monitoring of airspeed due to an overreliance on automated systems that they didn’t fully understand.

“In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid.” James Reason, Managing the Risks of Organizational Accidents

Automation is critical to us as systems become more complex and organizations become interdependent due to shared services. Without shared mutual context or concern for human needs, automation creates unknown additional risk.

Devops enabled automation may make work faster, but more importantly it has increased transparency, collaboration, and understanding.

In summary, devops is culture which includes tools and their use. Tools help us progress from manual to continuous processes. Effective devops tools enable automation without isolating humans from the automation process so that the mutual understanding occurs across time, ensuring that the “we” of tomorrow understands the “we” of today.

What Next?

Find a local DevOpsDays, CoffeeOps, or other cross-functional meetup that facilitates people coming together to build strong foundations of common understanding.

The devops community calendar also lists upcoming conferences and events.

Study

Watch

Listen

Read

Share

I’d love to hear your feedback! What tools are you using in your environment and how do they impact organizational culture?

Tweet at @sigje with hashtag #sysadvent, email me at sparklydevops@chef.io, or add a comment to the article.

Thank yous

Many thanks to Greg Poirier for being my Sysadvent editor. Additional thanks to Nathen Harvey, Peter Nealon, Amye Scavarda, and H. Waldo Grunenwald for peer review and additional edits.

DevOpsDays Silicon Valley Welcome

On November 6, 2015, I opened DevOpsDays Silicon Valley 2015. My goal in sharing this is to provide help in organizing conferences for future and current organizers. I also hope that it encourages others to share their processes.

Opening a conference requires a careful balance of a few things including the purpose of the conference, thank yous to the sponsors who make the event possible, setting the values and expectations of the event to align behaviors, and conveying important information like wireless connectivity and agenda.

Thanks to the captioning provided by White Coat Captioning, I have (an edited) version of the speech I gave.


What is DevOps? Well, it started out a few years ago, in 2009, when John Allspaw and Paul Hammond gave a talk at Velocity Conf about developers and operations. Meanwhile, Andrew Clay Shafer and Patrick Debois were trying to come up with a way for people to do agile system administration. As Andrew sat in the audience, he realized: of course, DevOps. Developers and operations staff working together. Now, DevOps has become so much more. We know every part of the business matters, not just in our organization but across organizations in the industry. We are changing how we work, why we work, and what we’re working on together.

DevOps is about the we, not the me. I can solve a problem very fast, and it’s good enough for me. But my organization has a single point of failure or a single point of knowledge. Later on, we will have talks from people talking about the challenges of being that single point of failure.

What are DevOpsDays? Well, they’re held all around the world, every year there’s been more. This is the Silicon Valley one. This is your and my DevOpsDays. We all come together as participants in this movement to create a space where everyone can share what they’re working on, how they’re working on it, and the problems they face. It has been shown through many studies that the diversity of voices and experience help us to make stronger products, make stronger, stable solutions.

That’s why we want to talk about bridging DevOps cultures because each one of us works at a company that has a slightly different culture, the values that we talk about, what we care about, we are bridging here at DevOpsDays, connecting our islands together so that we can have strong, stable foundations to build the products that are going to get us on those spaceships out to Mars; right? I don’t want to be in a spaceship where people are arguing who’s the person responsible for the oxygen supply? Everyone’s dead if we don’t actually figure that out.

So the amazing thing about the Bay Area is we talk about the Bay Area, and we talk about Silicon Valley, but we’re all separate little cultures. We’ve got great, rich culture in Oakland. We’ve got little coffee shops, all kinds of music, great innovation labs. We’ve got culture in San Francisco. We’ve got culture here in South Bay. One of the challenges of creating a DevOpsDay for this area is the commute is pretty bad; right? There’s a lot of people here.

We have people talking, we’re going to try to figure out a way so we can have more voices coming in.

Why are we here? Well, I think of this as balance. We want to create that strong, stable structure at the bottom, have that flexibility in the middle, and have the agility at the top. From the individual down to the organizational and industry levels, this is what we need to do in order to be successful.

It’s a journey, it’s not just a destination. We want to make sure that people are able to get where they need to get to and not just think “Oh, there’s where we’re at. We got there, we’re done” DevOps is a constant journey.

There’s going to be a lot of friendly faces, you’ll see people saying hello, if this is your first time, raise your hand. Welcome to DevOpsDays. The rule of three is you see people, you have permission to talk to them. I tell you go up, say hello, tell us your name.

Now, if you’re an introvert, it’s cool. Just walk up and stand there. It’s cool. We’re cool with that. Except in the chillax zone, it’s a special place for us to recharge, no talking on your cell phone, no talking to each other. If you see someone in the chillax zone, leave them alone, you can smile but that’s the recharge zone.

The goals for the next two days are to listen to the stories we all have to tell, share the stories you have from your experiences because you have different points of view, and learn from each other.

Respect the different perspectives. We’re all coming from different places. Some of us might see this as an old woman, some might see it as a young lady; right? We all have value to bring. It doesn’t matter if you’re a developer, an operations engineer, a data scientist, a product manager, a marketer, or a salesperson, we all have value to bring to improve our businesses.

Towards that, we also want to make this as inclusive a space as we can. We have an anti-harassment policy. We are dedicated to providing a harassment-free space to come together. Everyone, by attending this, is agreeing to our code of conduct, which you can find here.

Can all my speakers stand up or wave? We’ve got some awesome speakers. They come from all walks. We’ve got a very diverse line up. This is not just a developers and operations staff conference.

We’ll have games crafting here in the grand hall as well as in the rooms. So what is this gaming at a conference about? Isn’t that for the nerds? No. Well, yes, but it’s okay.

It’s also for all of us. It is a great way to build teams. To create the collaboration and cooperation we want to see. It teaches the separation of our personal identity from the role we play.

DevOpsDays Silicon Valley 2015 Speaker Selection Process

Earlier in the year, I mentioned wanting to write up a blog post on my talk selection process. This isn’t that blog post. Instead I’m going to share a behind the scenes for the DevOpsDays Silicon Valley 2015 speaker selection process. The process for creating a viable program is hard.

Viable is the right word for so many reasons when it comes to a DevOpsDays event.

Every DevOpsDay event is locally organized and managed by passionate individuals who care about the sociotechnical concerns in the workplace. Organizers strive to create an environment that engages everyone to be part of the conference. Carolyn Van Slynck described her experience of DevOpsDays Chicago 2015 and the value of the emphasis on this participation. DevOpsDays provide an environment that allows individuals to create with others a learning, inspiring, problem-solving safe space.

Each and every one of us comes from different backgrounds with different day to day experiences leading to a multiplicity of devops. Sharing our experiences in this open space format allows for cross-pollination across organizations strengthening our industry as a whole.

Speakers, whether giving 30 minute sessions or 5 minute ignites, seed the conference participants with ideas. After lunch, we come together as a group to plan out the rest of the day with the ideas that:

  • Whoever comes to a session are the right people,
  • Whatever happens is the only thing that could have,
  • Whenever it starts is the right time,
  • When a session is over, it’s over.

Every participant of DevOpsDays is free to leave an Open Space when they are no longer engaged, or to continue discussing a topic if it’s not over (even if the schedule says it’s over; just be respectful to the group interested in the next topic and move the discussion if necessary). Participation is about engagement and speaking up when it feels right for the individual.

So when it comes to speaker selection, we are looking for charismatic, diverse speakers AND topics that will encourage discussions. How do we do this and not let our unconscious (or conscious) biases get in the way? How do we uncover those new voices that will bring topics that we don’t even know we need to consider, or bring new perspective to problems that we think are solved? I’m not going to go into ensuring you have enough proposals here; instead I’m going to focus on the selection process for the available proposals.

The TL;DR; is:

  • Anonymize proposals.
  • Identify proposal review process.
  • Rate proposals.
  • Rank proposals based on rating, unanonymize and take the top 30 talks.
  • Discuss, re-rank as necessary and plan program.

After proposals were emailed to the list, Peter Mooshammer, one of the Silicon Valley DevOpsDays organizers, anonymized proposals for the website as well as for Judy, the tool we chose to use for our first round of ranking talks. Anonymizing is hard work because it was a very manual process of removing company and personally identifying information. Next year we need to come up with a better way to do this. Thanks Peter for taking the time and effort to anonymize!

Thanks to Jason Dixon for sharing Judy and making it easy to set up. It allows organizers to collect abstracts in one place, read abstracts in a common format, rate talks, and analyze ratings.

I wrote up a proposal reviewing process. It’s an important step to have reviewers start with a common language and process of understanding how to rate proposals. Here is a modified version of our proposal review process. You’re welcome to take it as is, or extend.

Once the more than 100 proposals were input into Judy, and the CFP closed, all reviewers were encouraged to rate proposals.

A week after the CFP closed, we met to plan the program. We did a scan for any anonymized proposals that hadn’t received as many reviews, based on the convenient mode sorting in Judy, and verified whether any of them should be included. This was an important step, as we did find a few proposals that were ranked higher once we came together as a group. We took the top 30 talks based on ranking and unanonymized them.

Overall the general anonymized ranking process worked to generate a diverse set of speakers. In the next few days we’ll be announcing the program. I’m pretty excited to share and get your feedback! I’d love to hear some ideas on measuring “viability”, metrics to collect during and after the conference to help improve future selection processes.

There are definitely some problems around how we selected proposals. For now, I’ll share 3 of the problems uncovered.

One problem is that it biases us towards well-written proposals. One mechanism that helps with this problem is the Agile Conf process of opening up CFPs and providing coaching as part of the process. Speakers can choose whether they want feedback on their proposal before submitting it. A direct submission portal for speakers into Judy that allowed staging proposals before committing to them would help with this.

A second problem is that we didn’t allocate enough time to review proposals and allow for additional input on our individual rankings. Even with adjusting the CFP (separate issues with this to come in a later blog post), we didn’t have enough time to meet as a group multiple times. I saw how, throughout the meeting, we discussed proposals and this changed how we perceived some of them. One method of solving this problem is to pair on proposal review. Different perspectives helped us recognize value.

A third problem was managing multiple systems of entry. Peter and I coordinated to make sure we included all proposals; even so, a proposal ended up being missed. We caught it before the selection period, but this took time and added stress. It wasn’t easy to un-anonymize submissions, notify speakers of acceptance, receive acceptances from speakers, notify individuals who had not been accepted, coordinate with the website for the program listing, or provide a way for speakers to easily update information if needed. This is a problem that could definitely use some improved automation.

I hope this peek behind the curtain has been helpful. Further blog posts to come on observations and improvements on conference organization.

Thanks Peter Nealon for being my beta reader!

Idea Generation Summary

Today we had a CoffeeOps DevOpsDays Silicon Valley Idea Generation event, and I had a ton of fun. I led an Open Space at Chef Summit last year which included an idea generation exercise and I followed that format.

We started off in normal SV CoffeeOps style chatting around topics. We then took a look at the current proposals, reading off the titles to prime the creativity pump. I had everyone take 5 minutes to just think about their feels and any ideas. Everyone wanted to start writing so I pushed a little on that. We ended up only “thinking” for 3 minutes.

Then, for 5 minutes, we used postits to write down ideas, one idea per postit. If you want to host an event like this yourself, figure out what works for your group. The most important things are that everyone gets primed through some mechanism, the process is explained with supporting tools as needed (whiteboard, postits, etc.), and everyone gets the opportunity to share.

Because I know the folks who showed up, I forgot to explicitly say, “This is a safe space with no stupid ideas.” Sometimes I go so far as to say the word “stupid” is banned. This is really important to level set expectations around communicating within a group setting, especially with folks who may not know each other well.

Since we were a small group, I changed up the format from Chef Summit, and we each shared an idea and then discussed the ideas for that round. Time flew by so quickly that we didn’t get a chance to brainstorm through all of the ideas.

I did take a few minutes at the end to share Julie Gunderson’s Embracing My Sparkle Ignite. We had gotten in a discussion about what can be shared in 5 minutes. My feeling is that an effective ignite means that you distill down your content to the most essential. You might not say everything you feel on a topic, but you say enough to start conversations.

With the permission of my fellow attendees, I’m sharing some of the talk ideas that we came up with. If you are interested in adding your own, taking one of these and expanding on it, please do.

If you are taking one of the ideas, please update the spreadsheet with a comment in the second column so folks know that you are planning on using that topic. If someone is doing an idea that you are interested in, just update the column with your name too. You might even consider reaching out and working together on the topic. Everyone has a different perspective, so it’s entirely possible that you’ll approach the same idea in completely different, unique ways!

Note, that these are potential topics, not necessarily titles.

As a location, Crema Coffee Roasting Company has a great space, delicious beverages, gelato and friendly staff. I totally encourage folks in the South Bay to consider it as an option to do these kind of small group events.

About the Chef Summit

By the way, if you have the opportunity you should totally go to Chef Community Summit. It’s much smaller than Chef Conference, allowing for deep dives on subjects from training to community building, and from developing for a specific product to learning to contribute as a new person in a community. There is one in Seattle and one in London this year.

While the Chef Summit does give people the opportunity to connect with Chef engineers, more importantly it gives people the opportunity to connect with one another. A key pillar of Effective Devops is affinity: the strength of relationships between individuals, teams, organizations, and even companies. When describing work, it is generally broken down into four categories: reactionary, planning, procedural, and problem solving. I think there is a fifth category that is often overlooked: relationship work.

Relationship work is a catalyst that facilitates all the other kinds of work: it shortens the time to get other work done, reduces communication barriers, and builds trust based on regard. Summit is an opportunity for people to build and strengthen relationships between organizations, allowing companies to cross-pollinate ideas and technology and to work on building affinity at a larger scale than a CoffeeOps event.

Affinity is hard to measure. Even harder is measuring and exposing the value of relationship work. I see the outcomes in the community as companies host their own internal knowledge sharing events, encouraging inter-company cooperation built on stronger affinity. I’d love the opportunity to talk about this with others in upcoming open spaces!

Thank you, Linda and Jonathan for a great evening of shared idea generation and discussion. I look forward to your awesome proposals!

DevOpsDays SV Idea Generation Event

tl;dr CoffeeOps - DevOpsDays Idea Generation
July 28, 2015 7:00 PM to 9:00 PM at Crema Coffee Roasting Company
950 The Alameda, San Jose, California

As one of the DevOpsDays Silicon Valley (DODSV) organizers, I care about creating a diverse and inclusive event. As a whole, DevOpsDays organizers value the participation of each member of the DevOps community and want all attendees to have an enjoyable and fulfilling experience. DODSV has a Code of Conduct as used by previous DevOpsDays.

The DODSV CFP is open and we want to hear from you! Your views and experiences in enterprise or startup, security, database design and administration, development, operations, community management, and more have value in improving the ecosystem and informing others of how to build strong, resilient teams in a sustainable manner while getting work done. Bridget Kromhout, DevOpsDays core and DOD Minneapolis organizer, wrote about this in greater depth in The First Rule of DevOps Club. The CFP for DevOpsDays Silicon Valley is currently open until September 30.

Taking off my “DevOpsDays SV” organizer hat and putting on my “Community Builder” hat for a moment, I’ve organized a special edition CoffeeOps event for DevOpsDays idea generation at Crema Coffee Roasting Company in San Jose on July 28, 2015 at 7pm. The goal is to encourage folks who have thought about speaking but are not sure about their ideas, and to provide a space for folks who aren’t interested in speaking but want to participate in the process of building a great, diverse, and inclusive local conference.

This event is open to anyone who is interested in brainstorming potential talk ideas, providing feedback about talk ideas, and connecting individuals who might want to work on ideas together.

More information about DevOpsDays Silicon Valley

DODSV 2015 Dates

  • Call for Proposal closes September 30, 2015
  • Schedule announced October 7, 2015
  • DevOpsDays Silicon Valley November 6-7, 2015

Conference Details

DevOpsDays Silicon Valley is returning to the Computer History Museum in Mountain View.

We are expecting approximately 510 attendees.

Speaking Details

Most talks will be 30 minutes; auto-advancing Ignites are 5 minutes.

Looking for ideas?

Can’t make the meeting and looking for ideas? Take a look at our previous DevOpsDays events in 2013 and 2014.

We are looking for fresh, current, and unique talks on technology, culture, and community that target a wide range of skill levels, from beginner to expert.

Speaker Benefits

All accepted speakers will receive a ticket to DevOpsDays Silicon Valley, including all meals.

Thank you Dave Dash, Jeremy Price, Jamesha Fisher for proof-reading this post! Thanks to my local CoffeeOps crew for giving me the idea to have this event.

Working With Gists Within Sublime

Sublime is a pretty sweet editor in combination with the Sublime package manager. Today, I learned about the Gist package add-on. Gists are a way to share work. Every gist is a git repository, which means it can be forked and cloned in the same ways. Gists can be public or secret. Public gists are searchable; secret gists are not, but they are accessible by anyone with the URL. Most of the time, I use gists for training classes to assign IPs, as well as for snippets of code.
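Because every gist is backed by git, you can also work with one from the command line. As a minimal sketch, using a made-up gist ID for illustration:

$ git clone https://gist.github.com/aaaa1111bbbb2222cccc3333dddd4444.git my-gist
$ cd my-gist
$ git log

Swap in the ID from the gist’s URL; pushing changes back works like any other git remote, provided the gist is yours.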

After installing the package, generate a new personal access token on GitHub so the package can interact with GitHub’s gist repository. You can also configure the settings to use GitHub Enterprise.
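The token itself lives in the package’s user settings. As a rough sketch of what that file can look like (the key names below, token, enterprise, and url, are my assumptions about this package and may vary by version, so check the package’s README for the exact keys):

{
    // personal access token with the "gist" scope (assumed key name)
    "token": "YOUR_GITHUB_TOKEN",

    // only needed when pointing at a GitHub Enterprise instance (assumed key names)
    "enterprise": false,
    "url": "https://github.example.com/api/v3"
}

Before wiring it into Sublime, you can sanity check the token against the GitHub API with curl -H "Authorization: token YOUR_GITHUB_TOKEN" https://api.github.com/gists, which should return your gists.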

All of the commands to interact with Gist require two key combinations by default. With the Gist package add-on on OS X, Command + K is the first key combination; the second depends on the desired action. Command + K followed by Command + O opens a list of available gists. Double-clicking one opens a new window in Sublime with its content.

Saving the gist is more than just saving the file in Sublime; it requires the key combo Command + K, Command + S.

Diversity in Lens - Optimism

Earlier today I was sharing my feelings about optimism and it being one lens to look through to view the world. Each of us makes a choice about “reality”. We process a moment in time through our experiences and our current state of mind, cataloging it for future interactions. Shared stories from others help us to further entrench ourselves in our beliefs and values. Even in listening to a story, we are interpreting it based on our own assumptions about reality and what the words mean. We assign context based on our own beliefs, often assigning values to someone’s character based on their words and behaviors rather than viewing them through the subjective context that individual is experiencing.

The lens that we view the world through can be one of optimism or pessimism; neither is inherently bad or good. Just as it doesn’t matter whether we are in a dev or ops role as long as we understand the impact on business workflow, it doesn’t matter what lens we are using as long as we understand its impact on our overall processing. The lens in place for a particular moment impacts our perceptions and affects the decisions that we make. When I’m viewing the world through the optimist lens, I’m driven by enthusiasm and energy, seeing the opportunities and possibilities around me. When I’m viewing the world through the pessimist lens, I’m preparing myself for negative outcomes, conserving energy until needed.

In 2008, I wrote the following for myself. I’m sharing it because I think it’s important that we start talking about, and actively choosing, the way we see the world. Sharing our personal context helps others to understand our behaviors, perhaps even seeing themselves in a different light.

“Focus on the best. The brilliance. The love. The hope. The best of what your life is. When someone asks you how your day was, or what you’ve been up to.. reply with the very best of what is. Don’t think about the frustrating bits.”

That’s the advice I’ve been giving myself. On these last few days of taking time off of work.. and really spending time on self, I’ve thought about where I want to be and what I want to do.

So there is this possibility of getting laid off. The economy sucks. There is so much gloom, doom, sadness.. I could wallow in the depths of it. I could hurt with the internal drama of a variety of situations.

Not everyone is going to like you. Not everyone is going to agree with you. It doesn’t matter. Believe in yourself, and believe in others. Believe the best, cherish your friends and loved ones, and do whatever it is that you feel driven to do.

I feel happy. Things are good. I enjoy my job. I have a variety of friends and interests and life is rich.

These words still guide me. With time and experience I’ve realized that it’s important to share my experiences and authentic feelings with people. Letting the toxicity of experiences build up without an outlet was unhealthy, impacting my health in a myriad of ways. I still choose the optimist lens; believing in me and believing in others. I am happy. Things are good. I enjoy my job. I have a variety of friends and interests and life is rich.