
DevOpsDays Silicon Valley 2015 Speaker Selection Process

Earlier in the year, I mentioned wanting to write up a blog post on my talk selection process. This isn’t that blog post. Instead, I’m going to share a behind-the-scenes look at the DevOpsDays Silicon Valley 2015 speaker selection process. The process of creating a viable program is hard.

Viable is the right word for so many reasons when it comes to a DevOpsDays event.

Every DevOpsDays event is locally organized and managed by passionate individuals who care about sociotechnical concerns in the workplace. Organizers strive to create an environment that engages everyone to be part of the conference. Carolyn Van Slyck described her experience of DevOpsDays Chicago 2015 and the value of this emphasis on participation. DevOpsDays provides an environment where individuals can create, together with others, a safe space for learning, inspiration, and problem-solving.

Each and every one of us comes from a different background, with different day-to-day experiences, leading to a multiplicity of devops. Sharing our experiences in this open space format allows for cross-pollination across organizations, strengthening our industry as a whole.

Speakers, whether giving 30-minute sessions or 5-minute ignites, seed the conference participants with ideas. After lunch, we come together as a group to plan out the rest of the day, guided by the ideas that:

  • Whoever comes to a session are the right people,
  • Whatever happens is the only thing that could have,
  • Whenever it starts is the right time,
  • When a session is over, it’s over.

Every participant of DevOpsDays is free to leave an Open Space when they are no longer engaged, or to continue discussing a topic if it’s not over (even if the schedule says it’s over; just be respectful of the group interested in the next topic and move the discussion if necessary). Participation is about engagement and speaking up when it feels right for the individual.

So when it comes to speaker selection, we are looking for charismatic, diverse speakers AND topics that will encourage discussion. How do we do this without letting our unconscious (or conscious) biases get in the way? How do we uncover the new voices that will bring topics we don’t even know we need to consider, or bring new perspectives to problems we think are solved? I’m not going to go into ensuring you have enough proposals here; instead, I’m going to focus on the selection process for the available proposals.

The TL;DR is:

  • Anonymize proposals.
  • Identify proposal review process.
  • Rate proposals.
  • Rank proposals based on rating, unanonymize and take the top 30 talks.
  • Discuss, re-rank as necessary and plan program.

After proposals were emailed to the list, Peter Mooshammer, one of the Silicon Valley DevOpsDays organizers, anonymized them for the website as well as for Judy, the tool we chose to use for our first round of ranking talks. Anonymizing was hard work because it was a very manual process of removing company names and personally identifying information. Next year we need to come up with a better way to do this. Thanks, Peter, for taking the time and effort to anonymize!
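
As a starting point for automating this next year, here is a minimal sketch of what a redaction pass could look like. It assumes a hand-maintained list of identifying terms (speaker names, companies) pulled from the submission form; this is my own illustration, not a tool we actually used:

```python
import re

# Hypothetical list of identifying terms collected from the submission
# form (speaker names, companies). Not an actual DevOpsDays tool.
IDENTIFYING_TERMS = ["Jane Doe", "ExampleCorp"]

def anonymize(text, terms=IDENTIFYING_TERMS):
    """Replace each identifying term with a neutral placeholder."""
    for term in terms:
        # Word boundaries plus case-insensitivity catch "examplecorp" too.
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        text = pattern.sub("[REDACTED]", text)
    return text

print(anonymize("Jane Doe will share how ExampleCorp adopted devops."))
# [REDACTED] will share how [REDACTED] adopted devops.
```

Even with a script like this, a human pass would still be needed to catch indirect identifiers (“the search company in Mountain View”), but it would cut down the manual work.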

Thanks to Jason Dixon for sharing Judy and making it easy to set up. Judy allows organizers to collect abstracts in one place, read abstracts in a common format, rate talks, and analyze ratings.

I wrote up a proposal reviewing process. It’s an important step to have reviewers start with a common language and a shared understanding of how to rate proposals. Here is a modified version of our proposal review process. You’re welcome to take it as is, or extend it.

Once the more than 100 proposals were entered into Judy and the CFP closed, all reviewers were encouraged to rate proposals.

A week after the CFP closed, we met to plan the program. Using Judy’s convenient sorting, we scanned for anonymized proposals that hadn’t received as many reviews as the rest, and verified whether any of them should be included. This was an important step, as we found a few proposals that ranked higher once we came together as a group. We took the top 30 talks based on ranking and unanonymized them.
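
To make the mechanics concrete, here is a minimal sketch of that rank-and-flag step with made-up data (an illustration, not Judy’s actual code): average each proposal’s ratings, flag under-reviewed proposals for a second look, and take the top N.

```python
from statistics import mean, median

# Hypothetical data: proposal id -> list of reviewer ratings (1-5).
ratings = {
    "talk-a": [5, 4, 5],
    "talk-b": [3, 4],
    "talk-c": [5],  # only one review: deserves a second look
}

TOP_N = 2  # we took 30 for the real program

# Flag proposals with notably fewer reviews than is typical.
typical_count = median(len(r) for r in ratings.values())
under_reviewed = [pid for pid, r in ratings.items() if len(r) < typical_count]

# Rank by average rating, then take the top N for unanonymizing.
ranked = sorted(ratings, key=lambda pid: mean(ratings[pid]), reverse=True)
top_talks = ranked[:TOP_N]

print("Needs more reviews:", under_reviewed)
print("Top picks:", top_talks)
```

The second-look step matters: a proposal with a single glowing review shouldn’t make or miss the cut on that review alone.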

Overall, the anonymized ranking process worked to generate a diverse set of speakers. In the next few days we’ll be announcing the program. I’m pretty excited to share it and get your feedback! I’d also love to hear ideas on measuring “viability”: metrics to collect during and after the conference to help improve future selection processes.

There are definitely some problems with how we selected proposals. For now, I’ll share three of the problems we uncovered.

One problem is that this process biases us towards well-written proposals. One mechanism that helps with this is the Agile Conf’s practice of opening up the CFP and providing coaching as part of the process: speakers can choose whether they want feedback on their proposal before submitting it. A direct submission portal for speakers into Judy that allowed staging proposals before committing to them would help with this.

A second problem is that we didn’t allocate enough time to review proposals and allow for additional input on our individual rankings. Even with adjusting the CFP (separate issues with this to come in a later blog post), we didn’t have enough time to meet as a group multiple times. Throughout the meeting, I saw how discussing proposals changed how we perceived some of them. One method of solving this problem is to pair on proposal review; different perspectives helped us recognize value.

A third problem was managing multiple systems of entry. Peter and I coordinated to make sure we included all proposals; even so, one proposal ended up being missed. We caught it before the selection period, but this took time and added stress. It wasn’t easy to unanonymize submissions, notify speakers of acceptance, receive confirmations from speakers, notify individuals who had not been accepted, coordinate the program listing on the website, or provide a way for speakers to easily update their information if needed. This is a problem that could definitely use some improved automation.

I hope this peek behind the curtain has been helpful. Further blog posts to come on observations and improvements on conference organization.

Thanks Peter Nealon for being my beta reader!