Red Hat Agile Day 2016

Recently had the opportunity to attend Red Hat Agile Day. Great conference. Thanks to all who worked to put it on. My notes from the talks I attended are below.

I also had the opportunity to speak at the conference. One of the scheduled speakers was unfortunately ill and unable to present, and I was asked if I could step in. Luckily, I’d given this presentation at a local meetup the week before (coincidentally, because the scheduled speaker was unable to make it), so I was ready to give it.

Normally the presentation runs about 40 minutes and I had an hour. I decided to make it a bit more interactive and include more dialog. We had some great conversation. Every time I give the presentation and ask, “what resonates with you?” or “what did I get right or wrong about this?” I’m always surprised at the great, thoughtful responses.

Thank you to everyone who attended and participated in the talk (even if I wasn’t the speaker you thought you were going to hear).


Jim Whitehurst – CEO Red Hat

Self driving cars as innovation that crosses corporate boundaries. Companies are starting to realize that there are some problems they won’t be able to solve alone.

Open source AI research is another good example. The winner will be the platform that has the most people engaged and helping.

Innovation, old vs. new: Plan & Execute vs. Try & Modify

He acknowledges that the Try/Learn/Modify approach doesn’t mesh well with traditional budgeting/funding where you have to project out a year in advance.

On openness in academia – Says there are challenges because the rewards (tenure) are structured around individual achievement.

Larry Maccherone – What? So What? Now What? – how to use metrics in an agile world

Larry provided a reminder that CA/Rally had measured which practices have the most impact on the success of scrum teams. The results can be accessed here.

People are “first fit” pattern matchers, not “best fit”

Recommended a book “How to Measure Anything” by Douglas Hubbard

Equivalent Bet Calibration – suggests that if you frame the likelihood of something happening in terms of money it can force people out of “gut” thinking because it engages a different part of their brain.

For example, if someone says they are 70% confident in something, you can frame it to them as, “Excellent, that means if you’re right, I’ll give you $3, but if you’re wrong you owe me $7.” Same odds, but it feels different when we talk about winning/losing money.
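A quick way to sanity-check the arithmetic: for any stated confidence there is a fair (zero-expected-value) bet, and the $3/$7 split above falls straight out of 70%. A minimal sketch (the function name and the $10 total stake are my own, not from the talk):

```python
def equivalent_bet(confidence, stake=10.0):
    """Split a total stake into win/lose amounts for a fair bet.

    At confidence p, a zero-expected-value bet pays (1 - p) * stake
    if you're right and costs p * stake if you're wrong:
    EV = p * win - (1 - p) * lose = 0.
    """
    win = (1 - confidence) * stake
    lose = confidence * stake
    return win, lose

# 70% confidence on a $10 stake: win $3 if right, owe $7 if wrong
win, lose = equivalent_bet(0.70)
print(f"win ${win:.0f} if right, owe ${lose:.0f} if wrong")
```

The point of the exercise isn’t the dollar amounts; it’s that people who balk at taking their own bet usually aren’t really 70% confident.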

Good use of metrics does these things:

  1. Answers the question, “compared to what?”

    • Trends, benchmarks
    • He likes % comparisons because they don’t share sensitive data
    • Said that when he speaks at conferences and is told he got a 4.2 out of 5 on the exit surveys, that isn’t helpful unless he knows how that compares to other speakers at the same conference
  2. Shows causality or is at least informed by it

    • Always show the causal dimension on the x axis
    • “If you decide this way, that will happen”
  3. Tells a story with whatever it takes
  4. Is credible
    • Earn it
    • Show your calculations
  5. Has business value
    • ODIM
      • Outcomes
      • Decisions
      • Insights
      • Measures
    • But you work through it in the other order: measures lead to insights, which lead to decisions, which lead to outcomes
    • However, the wrong measures lead to the wrong outcomes
      • Ex: in basketball, players are measured more on high scoring than on wins. Said that someone analyzed Carmelo Anthony and found that he is a high scorer, but his team wins more when he’s out sick.
  6. Shows differences easily
    • Circles are bad for quantitative comparison
    • Pie charts are bad for comparison
    • Bar charts are great for visual comparison (easier for us to estimate the area of a rectangle than a circle or wedge)
  7. Allows you to see the forest for the trees
  8. Informs along multiple dimensions
  9. Leave the numbers in if possible
  10. Leave out the glitter (and 3d and skeuomorphs)
  11. Use good visual grammar – has another presentation that he’ll share if you ask

“Every decision is a forecast” – you are predicting future outcomes

In games, a single move won’t usually win the game. Yet a single move can lose the game. The best players play the odds.

Lumenize is his framework for doing visualizations. You’ll have to be a programmer to use it. The demo he uses in the presentation is here (page down to move forward).

His strong recommendation is to let metrics drive the conversation. Rather than answering the question, “when will you ship it?”, present the distribution from the Monte Carlo simulator and say, “if you want 100% confidence, the date is here. If you are ok with 60% confidence, the date is here.” And let that be the basis of the conversation.
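The talk didn’t include code, but the idea can be sketched with a simple Monte Carlo loop over historical weekly throughput. Everything below (function name, sample data, confidence levels) is my own illustrative assumption, not Lumenize:

```python
import random

def forecast_ship_weeks(remaining_items, weekly_throughput_samples,
                        trials=10_000, seed=42):
    """Monte Carlo forecast: how many weeks to finish remaining_items,
    drawing each simulated week's throughput from historical samples.
    Returns the weeks needed at several confidence levels."""
    rng = random.Random(seed)
    weeks_needed = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            done += rng.choice(weekly_throughput_samples)
            weeks += 1
        weeks_needed.append(weeks)
    weeks_needed.sort()
    # "60% confidence" = the week by which 60% of simulated runs finished
    return {p: weeks_needed[int(p / 100 * (trials - 1))]
            for p in (60, 85, 100)}

# 30 items remaining; historical weekly throughput ranged from 2 to 6
print(forecast_ship_weeks(30, [2, 3, 3, 4, 5, 6]))
```

Presenting the 60%/85%/100% dates side by side, instead of a single promise, is what turns the forecast into a conversation about risk.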

I found this after the talk. It might be more accessible for non-programmers.

Todd Olson – The Data-Driven Product Owner

Is shipping the goal? Or is just shipping the goal? Or is it something bigger? Focus on outcomes, not activities. PO as servant leader (some resources I found on the core of servant leadership).

Ways POs can help if their teams are struggling:

  • Prioritize debt
  • Allow the team to “stop the line”
  • Break things into tiny chunks
  • Pick stories in known areas of code
  • Be disciplined in story creation
  • Talk with the customer

Measuring Outcomes

  • Throughput
  • % Throughput on defects
  • Regressions
  • Test pass/fail coverage
  • Say/Do

Support

  • Focus on the top types of items that come in as support
  • How to improve
  • Support is a treasure trove of ideas
  • Look at cycle time by type of issues
  • Try doing support directly
  • POs can help train support

How to reduce customer/user churn?

Net Promoter Score

  • NPS = % promoters − % detractors
  • Need to understand why passives are passive
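The formula above is simple enough to sketch in a few lines. This assumes the standard NPS convention (a 0–10 survey scale with promoters at 9–10, passives at 7–8, and detractors at 0–6), which the talk didn’t spell out:

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but in neither group."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 4 passives, 2 detractors out of 10 responses -> NPS = 20
print(nps([10, 9, 9, 10, 8, 7, 7, 8, 5, 3]))
```

Note how the passives dilute the score without moving it in either direction, which is exactly why understanding why passives are passive matters.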

Feature/Epic Kanban

  • Items are not done until we collect data on whether we met our success criteria.
  • Build learning in
  • Focus on what you must have to ship
  • Must/Should/Could/Won’t
  • Focus on smaller outcomes to help the larger goal

Outcomes

  • Reduced support calls by x%
  • NOT – “Shipped 5 stories”
  • What if 1 story has all the impact but it was a small story?

Hope you find the notes useful.