Use Metrics to Drive Success
Would federal employees work on tasks outside their agency in order to support DigitalGov? That’s the question we wanted to answer as we created the Open Opportunities program.
We had a built-in test case. When the Digital Government Strategy was released in May 2012, agencies were tasked with building APIs, launching mobile products, establishing digital governance, and getting better customer feedback. Our team at GSA was chartered to support agencies, and we were looking for innovators across government to contribute. We outlined our hypotheses:
- Individuals will see the program as a great opportunity.
- They will respond in large numbers.
- Agencies will think this is a good opportunity, too.
We then built a proof of concept to help us decide whether to make a technology investment.
We began by testing our assumptions through short experiments. We stood up a minimum viable product (MVP) of a platform using WordPress and focused on the experience of our participants. We measured results, and if a test failed or missed its goals, we pivoted or tried another experiment.
As Alistair Croll, author of Lean Analytics, says,
“If a metric won’t change how you behave, it is a bad metric.”
Here are the metrics we chose to focus on:
- Effectiveness (number of tasks created versus the number completed)
- Growth (number of people signed up for our weekly newsletter, number of people who tried a task, and the number of different agencies with staff participating)
- Customer experience (Would they recommend us to a friend? Would they do it again?)
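To make these concrete, here is a minimal Python sketch of how metrics like these could be computed from raw counts. The function and field names are our own illustration, not the program's actual tooling:

```python
def effectiveness(tasks_completed: int, tasks_created: int) -> float:
    """Effectiveness: the share of created tasks that were completed."""
    return tasks_completed / tasks_created

def growth_snapshot(newsletter_signups: int, task_takers: int, agencies: int) -> dict:
    """Growth: the raw counts we watched from week to week."""
    return {
        "newsletter_signups": newsletter_signups,
        "task_takers": task_takers,
        "agencies_participating": agencies,
    }

def customer_experience(would_recommend: int, would_repeat: int, respondents: int) -> dict:
    """Customer experience: survey shares for the two questions we asked."""
    return {
        "recommend_rate": would_recommend / respondents,
        "repeat_rate": would_repeat / respondents,
    }
```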
What we learned
Overall, we found that people want to participate in Open Opportunities, and many become repeat users.
In tracking our growth metrics, we found that our initial focus on keeping enough tasks in the pipeline overshadowed steady recruitment. We also found that people would sign up but not necessarily take a task, and that a member's active lifespan shrinks over time. Both findings reinforced the need to have more people in the pipeline.
Looking at our customer experience metric, we saw a lot of repeat users and high customer satisfaction rates. During the 2013 Fiscal Year, 100% of survey respondents said they would “recommend us to a friend” and 93% said they would participate again.
And finally, we examined the program’s effectiveness, comparing the number of tasks created to the number completed. In our pilot year, just over half of the tasks created were completed (48 tasks out of 93). Another eight were long-term tasks still in progress.
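Reusing the effectiveness function sketched above, the pilot-year figure works out like this:

```python
# Pilot year: 48 tasks completed out of the 93 created.
print(f"{effectiveness(48, 93):.1%}")  # 51.6%, just over half
```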
With the results from our pilot year, we approached our second year (FY14) with some new techniques:
- We opened the pipeline to allow more people to complete tasks.
- We revamped the way we created tasks by focusing on deliverables and chunking tasks into smaller pieces. These changes raised our task completion rate to 77%, with an additional 15 long-term projects still in progress.
Our customer experience feedback from participants was positive in our pilot year, so we did not make any major adjustments to the program based on this metric, other than keeping a close eye on any changes. FY14 numbers were similar to our pilot year, with 93% of respondents saying they “would recommend us to a friend” and 100% willing to participate again.
So, we learned through our experiments that people are willing (and excited!) to help deliver on cross-agency digital government efforts. This data then drove our decision to invest in technology to improve the experience of our participants and further grow the program.
As we move to a new platform, we will focus our metrics on the ratio of new innovators to tasks created and on the overall completion rate. Our hypothesis going forward is that the new platform will increase participation and advance the innovators network.
To learn more about the Open Opportunities program, read our previous post, Hacking the Bureaucracy One Task at a Time, and join in.