Setting Up a Performance Measurement Team
As part of an organizational shift to functional teams at GSA’s Federal Citizen Information Center (FCIC), we created a Performance Measurement Team that consolidates data analysis for our websites, contact center channels, and marketing channels.
Instead of looking at metrics within the bubble of each program, we’re beginning to leverage insights across programs.
These are the steps we’ve taken so far:
Drafted a Plan
We came up with a Performance Measurement Plan that describes the team’s role within the organization; identifies high-level goals for each channel; names team members and their areas of specialty; and maps out specific projects and reporting expectations. This living document will change as the team changes.
Inventoried Data
We inventoried our existing data and the reports that were being written about that data. We determined who was responsible for each dataset and report; how often they were being updated; and where the files were located.
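Even a small script can keep this kind of inventory queryable. Here’s a minimal sketch in Python with pandas; the dataset names, owners, cadences, and file paths are hypothetical placeholders, not our actual inventory.

```python
import pandas as pd

# Hypothetical inventory entries -- dataset names, owners, cadences,
# and file locations are illustrative placeholders.
inventory = pd.DataFrame([
    {"dataset": "web_analytics",  "owner": "Web team",       "cadence": "weekly",  "location": "shared/web/analytics.xlsx"},
    {"dataset": "contact_center", "owner": "Contact center", "cadence": "monthly", "location": "shared/cc/call_volume.csv"},
    {"dataset": "email_metrics",  "owner": "Marketing",      "cadence": "monthly", "location": "shared/marketing/email.csv"},
])

# Flag anything missing an owner or an update cadence.
missing = inventory[inventory[["owner", "cadence"]].isna().any(axis=1)]
print(missing)

# Save the inventory so the whole team can reference it.
inventory.to_csv("data_inventory.csv", index=False)
```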
Consolidated Data
We created a single, master spreadsheet where members of the Performance Measurement Team input data and where all staff can view data.
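One lightweight way to build such a master view is to pull each channel’s export into a single workbook. A minimal sketch, assuming each channel produces a CSV; the file names and channel labels below are hypothetical:

```python
import pandas as pd

# Hypothetical per-channel exports; file names and columns are placeholders.
channels = {
    "web": "web_metrics.csv",            # e.g., visits, pageviews by month
    "contact_center": "cc_metrics.csv",  # e.g., calls handled by month
    "email": "email_metrics.csv",        # e.g., opens, clicks by month
}

# Write each channel to its own tab of one master workbook, so the
# Performance Measurement Team updates a single file that all staff can view.
with pd.ExcelWriter("master_metrics.xlsx") as writer:
    for name, path in channels.items():
        pd.read_csv(path).to_excel(writer, sheet_name=name, index=False)
```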
Set Up Data Discussions
We began meeting monthly to discuss what we’d learned from the prior month’s data (notable increases and decreases in customer satisfaction scores; usage of our services; email open rates; etc.). We’ve looked at top queries, Web pages, print publications, and social media posts. Some of our observations have influenced content decisions and raised issues that require action by other teams.
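To surface those notable increases and decreases ahead of each meeting, month-over-month changes can be computed automatically. A minimal sketch, assuming a tidy table with a month column and one numeric column per metric (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical monthly metrics table; column names are placeholders.
df = pd.read_csv("master_metrics.csv", parse_dates=["month"])
df = df.sort_values("month").set_index("month")

# Percent change from the prior month for every metric column.
mom = df.pct_change() * 100

# Flag metrics that moved more than 10% in the latest month --
# these become talking points for the monthly data discussion.
latest = mom.iloc[-1]
notable = latest[latest.abs() > 10].sort_values()
print(notable)
```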
Shared Insights
We shared the master spreadsheet with all staff and began inviting members of other teams to attend our monthly data-sharing sessions. The benefits of sharing have been twofold. First, those outside the Performance Measurement Team have brought additional context to the data. Second, those who actually create and maintain our products have gained actionable insights from it.
Next Step: Create Data Visualizations
We want to be able to see how our programs are performing, and use data to tell stories that show the real-world impact of our findings. We’ve considered a few different data visualization tools and are just beginning to test one of them. Our expectation is that it will help us to more easily identify trends and anomalies and allow staff to get the data they need to make decisions.
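We haven’t settled on a tool yet, but even a few lines of charting code illustrate the idea: plot a metric over time and flag points that deviate sharply from the norm. A minimal sketch with matplotlib, reusing the hypothetical monthly table from above ("email_open_rate" is a placeholder column):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly metrics; "email_open_rate" is a placeholder column.
df = pd.read_csv("master_metrics.csv", parse_dates=["month"])

fig, ax = plt.subplots()
ax.plot(df["month"], df["email_open_rate"], marker="o")

# Highlight points more than two standard deviations from the mean
# as candidate anomalies worth discussing.
rate = df["email_open_rate"]
outliers = (rate - rate.mean()).abs() > 2 * rate.std()
ax.scatter(df.loc[outliers, "month"], rate[outliers], color="red", zorder=3)

ax.set_xlabel("Month")
ax.set_ylabel("Email open rate (%)")
ax.set_title("Email open rate by month")
plt.show()
```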
Our team is still in development, but here are a few examples of our impact:
- After finding that the engagement rates for our manually written email messages were substantially higher than those for automated messages (which pull from RSS feeds), our marketing team modified some of its email practices. We’ve since seen a 10-15% increase in engagement rates, likely due in part to these changes.
- Based on our analysis of search terms, we regularly create features in our internal search results that direct visitors to the most relevant content. For example, we noticed that “zoologist” was one of the top 50 queries on one of our sites but that it had an average clickthrough rate (CTR) of 61%. We added a feature box with the most relevant links and, in the months that followed, observed a 35% increase in the CTR (see the sketch after this list).
- In tracking mobile use, we noticed a 10% increase in less than a year on one of our non-mobile-friendly websites. This data helps support the argument to go adaptive.
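For the search-term example above, the underlying analysis is simple to sketch: rank queries by volume, then surface popular terms with comparatively weak clickthrough as candidates for a feature box. The export format and column names here are hypothetical:

```python
import pandas as pd

# Hypothetical search-log export; column names are placeholders.
df = pd.read_csv("search_terms.csv")
df["ctr"] = df["clicks"] / df["searches"] * 100

# Take the top 50 queries by volume, then surface those whose CTR
# lags the group -- candidates for a feature box of curated links.
top50 = df.nlargest(50, "searches")
candidates = top50[top50["ctr"] < top50["ctr"].median()]
print(candidates.sort_values("ctr")[["query", "searches", "ctr"]])
```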
As the Performance Measurement Team evolves, we anticipate a continuous cycle in which we work with other functional teams to evaluate, monitor, test, and—ultimately—improve our products and services.