Customer experience: beyond surveys

Good customer experience (CX) in government is critical to meeting the needs of the public and agencies alike. Not only is CX a guiding principle within the 21st Century Integrated Digital Experience Act (IDEA), but it can also help foster trust in government services. The Office of Management and Budget’s Circular A-11, Section 280 defines CX as:

The public’s perceptions of and overall satisfaction with interactions with an agency, product, or service…[Customer experience] refers to a combination of factors that result from touchpoints between an individual, business, or organization and the Federal government over the duration of an interaction, service journey, and relationship.

Surveys are a typical starting point for measuring customer experience. I can see why — you can track things like customer satisfaction with a service over time. However, surveys capture only part of the picture and can be complemented by other methods.

I encourage government practitioners to start with what you want to learn from users, and why, before deciding on a particular research method. This post includes a series of guiding questions to help you and your team brainstorm and select an approach for measuring customer experience.

What are your overarching goals?

Before thinking about what to learn and why, you should start with your overarching program or service goals. Learning itself is often not the end goal, but something that can help you understand if you are on track to reach a larger goal. Ideally, this goal or set of goals should already be established and agreed on across your program.

Why do you want to learn?

This is one of the most important questions to ask yourself and your team. Measuring and learning give you the opportunity to make decisions in service of a larger goal. Sometimes, information gathering is about fulfilling a reporting requirement, and that’s okay. Regardless, what decisions can this initiative help you make? What can this initiative help you do? How can you improve your program or service with these inputs? Is it to benchmark progress? Measure the success of a launch? Learn about user needs? Increase enrollment in a government program? Whatever the reason, being intentional about why you want to start learning something can clarify your objective and inform what you want to measure.

Who are your users?

Next, identify which users you want to learn from. Try to add at least some level of specificity. “Members of the public” or “agency employees” may be too broad. “Members of the public who applied for benefits in 2024”, “taxpayers who are eligible but not enrolled in this program”, or “agency employees who processed permits in the last quarter” can help you make more informed decisions and more easily target these users during recruitment.

This Mad Libs-style thought starter can help you home in on your own audience: [user type] who [have completed an action or fit certain criteria] within [timeframe].

A key aspect of customer experience is considering not just end users but also the agency employees or other stakeholders who administer or support your government program. Also, keep accessibility and underserved communities in mind throughout this process so you can provide an equitable customer experience. Later in this exercise, as you plan which method(s) you’ll use to learn from users, consider ways to reach people with disabilities or people whose first language is not English, like including multilingual content, involving interpreters in interviews, or ensuring your survey works with a screen reader.

What do you want to learn?

Next, move on to what you want to measure. Other ways to ask this include: What do you want to learn? What can you measure that will help you understand whether you are on track to reach your goal?

Consider framing what you want to learn in the form of a question. “Why” and “how” questions are especially useful for identifying root causes and user context.

For each question you brainstorm, also ask yourself why you want to learn this; doing so can help you confirm that each question is in service of a specific goal.

What method(s) could you use?

Then brainstorm which sources or methods make sense for collecting this information. Outline a few examples, like user interviews or contextual inquiry (best for “why”- and “how”-based questions), analytics (best for “what” and “where in the journey”), and surveys (best for attitudinal questions, and to some degree “what” and “where in the journey”).

Find inspiration for different methods by reviewing 18F’s Methods cards.

What method(s) should you choose?

As you choose a method or set of methods with your team, consider risks, limitations, and constraints, like timeline, budget, or administrative burden on the public.

Also be mindful of any tradeoffs that come with each method you’ve brainstormed. For example, it is easy to over-index on survey scores like Customer Satisfaction Score (CSAT). They have neat and tidy numbers and are easy to show on a slide deck, but they don’t provide context.
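
To make that tradeoff concrete, here is a minimal, purely illustrative sketch of one common way a CSAT score is calculated, assuming a 5-point satisfaction scale where responses of 4 or 5 count as “satisfied.” The function name and the sample ratings are hypothetical, not from any particular agency’s survey.

```python
# Minimal sketch: one common way to calculate a CSAT score.
# Assumes a 5-point satisfaction scale where 4s and 5s count as "satisfied."
# The ratings below are made-up example data.

def csat_score(responses, satisfied_threshold=4):
    """Return the percentage of responses at or above the threshold."""
    if not responses:
        return 0.0
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return 100 * satisfied / len(responses)

ratings = [5, 4, 2, 5, 3, 4, 1, 5, 4, 4]
print(f"CSAT: {csat_score(ratings):.0f}%")  # CSAT: 70%
```

A single percentage like this tells you nothing about why the 1s and 2s were dissatisfied, which is exactly the context that qualitative methods can add.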

Surveys should not be created and analyzed in isolation. Instead, they should be part of a larger strategic effort. Consider pairing surveys with qualitative user research methods, like user interviews or contextual inquiry, that provide context on user behaviors, needs, and motivations, along with direct observation of users interacting with your service.

Regardless of which method(s) you choose, check whether you need Paperwork Reduction Act (PRA) clearance. Your agency’s PRA Office will be a great resource for navigating this question. According to PRA.digital.gov, you will not need clearance if you are conducting a round of user interviews or usability testing with 9 or fewer participants.

When do you want to learn?

You should answer this question in three ways for each item you’ve agreed to measure. The first: When do you want to start gathering this information, and are there any limitations on how soon you can start? If your method requires users to fill something out, you may need to wait until you have a meaningful number of responses; a more passive data collection method, like web analytics, may already be at the ready.

The next question is: At which points of a user’s service journey would it be most useful to collect this information? At the end of an experience? When they are eligible but not enrolled? Or somewhere in the middle, like after they have applied but before they receive a decision? Would any of these points better serve your end goals or help you make more informed decisions?

The last question is about frequency: How often would it be helpful to measure this information? For example, if you are benchmarking enrollment data, you may need to collect it on a monthly or quarterly basis. In contrast, understanding user needs and pain points while designing a service could be done once, then revisited after launch to confirm the new service meets those needs.

Implement

Finally, after aligning with your team and completing any other preparation your chosen method(s) require, you are ready to get started! How you implement your initiative will depend on the method(s) you’ve chosen.

Here are other resources on learning about your users based on different methods:

User interviews

Usability testing

Surveys

Analytics