
Sentiment Surveys: The Survey Question Rotation Dichotomy - What is it and how to avoid it

Updated: Feb 3, 2023


The Survey Question Rotation Dichotomy: What is it and how to avoid it

In the past, we've discussed what a sentiment survey is and how to start one, as well as how it contrasts with the customer satisfaction survey. Now, we'll explore sentiment surveys further, in particular a dichotomy that Bright Horse came across recently regarding how often we should ask questions to achieve the best response rate whilst still adhering to survey best practice.


Running surveys is an art and a science. That is a sentiment Bright Horse believes in and promotes across our mastering courses and consulting assignments. The science lies in gathering actionable data from respondents while taking the affective science at play into account, in order to generate facts from feelings. The art comes in through the development of the survey questions (their construction, i.e. the wording and structure), the survey length, the survey design (look and feel) and the survey timing (what time, what day, what week to send it).


By following the art of survey question design, we aim to reduce bias, stay relevant and clear, and encourage the respondent to answer. Such a process is required for something as important as crafting the means by which to obtain the kind of actionable data that improves employee experience. It isn't just us who think sentiment gathering is important either. On a Nexthink-hosted webinar, Forrester senior analyst Andrew Hewitt predicted that gathering employee feedback would be one of the top employee experience investment areas for 2022.


Bright Horse has extensive experience in devising these kinds of survey questions, aiming to make them as effective and relevant as possible so they garner sentiment related to the experience level agreement (XLA) they underpin. They're unlikely to be perfect. Nothing ever can be, but they can be excellent. Maximum effectiveness is what to strive for.


However, balance is the key, as it tends to be with everything in life. We need to maintain a balance between the art and the science in order to craft effective questions. If we lean into the science too much, we may craft too many questions, and this ends up harming our response rate. SmartSurvey state that the average response rate for surveys across various distribution methods is 30%. Ask too many questions and the respondent may become understandably disengaged. We need the art to understand the optimum construction and number of questions in order to encourage a response rate around the average or higher. As a rule of thumb, Bright Horse will often limit the number of questions per survey to 8.



The Dichotomy Arises

The problem comes about when taking best practice into account. Adhering to it, we should preferably hold only 4 surveys a year, with 8 questions each as our own rule. However, as XLAs measure across all services of the organisation, this includes all surveys that may take place across multiple departments, e.g. IT, HR etc. These can quickly use up our 4-per-year best practice allowance. If we only have 1 XLA in place, this does not affect us as much, since it could mean only 1 survey per year relating to employee sentiment. However, as with most businesses Bright Horse has supported, if there are 3 or 4 XLAs in place, it presents a problem. We cannot ask too many questions at one time, because this will lead to lower response rates and, therefore, not enough data to give confidence to our actions. But we also don't want to limit the amount of actionable data we receive by asking too few questions. We call this the survey question rotation dichotomy.


It can be a difficult situation to overcome. Here are 4 strategies devised by Bright Horse to help your organisation work around the survey question rotation dichotomy. At the time of writing, we have not seen many other strategies to overcome this issue online. If you have solutions of your own, please feel free to share your opinions in the comments.


XLA Ambition Restriction

Restrict your ambitions. Are you trying to understand or achieve too much? Key to this is identifying which of your ambitions may be unrealistic to achieve at this time. On the other hand, one ambition may simply not be the priority right now. We say in our projects that organisations should strive for what is valuable to them. There may be value in long-term ambitions, but an ambition you can't feasibly reach holds no value for you at this time. Therefore, any questions related to over-ambitious wants can be taken out. By adopting this approach, questions can be restricted to 3 per XLA (a typical 3 XLAs = 9 questions, and therefore doable). However, you may find that 3 questions are not enough to obtain the level of data you want. In this case, one of the next 3 strategies may work better.


XLA Rotation


You may have encountered the subject of crop rotation before. It was an advancement in agriculture in which society discovered that planting a different crop each cycle helped to get the best nutrients out of the soil and improve its health. In the same sense, XLA rotation involves rotating the XLA at the centre of the survey each month. This works to prevent the survey from feeling stale through questions on the same topic arriving regularly. Just as repeating the same crop harms the soil, the same topic each time is likely to harm the willingness to respond. Instead, we can adopt an approach as follows:

  • Month 1 - XLA 1: Questions 1-3; Month 4 - XLA 1: Questions 4-6

  • Month 2 - XLA 2: Questions 1-3; Month 5 - XLA 2: Questions 4-6

  • Month 3 - XLA 3: Questions 1-3; Month 6 - XLA 3: Questions 4-6

and rotate on a quarterly basis. But you have to prioritise your XLAs, as XLA 3 won't be asked about until Month 3, 6, 9 or even 12, depending on the number of questions. The concern with this approach is that, one way or another, we're promoting the data of one XLA over another.
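
As a rough illustration only, a schedule like the one above could be generated programmatically. The sketch below assumes 3 XLAs, 3 questions per monthly survey and our 8-question rule of thumb; the names and figures are examples taken from this article, not part of any Bright Horse tooling.

```python
# Illustrative sketch: XLA rotation, where each month's survey focuses on a
# single XLA and the question block advances once every full rotation.

XLAS = ["XLA 1", "XLA 2", "XLA 3"]    # hypothetical XLAs
QUESTIONS_PER_SURVEY = 3              # questions asked per month
TOTAL_QUESTIONS_PER_XLA = 8           # rule-of-thumb cap per XLA

def xla_rotation(months: int):
    """Yield (month, XLA, question numbers) for the XLA rotation strategy."""
    for month in range(1, months + 1):
        xla = XLAS[(month - 1) % len(XLAS)]       # which XLA this month
        cycle = (month - 1) // len(XLAS)           # completed full rotations
        first = cycle * QUESTIONS_PER_SURVEY + 1   # first question in the block
        last = min(first + QUESTIONS_PER_SURVEY - 1, TOTAL_QUESTIONS_PER_XLA)
        yield month, xla, list(range(first, last + 1))

for month, xla, questions in xla_rotation(6):
    print(f"Month {month}: {xla} - questions {questions}")
```

Running it for 6 months reproduces the schedule in the list above.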


XLA Question Mix Rotation

Move to the right. This approach is the same as XLA rotation, but instead we mix the questions: ask question 1 for each XLA in Month 1, move on to question 2 for each XLA in Month 2, and so on. In this instance, the schedule would be as follows:

  • Month 1 - XLAs 1-3: Question 1; Month 4 - XLAs 1-3: Question 4

  • Month 2 - XLAs 1-3: Question 2; Month 5 - XLAs 1-3: Question 5

  • Month 3 - XLAs 1-3: Question 3; Month 6 - XLAs 1-3: Question 6

With this approach, we overcome the limitation of XLA rotation by ensuring that each XLA is covered each month. However, the limitation here is that question 8 wouldn't be asked, and its data therefore missed, until month 8. If it becomes apparent in hindsight that data from question 8 would have been particularly beneficial, we won't obtain actionable data until a time when employees are already having a poor experience.
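
This schedule is just as simple to sketch. The snippet below uses the same assumed XLA names and 8-question cap as the earlier example:

```python
# Illustrative sketch: question mix rotation, where the same question number
# is asked across every XLA in a given month.

XLAS = ["XLA 1", "XLA 2", "XLA 3"]   # hypothetical XLAs
TOTAL_QUESTIONS_PER_XLA = 8          # rule-of-thumb cap per XLA

def question_mix_rotation(months: int):
    """Yield (month, question number asked for every XLA that month)."""
    for month in range(1, months + 1):
        # Question n is asked across all XLAs in month n, wrapping after question 8.
        question = (month - 1) % TOTAL_QUESTIONS_PER_XLA + 1
        yield month, question

for month, question in question_mix_rotation(8):
    print(f"Month {month}: ask question {question} for {', '.join(XLAS)}")
```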


A suggestion on how to work around this limitation would be to intentionally number the questions in order of the most prevalent poor experiences, e.g. question 1 would relate to the most prevalent experience complaints. Ideally, you would have discovered the most prevalent poor experiences in the experience landscape surveying that takes place at the start of any XLA project.


Run Multiple Surveys in the Same Time Period


To run multiple surveys in the same time period, we would split the sample group into 3 (or however many XLAs there are) and run 3 different surveys that month across those sample groups. This approach takes a rather deep cut of the science of surveys into account. Through statistics, provided the sample is unbiased and the response is good, we can apply the results across the population from which the sample originated. With this approach, we can assign each sample group the questions from a specific XLA, or assign each sample group different questions across all XLAs. The latter may work better, because it ensures that the various populations within your organisation, e.g. HR, IT etc., can answer and provide data on all XLAs. As long as bias is reduced as much as possible and a good response rate, relative to the sample size, is reached, the conclusions from the surveys can be applied across the population from which the sample data was obtained.
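
To make the mechanics concrete, here is a minimal sketch of splitting a population into sample groups and checking how many responses each group would need. The population size, the 95% confidence level, the 5% margin of error and the 30% expected response rate are illustrative assumptions for the example, not Bright Horse criteria.

```python
# Illustrative sketch: randomly split the employee population into one sample
# group per XLA, then estimate how many responses each group needs for a
# given margin of error at 95% confidence (standard sample-size formula with
# a finite population correction).
import math
import random

def split_into_groups(employees, n_groups, seed=42):
    """Shuffle the population and deal it into n_groups roughly equal samples."""
    pool = list(employees)
    random.Random(seed).shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

def required_responses(population, margin=0.05, z=1.96, p=0.5):
    """Minimum responses needed for the chosen margin of error."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

employees = [f"employee_{i}" for i in range(1, 1201)]   # hypothetical population
groups = split_into_groups(employees, n_groups=3)        # one group per XLA

for idx, group in enumerate(groups, start=1):
    needed = required_responses(len(group))
    expected = round(len(group) * 0.30)                   # SmartSurvey's 30% average
    print(f"Group {idx}: {len(group)} invited, {needed} responses needed, "
          f"~{expected} expected at a 30% response rate")
```

In this illustrative case, the responses expected at a 30% response rate fall short of the calculated requirement, which is exactly the risk discussed next.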


The drawback of this approach arises from the criteria that must be met before we can confidently apply the data obtained to the respective populations. It can suddenly seem quite a risk: if we make all the effort to run multiple surveys across different populations only to find the criteria are not met, the time has been spent poorly, because we cannot confidently apply the sample data to the whole population.


As we say, running surveys is an art and a science. They are challenging, but the more you do them, the better you get. We hope that one of the approaches we've laid out in this article will resonate as a way to overcome any question rotation concern your organisation may have when it comes to running surveys. Bright Horse can support you in your survey and sentiment gathering, as well as other areas encompassed in our various consulting options.
