Consensus on measurement properties and feasibility of performance tests for the exercise and sport sciences: a Delphi study

This study primarily aimed to obtain subject matter expert consensus on which measurement and feasibility properties should be considered for performance tests used in the exercise and sport sciences, along with their terminology and definitions. Ten items, including re-test reliability, content validity and responsiveness, were considered essential by respondents. A further ten, including stability, predictive validity and concurrent validity, though recognised as important, were considered more context-specific. A secondary aim was to develop a checklist of the agreed-upon properties that can inform performance test development or selection.

It was notable that all 20 items originally proposed in the first round of the questionnaire were accepted at some level. This suggests that experienced practitioners and academics in the exercise and sport sciences appreciate the importance of measurement quality, but also that many components come together to make a ‘high-quality measure’. The findings also demonstrate that the list was comprehensive, particularly as no additional items were suggested for inclusion by any of the participants. Specifically, commonly reported measurement properties such as re-test reliability, discriminant validity and responsiveness were all included as relevant items based on the final results, thereby confirming their importance for consideration when using a performance test. Based on these results, it would appear that these items should be considered by researchers and practitioners alike in a variety of contexts. Measurement properties such as stability and concurrent validity, whilst included in the framework as level 2 items, may not be relevant under all circumstances, however. It is worth noting here that the likelihood of a given test displaying an appropriate level of each of these properties will depend largely on the user’s ability to administer it appropriately. Despite these conclusive findings in the participant sample, a larger number of participants from each of the three types of subject matter expert may have allowed investigation of whether statistically significant differences existed between the responses of these three subgroups, and may have yielded more generalisable results overall.

Comparison of the findings of this study with work undertaken in other disciplines also revealed some similarities. Previous checklists developed from research undertaken in the COSMIN project (used for health-related patient-reported outcome measures) also included measurement properties such as reliability, content validity, criterion-related validity, responsiveness and interpretability [28, 41]. The current findings also build on previous work undertaken in exercise and sport science that has espoused the importance of many of the measurement properties included here [1, 4, 15, 16]. Further, in addition to ‘traditional’ measurement properties, this study also considered often-overlooked items relating to feasibility in performance testing, which may be particularly important for users working in field environments. Whilst not measurement properties per se, items such as test duration, cost and complexity of completion were all deemed important considerations based on the results of the current study.

The development of level 1 and level 2 criteria in this study represents a novel addition to previous work from other disciplines. Specifically, these criteria provide the user with flexibility in applying the findings. This is particularly useful as the relative importance of any item may differ depending on the intended use of, or context for, the test [27]. For example, the cost of administering a test may be a critical factor where financial resources are limited, but this may not be a constraint in all settings. Similarly, convergent validity may not be assessable in scenarios where a similar measure is not available for comparison.

The development of the checklist based on the findings from this study represents the main practical application of this work. The checklist consists of the 19 level 1 and level 2 criteria from the Delphi questionnaire, which can be used to assess an existing or newly developed performance test. Specifically, when selecting a test for implementation, the user can directly assess its quality based on existing results reported in the literature. These results can be recorded and easily compared across different test options or against a newly developed alternative. The checklist also allows the user to add their own testing results to compare directly with previous findings. This is important because although a test may display appropriate measurement properties and feasibility in one setting, this does not guarantee the same results when it is applied to a new scenario or population [25, 42]. It is hoped that this feature of the checklist prompts users to undertake their own measurement property and feasibility assessments when using a performance test.
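To make this intended workflow concrete, the following minimal Python sketch illustrates how checklist ratings might be recorded for candidate tests and compared side by side. The item names, the 0–2 rating scale and all ratings shown are illustrative assumptions only; they are not part of the published checklist.

```python
from dataclasses import dataclass, field

# Hypothetical digitisation of the checklist described above. Item names,
# the 0-2 rating scale and all ratings are illustrative assumptions, not
# part of the published instrument.
LEVEL_1 = ["re-test reliability", "content validity", "responsiveness"]
LEVEL_2 = ["stability", "predictive validity", "concurrent validity"]

@dataclass
class ChecklistRecord:
    """Ratings for one performance test: 0 = not reported,
    1 = reported but inadequate, 2 = adequate for the intended context."""
    test_name: str
    ratings: dict = field(default_factory=dict)

    def rate(self, item: str, score: int) -> None:
        if item not in LEVEL_1 + LEVEL_2:
            raise ValueError(f"Unknown checklist item: {item}")
        self.ratings[item] = score

    def summary(self) -> tuple:
        """Return (level 1 total, level 2 total) for quick comparison."""
        level_1 = sum(self.ratings.get(i, 0) for i in LEVEL_1)
        level_2 = sum(self.ratings.get(i, 0) for i in LEVEL_2)
        return level_1, level_2

# Compare ratings extracted from the literature for an existing test
# with those recorded for a newly developed alternative.
existing = ChecklistRecord("countermovement jump")
existing.rate("re-test reliability", 2)
existing.rate("content validity", 2)

alternative = ChecklistRecord("10 m sprint")
alternative.rate("re-test reliability", 2)
alternative.rate("content validity", 1)

print(existing.summary())     # (4, 0)
print(alternative.summary())  # (3, 0)
```

A structure of this kind would also let users append their own context-specific results alongside literature values, supporting the comparison described above.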

Some limitations of the study should also be stated. The Delphi approach has been criticised for its potential for researcher bias, potential issues in achieving appropriate expert selection, and as a restrictive method of communication [43]. Further, the authors acknowledge that a face-to-face method (whilst difficult to facilitate) may have elicited different results to those seen here. In addition, participants involved in the Delphi questionnaire were all of a single nationality, albeit evenly distributed across the three sub-groups. This may have made consensus easier to achieve, given that participants may have worked under similar conditions and experienced similar socio-cultural norms. Engaging an international sample, or adopting a different sampling procedure altogether, may have elicited different results to those observed here. Finally, it is worth noting that the sample was recruited on the basis of expertise in sport and exercise rather than in measurement. As such, results may have differed from those of a sample that included statisticians or measurement experts.

In addition to addressing some of these limitations, future work in this area may focus on the development of a user manual to supplement the checklist. This manual could include specific practical examples of each item in order to improve the interpretability and practical utility of the checklist for a wider user population. This may also allow for wider dissemination of the checklist to non-academic audiences. Further work may also evaluate the properties of the checklist itself. For instance, an evaluation of checklist uptake some time after implementation may allow identification of areas in need of further development. The measurement properties of the checklist itself are also still to be determined; the inter-rater reliability of users implementing the checklist to rate particular tests may represent an appropriate starting point [36]. Follow-up studies may also look to determine the most appropriate statistical methods for evaluating each item included in the checklist. This would serve to define the actual quantitative quality criteria relating to each item. For instance, in the case of a specific validity item, a minimum level of a particular statistical measure (e.g. a correlation coefficient) could be set to provide a more specific representation of test quality. This approach, already undertaken in other disciplines [44, 45], could be a valuable addition to exercise and sport science research and practice.
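As a minimal illustration of how such an evaluation might begin, the hypothetical Python sketch below computes an unweighted Cohen’s kappa between two raters applying the checklist, and checks a single validity item against a minimum correlation threshold. The ratings, the choice of kappa, and the 0.70 threshold are assumptions made for illustration; the appropriate statistics and cut-offs remain to be determined, as noted above.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters applying the checklist to ten tests
# (scores on an illustrative 0-2 scale).
rater_1 = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
rater_2 = [2, 1, 1, 0, 1, 2, 2, 2, 0, 2]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.68

# An assumed minimum quality criterion for one validity item:
# e.g. require r >= 0.70 against a criterion measure (threshold assumed).
MIN_VALIDITY_R = 0.70
observed_r = 0.82  # hypothetical correlation with the criterion measure
print("criterion met" if observed_r >= MIN_VALIDITY_R else "criterion not met")
```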