As patient-reported outcomes and clinician reports become increasingly prominent in life sciences R&D, so does the imperative that these are recorded consistently across the globe. So why is it that even the major pharmas and CROs struggle with linguistic validation – i.e. ensuring that clinical outcome assessment measures are culturally relevant as well as accurately translated for each target market?
The rising priority of including the patient voice at much earlier stages of life sciences research and development means that translation workloads for patient feedback forms and other content in support of multi-national clinical trials are growing.
Yet linguistic validation of routine clinical outcome assessments (COAs) – most notably patient-reported outcomes and clinician reports – is proving an all-too-common stumbling block for clinical trial sponsors and contract research organisations (CROs). This goes beyond accurate translation of COA measures, to ensuring that points of reference are culturally relevant for those making the assessments.
Researchers have recognised the increasing importance of asking patients targeted questions assessing the effects of the condition on their ability to function in their daily lives. For example, instead of asking patients to rate their pain on a numerical scale, far more relevant data are obtained when patients are asked what their pain prevents them from doing. As daily activities vary from culture to culture, the cultural adaptation of these questions becomes critically important for the valid analysis of pooled data across languages and cultures.
So if the COA’s area of interest were to assess patients’ shoulder mobility, and an original option for describing this is ‘I am able to shovel snow’, the translation challenge is not simply to convert this into the local language, but also to consider whether the statement has global application. Since large parts of the world do not experience snow, a literal translation will not suffice – it will yield high instances of ‘don’t know’ or ‘not applicable’ responses in those markets. These in turn would skew the international picture, threatening the value of the total patient evidence.
Effective linguistic validation involves controlled cross-cultural adaptation. Where given criteria are met (‘hot climate in target locale / no snow’), the team adapts the source statement to an accepted equivalent in order to maintain a consistent response. So in regions with persistently warm climates, ‘I am able to shovel snow’ becomes ‘I am able to lift heavy grocery bags and place them on a counter’ in the target language.
Other cultural considerations might include the local diet, if the subject of the COA is digestive disorders. Here, the original Western statement ‘I am able to eat soft foods such as mashed potatoes and oatmeal’ might be adapted to ‘I am able to eat soft foods such as rice and khichdi’ further east. Subtler still, but just as critical, is the ability to differentiate between different responses to, or definitions of, quality of life – so that when assessing the impact on self-perception of patients undergoing treatment for breast cancer, for example, teams are aware that ‘I am embarrassed by my appearance’ could be equivalent to ‘I am self-conscious about my appearance’ in another culture.
The devil is in the detail
It is these subtleties that a great many companies struggle with. Typically this is because their designated teams do not have the awareness, training or authorisation to recognise the issues, and/or make appropriate judgement calls (in partnership with the trial sponsor) to protect the integrity and value of international COA data.
Yet this is a potentially serious oversight. If the collective patient data are called into question, acceptance of labelling claims could be jeopardised. COAs are critical to the progression of all clinical trials, and although there is no legal requirement for standard approaches to translation and linguistic validation, the major health authorities – certainly the EMA, the FDA and Japan’s PMDA – support best practice in the interests of quality and patient safety.
The industry does appreciate this need, particularly as linguistic validation is inexpensive and the word counts are small in the grand scheme of things. After all, getting any of this wrong can be costly: if an entire clinical study is invalidated at a late phase, the cost could run into hundreds of millions of dollars.
The issue tends to be a practical one: (a) failure to consider the requirement early enough in the cycle to allow the time to manage it, and/or (b) a lack of internal capability. But this is easy to remedy through partnership. The ability to consistently identify the need for cultural adaptation and determine the appropriate course of action should not be left to inexperienced teams.
Dana Weiss is director of linguistic validation and customer services manager at AMPLEXOR