Review date: April 2021
Citation: Joffe et al., 2021. Imaging modality and frequency in surveillance of stage I seminoma testicular cancer: Results from a randomized, phase III, factorial trial (TRISST). Journal of Clinical Oncology 39, 2021 (suppl 6; abstr 374). doi: 10.1200/JCO.2021.39.6_suppl.374
The TRISST trial aims to identify the best medical imaging modality and schedule for people who have had testicular cancer and are at risk of recurrence. Demonstrating the equivalence or superiority of different protocols, however, is not easy.
Survivors of testicular cancer generally have a very good prognosis but may, quite rightly, be anxious about recurrence1. Surveillance appears to be an appropriate approach for managing many of these patients (those with stage I seminoma)2, but the standard practice of repeated CT scanning results in cumulative radiation exposure that may increase the risk of later cancers.
The TRISST randomised, non-inferiority, factorial trial in the UK is examining whether MRI is as effective as CT for detecting relapse of testicular cancer, and whether a reduced schedule of three scans (at 6, 18 and 36 months) is as effective as seven (at 6, 12, 18, 24, 36, 48 and 60 months), for people who have undergone orchidectomy for stage I seminoma3. Results from the study were presented recently at the American Society of Clinical Oncology Genitourinary Cancers Symposium4.
Of the 669 participants enrolled in the study, 10 had stage ≥IIC relapse at six years (72 months), the trial's primary endpoint. This relapse rate of only 1.5% is encouraging for those recovering from stage I seminoma, especially given that overall survival was 99% across all participants. For the researchers, though, such a low relapse rate creates a problem.
To analyse the outcomes from the different imaging methods and regimens, the investigators designed their trial to demonstrate non-inferiority (essentially equivalence) of the imaging modalities, based on an estimated overall rate of stage ≥IIC relapse of 5.7% (not 1.5%). The study's 669 subjects provided sufficient statistical ‘power’ to exclude an increase in the relapse rate from 5.7% to 11.4% (i.e. a doubling of the rate of relapse).
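To see how sensitive such power calculations are to the assumed event rate, here is a back-of-envelope sketch (not the trial's actual design calculation, which may have used different methods and parameters) based on the standard normal-approximation sample-size formula for a non-inferiority comparison of two proportions:

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(p, margin, alpha=0.05, power=0.90):
    """Per-arm sample size for a non-inferiority comparison of two
    proportions (normal approximation, assuming both true rates equal p).
    Illustrative values only; the trial's own calculation may differ."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided alpha
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2)

# Designed scenario: 5.7% relapse rate, excluding a doubling
# (margin of 5.7 percentage points)
print(ni_sample_size(0.057, 0.057))  # → 284 per arm

# Observed scenario: 1.5% relapse rate, same relative margin
# (a doubling is now only 1.5 percentage points)
print(ni_sample_size(0.015, 0.015))  # → 1125 per arm
```

With the rate actually observed, excluding a doubling would need roughly 1,125 participants per arm under these assumptions, far more than the ~335 per arm available.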
Stage ≥IIC relapse occurred in only one of the patients who received seven scans, whereas the remaining nine stage ≥IIC relapses occurred in the three-scan group. Despite these rates of 0.3% (one out of 336 in the seven-scan group) and 2.8% (nine out of 333 in the three-scan group), the researchers conclude that three scans are not inferior to seven. But the relapse rate in the three-scan group was nine times higher than in the seven-scan group; far greater than the doubling they set out to exclude when they designed the trial. The lower-than-expected relapse rate means the trial was underpowered to exclude a difference between scanning regimens much larger than the doubling it was designed for. Although not “statistically significant”, a relapse rate nine times higher in one group than the other could very well be “clinically significant” for patients.
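The point can be illustrated with the observed counts. The sketch below (an illustration, not the trial's actual analysis) computes the risk difference between the two scanning regimens with a Wald 90% confidence interval, the two-sided interval corresponding to a one-sided 5% non-inferiority test. With events this rare the normal approximation is rough, but it shows how imprecise the estimate remains:

```python
from math import sqrt
from statistics import NormalDist

def risk_diff_ci(events_a, n_a, events_b, n_b, level=0.90):
    """Risk difference (group a minus group b) with a Wald confidence
    interval. Illustrative only: rare events strain this approximation."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return diff, diff - z * se, diff + z * se

# Observed stage ≥IIC relapses: 9/333 (three scans) vs 1/336 (seven scans)
diff, lo, hi = risk_diff_ci(9, 333, 1, 336)
ratio = (9 / 333) / (1 / 336)
print(f"risk difference {diff:.1%}, 90% CI ({lo:.1%}, {hi:.1%})")
print(f"relative rate {ratio:.1f}")
```

Under these assumptions the absolute difference of about 2.4 percentage points carries a confidence interval whose upper limit approaches 4 points, and the relative rate is roughly nine-fold; a handful of events either way would change the picture considerably.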
The investigators go on to report that four of the nine relapses in the three-scan group could potentially have been detected earlier under the seven-scan regimen, and that half of all relapses (five out of 10) occurred after three years; these would never be detected by the three-scan regimen, which does not extend beyond that point. It is difficult to reconcile this observation with the investigators' conclusion that “relapse beyond three years is rare, and imaging may be unnecessary”.
In terms of imaging modality, the investigators report that two out of 10 stage ≥IIC relapses (an overall relapse rate of 0.6%) occurred in patients who underwent MRI, whereas eight out of 10 (2.5%) were in patients who had CT. Here, the conclusion that MRI is non-inferior to CT seems to hold up.
The design and conduct of clinical trials are difficult, as is statistical analysis of the results. When choosing how to conduct a trial, researchers have to make assumptions based on their knowledge and experience. If those assumptions turn out not to hold, the implications must be considered when interpreting the outcomes of statistical tests built on them.
Too much biomedical research is underpowered5, leading to erroneous conclusions and a failure to make true, reproducible discoveries6,7 that can be relied upon to inform clinical practice. Patients rely on the expertise and experience of their doctors to interpret such evidence as the basis for their decisions.
1. Chovanec et al., 2021. Late adverse effects and quality of life in survivors of testicular germ cell tumour. Nature Reviews Urology
2. Petrelli et al., 2015. Surveillance or Adjuvant Treatment With Chemotherapy or Radiotherapy in Stage I Seminoma: A Systematic Review and Meta-Analysis of 13 Studies. Clinical Genitourinary Cancer
5. Button et al., 2013. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience
6. Begley and Ioannidis, 2015. Reproducibility in Science. Circulation Research
7. Ioannidis, 2005. Why Most Published Research Findings Are False. PLOS Medicine