CANCER DIAGNOSTIC TESTS AND ARGUMENTS
Some of the same arguments apply to diagnostic tests such as imaging. Again, there is an element of not needing research to validate the obvious – a scan with a sharper image is likely to be better than a fuzzy one! However, when we look more closely, things get trickier. For example, one of the key drivers of decision-making is whether cancer has spread to a particular organ. In general, if a scan looks abnormal in an area known to be at risk, it is likely that this represents disease. The converse does not hold, however – a negative scan could mean the patient is truly free of disease, or simply that the disease is below the threshold of detection. A good example of this sort of problem is the detection of cancer in lymph nodes. Because lymph nodes are normal structures, and cancer within them is of similar density (and therefore similar imaging appearance) to the normal tissue, imaging can only tell us whether nodes are of normal or abnormal dimensions – typically, the cut-off size is around 5 millimetres. Clearly, if a 4-millimetre cancer deposit replaces the bulk of a node, the node will still look ‘normal’.
[Figure 1. Examples of tumour responses on scans]
Suppose a potentially better imaging test for nodal disease is developed – how should it be evaluated? Such a test falls into the same regulatory route as surgical devices: we need to show safety and fitness for purpose. Safety is straightforward – the phase 1/2 route works fine – but how do we demonstrate ‘fitness for purpose’? The answer is some form of clinical trial, but the question of endpoints is very tricky. How many ‘normal’ lymph nodes harbouring small cancers do we need to detect for the test to be worthwhile? How many are we allowed to miss? How do we estimate the ‘true’ positive and negative rates? Should we move to broader clinical outcomes rather than counting lymph nodes – does applying the test lead to better clinical results, such as longer survival times, than the standard way of managing the patient?
These are all very difficult issues when applied to imaging technology, particularly as the acquisition costs of new scanners are very high. Even for technologies that augment existing scanners – new contrast medium drugs, for example – the issues are substantial, and there is no single consistent regulatory route across the globe.
Similar arguments apply to laboratory diagnostic tests. Again, at first sight the problem seems simple – if we have a blood test that correlates with cancer, we should use it as part of the basis for clinical decisions. However, if we examine the literature, we find many examples of tests that correlate with the presence or absence of disease, yet very few are actually used clinically – why should this be? The principal answer is that a test has to give additional information beyond what we already know. For example, there is a whole range of urine tests that correlate with the presence of bladder cancer, but none are used in the UK. Patients with suspected bladder cancer need a cystoscopy to confirm the diagnosis, and the available urine tests are not reliable enough to exclude patients from cystoscopy. Once the bladder has been examined, if a tumour is seen, a biopsy is needed – and again, the tests are not sufficiently reliable to obviate the need for biopsy. In addition, the excision biopsy is itself part of the treatment, so however good the test, the patient still needs the operation. How about predicting prognosis? Again, the urine tests are good, but not as good as pathological study of the removed tumour, so once more they add nothing. Given the above, the correct measure of a diagnostic test is its effect on outcomes – can it spare patients invasive procedures, or predict which of a range of treatment options is best? Demonstrating this requires large-scale trials similar to those needed to license a drug, which is why so few tests or markers are established in the clinic as decision aids.
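The reason a ‘good’ urine test cannot spare a cystoscopy can be made quantitative with Bayes’ theorem: even a negative result on a reasonably accurate test rarely pushes the probability of cancer low enough to skip the definitive examination. The sensitivity, specificity, and pre-test probability figures below are hypothetical, chosen only to illustrate the logic:

```python
# Updating the probability of disease after a test result (Bayes' theorem).
# All numbers are hypothetical, for illustration only.
def post_test_probability(pre_test, sensitivity, specificity, result_positive):
    """Return the probability of disease given the test result."""
    if result_positive:
        p_result_if_disease = sensitivity          # true positive
        p_result_if_healthy = 1 - specificity      # false positive
    else:
        p_result_if_disease = 1 - sensitivity      # false negative
        p_result_if_healthy = specificity          # true negative
    numerator = p_result_if_disease * pre_test
    return numerator / (numerator + p_result_if_healthy * (1 - pre_test))

# Suppose a patient has a 20% pre-test chance of bladder cancer, and the
# urine test has 75% sensitivity and 85% specificity.  After a NEGATIVE result:
p = post_test_probability(0.20, 0.75, 0.85, result_positive=False)
print(f"residual probability of cancer: {p:.1%}")  # still several per cent
```

A residual risk of several per cent is far too high to send the patient home without a cystoscopy – which is why, in outcome terms, the test adds nothing to management.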