CANCER NEW DRUG DEVELOPMENT PRE-CLINICAL PHASE
Before a drug can be administered to human subjects, a further phase of pre-clinical testing is required – toxicity testing. While animal models and cell culture provide valuable indications of whether a drug may be active in man, they do not tell us whether it is safe. We also need to know whether drug levels high enough to realistically have an impact on the cancer can be achieved in patients. The standard way of exploring this is to give escalating drug doses to groups of animals until animals start to die from drug side effects. There are a number of rather grisly standard measures, such as the dose of the drug that will kill a given proportion of the test subjects – termed the lethal dose (LD). Measures such as the LD50 (the dose that kills 50% of the animals) and the LD10 (a 10% death rate) are widely used and attract much controversy from anti-vivisection groups. I don’t propose to examine the ethics of animal testing per se – it seems to me to be something you believe is right or believe is wrong, and if you believe it is wrong, no amount of argument is likely to alter your opinion. I do believe it is worth critically examining the scientific basis of animal testing to try to minimize unnecessary suffering. There are obvious problems with LD50 testing – for example, the LD50 for a given compound varies widely between species, so human subjects may still be exposed to risk. Nonetheless, compounds that prove very toxic in LD50 tests at doses well below the necessary therapeutic levels are unlikely to be safe or worthwhile to test in humans. Whatever the rights, wrongs, and limitations of pre-clinical toxicity testing, at present regulatory authorities require such testing in at least two species, one of which must be a non-rodent species such as the dog, before any human testing of a drug can begin.
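To make the LD50 idea concrete, the sketch below estimates the 50% mortality point from hypothetical dose-group data by simple linear interpolation. The dose levels, group size, and death counts are invented for illustration; real toxicology uses formal probit or logistic dose–response analysis rather than this crude interpolation.

```python
def interpolate_ld50(doses, deaths, group_size):
    """Estimate the LD50 by linear interpolation between the two dose
    groups whose observed mortality brackets 50%. Doses must be in
    ascending order; deaths[i] is the number of deaths at doses[i]."""
    fracs = [d / group_size for d in deaths]
    for i in range(len(doses) - 1):
        f0, f1 = fracs[i], fracs[i + 1]
        if f0 <= 0.5 <= f1 and f0 != f1:
            # interpolate between the two bracketing dose levels
            return doses[i] + (0.5 - f0) * (doses[i + 1] - doses[i]) / (f1 - f0)
    return None  # mortality never crosses 50% in the tested dose range

# Hypothetical experiment: groups of 10 animals at four dose levels (mg/kg)
doses = [5, 10, 20, 40]
deaths = [0, 2, 6, 10]   # observed deaths per group
print(interpolate_ld50(doses, deaths, 10))  # roughly 17.5
```

Mortality rises from 20% at 10 mg/kg to 60% at 20 mg/kg, so the 50% point falls three-quarters of the way between those doses.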
Having produced a candidate drug and completed the necessary pre-clinical testing package, the next step is testing in human subjects. Logically enough, this is termed a phase 1 trial. For many drugs, for example blood pressure pills, this testing will take place in ‘normal’, usually paid, volunteers. In general, these will be fit young men (not women, due to the risk of inadvertent damage to a foetus). For cancer drugs, which are often very toxic and frequently carcinogenic, this is clearly not an appropriate route, and phase 1 trials usually take place in patients who have exhausted standard treatment options. The classical phase 1 trial format is that an initial three patients are treated at a conservatively low dose and the effects observed. If no unacceptable toxicity occurs, then a further three patients will be treated at a higher dose, and so on. Clearly, for most drugs a dose level will eventually be reached at which unacceptable side effects occur (termed ‘dose-limiting toxicity’, or DLT). If one of the three patients experiences a DLT, a further three patients are treated at the same dose level. If two or more out of the six experience a DLT, then the dose has exceeded what patients can tolerate and the trial ends; the dose level below is declared the ‘maximum tolerated dose’ (MTD) and used for further study. The classical phase 1 trial has the merit of simplicity, but there are clear limitations as well. Firstly, different patients vary in their susceptibility to potentially dose-limiting side effects: if the trial happens to include too many side-effect-prone patients, the estimated maximum tolerated dose will be too low, and vice versa. Secondly, not all drugs need to be used at the maximum tolerated dose. For example, a drug blocking a hormone receptor only needs to be given in sufficient quantity to block its target; any additional drug above this level adds toxicity with no benefit.
For trials of drugs of this sort, it is therefore important to specify an endpoint other than the maximum tolerated dose, to avoid exposing participants to unnecessarily high doses.
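The classical escalation scheme described above – often called the ‘3+3 design’ – is simple enough to sketch in a few lines of Python. The dose levels and DLT outcomes here are purely hypothetical, and this is only a skeleton of the decision rules, not a trial protocol; real trials add many refinements on top of it.

```python
def three_plus_three(dose_levels, dlt_outcomes):
    """Apply the classical 3+3 escalation rules to recorded outcomes.

    dlt_outcomes[i] is a list of booleans, one per patient treated at
    dose_levels[i] (True = dose-limiting toxicity); supply six entries
    where the cohort was expanded. Returns the recommended dose for
    further study, or None if even the lowest dose proved too toxic.
    """
    for i, dose in enumerate(dose_levels):
        outcomes = dlt_outcomes[i]
        dlts = sum(outcomes[:3])           # first cohort of three
        if dlts == 0:
            continue                       # no DLTs: escalate
        if dlts == 1:
            dlts += sum(outcomes[3:6])     # expand cohort to six
            if dlts == 1:
                continue                   # 1/6 DLTs: still escalate
        # two or more DLTs: this dose is too toxic; the level below is the MTD
        return dose_levels[i - 1] if i > 0 else None
    return dose_levels[-1]  # escalated through all planned levels

# Hypothetical trial: four planned dose levels
doses = [10, 20, 40, 80]
observed = [
    [False, False, False],                      # 0/3 DLTs at 10: escalate
    [True, False, False, False, False, False],  # 1/6 DLTs at 20: escalate
    [True, False, True],                        # 2/3 DLTs at 40: stop
    [],                                         # 80 never reached
]
print(three_plus_three(doses, observed))  # 20
```

In this invented run, the 40 dose level exceeds tolerability, so the level below it, 20, is recommended – exactly the stopping rule the text describes.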