Industrialized research in the BJCP: A neo-Luddite view

Background

In the 18th century cloth was made by an artisanal process using spinning wheels and hand looms. Clothes were therefore expensive and of generally low quality, and they were coloured with vegetable dyes, which faded quickly. The industrial revolution (actually a gradual process starting around 1750 and taking more than a century) changed all that. Mechanization of the equipment and an explosion of possibilities for communication and logistics led to mass production, goods of high quality at low prices and unprecedented economic growth in virtually all areas.

The industrial revolution started in England, helped by the period of peace and stability following the unification of the Kingdoms of England and Scotland in 1707, the development of contract rules and a common (and enforced) legal system to deal with conflicts. This environment was essential for reliable interactions between subcontractors, suppliers of semi-finished products, final manufacturers and transport companies.

The demand for high quality and relatively low priced textiles stimulated the chemical industry, first in the inorganic synthesis of products such as bleaching powder and subsequently in the organic synthesis of aniline dyes, whose discovery, patented in 1856, was entwined with the birth of the pharmaceutical industry. The organic chemist William Perkin, while attempting to synthesize quinine from coal tar to save British soldiers dying from malaria in India, wound up with a deep brown sludge that stained his shirt a ‘lustrous purple’. He named the new chemical entity ‘mauveine’. This was commercial dynamite, because previously the shade that resembled it most closely, known as Tyrian purple, could only be obtained from a Mediterranean mollusc, and it took ‘an awful lot of them to make a ballgown’ [1]. Later the commercial dyestuffs industry, to which Perkin's discovery gave birth, more than repaid its debt to drug discovery when these developments were picked up by apothecaries such as Merck (whose company was founded in Darmstadt in 1668), and when chemical companies such as Bayer, Hoechst and Sandoz moved into pharmaceuticals [2].

Paul Ehrlich was impressed by the selectivity of his (eponymous) haematoxylin for cell nuclei and formulated the notion that drugs act selectively by combining with specific chemical targets within cells (summed up in his dictum: ‘corpora non agunt nisi fixata’). His insight that such chemical targets could differ between cells, especially between the cells of a host and an invading pathogen, and could hence underpin selective toxicity (the ‘magic bullet’), led to systematic searches for chemoselective drugs. Many of the valuable drugs that followed stemmed from coloured substances; for example, classical antihistamines (H1-receptor antagonists such as mepyramine) and antipsychotic drugs (such as chlorpromazine) are based on the methylene blue (methylthioninium chloride) nucleus. However, serendipity continued to play a major part: Gerhard Domagk demonstrated in 1935 that the red dye Prontosil rubrum was effective in a mouse model of bacterial infection, but Prontosil rubrum turned out to be an inactive prodrug, whose active metabolite (sulfanilamide) is colourless. If Domagk had used the much simpler experimental screening test of assaying antibacterial activity against bacteria grown in vitro, the sulfonamides might1 have languished on the shelf and twentieth century history might have taken a different course!

Although the industrial revolution is the basis of much of the current wealth of economically advanced societies, it has also had less desirable consequences. Unskilled labourers worked in squalid conditions to perform the industrialized tasks previously carried out by skilled craftsmen. Child labour became commonplace and had to be ended by law. Industrialization did not happen without protest. Ned Ludd, a semi-mythical figure representing the now jobless artisans, inspired a protest movement that reacted to these developments by smashing the mechanized equipment. Although understandable, this was not a good idea and the Luddites were severely repressed.

Industrialization of the process of drug development

Perhaps you recognize a parallel in the clinical trial. Until the mid-1980s this was largely an artisanal affair run by a single clinician and some assistants. There were no protocols, and until the 1980s many institutions had no research ethics committee or institutional review board (IRB). If they did, the committee was not regulated by law.

The emergence of Good Clinical Practice (GCP), inspired partly by a number of momentous fraud cases, and the adoption of the rules by the International Conference on Harmonisation (ICH) in Europe, the USA and Japan, have provided the framework for increasing international collaboration in large trials. The need for such trials was of course recognized much earlier, notably by Richard Peto, Peter Sleight and Rory Collins, who founded the ISIS group [3]. These trials provided spectacular results. For instance, the ISIS-2 trial, which demonstrated a 20% survival benefit after acetylsalicylic acid in myocardial infarction, made this inexpensive medicine part of the management of 90% of patients with acute myocardial infarction worldwide within a year [4]. These groups made their own rules for assuring quality, but also made sure the data collection was hyper-efficient and focused on the information needed to answer the scientific question, restricting themselves to simple end points, such as mortality, which were easily detected, allowing the large-scale trials that were necessary to detect relatively small effects. Overheads attributable to process were still very limited and the costs of such trials extremely low.

These trials preceded the development of the blockbuster drugs of the 1990s, and it rapidly became necessary to perform trials of a size that exceeded national capacity. Most big pharmaceutical companies stopped performing such trials themselves and joined the big outsourcing trend. This favoured Contract Research Organizations (CROs), which grew from small operations with perhaps 50 employees to stock market-quoted multinationals, sometimes with over 10 000 staff and larger than at least some of the clients they work for.

As the process of doing clinical research increased in complexity, so did the rule systems to assure patient safety and data integrity. Inevitably, management consultants became interested. They quickly convinced pharmaceutical managers that the best way to handle a complex process is to cut it up into logical pieces that can be managed by specialists, a process analogous to Adam Smith's famous observations on pin manufacture in 1776. This led most companies to divide clinical research into operational and scientific departments. The scientists – stereotypically a rather messy lot – could dream up trial designs, which would then be handed over to the operational guys – three pens of different colours in the breast pocket and clean desks – who would arrange for the trial to be performed by investigators in many countries. We regard this as the industrialization of the clinical trial, and although not as revolutionary as the industrial revolution it did have many consequences, not all of them benign.

Such arrangements functioned well for the trials of the blockbuster drugs of the 1990s, but in retrospect none of these trials, important as they were, was scientifically very complex. Statins and ACE inhibitors are largely well-behaved medicines that can be given in relatively wide dose ranges without too many problems. Measurement of end points was largely confined to mortality and required little technology other than a request to the death registry.

The original ISIS trials were performed by clinicians and scientists interested in the basic question of the trial, and the ISIS-2 trial, in 17 000 patients, did not cost much more than £1 million. A similar trial today would probably cost well over £100 million. There are many reasons for this increase in costs, which are beyond the scope of this editorial, but it does raise the question of what, if any, scientific advantages have been gained from this expenditure [5].

Not all trials are equal, and some are certainly more equal than others, to misquote George Orwell. Some clinical drug research remains largely artisanal. This is particularly true of the highly complicated clinical pharmacology research of drug prototypes [6], often characterized by a lack of knowledge about the link between the biological mechanism of a drug and its clinical effects. To add to the uncertainty, many truly prototypic drugs lack biomarkers to predict or quantify this link.

Research into such compounds requires integration of multidisciplinary knowledge and process with highly efficient (and rapid) interchanges between the ‘silos’. Such research is poorly suited to the industrialized model of drug study organization, unless it is enhanced by individual clinical pharmacological critique and overview. This limitation is well recognized by scholars in the field of organization and project management [7]. This was perhaps the reason why most of the major innovative drug companies in the 1980s and 1990s performed prototype studies in house. The first and second subjects to receive the antiepileptic drug lamotrigine were the senior physician leading the project (Tony Peck) and the chemist who designed it (Dave Sawyer) in the Wellcome Clinical Pharmacology unit. The first dose of lamotrigine was administered by a future European editor of the BJCP, who blithely ignored the slight changes in the physician's electrocardiogram and the chemist's stress-induced attack of migraine. Unprofessional as this may sound, lamotrigine was developed with very innovative trial methods, even in the very first studies in humans [8].

Despite such successes, most pharmaceutical company units were subsequently devoted to more routine non-prototype work, and the early development of new drugs was generally outsourced to phase I units in CROs rather than to academic institutions. This led in turn to the industrialization of this segment of the development process. From then on (with increasingly rare exceptions) protocols were written by drug companies. Data management and analysis were done by the company or its consultants. Scientific papers were written by the company, leading to the current situation, in which many CROs have such a small number of publications per investigator that they would be immediately closed by any even remotely critical university (as distinct from regulatory or business) committee.

Is this a problem, and should we even contemplate smashing the mechanical spinning machines and steam-driven looms of the trials industry and going the way of Ned Ludd? This question was largely resolved by the widely publicized case of the CD28 ‘superagonist’ TGN1412, possibly the best remembered code number of a new drug since Ehrlich's ‘compound 606’ (and even that is better remembered as Salvarsan or arsphenamine). TGN1412 was undoubtedly prototypical and had biological properties in the field of immunology that certainly were not familiar to many phase I clinical pharmacologists at the time. The underlying science was excellent, but that was virtually the only thing done by the company itself. Contractors did the pharmacology and the toxicology and produced the material, and the regulators approved the data package. Finally, a phase I unit administered probably the biggest overdose ever given as a first dose to humans, with terrifying results for the subjects, the end of a potentially interesting therapeutic route and the demise of a small and innovative scientific company.

The authors of the final report [9] on the investigation of this disastrous episode made several criticisms, but did not allocate blame, as all the parties involved had done what they were required to do. Perhaps no one was to blame other than the system of industrialization, which did not assure a final expert view of the whole rather than the parts. The regulatory authority responsible for the report did not consider that such an integrative view of the risk belonged with it, and perhaps it does not, and should instead stay with the investigator.

We have recently had a number of experiences in the Journal that, in the light of all this, make us concerned that the process of industrialization of the early development trial in humans has gone too far. For example, we sent a paper for peer review to one of our very respected and experienced reviewers, who returned it because he had done the work as the clinical investigator. His name was not on the author list or in the acknowledgements. We now see many papers in which the clinical investigator is not an author and sometimes is not even acknowledged, and this is of great concern.

Authors' responsibilities

The International Committee of Medical Journal Editors (ICMJE) has referred to authors in the following terms:

‘An author must take responsibility for at least one component of the work, should be able to identify who is responsible for each other component, and should ideally be confident in their co-authors' ability and integrity.

Authorship credit should be based on 1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data, 2) drafting the article or revising it critically for important intellectual content and 3) final approval of the version to be published. Authors should meet conditions 1, 2 and 3’.

A clinical investigator of a study, according to the ICH GCP guideline, ‘should be qualified by education, training and experience to assume responsibility for the proper conduct of the trial, should meet all the qualifications specified by the applicable regulatory requirement(s) and should provide evidence of such qualifications through up to date curriculum vitae and/or other relevant documentation requested by the sponsor, the institutional review board (IRB)/ethics committee (EC) and/or the regulatory authority(ies)’ (our emphasis).

These rules were devised to ensure that anyone whose name appears among the authors of a paper is suitably qualified. They were never intended to ensure that those who are so qualified should automatically be credited as authors. However, we consider that a clinical investigator who meets these requirements must surely qualify for consideration of authorship. Conversely, if a clinical investigator is not qualified to be an author, this implies, in our opinion, a case for considering whether this represents a possible deviation from GCP. We recognize that there are facilities in which medical personnel do little more than take blood and transmit electrocardiograms for interpretation to an offshore cardiology CRO, but we are convinced that such lack of engagement is wrong and dangerous. It affects the safety of trial subjects and also the future of drug development. In the next era, clinical pharmacologists involved in clinical experimentation will be faced with a plethora of exciting new and prototypical mechanisms of action. We need to learn from TGN1412, the archetype of such compounds, that this cannot be managed within a narrow industrial model. The responsible clinical staff involved should be able to read and interpret preclinical information, write protocols, evaluate biomarkers, statistics and pharmacodynamics, and write scientific papers. Such skills are perhaps even more important for the integrity of the data and the safety of the subject than the ability to perform advanced life support.

We consider crediting such individuals with authorship to be of such importance that we will in future reject papers in which the clinical investigator is not an author, because such studies are done under conditions that are unacceptable for the proper conduct of research in humans. Some of our readers may find this position unnecessarily harsh. We invite comments for a discussion of this position and will publish your opinions.

Competing Interests

JR is a senior research physician at Quintiles Drug Research Unit at Guy's (a phase I contract research organization).

Footnotes

  1. The story is complicated, since sulfanilamide had already been patented in 1909, and it was therefore potentially more profitable to use Prontosil. See Julius Comroe's paper ‘Missed opportunities’ (Am Rev Respir Dis 1976; 114(6): 1167), reprinted in ‘Retrospectoscope’ (Von Gehr Press, 1977). It is possible that IG Farben knew that sulfanilamide had antibacterial properties.
