Venous thrombosis is a frequent disease with annual incidence rates varying from 1 case per 10 000 young adults to 1 per 100 elderly persons (Anderson et al, 1991; Nordstrom et al, 1992). It is surprising therefore that no case truly compatible with a diagnosis of venous thrombosis was apparently reported in antiquity. Extensive searches made by Hull (1800), Anning (1957), Popkin (1962) and Dexter (1973) found no case that could be reasonably attributed to a venous thrombus in the writings of Hippocrates, Galenus, Celius Aurelianus, Ibn an-Nafiz, Avicenna and others. Venous thrombosis is not among the many diseases mentioned in the Bible (Bennet, 1891). The term ‘leucophlegmasia’, first used by Hippocrates and then by Celius Aurelianus, refers to cases of bilateral leg oedema, probably as a result of conditions such as heart failure, liver cirrhosis and renal insufficiency. Nothing compatible with a diagnosis of venous thrombosis can be found in pieces of art from ancient Egypt, Greece, Rome, Persia and South America. Although these sources sometimes included representations of varicose veins and ulcers, unilateral leg oedema or other pictures compatible with venous thrombosis were not featured. Particularly surprising is the absence in antiquity of descriptions of venous thrombosis during pregnancy and in the post-partum period, because it is difficult to imagine how unilateral swelling of a leg accompanied by signs of inflammation could have escaped the attention of the concerned team attending labour (the midwife, the mother and other relatives of the pregnant woman).
According to Dexter & Folch-Pi (1974), the first well-documented case of venous thrombosis is depicted in an illustrated manuscript written in the 13th century and currently preserved in Paris at the Bibliothèque Nationale (MS Fr 2829, Folio 87) (Fig 1). The manuscript describes the case of a young man from Normandy named Raoul who, at the age of 20 years, developed unilateral oedema of the right ankle that subsequently extended up to the thigh, with no obvious symptoms in the contralateral leg. The case illustrated in the manuscript is reported in more detail in an essay by de Saint Pathus (1932) on the miracles of Saint Louis (Louis IX), included in Fay's (1932) collection of Classiques Français du Moyen Age. From the description, it is clear that, during the course of his illness, the young man developed a septic leg with ulcers and fistulae and that the great and much renowned surgeon Henri du Perche advised him to wait and see. This conservative attitude, at a time when surgery was the main approach to illnesses, indicates that what is now common knowledge was already understood, i.e. that there is little place for surgical intervention in the treatment of venous thrombosis. The subsequent course of the illness was characterized by further deterioration, with exposure of bone from the ulcers and fistulae and clear manifestations of gangrene. Raoul was advised to visit the tomb of Saint Louis, who was buried in the church of Saint Denis, where the patient spent several days confessing his sins and praying to the saint. Afterwards he chose to collect the dust accumulating below the stone that covered the tomb and apply it to the fistulae and ulcers of his foot. The openings stopped running and were filled with flesh. He was at first obliged to use crutches, but subsequently he could walk with a cane and was eventually able to dispose of all devices, even though his foot throbbed a little.
Raoul was cured as described above in the year 1271 and was still alive and well in 1282. With our current knowledge, it is not certain that this was a bona fide case of venous thrombosis. Septic complications are not common features of this condition but, at that time, infections were much more prevalent than now. Whether or not this is the first case of venous thrombosis, it is clear that, without its religious implications, it would not have been reported.
History of pathogenesis
It is now well established that venous thrombosis is caused by a combination of three factors, i.e. damage to the vein, blood stasis and hypercoagulability. It is also known that thrombosis is more frequent in association with circumstantial risk factors such as pregnancy and delivery, surgery, cancer and other medical illnesses, and with inherited and acquired factors that cause hypercoagulability. The development of our current knowledge on the pathogenesis of venous thrombosis took place over several centuries.
The first detailed description of venous thrombosis after delivery was made in England by Richard Wiseman (1676), Sergeant-Chirurgeon to King Charles II. In a chapter of ‘Severall Chirurgicall Treatizes’ he wrote about the wife of a pharmacist who, after a difficult labour, developed swelling and pain of the right leg, extending from the knee to the hip, with no inflammation or discolouration of the skin. Wiseman's description of this case is notable not only because it puts forward the concept of the proximal propagation of a leg venous thrombus, but also because Wiseman surmised that thrombus formation was caused by a systemic alteration in circulating blood, thereby pioneering the concept of hypercoagulability. Until the end of the 18th century, it was held that venous thrombosis associated with pregnancy and delivery was caused by retention in the legs of ‘evil humors’, which determined a reflux of blood. For instance, the famous French surgeon Ambroise Paré, who lived in the 16th century, believed that swelling of the legs during pregnancy was caused by the retention and concentration of menstrual blood (Johnson, 1678). Incidentally, Ambroise Paré was probably the first to describe superficial thrombophlebitis as a complication of varicose veins, as he wrote ‘They often swell with congealed and dryed bloud and cause pain which is increased by going and compression’ (Johnson, 1678). Another widely held view was that post-partum thrombosis was caused by the retention of unconsumed milk in the legs (‘engorgements laiteux’) (Levret, 1766). The short but well-documented and detailed publication of Findley (1912) should be read by those who are interested in one of the earlier modern reviews on venous thrombosis associated with pregnancy and the puerperium.
With the exception of the putative first case of venous thrombosis in the apparently healthy young man described by de Saint Pathus (1932), the early descriptions of this condition were mostly related to childbirth. We have to wait until the 19th century and the seminal work of Armand Trousseau (1866) for the first documented case of the association of venous thrombosis with cancer, known to be one of the most frequent predisposing conditions. His observation had to wait nearly 70 years to be confirmed and extended by Sproul (1938), who reported a high frequency of venous thrombosis during the post-mortem examination of patients who died of various malignancies, most notably pancreas carcinoma. According to De Bakey (1954), a pioneer in cardiovascular surgery who reviewed the early literature, the well-known association between venous thrombosis and surgery was first recognized by Spencer Wells in 1866. More modern, seminal studies are those of Gunnar Bauer (1942), who called attention to the frequency of venous thrombosis especially after fractures of the legs, and Byrne (1955), who investigated as many as 748 cases and demonstrated that the post-operative state was the second most common predisposing factor (pregnancy and puerperium ranking first), particularly in cancer surgery, in operations involving the pelvis and with fracture of the legs. That medical illnesses, particularly if long-lasting and associated with prolonged immobilization in bed, are a risk factor for venous thrombosis was first established by Ferrier (1810), who noted that the condition occurred not only post partum but also during debilitating infectious diseases such as typhus. According to Lockwood (1951), however, it was as early as the beginning of the 15th century that Ugo Benzi of Siena described venous thrombosis occurring during a long illness accompanied by fever in a man from Novara named Jacobus Manni. Swelling involved both legs, and it is not known whether conditions other than venous thrombosis may have caused it.
The Virchow triad
In 1628 William Harvey demonstrated the circulation of the blood for the first time in ‘De Motu Cordis’. Hunter (1793) (Fig 2) and his disciples Matthew Baillie (1793) and William Hewson (1846) abandoned the theory of retention of humors and hypothesized that venous thrombosis was caused by closure of the veins by blood clots. They thought that slowing of the circulation of the blood was a prominent pathogenic mechanism and that thrombus formation was caused by the presence of a ‘coagulable lymph’ in plasma, a substance later called fibrinogen. Hull (1800) provided the first review of the literature on venous thrombosis, which he was the first to call ‘phlegmasia dolens’, and hypothesized that coagulation of the ‘lymph’ was caused by inflammation. Another step forward towards the understanding of the pathophysiology of venous thrombosis was made in the second half of the 19th century through the seminal work of Rokitansky (1852) and, particularly, Virchow (1860) (Fig 3). On the basis of pathological observations mainly made in fatal cases of post-partum thrombosis, they independently presented the famous triad still considered to comprise the main factors in the pathogenesis of venous thrombosis, i.e. damage to the vein, slowing of venous flow and changes in the blood leading to an increased tendency to form clots (hypercoagulability).
In post-partum phlegmasia dolens, the epitome of venous thrombosis at that time, damage to the vein was related to manipulation of the uterine veins, in which thrombosis was thought to begin; stasis was attributed to confinement in bed and to the process of parturition itself; and hypercoagulability to bleeding, which, as early as 1604, had been observed to increase blood coagulability (Roderiguez Castro, 1604, cited by Anning, 1957). However, the biochemical and molecular basis of the third component of the Virchow triad, hypercoagulability, remained only partially understood until recently. It has been known for a long time that, in situations associated with an increased risk of venous thrombosis (pregnancy and the puerperium, oestrogen use, the post-operative state and cancer), there is an acquired increase in the plasma concentration of a number of coagulation factors (particularly factor VIII and fibrinogen), often accompanied by an impairment of a defensive system such as fibrinolysis. However, it was only in 1965 that the existence of genetic factors increasing the risk of venous thrombosis through the induction of hypercoagulability was first given a biochemical basis, when Egeberg (1965) reported a reduction of antithrombin to half-normal levels, closely linked with the thrombotic tendency, in a Norwegian family characterized by venous thrombosis occurring at a young age and with a tendency to recur. This finding immediately appeared biologically plausible, because antithrombin is a naturally occurring anticoagulant protein that inactivates the main coagulation enzymes (thrombin, activated factor X, activated factor IX and activated factor XI). Antithrombin deficiency, however, explained only a small proportion of cases of venous thrombosis, being present in as few as 0·1% or less of patients with a first episode of this disease (Allaart & Briët, 1994).
A step forward was made in the early 1980s, when the American groups of investigators led by John Griffin (Fig 4) and Charles Esmon independently showed that protein C and protein S deficiencies were inherited risk factors for venous thrombosis (Griffin et al, 1981; Comp & Esmon, 1984; Schwartz et al, 1984), together accounting for approximately 0·5% of first episodes of the disease (Allaart & Briët, 1994). The active enzymatic form of protein C, with its cofactor protein S, inactivates the activated forms of coagulation cofactors V and VIII, making it biologically plausible that deficiency of protein C or protein S leads to decreased inactivation of these coagulation cofactors and, ultimately, to a hypercoagulable state.
Despite these discoveries on the genetic basis of hypercoagulability and the associated risk of thrombosis, the great majority of cases of venous thrombosis, particularly those occurring in the absence of circumstantial risk factors, remained unexplained. Dahlback et al (1993) in Malmö (Sweden) (Fig 5) demonstrated a relationship between an inherited resistance of plasma to the anticoagulant action of activated protein C and the development of venous thrombosis in patients with a history of the disease, a finding confirmed by Griffin et al (1993), Koster et al (1993) and Faioni et al (1993). The next year, Rogier Bertina (Fig 6) and his team in Leiden discovered that the inherited resistance to activated protein C was associated with a missense mutation in the gene encoding coagulation factor V, which dramatically slowed the cleavage of the activated form of this cofactor by activated protein C (Bertina et al, 1994). The mutation, called factor V Leiden, leads to a gain of function of activated factor V, which in turn causes a hypercoagulable state (Kalafatis et al, 1994, 1995). The striking finding was that activated protein C resistance resulting from heterozygosity for the factor V Leiden mutation is present in approximately 20% of patients who present with a first episode of venous thrombosis (Koster et al, 1993) and that, in populations of European descent, the background frequency of the mutation is 2–3% or higher.
The reasons for the high frequency of the Leiden mutation, which apparently originates from a single founder, are still only partially understood but have led to interesting evolutionary hypotheses. Hypercoagulability associated with the mutation might have conferred advantages to men in the ‘fight or flight’ pattern of primitive life and favoured women by causing less blood loss at the time of parturition. The discovery of activated protein C resistance and of the factor V Leiden mutation has been a fundamental breakthrough in clinical medicine, as witnessed by the fact that published papers related to these findings have been quoted more than 2000 times in 8 years. After the original description of the factor V mutation, the Leiden group led by Bertina made further important contributions by identifying another gain-of-function mutation in another coagulation zymogen, factor II or prothrombin, associated with hypercoagulability and thrombosis (Poort et al, 1996). The substitution of one nucleotide in the 3′ non-coding region of the gene leads to an increased potential for the formation of the enzyme thrombin from the zymogen and an increased risk of venous thrombosis. Overall, the two aforementioned inherited abnormalities associated with hypercoagulability help to explain, alone or in association with circumstantial risk factors, approximately 25–30% of first, unselected cases of venous thrombosis and 60–70% of cases of recurrent thrombosis and of thrombosis in the young. Among the acquired risk factors associated with hypercoagulability, the antiphospholipid antibody syndrome, and particularly the lupus anticoagulant, add substantially to the genetic factors in the modern understanding of hypercoagulability. The pioneer of this concept, Richard Wiseman, will smile, satisfied, in his grave.
History of diagnosis
It has been known for a long time that clinical symptoms and signs are of little help in the diagnosis of venous thrombosis of the legs because they lack both sensitivity and specificity. This applies to calf tenderness, pain on dorsiflexion of the foot (Homans' sign), increased skin temperature, ankle and calf oedema, and superficial venous dilatation. Contrast venography, the first objective test and still the gold standard in the diagnosis of venous thrombosis, was extensively investigated by Bauer (1940), Dougherty & Homan (1940) and Starr et al (1942). However, it took time before contrast venography was widely applied, despite awareness of the poor reliability of clinical diagnosis. De Bakey (1954), for instance, wrote in his review article that ‘experience has shown that a significant proportion of patients with this disease do not have signs’. The lack of specificity and sensitivity of clinical diagnosis was definitively demonstrated by Haeger (1969), taking phlebography as the reference method. Because a suspected clinical diagnosis of venous thrombosis was objectively verified in only 46% of the patients hospitalized and treated for venous thrombosis, Haeger concluded that clinical signs could not be used for diagnosis, nor could thrombosis be excluded by their absence. The fallacy of clinical diagnosis, the need for an accurate diagnosis before starting a treatment not free of serious side-effects such as anticoagulants, and the difficulties associated with the widespread use of contrast phlebography as a diagnostic method (invasiveness, pain, superficial thrombophlebitis, difficult interpretation of the results) gave the impetus in the 1970s to the development and validation of a number of objective and non-invasive diagnostic methods, using phlebography as a reference. Those best validated and most widely used in clinical practice are impedance plethysmography and compression ultrasonography, the latter alone or combined with pulsed Doppler (Duplex) scanning.
After a period of popularity that followed the original independent development of the 125I–fibrinogen uptake test (Browse, 1972; Kakkar, 1972), this method, mainly used for the diagnosis of asymptomatic venous thrombosis in individuals at risk in the post-operative period or during medical conditions such as stroke and myocardial infarction, fell into disrepute because it was shown not to be as sensitive as originally predicted (Lensing & Hirsh, 1993). In addition, the fear that the infusion of plasma-derived fibrinogen might transmit blood-borne infections led to its abandonment during the acquired immune deficiency syndrome (AIDS) epidemic.
The criteria that have been used to validate plethysmography and ultrasonography were based on the parallel performance of the test under evaluation and the reference method (contrast phlebography) in consecutive patients with a suspected diagnosis of venous thrombosis, the results of each test being interpreted without knowledge of the clinical findings of the patients. The final clinical validation of the tests stemmed from the long-term clinical follow-up of patients in whom anticoagulant treatment was withheld upon finding a normal result. Overall, several studies have shown that plethysmography and ultrasonography are sensitive to the presence of occlusive venous thrombi involving the veins of the knee and thigh, but are poorly sensitive to non-occlusive thrombi of the upper leg or to thrombi confined to the calf vein, and that withholding anticoagulant treatment on the basis of a negative diagnosis is safe in terms of the occurrence of venous thromboembolic complications (Hull et al, 1985; Lensing et al, 1989; see also the review of Prandoni & Mannucci, 1999).
At this time, compression ultrasonography with or without the associated use of pulsed Doppler scanning is the most widely used test in Europe to diagnose patients with symptoms and signs that suggest the presence of venous thrombosis. In the USA and Canada, impedance plethysmography is still used, but it is recognized that the instrumentation needed is less simple than that used for ultrasonography. There are still partially resolved or unresolved problems in the field of the diagnosis of venous thrombosis of the legs. None of the above methods permits us to accurately diagnose the asymptomatic thrombosis that occurs frequently in patients undergoing orthopaedic and cancer surgery. At this time, phlebography is still the only method accurate enough in this instance. It is also difficult to diagnose recurrent venous thrombosis if it recurs in the leg affected by the first episode, and phlebography is no more accurate than non-invasive methods in these situations. This is a serious limitation in clinical practice, because documentation of recurrence usually leads to the implementation of a demanding and risky treatment such as the lifelong administration of anticoagulant drugs.
History of anticoagulant treatment
It was only in the late 1930s that effective anticoagulant treatment began. The development in 1935 of pure preparations of heparin to be given by injection was followed soon after by the discovery of dicoumarin, the first oral anticoagulant. Heparin is now the main anticoagulant drug for the initial treatment of venous thrombosis, whereas oral anticoagulants only take effect after several days. It is a surprising fact that, after 60 years, variants of these two drugs remain the dominant form of treatment and are now used on an increasing scale. Other types of antithrombotic and anticoagulant treatment, including antiplatelet drugs, snake venoms, direct and indirect thrombin inhibitors, activated protein C and recombinant hirudins, have since been introduced. To date, none have had a similar impact, despite vast sums spent by the pharmaceutical industry.
‘Blood thinners’ given by mouth were first proposed in ancient Greek medicine by Hippocrates. Extracts from many plants, leaves, stems and roots have been recommended from time to time. Blood letting, leech bleeding, acid fruits and clear wines were also considered. Haycraft (1884) identified the anticoagulant in the saliva of the European medicinal leech, which he named hirudin, but his extract was too toxic.
The discovery of heparin, the first clinically important anticoagulant, is universally attributed to McLean (1916) (Fig 7), who found that liver extracts contained a powerful anticoagulant. Howell, the Johns Hopkins physiologist, had accepted McLean for a research year to investigate the coagulant properties of tissue thromboplastin. McLean's studies revealed that not only were procoagulant thromboplastins present but also a powerful anticoagulant. Howell & Holt (1918) named McLean's anticoagulant ‘heparin’ from the Greek word for liver, and Howell (1925) later improved the method for its extraction. A tissue anticoagulant had in fact been described several years earlier in the French and German literature (Owen & Bowie, 1996). Schmidt-Mulheim (1880) found that the intravenous injection of peptone into a dog resulted in incoagulable blood. Doyon (1910) found that this anticoagulant could be induced in dogs by injecting atropine into the portal veins and Contejean (1896) found that the gastrointestinal tract also contained the anticoagulant. Clinical interest in the possibilities of heparin as an anticoagulant began with McLean's discovery. Howell (1925) showed that heparin needed a plasma cofactor (antithrombin). All the early preparations of heparin gave rise to severe toxic reactions and it was not until 1935, when Best (1959) in Toronto and Jorpes (1935) in Stockholm prepared pure heparin, that clinical use began with the first report by Crafoord (1939).
Commercial heparin was originally derived from beef lung, but is now mainly from porcine intestines. A pharmacopoeial definition is ‘calcium or sodium preparations of a sulphated glycosaminoglycan’. Heparin must be given by intravenous or subcutaneous injection, as it has little effect by mouth. Dosage requires regular laboratory control to maintain the target therapeutic interval of 1·5–2·5 activated partial thromboplastin time (APTT) ratio prolongation (or 0·2–0·7 anti-Xa units per ml). The degree of APTT ratio prolongation has been shown to be reagent and coagulometer dependent.
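The monitoring targets quoted above can be made concrete with a minimal sketch; the function and constant names below are hypothetical illustrations, not part of any cited monitoring protocol, and the thresholds are simply the intervals stated in the text.

```python
# Hypothetical helper: classify a heparin monitoring result against the
# therapeutic intervals quoted above (APTT ratio 1.5-2.5, or 0.2-0.7
# anti-Xa units per ml).

THERAPEUTIC_INTERVALS = {
    "aptt_ratio": (1.5, 2.5),
    "anti_xa_u_per_ml": (0.2, 0.7),
}

def classify(test: str, value: float) -> str:
    """Return 'below', 'therapeutic' or 'above' for a monitoring result."""
    low, high = THERAPEUTIC_INTERVALS[test]
    if value < low:
        return "below"
    if value > high:
        return "above"
    return "therapeutic"

print(classify("aptt_ratio", 2.0))        # therapeutic
print(classify("anti_xa_u_per_ml", 0.1))  # below
```

Because, as noted, the APTT response is reagent and coagulometer dependent, any such thresholds would in practice have to be validated locally for each reagent.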
The convenience of heparin administration has been improved in the last 15 years by the development of low-molecular-weight (LMW) heparin fractions that require no laboratory monitoring and can be given subcutaneously in most instances on a once-daily regimen instead of the 8- or 12-hourly dosing of conventional heparin. Conventional heparin has a MW of 5000–30 000 Da; LMW heparins have a MW of 4000–5000 Da. The fractions are prepared by depolymerization of conventional heparin by procedures that have little effect on anti-Xa activity but considerably decrease anti-IIa activity (Johnson et al, 1976). Like conventional heparin, they bind to antithrombin through the unique pentasaccharide sequence, which is present in less than a third of the molecules. They have a longer half-life and a more predictable anticoagulant response.
Discovery of oral anticoagulants
With the success of heparin, the need for an alternative that could be given by mouth was evident. This came from a very different source. On the prairies of the USA and of Canada in the 1920s, a new disease developed with cattle dying of uncontrollable internal bleeding. The cause was linked to animal fodder by Schofield (1924) (Fig 8). The staple diet of cattle in these areas of the USA and Canada had long been sweet clover hay. The storage of hay was primitive and often the sweet clover hay went bad. The ‘spoiled’ hay would normally be discarded, but farmers could not afford to do so in the 1920s. After cattle or sheep ate ‘spoiled’ hay, the disease slowly became manifest in about 15 d with impairment of blood clotting resulting in internal haemorrhage, killing the animal in 30–50 d.
Bleeding could be controlled by the withdrawal of ‘spoiled’ hay and by transfusion of fresh blood from healthy cattle. The clotting tests of the time indicated that the disorder was the result of a plasma ‘prothrombin’ defect and that its severity paralleled the ‘prothrombin’ reduction (Roderick, 1929). The story of the discovery of the causative agent, dicoumarin, was related in detail by Link (1959) (Fig 9) in a historical issue of Circulation. Late in 1932, Ed Carlson, a Wisconsin USA farmer, lost two young heifers and three cows, and other cattle were suffering from bleeding. He did not trust the local veterinary diagnosis and he set off from his home in a blizzard to drive the 190 miles to the local agricultural experimental station. In his old Ford van were a dead heifer, a milk can of unclotted blood and about 100 pounds of ‘spoiled’ sweet clover hay. When he arrived at the State Veterinary Scientist's headquarters in Madison, there was only one unlocked door: that of the laboratory of Karl Link in the Biochemistry Building, in which he dumped the carcass. Only 1 month previously, Link had become involved in the sweet clover problem. The smell of new-mown hay is now known to be caused by coumarin and the degree of bitterness in taste is a function of its content. Cattle preferred the less bitter sweet clover. Campbell et al (1940) showed that the Howell plasma clotting time test was too imprecise for the detection of sweet clover disease. The prothrombin time described by Quick (Fig 10) also left much to be desired. Link and colleagues spent a long time modifying the Quick test and breeding a rabbit colony susceptible to the disease.
Isolation and synthesis
Link and colleagues finally identified the anticoagulant in 1939, when the crystalline substance was isolated and proved to be dicoumarol (Campbell & Link, 1941). It resulted from the ‘spoiling’ of sweet clover hay by bacterial action during damp storage, the natural coumarin in the hay becoming oxidized to 4-hydroxycoumarin; two such molecules linked to form the bis-coumarin anticoagulant.
Clinical oral anticoagulation
Water-soluble vitamin K1 was then found to restore the low ‘prothrombin’ levels to normal and to reverse the coumarin defect. The way was then open for the first clinical report on oral anticoagulant therapy from the Mayo Clinic (Butt et al, 1941). Link's group went on to synthesize over 150 related compounds. Analogue no. 42 was found to be particularly active and was given the name warfarin, from the Wisconsin Alumni Research Foundation initials. It was first used as a rat poison, a term still favoured by patients! When US President Eisenhower had a coronary thrombosis in 1955, the anticoagulant drug he was given was warfarin. Warfarin has become the most widely used anticoagulant and is exclusively used in many countries.
The risk of bleeding is not unexpected in a treatment that reduces blood clotting factors. After 50 years of anticoagulation, bleeding risks are substantially reduced by better laboratory control and less intense dosage. Nevertheless, bleeding remains a constraint and the risk/benefit ratio has to be established for all clinical situations. Other complications include sensitivity reactions occurring mainly with related indanedione oral anticoagulants (e.g. Dindevan).
A major problem in anticoagulant administration has been in the prothrombin time (PT) test used for control of dosage. In this, thromboplastin (tissue factor) is obtained from a variety of animal species and tissues. In the early clinical studies by the American Heart Association (Wright et al, 1954), human brain tissue factor was used. This was usually home-made in each laboratory from fresh cadaver brains and extracted daily from refrigerated acetone-dried material. This was impossible to standardize. When commercial supplies of thromboplastin became available in the 1950s, there were immediate apparent advantages of ease of production and assumed standardization. They rapidly replaced home-made human thromboplastins but, unfortunately, instead of improving the safety of the treatment, had the direct opposite effect. Commercial reagents were not only from different animal species but also contained variable serum (and, hence, clotting factors VII and X) contamination. Consequently, they were relatively unresponsive to the depression of these coagulation factors by oral anticoagulants and there was poor agreement between results from different laboratories. As a result, the dose of warfarin varied between centres. These changes in the sources of thromboplastin in the 1950s and 1960s took place without the clinicians responsible for the anticoagulant dosage generally being aware of the problem. With the less responsive commercial thromboplastins, larger doses of anticoagulant had to be given to achieve the same target PT or PT ratio with, not surprisingly, a resultant increase of bleeding. The treatment therefore became less popular in the early 1960s and remained so for the next 20 years.
There was a great need for PT standardization. The first such programme was initiated in 1962 in the United Kingdom with the Manchester Regional Thromboplastin Scheme covering a population of five million based on a single human brain thromboplastin (Poller, 1964). Manchester Reagent was adopted by the British Society of Haematology in 1969 as the first national reference reagent and was designated British Comparative Thromboplastin (BCT). This provided the basis for the British System for Anticoagulant Control. BCT was externally monitored under the auspices of the British Committee for Standards in Haematology and results were reported as British Ratios, with a recommended therapeutic range between 2·0 and 4·5. The unexpected outcome was the discarding at nearly all UK hospitals of commercial and local thromboplastins in favour of the national routine use of the human brain reagent.
The basis of anticoagulant control in the UK was human brain thromboplastin. The first primary WHO International Reference Preparation (IRP) for thromboplastin was also made from human brain. There was, however, concern over the possible transmission of the neurological disorder Creutzfeldt–Jakob disease by the causative prion in human cadaver brains. As a precaution, human brain reagent was withdrawn in the UK in 1985. Fortunately, more responsive thromboplastins of animal origin were being developed by the industry and the International Normalized Ratio (INR) system was firmly established.
‘High dose’ versus ‘low dose’ warfarin
As stated, in the 1960s and 1970s reports of oral anticoagulation from North America and elsewhere gave a far higher incidence of bleeding than those from the UK and the Netherlands. Was the higher bleeding risk of the more intense American warfarin treatment justified, or were the lower bleeding rates in the UK and the Netherlands caused by ineffectual treatment with a greater risk of re-thrombosis? This was resolved by a landmark study from McMaster University, Canada (Hull et al, 1982), a randomized prospective study comparing two different intensities of treatment in patients all receiving anticoagulant prophylaxis for venous thrombosis during surgery. The less intense treatment (INR 2·0–2·5) was controlled with Manchester Reagent (ISI = 1·0) and the second, higher level of dosage with a typical North American thromboplastin (ISI = 2). The UK ‘low dosage’ proved equally successful to the higher North American dosage in the prevention of venous thrombosis, but caused only a fifth of the haemorrhagic complications. The McMaster study thus established the greater safety but equal effectiveness of the ‘low dose’ UK treatment, opening the way for safer treatment worldwide. Reduction of warfarin dosage has continued even further in North America with the successive recommendations on therapeutic ranges from a series of Consensus Meetings on Antithrombotic Therapy organized by the American College of Chest Physicians (ACCP) since 1985. Current ACCP Consensus recommendations are for a relatively conservative range of 2·0–3·0 INR for most clinical indications, including venous thrombosis (Hirsh et al, 1998).
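The dosage comparison above rests on the INR convention, which normalizes the raw PT ratio by the reagent's ISI so that results from reagents of different responsiveness become comparable. A minimal sketch of the arithmetic (the PT values below are illustrative, not data from the McMaster study):

```python
def inr(patient_pt: float, mean_normal_pt: float, isi: float) -> float:
    """INR = (patient PT / mean normal PT) ** ISI.

    ISI is the International Sensitivity Index of the thromboplastin,
    calibrated against the reference preparation (ISI = 1.0).
    """
    return (patient_pt / mean_normal_pt) ** isi

# A responsive reagent (ISI = 1.0) must double the PT to report INR 2.0,
# whereas a less responsive reagent (ISI = 2.0) reaches INR 2.0 at a raw
# PT ratio of only about 1.41 -- the same anticoagulation intensity.
print(inr(24.0, 12.0, 1.0))  # responsive reagent: ratio 2.0
print(inr(17.0, 12.0, 2.0))  # less responsive reagent: ratio ~1.42
```

This is why, before the INR, centres using unresponsive reagents had to push raw PT ratios (and hence warfarin doses) far higher to hit the same nominal target.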
The wheel has thus gone full circle, with North America having moved from very high to low dosage. In retrospect, the British System for Anticoagulant Control and the Netherlands Thrombosis Service can be seen to have restored confidence in oral anticoagulation at a time during the 1960s and 1970s when it was being generally discarded because of the bleeding risks. A safe basis for the later massive expansion of anticoagulation in the 1990s was thus secured.
Automation of prothrombin time
Automation of the PT has been made necessary by the increasing laboratory workload from greater demands for PT control. Coagulometers (automated or semi-automated instruments) have almost entirely replaced the original manual PT technique. Many reports in recent years have shown that coagulometers may have marked and unpredictable effects on the INR. A simple local method for ISI calibration of local coagulometer/thromboplastin test systems is therefore required. The WHO procedure is not feasible at most centres for several reasons, mainly the need for parallel manual PT testing and for supplies of thromboplastin IRP. The European Concerted Action on Anticoagulation of the European Union (Poller et al, 1998a, b) has developed a simplified local ISI calibration procedure for coagulometers based on lyophilized plasmas certified in terms of the manual PT test with thromboplastin IRP, which avoids the requirement for local manual PT testing with thromboplastin IRP.
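For context, conventional WHO-style ISI calibration derives the local ISI from the slope of an orthogonal regression of log PTs measured on shared plasmas with the reference and the local system. The sketch below illustrates that calculation with invented PT data; it is not a reproduction of the ECAA certified-plasma procedure itself:

```python
import math

def orthogonal_slope(xs, ys):
    """Slope of the orthogonal (equal-variance Deming) regression line,
    the line fit conventionally used in thromboplastin calibration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

def calibrate_isi(ref_pts, local_pts, ref_isi):
    """Local ISI = reference ISI x slope of log(reference PT)
    regressed orthogonally on log(local PT)."""
    xs = [math.log(p) for p in local_pts]
    ys = [math.log(p) for p in ref_pts]
    return ref_isi * orthogonal_slope(xs, ys)

# Made-up paired PTs (seconds) on the same plasma samples: the local
# system shows proportionally shorter prolongations than the reference,
# so its calibrated ISI comes out above the reference ISI.
reference = [13.0, 18.0, 25.0, 34.0, 45.0]
local = [12.5, 15.9, 19.8, 24.3, 29.1]
print(round(calibrate_isi(reference, local, ref_isi=1.0), 2))
```

The certified-plasma simplification replaces the reference-PT arm of this regression with plasmas whose manual-PT/IRP values were established in advance, which is what removes the need for local manual testing.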
History of anticoagulant prevention of deep vein thrombosis (DVT)
Sevitt & Gallagher (1959) first described the benefit of oral anticoagulation in the prevention of venous thrombosis after hip surgery in an anatomical study from Birmingham, UK. Later, a randomized study of oral anticoagulant prophylaxis in major surgery showed its value in venous thrombosis prevention (Taberner et al, 1978). A long series of controlled studies, including those of Bergqvist et al (1996), Samama et al (1988) and Ockelford et al (1989), and the meta-analyses by Collins et al (1988) and Clagett et al (1998), later substantiated the effectiveness of low-dose subcutaneous heparin (5000 U) given 8- or 12-hourly and of low-molecular-weight (LMW) heparins (once daily) in the prevention of venous thrombosis in general surgery. Ultimately there proved to be little to choose between the two types of heparin (Geerts et al, 2001), although LMW fractions cause less heparin-induced thrombocytopenia.
In these reports from general surgery, the incidence of DVT was around 25% in controls but between 6% and 8% with heparin. Oral anticoagulants, in contrast, have the disadvantage of a relatively slow onset and therefore have to be administered before surgery or as soon as possible afterwards. A two-step warfarin prophylaxis regimen (Francis et al, 1983) has been recommended but has had very limited application. In pooled results from five large studies comparing oral anticoagulants with LMW heparin, DVT rates were 20·7% with oral anticoagulants and 13·7% with LMW heparin, whereas major bleeding occurred in 3·3% with oral anticoagulants and 5·3% with LMW heparin (surgical-site bleeding and wound haematoma were also greater with LMW heparin). In untreated controls after hip replacement, total knee replacement and hip fracture, the incidence of DVT was 50–60% (Mohr et al, 1993; Imperiale & Speroff, 1994). A recent analysis based on United States health-care costs found adjusted-dose warfarin to be more cost-effective than LMW heparin (Hull et al, 1997). Both warfarin and LMW heparin have been shown to be valuable in prophylaxis of deep vein thrombosis after total knee replacement.
Treatment of established deep venous thrombosis
In determining the value of anticoagulation in the management of established DVT and pulmonary embolism, the only study with a randomized control group was that of Barritt & Jordan (1960) from Bristol, UK, before development of modern methods of diagnosis. Later studies have been unable to include untreated controls on moral grounds and have had to rely on the comparison of heparin and oral anticoagulants with other types of treatment.
As well as effective levels of dosage, patients with established DVT need an adequate duration of oral anticoagulation after the initial treatment with conventional or LMW heparin. The optimum duration has been shown to depend on whether the episode is idiopathic or secondary to a reversible clinical cause. A duration of 3 months is recommended as a minimum for proximal DVT, and at least 6 months where the DVT is idiopathic or recurrent (Hyers et al, 2001).
Treatment range for venous thrombosis
The first attempt to provide INR therapeutic range guidelines was made privately to members by the British Society for Haematology (BSH) in 1984. This was followed in the same year by a report from a group of European clinicians based on their combined experience (Loeliger et al, 1984). In the United States, INR targets for venous thrombosis are now reduced to 2·0–3·0 (Hirsh et al, 1998).
Against all preconceived notions and principles, a small fixed mini-dose of warfarin (1 mg/d), which does not cause significant PT prolongation, has surprisingly been shown to prevent deep vein thrombosis during moderate-risk surgical procedures. From Boston, USA, Bern et al (1986) showed that this regimen prevented subclavian vein thrombosis from indwelling catheters in patients with malignancy. Warfarin (1 mg/d) was also shown to significantly reduce venous thrombosis after major gynaecological surgery in a UK randomized study, although it was not as effective as full-dose warfarin (Poller et al, 1987). This mini-dose regimen neither prolonged the prothrombin time nor required laboratory control. The benefit may be caused by an unsuspected action of warfarin in enhancing fibrinolysis by an unknown mechanism in which plasminogen activator is stimulated. It is, however, insufficient to prevent DVT after hip surgery. Levine et al (1994) from Hamilton, Canada, found that 1 mg of warfarin daily for 6 weeks, followed by adjusted dosage, prevented thrombosis in patients with stage 4 breast cancer receiving chemotherapy.
A new approach to monitoring
In several European countries, the greatly increased demand for warfarin administration has led to devolution of management from hospitals to the community. To cope with the workload, and for the greater convenience of patients, there has been pressure for a new method of PT control using home monitors (Anderson et al, 1993), which avoid the need to attend hospital departments. Hitherto, PT measurement depended on accurate testing of blood samples by skilled laboratory staff. With home monitors, a result is obtained on an unmeasured drop of whole blood and therefore does not require skilled laboratory personnel. Not only has home testing been advocated, but programmes of patients' self-dosage combined with self-testing have been promoted, mainly in Germany. These developments will require careful evaluation and standardization of methods and operators to ensure their safety and conformity with the INR system.
The remaining challenge
There is one large remaining barrier to the success of oral anticoagulant treatment. Many doctors have found clinical dosage tedious and time-consuming, feeling that it belittles their skills, and even specialist centres have had limited success in achieving therapeutic INR targets. Duxbury (1982) was the first to introduce therapeutic quality control into clinical practice, and showed that his own short-term patients were within the target intervals for only half the time. Most published results of clinical trials at dedicated centres have given similar or worse results. Various methods have been introduced to assess therapeutic success, and they agree that only about half of patients are within the target intervals, even in clinical trials. Long-term patients were maintained within the target INR for treatment of atrial fibrillation for only 48% of the time in a Swiss multicentre study (Hellemons et al, 1999) and other recent reports have ranged between this figure and a maximum of 64%.
With one method of analysis (Rosendaal et al, 1993), giving the percentage of time in the therapeutic range, Poller et al (1998c) found that a computer-assisted anticoagulant dosage program improved the success rate dramatically. A group of experienced doctors achieved only 59% success in long-term patients but, with the computer-dosage program at the same centres, the percentage of time within the target range increased to 72%. Over the first 22 weeks there was a 30% improvement with computer assistance. Although all computer programs need to be assessed individually, they offer considerable promise, as they are available to all centres, including the smallest, least experienced and least dedicated.
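The Rosendaal method referred to above does not simply count in-range visits: it assigns a linearly interpolated INR to every day between consecutive measurements and reports the fraction of days in range. A simplified sketch with invented INR readings:

```python
from datetime import date

def time_in_therapeutic_range(measurements, low=2.0, high=3.0):
    """Rosendaal-style linear interpolation.

    measurements: list of (date, INR) pairs sorted by date.
    Each day between two consecutive measurements receives an
    interpolated INR; the final measurement day itself is omitted
    for simplicity. Returns the fraction of days in [low, high].
    """
    in_range = 0
    total = 0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        for day in range(span):
            interpolated = inr0 + (inr1 - inr0) * day / span
            total += 1
            if low <= interpolated <= high:
                in_range += 1
    return in_range / total if total else 0.0

# Invented series: a patient drifting above range, then corrected.
series = [
    (date(2001, 1, 1), 2.5),
    (date(2001, 1, 15), 3.4),
    (date(2001, 1, 29), 2.2),
]
print(round(time_in_therapeutic_range(series), 2))  # -> 0.61
```

Interpolating between visits is what distinguishes this measure from a crude percentage of in-range tests, which would score the same series at only one in three.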
It thus appears that only in the early years of the 21st century will the full benefits of conventional anticoagulant treatment be achieved. These will result from improved oral anticoagulation, with greater safety and effectiveness from lower-dose warfarin, improved laboratory control and computer-assisted dosage, perhaps linked to the wide introduction of home PT monitoring. New formulations of heparin or heparin-like drugs may also assist.
We would like to thank Professor Dominique Meyer for her help in obtaining the manuscript reproduced in Figure 1 from the Bibliothèque Nationale de France in Paris.
Sam Schulman, Ole Bjorgell and Paolo Prandoni helped us with advice and the retrieval of some references. Gratitude is also expressed to Dr Brian McD. Duxbury for valuable advice in the oral anticoagulant section.