Abstract
The diseases of critical illness are rooted in the response of the host to clinical interventions that have thwarted imminent death. Sepsis—the clinical syndrome that embodies the systemic host response to microbial invasion—exemplifies the dynamic interplay between an external threat and the endogenous response of the host that has created the phenotype of critical illness. Critical illness is a pathophysiologic state without evolutionary precedent. It is only in the past century that shock was transformed from a lethal disease into a physiologic state reversible by the administration of intravenous fluids [1], or that acute respiratory insufficiency caused by polio and renal failure resulting from trauma could be readily supported by positive pressure ventilation and dialysis, respectively. These successes came at a cost that is still incompletely understood: survival from shock gave rise to the acute pulmonary insufficiency of the acute respiratory distress syndrome (ARDS) [2,3], and the interventions used to manage ARDS, in turn, triggered further injury in remote organs [4] and contributed to the evolution of the multiple organ dysfunction syndrome (MODS) [5].

The word “sepsis” comes from the Greek σῆψις, meaning rot or putrefaction, and is attributed to Hippocrates [6]. In the Hippocratic model, living tissues broke down through one of two contrasting processes. Pepsis was the process through which foodstuffs were digested, fostering growth and well-being, and through which grapes were fermented to produce wine. Sepsis, in contrast, denoted tissue breakdown that produced a foul odor and illness, exemplified by swamps and festering wounds.
The work of Semmelweis, Pasteur, and Lister in the nineteenth century established definitively that the processes of infection and suppuration were caused by microscopic living organisms, and so the word “sepsis” became synonymous with overwhelming invasive infection; indeed, a medical dictionary from 1972 defines sepsis as “... the presence of pus-forming organisms in the bloodstream ...” [7]. But anti-infective therapies have had only a modest impact on the morbidity and mortality of sepsis. Data from the Centers for Disease Control in the United States show a dramatic reduction in the toll of infectious diseases over the twentieth century, with mortality rates falling from approximately 800 per 100,000 at the turn of the century to 70 per 100,000 at its close [8] (Fig. 1). A closer review of these successes shows that the largest decrease in mortality occurred before the widespread availability of antibiotic agents, and well before the establishment of the first ICU in 1957 [9]. Rather, the mortality reduction reflects the introduction of such public health measures as immunizations, the establishment of public health departments, and the pasteurization of milk; mortality rates in the second half of the twentieth century have remained relatively unchanged. The conclusions of this population-based analysis are mirrored in a study of infections in a single hospital in the decades immediately before (the 1930s) and after (the 1950s) the introduction of antimicrobial therapy into clinical practice. Neither the rates nor the mortality of infection changed; instead, the most striking impact of antibiotics was on the microbiology of infection in the hospitalized patient, with a shift from exogenous to endogenous organisms as the predominant infecting species [10].
As the pathophysiologic mechanisms of sepsis have become better understood, it has become apparent that the clinical syndrome arises not from the direct effects of the microbial trigger, but indirectly, through the systemic activation of an innate immune response.