The Future of Medical Research


About a year and a half ago, as I began to read into the long history behind the Epidemiology movement and the Evidence-Based Medicine debates, I told my husband that the real problem was not that we were too empirical (which is what many argue is wrong with EBM philosophy). I argued that our problem was that we have been ascribing god-like powers to a test which, in all actuality, tells us almost nothing about the human body, the way it works, and how it’s affected by intervention. The test I’m referring to is the Randomized Controlled Clinical Trial (RCT).

Although Epidemiology was not in any way founded on RCTs, the field quickly moved to argue that, for clinical and policy purposes, decision making should be primarily informed by double-blind empirical trials in which a large, diverse group of people (think thousands) takes a particular drug and is monitored over a long period of time for particular results; that group is then compared to a large control group, which takes a placebo. The results of these studies indicate the general effects of a drug, what it does and doesn’t accomplish relative to its goals, and whether it’s safe enough to hit the market. The trial also shapes how the drug will be marketed, to what group, and with what exceptions, potential side effects, etc. The same process applies to new forms of medical technology, tests, treatments, and interventions.

The long-standing criticisms of RCTs are fairly consistent:

1) They can only measure what they set out to measure. If researchers want to see if a drug lowers blood pressure, they measure blood pressure. They don’t measure how blood pressure and the drug interact with each patient’s diet, their history of smoking, the fact that they mothered x number of children, or have a predisposition to breast cancer. That would take far too much money and more data collection than is reasonable for a trial. As a result . . .

2) Trial results have to make massive generalizations, which don’t account for the innumerable varieties of reactions and interactions people experience, physiologically and otherwise, while on the tested drug alongside dozens of other drugs.

3) Most trials are conducted by companies trying to sell a product; this means that pharmaceutical companies conducting studies have an incentive to ignore unfavorable information and to isolate the data and results that will ensure their product sells.

4) They test interventions; they don’t actually study how the body works. Yes, taking a pill is one way to fix something. But figuring out how complex systems interact to predispose a person to a disease might be more productive. Caveat: although much research is dedicated to how the body works, at the moment RCTs only allow us to test a few correlations at a time (e.g., the correlation between sodium intake and obesity), not all the various factors that have contributed to various individuals’ obesity throughout the nation.

5) Clinical trials don’t help doctors make decisions. This has been the major complaint of those who founded the field of epidemiology in the first place (Daly). The incredibly generalized results of RCTs don’t necessarily help a doctor decide whether a particular drug or test will help their particular patient. They DO help narrow the playing field (75% of the time, drug x works better than drug y; but drug y is much cheaper); but the complexity of each patient is so extreme that doctors really need more detailed information than RCTs can offer, and that usually only comes with experience (e.g., historically, I have found that patients who take drug x and are also on antidepressant y have problems with impotence, which causes them to stop taking drug x after so many weeks, before the treatment takes effect; therefore, treatment z will probably be better for them because they’re most likely to complete it).

Back to my story.

What I told my husband that day, as I was beginning to process all this information, was this: we need to revolutionize the way we’re doing medical research. Rather than isolating particular questions and testing for a few correlations cut off from all other factors, we should be finding ways to do massive data collection about the detailed intricacies of the body, and mapping that data for patterns and correlations within particular patient populations.

We need to use complex, convenient monitoring systems that will track hundreds of thousands of minor reactions and systems in the human body on a moment-by-moment basis, as people function within their own environments; and then, utilizing the storage and data-analysis technologies search engines use to navigate oceans of information on the internet, and the supercomputers that perform calculations humans can’t even attempt, we should be looking for patterns in that data which explain how individuals’ physiological systems work, how diseases develop, and under what conditions.

We could take a huge sample of subjects from a single population, whether people with a disease like prostate cancer, or living in a particular area, or with a particular medical history, and monitor their physiology for however long seemed necessary, looking for what they have in common and what they don’t. That’s how we should be doing research.
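
To make the contrast with single-endpoint trials concrete, here is a minimal sketch in Python of the kind of pattern-hunting I mean, using pandas and entirely made-up subjects, variables, and values (none of this refers to a real dataset, device, or study):

    import pandas as pd

    # Hypothetical example: each row is one time-stamped reading for one subject,
    # with columns for a few continuously monitored variables.
    readings = pd.DataFrame({
        "subject_id":  [1, 1, 2, 2, 3, 3],
        "heart_rate":  [72, 80, 65, 70, 90, 95],
        "systolic_bp": [120, 125, 110, 112, 140, 145],
        "sodium_mg":   [2300, 2500, 1800, 1900, 3200, 3400],
    })

    # Average each subject's readings, then look for correlations across the cohort.
    per_subject = readings.groupby("subject_id").mean()
    print(per_subject.corr())  # pairwise correlations among all monitored variables

The point is not the toy numbers but the shape of the method: everything gets recorded as people live their lives, and the interesting questions are asked of the data afterward, rather than being fixed in advance as a single hypothesis.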

Considering what technology has accomplished over the past few decades, and looking to what potential the future holds, RCTs honestly look like something out of the stone age. We could be doing so much better. The key is combining complexity, technology, and, well, lots and lots of money.

I was thrilled, today, to find that there is a doctor who has been making the exact same argument. 

One of the top ten most-cited researchers in medicine, Professor of Genomics at the Scripps Research Institute, and a cardiologist and geneticist, Eric Topol has the support of medical institutes, philosophers, medical educators, policy makers, research laboratories, technicians, politicians, clinicians, and me. His book, The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care (Jan 2012), imagines a future in which the technologies we have developed for personal and commercial use will be used to understand physiology in a way “that will make the evidence-based state-of-the-art stuff look” primitive,

“. . .by bringing the era of big data to the clinic, laboratory, and hospital, with wearable sensors, smartphone apps, and whole-genome scans providing the raw materials for a revolution. Combining all the data those tools can provide will give us a complete and continuously updated picture of every patient, changing everything from the treatment of disease, to the prolonging of health, to the development of new treatments.”

He specifically suggests how existing medical and information technologies can be used and further developed to achieve a new level of health research:

“At home brain-monitors helping us improve our sleep. Sensors to track all vital signs, catching everything from high blood pressure to low blood sugar to heart arrhythmia without invasive measurements to inconvenient and nerve-wracking—or even dangerous—hospital stays (which kill some 100,000 every year, due to infections caught there, or patients getting someone else’s medicine). Improved imaging techniques and the latest in printing technology are beginning to enable us to print new organs, rather than looking for donors. Genetics can reveal who might be helped by a drug, unaffected by it, or even killed by it, helping avoid problems as were seen with Vioxx.”

I was also pleased to find that Topol addresses some of the major problems that will accompany future technologies, such as privacy and protection issues, genetic planning decisions, and the potential for depersonalized care. He also emphasizes, however, the potential for individuation and personalization that these technologies could bring to clinical care.

A few preemptive clarifications

Many in the medical and scientific humanities would point to my enthusiasm as reinforcing the “science just needs to be better” philosophy, in which empiricism is the answer to everything; the belief that if we just measured a little closer, controlled nature a bit better, we could master it and all our problems. That is not what I believe. On the contrary, I believe that the more humans stand in awe of natural creation, in which we ourselves are beautifully and mysteriously embedded, the more we will realize that we can never control it, and the more we will desire to understand it better. As a vitalist, new materialist, and complexity theorist, I believe that empiricism has been based on the concept that our world is linear, straightforward, simple, and masterable. I believe that methods such as the ones I’ve described here acknowledge that this is not the case: that there are more connections, interactions, and vital processes than we could ever imagine or count, but that they are worthy of our attention and investigation all the same.

Lastly, and perhaps most importantly, I don’t want to ignore the serious ethical and social problems that literally flood my mind as I consider future directions in medical technology. The fact that technology, medicine, and research are so largely privatized and commercialized obviously has difficult implications. History suggests that human social complexity necessarily includes corruption, error, over-confidence in human ability, greed, race, class, and gender stratification, and oppression; innovation and power network into nodes which disadvantage the majority of the human population. I don’t believe that science has the power to solve, or will solve, all our greatest needs and problems. I do think, however, that human innovation is remarkable and should be developed, with as much ethical contemplation, caution, and will toward justice as possible. Even philosophers have a part to play in the network formations of the future.

Digital Ethics in Quasi-Public Places: McKee and Porter

McKee and DeVoss‘s edited collection, Digital Writing Research: Technologies, Methodologies, and Ethical Issues, provides an array of technological tools and methodologies which may be used to study writing. The section we looked at last week, Part Three: Researching the Activity of Writing, covered time-use diaries, mobile technologies, and video screen capture. Some of the chapter authors, such as William Hart-Davidson, focus on the practical use of a technology: time-use diaries to examine the integration of certain writing technologies, like texting, into daily practices, via when and for how long a device is used. Other authors, such as Joanne Addison, focus on the theoretical basis for using a technology: mobile technologies as a way to investigate phenomenological experience.

This week’s section, Part Four: Researching Digital Texts and Multimodal Spaces, was closely related to our other readings by McKee and Porter this week. All the sources dealt with the differences between static authors and published texts versus fluid online texts, spaces, and speakers.

Stuart Blythe points out the difficulty of coding online data, because it is not static but fluid, with particular texts often changing over time. His suggestion is to collect copies of a web page at a particular point in time and make notes using HTML tags. His was the most detailed account of coding methods I have read thus far, providing protocols for tagging words, rhetorical moves, images, and even spaces and time-lapse editing.
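
As a rough illustration of that snapshot-and-annotate workflow (my own sketch, not Blythe’s actual procedure; the URL and the coding note are placeholders), one could save a timestamped copy of a page and attach a researcher’s note to it as an HTML comment:

    import urllib.request
    from datetime import datetime, timezone

    url = "https://example.com/some-discussion-thread"  # placeholder URL

    # Save a copy of the page as it exists right now, stamped with the capture time.
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")

    # Prepend a coding note as an HTML comment so the annotation travels with the copy.
    note = f"<!-- CODED {stamp}: rhetorical move = 'rebuttal' -->\n"
    with open(f"snapshot_{stamp}.html", "w", encoding="utf-8") as f:
        f.write(note + html)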

What I found interesting was the fact that, when this book was written, authors were still trying to find ways to freeze texts, rather than develop tools to track their development. Blythe does a good job of borrowing from film coding practices in order to find ways to code online videos and animations, and from comic conventions for narrative in order to enable time-lapse analysis. However, this is not quite the same as looking at verbal/alphabetic texts changing over time.

Attending CEA this month in St. Petersburg, Florida, I met Chris Friend from the Texts and Technology program at UCF. He gave a presentation about the application Google Wave—which is slowly being phased out in favor of Google Docs. The premier feature of this composition tool, which Chris drew attention to, was its ability to record the collaborative writing process performed on/in a document. It worked in real time, and at any point you could “rewind,” so to speak, and watch how different writers had edited and added to the document. There was also a chat feature which allowed people to talk about their changes as they worked. This feature was also time-stamped so it could be replayed with the document recording.
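
Conceptually, that replay feature amounts to storing every edit as a time-stamped operation and rebuilding the document up to any chosen moment. A bare-bones sketch of the idea (my own illustration, not Wave’s actual implementation) might look like this:

    from dataclasses import dataclass

    @dataclass
    class Edit:
        timestamp: float  # seconds since the writing session started
        author: str
        insert_at: int    # character position of the insertion
        text: str

    def replay(edits, up_to):
        """Rebuild the document as it looked at time `up_to`."""
        doc = ""
        for e in sorted(edits, key=lambda e: e.timestamp):
            if e.timestamp > up_to:
                break
            doc = doc[:e.insert_at] + e.text + doc[e.insert_at:]
        return doc

    session = [
        Edit(1.0, "Ann", 0, "Digital writing "),
        Edit(4.5, "Ben", 16, "changes over time."),
    ]
    print(replay(session, up_to=2.0))   # "Digital writing "
    print(replay(session, up_to=10.0))  # "Digital writing changes over time."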

Digital Writing Research was published in 2007; however, many of the chapters would lead you to believe otherwise. Out of the two sections we read, Google is only named once, in one paragraph. I think that Chris’s work is a great example of where composition research methods might be going in the next few years.

McKee and Porter’s CCC article, “The Ethics of Digital Research: A Rhetorical Approach,” was an incredibly enjoyable read. Published in 2008, the article addresses the need for IRB and ethical guidelines to be tailored for online environments. At the moment, IRB’s qualifications for review are based on three things, which the article deals with: 1) whether the study looks at humans or texts, 2) whether the human data is public or private, and 3) whether the data is “individually identifiable” (“the identity of the subject is or may be readily ascertained by the investigator”).

To simplify: human subjects research needs review if it is of people, private, and identified. It does not need review (it is not human subjects research) if it is of texts, public, and/or unidentifiable. McKee and Porter problematize all three of these binaries using real examples of ethical dilemmas in online research. They provide suggestions for how researchers may make ethical choices in their studies; in particular, they propose the deliberative process of casuistry, with special attention to the rhetorical situation of various web texts (purpose, audience, environment, expectations).
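
Put as a toy decision rule (my own hypothetical encoding, and deliberately the kind of rigid either/or logic the article goes on to problematize), the heuristic looks like this:

    def needs_irb_review(is_about_people: bool, is_private: bool, is_identifiable: bool) -> bool:
        """The three-binary heuristic: review is triggered only when the study is
        of people (not texts), the data are private (not public), and the subjects
        are individually identifiable."""
        return is_about_people and is_private and is_identifiable

    # A blog post treated as a public, anonymized text falls straight through:
    print(needs_irb_review(is_about_people=False, is_private=False, is_identifiable=False))  # False

McKee and Porter’s point is precisely that real cases rarely resolve into clean True/False values on any of these three axes.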

This work is supplemented in “The Ethics of Conducting Writing Research on the Internet: How Heuristics Help,” in which McKee and Porter provide a more detailed set of diagrams, which take the many fluid factors of online writing into account: degree of interaction, data ID, topic sensitivity, subject vulnerability, etc.

My favorite thing in this article was the diagram below, tracing the continuum between “Space” and “Place.” Something I inferred from this distinction was the difference between static (two-dimensional) texts versus fluid (four-dimensional?) digital texts. One can be tagged and coded easily, while the other requires something as complex as Google Wave/Docs.

In their CCC article, McKee and Porter discuss the difference between an author/person binary and a continuum. They quote Amy Bruckman, who says that “Most work on the Internet is semi-published” (qtd. on 734). McKee and Porter set this concept up against the idea of online texts as published documents which may be quoted and cited at will, within reason of fair use, without permission of the author. “In this respect,” the authors continue, “the ethical guidelines governing fair use of others’ writing always apply, and the ethical guidelines of securing informed consent may also apply” (734).

The basis for their argument is very similar to one of Bruno Latour’s in We Have Never Been Modern, where he argues that there are no such things as concrete subjects or objects (people/texts), or essence and representation. Rather, he says that there are quasi-subjects and quasi-objects (like “semi-published”), all of which have a subject or object identity depending on what they are in relationship to. In the case of online texts, McKee and Porter ask that we consider online texts’ relation to their writers, the writers’ intended audience, and their relationship to the researcher. Is the researcher part of that original, intended audience? Or would the author be disturbed to discover that their work was being analyzed and published by the researcher? That, ultimately, is the question regarding informed consent.

In the CCC article “Writing in High School/College: Research Trends and Future Directions,” Addison and McGee review results from ten educational institutions (three high schools, two community colleges, two four-year public institutions, one four-year private institution, one public MA-granting institution, and one doctorate-granting, flagship institution) to aggregate data regarding student writing and teacher pedagogy.

One of the most interesting points to me was the data showing that faculty favor personal and in-class writing tasks, but they don’t value workplace genres. While I understand not teaching something you aren’t yourself familiar with, and the need for personal reflection, I can’t help feeling that college writing should include some preparation for disciplinary writing. This is why I favor the Writing About Writing approach (Downs and Wardle), also discussed at CEA, because it teaches students about how writing works within particular discourses and ecologies, without setting out to teach in the disciplines themselves.

Stanford Study, Compositionism, and Ethics and Representation

If only every research project ever conducted would publish its results in the organized and open format that Stanford has with its famous Stanford Study of Writing, which can be found here. All their methodological materials are available for those who might wish to reproduce the study, and their background, methods, and research questions are summarized in short, two-paragraph sections: brilliant. Imagine what the field of (college) writing/composition would be if every major institution conducted the same study and maintained the same database; the knowledge accumulated would be fantastic. Here’s to rigorous, longitudinal methods of research.

An article in the Chronicle of Higher Education provides a nice overview of the study’s reception as of 2009. In it, Josh Keller references Kathleen Blake Yancey, “a professor of English at Florida State University and a former president of the National Council of Teachers of English, [who] calls the current period ‘the age of composition’ because, she says, new technologies are driving a greater number of people to compose with words and other media than ever before.” This view is echoed by Bruno Latour in his article “An Attempt at Writing a ‘Compositionist Manifesto,'” based on a speech given at the reception of the Kulturpreis presented by the University of Munich on February 9th, 2010. He argues that Compositionism may be an apt successor to the postmodern movement. Compositionism, he says,

“…underlines that things have to be put together (Latin componere) while retaining their heterogeneity. Also, it is connected with composure; it has a clear root in art, painting, music, theater, dance, and thus is associated with choreography and scenography; it is not too far from “compromise” and “compromising” retaining with it a certain diplomatic and prudential flavor. Speaking of flavor, it carries with it the pungent but ecologically correct smell of “compost”, itself due to the active “de-composition” of many invisible agents… Above all, a composition can fail and thus retain what is most important in the notion of constructivism (a label which I could have used as well, had it not been already taken by art history). It thus draws attention away from the irrelevant difference between what is constructed and what is not constructed, toward the crucial difference between what is well or badly constructed, well or badly composed. What is to be composed may, at any point, be decomposed.” (3)

Perhaps, if we can conduct more studies such as Stanford’s, which continue to investigate the multifaceted dynamics of writing, theories of composition might be generated which could in turn be applied to other disciplines’ generation and arrangement of knowledge, the way that Bruno Latour and new media theorists have.

Although published in 1996, Ethics and Representation in Qualitative Studies of Literacy offers an excellent array of discussions of classic qualitative-methods issues, from seminal voices such as Patricia A. Sullivan and Lucille Parkinson McCarthy. Perhaps one of the most applicable chapters for my own future research was Blakeslee, Cole, and Conefrey’s piece on negotiating subjective perspectives within ethnographic research (chapter 8), particularly when studying a community whose epistemological assumptions are foundationally different from the researcher’s. They use Blakeslee’s own experience researching physicists as a case study, particularly considering authority, scientific epistemology, and how a text can be negotiated to ethically reflect both the perspective of the subject and the theories of the critic/observer. Like Sullivan (and Porter), the authors of this chapter acknowledge that ethnographers can be neither fully authoritative nor fully objective in analyzing their observations, but must acknowledge their subjective perspective and rely on others to produce an ethical, textual representation. My own study of the medical community will draw from these concepts.