McKee and DeVoss's edited collection, Digital Writing Research: Technologies, Methodologies, and Ethical Issues, provides an array of technological tools and methodologies that may be used to study writing. The section we looked at last week, Part Three: Researching the Activity of Writing, covered time-use diaries, mobile technologies, and video screen capture. Some of the chapter authors, such as William Hart-Davidson, focus on the practical use of a technology: time-use diaries to examine how certain writing practices, like texting, are integrated into daily life, via when and for how long a device is used. Other authors, such as Joanne Addison, focus on the theoretical basis for using a technology: mobile technologies as a way to investigate phenomenological experience.
This week's section, Part Four: Researching Digital Texts and Multimodal Spaces, was closely related to our other readings by McKee and Porter. All of the sources dealt with the differences between static, authored, published texts and fluid online texts, spaces, and speakers.
Stuart Blythe points out the difficulty of coding online data: because it is not static but fluid, particular texts often change over time. His suggestion is to collect copies of a web page at a particular point in time and make notes using HTML tags. His was the most detailed account of coding methods I have read thus far, providing protocols for tagging words, rhetorical moves, images, and even spaces and time-lapse editing.
What I found interesting was that, when this book was written, authors were still trying to find ways to freeze texts rather than develop tools to track their development. Blythe does a good job of borrowing from film coding practices to find ways to code online videos and animations, and from comics' narrative conventions to enable time-lapse analysis. However, this is not quite the same as looking at verbal/alphabetic texts changing over time.
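The snapshot-and-tag approach might be sketched roughly as follows. This is a minimal sketch of the general idea, not Blythe's actual protocol: the function name, file layout, and coding labels are my own assumptions. The point is that a fluid page is frozen as a timestamped copy, and coding notes are embedded as HTML comments so they never alter the rendered page.

```python
from datetime import datetime, timezone
from pathlib import Path

def snapshot_and_code(html: str, codes: dict[str, str], outdir: str = "snapshots") -> Path:
    """Freeze a copy of a web page and embed coding notes as HTML comments.

    `codes` maps a verbatim passage to a coding label (e.g. a rhetorical move).
    """
    annotated = html
    for passage, label in codes.items():
        # Wrap each coded passage in comment tags: the original markup is
        # untouched, and the notes are invisible when the page renders.
        annotated = annotated.replace(
            passage, f"<!-- code: {label} -->{passage}<!-- /code -->"
        )
    # Timestamp the filename so later snapshots of the same (fluid) page
    # can be compared against this frozen copy.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    path = out / f"page-{stamp}.html"
    path.write_text(annotated, encoding="utf-8")
    return path

page = "<p>Vote for change.</p>"
saved = snapshot_and_code(page, {"Vote for change.": "rhetorical-move/exhortation"})
print(saved.read_text(encoding="utf-8"))
```

Collecting a series of such timestamped snapshots is also the obvious first step toward the development-tracking tools I wish the chapter had imagined: diffing two frozen copies would show how a text changed between captures.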
Attending CEA this month in St. Petersburg, Florida, I met Chris Friend from the Texts and Technology program at UCF. He gave a presentation on the application Google Wave, which is slowly being phased out in favor of Google Docs. The premier feature of this composition tool, which Chris drew attention to, was its ability to record the collaborative writing process performed on/in a document. It worked in real time, and at any point you could "rewind," so to speak, and watch how different writers had edited and added to the document. There was also a chat feature that allowed people to talk about their changes as they worked; the chat was likewise time-stamped so it could be replayed alongside the document recording.
Digital Writing Research was published in 2007; however, many of the chapters would lead you to believe otherwise. Out of the two sections we read, Google is named only once, in a single paragraph. I think that Chris's work is a great example of where composition research methods might be going in the next few years.
McKee and Porter's CCC article, "The Ethics of Digital Research: A Rhetorical Approach," was an incredibly enjoyable read. Published in 2008, the article addresses the need for IRB and ethical guidelines to be tailored for online environments. At the moment, the IRB's qualifications for review are based on three things: 1) whether the study looks at humans or texts, 2) whether the human data is public or private, and 3) whether the data is "individually identifiable" ("the identity of the subject is or may be readily ascertained by the investigator").
To simplify: human subjects research needs review if it is: of people, private, and identified. It does not need review (is not human subjects research) if it is: of texts, public, and/or unidentifiable. McKee and Porter problematize all three of these binaries using real examples of ethical dilemmas in online research. They provide suggestions for how researchers may make ethical choices in their studies; in particular, they propose the deliberative process of casuistry, with special attention to the rhetorical situation of various web texts (purpose, audience, environment, expectations).
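The simplified screen described above can be stated as a toy decision function. To be clear, this is my own illustration of the binary logic McKee and Porter set out to complicate, not anything from their article, and the function name and parameters are invented for the sketch:

```python
def needs_irb_review(studies_people: bool, data_private: bool, identifiable: bool) -> bool:
    """Toy version of the simplified IRB screen: review is triggered only
    when all three conditions hold — the study is of people (not texts),
    the data is private, and the subjects are identifiable.
    """
    return studies_people and data_private and identifiable

# Under the binary view, a study of public, anonymized posts escapes review:
print(needs_irb_review(studies_people=True, data_private=False, identifiable=False))  # False
```

McKee and Porter's whole point is that online, each of these booleans is really a continuum (semi-published texts, semi-public spaces, partially identifiable authors), which is why they replace the checklist with casuistic deliberation.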
This work is supplemented in “The Ethics of Conducting Writing Research on the Internet: How Heuristics Help,” in which McKee and Porter provide a more detailed set of diagrams, which take the many fluid factors of online writing into account: degree of interaction, data ID, topic sensitivity, subject vulnerability, etc.
My favorite thing in this article was the diagram below, tracing the continuum between "Space" and "Place." Something I inferred from this distinction was the difference between static (two-dimensional) texts and fluid (four-dimensional?) digital texts. One can be tagged and coded easily, while the other requires something as complex as Google Wave/Docs.
In their CCC article, McKee and Porter discuss the difference between treating the author/person distinction as a binary and treating it as a continuum. They quote Amy Bruckman, who says that "Most work on the Internet is semi-published" (qtd. on 734). McKee and Porter set this concept against the idea of online texts as published documents that may be quoted and cited at will, within reason of fair use, without the author's permission. "In this respect," the authors continue, "the ethical guidelines governing fair use of others' writing always apply, and the ethical guidelines of securing informed consent may also apply" (734).
The basis for their argument is very similar to one of Bruno Latour's in We Have Never Been Modern, where he argues that there are no such things as concrete subjects or objects (people/texts), or essence and representation. Rather, he says that there are quasi-subjects and quasi-objects (like "semi-published"), all of which take on a subject or object identity depending on what they stand in relationship to. In the case of online texts, McKee and Porter ask that we consider a text's relation to its writer, to the writer's intended audience, and to the researcher. Is the researcher part of that original, intended audience? Or would the author be disturbed to discover that their work was being analyzed and published by the researcher? That, ultimately, is the question regarding informed consent.
In the CCC article "Writing in High School/College: Research Trends and Future Directions," Addison and McGee review results from ten educational institutions (three high schools, two community colleges, two four-year public institutions, one four-year private institution, one public MA-granting institution, and one doctorate-granting flagship institution) to aggregate data regarding student writing and teacher pedagogy.
One of the most interesting points to me was the data showing that faculty favor personal and in-class writing tasks but don't value workplace genres. While I understand not teaching something you aren't yourself familiar with, and the need for personal reflection, I can't help feeling that college writing should include some preparation for disciplinary writing. This is why I favor the Writing About Writing approach (Downs and Wardle), also discussed at CEA, because it teaches students how writing works within particular discourses and ecologies without setting out to teach in the disciplines themselves.
If only every research project ever conducted would publish its results in the organized and open format that Stanford has with its famous Stanford Study of Writing, which can be found here. All of their methodological materials are available for those who might wish to reproduce the study, and their background, methods, and research questions are summarized in short, two-paragraph sections: brilliant. Imagine what the field of (college) writing/composition would be if every major institution conducted the same study and maintained the same database; the knowledge accumulated would be fantastic. Here's to rigorous, longitudinal methods of research.
An article in the Chronicle of Higher Education provides a nice overview of the study's reception as of 2009. In it, Josh Keller references Kathleen Blake Yancey, "a professor of English at Florida State University and a former president of the National Council of Teachers of English, [who] calls the current period 'the age of composition' because, she says, new technologies are driving a greater number of people to compose with words and other media than ever before." This view is echoed by Bruno Latour in his article "An Attempt at Writing a 'Compositionist Manifesto,'" based on a speech given at the reception of the Kulturpreis presented by the University of Munich on February 9th, 2010. He argues that Compositionism may be an apt successor to the postmodern movement. Compositionism, he says,
“…underlines that things have to be put together (Latin componere) while retaining their heterogeneity. Also, it is connected with composure; it has a clear root in art, painting, music, theater, dance, and thus is associated with choreography and scenography; it is not too far from “compromise” and “compromising” retaining with it a certain diplomatic and prudential flavor. Speaking of flavor, it carries with it the pungent but ecologically correct smell of “compost”, itself due to the active “de-composition” of many invisible agents…Above all, a composition can fail and thus retain what is most important in the notion of constructivism (a label which I could have used as well, had it not been already taken by art history). It thus draws attention away from the irrelevant difference between what is constructed and what is not constructed, toward the crucial difference between what is well or badly constructed, well or badly composed. What is to be composed may, at any point, be decomposed.” (3)
Perhaps, if we can conduct more studies such as Stanford's, which continue to investigate the multifaceted dynamics of writing, theories of composition might be generated that could in turn be applied to other disciplines' generation and arrangement of knowledge, the way that Bruno Latour and new media theorists have done.
Although published in 1996, Ethics and Representation in Qualitative Studies of Literacy offers an excellent array of discussions of the classic qualitative methods issues, from seminal voices such as Patricia A. Sullivan and Lucille Parkinson McCarthy. Perhaps one of the most applicable chapters for my own future research was Blakeslee, Cole, and Conefrey's piece on negotiating subjective perspectives within ethnographic research (chapter 8), particularly when studying a community whose epistemological assumptions are foundationally different from the researcher's. They use Blakeslee's own experience researching physicists as a case study, considering authority, scientific epistemology, and how a text can be negotiated to ethically reflect both the perspective of the subject and the theories of the critic/observer. Like Sullivan (and Porter), the authors of this chapter acknowledge that ethnographers can be neither fully authoritative nor fully objective in analyzing their observations, but must acknowledge their subjective perspective and rely on others to produce an ethical textual representation. My own study of the medical community will draw from these concepts.
I was pleasantly surprised by how much I enjoyed Tinberg and Nadeau's study The Community College Writer: Exceeding Expectations. Theirs is a fairly broad investigation into the challenges facing first-year college students; essentially, it tries to understand how first-year composition students' experiences prior to and during their first semester in college match up to their various teachers' expectations.
The literature review and methods were both very helpful to me in demonstrating the process of composition research. Those who know me will not be surprised to hear that I particularly enjoyed their combination of ethnographic and social-science research methods. On page 17, Tinberg and Nadeau explain that their "purpose . . . is primarily descriptive: we intend to account for the nature of student writing tasks at college–and the degree of success achieved in meeting the writing challenge"; yet, in their literature review, they acknowledge that "research can be and ought to be conducted on a scientific [read, positivist] basis, while at the same time grounded in specific writing situations" (15). Elaborating on what they mean by scientific, they speak of "producing replicable, well-designed research studies," which they hope to balance with "reproducing the specific and localized scene of writing."
The question is, do their methods successfully facilitate this dual desire? I believe that the study does indeed produce valuable knowledge from which other institutions may glean applicable information for their own curricular design; however, I also believe that certain aspects of their methods could have been refined in order to ensure more consistently generalizable results.
For example: admirably, Tinberg and Nadeau administered their first student survey at four different institutions in four different states. This was, in my opinion, very impressive in light of composition research norms. However, their student cohort, from which the bulk of the study's data was actually derived, was composed of students from BCC (Bristol Community College) alone.
Because the study focused so heavily on students' past experience, and particularly because the primary data was gathered at a community college, where most students come from one very specific town, city, or region, the data would have been much more generalizable if the PIs had followed through on their ambition to survey four very different institutions in four very different locations.
I'd like to start by thanking my classmates Dan, Susan, and Karen for setting an example for me as I adapt to this new genre of blogging. It's proving fun but challenging, and their blogs are giving me an idea of where I might go with my own public space.
Most of the ethnographies we read this week revolve around educational controversies: sororities, censorship, and the effectiveness of a public college education. I say controversies because in each case a moral accusation lurks behind the exigence of the study. In Storm in the Mountains: A Case of Censorship, Conflict, and Consciousness, James Moffett recounts the truly mind-blowing circumstances that led to the West Virginia textbook riots of 1974-75. Parents in rural Appalachia, in particular Kanawha County, West Virginia, raised an objection to the state's adopted language arts curriculum, which transformed into a protest, which transformed into outrage and mob chaos. As the facts quickly disappeared behind sensationalism, and political process gave way to anarchy, an often powerless and voiceless population gained a hearing in the courts of educational history, even if it was by means of radicalism, misunderstanding, and injustice. A promising scholar and curriculum designer lost a lifetime's worth of credibility, a generation of educators was impressed with a deep fear of and defensiveness toward parents, and textbook companies took on the motto "never again."
I was left with a deep sense of tragedy: for the "common man" who, when desiring change, does the only thing he knows how to do: "go home and sit down"; for a state whose inability to communicate across vast caverns of cultural difference left an indelible mark of strife; and for the utter havoc that was wreaked on individuals and communities through the insidious power of false information, inflamed by the sparks of zealous belief. I cannot help but believe that such an event constitutes a kind of collective trauma, an EmerAgency that has left a lasting scar on the educational system of the nation. It would be interesting to see what types of Memorials might arise out of such memories.
Pledged by Alexandra Robbins and Academically Adrift: Limited Learning on College Campuses by Arum and Roksa both deal with university life and culture, although the first is written by a journalist and the second is based on a quantitative study. Notwithstanding the (vast) methodological differences and their implications, both set out to question the value of current cultural practices in higher education, be it Greek life or the study habits of students in general. While I would have to perform a full reading of Pledged to comment on its conclusions, Academically Adrift finds that students who remain more focused on individual study and less on university social life improved their critical thinking and writing at a substantially higher rate than those living out the traditional college experience. The assumption seems to be one in line with a WAC mentality: that writing is the best way to both develop and test intellectual growth. I would like to buy/read the book's methods section, if only to learn more about how they interpreted their data. According to The Community College Spotlight, the study employed the Collegiate Learning Assessment essay test "that asks students to solve real-world problems, 'such as determining the cause of an airplane crash, that require reading and analyzing documents from newspaper articles to government reports.'" The circumstances in which such standardized tests are administered often fly in the face of much composition research, which indicates that writing effectively requires in-depth knowledge of particular topics, research, time, multiple drafts, and collaboration.
The New York Times did an interview with Arum that also reveals the very simplistic view of "learning" which the general public holds, and which the study seems to capitalize on in its bid for attention. In the interview, Arum says that "areas like critical thinking, complex reasoning and written communication. . . . are the general skills that most people believe should be at the core of undergraduate learning." I think many here at USF would agree. However, from a compositionist's standpoint, such standards often seem to dismiss factors such as the time required to ingest new knowledge, much less learn to perform it within a new discourse community and professional culture, factors that Composition has tried so hard to bring to scholars' attention through qualitative studies. The assumption is that "learning" is a monological thing that can be appropriately demonstrated by a standardized critical-thinking test after two years of nothing but general education classes. To be clear, I agree that the current university system might undergo some reform in its proclaimed priorities and concerns regarding student education; however, I am in agreement with critics of the Adrift study who object to its lack of attention to disciplinary specialty.
(As a tie-in to the first reading by Moffett: it is interesting to note that Josipa Roksa is from the University of Virginia, and the ABC article cites The University of Charleston, in West Virginia as one of the institutions dedicated to “beefing up” their writing assignments within majors as a response to the study.)
I think this project segues well into the Spencer Foundation CFR that Dr. Moxley passed to us last class (before he had to release us early because of tornadoes and long drives home), requesting research about how individuals and institutions within the educational infrastructure use data to improve and shape pedagogy and policy. The national response to this study would be a fantastic case for answering just such questions. The Foundation specifically calls for inquiry into "how individual teachers, faculty members, as well as principals and department heads, learn how to use data and how they can work together to understand, interpret and apply data in their specific professional contexts."
So, I haven't gotten to Stake's Qualitative Research yet, but for the sake of time, and to end on an entertaining note, I'll just say that I was struck by his conception of research as collecting more information than any one person could experience, because it reminded me of this (my apologies for the painful English dubbing): http://www.youtube.com/watch?v=chhtNIKafvU&feature=related.