synthesizing the past

24 May 2012

It was arranged for me to see Charles Beard, who was attending the American Historical Association’s 1935 convention in New York. Perched on the bed in his overheated room in the Hotel Pennsylvania, Beard poured forth his scorn for the pusillanimity and triviality of a historical scholarship that had lost all sense of its critical function in the civic realm. He gave me a formula for a fine scholarly career: “Choose a commodity, like tin, in some African colony. Write your first seminar paper on it. Write your thesis on it. Broaden it to another country or two and write a book on it. As you sink your mental life into it, your livelihood and an esteemed place in the halls of learning will be assured.”

– Carl Schorske, “A Life of Learning” [pdf] (the 1987 Charles Homer Haskins lecture)

Concerns about overspecialization in history are probably about as old as specialization itself. It would be easy to point to Carl Schorske’s 1935 conversation with Charles Beard quoted above and then to William Cronon’s recent column on the importance of synthetic, bigger picture history, then say something along the lines of “same as it ever was”, and then leave it at that. But just because overspecialization seems to be a persistent problem doesn’t mean that nothing can be done about it, and just because it seems to be a recurring problem doesn’t mean that it’s always of the same magnitude.

I could be mistaken, but I take Cronon’s title, “Breaking Apart, Putting Together” to be an implicit reference to the title of Thomas Bender’s “Wholes and Parts: The Need for Synthesis in American History” [JSTOR – paywalled] which was published just about 25 years ago. Writing after a couple of decades during which social history – which often involved intensive research on particular communities or social groups – had grown to become quite possibly the dominant form of American historical research, Bender urged historians to begin to synthesize this work into broader interpretations. For those historians who continued to write monographs – and Bender was not opposed to the monograph – Bender hoped that more of them would carry out their research with the possibility of future synthesis in mind: there’s not really a standard way of combining individual works of history, but it stands to reason that some works are more amenable to synthesis than others.

Bender’s article generated quite a bit of discussion at the time, not all of it positive: some objected to the idea of synthesis itself, while others objected to the particular kinds of synthesis Bender preferred. About a year after the article appeared, the Journal of American History published a special forum on synthesis, in which Bender defended his ideas against some of his critics [table of contents; articles paywalled].

Despite the criticism, the impression I got when I was reading up on this a few years ago was that Bender was not alone in his concerns, that other historians – and not just those who specialized in the United States – felt that there was a need to broaden the scope of individual historical works. Sometimes this was expressed more as a concern with fragmentation rather than with synthesis, per se, but I think those are two sides of the same coin.

In the intervening years, there does seem to have been an increase in the amount of synthetic work being produced, at least in American history. While you will probably still have a hard time finding a recent, academic-ish survey of all of American history that is not actually a textbook – textbooks are a kind of synthesis, but not the kind Bender was writing about – there are now a fair number of surveys of particular periods of American history that build on recent research.

And of course, general surveys are not the only kind of synthesis there is. As Andrew Hartman has pointed out over on the U.S. Intellectual History blog, many works that focus on a particular topic or question also involve synthesizing other historians’ research on the same or related areas. Hartman’s examples are from intellectual history, but I can think of relatively recent works in political or social history, such as the history of voting or the history of marriage, that seem to qualify as both original and synthetic.

All of that said, I don’t really disagree with Cronon. I’m not a fan of setting up stark dichotomies as a rhetorical device, and I don’t think, for reasons I’ve outlined above, that the situation is as bad for synthesis as it was in the mid-1980s. But I’m also a big fan of synthesis (and survey courses, for that matter), and since I think there could still be more of it, I don’t really see a problem with arguing in favor of it.*

What I do wonder, though, is what relationship academic synthesis has to public interest in history, but that’s a subject for another post.


*Even though I think I’m personally still more comfortable doing close-to-the-sources monograph-style research.

merely synthetic

18 December 2009

Years ago, ex-blogger (and current twitterer) Caleb McDaniel wrote a post about academic plagiarism called “Good Fear and Bad.” The good fear was the fear of committing plagiarism that keeps academics vigilant, guarding against carelessness and error in their own research and writing: “It’s one of the internal controls that helps prevent the outright cases of intellectual theft from happening.”

Of course, no one really needs to be afraid of committing conscious plagiarism: since it is by definition deliberate, the remedy is simply not to do it. But that’s not really what Caleb was talking about. Instead, he was raising the specter of truly accidental or coincidental cases: cases where one paraphrases from notes without realizing that the paraphrase drifts back closer to the text the notes are based on, or cases where one arrives independently at an image or metaphor only to find that someone else has already arrived at the same place. Cases that might look like plagiarism – that might even draw accusations – but aren’t.

I was reminded of this recently because I’m currently trying to work part of a course paper into a blog post (or two). It’s my own paper but it’s not original – that is, I did the research and did the writing and everything else involved in producing the paper, but it’s based entirely on secondary sources. It’s about the history of the Declaration of Independence, the Constitution, and the Bill of Rights as documents: mostly, it’s about how they’ve been preserved over the years. My principal sources for the pre-1950 history of the Declaration and the Constitution were the 1949 annual report of the Librarian of Congress and an article by Verner Clapp in the journal Special Libraries; among my post-1950 sources were some articles in the New York Times (I can post the full bibliography with links if anyone’s interested). Because I was looking at material history rather than cultural or intellectual history, I did not look at any of the many histories of the documents as expressions of ideas. But I kept them in mind for future reading.

A few days ago, I picked up Pauline Maier’s history of the Declaration of Independence, American Scripture, and was quite surprised to find much of what I covered in my paper written into its first few pages: Maier’s introduction starts with a reflection upon the Declaration as a material object and its history. There’s no question as to primacy here: I wrote the paper a couple of months ago for a readership in the ones; Maier wrote years ago for a readership in the thousands. So I did what any former almost-historian would do: I turned to the footnotes. And sure enough I found that same Librarian of Congress annual report, the Verner Clapp article, and one of the New York Times articles I used.

Before knowing that we worked from the same sources, I found the resemblance striking, even worrying – there’s at least one quotation we both used (it’s a good quotation!); after looking at the footnotes, it seemed almost unremarkable. After all, how many different ways can you say that for a few decades the Declaration hung on a wall in the United States Patent Office Building opposite a window where it was exposed to natural light, and that many suspect that this prolonged exposure caused much of the fading visible in the document? (I have not looked that up to make sure I’m not inadvertently quoting someone. Apologies to that someone if I am.)

But what about the sources themselves? There have been cases where scholars have been accused of hiding their unoriginality by quoting and citing sources they found through others’ work without acknowledging where they found those sources. Generally, the problem is with using only the bits of sources that another scholar used without crediting that scholar (e.g. by not writing “quoted in [citation]”), not with finding the sources and then using them directly. Since I worked directly from the sources I cited – and in any case found them somewhere other than Maier’s book – that doesn’t really apply here.

Interestingly, we followed similar routes to our identical sources: in her footnotes, Maier thanks a conservator at the Library of Congress for the references to the Clapp article and the 1949 annual report. I found those same references through an article in a 1997 issue of the Library of Congress Information Bulletin, which for full circularity also refers to Maier’s introduction to her book. Appropriately enough, parts of that 1997 article also seem to be based on the references it recommends. I assume we both turned to the New York Times for more recent coverage because it’s a prominent paper with a certain amount of credibility and it has carried some fairly detailed articles on the documents’ preservation.

In the end, I don’t really think accidental plagiarism, or the appearance thereof, was ever much of an issue here. I wrote about the Constitution and Bill of Rights along with the Declaration – although the Declaration has the best documented history and consequently got the most attention in the paper – and I tried to include a bit more technical detail about preservation when I could. I also cited my sources and did not claim to be uncovering original information, just to be putting together in one place information already available.

I am still glad, however, that although I picked up a copy of American Scripture some time ago I did not open it until after finishing the paper: I think I might have been so paralyzed with fear of re-summarizing Maier’s summary that I would have had a hard time writing anything at all.

historians and their fact-finding

3 December 2009

I’ve been browsing through studies of archival users over the past few days and have been finding them fascinating. (This probably says something about me.) There seems to have been a huge upsurge in interest in studying users within the archival profession in the past 15-20 years. Many of these studies, not surprisingly, focus on people conducting historical research: usually historians, but also other academic researchers, as well as genealogists, who I believe are the largest group of archives users in North America.*

I plan to write up something more detailed about these studies; having once been a historian in training, I’ve been particularly interested in the studies focused on professional historians. The genealogy study linked above is great too: I’ve done some casual family history searches myself, but nothing like what the professionals do.

In the meantime, I have a reference request: is there a study, or even a reflective article, by a historian that discusses the use of archival tools such as finding aids? Almost all of the user studies I’ve found are by archivists or others in the information professions. Meanwhile, most historical writing about archives I’ve seen generally discusses physical locations, access considerations, and maybe archivists, but then bypasses the routines of searching and requesting to jump to the archival material itself. There might be references to classification systems or organizational arrangements, but those aren’t really the focus of the writing. And then you’re left with the familiar scene of the historian alone with the sources – the emphasis is on the information sought, not the information seeking, and the latter is what I’m looking for.

I suppose this is a question that might be better on something like twitter, but it always seems like I’m on twitter late at night when no one’s around. Also, I think I might have gone over 140 characters.

*It would be interesting to know if this is true of archives use, as well as users. That is, do genealogists as a group request more material than historians (assuming historians are the second largest group)? Or do historians request so much material that, on aggregate, it outnumbers genealogical requests? And while I’m asking, how much overlap is there between these two types of requests?

a matter of degrees

7 November 2009

Louis Menand:

The moral of the story that the numbers tell once seemed straightforward: if there are fewer jobs for people with Ph.D.s, then universities should stop giving so many Ph.D.s—by making it harder to get into a Ph.D. program (reducing the number of entrants) or harder to get through (reducing the number of graduates). But this has not worked. Possibly the story has a different moral, which is that there should be a lot more Ph.D.s, and they should be much easier to get. The non-academic world would be enriched if more people in it had exposure to academic modes of thought, and had thereby acquired a little understanding of the issues that scare terms like “deconstruction” and “postmodernism” are attempts to deal with. And the academic world would be livelier if it conceived of its purpose as something larger and more various than professional reproduction—and also if it had to deal with students who were not so neurotically invested in the academic intellectual status quo. If Ph.D. programs were determinate in length—if getting a Ph.D. were like getting a law degree—then graduate education might acquire additional focus and efficiency. It might also attract more of the many students who, after completing college, yearn for deeper immersion in academic inquiry, but who cannot envision spending six years or more struggling through a graduate program and then finding themselves virtually disqualified for anything but a teaching career that they cannot count on having.

That’s a surprising conclusion, but I think Menand is on the right track here. Only he misses an alternative conclusion that his own analysis points towards:

Who teaches that? Not, mainly, English Ph.D.s. Mainly, ABDs—graduate students who have completed all but their dissertations. There is a sense in which the system is now designed to produce ABDs.

The system works well from the institutional point of view not when it is producing Ph.D.s, but when it is producing ABDs. It is mainly ABDs who run sections for lecture courses and often offer courses of their own.

Maybe he’s too much a product of his professional training and context to see it,* but it seems to me that rather than completely transform the Ph.D., it might be better to acknowledge that the ABD has become a sort of degree without a diploma and then formalize it and confer upon it a legitimate status. That is, make it a terminal degree between the M.A. and the Ph.D., and reward people who teach with an ABD (or whatever it would be called if no longer associated with a dissertation) with a decent salary, benefits, and a measure of job security. That’s a lot to ask, especially in today’s economic climate, but if ABD is going to become a degree with a diploma, it can’t be a degree in diploma only. As for time to degree, I admittedly don’t have any numbers on this, but it seems like it usually takes about the same time to get to ABD within the same discipline across institutions, with variation in time to Ph.D. mostly a function of variation in the length of the final dissertation phase. A formalized ABD degree would have to set its requirements to avoid reproducing that same disparity.

Meanwhile, the Ph.D. – which may still need reforms in other ways – would get to remain distinct as a Ph.D. And the M.A. could remain a shorter, still in-depth but not as in-depth degree. My experience, anyway – and I was in a bit of an unusual situation because in my program you got an M.A. through coursework, but there was no M.A. thesis or M.A. exams – was that I learned quite a lot between finishing the M.A. requirements and passing my oral exams, and that this learning was not just a matter of covering more content but involved learning new ways of thinking about both my field (history) and, for lack of a better phrase, my orientation towards the world. Maybe it’s not always like that. But even though I went on to the start of the dissertation after I finished my exams, I still felt like I’d completed something very real and distinct when I became ABD; that would not have been the case had I left the program earlier.

*Or maybe, as an ABD, I’m blinded by my own status and context.

the mysteries of language

27 October 2009

invigilate (verb):

1. to keep watch.
2. British. to keep watch over students at an examination.

invigilance, invigilancy (noun):

Want of vigilance; neglect of watching; carelessness.

levels of knowledge

2 October 2009

Being in school again has me thinking about what it means to know something. Not because of anything covered in any one course, but because of the fact of the courses themselves. When you’re out of school, if you read something, and you have reason to believe you understand what you’ve learned from it, you can act as if you know that information without too much hesitation. Of course that knowledge, like most knowledge, is provisional: you could be misunderstanding it, or the source itself could be wrong. Just because you believe you know something doesn’t mean you’re beyond correction. You might qualify your statement when you present that knowledge – “I remember reading a study” – and you might ask someone with more expertise if what you know is true, but you generally don’t feel as if you need some sort of external approval to demonstrate that you really know it.

It’s different in school, where there are systems of evaluation set up to periodically evaluate your knowledge. Read a book about subject A outside of work and there’s not much you have to do aside from finish the book to believe that you’ve learned and now know something additional about A. Read the same book for class and you might have the same belief about your knowledge – but until you’ve finished the coursework evaluation process, it will seem less settled.

Why am I bringing this up now? Aside from the fact that I’ve been struck by how differently I approach what I know depending on whether it’s part of an education program or not and simply think that is interesting, I am also going to be writing a bit about subjects related to my program. So I want to emphasize that this blog reflects the fact that I am in the process of learning. There are certain risks involved in showing one’s learning process in a public forum, but I hope that in writing about what I am learning, I’ll be able to give others at least a partial idea of what the library and archives fields are about. You can learn along with me.

For example, if I don’t have time to get into details, I tell people I’m in library school. People usually know libraries and they have some understanding of what librarians do, so library school doesn’t sound like anything that out of the ordinary. But I’m not just in library school; I’m also in an archives program (it’s a joint degree, so I’m in both). And people are less familiar with archives and what archivists do. I plan to write a post about the difference between the two – that is, between libraries and archives – but it turns out that the definition of an archive is quite particular – as is the definition of a record – and something you have to learn carefully, even if you already know in a general way what archives are, have done historical research in them, and don’t find the idea of “archives school” completely foreign.