Cyberlibertarianism: The Digital Deletion of the Left

I’m very happy to have a piece appear in Jacobin: A Magazine of Culture and Polemic, titled “Cyberlibertarians’ Digital Deletion of the Left” (I’ve given this blog entry my original title, which fits more neatly with the work I’ve been writing recently on cyberlibertarianism). The piece begins by posing a question that I hope most of us on the Left are thinking hard about:

The digital revolution, we are told everywhere today, produces democracy. It gives “power to the people” and dethrones authoritarians; it levels the playing field for distribution of information critical to political engagement; it destabilizes hierarchies, decentralizes what had been centralized, democratizes what was the domain of elites.

Most on the Left would endorse these ends. The widespread availability of tools whose uses are harmonious with leftist goals would, one might think, accompany broad advancement of those goals in some form. Yet the Left today is scattered, nearly toothless in most advanced democracies. If digital communication technology promotes leftist values, why has its spread coincided with such a stark decline in the Left’s political fortunes?

The full piece is available on the Jacobin website.


Bitcoin Will Eat Itself: More Contradictions of (Digital) Libertarianism

Bitcoin (BTC), the much in-the-news and up-for-government-discussion cryptocurrency favored by Deep Web drug markets, libertarians, anarchists and would-be assassins everywhere, has been on a tear recently, and as of yesterday briefly hit an all-time high of more than USD $900. It’s not hard to find—in fact it’s difficult to avoid—cyberlibertarians of all stripes celebrating this surge and similar ones in the past as proof of Bitcoin’s importance. While the surge does indicate something, it is beyond remarkable to read celebrations of it as if they prove Bitcoin’s feasibility as what it is advertised to be: a currency. Only an incredibly blinkered and uninformed worldview, one typical of the paradoxes found throughout cyberlibertarian discourse, can understand dramatic surges in the (relative) value of an instrument this way, since under any conventional economic theory such surges prove not that Bitcoin is a new government-toppling currency but, to the contrary, that it is nearly useless as a currency. Like so many other parts of cyberlibertarian discourse, Bitcoin’s supposed power is so fully and transparently perched on blatant contradictions that it is shocking to find people taking it seriously, and yet take it seriously they do.

In many ways this is a familiar story about digital arrogance. Engineers imagine that their domain-specific knowledge translates into universal knowledge (“guys [who] are really good at what they do, and [who] think that makes them an expert at everything“); that all problems are engineering problems and that unsolved problems simply indicate that nobody as smart as they are has come along to solve them; that domain-specific knowledge is a kind of “elitism” meant to keep out true experts like them. It is also a story about the permeability between cyberlibertarianism and Tea Party libertarianism, for lurking under the celebration of Bitcoin is an endorsement of Ron Paul-ite conspiratorial intuitions about monetary policy that do not stand up to scrutiny, even, for the most part, from more reputable Right and Libertarian economists (see, for example, criticisms of Bitcoin from an economist at the real (not Paulite) Libertarian Mises Institute: “The Bitcoin Money Myth” and “Bitcoin: Money of the Future or Old-Fashioned Bubble?”; also see the well-known digital investment analyst Henry Blodget making much the same fiscal argument I am advancing here in “Bitcoin Could Go To $1 Million”). Together, we have the spectacle of rabid cyberlibertarians like Pirate Party leader Rick Falkvinge promoting Bitcoin because it displays exactly those features that disqualify it for its putative use. While Falkvinge may be the most visible and loudest advocate of this contradictory “analysis,” one need only check the comment boards for any article raising critical questions about the economics of Bitcoin to see it repeated, ad nauseam, in a typically trolling style dismissive of any point of view not directly based on grokking the genius of the Bitcoin algorithm (in addition to their ubiquity on Falkvinge’s site, see comments on articles like this one on the Washington Post or this one on Bloomberg or this excellent blog post on Naked Capitalism).

To see this, one need only start at the beginning. Bitcoin is touted as a replacement for “fiat currency.” “Fiat currency” is a buzzword from libertarian economics and especially from Paulites; here are the key bits of the definition from Wikipedia:

Fiat money is money that derives its value from government regulation or law. The term fiat currency is used when the fiat money is used as the main currency of the country. The term derives from the Latin fiat (“let it be done”, “it shall be”).

The Nixon Shock of 1971 ended the direct convertibility of the United States dollar to gold. Since then all reserve currencies have been fiat currencies, including the U.S. dollar and the Euro.

In the simplest terms, one can understand “fiat money” as money without “intrinsic value,” that is, money that would have no value in some other, non-monetary form (the most typical example of a currency with intrinsic value is gold). This distinction is actually much harder to make than advocates want us to think; more on this below. As the Wikipedia entry goes on, “while gold- or silver-backed representative money entails the legal requirement that the bank of issue redeem it in fixed weights of gold or silver, fiat money’s value is unrelated to the value of any physical quantity. Even a coin containing valuable metal may be considered fiat currency if its face value is higher than its market value as metal.”

OK: fiat money is “bad” because it has no value other than as money; non-fiat money, also known as “commodity money,” has “intrinsic value.”

Now, what’s supposed to be wrong with fiat money? This is an age-old canard for libertarians; it’s one of the favored talking points of the Paul clan. What it’s supposed to be about is stability of value.

Ron Paul: “if unchecked, the economic and political chaos that comes from currency destruction inevitably leads to tyranny” (Ron Paul, “Paper Money and Tyranny,” speech in the U.S. House of Representatives, September 5, 2003, quoted on http://wiki.mises.org/wiki/Criticism_of_fractional_reserve_banking)

What people supposedly hate about “fiat” currencies is that “central bankers” can manipulate the value of the currency, supposedly unlike asset-backed currencies like gold. The whole point of this is to have a stable currency: a currency whose value does not fluctuate wildly.

But because Bitcoin is completely uncontrolled, it cannot separate its asset function from its currency function. That means that when it appears to be deflating (that is, when its value is rising, so that prices denominated in it are falling), investors (i.e., “hoarders”) will jump in, as they are doing now. The problem, in just the way the libertarians scream about, is that this makes the instrument too volatile to use as a medium of exchange.

This is why most economists warn against deflation in currencies in general. The problem with “fiat currency” is value fluctuation, and the most dangerous kind of value fluctuation is the deflationary spiral, usually considered worse, even, than the kind of inflationary spiral experienced in the 1990s and 2000s by the Zimbabwean dollar.

That is, a merchant cannot hold onto their Bitcoins as profit, because they have no guarantee that those profits will be worth tomorrow what they were worth when taken. The 6 Bitcoins I get for selling a lawnmower today may (and likely will) buy me only a box of cereal tomorrow. This forces people to constantly transfer their Bitcoins into the supposedly outdated national currencies, which actually underpin Bitcoin and are necessary for it, rather than being its old-fashioned predecessors.
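To make the lawnmower example concrete, here is a minimal sketch in Python of the exchange-rate risk a merchant takes on by holding profits in a volatile instrument. All the numbers are invented for illustration; they are not market data:

```python
# Illustrative only: the prices below are invented to show the mechanics
# of exchange-rate risk, not actual BTC/USD market data.

sale_price_btc = 6.0             # the merchant sells a lawnmower for 6 BTC

btc_usd_at_sale = 900.0          # hypothetical exchange rate on the day of sale
btc_usd_a_week_later = 450.0     # hypothetical rate after a 50% swing

profit_at_sale = sale_price_btc * btc_usd_at_sale            # $5,400
profit_a_week_later = sale_price_btc * btc_usd_a_week_later  # $2,700

print(f"Profit when the sale closes:      ${profit_at_sale:,.0f}")
print(f"Same 6 BTC one week later:        ${profit_a_week_later:,.0f}")
print(f"Loss from holding the 'currency': ${profit_at_sale - profit_a_week_later:,.0f}")
```

The only way for the merchant to escape this risk is to convert the Bitcoin into a national currency immediately upon receipt, which is precisely the dependence described above.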

Which world currency is currently experiencing among the most dramatic deflationary spirals anyone has ever seen? Bitcoin, the “existential threat to the liberal nation state.” Any sane person putting their life’s savings into Bitcoin among all world currencies right now is as foolish as a Dutch person buying tulip bulbs during–well, you know when. That is because the problems with currencies actually aren’t formal, or mechanical, or algorithmic, despite what BTC-heads desperately want to believe. They are social and political problems that can only be solved by political mechanisms. That is why, despite the rhetoric, most sovereign currencies right now are far more stable than Bitcoin will ever or can ever be (since Bitcoin has no mechanism for value control whatsoever, is almost designed to produce deflation, and BTC-heads hold the historically disproven belief that lack of regulation produces stability, when we can see time and time again that lack of regulation produces boom-and-bust cycles of an intensity far greater than the central-bank shenanigans BTC-heads loathe so much). Fine, let’s go back to the Gilded Age–but don’t pretend for a second we had stable currency values back then, asset-backed or otherwise. We had fiscal chaos, ruled by the most concentrated and powerful holders of capital.

Many economists recognize something that appears to have been beyond the inventors and advocates of Bitcoin: without direct regulatory structures that prevent an instrument from being used as an investment (aka “hoarding”), any instrument (even gold) will be subject to derivation, securitization, and ultimately the extreme boom-and-bust cycles that it is actually the purpose of central banks to prevent.

The more Bitcoin fluctuates in value, the less functional it can be as a currency. The less impact it can have on “world governments,” whatever that is supposed to mean. The more Bitcoin “rises in value”—that is, experiences radically deflationary spirals—the more useless it is as currency.

In fact, because the cycles of rapid deflation and inflation provoke constant exchanges of Bitcoin for other stores of value, usually national currencies, Bitcoin can more readily be understood not merely as a commodity, just one among many other digital commodities, but as a kind of derivative itself–an option or futures contract related to the value of other instruments, on which investors of all sorts can speculate and, depending on the volume of transactions, even manipulate the market. Given Bitcoin’s foundational anti-regulatory stance, it is almost inconceivable that major players are refraining from such manipulation. Thus the involvement of high-profile players like the Winklevoss twins, too, cannot be a cause for celebration of Bitcoin’s potential as a currency; rather, it demonstrates Bitcoin’s utility as a manipulable commodity for typical, existing capital to use to its own ends. In this sense, it becomes a tool for existing power to concentrate itself, rather than a challenge to the existing order.

Few attitudes typify the paradoxical digital libertarian mindset of Bitcoin promoters (and many others) more than those of “Sanjuro,” the alias of the person who created the Bitcoin “assassination market” written up by Andy Greenberg. He believes that by incentivizing people to kill politicians, he will destroy “all governments, everywhere.” This “will change the world for the better,” producing “a world without wars, dragnet panopticon-style surveillance, nuclear weapons, armies, repression, money manipulation, and limits to trade.” While not directly about the revolutionary powers of Bitcoin, the sentiment flows from the same fount of misguided computational “wisdom.” Only someone so blinkered by their ideological tunnel vision could look at world history and imagine that murdering democratic governments out of existence would do anything but make every one of these problems immeasurably worse than they already are.

Posted in "hacking", bitcoin, cyberlibertarianism, information doesn't want to be free, materality of computation, revolution, rhetoric of computation | Tagged , , , , , , , , , , , | Leave a comment

On Allington on Open Access

Daniel Allington has written the best thing I’ve yet read anywhere on open access, called “On Open Access, and Why It’s Not the Answer.”

Anyone interested in the question should read it now. It is much deeper and more detailed than most of the pro-OA writing out there, and gets at some of the deep political and academic problems that lurk around the stark moralistic rhetoric that informs most discussions of the topic. I won’t try to summarize it, but here are a couple of choice quotations:

“If you do not have access to an adequately funded library, then that is a problem, but it is a different problem from the apparent over-pricing of some academic journals, and requires a different solution. And if you have access to what would seem to be an adequately funded library, but cannot obtain the reasonably priced journal you need because the funds have all been soaked up by overpriced journals, that’s again a problem, but it’s hardly the fault of the reasonably priced journal, its academic editors and contributors, its editorial staff, or even (in many cases) its publisher (since not every organisation that publishes a cheap journal also publishes a very expensive one).”

“Requests for articles uploaded to an institutional repository … primarily come from people who already (for the most part) have access to the same articles through the inter-library loan system.”

“It is hard to see any particular need for an ‘academic spring’: the name by which the most recent phase of the open access movement was, somewhat offensively, referred to by some journalists (the implication being that boycotting Elsevier is somehow akin to risking one’s life protesting against a military dictatorship in the Middle East). Completely free journals already existed, albeit that many of them were and are of comparatively low status. There were at least two viable systems whereby people could access articles published in closed journals to which they lacked direct access, namely repositories and (for those lucky enough to be placed within participating institutions) the inter-library loan system.”

“The existence of the larger market enables more money to be spent on marketing (for many academic presses, ‘marketing’ consists of no more than listing a title in a catalogue and mailing out a scant handful of copies to the reviews editors of scholarly journals) and facilitates these books’ appearance on the shelves of general interest bookshops – not because the manager of (say) the local branch of Waterstones necessarily has a commitment to disseminating scientific knowledge (although in practice, that is not unlikely to be the case), but because members of the public are likely to pick them up and buy them, contributing not only to the dividend paid to Waterstones shareholders, but also to the local branch balance sheet, permitting the branch to stay open and the staff to be paid. The irony is that if the text of those books had been published not through the commercial system, but by being uploaded to a “free” website such as this one, far fewer people would have read that text, because far fewer people would have had a stake in ensuring that it would reach an audience.”

“The open access movement is consumerist, i.e. … it has typically ignored production issues and failed to give serious consideration to the academic publishing industry, to the contribution it makes, and to the likely results if it were to be starved of income. This point is obscured by focus … on academics as producers. Such focus misconstrues the relationship that professional academics have with the publishers of academic journals.”

“I am yet to be convinced that social media can provide an adequate medium for the assessment of extended theoretical arguments or in-depth analyses of data, as opposed to strikingly-expressed position statements and technically-impressive visualisations.”

“Although academics collectively appear to be no better than other social groups in considering the welfare of those who depend upon them – Brienza has been perhaps unique in arguing that academics have an ethical responsibility to consider the fate of the thousands of people ‘who make a modest living supporting the publication of worthy research’ – the attitude Fyfe draws attention to is probably more closely associated with new media companies than with open access advocates”


I wrote a comment on Allington’s blog to emphasize and expand upon a couple of his key points; it’s reproduced here:

I agree with Sara and will go further: this is the best thing I’ve ever read on the topic. It summarizes quite a few points I’ve been meaning to make in writing about it myself, and I’m grateful I can point people here instead.

Just to chime in, I’ll add a few other points that I believe complement yours:

1) Publicity. One of the main functions of publishers is to publicize (as the name suggests) our work. This costs money. It is one of the reasons Representations and Critical Inquiry count so much in English: they are well-publicized as well as being well-funded in other ways. Along with sustainability, this is one of the things funded publishers can and do offer that completely gratis operations can’t and don’t.

2) Discipline Specificity. The blanket injunctions regarding OA completely overlook the tremendous differences in costs from one discipline to another. As Hal Abelson said in the Swartz report from MIT, the entire JSTOR back and current catalog (mostly humanities and social sciences journals from hundreds of publishers) costs less than the current journal subscription from some individual science publishers (such as Elsevier). If a main part of the pro-OA argument is the cost of journals, then it must take into account the fact that journal costs are radically different across disciplines, as you suggest.

3) Emotion/moral argument. I have my own explanations for where this comes from and why it occurs (a question you aren’t really asking), but I think it has to do with the resistance on the part of OA advocates to thinking deeply about what they are doing. There is cognitive dissonance. Especially in the humanities, because of #2 above, OA is easily seen as a rejection of humanistic academic practice rather than as support for it, and this is visible in quite a bit of the rhetoric you quote. Keeping these contradictions in mind is hard and produces extra emotion. Arguing that academics and their support system should be paid nothing while simultaneously suggesting you are supporting them is hard work, because it does not make sense on the surface.

4) Mandates. You get near this a couple of times, but there is a tremendous contradiction in the fact that, for the first time I’m aware of in history, under the banner of “open” and “free,” academics are being told where and how they can and should publish and not publish. There is a broad suggestion, which you touch on, that academics should not publish in non-OA journals at all. Whatever the moral benefits of OA may be (and they are much thinner than advocates suggest, as you rightly point out), academic freedom is more important.

5) Libraries. The sharp edge of the OA knife is a Manichean distinction between “open” and “closed.” Anything not freely available on the web is “closed.” This is an amazing reinterpretation of the function of libraries, which have until now been seen as open institutions that provide largely free access to all sorts of published material, and still do. The fact that an article is available for a fee on the web, but for free in nearby libraries, still makes it count as “closed.” That disparages libraries (and is partly responsible for another anti-intellectual push toward putting them out of business) and turns the facts of the world upside-down. If the price of access is a trip to the local library, I don’t see why that is unreasonable. At all.

6) The Reinterpretation of Publishing. Historically, publishing has been about making things open and available, through not just printing but publicity, distribution, editing, and so on. Now we have book historians as wise as Darnton reinterpreting publishing itself as a means of preventing rather than providing access. That is really bizarre. “Paywalls” do not prevent access. Stephen King fans are not “prevented” from reading his books because they cost money. This just turns obvious facts on their head.

7) Access for the Disadvantaged. Many publishers and distributors have robust programs to deal with this. JSTOR (again, not a publisher but a distributor), in particular, provides free access to nearly its entire set of journals to almost every African institution and many institutions in developing nations worldwide. (The African program was in place prior to Swartz’s actions, making his mention of African nations in the Guerrilla Open Access Manifesto particularly curious).

8) The Slippery Target of OA. The best arguments for OA focus on academic journal articles because they have traditionally been contributed without compensation. Yet many of the most rabid OA supporters go much further, beyond the Budapest OAI recommendations, and start to talk about mandated OA for all sorts of other things up to and including “everything professors publish.” The fervor with which this position is sometimes recommended (see: the recent AHA Electronic Thesis controversy) also smacks to me of cognitive dissonance, because depriving professors of the opportunity to earn money for their own creative and scholarly productions is one of the best ways to eviscerate what is left of the professoriate.


Talk: “Cyberlibertarianism: The Extremist Foundations of ‘Digital Freedom’”

Talk delivered at Clemson University, September 5, 2013

Full paper: Cyberlibertarianism: The Extremist Foundations of ‘Digital Freedom’

Abstract

Cyberlibertarianism has rapidly become the dominant mode of political thought of our time. Especially in the US, but also around the world, the view that might be summed up in the slogan “computerization will set you free” has taken remarkably firm hold, especially among young people who perceive some of the many structural political problems our world faces. Yet when we examine the premises of cyberlibertarian thought, here including the work of writers who appear to range across the political spectrum–from Clay Shirky and Yochai Benkler on the “left” to Peter Thiel and Eric Raymond on the “right”–we find remarkable uniformity in the grounding principles of their belief system, especially in the definitions of terms like freedom, democracy, and property. This uniform belief system maps much more closely than the rhetoric would suggest onto what Philip Mirowski and other economic historians have carefully traced to origins in the Neoliberal Thought Collective (NTC)–that is, a far-right political vision that is compatible with neither left nor traditional conservative thought of many different stripes. The long-term practice of the NTC reveals many points of convergence with cyberlibertarian thought, and highlights areas of profound divergence from other forms of political thought, especially around the value of equality. Both the practice of Wikipedia and the discourse of and surrounding hacking reveal close alignments between digital utopianism and the political agendas of the extreme right.

Posted in "hacking", cyberlibertarianism, materality of computation, revolution, rhetoric of computation, theory | Tagged , , , , , , , , , , , , , , , , , , , , , , , , , , | 3 Responses

Opt-Out Citizenship: End-to-End Encryption and Constitutional Governance

Among the digital elite, one of the more common reactions to the recent shocking disclosures about intelligence surveillance programs has been to suggest that the way to prevent government snooping is to encrypt all of our communications.

While I think encryption might be an important part of a solution to the total surveillance problem, it strikes me as much more problematic than many people, especially encryption advocates, seem to think, and in certain ways actually not at all welcome and not an appropriate democratic response to surveillance. What is particularly troubling is that the issues I am going to discuss are obvious and clear ones that anyone interested in democracy, constitutional government, or rule of law should be thinking hard about, and yet despite intensive searching and Twitter solicitation on my part, I have been unable to find any real discussion of the problems, even while it is easy to find unambiguous, detailed, and explicit advocacy for technical solutions that raise the problems I mention here.

Let’s leave aside the technical questions. Let’s suppose for the moment that perfect end-to-end encryption is possible—that it becomes possible for individuals to hide everything they say and everything they write and every document they create and every transaction they perform from any surveillance, ever. This is clearly the goal advocates aim toward, without hesitation. This is to some extent what a service like Tor already provides.
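For readers unfamiliar with the mechanics, the following is a minimal sketch of what end-to-end encryption provides at the level of message content, using the symmetric cipher from Python’s `cryptography` package. It is a simplified illustration, not a description of how Tor works: key distribution, authentication, and the metadata protection Tor aims at are all omitted here.

```python
# Minimal sketch of symmetric encryption between two endpoints, using
# the "cryptography" package (pip install cryptography). This simplifies
# drastically: real systems layer key exchange, authentication, and
# (in Tor's case) onion routing on top of primitives like this one.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret known only to the two endpoints
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet me at noon")

# Any observer of the wire (an ISP, an intelligence agency, or an
# investigator serving a lawful warrant on the carrier) sees only this:
print(ciphertext)

# Only a holder of the key can recover the message:
print(cipher.decrypt(ciphertext))  # b'meet me at noon'
```

The point at issue in this post follows directly: if the endpoints alone hold the key, then a warrant served on any intermediary recovers nothing but ciphertext, and the communication lies beyond legal process entirely.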

On what legal or ethical basis do advocates assert that they have the right to do this?

[Image: The Silk Road—a sampling of wares available on Silk Road]

The knee-jerk answer, although we do not find even this discussed, is to claim a right to privacy. In the US, there is no unambiguous constitutional right to privacy—nothing in the Bill of Rights grants a right to privacy by name. Case law, and readings of the 4th Amendment (against unreasonable search and seizure) and the 5th (against self-incrimination) in particular, have led to a tacit right to privacy. Here is how one of that right’s fiercest advocacy groups, the ACLU, explains that right:

The right to privacy is not mentioned in the Constitution, but the Supreme Court has said that several of the amendments create this right. One of the amendments is the Fourth Amendment, which stops the police and other government agents from searching us or our property without “probable cause” to believe that we have committed a crime. Other amendments protect our freedom to make certain decisions about our bodies and our private lives without interference from the government – which includes the public schools.

Note what this does not say. It does not say, anywhere at all, that citizens have the right to hide their activities from law enforcement when law enforcement does have probable cause. It does not give us the right to conduct criminal activity and to hide those actions completely from the government.

In the writings of some of the major Enlightenment philosophers on whom the Founders of the US relied in fashioning this country, we find the notion of the “social contract.” As Wikipedia puts it, “Social contract arguments typically posit that individuals have consented, either explicitly or tacitly, to surrender some of their freedoms and submit to the authority of the ruler or magistrate (or to the decision of a majority), in exchange for protection of their remaining rights.” This is clearly what the framers had in mind when they created a constitutional form of government (an alternate formulation favored by thinkers like David Hume, “consent of the governed,” raises similar questions). Today, especially among digital advocates, we read about our system of government being a democracy (despite it actually being a republic, or representative democracy); we read quite a bit less about it also being a constitutional democracy, a government of laws and not men.

Yet the non-absoluteness of the right to privacy is a perfect example of what the rule of law means. In order for most of us to be relatively free, we each must sacrifice a little bit of our freedom. In order to have a relatively free society, we cannot have absolutely free individuals. Being a citizen—accepting the privileges, rights and responsibilities that go along with citizenship—means accepting some of these limits. Some of them are straightforward curtailments on what some might see as liberties: I’m not free to murder another person, to take their belongings without compensation and against their will, or even to practice medicine without being duly certified by the appropriate authorities. We sacrifice these freedoms because we have both history and logic to tell us that as a whole we are better off if we each individually make these sacrifices. Despite the murderer’s freedom to kill being constrained by law, most of us are better off if killing is illegal. Despite the amateur surgeon’s freedom to perform an appendectomy being curtailed (and note that the amateur surgeon might even be perfectly competent at performing the appendectomy, so that society as a whole loses the benefit of that skill), we have decided that society as a whole is more free if we require doctors to have licenses.

If all communications are effectively encrypted, surveillance and retrospective investigation of those communications become impossible. That might sound good to you, but it does not sound good to the victim of a violent crime or murder, to those of us who want the financial industry investigated, to those of us who would like political and corporate actions to be thoroughly investigated, and so on. Under the right legal circumstances, under any regime of power, law enforcement must have access to communications, and even more so to the digital data that actually comprise commercial and legal transactions in our society, under our laws. The 4th and 5th Amendments could have been written: “under no circumstances may government search a person’s property.” They are not written that way. They clearly give the government not just a right but an obligation to investigate violations of democratically enacted laws. Suggesting anything else is proposing a vastly different model of governance from the one we have. I am open to and want to read such proposals, but one does not find them in the writings of digital encryption advocates.

It is ironic that much of this discussion comes up in the context of the shocking nature of Edward Snowden’s revelations, where the shock value emerges precisely from the incompatibility of the NSA’s conduct with most of our readings of the 4th Amendment itself, the same 4th Amendment that guarantees the government the power to search and seize our papers under the right circumstances. My own criticism of the NSA surveillance programs is based exactly on what seems to me their incompatibility with the 4th and 5th Amendments. Rule of law, and the citizenship that enables it, are a package deal. If you want the protections, if you want the democratic rights and privileges, if you want the Constitution to mean something, you have to go along with the obligations as well. Though it is not usually taken this way (because until recently nobody even thought to raise the question), the 4th Amendment gives the government the right and the responsibility to “search and seize” the “persons, houses, papers, and effects” of citizens. By asserting the right to prevent law enforcement from conducting such legal searches, you are asserting your independence from the Constitution. You are entitled to do that in some sense, but I do not see how you can do that and also claim the rights accorded under the rest of the Constitution. You are entitled to renounce your citizenship. But if you assert that you have the right to put yourself entirely beyond the reach of the law for any of your actions, you are asserting your right to opt out of one of the very most basic elements of the social contract. I don’t want Jeff Skilling or Raj Rajaratnam or Colin McGinn or WorldCom or Citigroup to be able to conduct their business beyond the reach of law enforcement, and the price for that is that I’m not entitled to do so either.

Cyberlibertarians have an almost astonishingly selfish and self-centered view of freedom. They look at their own sphere of influence and demand total freedom within it. They do not, typically, look in a general way at society and ask what it means for all of us to be meaningfully free, especially if that general social freedom might (heaven forbid) put constraints on their own behavior. For most of history, desiring to be free from violence and serious crime has been understood as a basic part of the social contract. We can’t have that if all communications are encrypted.

I say this from a position profoundly critical of the security state, the huge amounts of government secrecy, the total surveillance programs we are all reading about, and so on. Among many other problems, the government’s assertions about these programs treat the State’s right to security inspection as perfect and the citizen’s right to privacy as what must yield. This comes out especially with regard to oversight, where we are repeatedly told that any revelation of methods and procedures compromises the security measures so entirely that oversight is impossible. I think that is a terrible misreading of history and law. What must be imperfect, so far as a balance is to be struck, is security, not privacy. Citizen privacy is a bedrock aspect of common law and of much of the thought of the US framers. We need to know—to know, not to believe—that our government surveils citizens and even non-citizens only in compliance with the law and the Constitution. We lived under this kind of regime for decades with regard to phone, mail, and wire communications, and we need to get back to it. But tilting toward privacy is not to deny the security functions of government altogether. Denying those functions seems to me a denial of a principle so fundamental that we must ask just how and why you consider yourself to be a citizen bound by the rule of law that constitutes democratic governance itself. Further, failing even to have this conversation suggests that cyberlibertarians tacitly subscribe to a notion of governance that is wildly out of step with what we understand as democracy.

Let’s face it: there is nothing hypothetical about these concerns. While we read frequently about the uses of Tor in politically-repressive states, most of us know that the most visible use of Tor is the Silk Road, a Tor hidden service marketplace in which the major product is illegal drugs. We might argue about whether having this service makes the drug trade more or less safe, more or less widespread, increases or decreases drug addiction, etc. What we can’t argue about is that because drug use is largely seen as a victimless crime and is engaged in by huge numbers of people, it is especially visible. Even if you think it’s OK for drug dealers and users to conduct their business in such a way that it is impossible for law enforcement to monitor it with a legally-obtained search warrant, the visibility of the Silk Road suggests that Tor is used or could be used for other kinds of crime. We certainly have anecdotal reports that Tor and similar services are implicated in the spread of child pornography, and the Silk Road itself has been used for illegal weapons sales. End-to-end encryption is currently being used to hide criminal activity. Further such encryption will make hiding it even easier. Full encryption would make hiding it standard, and make law enforcement something between difficult and impossible. The basis on which that happens is the assertion by cyberlibertarians that they are not subject to the laws that constitute our society. We should be having a thick and robust discussion about these problems, not just presuming that it is fine to opt out of citizenship obligations if you have the technical means to do so. That this discussion is not taking place is itself an indication of the atrophied state of political discussion in our “information age.”

Update, Dec 17, 2014: This piece has gotten some attention recently due to the fallout from Pando’s reporting on Tor, and Pando’s reposting of my “Tor Is Not a ‘Fundamental Law of the Universe.'” In a very interesting Twitter discussion with W. Greenhouse, it became clear to me that the word “encryption” here may be a bit of a distraction. Encrypted services in and of themselves are, for the most part, not what I’m getting at, but rather the kind of service Tor offers, which at least advertises itself as making all communications, including metadata, invisible to any sort of observation (though what “encryption” and “anonymity” mean as one starts to install them throughout the system remains an open question). I definitely want banks and brokers to be able to encrypt the traffic between themselves and their customers, and I don’t have a problem with companies using VPNs to conduct remote business with clients and employees (although my quick guess would be that all of these services, at least within major companies, exist within significant compliance regimes, i.e., requirements to store, report on, and produce the records of such communications when required to do so by warrant). But I don’t want banks and brokers hiding their activities so routinely that it becomes even more difficult than it already is to document wrongdoing inside of them. The Tor Project website boasts that “business executives use Tor,” and offers a bunch of positive-sounding use cases, but it is not hard to think of many ways “business executives” might find Tor extremely helpful in evading legal and regulatory requirements.


‘Communication’ and ‘Critical’

The great communications scholar James Hay has assumed editorship of the journal Communication and Critical/Cultural Studies this year, and I am very grateful to have been invited to contribute a short piece on the core concepts around which the journal is organized. My contributions (provided here gratis, libre, and DRM-free) are about the terms “communication” and “critical”:
“Communication,” “Critical.”

Abstract
Despite the proliferation of critical studies of communication, the meanings of the words “communication” and “critical” remain deeply contested. Attending to the history of the use of these terms inside and outside of the academy offers a broader perspective on some of the most pressing issues confronting scholars of communication today.

[Images: Immanuel Kant; Harold Innis; Karl Marx and Friedrich Engels (photo credits: Wikipedia)]

Posted in "social media", materality of computation, rhetoric of computation, theory | Tagged , , , , , , , , , , , , | Leave a comment

Postcolonial Studies, Digital Humanities, and the Politics of Language

Excerpted from a longer essay in progress.

Adeline Koh and Roopika Risam recently started an open thread on DHPoco based around an observation by Martha Nell Smith about the politics of race and gender in the digital humanities. I find these topics distinctly connected to questions about language and the relationship of various humanities fields. In one comment I made on the thread I tried to raise these issues, which I was not entirely surprised to find provoked no additional discussion, especially as they relate to the general question of a postcolonial digital humanities.

This is a great comment, and at the risk of continuing to talk too much on this board, I think it important to expand a bit on a point I hope to write up more thoroughly, which is that when we expand not just to multilingualism among imperial/majority languages (French, Spanish, Italian, Mandarin, Japanese, Russian, Hindi, Tamil, Arabic, Swahili, even Quechua) but to minority, indigenous and endangered languages, the question of what counts as DH and why something should be labeled as DH becomes extremely vexed. To keep to North America but turn to the indigenous (aka First Nations) people of Canada, most of the major First Nations groups now maintain rich community/governmental websites with a great deal of information on history, geography, culture, and language–a lot of what might go into at least the “archive” if not the “tool” version of DH type 1. But none of this work, or little of it, is perceived or labeled as DH, particularly as Type 1 (one of the earliest DH projects I worked on was the Cree language site http://www.eastcree.org/cree/en/, and I have had trouble getting this recognized “as” DH, and for a lot of reasons have stopped trying). To my mind, in many ways, these are better than “archives,” because they are the marks of living communities using any form of communications to keep themselves active and alive. It would make DH look a lot less parochial and majority-culture oriented if this stuff “counted” as DH, but it’s hard to see how it would benefit the communities themselves. This is one of the deep cruxes that DH as a label has created for itself–it needs this material in order to de-colonize itself, but taking that material in looks like a colonizing gesture, one that is meant to benefit “us” much more than “them.”

Some links to indexes of North American (especially Canadian) indigenous sites:

http://www.firstnationsseeker.ca/

http://www.afn.ca/index.php/en

In literary studies “proper”—college and university departments like English and Comparative Literature and Area Studies, and the academic journals in which researchers publish—it is fair to say that postcolonial studies manifests itself in two major ways. The first is through theoretical writing, like that of Edward Said, Gayatri Chakravorty Spivak and Homi K. Bhabha; the second is through the direct reading of primary texts, whether these come from traditional majority cultures or from writers more or less associated with minority and postcolonial cultures (here meaning literally those cultures once subject to colonial rule, and governance of which has been returned to one degree or another to the local culture)–texts, it’s important to say, which come from every time period and place, not just contemporary ones. This work, unless it is situated in Anthropology or Linguistics departments, typically proceeds via examination of work written in majority languages, for a variety of reasons, not all of them necessarily salutary to the postcolonial critical project. This gives rise to some of the profound tensions in the field, perhaps exemplified by the 1960s debate between the Nigerian writer Chinua Achebe and the Kenyan writer Ngũgĩ wa Thiong’o over the question of the appropriate language for African literature.

In departments other than Anthropology or Linguistics, subaltern texts are typically read in translation unless they were originally composed in a majority language. The practical reasons for this are obvious, but its effects are not entirely welcome, as it can tend to perpetuate the notion that all languages are not just equal but transparent, that nothing is lost if the original language is lost, that even the Ngũgĩ/Achebe debate is moot because Ngũgĩ’s original Gikuyu must be translated for anyone but approximately 6.6 million native speakers to read it (“Gikuyu”). In recent decades, a publishing explosion has meant that postcolonial literature is widely available in majority languages or in translation; but this should not obscure our understanding of the postcolonial predicament of the non-majority languages and their speakers. One hopes that the opposite can occur, and that interest in postcolonial cultures as no “more than” or “less than” our own culture will encourage a new respect for both these cultures and their rights, much as minority rights have come to be more widely accepted within majority cultures themselves; yet as the tension over these rights (and often enough, the languages spoken by minorities within majority cultures) shows, this work is slow, fraught, poorly-understood, difficult, and by no means guarantees a salutary outcome.

One of the projects that draws me to do part of my work in linguistics, and that drives the work of many linguists today, is usually referred to by the phrase “Endangered Languages.” This is another complex topic that I won’t even pretend to cover in anything like the detail it deserves; I encourage readers unfamiliar with it to explore the scholarship on it (good starting points are Nettle and Romaine 2002; Harrison 2008; Grenoble and Whaley 1998; and the central language resource called Ethnologue: Lewis, Simons and Fennig 2013). The “endangered languages” movement in linguistics is of fairly recent vintage, spurred in no small part by a 1992 essay by Michael Krauss. As the linguists Daniel Nettle and Suzanne Romaine put it,

There are good reasons to believe that the processes leading to the disappearance of languages have greatly accelerated over the past two hundred years. Linguists estimate that there are around 5,000-6,700 languages in the world today. At least half, if not more, will become extinct in the next century. (Nettle and Romaine 2002, 7)

In fact estimates have risen since Nettle and Romaine published this assessment in 2002; current estimates indicate that there are more than 7,100 languages in the world today (Lewis, Simons and Fennig 2013). The Ethnologue currently reports that around 36% of these, roughly 2,500 languages, are threatened or already in immediate danger of dying out.

The “solutions” to these problems are difficult even to imagine, but they do exist. The first reaction within linguistics was an explosion in “documentary linguistics,” also fueled by the rapid spread of digital technologies, in a direct and understandable effort to “save” the languages by recording, archiving, and transcribing them. But even in the best cases we are talking about recording a few hundred hours of speech by a small group of speakers. Imagine if someone gave you the assignment to “document English” in the “community where you live” in several hundred hours of tape. Yes, you’d get a lot—and yes, for a variety of reasons, English exists in more variants and has a larger vocabulary than many indigenous languages—but think of how much you’d miss. You’d miss almost everything, to be very honest.

So in a kind of second wave of work, one now hears the phrase “language revitalization” coupled, more often than not, to the phrase “language documentation.” The goal of such efforts is at least threefold: to document languages as fully as possible; to support communities in ongoing efforts to resist the loss of language; and to use documentary and other materials generated by the linguistic work itself, often involving community members directly in the creation of archival and educational resources—which are usually digital in nature.

The move is one from what a few linguists have smartly called “telic archivism” (Dobrin, Austin, and Nathan 2007; Nathan 2004)—the creation of archives as an appropriate and sufficient end-goal—to a focus on the community itself, where there is any hope of the community holding onto its language(s) as vital practices. This requires, too, what I understand as the work of postcolonial studies: “If people do believe their language is primitive, or are scarred by punishments imposed for speaking the language in their youth, they are unlikely to make informed judgments about their goals for language learning.” (Nathan and Fang 2009, 8).

Most of the world’s languages are spoken and not historically written (exact estimates are hard to come by, but it is largely accepted that about 220 languages account for most of the language use of 95% of the world’s population, and that these also make up the large majority of written languages and, until very recently, of languages taught in schools). As such we cannot pin hopes for a postcolonial digital humanities on waiting for texts from marginalized peoples to appear in print or even on the web; thus Wikipedia’s goal of establishing an “encyclopedia in all languages” has historically been hampered by Wikipedia’s existing only in written form (see Wikimedia Oral Citations for the only general acknowledgement of this predicament). Efforts to make the web more multilingual or less monolingual face enormous hurdles in the accommodation of spoken practices within what is largely a written medium; what is not available is the insistence that such spoken practices be reduced to writing in order to accommodate the web—writing in this way fundamentally changes the character of languages in theoretically interesting ways, but the primary goal of the endangered languages movement, and of postcolonial studies, is to document and support the languages as they are, not to change them. (Note that documentation of languages is often done through IPA, the International Phonetic Alphabet, but this writing system is not typically used other than for linguistic research; the endangered languages movement has taken advantage of a variety of digital technologies to move the concept of documentation from written transcription to audio and audiovisual media.)

The late Dell Hymes is probably the linguist whose work most exposes the prejudicial assumptions on which our notion of literature itself rests. In a series of essays of which the most famous is “Discovering Oral Performance and Measured Verse in American Indian Narrative,” and which are collected in the volumes “In Vain I Tried to Tell You”: Essays in Native American Ethnopoetics and Now I Know Only So Far: Essays in Ethnopoetics, Hymes argued that there are not just conceptual but formal and stylistic reasons to re-evaluate the “texts” that have been “collected” from indigenous people as literary forms. As “Discovering Oral Performance” announces in its opening sentences:

I should like to discuss a discovery which may have widespread relevance. The narratives of the Chinookan peoples of Oregon and Washington can be shown to be organized in terms of lines, verses, stanzas, scenes, and what one may call acts. (309)

Hymes concludes the essay:

The contribution to a truly comparative, general literature, in which the verbal art of mankind as a whole has a place, might be analogous to the effect once had by grammars of Native American languages on general linguistics, expanding and deepening our understanding of what it can mean to be possessed of language. (341)

Roughly, what Hymes discovered is that in every productive sense, all cultures have what we call “literature,” but for the highly parochial definitions attached to the specific literary traditions of majority cultures. This has always struck me as both intuitively and empirically beyond question, as soon as one starts to look at the evidence; it also poses huge problems for the kinds of global theorizing advanced by literary scholars like Franco Moretti (Atlas of the European Novel; Graphs, Maps, Trees) and Pascale Casanova (The World Republic of Letters), which take too much for granted our ability to isolate the “properly” or formally named “literary” from the varieties of speech genres in which it is embedded. To put Hymes’s observation in terms with which, based on the time I spent with him, I feel confident he would agree: Every culture has language. Every culture has literature. Every culture has narrative. Every culture has poetry.

Digital technologies can play important roles in the preservation and revitalization of languages and cultures. They are also deeply implicated in the forces that are causing linguistic and cultural endangerment to begin with. Like other technologies of media, memory, and language, they always have the nature of the pharmakon, in the terminology of Jacques Derrida that has recently been adapted to a wide range of communications technologies by Bernard Stiegler (see for example Stiegler 2012), both poison and cure. Sometimes the poison and cure come in the same package: the very seductiveness and utility of technology can be precisely the destructive force. In Michel Foucault’s terms, the “poison” can come in the form of positive power: a power that subverts minority cultures not by directly destroying them, but by advertising the superiority of the majority, in a thousand different ways. This kind of positive power is especially effective on young people, who find so much about metropolitan modernity attractive (not least its economic opportunities), and easily adopt the view that indigenous cultures are “traditional,” old-fashioned, out of step, even “primitive.”

We can also work to make the world of the digital reflect these prejudices much less than it does. Here is where the uncertain position and uncertain commitments of the digital humanities seem to me especially worthy of reflection. As I asked in my comment, what is a digital humanities project outside of the major metropolitan languages and cultures? Who wants that label to be applied, and why? Could we, for example, engage in digital humanities work that promoted the values and lives of postcolonial peoples, even if the work did not have that label? For a long time I worked under the assumption that we can, which I still hope is correct; yet I often wonder if that is a kind of fool’s errand.

One project that deserves special discussion in this regard is the World Oral Literature Project (WOLP), run chiefly by Cambridge and Yale universities:

Established at the University of Cambridge in 2009 and co-located in Yale, US since 2011, the World Oral Literature Project collaborates with local communities to document their own oral narratives, and aspires to become a permanent centre for the appreciation and preservation of oral literature. The Project provides small grants to fund the collecting of oral literature, with a particular focus on the peoples of Asia and the Pacific, and on areas of cultural disturbance. In addition, the Project hosts training workshops for grant recipients and other engaged scholars. The World Oral Literature Project also publishes oral texts and occasional papers, and makes collections of oral traditions accessible through new media platforms. By stimulating the documentation of oral literature and by building a network for cooperation and collaboration, the World Oral Literature Project supports a community of committed scholars and indigenous researchers.

It is striking how well this project embodies the ideals implicit and explicit in Hymes’s work and the work of the language revitalization movement, without needing to make those commitments all that explicit. Instead, this project takes for granted that these languages and cultures are equal to all others, without implying rhetorically or practically that the people and their languages are “dying” or “traditional” or “backward.” It focuses on oral practice because that practice is fundamental to language and culture in a way writing is not. It is multidisciplinary, global, and postcolonial in the best sense.

Interestingly, though, like most linguistics and indigenous media projects, the WOLP has no explicit connection to digital humanities. Its funding comes directly from the sponsoring universities and from funding bodies associated with endangered languages. I was not entirely surprised, therefore, at the lack of response from within DH to the surprising announcement made on April 4 of this year, that the WOLP will be shutting down due to lack of funding:

After five years supporting the documentation of endangered languages and cultures, the World Oral Literature Project (WOLP) will temporarily halt accepting materials or offering grants as of April 2013 as new funding has become challenging in the current environment.

I tweeted about the project shutting down, and #DHPoco retweeted my tweet. Other than that, I know of no digital humanities venue in which any comment was made about what seems to me a terrible development on many different levels. I do think deep questions about the overall orientation and purpose of DH are raised by this general lack of interest and engagement, even though as a member of the DH community I am committed to doing what I can to draw attention to it.

The project I continue to think the best example of what digital technologies can do for humanities scholarship comes from linguistics, and despite being widely used and admired throughout many different linguistics fields, is advertised very little. To my knowledge, other than in my own work, I’ve never seen it referred to as DH; I don’t think the researchers refer to it this way, its funding (like the WOLP) comes entirely from linguistics sources, and perhaps because of its importance and utility to the linguistics community, I see very little self-promotion or labeling associated with it, very little of the institutional politics I associate with DH.

This project is called the World Atlas of Language Structures (WALS; Dryer and Haspelmath 2011). As the editors explain, WALS is “a large database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials (such as reference grammars) by a team of 55 authors (many of them the leading authorities on the subject).” In no small part through careful examination of the endangered and minority languages that until very recently had been ruled out as objects of serious scholarly research by colonial prejudice, WALS demonstrates just how wide the conceptual space is in which human language can vary. WALS currently lists 144 classes of features of this sort, ranging from well-known categories like tone, gender, tense, and definite and indefinite articles to forms of double negation, many varieties of word ordering, kinds of case structure, inclusive/exclusive pronouns, the presence or absence of adjectives, and forms of negation. It is interesting to ponder whether WALS would “count” as “Type 1” digital humanities, and what would be at stake in considering it to be or not to be a DH project.
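To make concrete what a database like this affords, here is a minimal sketch of the kind of cross-linguistic query WALS makes possible. It assumes a hypothetical local CSV export of WALS datapoints with columns named “language,” “feature,” and “value”; the actual WALS download format may well differ, and nothing here is part of the WALS project itself.

```python
# A minimal sketch of a cross-linguistic query over WALS-style data.
# Assumes a hypothetical CSV with columns "language", "feature", "value";
# the real WALS export format may differ.
import csv
from collections import Counter

def feature_distribution(path: str, feature: str) -> Counter:
    """Tally how many languages in the sample take each value of one feature."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["feature"] == feature:
                counts[row["value"]] += 1
    return counts

# e.g., the distribution of basic word orders across the sampled languages
for value, n in feature_distribution(
        "wals_datapoints.csv", "Order of Subject, Object and Verb").most_common():
    print(f"{value}: {n} languages")
```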

WALS, the World Oral Literature Project, the Ethnologue, and the sites of indigenous governments all display a commitment both to the capabilities of digital technology and to the rights, needs, and desires of postcolonial people and cultures. They already exist; they are already digital. I would be very glad if DH expanded its self-conception to include projects like these, but I am not sure what it would mean, or who would benefit, for them to be labeled DH; unlike digital work in other fields, the DH label carries a weight of institutional politics that may not be germane to the specific needs of postcolonial cultures.

Some things, though, clearly could be done and are within the purview of DH as it is currently constituted. In addition to closer ties with, and appreciation for, disciplinary linguistics work within text-analytical communities, this would include:

  • Increased attention from digital humanists to the world’s minority languages;
  • An increased focus on language revitalization projects as inherently a part of DH;
  • Increased recognition of the importance of speech to language itself, and support for projects that take spoken language as the evidentiary base from which to proceed;
  • Support from DH funding bodies for work like the World Oral Literature Project, the Wikimedia Oral Citations project, and even the World Atlas of Language Structures;
  • Careful thought and even elaboration of the postcolonial studies perspective on all media and technology interactions with indigenous peoples;
  • Significant work within the majority digital realm to combat the pernicious stereotypes of indigenous peoples and their languages.

I don’t see how an investment in postcolonial studies can meet an interest in digital technology without entailing a seriously critical perspective. Our world is too thoroughly informed by colonialism and imperialism, racism and sexism, to escape it through technical means; that such a desire is written into our technology at a very deep level is attested to by the Martha Nell Smith essay that sparked off the DHPoco open thread.

On that thread, Brian Lennon enjoined us to see a continuity with critical-theoretical work on the interactions of digital technology with the postcolonial predicament, including

Maria Fernández’s “Postcolonial Media Theory”; Kavita Philip et al.’s “Postcolonial Computing” and other work; Terry Harpold and Kavita Philip’s “Of Bugs and Rats: Cyber-Cleanliness, Cyber-Squalor, and the Fantasy-Spaces of Informational Globalization” and Harpold’s “Dark Continents: A Critique of Internet Metageographies”; more broadly, the work gathered in volumes like Sandra Harding’s Postcolonial Science and Technology Studies Reader.

Taking such work seriously might offer significant opportunities for digital humanities. I can think of any number of ways the oral literature collected by WOLP might be of real interest to scholars of literature, and the challenges posed by parsing spoken language represent hard problems to which scholars of literature and culture, as well as scholars of language, have much to add. Some of the questions about literary form asked by Franco Moretti and the Stanford Literary Lab strike me as ones that could be meaningfully extended to a much wider range of sources and genres than has so far been attempted, along the lines suggested by the ethnopoetic analyses of Dell Hymes. We might also explore some of the reasons that the stalled Wikimedia Oral Citations project has so far not found wider acceptance. As the leaders of that project write, “The problem with the sum of human knowledge, however, is that it is far greater than the sum of printed knowledge.” That is nowhere more clearly true than in the question of the literary, and there could be no more appropriate realization of the “cure” part of the digital pharmakon than for us to broaden our object of study to include what our colonial legacy has (almost) ruled out.

Works Cited

  • Dobrin, Lise M., Peter K. Austin, and David Nathan. 2007. “Dying to Be Counted: The Commodification of Endangered Languages in Documentary Linguistics.” In Peter K. Austin, Oliver Bond, and David Nathan, eds., Proceedings of Conference on Language Documentation and Linguistic Theory. London: SOAS. 59-68. Full text available here.
  • Dryer, Matthew S., and Martin Haspelmath, eds. 2011. The World Atlas of Language Structures Online (Munich: Max Planck Digital Library). Available online at http://wals.info/. Accessed May 30, 2013.
  • “Gikuyu.” Ethnologue entry. http://www.ethnologue.com/language/kik. Accessed May 28, 2013.
  • Grenoble, Lenore A., and Lindsay J. Whaley, eds. 1998. Endangered Languages: Language Loss and Community Response (New York: Cambridge University Press).
  • Harrison, K. David. 2008. When Languages Die: The Extinction of the World’s Languages and the Erosion of Human Knowledge (New York: Oxford University Press).
  • Hymes, Dell. 1981. “In Vain I Tried to Tell You”: Essays in Native American Ethnopoetics (Philadelphia, PA: University of Pennsylvania Press).
  • Hymes, Dell. 2004. Now I Know Only So Far: Essays in Ethnopoetics (Lincoln, NE: University of Nebraska Press).
  • Krauss, Michael E. 1992. “The World’s Languages in Crisis.” Language 68:1. 4-10.
  • Lewis, M. Paul, Gary F. Simons, and Charles D. Fennig, eds. 2013. Ethnologue: Languages of the World, Seventeenth edition. Dallas, TX: SIL International. Online version: http://www.ethnologue.com.
  • Nathan, David. 2004. “Documentary Linguistics: Alarm Bells and Whistles?” Paper presented at SOAS Conference (November). Abstract available here.
  • Nathan, David. 2012. “Archive Fever: Making Languages Contagious, or Textually Transmitted Disease?” Paper presented at Charting Vanishing Voices: A Collaborative Workshop to Map Endangered Oral Cultures. University of Cambridge (June 30).
  • Nathan, David, and Meili Fang. 2009. “Language Documentation and Pedagogy for Endangered Languages: A Mutual Revitalization.” In Peter Austin, ed., Language Documentation and Description, Vol. 6. London: SOAS. 132-160. Full text available here.
  • Nettle, Daniel, and Suzanne Romaine. 2002. Vanishing Voices: The Extinction of the World’s Languages (New York: Oxford University Press).
  • Stiegler, Bernard. 2012. “Relational Ecology and the Digital Pharmakon.” Culture Machine 13. Full text available here.
Posted in digital humanities, materality of computation, rhetoric of computation, theory, what are computers for | 2 Responses

Definitions that Matter (Of ‘Digital Humanities’)

In a recent post, “‘Digital Humanities’: Two Definitions,” I tried to point out an ongoing conflict in the deployment of the term “Digital Humanities.” While my goal was in part to show the practical range in definitions of DH, that was not really my main purpose. A lot of the time, definitions aren’t all that important. The meanings of words change all the time, and even more than that, can be changed all the time, for all kinds of reasons. But in some contexts, definitions and conflicts over definition can function as proxies for or even realizations of other kinds of conflict, including deep political and cultural conflicts. In such cases the definitions of words, and the stretching and contracting of those definitions, can matter a great deal, even if there is no essence to what a word “really means.” What’s at stake is not the meanings of words per se, but significant matters of power. That’s what’s happening in discussions of the definition of Digital Humanities, and it is those matters of power that have raised the kinds of concerns expressed at the “Dark Side of the Digital Humanities” panel at this year’s MLA convention.

I agree that most academic formations experience growing pains and that these naturally produce definitional perplexities. But I do not agree that the definitional problem in DH exclusively results from these ordinary phenomena; if it did, I’d be more inclined to agree with the widespread sentiment that this conversation is unimportant and/or uninteresting (see, e.g., Ted Underwood, “How Everyone Gets to Claim They Do DH“ and “Why Digital Humanities Isn’t Actually ‘The Next Thing in Literary Studies‘”). I am sympathetic with Rebecca Harris when she writes that “the problem of definition, it seems to me, is inspired by the formation of a field that doesn’t very much want to be defined,” and that “an analogous academic instance, I think, is the advent of ‘Queer Theory’ or ‘Queer Studies’ as an academic discipline.” I think, however, that DH adds something dramatic and different to this general process of definition–and that at least some people in DH, such as the authors of the Digital_Humanities book discussed in the earlier post, very much do want the field to be defined–and in fact that this difference is why it’s important to attend to the question. Like others, I find general discussions of the definitions of fields relatively uninteresting and unimportant, but I don’t think that’s what’s going on here. (I should be careful to state here, as I have in several other of my comments on this topic, that what I am most interested in and most concerned about is the claim that DH can be understood as a part of the profession of literary studies; many of the remarks I will make do not apply to DH in relation to other disciplines, or as a standalone enterprise.)

The difference in DH, and the reason definitions of it matter so much, is that from its inception, some very powerful people and institutions have insisted on one definition, even when many others do not accept or endorse that definition, and these persons and institutions have been able to enforce that definition in one critically important sphere that has no parallel in Queer Theory, deconstruction, or any other recent movement in literary studies: newly-available, large-scale, field-defining grant funding. Further, the availability of unprecedented amounts of grant funding to English professors has had a follow-on deformative effect in perhaps an even more critical venue: hiring. These, in turn, have had consequences (though, I think, less obviously dramatic ones) for promotion and tenure standards, although I’ll leave those aside for the time being.

My point in that earlier post was to show that there is a clear and distinct sleight-of-hand at play in definitions of DH, and that this sleight-of-hand contravenes what many–perhaps most–of those of us practicing DH want, and what many English professors would prefer if they thought it was up to them. In this game two different messages are used depending on which one is advantageous at any given moment. Message 1, the “narrow” definition, as advanced by the authors of the Digital_Humanities volume, according to which DH refers exclusively to the building of tools and archives, is used inside the field, especially for funding and hiring. But when concerns are raised about the narrow definition, especially from outside the field (of DH), the “big tent” definition comes out as message 2, which relies on the literal meaning of the terms “digital” and “humanities,” and according to which anything that combines the two inherently qualifies as DH. The effect of this shift is precisely to destabilize the criticisms occasioned by the apparent differences between narrow DH and other practices in the study of literature. This is why, despite my endorsement of the “big tent” definition, I also see something approaching disingenuousness occurring when we simply assert that by wishing it we can make it so. Apparently, we can’t make it so, not just by affirming it–at least not until we come to grips with the major forces that are driving the promulgation of the narrow definition.

Yes, grants remain available across the spectrum of humanities subjects, including the “big tent” DH topics–but those individual grants are not labeled Digital Humanities, and that labeling often turns out to be decisive. Further, my understanding is that those other streams of funding are available in approximately the same number and the same amounts they have been for decades. What changed the funding scene in humanities departments in general and English departments in particular was the availability of 6-figure project funding specifically and exclusively targeted for the narrow version of DH.

Interestingly, when the NEH’s Office of Digital Humanities (ODH) was starting up, it too offered the “big tent” definition of DH. Here is a passage from its “About ODH” page from 2006 (quoted in Juola, “Killer Applications,” p.81; as far as I can tell it’s no longer available on the NEH website):

NEH has launched a new digital humanities initiative aimed at supporting projects that utilize or study the impact of digital technology. Digital technologies offer humanists new methods of conducting research, conceptualizing relationships, and presenting scholarship. NEH is interested in fostering the growth of digital humanities and lending support to a wide variety of projects, including those that deploy digital technologies and methods to enhance our understanding of a topic or issue; those that study the impact of digital technology on the humanities—exploring the ways in which it changes how we read, write, think, and learn; and those that digitize important materials thereby increasing the public’s ability to search and access humanities information.

And here is the parallel passage from its “About ODH” page today:

In the humanities as in the sciences, digital technology has changed the way in which scholars perform their work. Technology allows humanists to raise new questions and radically changes the ways in which archival materials can be searched, mined, displayed, taught, and analyzed. Digital technology has also had an enormous impact on how scholarly materials are preserved and accessed, generating challenging issues related to sustainability, copyright, and authenticity. ODH therefore supports projects that employ digital technology to improve humanities research, education, preservation, access, and public programming. To that end, ODH works with the scholarly community, and with other funding agencies in the United States and abroad, to encourage collaboration across national and disciplinary boundaries. In addition to sponsoring grant programs, ODH also works collaboratively with the field, participating in conferences and workshops with scholars, librarians, scientists, and other funders to learn more about how to best serve digital scholarship.

While the second passage is longer and in some ways more abstract, it is also more specific and restrictive; projects must “employ digital technology,” but ODH no longer mentions supporting projects “that study the impact of digital technology,” a major bone of contention in the ongoing DH definition controversy. There is also very little (though not nothing) that points toward the use of existing technology to arrive at new insights; instead the passage focuses on methods that start from the presumption that humanities research needs to be “improved,” that “new questions” must be “raised,” and that “radical changes” are called for, all of which dovetails conceptually with “building new tools”: the premise is that there is something wrong, or at least out-of-step, with existing forms of literary scholarship. I won’t try to understand or reconstruct the narrative that explains how this change occurred, but it does seem to echo exactly the dynamic that concerns me: when the thing is public, when others are looking and wondering, the “big tent” definition is used; but when we get down to brass tacks, it’s the narrow definition that actually holds sway, and it’s important for that narrow definition to define the field as far as funding and hiring go.

I was curious about how this pattern has played out in the actual grants, so I read through several lists of the grants ODH has awarded since it was formed in 2007. I’ll admit I was surprised by how closely this funding conforms, almost entirely, to the narrow definition.

I couldn’t find an easy way to download all of the data, so here I’ve compiled a table of the ODH grants in 2010 (I’ve uploaded the complete data in an Excel spreadsheet of 2010 ODH grants), broken down into categories that I’ve tried to make as fair as possible. There are just under $5 million in grants; of that, about a third goes to archives, a third to tool-building, and a third to workshops. In terms of the number of grants awarded the percentages are slightly different, but the money still goes almost entirely to these three activities. There is exactly one grant that can reasonably be said to foreground interpretation or analysis. There are none that “study the impact of digital technology.” Based on my reading of the recent NEH records, this is a representative sample of ODH funding, and it is important to reiterate that while it by no means encompasses all of the grants NEH awarded that touched on digital topics, it does include all of the ODH grants, and therefore all of the grants formally labeled “Digital Humanities.” What is especially notable is exactly what the change in ODH mission wording would lead one to expect: there is virtually no funding for interpretation, analysis, or tool use as a primary activity. (The only category that might arguably be framed misleadingly by my rough categorization is pedagogy, and only very subtly so: between a third and a half of the 12 workshops can be said to have pedagogy as a focus–that is, they are workshops for teachers and other educators–but as Katherine Harris so rightly keeps emphasizing, this is not direct funding for pedagogical projects.)

Project Type                               | Number | % of Total | $ Amount   | % of Total
Archives, Exhibits, Editions, Dictionaries | 19     | 29.69%     | $1,480,560 | 30.29%
Tools, Applications, Prototypes, Standards | 31     | 48.44%     | $1,711,112 | 35.01%
Assessment, Evaluation, Publishing         | 1      | 1.56%      | $92,662    | 1.90%
Interpretation, Analysis, Tool Use         | 1      | 1.56%      | $99,244    | 2.03%
Workshops, Institutes                      | 12     | 18.75%     | $1,504,327 | 30.78%
Pedagogy                                   | 0      | 0.00%      | $0         | 0.00%

Total number of grants: 64
Total amount of grants: $4,887,905

The data speak pretty clearly: ODH funds go overwhelmingly to the building of tools and archives; secondarily to workshops of various sorts; and almost not at all to projects that are primarily interpretive or pedagogical.
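For anyone who wants to check or extend this tally, here is a minimal sketch of the aggregation behind the table above. It assumes the grants have been exported to a CSV with hypothetical “category” and “amount” columns; the hand-categorization is, of course, the real work, and the categories are mine, not NEH’s.

```python
# A minimal sketch of the tally behind the table above. Assumes a hypothetical
# CSV with columns "category" and "amount"; categorizing each grant by hand is
# the substantive step this code takes as given.
import csv
from collections import defaultdict

def tally_grants(path: str) -> None:
    counts = defaultdict(int)      # grants per category
    dollars = defaultdict(float)   # dollars per category
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["category"]] += 1
            dollars[row["category"]] += float(row["amount"])
    n_total = sum(counts.values())
    d_total = sum(dollars.values())
    for cat in sorted(counts, key=dollars.get, reverse=True):
        print(f"{cat}: {counts[cat]} grants ({counts[cat] / n_total:.2%}), "
              f"${dollars[cat]:,.0f} ({dollars[cat] / d_total:.2%})")

tally_grants("odh_grants_2010.csv")
```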

Now, it’s important to say that ODH is not exactly a static organization of government officials who make decisions and impose them on academic disciplines. My understanding is that the staff of ODH determines the general outlines of grant programs along with setting some of the criteria for ranking grants, and that panels of experts–mostly, but not exclusively, academics, since the prevalence of DHers who are not primarily tenure-track professors suggests that ODH experts would not be restricted to professors–rank the grant proposals. I don’t know who is on those panels; I’ve never been asked, and none of my close friends who engage with the digital but consider interpretation and analysis the primary job of humanities professors has been asked either, though that is a small group. Given the impact of this funding on English Departments, and given the difference between the priorities suggested by the “narrow” definition and the other practices of English professors, I think it is fair to see this disjunction as worrisome. What are the limits on external funding for activities not generally recognized by a field? The worry here is that the number of non-DH English professors–the number of non-narrow-DH English professors–who have authority over this field-transforming funding is extremely low: quite a bit lower, arguably, than the number of us who oversaw the incorporation of a variety of theoretical discourses into literary studies. If what DH did were recognizably very similar to what non-DH professors had been doing, there would be little cause for alarm. But if you want to isolate a single force that has brought critiques like the ones at Dark Side of the Digital Humanities into being, it is this: a field-defining amount of funding has been injected into English, accompanied by claims that English professors do not know or understand the parameters of our own field, and that we are “backward” or “traditionalist” or “conservative” for suggesting that we do.

This is part of why my major concern is not with the existence even of “narrow” DH–more power to it–but with its troubling relationship with English Departments. The claim that “narrow” DH looks much like other forms of English scholarship is hard to maintain–indeed, a great deal of hand-wringing about new tenure standards and so on exists because DH scholarship does not look like other English scholarship. Whitney Trettien, in “So, What’s Up With MLA?,” is right to ask, “if we’re going to agree DH is a discipline, we should start having conversations about its disciplinarity at appropriately disciplinary venues. MLA is not that.” But DHers show no sign of retreating from MLA, despite their certainly having conferences of their own. Matt Kirschenbaum, in “What Is Digital Humanities and What’s It Doing in English Departments?,” spends much more time defining DH and placing its history among the English professors who have practiced it than he does asking the deep question the second part of his title suggests: why should non-DH English professors accept DH as a part of their discipline, when and if it overtly rejects so many of the methods and activities of our discipline?

This is where the double definition of DH seems especially concerning. The double definition licenses a statement, call it the “big tent,” under which “we just do what you already do, so you have to accept us as genuinely part of your project.” But then the narrow definition licenses almost the diametrically opposite sentiment: “what we do is very different from what you do, so you have to change your standards and methods to get on board with us.” Like many other things in the digital world, DH ends up asserting both that it is “completely different from and exactly the same as” other forms of English scholarship.

[Figure: Chart of network closeness over time, Elijah Meeks, https://dhs.stanford.edu/visualization/too-pretty-to-pass-up/]

I can summarize my concerns in the form of some propositions about the effects that the narrow definition and the funding streams available to it have had on the study of English literature and the profession of being an English professor, and about the way that the two definitions of DH function inside and outside DH. These are by no means “laws,” but as rules of thumb they aren’t bad, and they approximate fairly well what I tell and have told PhD students in English:

  1. Many, perhaps even most, DH practitioners endorse the “big tent” definition of Digital Humanities.
  2. To get funding labeled Digital Humanities, you should nevertheless conform to the “narrow” definition.*
  3. To get hired in an English Department to a job labeled Digital Humanities, you should nevertheless conform to the “narrow” definition: you should foreground tool & archive building and background your commitment to interpretation, especially of the digital world.

These propositions suggest further, disturbing ones:

  1. The definition of DH is not up to the majority of its practitioners, but to an influential subset of practitioners, including funding bodies that are formally outside the profession itself.
  2. These practitioners know that the “narrow” definition is unwelcome to many, and so will often in public endorse the “big tent” definition to others, especially to “outsiders,” while continuing to insist on the “narrow” definition in critical field-defining activities like funding and hiring.

Now if we examine the entire range of jobs available to English professors whose primary focus is in some way or other “the digital” (see 2013 Jobs in New Media Studies and Digital Humanities) we observe these propositions in action: there are virtually no jobs listed in English departments in which critical study of the digital, even of digital literature, is foregrounded, let alone in which it appears minus DH (such jobs are now almost exclusively found in Media Studies departments); most digital jobs in English Departments foreground explicitly “narrow” definitions of DH and most explicitly mention funding, by which they clearly mean the kind of unprecedented and explicitly narrow-definition funding offered by ODH, not the general humanities funding available to all humanities professors. Together, the funding and hiring data suggest some further propositions, the last of which is one that I base largely on my own experience and anecdotal observation:

  1. If you want to be hired in an English Department and study digital media in any sense, you would be wise to make a significant investment in “narrow” DH.
  2. If you want to be hired in an English Department and study digital media in any sense, your critical engagement with digital culture will in most cases be at best a secondary area of interest.
  3. In some cases, a significant or primary investment in critical studies of digital culture (very broadly construed) may be directly detrimental to being hired in an English Department, in a way different from the study of other primary source materials, in part because it may be seen as interfering with the pursuit of funding.

Of course, there is a politics at play in this, which I’ll leave aside for now, having probably said enough semi-explosive stuff to get myself in real trouble. But to anyone observing the study of English in 1993 and today in 2013, two facts might appear something less than accidental: the fact that the only new movement anyone is talking about in English Departments is one in which critique, politics, interpretation, analysis, and close reading at best play second fiddle–where people seriously say “more hack, less yack” as if those were watchwords for the discursive humanities, and declare that building a database is so inherently theoretical that no additional theorizing or contextualizing of it is necessary–and the fact that it is virtually impossible to get hired primarily as a critical scholar of the digital in English Departments, when in every other topic area critical scholars are all we are.

It is notable that when (narrow) DH speaks about the discipline of English, it has very little good to say. DH thinks the research methodologies, pedagogical approaches, career paths, and promotion and tenure requirements of literary studies as currently construed all need to change radically. It often asserts these necessities in a voice of technological inevitability and authority–anyone, it says, who understands “the technology” will see the inevitability of these changes, and thus anyone who does not see these changes as necessary inherently does not understand the technology. I am heartened by the insistence on the part of so many young DHers that the narrow version of DH, especially as a construal of literary studies, is unacceptable. What remains is for us to reflect this back onto the scholars who have been convinced to see themselves as “non-digital,” and who accept the story of technological inevitability as if we English professors did not have the authority and even the responsibility to determine the constitution of the discipline. As a new crop of students (and some professors) who do understand the technology come forward, but who maintain that the changes to the study of literature necessitated by the digital are much less thoroughgoing than narrow DH often tells us, we are seeing signs that literary scholars are learning how much is at stake in ceding authority over our own field to those who reject both that authority and the procedures of the field itself. One can only hope that at some point in the near future, professors of literature will again exert enough authority over our field to choose the funding and hiring protocols we see as significant, rather than following the lead of a small group of people (many themselves avowedly not English professors) who tell us that we don’t understand how to practice our own discipline.

*UPDATE 4/26/13: Please see my next post, “Definitions That May Matter Less (For NEH-ODH Grants),” for an important update from the NEH about its commitment to funding a broader range of projects.

Notes

I appreciate helpful feedback on earlier versions of this essay from an anonymous reviewer, Tara McPherson, and the participants in her Spring 2013 USC course CTCS 677: The Digital Humanities and Digital Media Studies.

Works Cited

  • Patrick Juola, “Killer Applications in Digital Humanities,” Literary and Linguistic Computing 23:1 (2008), 73-83.
Posted in digital humanities, information doesn't want to be free, materality of computation, rhetoric of computation, theory | 1 Response

Completely Different and Exactly the Same

I was flattered to see Nicholas Carr picking up on a blog entry I wrote about the Cartesian dualism underlying most thinking about the Singularity. I was equally pleased to read this comment on Carr’s post from CS Clark, who is otherwise unknown to me:

I’m reminded that many tech/law debates depend on the new tech being completely different from old tech right up till the difference is a problem in which case the new tech is now exactly the same as the old tech. Ebooks are great because you can make infinite copies of them, and they’re also great because you must be able to share them with your friends in exactly the same way you can share a physical book. Google has changed nothing so let’s listen to them/Google has changed everything so don’t worry. Talk about syzygy. Talk about doublethink.

This happens all the time. Consider this exchange:

I think the first comment is wrong, because it isn’t just Google Glass or Google itself at all–it’s the widespread distribution of camera technologies and, most critically, the technologies that allow the uploading and archiving of these images in publicly-available fora that make this so disturbing. It seems to me that if we have rights to privacy at all, even if the government has a right to observe us in public spaces, we have the right not to have our images displayed without our consent by third parties in public fora (Flickr, Instagram, Facebook) about which we may or may not have any knowledge. This is a profound and deeply troubling issue, and it is especially pointed when we consider the rights of minors: what in the world gives Sergey Brin, or anyone else, the right to videotape or photograph a child without their parents’ permission and to put those images online? I can’t actually come up with a good legal basis for asserting that right in such a form that it trumps the right to privacy (and what some European countries refer to as the “right of publicity“).

But to return to my main point: Jarvis’s response is exactly the one mentioned by CS Clark, very oddly and tellingly used to describe a technology that deserves the adjective “new” if anything does. Yes, Google Glass is in part a camera. But it’s not the kind of camera that requires you to sit for two hours to get an exposure, or the kind whose film needs professional developing, or the kind that makes one print that develops in your hand, or even the kind that other people notice when you use it to take their picture. It’s a new kind of camera, built in part out of old bits of camera technology. Yet to Jarvis it’s “fear-mongering” to consider these new features as new, even as he and others engage in ecstatic reverie over what this new stuff enables.

This deserves expansion and further reflection. The logic is evident everywhere today, and it is mostly aligned with power. It connects to the “Borg Complex” described by L. M. Sacasas, to certain debates in Digital Humanities, to the advent of MOOCs, and to much else. So far as I am aware, it has not yet been discussed as such in the critical literature. It deserves more attention. It deserves a name. Ideas? I’m working on it.

Posted in "social media", cyberlibertarianism, google, materality of computation, privacy, rhetoric of computation, surveillance | Tagged , , , , , , , , , , , | 5 Responses

Building and (Not) Using Tools in Digital Humanities

As I mentioned in my last post, the “Short Guide to Digital Humanities” (pages 121-136 of Digital_Humanities, by Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp, MIT Press, 2012) includes the following stricture under the heading “What Isn’t the Digital Humanities?”:

The mere use of digital tools for the purpose of humanistic research and communication does not qualify as Digital Humanities.

I’m not going to speculate on the reasons that these authors make this declaration, or on why they feel they (or anyone else) should be authorized to decide what kinds of activities do and do not “qualify” as parts of the field.

Here I want only to reflect on the potential damage done to the field by adhering to this restriction.

First, I think it raises questions about the credibility of the field. Among the strongest justifications for the existence of DH is its formal resemblance to established sub-disciplines in which computers are used to address academic subjects whose existence predates the age of computerization. The field of this sort to which I am closest and about which I know the most is computational linguistics, really a name for a group of diverse sub-disciplines ranging from machine translation to natural language processing to corpus linguistics. While there are some exceptions among some of the subfields, it is almost entirely the case that these fields do not distinguish between “building” tools and “using” tools when it comes to the evaluation or promulgation of scholarship. On the contrary, what is valued in all these fields is interesting results, results that can be widely used and shared among the community of computational linguists, especially results that expand our understanding of human language. There are a dozen or two world-famous computational linguists, all of whom use tools in any number of ways in their research. I know of no particular attention or approbation paid to them because they did or did not build the tools they use. I certainly know of no internal or external strictures dictating that the “building” part of their work is computational linguistics, while the use of those tools is not. Not only that: all of these linguists (just a few examples are Douglas Biber, John Goldsmith, Dan Jurafsky, Mark Davies, and Christopher Manning, all of them known for a wide range of scholarly activities), despite their overt and declared interests in exploiting computers for the analysis of language, write substantial analyses of their work for others to read. All of this work “counts” as computational linguistics. I know of no stricture within the field that says you must be dedicated to building things if you want to be a part of it, or that building things must constitute your primary investment in the field: on the contrary, you must be dedicated to computational analysis of languages that produces interesting results, just as the literal meaning of the phrase “computational linguistics” suggests. To be clear: that can and does entail building tools, if you want to do so; it just entails building as part of a suite of activities, which also includes using tools and generating analyses that compel the attention of other linguists. I think DH should and could be even more like computational linguistics (and more directly allied with it, as well), and the suggestion that only the building part “counts” moves in the contrary direction.

There is more to say about this analogy, because there certainly are some researchers who prefer to focus their work almost exclusively on building computational tools. For the most part, those folks live in another discipline entirely, usually computer science or electrical engineering. There are close and productive ties between computer science and computational linguistics. There are some similar ties between digital humanities and computer science, but I think there could be more. But the more of these there are, the less one might expect the humanist members of teams to be the right ones to do the heavy parts of building, since those are typically the skills that are treated in exquisite detail in computer science. So what? What I want are interesting results, period. I want them in DH the same way I want them in any other humanities specialization.

It’s worth reflecting on this just a bit more, because it speaks to a problem we hear a lot about especially in English Departments with regard to evaluation of DH scholars. Computational linguists often work in teams, just like DH teams. In these teams, the distribution of tasks varies. I don’t know of particular scrutiny being paid to who does or does not do the actual building on those teams–the interest is in the results, and in having made a significant contribution to them in any fashion, and being able to articulate that contribution.

Now if we reflect on significant DH projects, can we actually say with certainty that the significant figures in DH actually are builders, and if so, in what respect? One of the figures whose work is most often and rightly pointed to as indicative of the potential of DH is Franco Moretti. As far as I know, the work for which he is most famous, on historical trends in the development of the novel in Europe, can best be described as using tools and interpreting the results of that tool-use; Moretti famously expressed very little interest in the tools used. (In his more recent Stanford Literary Lab experiments there is more discussion of the tools, but not much emphasis on Moretti’s direct involvement in building them.) Curiously, the Digital_Humanities stricture would pretty much rule out Moretti’s work as DH, which strikes me as seriously counterproductive (and very hard to explain to outsiders). Moretti is by no means the only relatively well-known DHer whose direct, practical engagement with building–especially with actual coding–is somewhere between “unclear” and “nonexistent.” Only if you care more about boundaries than results would you want to try to distinguish whether these people “are” or “are not” DH, rather than looking at the results they produce.

Beyond the credibility issue, I think we already see the distorting effects that promoting the building of tools and demoting their use can have. One is to generate a series of what are effectively prototypes that never move beyond that phase; another is a lack of focus on the differences between prototype work and actual live, supported software; related to that is a lack of attention to the process that takes designs from prototype to live use, so that too many prototypes simply get built and then pass out of awareness. Nancy Maron, Jason Yun, and Sarah Pickle, the authors of an important recent report about digital projects in the UK, Sustaining Our Digital Future (Strategic Content Alliance, 2013), write at length about these problems:

For well over a decade, significant investment in creating digital resources has been spurred by government agencies and funders, as well as by private philanthropists. Even today, developing sustainability plans remains a challenge for many of these projects. While most will agree that, at the very least, early efforts to create digital content were valuable for increasing the capability and experience of those who engaged in them, some of these earlier projects have been criticised for not being “future-proofed” and indeed, not all are easily available today; a few may be entirely inaccessible and even those that do exist have lost value as their content and interfaces remain frozen in time or worse. A review of UK Digitisation Projects, funded by the New Opportunities Fund (NOF) Programme from 1999-2004, evaluated 154 grants and found that as of August 2009, 25 or 16% were found to have “no known URL or URL not available” and for 82 or 53%, it was noted that while the website exists, it “seems not to have changed since the launch”. The LAIRAH Project: Log Analysis of Digital Resources in the Arts and Humanities, a study that sought, among other things, to “determine the scale of use and neglect of digital resources in the humanities” reviewed usage logs of the 1255 projects in the Arts and Humanities Data Service, finding that “most of the projects that we studied are finished, [but] very few are being actively updated.” (11)

Such an unfortunate situation has many explanations, but it is clear that the demotion of using tools, both in the discipline of DH and in its funding streams, cannot be helping matters.
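The scale of the link-rot problem the report describes is at least easy to measure, if not to fix. Here is a minimal sketch of the kind of URL survey those reviews performed, assuming a hypothetical plain-text file with one project URL per line; the real surveys involved considerably more hand-checking than this.

```python
# A minimal sketch of a link-rot survey like the NOF/LAIRAH reviews quoted
# above: check which project URLs still resolve. Assumes a hypothetical
# plain-text file "project_urls.txt" with one URL per line; a rough check
# only, since some servers reject HEAD requests or redirect silently.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check(url: str) -> str:
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "link-check"})
        with urlopen(req, timeout=10) as resp:
            return f"OK ({resp.status})"
    except HTTPError as e:      # server responded, but with an error code
        return f"HTTP error {e.code}"
    except URLError as e:       # no response at all (DNS failure, timeout, etc.)
        return f"unreachable ({e.reason})"

with open("project_urls.txt", encoding="utf-8") as f:
    for url in (line.strip() for line in f if line.strip()):
        print(url, "->", check(url))
```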

[Figure: Force-Directed Graph of Topic Correlation Network Layout, 1800-1849, from Michael Simeone, “Visualizing Topic Models with Force-Directed Graphs,” generated with the tool at http://isda.ncsa.illinois.edu/~mpsimeon/topics/FDE/indexvml.html]

One troubling dynamic I’ve seen over the decade-plus history of digital humanities, with its frequent and often overt emphasis on building above most other forms of scholarly activity, is to some extent the converse of the prototype problem: when tools or methods prove useful and powerful, and therefore become widely distributed and used by a range of scholars who may or may not see themselves explicitly as “part” of DH, that very usefulness and ubiquity ends up disqualifying them as part of DH. An old example: in the early days of the web, writing plain old HTML was considered enough of “building” that many projects were said to qualify as DH simply by dint of using HTML. Then, when blogs started to become popular but blogging software had not yet been packaged up enough to make it easy for anyone to use, blogging was seen as part of DH and assembling a blog was seen as “building.” Then HTML skills became more widespread, building blogs became easy, and of course many DHers use blogs, but having a blog and writing HTML are no longer considered good ways to qualify a project as DH.

Now let’s take a more pointed and more current example. One of the latest technologies being promulgated through many quarters in DH is topic modeling. We’ve already seen a number of truly insightful, analytical projects using topic modeling to draw interesting conclusions about various bodies of text. Just a few of the several recent projects that have gotten attention include: Andrew Goldstone and Ted Underwood, “What Can Topic Models of PMLA Teach Us About the History of Literary Scholarship?“; Jonathan Goodwin, “Topic Modeling Signs” and “Creating Topic Models with JSTOR’s Data for Research (DfR)“; Ted Underwood, “Topic Modeling Made Just Simple Enough” and “Visualizing Topic Models“; and Ben Schmidt, “Keeping the Words in Topic Models.”

I hope and trust that there are few people who would demur from the view that all of these are excellent, worthwhile, even exemplary instances of what DH can do. They tell us important things about the shape of both literary production and critical production, depending on the corpora to which they have been applied. As exemplars, it’s important to note what these projects have in common: building tools, using tools, and writing up the results. I would hope that some of those results will eventually be published in peer-reviewed journals or edited collections, but taken as a whole it is hard to imagine these not being the sorts of contributions that English and History departments would consider meaningful for promotion and tenure files.

So I think that topic modeling is here to stay, and there are reasons to suspect that major data providers–for example, JSTOR, which provides some of the data (via its Data for Research interface, described at some length in Goodwin’s “Creating Topic Models with JSTOR’s Data for Research (DfR)“) used in the analyses of critical writing–think so too. But topic modeling is nothing more than a set of algorithmic processes. If those processes are valuable, I would expect to see JSTOR, EEBO, and many others incorporate topic modeling tools into their interfaces: in fact, I presume they see the work of Profs. Underwood, Goodwin, Goldstone, Schmidt, and others as in some sense building prototypes for future applications. That should make it possible for a wide range of scholars to use the tools to draw all kinds of interesting inferences about a wide range of subject matters. But if we follow the Digital_Humanities stricture, that work would no longer count as digital humanities. Yet continuing to see it as DH would encourage even more scholars to utilize it and generate interesting results with it. The deployment of topic modeling tools by data providers would also help to address the sustainability issue, since these major data providers are already in the business of supporting and maintaining such tools and the data on which to do research with them.
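Since topic modeling really is nothing more than a set of algorithmic processes, it is worth seeing how small its core can look. The following toy sketch uses scikit-learn’s LDA implementation on four invented “documents”; the projects cited above typically use MALLET or gensim on corpora of thousands of texts, and nothing here reproduces any particular scholar’s pipeline.

```python
# A toy sketch of topic modeling: LDA over four invented "documents".
# Illustrates the shape of the computation only; real projects use far
# larger corpora and tools such as MALLET or gensim.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the whale ship sailed the winter sea",
    "the ship and the sea in winter",
    "parliament debated the reform bill",
    "the bill passed after long debate in parliament",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)   # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# print the four highest-weighted terms in each inferred topic
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```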

I don’t deny that there will probably remain new topic modeling tools to build. What I am hoping to point out is that the very usefulness of topic modeling suggests it will become part of the scholar’s toolkit, and that if we then arbitrarily deem that success to mean it is no longer part of our research enterprise, we are cutting off our nose to spite our face. Wide adoption and use is success, and interesting results produced with digital tools deserve to be called digital humanities.

Next: a follow-up on exclusionary definitions of DH

Posted in digital humanities, materality of computation, rhetoric of computation | 3 Responses