
The Economist and the librarian-economist on the Google settlement

September 7th, 2009

The current issue of The Economist has a leader supporting the Google settlement and an article in the business section that quotes me in the course of discussing the issue. I am described, with my enthusiastic consent, as running an orphanage. The more I think of it, the better the orphan metaphor works. Orphan works are orphans of a particular type — foundlings. They are not orphaned by a premature loss of their parents. They are left on the doorstep, taken in (by the library, of course, in the role of the tough but kind orphanage staff), nurtured and kept for as long as care is needed. They may have parents out there and they may not; no one knows. And now there is some hope that they will be invited to the dance, and we shall see how the story plays out.

The Economist interviewed me about the settlement at some length, and made a podcast that I quite like. It recapitulates fairly painlessly (it’s 13 minutes) some of the things that I’ve been saying about the Google lawsuit and settlement for some time.

And, for something completely different and arguably more important, Paul Krugman has a superb piece entitled “How Did Economists Get It So Wrong?” in the New York Times Magazine of September 6. What’s remarkable is how economists got it so wrong 70 years after Keynes got it so right. Anyhow, this is a testimonial for Krugman’s piece from an admiring economist.

Orphan Works Legislation and the Google Settlement

March 15th, 2009

I spent Friday at a fascinating conference at the Columbia University Law School, on the subject of (what else?) the Google settlement. Lead counsel from all three parties, lots of other lawyers, several principals, publishers, authors and librarians were there.

I learned something important that at some level I already knew.

The most important single thing about the Google settlement, simultaneously its greatest achievement and among its most vexing features, is the treatment of orphaned works (in James Grimmelmann’s witticism, “zombie” works). The problem, as we all know, is that there are millions – no one quite knows how many – of works that may or may not be in copyright and for which the rightsholder(s) may or may not exist and may or may not be aware of their rights. Our ability to use these works is thus much compromised: we run the risk that a copyright holder will appear and claim damages. As we all know, Congress’s efforts to make it easier and safer to use orphaned works have failed. Moreover, the most recent draft legislation would have imposed difficult and costly burdens on would-be users by requiring them to make substantial efforts to find any potential but unknown rightsholder.

Along comes the Google settlement, which solves at least part of the problem, for Google and the Book Rights Registry, at one fell swoop. (Only part of the problem, because works that were not registered with the copyright office will likely not be in the settlement and yet may be just as orphaned as those that are registered.) Under the settlement, revenues generated by orphaned works will be held in escrow for five years, allowing time for a rightsholder to come forward. It’s a moving window; if the rightsholder comes forward in year 22, she gets revenues from year 17 on. Thus the products that Google sells to individuals and institutions can include, among other works, millions of orphans (zombies). Without the orphans, the great public benefit of the settlement – the ability to find and use much of the literature of the 20th century in digital form – would be much diminished.
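For concreteness, here is a minimal sketch of that moving window as I understand it. The function name, the five-year constant, and the structure are my own illustration of the rule just described, not language from the settlement itself.

```python
# A toy illustration of the moving five-year escrow window described above.
# The only rule encoded: a rightsholder who comes forward in year N can
# claim escrowed revenues from year N - 5 onward. Names and structure are
# mine, not the settlement's.

ESCROW_YEARS = 5

def claimable_years(claim_year, first_revenue_year=1):
    """Years whose escrowed revenues a newly surfaced rightsholder can claim."""
    start = max(first_revenue_year, claim_year - ESCROW_YEARS)
    return list(range(start, claim_year + 1))

# The example from the paragraph above: coming forward in year 22
# recovers revenues from year 17 on.
print(claimable_years(22))  # [17, 18, 19, 20, 21, 22]
```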

At the same time, the disposition of the revenues attributed to orphaned works is one of my least favorite parts of the settlement. The unclaimed revenues go first to support the operations of the BRR, and whatever remains will be used for charitable purposes consistent with the interests of publishers and authors. As the head of a library that has lovingly cared for these works for decades, I find it sticks a bit in my craw that the fruits of our labors (and those of many others in many libraries) should redound to the benefit of entities that did not write, publish, or curate these works. So I hope that authors, publishers, the court, and the public will be vigilant in making sure the BRR does not squander the unclaimed revenues on mismanagement, high salaries, and the like. The “charitable purposes” should be an objective, not a remainder for unclaimed funds.

The settlement also gives Google and the BRR, and no one else, the right to use the orphaned works in this way. A number of commentators have noted problems that may arise from Google’s privileged position in this regard. But there is an obvious solution, one that was endorsed at the Columbia meeting by counsel for the Authors Guild, the AAP, and Google: Congress could pass a law giving anyone access to the same sort of scheme that Google and the BRR have under the settlement. And it could pass another law making it possible for people to use orphaned works responsibly, while preserving the interests of the missing “parents” should they materialize. Jack Bernard and Susan Kornfield have proposed just such an architecture to “foster” these orphans. Google has also made a proposal that would be a huge improvement.

Given that the parties to the suit, libraries, and the public would all benefit from such legislation, it should be a societal imperative to pass it. I look forward to AAP, the Authors Guild, and Google lobbying and testifying in favor of such legislation. I’d be happy to be there, too.

The Stimulus Package (and now for something completely different)

February 7th, 2009

Suppose that there were a major fire, and that in order to put out the fire you would need, say, a trillion gallons of water. Can you imagine a city council that would say, “Oh no, we can only afford 734 billion gallons of water, so let’s leave out about a quarter of the neighborhoods. It’s the right thing to do because we won’t go into debt, and future residents will be better off for having had a quarter of the city burn down”?

Or, for a better analogy, suppose that your ship is sinking through a hole that is 10 feet in diameter. How about saving on repair costs by inserting a plug that covers only 75 percent of the leak? Sound like a good plan? Not so much.

The reason that we need fiscal stimulus is that monetary policy is impotent to provide sufficient stimulus (not generally true, but true now, and essentially no one disagrees with this view). With an unemployment rate of 7.6 percent, the economy is well below its potential level of output — we are about five percent below potential GDP, and the situation is getting worse by the hour. The current cost of putting unemployed resources to work in this setting is very low, because the alternative is that those resources will not be used at all. Deficit-financed spending, public and private, can create current income and will reduce unemployment and the risk of future unemployment. Some of the income generated by the stimulus, and some of the stimulus itself, will go into investment, and hence lead to increases in future income. The income gains are valuable in themselves, and will offset a good deal of the taxes required to service the debt. This analysis would be completely different if the economy were somewhere near full employment. In that case the new spending, both private and public, would substitute for other activity, and the increase in the deficit would reduce investment and growth. (To go back to the sinking ship analogy, patching a leak where there is no leak is simply a waste of resources.)

Everything that I have said above is oversimplified, of course, but the public discussion of the size and shape of the stimulus package seems to be missing the point. The point isn’t to have the cheapest stimulus package possible; the point is to align the size and timing of the package with the size of the problem. The most immediate and effective form of stimulus is to support state governments, because their revenues are falling and they will be forced, by their own constitutions, to reduce spending and lay off workers. So the immediate stimulus effect of a dollar of support for state spending is a dollar, growing to about two dollars once the effects percolate through the economy. (Note that what is really going on is the avoidance of a dollar’s spending reduction, growing to two dollars, at the worst imaginable time.) In this context, Congress gets all sanctimonious about waste in government. Hallelujah!
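To put rough numbers on “the size of the problem,” here is a hedged back-of-envelope sketch. The five percent output gap and the multiplier of about two come from the discussion above; the dollar level of GDP is a round-number assumption of mine, for illustration only.

```python
# Back-of-envelope stimulus sizing, using the numbers in the post:
# an output gap of roughly 5 percent of potential GDP, and a multiplier
# of about 2 for support to state governments. The GDP level is an
# assumed round number, not a figure from the post.

gdp = 14.0e12        # assumed: roughly the size of US GDP in 2009, in dollars
gap_share = 0.05     # "about five percent below potential GDP"
multiplier = 2.0     # a dollar of state support grows to about two dollars

output_gap = gap_share * gdp               # annual shortfall to be made up
spending_needed = output_gap / multiplier  # stimulus required to close it

print(f"Output gap: ${output_gap / 1e9:,.0f} billion per year")
print(f"Spending to close it at a multiplier of 2: ${spending_needed / 1e9:,.0f} billion")
```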

Paul Krugman’s recent columns and blogs on this subject have been excellent, by the way.  I commend them to the world of libraries.

And one more thing.  If we happen to make a mistake and overstimulate the economy, monetary policy will be perfectly effective in reining things in.  One way of characterizing the goal of fiscal policy in the current crisis is to restore the economy to a place where monetary policy can work.  The task is urgent.

Google, Robert Darnton, and the Digital Republic of Letters

February 4th, 2009

Robert Darnton recently published an essay in the New York Review of Books on the Google settlement. There has been much commentary in blogs, listserves, and print media. Below I reproduce a letter that I sent to the New York Review of Books, which they found far too long to publish. It is my understanding that they expect to publish a much-shortened revision. In any case, here’s what I had to say.

—–

To the editors:

My colleague and friend Robert Darnton is a marvelous historian and an elegant writer. His utopian vision of a digital infrastructure for a new Republic of Letters (“Google & the Future of Books,” NYRB, February 12) makes the spirit soar. But his idea that there was any possibility that Congress and the Library of Congress might have implemented that vision in the 1990s is a utopian fantasy. At the same time, his view of the world that will likely emerge as a result of Google’s scanning of copyrighted works is a dystopian fantasy.

The Congress that Darnton imagines providing both money and changes in law that would have made out-of-print but in-copyright works (the great majority of print works published in the 20th century) digitally available on reasonable terms showed no interest in doing anything of the kind. Rather, it passed the Digital Millennium Copyright Act and the Sonny Bono Copyright Term Extension Act. (More recently, Congress passed the Higher Education Opportunity Act, which compels academic institutions to police the electronic environment for copyright infringement.) This record is unsurprising; the committees that write copyright law are dominated by representatives who are beholden to Hollywood and other rights holders. Their idea of the Republic of Letters is one in which everyone who ever reads, listens, or views pretty much anything should pay to do so, every time.

The Supreme Court, given the opportunity in Eldred v. Ashcroft (decided in 2003) to limit the extension of a copyright term that was already far too long (like Darnton, I think that 14 years, renewable once, is more than enough to achieve the purposes of copyright), refused to do so, with only two dissenters. Instead, it upheld legislation that, contrary to the fundamental principles of copyright, provided rewards to authors who are long dead, preventing our cultural heritage from rising into the public domain.

In short, over the last decade and more, public policy has been consistently worse than useless in helping to make most of the works of the 20th century searchable and usable in digital form. This is the alternative against which we should evaluate Google Book Search and Google’s settlement with publishers and authors.

First, we should remember that until Google announced in 2004 that it was going to digitize the collections of a number of the world’s largest academic libraries, absolutely no one had a plan for mass digitization at the requisite scale. Well-endowed libraries, including Harvard and the University of Michigan, had embarked on digitization efforts at rates of less than ten thousand volumes per year. Google completely shifted the discussion to tens of thousands of volumes per week, with the result that overnight the impossible goal of digitizing (almost) everything became possible. We tend to think now that mass digitization is easy. Less than five years ago we thought it was impossibly expensive.

The heart of Darnton’s dystopian fantasy about the Google settlement follows directly from his view that “Google will enjoy what can only be called a monopoly … of access to information.” But Google doesn’t have anything like a monopoly over access to information in general, nor over the information in the books that are subject to the terms of the settlement. For a start (and of stunning public benefit in itself), up to 20% of the content of the books will be openly readable by anybody with an Internet connection, and all of the content will be indexed and searchable. Moreover, Google is required to provide the familiar “find it in a library” link for all books offered in the commercial product. That is, if after reading 20 percent of a book a user wants more and finds the price of on-line access to be too high, the reader will be shown a list of libraries that have the book, and can go to one of those libraries or employ inter-library loan. This greatly weakens the market power of Google’s product. Indeed, it is much better than the current state of affairs, in which users of Google Book Search can read only snippets, not 20% of a book, when deciding whether what they’ve found is what they seek.

Darnton is also concerned that Google will employ the rapacious pricing strategies used by many publishers of current scientific literature, to the great cost of academic libraries, their universities, and, at least as important, potential users who are simply without access. But the market characteristics of current articles in science and technology are fundamentally different from those of the vast corpus of out-of-print literature that is held in university libraries and that will constitute the bulk of the works that Google will sell for the rights holders under the settlement agreement. The production of current scholarship in the sciences requires reliable and immediate access to the current literature. One cannot publish, nor get grants, without such access. The publishers know it, and they price accordingly. In particular, the prices of individual articles are very high, supporting the outrageously expensive site licenses that are paid by universities. In contrast, because there are many ways of getting access to most of the books that Google will sell under the settlement, the consumer price will almost surely be fairly low, which will in turn lead to low prices for the site licenses. Again, “find it in a library,” coupled with extensive free preview, could not be more different from the business practices employed by many publishers of scientific, technical and medical journals.

There is another reason to believe that prices will not be “unfair”: Google is far more interested in getting people to “google” pretty much everything than it is in making money through direct sales. The way to get people to come to the literature through Google is to make it easy and rewarding to do so. For works in the public domain, Google already provides free access and will continue to do so. For works in the settlement, a well-designed interface, 20 percent preview, and reasonable prices are all likely to be part of the package. Additionally, libraries that don’t subscribe to the product will have a free public terminal accessible to their users. This increases the public good deriving from the settlement both directly and by providing yet another distribution channel that does not require payment to Google or the rightsholders.

The settlement is far from perfect. The American practice of making public policy by private lawsuit is very far from perfect. But in the absence of the settlement – even if Google had prevailed against the suits by the publishers and authors – we would not have the digitized infrastructure to support the 21st century Republic of Letters. We would have indexes and snippets and no way to read any substantial amount of any of the millions of works at stake on line. The settlement gives us free preview of an enormous amount of content, and the promise of easy access to the rest, thereby greatly advancing the public good.

Of course I would prefer the universal library, but I am pretty happy about the universal bookstore. After all, bookstores are fine places to read books, and then to decide whether to buy them or go to the library to read some more.

Paul N. Courant

Note: This letter represents my personal views and not those of the University of Michigan, nor any of its libraries or departments.

The Google Settlement – From the Universal Library to the Universal Bookstore

October 28th, 2008

If you think about it, a universal bookstore is a pretty cool idea. Bookstores are wonderful things. Anyone can walk into a bookstore, take a book off a shelf, read in it, decide whether to buy it or forget about it, or get it from the library. The settlement announced today by Google, the Association of American Publishers, and the Authors Guild will in time make it possible for millions of books, currently out of print and in-copyright, to be perused, searched and purchased (or not) in an electronic bookstore that will be operated by Google.

The books will come from a number of academic libraries, including the University of Michigan, the University of California, and Stanford University, which have been participants in Google Book Search from the beginning. These three worked with Google during the settlement negotiations in an effort to shape the settlement to serve the interests of research libraries and the public, as discussed in a joint press release.

The settlement is complicated, and as people work through it I expect a lively set of discussions and I invite comment on this blog and elsewhere. I’d like to start with what I see as a couple of key points.

First, and foremost, the settlement continues to allow the libraries to retain control of digital copies of works that Google has scanned in connection with the digitization projects. We continue to be responsible for our own collections. Moreover, we will be able to make research uses of our own collections. The huge investments that universities have made in their libraries over a century and more will continue to benefit those universities and the academy more broadly.

Second, the settlement provides a mechanism that will make these collections widely available. Many, including me, would have been delighted if the outcome of the lawsuit had been a ringing affirmation of the fair use rights that Google had asserted as a defense. (My inexpert opinion is that Google’s position would and should have prevailed.) But even a win for Google would have left the libraries unable to have full use of their digitized collections of in-copyright materials on behalf of their own campuses or the broader public. We would have been able, perhaps, to show snippets, as Google has been doing, but it would have been a plain violation of copyright law to allow our users full access to the digitized texts. Making the digitized collections broadly usable would have required negotiations with rightsholders, in some cases book by book, and publisher by publisher. I’m confident that we would have gotten there in time, serving the interests of all parties. But “in time” would surely have been many years, and the clock would have started only at the end of a lawsuit that had many years left to run. Moreover, each library would have had to negotiate use rights to its own collection, still leaving us a long way from a collection of digitized collections that we could all share.

The settlement cuts through this morass. As the product develops, academic libraries will be able to license not only their own digitized works but everyone else’s. Michigan’s faculty and students will be able to read Stanford and California’s digitized books, as well as Michigan’s own. I never doubted that we were going to have to pay rightsholders in order to have reading access to digitized copies of works that are in-copyright. Under the settlement, academic libraries will pay, but will do so without having to bear large and repeated transaction costs. (Of course, saving on transaction costs won’t be of much value if the basic price is too high, but I expect that the prices will be reasonable, both because there is helpful language in the settlement and because of my reading of the relevant markets.)

The settlement is not perfect, of course. It is reminiscent, however, of the original promise of the Google Book project: what once looked impossible or impossibly distant now looks possible in a relatively short period of time. Faculty, students, and other readers will be able to browse the collections of the world’s great libraries from their desks and from their breakfast tables. That’s pretty cool.

“Less than perfect” is not always bad

October 21st, 2008

In a recent paper prepared for the Boston Library Consortium, Richard Johnson decries the fact that some mass digitization arrangements between libraries and corporations have been “less than perfect.”

The choices that we face are indeed less than perfect. We can choose purity and perfection, and not permit any restrictions on the use of scans of public domain material, with the result that the rate of scanning and consequent display will be pitifully slow. Or we can permit corporate entities, including the dreaded Google, to scan our works, enabling millions of public domain works to be made available to readers on line, at no cost to the readers, in a relatively short period of time. I am on record by word and deed as preferring the second choice.

In his paper, Johnson notes that the original works are retained by the libraries and could be scanned again. He fails to note that libraries whose PD works are scanned by Google get to keep a copy of the scans and are free to display them on line, independent of Google Book Search. Over 300,000 public domain works can be found in the University of Michigan catalog and read on line. The number grows by thousands per week. Of course I would prefer it if the digital files could be used without restriction. Would someone please tell me the name of the entity that stands ready to digitize our collections, for free, without restriction on the use of the digital files? In the meantime, it seems to me that making the books available to readers online makes for a better world, albeit, sadly, not a perfect one.

And, this just in: an article by Kalev Leetaru in First Monday compares Google Book Search and the Open Content Alliance, and finds much that is good, and much that is less than perfect, in both.

On the Meaning and Importance of Peer Review

October 12th, 2008

In my previous post I briefly discussed peer review, which has been raised by many in the publishing industry as a justification for opposing the NIH mandate for deposit of articles into PubMed Central, and, more broadly, as a justification for the vigorous protection of publisher-held copyright in scholarly publications. In this post I discuss the role(s) of peer review in the academy more generally.

Broadly, peer review is the set of mechanisms that enable scholars to have reliable access to the informed opinions of other scholars, in a way that allows those informed opinions themselves to be subject to similar vetting.

Scholarship requires reliable and robust peer review, and the academy engages in peer review in a variety of ways, both direct and indirect. Peer-reviewed publication is one method, and a fairly powerful one at that. If you read a paper in (for my field) Econometrica or the Journal of Political Economy, you are reasonably confident that accomplished scholars in the field have made a judgment that the paper is of high technical quality and worth reading, and that experienced scholars have made a judgment that the paper is of interest beyond its narrow subfield. Those are valuable pieces of news as one is looking for a way to spend some time, and they also tell you something about the likely quality and accessibility of papers outside of one’s specialty, should one be branching out or needing some background information or trying to figure out whom to consider for an open position in the department.

Similarly, the appearance of an article in a leading specialized journal, or of a monograph in a prestigious series published by a scholarly press, conveys valuable information (at least to the cognoscenti in the field) about the quality of the book or paper.

The peers who undertake the reviews are genuine peers. They are scholars whose judgment is trusted by experienced members of editorial boards, who are themselves generally senior scholars in the relevant field(s). Such people engage in peer review pretty much all the time. They go to seminars and talks, read draft manuscripts from students and colleagues near and far, review grant proposals, engage in workshops, and vet tenure and promotion files. In short, the peers doing the reviews are active scholars engaged in active scholarship. (Sometimes they even spend some time writing their own stuff.) They could no more NOT provide “peer review” than they could give up reading and writing. Peer review is part and parcel of what serious scholars do.

I’d guess (and I would love to see a serious study) that the fraction of time that scholars spend engaged in formal peer review of publications – journal articles and monographs – is less than half of the time they spend on peer review in total. Moreover, the work that has traditionally been done under the aegis of publishers is increasingly being done in other settings. In fields where it is customary to post working papers on the web, interesting papers generate a good deal of peer review in the form of commentary from peers. Given that it takes essentially no time to move from word-processor to web posting, and that it often takes years to get from submission to a journal or scholarly press to formal publication, it’s not surprising that informal peer review is becoming more common. This is good news. Scholarship advances more rapidly if work can be widely shared relatively quickly and easily. Given that publication in the literal sense (making public) is now easy and cheap in the technical sense, it seems almost certain that informal review will grow relative to formal review.

For several years, I was the chief academic officer of the University of Michigan, and I have been involved in the review of tenure cases, grant proposals, journal articles and book manuscripts for more than 30 years. The most interesting and important of these activities are reviews associated with tenure and hiring. It is often argued (quite explicitly so by some) that without the reviews associated with publishing, the academy would be at a loss in making judgments about the quality and productivity of scholars. To be sure, for reasons adduced above, a record of publication in strong peer-reviewed settings conveys valuable information to tenure and search committees, chairs, deans, and provosts. But the fact of the matter is that we pay equal attention to other reviews, including (for some fields) those required to obtain research grants, and (for some fields) post-publication reviews that appear in journals and other venues. We also take very seriously the opinions of ad hoc reviewers, inside and outside of our institutions, who prepare and evaluate the case for promotion and hiring. Take away the information conveyed by publication venue, and these tasks become more difficult, to be sure, but by no means impossible. And the essential part – close reading of the work by peer reviewers – remains intact.

Just as it pays for almost all of the content that goes into scholarly publication, so too does the academy – colleges, universities, research centers, and the entities that fund them – pay almost all of the costs of peer review.

Publishers provide many useful services, but they do not provide peer review. It is the peers themselves who do that essential work.

The Fair Copyright in Research Works Act is a lot of things, but fair ain’t one of them

September 17th, 2008

Last week there was a hearing on a new bill before the House Judiciary Committee, the “Fair Copyright in Research Works Act.” Think of it as the Clear Skies Act for copyright: an odious piece of corporate welfare wrapped in a friendly layer of doublespeak. The bill, introduced by Michigan Congressman John Conyers, would prohibit policies like the NIH Public Access Policy by making it illegal for government funding agreements to require any sort of copyright transfer or license from the grantee. It would make it illegal for U.S. government agencies to seek any rights at all in the research that they fund. This is anything but fair. Indeed, it is manifestly unfair to the taxpayers who ultimately pay for the research, and on whose behalf the research is conducted.

Publishers have pushed for this bill because they fear that open access mandates will reduce their profits. If people can access the research for free online, who will pay millions of dollars for subscriptions? Lots of people, actually, but that’s another post.

Instead of baldly admitting that what they seek is protection for their dying business model, publishers argue that the NIH Public Access policy violates their copyrights. The assertion is hogwash. Copyrights belong to authors before they belong to publishers. Authors can license their work however they please; the fact that they have traditionally signed over all of their rights to publishers without compensation does not mean they should continue doing so. Indeed, the case can be made that those who pay the authors — including public entities such as NIH and NSF that support research — could require assignment of some or all rights as a condition of receiving the grant in the first place. I wouldn’t favor such a policy, but it’s fatuous to suggest that Congress should limit the scope of contracts between grantors and grantees.

Allan Adler, VP of the Association of American Publishers, issued a statement in which he had the gall to say that “Government does not fund peer-reviewed journal articles—publishers do.”

That’s just not true. The NIH spends over $28 billion in taxpayer money annually to fund research. Researchers write articles about their findings, and their peers review those articles, without compensation from publishers. Without the research, there would be nothing to publish. Largely due to historical accident, publishers manage the peer review process, helping journal editors to badger referees into reviewing articles, generally for no pay. The value of the scientific expertise that goes into refereeing dwarfs that of the office expenses incurred by publishers in managing the process. The referees’ salaries are paid by universities and research institutes, not by publishers. Basically, we have a system in which the public pays for the research, the universities pay for the refereeing, the publishers pay for office work to coordinate the refereeing, and also for some useful editing. Then the publishers turn around and sell the results back to the universities and to the public who bore almost all of the costs in the first place.

The people of the United States pay good money to learn about the world. It would be a travesty if Congress decided that the interests of a few publishers were more important than the research investments of the American public, and that’s exactly what this bill would do.

Take Me Out to the Ball Game

August 2nd, 2008

{If you don’t like baseball, you should probably stop reading this post now.}

Thursday my wife and I made a pilgrimage to Yankee Stadium, to mark the imminent passing of one of the most famous and significant of ballparks. Neither of us had ever been to the Stadium before, even though we have both been fairly serious baseball fans most of our lives. I grew up in the New York area in the 1950s, and was a Brooklyn Dodgers fan. I went to games at Ebbets Field, and, in the early years of the Mets, the Polo Grounds, but I would not have been caught dead at Yankee Stadium. Hating the Yankees was a birthright and a calling.

But the passage of decades changes things, and looking back on 50-odd years of watching baseball, it seemed pretty obvious that I should add the Stadium to my life list while I still could. So we cashed in some frequent flier miles, presumed on the good offices of friends to get us good tickets (at prices that let you know what it takes to live in New York), and went to the big city to see the Yankees play the Angels.

The game was not especially well played, and the outcome was never in serious doubt after the 3rd inning, but we had a splendid time, and were reminded of what a wonderful game baseball can be.

We were in the fourth row of the upper deck, a little to the outfield side of 3rd base, with a great view of the infield and of all but the deep left-field corner of the outfield. Yankee Stadium is steep, so even though we were some distance from the field as the pigeon dives or the foul ball sails, we had a sense of being on top of the action. (Hmm, why did I never go to this wonderful place to watch a game, for all those years?) It was a perfect night for baseball. The sky started out blue and slowly darkened with the evening, with the stadium lights taking over from natural light, a phenomenon that never fails to thrill me.

Notwithstanding the hoary truth of the proposition that the game isn’t over ‘til the last man is out, this game was pretty much over in the third inning, when the score was 6-0 Angels, and even more over by the 6th, at 10-2. But the evening got darker (and, magically, the field got brighter) and we stayed to the end. In the bottom of the ninth, with the score at 12-3, the Yanks managed to load the bases with no one out. The few left in the crowd exchanged looks: Could this be the start of one of those miraculous comebacks of the kind that (almost) never happen? No, of course not. The Angels put in a competent pitcher, who let the runners on base score but no one else, and the game ended at 12-6, and was never really as close as that lopsided score.

The Yankees made it easy for me to do something that I had never done before. In that hopeless rally in the ninth, I was rooting for them all the way.

Farewell to Cody’s

June 24th, 2008

It’s all over the library and bookish blogs that Cody’s, Berkeley’s great bookstore, has closed its doors. No doubt I should have some deep policy insight about how this tragedy could have been averted, and about its implications for the future, but I don’t. Rather, I’m taking a little time to remember and to mourn.

Perhaps 30 years ago, my father visited me in Ann Arbor and we went to Borders. (The idea that fathers and sons should hang out in bookstores together is one of the things that I am both remembering and, prospectively, mourning.) This was before Borders had become a chain, back when Ann Arbor was its only store, and Dad remarked that it was really quite wonderful that Ann Arbor had a bookstore that reminded him of Cody’s, the best of all of the college town bookstores. And Cody’s remained the standard, even as Borders got glitzy and became a mall store in college town clothing.

On my last visit to Cody’s (and how was I to know that it was to be the very last?) the checkout clerk, looking at what I had bought, engaged me in a conversation that led to my buying William Maxwell’s “So Long, See You Tomorrow.” The title of the book makes the loss of Cody’s all the more powerful. And of course the clerk ran circles around “readers who bought X also bought Y.”

Great bookstores are places where one can reliably expect to be in the presence of books and the people who love them. There should be one around every corner, and certainly one in every university town, and Cody’s is gone. Weeping and gnashing of teeth are in order.