Paul Duguid’s comment on an earlier post of mine gets to important issues that I expect to discuss repeatedly (although not repetitiously) in this space. Among the big questions that he raises are these two: (1) How good a job will Google Book Search do? (2) What are the consequences that flow from the answer to (1)?
I can’t answer the first question. Thus far GBS has not done well with multivolume works, sort of like iTunes with classical music. In both cases, metadata is thrown away, and the results are often more amusing than useful. Library partners, including Michigan, have been on Google’s case about this for some time. Duguid asks whether Google will learn from Michigan. My experience is that in general Google is very good at learning.
I have more to say about the second question. Here I am optimistic. Suppose Google never gets good at multivolume works, and in this and possibly other domains falls well short of good performance in delivering to users what they are looking for. I find it very unlikely that such a circumstance would be sustained, because Google has a strong interest in being responsive to its users. So the outcome will turn on how discerning the users will be, and on that subject colleges and universities and their libraries should have a great deal to say. What matters is whether academic libraries and their associated colleges and universities are able to teach their students well enough so that students can tell the difference between good search outcomes and misguided ones. (We also need to teach our students how to recognize sources with reliable provenance, and how to use such sources in order to make sense of their own and others’ arguments, but that is a longer discussion for another time.)
If we (academic institutions) do our job well, users will not tolerate unreliable search outcomes, and in that case I would expect Google to be responsive, not because libraries have told them how to catalog books, but because users will find books that are ill-cataloged to be less useful than books that are well-cataloged. By using the Google-scanned works well in our teaching and research, we can develop practices of scholarly literacy that use authenticated and reliable digital sources. GBS may be the direct source of the works, or we may rely on the library copies. Either way, the important job for academic institutions is to teach well (or, more precisely, to assure that their students learn well), and that is exactly as it should be.
i’ll say it one more time here.
paul, your biggest problems are _not_ with google,
they’re right there in front of you in your own shop.
when you’re ready for me to point them out to you,
let me know, and i’ll try to be gentle…
-bowerbird
November 12, 2007 @ 11:06 pm
Paul,
Back in the day the project was University Microfilms, not Google, and the promise was of universal access to all of those inconvenient bulky bits of paper on the shiny new microfilm format.
The net result of that was the pulping of many years of historical newspapers (because the microfilm was “preservation” and the storage of said bulky newspapers was too expensive).
I understand that Buhr is full and that all of the libraries are full. Is the Google project an opportunity to get rid of some more books?
November 12, 2007 @ 11:35 pm
ok, edward, first thing to know is
librarians don’t really like to call it
“getting rid of some more books…” :+)
i think the more popular expression
these days is “deaccessioning”…
it’s kind of a sensitive topic… ;+)
-bowerbird
p.s. seriously, we better scan them
before we trash ’em, that’s for sure…
November 14, 2007 @ 4:22 am
bowerbird….STFU.
November 19, 2007 @ 11:13 pm
“[…] users will find books that are ill-cataloged to be less useful than books that are well-cataloged.”
This is true even in a “pre-Google” or “non-Googled” world. All of the time-to-shelf reduction in the world won’t make users’ lives easier if they can’t find the books in the first place; ill-cataloged books can’t be found, either via shelf-browse or via the OPAC.
November 20, 2007 @ 1:35 pm