Picture this: at the click of a button, you gain instant access to 25 million digital copies of books from major libraries across the world. You could search for keywords instantly instead of combing through an expansive library to no avail. You could highlight important sections and annotate complicated diagrams without marring the original copy, and even share your newfound knowledge with a friend through links, no physical copy required. This is ‘Project Ocean’, a secret book-scanning effort that Google has led since 2002.
“The universal library has been talked about for millennia,” Richard Ovenden, the head of Oxford’s Bodleian Libraries, told the Atlantic. “It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution.”
When Larry Page and Marissa Mayer sat down with a metronome in 2002 and scanned a 300-page book by hand, the arduous process took 40 minutes. From there they calculated that digitizing an entire library as large as the University of Michigan’s would take approximately 1,000 years. Google, however, managed to cut that seemingly unachievable figure to six.
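The estimate above can be roughly reconstructed. As a hedged sketch, assume the University of Michigan’s collection held on the order of 7 million volumes (a figure not stated in this article) and take Page and Mayer’s measured pace of 40 minutes per book for a single person:

```python
# Back-of-envelope reconstruction of the 2002 scanning estimate.
# Assumption (not from this article): Michigan's library held
# roughly 7 million volumes.
VOLUMES = 7_000_000          # assumed collection size
MINUTES_PER_BOOK = 40        # Page and Mayer's measured pace

total_minutes = VOLUMES * MINUTES_PER_BOOK

# One person scanning around the clock, every day of the year:
years_nonstop = total_minutes / (60 * 24 * 365)

# One person scanning 8 hours a day, 250 working days a year:
years_workdays = total_minutes / (60 * 8 * 250)

print(f"nonstop:  ~{years_nonstop:.0f} years")   # ~533 years
print(f"workdays: ~{years_workdays:.0f} years")  # ~2333 years
```

Depending on how realistically one models working hours, a single scanner lands somewhere between a few hundred and a couple of thousand years, which brackets the roughly 1,000-year figure Page arrived at.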
Every week from then on, trucks filled with books from major universities and other prestigious library systems pulled up at Google’s scanning centers one after another. With machines that processed 1,000 pages per hour, Google’s well-planned project was extremely efficient and looked as promising as ever.
[Image via the Digital Reader]
However, this imagined library of everything was not welcomed by the publishing industry. Regarding the project as a case of “massive copyright infringement”, authors and publishers filed lawsuits that threatened to fine Google for every book it uploaded. The dispute stemmed from differing interpretations: while Google defended its strenuous effort as “fair use”, copyright holders saw the project as plain infringement. Thus began the battle between the Authors Guild and Google.
“A key part of the line between what’s fair use and what’s not is transformation,” Google’s lawyer, David Drummond, told the Atlantic in an article published earlier this year. “Yes, we’re making a copy when we digitize. But surely the ability to find something because a term appears in a book is not the same thing as reading the book. That’s why Google Books is a different product from the book itself.”
[Image via Search Engine Land]
Writers, publishers, and Google remained at odds for years, until the parties finally reached a mutual understanding that allows Google to offer digital access to older and out-of-print books. Instead of granting complete access, Google would provide snippets of text and a digital platform for books that are no longer on shelves.
“We realized there was an opportunity to do something extraordinary for readers and academics in this country,” said Richard Sarnoff, who was then Chairman of the American Association of Publishers. “We realized that we could light up the out-of-print backlist of this industry for two things: discovery and consumption.”
In April 2016, the US Supreme Court ended the tiresome battle between the Authors Guild and Google by declining to hear the Guild’s appeal. Since then, Google has been permitted to scan library books and catalog them without violating copyright law. Now known as Google Books, Project Ocean enables millions of internet users across the globe to search for books at four levels of access: full view, preview, snippet view, and no preview.
Nevertheless, reporter and programmer James Somers at the Atlantic pondered Google’s so-called victory: “somewhere at Google there is a database containing 25-million books and nobody is allowed to read them…and the only people who can see it are half a dozen engineers on the project who happen to have access because they’re the ones responsible for locking it up.”
Feature image courtesy of Good E-Reader