Chapter 7: Open Systems, Formats, and Standards
Casey Bisson

Abstract

Library Technology Reports 43:3 (May/Jun 2007)

“In the 70s, computer users lost the freedoms to redistribute and change software because they didn't value their freedom. Computer users regained these freedoms in the 80s and 90s because a group of idealists, the GNU Project, believed that freedom is what makes a program better, and were willing to work for what we believed in.” — “Linux, GNU, and Freedom,” by Richard M. Stallman, Free Software Foundation Founder

Casey Bisson, with the help of Jessamyn West and Ryan Eby, reports on open-source software (OSS) and its use and importance in libraries in the third issue of Library Technology Reports in 2007.

In “Open-Source Software for Libraries,” Bisson engagingly narrates the history of open source, explains how the OSS “movement” came about, details key players in OSS development, and discusses why and how open source can work for libraries.

Bisson also shares success stories from those in libraries using OSS including:

  • how Thomas Ford Memorial Library in Western Springs, IL, used OSS to build its popular and interactive Western Springs History Web site (www.westernspringshistory.org) on the widely used WordPress platform; and
  • why those at the Meadville (PA) Public Library (meadvillelibrary.org) started using OSS and how the librarians and library staff at that public institution have embraced and benefited from it.

In addition to Bisson's insightful and interesting discussion of OSS, this issue of LTR includes the informative chapter “Open-Source Software on the Desktop,” by community technology librarian Jessamyn West. Also, Ryan Eby, “an active member of the Code4Lib community,” provides an overview of open-source server applications, including the ILS applications Koha and Evergreen; digital library and repository software, such as DSpace and FEDORA; and OPAC replacements, such as Scriblio and SOPAC.

About the Authors

Casey Bisson, named among Library Journal‘s Movers & Shakers for 2007 and recipient of a 2006 Mellon Award for Technology Collaboration for developing Scriblio (formerly WPopac), is an information architect at Plymouth State University. He is a frequent presenter at library and technology conferences and blogs about his passion for libraries, roadside oddities, and hiking in New Hampshire's White Mountains at MaisonBisson.com.

Jessamyn West is a community technology librarian and a moderator of the massive group blog MetaFilter.com. She lives in Central Vermont, where she teaches basic computer skills to novice computer users and librarians. She maintains an online presence at jessamyn.com and librarian.net. Her favorite color is orange.

Ryan Eby is an active member of the Code4Lib community and spends his days supporting distance learners and online courses at Michigan State University. He blogs at blog.ryaneby.com and can often be found on the #code4lib IRC channel. He enjoys brewing his own beer and roasting his own coffee.


The Internet of today may be a happy circumstance—if one appreciates endless diversion and instant access to information and entertainment—but it is no accident.

It may be hard to imagine how the engineers connecting the first four hosts to form ARPAnet in 1969 could possibly have prepared for this—they were building a defense research network, after all—but their awareness of what they didn't know made it possible.1

During the initial development of the ARPAnet, there was simply a limit as to how far ahead anyone could see and manage. The [first network nodes] were placed in cooperative ARPA R&D sites with the hope that these research sites would figure out how to exploit this new communication medium.2

Some of this flexibility can be credited to J. C. R. Licklider, the grandfather of the Internet, who drew inspiration from the early, but isolated, hacker communities forming in university computer labs.3

The applications of computing technology were still very much unknown, but Licklider understood that those communities were driving innovation and wanted to connect them across the geography that then separated them. But he also knew those hackers would be able to figure out how to use the network of computers he was proposing better than he or anyone else could imagine before the fact.

And so the network infrastructure was left intentionally open to exploration and innovation. Still, upon inventing the first e-mail program to communicate between computers on ARPAnet in 1971—giving the @ sign a new raison d'être—Ray Tomlinson is reported to have said, “Don't tell anyone! This isn't what we're supposed to be working on.”4

The phone network, however, had a very different history. The technology of the time, and what we can now describe as Alexander Graham Bell's “limited intentions,” led to a network architecture of simple telephones and complex switches that supported only one use well: voice calls from one phone to another.

In fact, AT&T's policy up to 1968 specifically prohibited other uses or the connection of non–AT&T equipment. The watershed came when Carter Electronics sued AT&T to force the telephone monopoly to allow its customers to use the Carterfone, a device that interfaced two-way radios and telephones.5 Columbia Law School professor Tim Wu explains the significance of the decision:

The Carterfone principle has had enormous consequences not only in telecommunications policy, but for the economic prosperity of the United States. The ability to build a device to a standardized network interface (the phone plug, known as an RJ-11) gave birth to a new market in home and business telecommunications equipment. That led, predictably, to competition in the phone market. But it also led, unpredictably, to other innovations. Those have included mass consumer versions of the fax machine, the answering machine, and, perhaps most importantly, the modem. Arguably, the FCC's rules on network attachments—now known as the Part 68 rules—have been the most successful in its history. The freedom to buy and attach a modem became the anchor of the mass popularization of the Internet in the 1990s. As one observer put it, without Carterfone, “the development and broad popularization of the Internet also would not have occurred as it did. The key point of Carterfone is that it eliminated an innovation bottleneck in the form of the phone company.”6

It may seem ironic to describe the phone network as “smart” and the Internet as “dumb,” but that's exactly how network engineers view them. The Internet uses sophisticated software and computers at each end of a connection to achieve its magic; the equipment in between simply passes the data without caring much about what it is. Phone networks, however, connect relatively simple devices at each end through a very complex series of expensive switches to complete a call.7

Because the telephone network itself is smart, we can use very inexpensive, dumb phones, but those network smarts ultimately limit the applications and services that even the most expensive phone can support. The dumb Internet requires smart and relatively expensive end-user equipment, but imposes almost no limits on how it can be used.

Innovative uses of the phone system flourished after the 1968 Carterfone decision until they stretched the very architecture of the phone network. Innovation on the Internet, unhindered thanks to its open architecture, has instead accelerated, and the network may, surprisingly, have evolved into exactly what its creators once imagined. Internet pioneer David Clark echoed Licklider's embrace of the community-building and communicative power of the technology:

It is not proper to think of networks as connecting computers. Rather, they connect people using computers… The great success of the internet is not technical, but in human impact. Electronic mail may not be a wonderful advance in Computer Science, but it is a whole new way for people to communicate.8

And the community grew quickly. Nearly 160,000 hosts were added to the Internet in the twenty years after those first four hosts were interconnected to form ARPAnet—the network almost doubled in size in 1989 alone.9 That remarkable expansion of the Internet was made possible by the flexibility of its foundation protocol: TCP/IP, the Transmission Control Protocol/Internet Protocol. Bruce Sterling's “Short History of the Internet” pays homage to that flexibility, noting, “As long as individual machines could speak the packet-switching lingua franca of the new, anarchic network, their brand-names, and their content, and even their ownership, were irrelevant.”10

The rest of the Internet—our e-mail, the domain name system, FTP, and hundreds of other standards that eventually led to the development of HTML and HTTP, the format and protocol pair that gave us the World Wide Web—all run on top of that TCP/IP foundation, taking advantage of the protocol's blind willingness to send packets of any type of data to any type of host.
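That layering is easy to see in practice. The short sketch below, a minimal Python example assuming nothing beyond the standard library and using example.com as a stand-in host, speaks HTTP by hand over an ordinary TCP connection; TCP itself neither knows nor cares that the bytes happen to form a Web request.

    # A minimal sketch: to TCP/IP, a protocol is just bytes in transit.
    # "example.com" is a stand-in for any reachable host.
    import socket

    HOST = "example.com"

    with socket.create_connection((HOST, 80)) as sock:
        # TCP carries these bytes without caring what they mean; only the
        # software at each end knows this is an HTTP request.
        sock.sendall(
            f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode("ascii")
        )
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The first line of the reply, e.g. "HTTP/1.1 200 OK"
    print(response.split(b"\r\n", 1)[0].decode("ascii"))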

That isn't to say everything based on TCP/IP has been successful. At one time Gopher and HTTP/HTML competed to create the World Wide Web we know today. Both allowed easy browsing and linking between resources. But compared to HTTP/HTML, Gopher had two shortcomings: it was limited by overspecific assumptions of its intended use—it was designed only for easy browsing of lists of files—and it could not immediately be modified to support the rich content that was appearing on HTTP servers. And it wasn't free.11

The Gopher team at the University of Minnesota explained the situation in a March 1993 e-mail. Facing budget pressures, the team was on notice to deliver recognizable value to the university. They argued that academic use would expand the volume of information available and indirectly benefit the university, but they didn't feel they could make the same case for the nascent commercial use that was starting to appear.

[If] you put up a gopher server that is commercial in nature … containing information whose primary purpose is to MAKE YOU MONEY, then we have a hard time making a case for our administrators supporting this. Indeed if you look at this honestly, a license fee is the right and proper thing to do.12

Tim Berners-Lee, on the other hand, worked passionately to encourage broader uses of HTTP and HTML. Writing in Weaving the Web, he likened his vision to a market economy where “anybody can trade with anybody,” as long as they agree on the basic principles of the market, “such as the currency used for trade, and the rules of fair trading.”13 To support that notion, Berners-Lee founded the World Wide Web Consortium to promote interoperability. And to protect the freedom that he felt was necessary to make the protocol and format successful, the software was released into the public domain in April 1993.14

Like Gopher, HTML didn't initially support images, but the format was flexible enough that Marc Andreessen was able to add support for them in Mosaic, the HTTP/HTML-based Web browser he developed in 1992.15 Some of that flexibility wasn't appreciated—the <blink> tag that made on-screen text flash quickly grew tiresome after its 1994 debut—but it proved crucial to the Web's success.16


Open Ecosystems

A computer could be used in isolation. Even without a network—even if the documents are to be shared only in printed form—word processing software in the hands of a single author offers significant benefits, including spell-checking and easy corrections.

The author would ensure her writing conformed to certain conventions, depending not only on language but also on formatting to convey meaning, for details ranging from page numbers to citations. Readers would need to understand those conventions to derive the meaning; an unfamiliar order of elements in a date might prove less impenetrable than an unfamiliar language, but it could lead to misunderstandings nonetheless.

Successfully communicating information between any two computers, or even between software applications on the same computer, demands the same conformance to convention. And when things go wrong, hand waving, pointing, and grunting turn out to be even less effective at resolving problems between computers than among their human operators.
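Even something as small as a date makes the point. A brief Python illustration, using a deliberately ambiguous stand-in value:

    # The same ten characters, read under two conventions, name two days.
    from datetime import datetime

    stamp = "04/03/2007"
    month_first = datetime.strptime(stamp, "%m/%d/%Y")  # April 3, 2007 (US order)
    day_first = datetime.strptime(stamp, "%d/%m/%Y")    # March 4, 2007 (day-first order)

    print(month_first.date(), day_first.date())  # 2007-04-03 2007-03-04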

Citing those concerns and the increasing importance of electronic documents, the state of Massachusetts in 2005 announced new IT standards that required its 80,000 employees and 173 agencies to adopt open file formats. The decision didn't specify the applications to be used, just the format of the electronic documents they created, stored, and exchanged.17 In making the decision, the state also had to establish a test for openness. What Massachusetts settled on was surprisingly simple:

  • It must be published and subject to peer review.
  • It must be subject to joint stewardship.
  • It must have no or absolutely minimal legal restrictions attached to it.18

The result, and the subject of considerable controversy, was that the state found the rather new Open Document Format, along with Adobe's PDF, to meet that test, while Microsoft's formats, including its Office Open XML format, didn't.19 The critical failure of Microsoft's OOXML format was that the license didn't allow others to build applications that could both read and write the file format, meaning that Microsoft would be the only legal vendor of full-featured applications that used Office Open XML.20

The Massachusetts case was a very conscious and public enactment of the decisions that individuals and organizations of all sizes struggle with. The outcomes the state hopes for are easier upgrades, more reliable long-term access to electronic documents, better communication between departments, and expanded choices for its office software—leading, it hopes, to more competition and lower prices.21

Looking carefully at our information and communication technologies (ICT), the Harvard Berkman Center's Open ePolicy Group places file formats and the software used to read and write them in an ecosystem along with the people and processes that use them. “Most importantly, an ICT ecosystem includes people—diverse individuals who create, buy, sell, regulate, manage and use technology.” And its vision is that “Openness—a synthesis of collaborative creativity, connectivity, access and transparency—is revolutionizing how we communicate, connect and compete.”22

Going far beyond the file format, the Open ePolicy Group sees open ICT ecosystems as making it “possible to re-engineer government, rewrite business models and deliver customized services to citizens,” and offers the following guiding principles for openness in ICT.23

  • Interoperable: allowing, through open standards, the exchange, reuse, interchangeability and interpretation of data across diverse architectures.
  • User-Centric: prioritizing services fulfilling user requirements over perceived hardware or software constraints.
  • Collaborative: permitting governments, industry, and other stakeholders to create, grow and reform communities of interested parties that can leverage strengths, solve common problems, innovate and build upon existing efforts.
  • Sustainable: maintaining balance and resiliency while addressing organizational, technical, financial and legal issues in a manner that allows an ecosystem to thrive and evolve.
  • Flexible: adapting seamlessly and quickly to new information, technologies, protocols and relationships while integrating them as warranted into market-making and government processes.24

In short, as information and communication technologies grow in importance, so too grows the importance that they be open.


The Unruly Crowd

The old joke—so old it's lost its attribution—goes, “The nice thing about standards is that there are so many to choose from.”

“Standards,” once the sole province of businesses and large organizations, are suddenly facing a larger, more fickle audience. Voting with their feet, Internet users—including 73 percent of American adults—represent a large marketplace.25 And among that mass is a growing number of citizen developers, geeks, and hackers who play with data and code because they can.

It's from those citizen developers that much of our recent open-source software has emerged, and their “mashups”—hacks that remix data or Web sites in ways not demonstrated by established players—are changing the shape of the Web.

But those citizen developers are also changing how we create and recognize standards. These developers rarely recognize traditional standards bodies; instead, they pay attention to whatever works and whatever is easiest.

Matt Mullenweg, who began working on the code that would become WordPress as a citizen developer, views standards rather pragmatically, saying they're driven by adoption. It's their utility, not any recognition by a standards organization, that matters to him. “Name a successful standard that's gone through the standards process,” he asks.26

Amazon doesn't claim its Web services—the interfaces that developers use to interact with Amazon's data—are a standard, but with 180,000 registered developers it's probably best to treat them like one.27 Some of those developers are sharing bits of code that make it easier to develop larger applications that use Amazon's Web services (AWS). And these developers are big business: in 2005, the company attributed as much as 28 percent of its sales to third-party developers leading customers to purchase items through Amazon.28

Standards may rise and fall in the marketplace, but Amazon does offer something that few others can: money. The potential to earn affiliate revenue from using AWS can't be discounted as one of a number of factors driving its success. But what of the other factors? How do standards work when not driven by money?

I put that question to DeWitt Clinton, the creator of the OpenSearch format. OpenSearch allows software running on a Web site or your PC to communicate with one or more search indexes on remote servers—in library terms, it's similar in concept to Z39.50. Clinton developed the format for Amazon's A9 search division, which uses the format as the foundation of its metasearch efforts.

A9 Search http://a9.com

A9 doesn't have its own search engine; it displays search results from other search engines—some of them Amazon's partners—and Clinton and his team were responsible for integrating all of it. What he quickly found was that every search engine had its own API that his team had to develop an interface for.

Basically, if you were a search company—if you were Answers.com or something like that—you would say, “Well, OK, I can accept search requests, I'm going to give you search results back, maybe I'll use this XML format, maybe it's going to be SOAP, maybe it's going to be something else.”29

Clearly, Clinton recognized, the process would be much easier if the growing number of search engines they were integrating used a common format to receive requests and return their results. He and his team weren't able to find a format that was as simple yet as flexible as they believed was necessary, “So we said, ‘Let's pick it apart ourselves and propose a search format that our partners could use.'”

The team looked carefully at the details of the interaction between the tools that display the results and those that store the data, the parameters that were exchanged, and the features they supported. The team was able to identify what Clinton describes as “the 80 percent case,” the solution that would work for the great majority of applications most of the time.

Clinton describes what followed as a moment of stark clarity, when he realized that he was about to create “yet another proprietary format.” He says that's when the notion of using RSS developed. Clinton remembers thinking, “Search results are just a list, and the whole world is using RSS as a way of syndicating lists … instead of trying to invent something completely new, what if we leverage existing protocols?”

Using RSS, he realized, would allow OpenSearch to draw from a large body of developers already familiar with RSS and “tons of client libraries”—snippets of code that could read or write RSS and would make implementation easier.
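Because OpenSearch results are ordinary RSS with a few added elements, reading them takes nothing more than a generic XML parser. The minimal Python sketch below makes that concrete; the feed URL is a hypothetical placeholder, as each real target publishes its own endpoint.

    # A minimal sketch of consuming OpenSearch results delivered as RSS.
    # The feed URL is hypothetical; any OpenSearch-over-RSS target would do.
    import urllib.request
    import xml.etree.ElementTree as ET

    OS_NS = "{http://a9.com/-/spec/opensearch/1.1/}"  # OpenSearch 1.1 namespace
    FEED = "http://example.org/search?q=open+source&format=rss"

    with urllib.request.urlopen(FEED) as response:
        channel = ET.parse(response).getroot().find("channel")

    # The OpenSearch elements ride alongside the usual RSS ones.
    print("total results:", channel.findtext(OS_NS + "totalResults"))
    for item in channel.findall("item"):
        print(item.findtext("title"), "->", item.findtext("link"))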

And soon after that, the team encountered their “third lightbulb.” “Who benefits,” they wondered, “if this is a proprietary Amazon solution? Is the world a better place, is even our business better off if this is closed and proprietary? And the answer, very clearly, was ‘no.'”

And so, at the O'Reilly Emerging Technology Conference in March 2005, Amazon's Jeff Bezos took the stage to announce OpenSearch and invite attendees, indeed anybody on the Internet, to connect their search results to A9 or build new display tools that could read OpenSearch results.

OpenSearch Development Web Site www.opensearch.org

When did he know it was going to work? Clinton points proudly at what happened within hours of the announcement: a new site appeared on A9. One of the conference attendees had quickly plugged their site's search engine into A9 using OpenSearch, for all to see and use. “The smile on my face and Jeff Bezos's face said it. We were just happy. It validated so much of what we wanted to do with it.”

In the first year, new OpenSearch targets appeared on A9 at an average of one per day, and the site now features over 500 targets.30 Further driving adoption was Microsoft, which built OpenSearch display support into Internet Explorer 7; the Firefox team later adopted the format as well. “Before I knew it, both Mozilla Firefox 2 and IE7 shipped, and the plugin format was OpenSearch. That was big.”
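The piece a browser actually reads is the OpenSearch description document, a short XML file that names a search engine and gives a URL template built around the spec's {searchTerms} placeholder. The sketch below writes out one such document; the engine name and template URLs are invented for illustration.

    # A sketch of the small XML description document that browsers read to
    # add a search plugin. ShortName and the template URLs are hypothetical.
    description = """<?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
      <ShortName>Example Catalog</ShortName>
      <Description>Search the example library catalog.</Description>
      <Url type="text/html"
           template="http://example.org/search?q={searchTerms}"/>
      <Url type="application/rss+xml"
           template="http://example.org/search?q={searchTerms}&amp;format=rss"/>
    </OpenSearchDescription>
    """

    with open("opensearch.xml", "w", encoding="utf-8") as f:
        f.write(description)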

Clinton also credits the simplicity of the format for its success. He notes they'd only begun work on OpenSearch a few months before the ETech announcement, but quickly brought all the pieces together. “I think it speaks a lot of how nimble A9 was… It also speaks to the relative simplicity of the protocol itself.”

Simplicity was one of the design goals. “Only incorporate the 80 percent cases was one of the rules, the guiding principles.” Clinton looked carefully at actual search usage and implementations and found himself relentlessly cutting features that didn't meet the test. “If it wasn't immediately clear that everybody—the 80 percent cases—would use something, it didn't belong in the set. It was just as simple as that.”

That doesn't mean OpenSearch can't be extended to meet new or specific uses. The OpenSearch community is quite active, and Clinton points to extensions that add geocoding and other geographic details as one exciting example.

But is OpenSearch a “standard”?

Here's my thought on standards: standards work best when you're standardizing something that's already a standard. If you are still trying to figure out what the standard should be, then it's not going to be fun, it's not going to be pretty, and there's a good chance you won't end up with a successful standard.

You may not only not end up with a successful standard, you may not even end up with a successful protocol or format or specification. You may end up with something that nobody's implemented or can't implement. You may end up with something that doesn't even solve the right problems.

Still, Clinton points to the Internet Engineering Task Force's (IETF) work on Atom, an RSS-like format intended to resolve the ambiguity resulting from the three competing versions of RSS. It works, he says, because so many people were already using RSS, and Atom is really just formalizing the best practices the community had already identified. Of course, like OpenSearch, it helps that it's simple. “Those are the types of standards I really like: they're short, they're easy to read, they're noncontroversial.”

But now that almost half of all Internet users are running a browser that supports OpenSearch, now that it's been adopted in whole or part by big players like Microsoft and Google, as well as hundreds of small Web sites, is OpenSearch a standard?31 “Yeah,” Clinton admits, “OpenSearch is sort of at [that] point.”


Un-Free Formats

In 2006, the Free Software Foundation's Richard Stallman wrote to his local library, the venerable Boston Public Library (BPL), about the electronic audiobooks it offers through OverDrive.32 The audiobooks, delivered to patrons via a Web site OverDrive provides to libraries that contract for its services, are playable only on Windows computers and on devices that license Microsoft's technology.

The technology, often called copy protection or DRM (for digital rights management), allows content providers to restrict how we use music, movies, and, in this case, audiobooks. For this reason, Stallman calls it “digital restrictions management,” adding:

Describing it as “copyright protection” puts a favorable spin on a mechanism intended to deny the public the exercise of those rights which copyright law has not yet denied them.33

A number of vendors each have their own, incompatible DRM scheme; OverDrive licensed Microsoft's and built its products and services around it.

The result is that Stallman, who uses exclusively free software, couldn't play the audiobooks on any computer he had available. And he wasn't alone. Another BPL patron who had brought the matter to Stallman's attention was troubled because the books couldn't be played on his Macintosh computer, or anybody else's Mac. And the audiobooks can't be played on any of the 90 million iPods sold worldwide, either.34

OverDrive pointed the finger at Apple, suggesting users “contact Apple and request that they open the iPod to other copy-protected formats or license their proprietary copy-protection method to third-party vendors,”35 but failed to mention that its competitor, Audible.com, does provide books that are playable on iPods.

Audible.com, which was among the first vendors to offer downloadable audio programs, had developed its own technology and was able to work with Apple to make its service compatible with iPods. OverDrive, which uses Microsoft's technology, couldn't do that because it doesn't control the software it depends on.

While it's easy to paint this as just another battle between Apple and Microsoft, Stallman urged BPL and other libraries to consider the risk that DRM posed to the very nature of libraries. Paper-bound books remain easily readable through the ages, serving both the needs of library patrons and the historical record. But while DRM-controlled materials may offer convenient service to a limited number of patrons with compatible equipment, the technology is difficult to maintain, and the incompatibilities will grow with time until the files can't be used anywhere, ruining their future value as historical documents.

The tendency of digitalization is to convert public libraries into retail stores for vendors of digital works. The choice to distribute information in a secret format—information designed to evaporate and become unreadable—is the antithesis of the spirit of the public library. Libraries which participate in this have lost their hearts.36

But there is no simple answer. Until content publishers allow online distribution of their materials without DRM controls, libraries will have to choose between offering online services and fulfilling their role in the preservation of knowledge.


Notes
1. Robert H. Zakon, “Hobbes' Internet Timeline v8.2,” last updated Nov. 1, 2006, on the Zakon Group Web site, www.zakon.org/robert/internet/timeline (accessed Mar. 19, 2007).
2. “The History of ARPA Leading Up to the ARPANET,” History of ARPANET, part I, on the Web site of Departamento de Engenharia Informática do ISEP, www.dei.isep.ipp.pt/~acc/docs/arpa-1.html (accessed Mar. 19, 2007).
3. Ibid.
4. Zakon, “Hobbes' Internet Timeline”; Sasha Cavender, “Legends,” Oct. 5, 1998, Forbes.com, http://members.forbes.com/asap/1998/1005/126.html (accessed Mar. 19, 2007).
5. Use of the Carterfone Device in Message Toll Telephone Service, 13 FCC 2d 420 (1968), available online at www.uiowa.edu/~cyberlaw/FCCOps/1968/13F2-420.html (accessed Mar. 19, 2007); “Mobile Malcontent,” transcript of a story reported by Brooke Gladstone, broadcast Mar. 2, 2007, on National Public Radio, available online on the Web site On the Media from NPR, www.onthemedia.org/transcripts/2007/03/02/04 (accessed Mar. 19, 2007).
6. Tim Wu, “Wireless Net Neutrality: Cellular Carterfone on Mobile Networks,” February 2007, New America Foundation Wireless Future Program Working Paper No. 17, available on the Social Science Research Network Web site, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962027#PaperDownload (accessed Mar. 19, 2007).
7. David S. Isenberg, “The Dawn of the Stupid Network,” originally published in ACM Networker 2.1 (Feb./Mar. 1998): 24–31, available online at http://isen.com/papers/Dawnstupid.html (accessed Mar. 19, 2007).
8. David Clark, quoted in Gary Scott Malkin, “Who's Who in the Internet: Biographies of IAB, IESG, and IRSG Members,” Network Working Group Request for Comments 1336, May 1992, available on the Internet FAQ Archives Web site, www.faqs.org/rfcs/rfc1336.html (accessed Mar. 19, 2007).
9. Robert H. Zakon, “Hobbes' Internet Timeline,” Network Working Group Request for Comments 2235, Nov. 1997, available on the Internet FAQ Archives Web site, www.faqs.org/rfcs/rfc2235.html (accessed Mar. 19, 2007).
10. Bruce Sterling, “A Short History of the Internet,” first published in The Magazine of Fantasy and Science Fiction (Feb. 1993), available on the Web site of the Yale University Divinity School Library, www.library.yale.edu/div/instruct/internet/history.htm (accessed Mar. 19, 2007).
11. “The Internet Gopher: History of the Gopher Protocol,” available on the Code Ghost Web site, www.codeghost.com/gopher_history.html (accessed Mar. 29, 2007).
12. The Minnesota Gopher Team, “University of Minnesota Gopher Software Licensing Policy,” e-mail message dated Mar. 11, 1993, available online at www.nic.funet.fi/pub/vms/networking/gopher/gopher-software-licensing-policy.ancient (accessed Mar. 19, 2007).
13. Tim Berners-Lee with Mark Fischetti, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor (New York: HarperCollins, 2000).
14. W. Hoogland and H. Weber, “Software Freely Available,” May 1993, on the World Wide Web Consortium Web site, www.w3.org/History/1993/WWW/Conditions/FreeofCharge.html (accessed Mar. 19, 2007); “Statement Concerning CERN W3 Software Release Into Public Domain,” on the World Wide Web Consortium Web site, www.w3.org/History/1993/WWW/Conditions/PublicDomain/PublicDomain9305p1.gif (accessed Mar. 19, 2007); W. Hoogland and H. Weber, “Declaration,” Apr. 30, 1993, on the World Wide Web Consortium Web site, www.w3.org/History/1993/WWW/Conditions/PublicDomain/PublicDomain9305p2.gif (accessed Mar. 19, 2007).
15. “Marc Andreesen,” in Internet Pioneers on the ibiblio Web site, www.ibiblio.org/pioneers/andreesen.html (accessed Mar. 19, 2007); “Mosaic (Web Browser),” Wikipedia, http://en.wikipedia.org/wiki/Mosaic_(web_browser) (accessed Mar. 19, 2007).
16. Jakob Nielsen, “Original Top Ten Mistakes in Web Design,” May 1996, available on useit.com, www.useit.com/alertbox/9605a.html (accessed Mar. 19, 2007); “Netscape Navigator,” Wikipedia, http://en.wikipedia.org/wiki/Netscape_Navigator (accessed Mar. 19, 2007).
17. Commonwealth of Massachusetts, “Enterprise Technical Reference Model, Version 3.5: Introduction,” Sept. 1, 2005, on the official Commonwealth of Massachusetts Web site, www.mass.gov/Aitd/docs/policies_standards/etrm3dot5/etrmv3dot5intro.pdf (accessed Mar. 19, 2007).
18. David Berlind, “Microsoft: We Were Railroaded in Massachusetts on ODF,” Oct. 17, 2005, on the ZDNet Web site, http://news.zdnet.com/2100-3513_22-5893208.html, p. 7 of 9 (accessed Mar. 19, 2007).
19. Peter Galli, “Mass. Back on Track for ODF Implementation,” Aug. 24, 2006, on the eWeek.com Web site, www.eweek.com/article2/0,1759,2008246,00.asp (accessed Mar. 19, 2007).
20. Berlind, “Microsoft: We Were Railroaded,” p. 7 of 9.
21. Massachusetts, “Enterprise Technical Reference Model: Introduction”; Berlind, “Microsoft: We Were Railroaded,” p. 1 of 9.
22. “Roadmap for Open ICT Ecosystems” (Berkman Center for Internet and Society at Harvard Law School, Sept. 2005), 3, available online at http://cyber.law.harvard.edu/epolicy/roadmap.pdf (accessed Mar. 19, 2007).
23. Ibid.
24. Ibid., 4.
25. Mary Madden, Data Memo Re: Internet Penetration and Impact (Pew Internet & American Life Project, April 2006), 4, available online at www.pewinternet.org/pdfs/PIP_Internet_Impact.pdf (accessed Mar. 19, 2007).
26. Matt Mullenweg (WordPress developer), interview by the author, Aug. 6, 2006.
27. Martin LaMonica, “Web Giants Lure Developers,” Sept. 1, 2006, on CNET News.com, http://news.com.com/Web+giants+lure+developers/2100-7345_3-6111465.html (accessed Mar. 19, 2007).
28. Thomas Claburn, “APIs Make Money for Amazon,” Oct. 18, 2005, InformationWeek Web site, www.informationweek.com/showArticle.jhtml?articleID=172302181 (accessed Mar. 19, 2007).
29. DeWitt Clinton (OpenSearch inventor), phone interview by the author, Mar. 20, 2007. All quotes from Clinton in the section that follows are from the same interview.
30. A9 search page, http://a9.com (accessed Mar. 19, 2007).
31. “Browser Statistics,” W3Schools Web site, www.w3schools.com/browsers/browsers_stats.asp (accessed Mar. 19, 2007).
32. Richard Stallman, “Letter to the Boston Public Library,” Jan. 30, 2006, on the Free Software Foundation Web site, www.fsf.org/campaigns/bpl.html (accessed Mar. 19, 2007).
33. Ibid.
34. Steve Jobs, “Thoughts on Music,” Feb. 6, 2007, on the Apple Web site, www.apple.com/hotnews/thoughtsonmusic (accessed Mar. 19, 2007).
35. “Frequently Asked Questions,” on the OverDrive Web site, www.overdrive.com/DeviceResourceCenter/faqs.asp (accessed Mar. 19, 2007).
36. Stallman, “Letter to the Boston Public Library.”
