
Chapter 5. Future Directions

There are many unknowns in the near future of technology, but two observational laws that continue to have predictive power are Moore’s Law and Koomey’s Law.1 Moore’s Law is named for Gordon Moore, cofounder of Intel, who observed that roughly every eighteen months the number of transistors on a silicon chip doubled while the cost of producing that chip stayed essentially flat. The effect is that the price of a given amount of computing power is cut in half every year and a half, which makes computing one of the very few commercial resources that continually gets both better and less expensive over time. The companion law, Koomey’s Law, operates on the same time frame, but instead of computing ability, it describes the amount of electricity needed to drive the chip in question. According to Koomey, every eighteen months the amount of energy needed to do a fixed amount of computing is halved.

Humans are bad at grasping the difference in effect between linear and exponential change. To take one fairly simple example, suppose we use a modern cellphone, say the iPhone 8, as our baseline for computing. An iPhone 8 costs $699. If we then apply Moore’s Law to the phone as a whole (ignoring manufacturing costs; this is a very simple thought exercise, not a full accounting of the costs of production), we can extrapolate what the same amount of computing ability would cost in five, ten, or twenty years. The same amount of computing power, complete with camera, wireless connectivity, and the like, will cost roughly ninety-two dollars in five years; twelve dollars in ten years; and only twenty-one cents in twenty years. Yes, that’s not a typo: twenty-one cents. And, of course, five years beyond that we are talking about fractional cents.
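The arithmetic behind these figures is simple compounding. As a minimal sketch, if we assume the price of a fixed amount of computing power halves every period T (somewhere in the range of eighteen months to two years), the projected cost after t years is

\[ \text{cost}(t) \approx \$699 \times 2^{-t/T} \]

The exact dollar amounts shift with the halving period assumed, but the shape of the curve, and the slide toward fractional cents, does not.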

Why do we care about this change? Because the endgame of the Internet of Things is computing power and connectivity so cheap that they are embedded in literally every object manufactured. Everything will have the ability to be “smart”: every chair, every table, every book, every pencil, every piece of clothing, every disposable coffee cup. Eventually the expectation will be that objects in the world know where they are and are trackable or addressable in some way. The way we interact with objects will likely change as a result, and our understanding of the things in our spaces will become far more nuanced and detailed than it is now.

For example, once the marginal cost of sensors drops below the average cost of human-powered shelf reading, it becomes an easy decision to sprinkle magic connectivity sensors over our books, making each of them a sensor and an agent of data collection. Imagine being able, at any time, to query your entire collection for misshelved items. Each book will be able to communicate with the books around it, with the Wi-Fi base stations in the building, and with the shelves, and it will know when it is out of place. Even more radically, the entire concept of place may fall away, because the book (or other object) will be able to tell the patron where it is, no matter where it happens to be shelved in the building. Ask for a book, and it will not only tell you where it is but also mesh with all the other books to lead you to it. There will be no more “lost books” for patrons, since they will be able to look on a map, see where the book is in their house, and have it reveal itself via an augmented reality overlay on their phone.
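To make the shelf-reading example concrete, here is a toy sketch in Python of how a collection of self-reporting books might flag misshelved items by comparing the neighbors each book senses against the order the catalog expects. The call numbers, data, and function name are invented for illustration, not drawn from any existing system.

def find_misshelved(observed_order, catalog_order):
    """Return call numbers whose shelf neighbors do not match the catalog order.

    observed_order: call numbers as the (hypothetical) shelf sensors report
                    them, left to right along the shelf.
    catalog_order:  the same call numbers in the order the catalog expects.
    """
    expected = {call: i for i, call in enumerate(catalog_order)}
    misshelved = []
    for i, call in enumerate(observed_order):
        # A book is suspect if either neighbor is not where the catalog
        # ordering says it should sit relative to this book.
        left_ok = i == 0 or expected[observed_order[i - 1]] < expected[call]
        right_ok = i == len(observed_order) - 1 or expected[call] < expected[observed_order[i + 1]]
        if not (left_ok and right_ok):
            misshelved.append(call)
    return misshelved

# Invented example: two books shelved out of sequence.
shelf = ["QA76.2", "QA76.9", "QA76.5", "QA77.1"]   # what the sensors see
print(find_misshelved(shelf, sorted(shelf)))        # flags the out-of-order pair

The comparison logic itself is trivial; the interesting (and entirely speculative) part is the mesh of sensors that would supply the observed shelf order in the first place.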

The world of data available to us in ten to twenty years will be as large as we wish it to be. In fact, it may be too large for us to make sense of directly. My guess is that we will need machine learning systems to sort through the enormous mounds of data and help us understand the patterns and links among different data points. The advantage is that, if we can sort and analyze it appropriately, the data will be able to answer many, many questions about our spaces that we’ve not even dreamed of yet, hopefully allowing us to design better, more effective, and more useful spaces for our patrons.

At the same time, we need to be wary of letting measurements become targets. I opened this report with a concept credited to economist Charles Goodhart, as phrased by anthropologist Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.”2 We can see this over and over, not just in libraries, but in any organization. An organization will optimize around the measures it is rewarded for, often causing negative effects in other areas. This is captured in the idea of perverse incentives, where an organization rewards the achievement of a metric, only to realize that the achievement undermines the original goal. The classic example is known colloquially as the “cobra effect,” named after the probably apocryphal story of British colonial administrators in India paying citizens for bringing in dead cobras in an attempt to control the snakes’ numbers in the cities. Of course, clever people were then incentivized to breed cobras in secret in order to maximize their profits.3

Libraries should be wary of the data they gather, especially as we move into the next decade or two of technological development. The combination of data that is toxic to the privacy of our patrons and the risk of perverse incentives steering decisions, despite Goodhart’s warning about measures becoming targets, is enough for me to urge caution on any library that wishes to implement a data-heavy decision-making or planning process. I believe strongly in the power of data analysis to build a better future for libraries and our patrons. But used poorly or unthinkingly, the data we choose to collect could become our own set of cobras.

Conclusion

There is enormous potential for smart buildings to improve how libraries are viewed by their communities. There is also a huge threat in the addition of sensors to library spaces: the destruction of any semblance of privacy in the reading experience. This threat grows the more that libraries outsource the collection of environmental and usage data to outside vendors, especially those that trade in data outside the library ecosystem. Libraries that start moving into this world need to be extremely careful to understand who controls the data about their spaces and where that data is going.

The risks of data collection aren’t always obvious. One example that illustrates the challenge of threat modeling for the Internet of Things comes from the Measure the Future project. By itself, the data collected by Measure the Future is innocuous and can’t be tied to any particular patron. But if you have data about the movement of people in a space, and that space has only one person in it, then correlating that data with another source could reveal the identity of the person browsing. If law enforcement shows up with a subpoena for all of the data that your library holds for a particular period of time, it is far better not to have data about your patrons’ browsing habits at all than to risk revealing them. In this particular threat model, Measure the Future addresses the problem by not recording the data in question when fewer than three people are in the frame, instead buffering it and collapsing it into the next data bucket.
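The buffering approach is easy to illustrate. The following is a minimal sketch of the threshold-and-buffer idea described above, not Measure the Future’s actual code; the class, variable names, and the three-person constant are assumptions made for the example.

MIN_PEOPLE = 3  # assumed threshold: sparser intervals are never stored on their own

class ThresholdedBuckets:
    """Hold back activity counts from sparsely occupied intervals and fold
    them into the next interval that clears the occupancy threshold."""

    def __init__(self):
        self.buffered = 0      # activity counts withheld so far
        self.buckets = []      # data judged safe to store

    def record_interval(self, people_in_frame, activity_count):
        if people_in_frame < MIN_PEOPLE:
            # Too few people present to record safely; buffer instead.
            self.buffered += activity_count
            return
        # Enough people present: release this interval plus anything buffered,
        # so no stored bucket describes a nearly empty room.
        self.buckets.append(self.buffered + activity_count)
        self.buffered = 0

collector = ThresholdedBuckets()
collector.record_interval(people_in_frame=1, activity_count=4)   # buffered
collector.record_interval(people_in_frame=2, activity_count=2)   # buffered
collector.record_interval(people_in_frame=5, activity_count=9)   # stored as 15
print(collector.buckets)   # [15]

The point of the design is that the withheld counts are never discarded, only delayed until they can no longer be attributed to an individual patron.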

As with many technologies, the balance of risk and reward for smart spaces may take some time to settle out. I believe it will settle into positive outcomes for those who choose to integrate data collection into their physical surroundings thoughtfully, but it is equally clear that this must be done with care and attention to the risks to our patrons. It’s important to think about these risks now, because as J. B. S. Haldane quipped, “I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose.”4 That is certainly going to be true for technology over the next two decades.

Notes

  1. Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38, no. 8 (April 19, 1965); Jonathan Koomey, Stephen Berard, Marla Sanchez, and Henry Wong, “Implications of Historical Trends in the Electrical Efficiency of Computing,” IEEE Annals of the History of Computing 33, no. 3 (March 29, 2010): 46–54, https://doi.org/10.1109/MAHC.2010.28.
  2. Marilyn Strathern, “‘Improving Ratings’: Audit in the British University System,” European Review 5, no. 3 (July 1997): 308.
  3. “Cobra Effect,” Wikipedia, last updated October 6, 2017, https://en.wikipedia.org/wiki/Cobra_effect.
  4. J. B. S. Haldane, Possible Worlds: And Other Essays (1927; reprint, London: Chatto and Windus, 1932), 286.
