Chapter 2. The Digital Meets the Physical and the Biological

New Developments in Extended Reality

Extended reality (XR) is one of the new digital technologies that illustrates how the physical gets infused with the digital. Extended reality refers to the environments and human-machine interactions “that either merge the physical and virtual worlds or create an entirely immersive experience for the user.”1 Such environments and interactions are generated by computer technology and wearables, which are computer-powered devices or equipment, such as a headset or a pair of glasses, that can be worn by a user. Augmented reality (AR), virtual reality (VR), and mixed reality (MR) are different types of XR.2

Virtual Reality

Virtual reality is an artificial three-dimensional environment that is created on a computer or captured by a video camera. It is presented through a head-mounted display, while base stations track the user’s location. The head-mounted display and the base stations are both connected to a high-performance PC that runs VR apps. The user interacts with the virtual world by means of controllers. Oculus Rift and HTC Vive are two well-known VR systems, both released in 2016.3 The price for these VR systems has gone down significantly. They can be purchased for as little as $400 to $500 at the time of writing. Microsoft also has a VR platform called Windows Mixed Reality.4 While its name includes the term mixed reality, it is actually a VR platform.5 Unlike HTC Vive or Oculus Rift, Windows Mixed Reality headsets have inside-out tracking, which allows them to track the user’s movements and direction without external sensors. Many manufacturers, such as Samsung, Acer, and Dell, are selling this type of VR headset for the Windows Mixed Reality platform.

These VR systems enable individuals to immerse themselves in a simulated environment, which feels real to explore and manipulate. Gaming and entertainment are the areas where VR has become immediately popular. But VR can also bring benefits to a number of non-gaming activities. Its immersive power makes VR an effective tool for activities such as learning, job training, product design, and prototyping. For example, teachers are using VR apps such as Google Expeditions and DiscoveryVR in classrooms to take students on virtual field trips to faraway places.6

The three-dimensional VR environment also brings unique advantages in creating 3-D models.7 VR applications for 3-D modeling—such as MakeVR Pro, Medium, ShapeLab, MasterpieceVR, Gravity Sketch Pro, and Google Blocks—allow people to create a 3-D object in the 3-D environment, review it, and export it in the .STL or .OBJ file format ready to be 3-D printed.8 These applications enable users to import 3-D model files as well and modify them in the 3-D environment. The adoption of VR can bring interesting changes to product design. In 2016, BMW announced a plan to use HTC Vive VR headsets and mixed reality for vehicle development for greater flexibility, faster results, and lower costs.9
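To make the export step concrete, the sketch below shows what the .STL format that these VR modeling apps produce actually looks like. In its ASCII form, an STL file is just a list of triangular facets, each with a normal vector and three vertices. This is an illustrative minimal writer, not code from any of the applications named above.

```python
# Minimal sketch of the ASCII STL format used for 3-D printing:
# a solid is a list of triangular facets, each with a surface
# normal and three vertex coordinates.

def write_ascii_stl(path, triangles, name="model"):
    """Write triangles [(normal, (v1, v2, v3)), ...] as ASCII STL."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in triangles:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangular facet in the x-y plane, normal pointing up (+z).
tri = ((0.0, 0.0, 1.0),
       ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
write_ascii_stl("triangle.stl", [tri])
```

A real model exported from a VR sculpting app contains thousands of such facets, but the file structure is the same, which is why a model drawn in VR can go straight to a 3-D printer.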

The current VR technology is limited in its support for social VR experiences, however. While VR works well for many solo tasks, the inability to interact with others in the same VR environment is a shortcoming that will need to be overcome for VR to become fully mainstream. Two social VR platforms, VRChat and AltspaceVR, provide VR environments in which VR users can meet and interact with one another. But the experience on these platforms is not yet as smooth as one would expect.10 Facebook, which acquired Oculus in 2014, introduced Facebook Spaces in 2017 and is now developing Facebook Horizon, which is to be launched in 2020. Mozilla also started its own browser-based social VR platform, Hubs, in 2018.11 Whether these experiments will eventually lead to a more refined social VR experience remains to be seen.





Augmented Reality

Unlike VR, which creates a completely separate reality from the real-life environment, AR and MR add information to the real world. AR is an overlay of digital content on real-life surroundings. The general public has become familiar with AR through Google Glass and Pokémon Go. Pokémon Go is an AR game played on a smartphone. It was released in July 2016 and became highly popular, earning a total of $1.2 billion and being downloaded 752 million times by 2017.12 Two years later, the game is still popular and widely played across the world.

Google Glass is a device for AR. It debuted in 2013, and some libraries purchased the device and lent it to library users. Due to widely raised privacy concerns, Google stopped selling the prototype Google Glass in 2015. However, its second-generation enterprise edition has been adopted and tested at several companies, such as Boeing, GE, and DHL, reducing processing and training time and improving productivity and efficiency.13 For example, a farm equipment manufacturer, AGCO, has about 100 employees using the custom Google Glass. With Google Glass on, AGCO employees can get a reminder about the series of tasks they need to perform while assembling a tractor engine. They can also locate and access certain information related to the assembly of parts. They can scan the serial number of a part to bring up a manual, photos, or videos with Google Glass. AGCO reported that the addition of Google Glass made quality checks 20 percent faster and also helped train new employees on the job.14 Google unveiled the Google Glass enterprise edition 2 in May 2019. This newer model does not look much different from plain eyeglasses. It costs $999 and is equipped with a faster quad-core 1.7 GHz CPU, an 8-megapixel camera, a 640 × 360 optical display, a microphone, a speaker, a multitouch gesture touchpad, 3 GB RAM, and 32 GB storage.15 The new Google Glass enterprise edition 2 is sold to corporate users only.

AR is also being adopted in education. A system called zSpace provides a suite of AR applications developed specifically for learning.16 It consists of a computer, a pair of 3-D glasses, and a pen. The educational applications available for zSpace cover K–12 education, career and technical education, and medical education. zSpace provides a way for multiple people to experience AR at the same time, although control is still limited to one person.

It is to be noted that many more smart glasses are now coming to the consumer electronics market. Some have only a few simple features and basically function as a combination of a fitness tracker, a notification display, an earphone, and a still and video camera.17 But other smart glasses, such as Vuzix Blade and North Focals, are designed to be more like a smartwatch, closer to the way Google Glass works, allowing people to use functions and apps, which include instant messaging, maps and directions, Alexa, Google Assistant, Yelp, and Uber.18



Interesting developments in AR are also taking place with Google Lens. Google Lens is a camera-based AR technology that began rolling out to the Android smartphone’s camera app in 2017. At the 2019 Google I/O conference, Google introduced AR search powered by Google Lens.19 Using a compatible Android or iOS device, people can now see a 3-D object in their search results and view it as if it were in their immediate surroundings in real-life scale through the device’s camera. It is not difficult to see that many businesses, such as furniture stores, will be motivated to provide 3-D files of their products for AR search because such files can vastly improve their customers’ shopping experience. Google Lens can also find and suggest similar items to buy when people see something they like, whether it is a shirt, a chair, or a handbag. Achieving the same result by running a conventional web search would be much slower.


With the help of rapidly advancing research in artificial intelligence (AI) and computer vision, Google Lens is capable of performing real-time translation and object identification. It scans and translates texts, allows one to look up words and copy and paste them, adds events to one’s calendar, and calls a phone number. These features can come in handy on many occasions. For example, at a restaurant, one can not only translate the menu but also look up dishes and even find out which ones are popular from the reviews and photos from Google Maps, using the Google Lens feature on a compatible smartphone. While traveling, one can point the camera of a smartphone at a popular landmark and find out its hours and historical facts associated with it. Buildings and other landmarks are not the only items that Google Lens can identify. It also identifies plants and animals.

As its name suggests, Google Lens provides a lens through which the world can be viewed augmented and enriched with digital information. This will make the physical and digital worlds more integrated and enmeshed with each other. Currently, Google Lens operates through the camera on a smartphone, but once integrated with the future models of smart glasses or other wearables, it will open up a whole new way for us to interact with the physical world.

Mixed Reality

Mixed reality (MR) is a combination of VR and AR. It allows one to view and interact with the real physical world and digital objects by mixing them together. Although the term mixed reality dates back to the 1990s, it drew renewed attention with ProtoSpace, a digital environment developed by NASA’s Jet Propulsion Laboratory in 2016. ProtoSpace is a multicolored CAD-rendering MR program that allows engineers to build an object that feels and acts like a real object. It is used to find flaws in the design before a physical part is built.20 MR has been around for a while, but it is not yet as well known to the public as VR and AR.

The Microsoft HoloLens is likely the most widely known MR headset.21 It is a self-contained holographic computer built into a headset that can not only project virtual objects into the real world but also produce real-life-like interactions by mapping the user’s environment as a three-dimensional mesh. Scopis, advertised as “a mixed-reality interface for surgeons,” is a medical image guidance system that provides an MR overlay through the Microsoft HoloLens.22 A surgeon wearing the HoloLens headset gets guidance from Scopis through spinal and other complex surgeries.23 It improves the accuracy and speed of the surgery.24

Microsoft released the HoloLens 2 in November 2019 with a price of $3,500.25 The HoloLens 2 comes with a much larger field of view and better ability to detect physical objects in comparison to the HoloLens 1. It is also equipped with a multitude of sensors, speakers, and a camera. Just like the Google Glass enterprise edition, the HoloLens 2 is available for industrial use only.26 It will not be available to general consumers. The development edition also requires a monthly subscription fee of $99.27

Magic Leap is another MR headset.28 Unlike the HoloLens, it is connected to a small hip-mounted round computer that handles the primary data and graphics processing and comes with a controller. Its personal bundle version is sold for $2,295. Another MR device, Meta 2, is a headset tethered to a conventional computer. It was released in late 2016 to developers with a much lower price of $949, but is no longer produced because the company shut down in early 2019.29

The examples and new developments in VR, AR, and MR technologies described above show that, while still at an early stage, the adoption of XR has begun in a variety of areas including education, health care, manufacturing, aviation, engineering, shopping, and even search, blurring the lines between the physical and the digital. VR is becoming more and more common in entertainment and gaming. In the world of advanced MR, interacting with digital and physical objects would be nearly indistinguishable.

The early development of the AR Cloud, a real-time machine-readable three-dimensional map of the world, is also in progress.30 The AR Cloud is to serve as a kind of shared spatial screen that enables multiuser engagement and collaboration in the AR environment. It is thought to be an important future software infrastructure in computing comparable to Google’s PageRank index and Facebook’s social graph.31 By combining the digital and the physical world in a seamless manner, XR has the potential to transform people’s activities both online and offline into something completely new. It may be a while before compelling XR applications and experiences become widely available, but today’s XR is certainly moving beyond the stage of experimental prototyping.32

Currently, most libraries are focusing on providing VR equipment and space, so that library users can experience VR firsthand.33 VR equipment and spaces are often placed in library makerspaces, but some academic libraries have a separate immersive VR environment as well as spaces and equipment optimized for visualization work that facilitate and enhance the learning, teaching, and research experiences of their students and faculty.34 While most libraries that have adopted VR and AR currently allow users to experience commercially available VR or AR content, some libraries may begin to create their own VR or AR content in the future. When that happens, we may see library-specific VR and AR applications that enable library patrons to interact with the physical library environment for specific events, such as a summer reading challenge or a library scavenger hunt.

Big Data and the Internet of Things

Big Data

Another technology trend that is blurring the lines between the physical, digital, and biological spheres is Big Data and the Internet of Things (IoT). According to a report by Watson Marketing, approximately 2.5 exabytes (EB) of data are currently being created every day.35 More than 17 billion connected devices are in use worldwide, and 7 billion of them are IoT devices.36 International Data Corporation estimates that the number of those IoT devices will increase to 41.6 billion by 2025, which, in turn, will generate 79.4 zettabytes (ZB) of data.37

Big Data is often characterized by 3 Vs: high-volume, high-velocity, and high-variety. Here, high-volume refers to the scale of petabytes, exabytes, zettabytes, and yottabytes.38 An example of high-velocity is Twitter, an Internet service whose data is created by its users. Every second, an average of about 6,000 tweets are posted, amounting to more than 350,000 tweets per minute and 500 million tweets per day.39 That is a lot of data generated in just one day. Big Data is also high-variety, meaning data of many different types, such as text, audio, video, and financial transactions, that originate from a variety of sources, including electronic health record systems, global positioning systems (GPS), fitness trackers, set-top cable boxes, social media, emails, and various kinds of self-reporting sensors.
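The per-minute and per-day figures above follow directly from the per-second rate. A quick back-of-the-envelope check, assuming a steady average of 6,000 tweets per second:

```python
# Back-of-the-envelope check of the tweet-volume figures,
# assuming a constant average rate of 6,000 tweets per second.
TWEETS_PER_SECOND = 6_000

per_minute = TWEETS_PER_SECOND * 60          # 360,000
per_day = TWEETS_PER_SECOND * 60 * 60 * 24   # 518,400,000

print(f"per minute: {per_minute:,}")  # → per minute: 360,000
print(f"per day:    {per_day:,}")     # → per day:    518,400,000
```

So the cited "more than 350,000 per minute" and "500 million per day" are both consistent with the 6,000-per-second average.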

Big Data isn’t about data alone, however. No matter how much data one accumulates, that data would have no value unless it is analyzed to bring new insight. For this reason, Big Data is defined as “high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”40 The tools and technologies for storing, retrieving, and analyzing today’s high-volume, high-velocity, and high-variety data are an indispensable component of the Big Data trend. As Dale Neef, author of the book Digital Exhaust, wrote, what makes Big Data different from just more data is the ability to apply sophisticated algorithms and powerful computers to large data sets to reveal correlations and insights previously inaccessible through conventional data warehousing or business intelligence tools.41 To an organization, tapping into Big Data means capturing and collecting both human- and machine-generated data related to its activities; analyzing such data to identify correlations and patterns to discover new insights; and utilizing those correlations, patterns, and insights to benefit the organization.

The Internet of Things

The Internet of Things (IoT) is an important contributor to Big Data because it generates a large volume of machine-to-machine data. Simply put, IoT is the network of uniquely identifiable things—that is, objects virtually represented on the Internet. IoT consists of sensors and actuators embedded in physical objects connected to the Internet.42 The network of those sensors and systems captures, reports, and communicates data about their environments as well as their own performances and interacts with those environments. A smartwatch, a smart thermostat, a Fitbit, and an Amazon Alexa are all examples of IoT devices.

Depending on their requirements, IoT devices fall into two categories: critical IoT and massive IoT. Critical IoT refers to sensor networks and systems that relate to critical infrastructure at a corporate or national level; it includes devices that require high network availability and low latency. On-board controls for an autonomous vehicle and the national energy and utility infrastructure are examples of such critical IoT.43 Massive IoT, on the other hand, refers to systems and applications with a very large number of devices equipped with sensors and actuators, which send data to a cloud-based platform on the Internet. Those devices are less latency-sensitive and require low energy to operate. Wearables (e-health), asset tracking (logistics), smart city and smart home, environmental monitoring and smart metering (smart building), and smart manufacturing (monitoring, tracking, digital twins) are the areas where such massive IoT applications can be developed and deployed.44

Radio frequency identification (RFID) systems have long been viewed as a prerequisite for the IoT because they allow machines to identify and control things in the real world. In an RFID system, an object with an RFID tag can be identified, tracked, and monitored by the RFID reader. The activities of RFID tags and readers are initiated by an RFID application, which collects and processes data from RFID tags. An RFID system creates digital representations of physical objects, and as a result, it is a good example of an IoT system.

An IoT system usually has three layers: the perception layer, the network layer, and the service layer (or application layer).45 In the perception layer, information about the physical world is captured and collected by sensors, wireless sensor networks, tags and reader-writers, RFID systems, cameras, GPS, and so on. The network layer provides data transmission capability. The service layer, also known as the application layer, processes complex data through restructuring, cleaning, and combining; provides services such as facility management and geomatics; and transforms information to content for enterprise application and end users in areas such as logistics and supply, disaster warning, environmental monitoring, agricultural management, production management, and so forth.46
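The three layers described above can be sketched in a few lines of code. The toy functions below are purely illustrative (none of these names come from a real IoT framework): the perception layer senses, the network layer serializes and transmits, and the service layer cleans and aggregates the readings for an application.

```python
# Toy sketch of the three-layer IoT architecture: perception,
# network, and service (application). All names are illustrative.
import json

def perception_layer():
    """Sense the physical world (here, fake temperature readings)."""
    return [{"sensor": "temp-1", "celsius": 21.5},
            {"sensor": "temp-2", "celsius": None},   # faulty reading
            {"sensor": "temp-1", "celsius": 22.0}]

def network_layer(readings):
    """Transmit data, e.g. serialized as JSON over the network."""
    return json.dumps(readings)

def service_layer(payload):
    """Clean, combine, and summarize data for end-user applications."""
    readings = [r for r in json.loads(payload) if r["celsius"] is not None]
    avg = sum(r["celsius"] for r in readings) / len(readings)
    return {"count": len(readings), "avg_celsius": round(avg, 2)}

summary = service_layer(network_layer(perception_layer()))
print(summary)  # → {'count': 2, 'avg_celsius': 21.75}
```

Note how the faulty reading is dropped in the service layer, which mirrors the restructuring and cleaning role that layer plays in a real deployment.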

The Impact of Big Data and the Internet of Things

How will Big Data and the IoT change our lives? It is likely that the IoT infrastructure will be built slowly over many years. But the fully realized IoT can eventually connect all physical objects in the real world and allow us to detect, track, and control them digitally through their online representations. Furthermore, those connected physical objects will be able to communicate with one another to perform more sophisticated and complex tasks based upon the information received. This type of machine-to-machine communication and cooperation will significantly increase the degree of automation in the real world. In such a world, a smart refrigerator can alert you to buy milk when you run out or even place an order with your predetermined grocery store, so that you can pick it up. A home entertainment system will automatically purchase movies that you would enjoy based upon your preferences and play them for you. Even energy grids will be kept at their optimal states thanks to a large amount of detailed sensor data collected, analyzed, and promptly acted upon.

The more physical objects are brought into the IoT, the more digital data they will generate. The Big Data phenomenon is likely to continue since such massive amounts of data need to be collected, stored, retrieved, analyzed, and acted upon on an ongoing basis. The quickly advancing Big Data tools and technologies related to data storage, retrieval, and analytics will help make the IoT infrastructure of the world more robust and complete.

Today’s IoT adoption and utilization are not yet close to this full realization. But many researchers in library and information science have proposed a variety of IoT applications for libraries. Those proposals include location information and services inside a library, a system for managing study room seating and library resource utilization, an intelligent energy-saving lighting control system, a library data resource object model and the process of library personalized information service management, and a library noise information storage system.47 But in the present at least, the most common type of IoT technology utilized at libraries is RFID. Many libraries now attach RFID tags to the materials in their collections. This allows them to implement self-checkout and to automate tasks such as shelf-reading, inventory, and handling of materials upon their return. RFID tags can, however, be used for purposes beyond inventory management.48

Along with the Big Data trend, academic libraries have been adding data-related services and support as part of research support. As government funding agencies, such as the NIH and NSF, have mandated data management plans in grant applications and public access to data from federally funded research projects, libraries started helping researchers with research data management plans and educating them about the need to make research data findable, accessible, interoperable, and reusable (FAIR).49 Many libraries also operate their own data repositories and provide data storage and archiving. Data services librarians assist faculty, students, and researchers with identifying relevant data sets for their projects, advise on appropriate data management practices, and perform tasks such as data set acquisition, data curation and dissemination, and data-related consultation and instructional support.50

As the Big Data and IoT trends mature, libraries and librarians will be asked to play a larger role in developing a variety of data-related support, services, programs, and other educational offerings, systems, and applications. Libraries may be asked to take on managing, storing, and preserving massive real-time data sets.51 There will be an increasing demand for library professionals who are knowledgeable and skilled in data analytics. As more sensors and smart things are introduced to and integrated with the library in both its services and operation, innovative new ways to serve library patrons and to achieve a higher level of operational efficiency are also likely to emerge.52

It is easy to see how the IoT blurs the lines between the physical and the digital. The IoT aims to create a digital layer over our physical world. In the mature stage of the IoT, things in the world will be digital as much as physical. Fully connected to the Internet and to one another, smart things will continuously engage in machine-to-machine communication and cooperation. This will enable them to operate much more intelligently, thereby reducing the need for human control or intervention. Naturally, all such smart objects, which would be basically everything in the world when the IoT is fully realized, will generate a massive amount of data. The infrastructure and the networking capability to capture, process, and store such a massive amount of data will be critical. This is how the IoT will accelerate the Big Data trend, and the massive amount of data from IoT devices will in turn fuel future developments in artificial intelligence (AI), and machine learning in particular, where massive data sets are required to train algorithms.

Synthetic Biology and 3-D Bio-Printing

So far, we have seen how extended reality and the Internet of Things blur the lines between the physical and the digital. Now, let’s take a look at how biological processes are being transformed to be more digital with genetic circuits and biological parts.53

Synthetic Biology

Today’s digital computer is an electronic device that stores, retrieves, and processes data. The data processing takes place in the CPU (central processing unit), a microchip usually made of silicon. A computer program is a set of instructions for the computer hardware to perform particular operations. These operations all boil down to manipulating bits, the smallest unit of digital data in a computer—namely 0s and 1s. Synthetic biologists are interested in making a biochemical process, such as DNA/gene synthesis and the creation of proteins, more akin to computer programming.

Synthetic biology studies how to program cells using synthetic genes. With that, synthetic biologists want to make biological parts, devices, sensors, and chemical factories, which in turn can be used to build pharmaceuticals, renewable chemicals, biofuels, and food. They view a ribosome, which creates proteins in a cell, as a molecular machine. Ribosomes read a set of synthetic genes, in which the amino acid sequences of a protein are encoded. The genes give ribosomes the instructions for how to build proteins. In that sense, genes and ribosomes are analogous to programs and a machine that together produce an output. Cells, where ribosomes reside, can be regarded as tiny factories equipped with molecular machinery that produces chemicals.
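The program-and-machine analogy can be made concrete in a few lines of code. In the toy sketch below, a small (and deliberately incomplete) codon table plays the role of the instruction set, and a `ribosome` function reads a DNA coding sequence three bases at a time, emitting the encoded protein as one-letter amino acid codes. The table and function names are illustrative, not from any bioinformatics library.

```python
# The gene-as-program analogy in miniature: codons are the
# "instructions," and the ribosome is the "machine" that executes
# them, producing a protein as output. The codon table here is a
# tiny subset of the real 64-codon genetic code.
CODON_TABLE = {
    "ATG": "M",  # methionine (start)
    "GCT": "A",  # alanine
    "AAA": "K",  # lysine
    "GGC": "G",  # glycine
    "TAA": "*",  # stop
}

def ribosome(dna):
    """Translate a coding DNA sequence codon by codon until a stop."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":   # stop codon: release the protein
            break
        protein.append(amino_acid)
    return "".join(protein)

print(ribosome("ATGGCTAAAGGCTAA"))  # → MAKG
```

Changing the "program" (the DNA sequence) changes the output protein, which is exactly the sense in which synthetic biologists speak of programming cells.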

The first synthetic life form, JCVI-syn1.0, was created in 2010 by an American biotechnologist, J. Craig Venter, and his team.54 The DNA code of the replica of the cattle bacterium Mycoplasma mycoides was written on a computer, assembled in a test tube, and inserted into the hollowed-out shell of a different bacterium. The genome assembly process required stitching together eleven 100,000 base-pair DNA segments into a complete synthetic genome and propagating it as a single yeast artificial chromosome.55 The synthetic genome then encoded all the proteins required for life, which means the DNA “software” built its own “hardware.” This process of converting a digitized DNA sequence stored in a computer file into a living entity capable of growth and self-replication cost roughly $40 million and countless worker-hours. In 2019, a team of scientists at the Medical Research Council Laboratory of Molecular Biology, a research institute in Britain, succeeded in synthesizing the complete genome of E. coli, named Syn61. JCVI-syn1.0 had a total of approximately 1 million base pairs (1079 kb). Syn61 has a total of 4 million base pairs (4 Mb) of synthetic DNA sequence swapped into the native chromosome.56 This is the largest synthetic genome created to date.

The speed and the cost of DNA sequencing and DNA synthesis are important factors in taking synthetic biology to a larger scale. Sequencing DNA allows researchers to, so to speak, read the instructions of how to construct a biological part, which is a building block of life. In turn, DNA synthesis enables them to write new genetic information by replicating, modifying, and creating genes. These are the most basic steps in synthetic biology. But DNA sequencing and synthesis are time-consuming and expensive.

In digital computing, Moore’s Law—that the number of transistors on integrated circuits doubles about every two years while the cost halves—has been shown to be valid. This phenomenon has drastically lowered the cost of computing over the years. Some synthetic biologists are now hoping for a similar trend to surface in DNA sequencing and DNA synthesis.57 While it remains to be seen if this hope will be realized in the near future, the ability to quickly read and write DNA at a lower cost will make it possible to identify and catalog standardized genomic parts. Those biological parts will be used and synthesized to quickly build novel biological systems, redesign existing biological parts and expand the set of natural protein functions for new processes, engineer microbes to produce enzymes and biological functions required to manufacture natural products, and go as far as designing and constructing a simple genome for a natural bacterium.58
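The hoped-for cost curve is simple exponential decay. As a sketch (the starting price and time frame below are arbitrary, not real sequencing prices): if a cost halves every two years, then after n years it falls by a factor of 0.5 raised to n/2.

```python
# A simple model of a Moore's-Law-like cost curve: the cost halves
# every `halving_period` years, so after `years` years it is
# initial_cost * 0.5 ** (years / halving_period).
def projected_cost(initial_cost, years, halving_period=2):
    """Projected cost after `years` under steady exponential decline."""
    return initial_cost * 0.5 ** (years / halving_period)

# Starting from an arbitrary $100 per unit, after a decade of
# halving every two years (five halvings):
print(projected_cost(100, 10))  # → 3.125
```

Five halvings in a decade already cut the cost by a factor of 32, which is why even a rough Moore's-Law-like trend would transform the economics of reading and writing DNA.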

The drop in the cost of DNA sequencing and DNA synthesis will facilitate and accelerate developments in synthetic biology, such as the manipulation of organisms into bio-factories for producing biofuels, the uptake of hazardous material in the environment, and the creation of biological circuits.59 Since microorganisms are small and require only a small amount of energy to operate, the ability to program cells and biological processes to produce specific outputs with precision will usher in a truly new era of manufacturing.

3-D Bio-Printing

Synthetic biology is not limited to synthesizing DNA molecules and proteins. Today’s researchers are using novel bio-printing technology to build whole cells, tissues, and even organs. This brings biology even closer to the digital realm. In 2016, regenerative medicine scientists at Wake Forest Baptist Medical Center succeeded in printing living tissue structures using a specialized 3-D bio-printer. Researchers were able to bio-print ear, bone, and muscle structures that further matured into functional tissue, which developed a system of blood vessels when implanted in animals.60 This means that in the future, patients with injured or diseased tissues can receive new living tissue structures that would replace the injured or diseased ones.

The way bio-printing works is not drastically different from the way a common 3-D printer works. Bio-printing is an additive manufacturing technology of a physical 3-D object. As such, it creates a three-dimensional object layer by layer. However, a bio-printer uses bio-ink, which is organic living material, while a common 3-D printer uses a thermoplastic filament or resin as its main material. Bio-ink is a combination of living cells and a compatible base, like collagen, gelatin, hyaluronan, silk, alginate, or nanocellulose. This base is a carrier material that envelops the cells. It provides nutrients for cells and serves as a 3-D molecular scaffold on which cells grow.61

Bio-printing can be done by different methods, such as extrusion, ink jet, acoustic, or laser. But regardless of the specific method used, a typical bio-printing process goes through the common steps of 3-D imaging, 3-D modeling, bio-ink preparation, printing, and solidification.62 3-D imaging uses the exact measurements of the tissues supplied by a CT or MRI scan. Based upon this information, a blueprint is created, which includes the layer-by-layer instructions for the bio-printer. Suitable bio-ink is prepared next. After that, this material is deposited layer by layer by the bio-printer and goes through the solidification process, producing functional tissue or even an organ. Researchers are currently working on ways to bio-print a human heart, kidney, and liver. In 2018, scientists at Newcastle University bio-printed the first human cornea.63

Synthetic biology’s vision to repurpose living cells as substrates for general computation has so far manifested itself in genetic circuit designs that attempt to implement Boolean logic gates, digital memory, oscillators, and other circuits from electrical engineering.64 Biological circuits and parts are not yet sufficiently modular or scalable. Nevertheless, synthetic biology holds a key to the potential future in which electronics and biology become fungible and matter becomes programmable.65 When this happens, the function of a mechanical sensor, for example, may be performed by bacteria, and those bacteria may function in connection with electronics and computers. In such a future, living organisms and nonorganic matter will interface and interact with each other seamlessly. One day, we may well use living organisms to produce materials, and living organisms may serve as an interface for everyday electronics. When developments in the areas of computational design, additive manufacturing, materials engineering, and synthetic biology are combined, the result will truly blur the line between the physical, the digital, and the biological.

DIYbio, Citizen Science, and Libraries

Synthetic biology has inspired the citizen science and DIYbio movements, which have given rise to many local DIYbio communities and biohackerspaces. At biohackerspaces, members of the public can learn about biotechnology and pursue biotechnological solutions to everyday problems without being professional scientists or affiliated with a formal wet lab.

The DIYbio movement refers to the new trend of individuals and communities studying molecular and synthetic biology and biotechnology without being formally affiliated with an academic or corporate institution.66 DIYbio enthusiasts pursue hobbyist biology projects, some of which may solve serious local or global problems. Examples include testing milk for melamine contamination and developing an affordable handheld thermal cycler that rapidly replicates DNA for use as an inexpensive diagnostic. A biohackerspace is a community laboratory, open to the public, where people are encouraged to learn about and experiment with biotechnology. A biohackerspace provides people with tools that are usually not available at home but are often found in a wet lab, such as microscopes, Petri dishes, freezers, and PCR (polymerase chain reaction) machines, which amplify a segment of DNA and create many copies of a particular DNA sequence.67 Currently, the DIYbio website lists more than a hundred such DIYbio communities and biohackerspaces.68 A biohackerspace democratizes access to biotechnology equipment and space and enables users to share their findings with others. In this regard, a biohackerspace is comparable to a makerspace and to the open-source movement in computer programming.
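The PCR machines mentioned above amplify DNA through repeated thermal cycles, each of which ideally doubles the number of copies of the target segment. A minimal sketch of this exponential growth, assuming idealized per-cycle doubling (the function name and efficiency parameter are illustrative):

```python
# PCR amplification: each thermal cycle ideally doubles the target DNA
# segment, so the copy number grows exponentially. Real reactions run below
# 100% efficiency; the efficiency parameter models that as an assumption.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Expected copy number after a given number of PCR cycles."""
    return initial_copies * (1.0 + efficiency) ** cycles

# From a single template molecule, 30 ideal cycles yield over a billion copies.
print(pcr_copies(1, 30))  # 1073741824.0

# A more realistic 90%-efficient reaction still amplifies enormously,
# just to a lower copy number than the idealized case.
print(round(pcr_copies(1, 30, efficiency=0.9)))
```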

A biohackerspace that involves chemicals and biological matter is not something that existing libraries can adopt as easily as a makerspace. However, libraries can work together with local DIYbio communities and biohackerspaces to advocate for scientific literacy and educate the public. Libraries can also partner with local DIYbio communities and biohackerspaces to host talks about biotechnology or to promote hands-on workshops where people can have the experience of doing science by participating in a project, such as building a gene.69 A library reading collection focused on biohacking could be introduced to interested patrons. Libraries can contribute their expertise in grant writing or donate old computing equipment to biohackerspaces. Librarians can also offer their expertise in digital publishing and archiving to help biohackerspaces publish and archive their project outcomes and research findings. These are all relatively untapped areas for libraries, yet they hold great potential to raise the level of overall science literacy in the communities that libraries serve.


  1. “What Is XR?” Raconteur, accessed September 15, 2019,
  2. Bernard Marr, “What Is Extended Reality Technology? A Simple Explanation for Anyone,” Forbes, August 12, 2019,
  3. “Oculus Rift S,” accessed September 15, 2019; “VIVE,” accessed September 15, 2019,
  4. “Windows Mixed Reality,” Microsoft, accessed September 16, 2019,
  5. Regarding mixed reality (MR) devices and what mixed reality means in contrast to virtual reality, see the later section in this chapter about MR. MR is a combination of virtual reality and augmented reality. Unlike VR, MR allows one to view and interact with the real physical world and digital objects by mixing them together.
  6. See Sophie Morlin-Yron, “Students Swim with Sharks, Explore Space, through VR,” CNN, September 19, 2017; Jiabei Lei, “Adventures Abound: Explore Google Expeditions on Your Own,” Keyword (blog), Google, July 19, 2017. For Google Expeditions and DiscoveryVR, see Google, “Expeditions,” Google Play, accessed September 16, 2019; “Immersive Experiences from Discovery Education,” Discovery Education UK, accessed September 16, 2019,
  7. For more information regarding the use of VR for 3-D modeling, see Bohyun Kim, “Virtual Reality for 3D Modeling,” in Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library, ed. Kenneth J. Varnum (Chicago: ALA Editions, 2019), 31–46.
  8. “MakeVR Pro,” Viveport, accessed September 16, 2019; “Medium,” Oculus, accessed September 16, 2019; “ShapelabVR,” accessed December 7, 2019; “FAQ,” MasterpieceVR, accessed September 16, 2019; “Gravity Sketch,” Steam, accessed September 16, 2019; “Blocks,” Google, accessed September 16, 2019,
  9. BMW Group, “BMW Opts to Incorporate HTC Vive VR Headsets and Mixed Reality into the Development of New Vehicle Models: Computer Images Instead of Laboriously Constructed Draft Models: Greater Flexibility, Faster Results and Lower Costs,” news release, July 4, 2016,
  10. Keith Stuart, “Alone Together: My Weird Morning in a Virtual Reality Chatroom,” Guardian, March 24, 2016, sec. Technology,
  11. David Lumb, “Mozilla’s ‘Hubs’ Is a VR Chatroom for Every Headset and Browser,” Engadget, April 26, 2018,
  12. Mike Minotti, “Pokémon Go Passes $1.2 Billion in Revenue and 752 Million Downloads,” VentureBeat, June 30, 2017,
  13. Vlad Savov, “Google Glass Gets a Second Chance in Factories, Where It’s Likely to Remain,” Verge, July 18, 2017,
  14. Tasnim Shamma, “Google Glass Didn’t Disappear: You Can Find It on the Factory Floor,” All Tech Considered, NPR, March 18, 2017,
  15. See Scott Stein, “Google Glass Gets a Surprise Upgrade and New Frames,” CNET, May 20, 2019; Kelly Hodgkins, “Google’s New $999 Augmented Reality Smartglasses Are Ready for Business,” Digital Trends, May 20, 2019,
  16. zSpace home page, accessed November 9, 2019,
  17. Husain Sumra, “The Best Augmented Reality Glasses 2019: Snap, Vuzix, Microsoft, North and More,” Wareable, March 5, 2019,
  18. Sumra, “Best Augmented Reality Glasses.”
  19. Scott Stein, “Google Brings AR and Lens Closer to the Future of Search,” CNET, May 7, 2019,
  20. Mike Senese, “NASA Shapes the Future of Space Design and Exploration with Its Mixed Reality Program,” Make:, July 19, 2016,
  21. “HoloLens 2,” Microsoft, accessed November 9, 2019,
  22. Scopis home page, accessed September 16, 2019,
  23. “HoloLens 2.”
  24. Parker Wilhelm, “Microsoft HoloLens Might One Day Assist in Spine Surgeries,” TechRadar, May 5, 2017,
  25. Adi Robertson, “The Microsoft HoloLens 2 Ships Today for $3,500,” Verge, November 7, 2019,
  26. Dieter Bohn, “Microsoft’s HoloLens 2: A $3,500 Mixed Reality Headset for the Factory, Not the Living Room,” Verge, February 24, 2019,
  27. “HoloLens 2 Pricing and Options,” Microsoft, accessed September 16, 2019,
  28. Magic Leap home page, accessed November 9, 2019,
  29. Adi Robertson, “Meta’s Augmented Reality Headset Is Getting Rebooted at a New Company,” Verge, May 28, 2019,
  30. Open AR Cloud home page, accessed September 16, 2019,
  31. Charlie Fink, “The Search Engine of AR,” Forbes, January 3, 2018,
  32. Deloitte Insights, Tech Trends 2018: The Symphonic Enterprise (Deloitte Development, 2018),
  33. For a variety of examples of libraries providing VR and AR experience for their users with equipment, space, and programs, see Kenneth J. Varnum, ed., Beyond Reality: Augmented, Virtual, and Mixed Reality in the Library (Chicago: ALA Editions, 2019).
  34. The North Carolina State University Libraries provide an extensive array of services and spaces in virtual and augmented reality as well as in visualization. See “Virtual Reality and Augmented Reality,” NC State University Libraries, accessed November 4, 2019; “Visualization,” NC State University Libraries, accessed November 4, 2019. Georgia State University Library also has a space called “CURVE,” which provides support for both VR and visualization activities. See “CURVE,” Georgia State University Library, accessed November 4, 2019,
  35. 1 EB equals 1 quintillion bytes. Watson Marketing, 10 Key Marketing Trends for 2017 and Ideas for Exceeding Customer Expectations, white paper (IBM, 2017), accessed October 17, 2019, (page discontinued).
  36. Knud Lasse Lueth, “State of the IoT 2018: Number of IoT Devices Now at 7B—Market Accelerating,” IOT Analytics (blog), August 8, 2018,
  37. “The Growth in Connected IoT Devices Is Expected to Generate 79.4ZB of Data in 2025, According to a New IDC Forecast,” International Data Corporation, June 18, 2019,
  38. A petabyte (PB) is approximately a million gigabytes (GB). An exabyte (EB) is approximately a thousand PBs, a zettabyte (ZB) a thousand EBs and a trillion GBs, and a yottabyte a thousand ZBs.
  39. “Twitter Usage Statistics,” Internet Live Stats, accessed October 15, 2019,
  40. “Big Data,” Information Technology, Gartner Glossary, accessed October 15, 2019,
  41. Dale Neef, Digital Exhaust: What Everyone Should Know about Big Data, Digitization and Digitally Driven Innovation (Upper Saddle River, NJ: FT Press, 2014), 14.
  42. Mark Skilton and Felix Hovsepian, The Fourth Industrial Revolution: Responding to the Impact of Artificial Intelligence on Business (Cham, Switzerland: Palgrave Macmillan, 2018), 11.
  43. Skilton and Hovsepian, Fourth Industrial Revolution, 11.
  44. Claes Lundqvist, Ari Keränen, Ben Smeets, John Fornehed, Carlos R. B. Azevedo, and Peter von Wrycz, “Massive IoT Devices: Key Technology Choices,” Ericsson Technology Review, January 9, 2019,
  45. Xiaolin Jia, Quanyuan Feng, Taihua Fan, and Quanshui Lei, “RFID Technology and Its Applications in Internet of Things (IoT),” in 2012 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet), IEEE Proceedings (Piscataway, NJ: IEEE, 2012), 1282–85,
  46. Jia et al., “RFID Technology.”
  47. For such proposed applications of the IoT to libraries, see Xueling Liang, “Internet of Things and Its Applications in Libraries: A Literature Review,” Library Hi Tech, August 22, 2018,
  48. Andrew Walsh, “Blurring the Boundaries between Our Physical and Electronic Libraries,” Electronic Library 29, no. 4 (2011): 429–37,
  49. For more information about the FAIR principles for data, see “The FAIR Data Principles,” FORCE11, accessed October 29, 2019,
  50. See Elaine Martin, “What Do Data Services Librarians Do?” Journal of EScience Librarianship 1, no. 3 (March 2013): e1038; Maureen “Molly” Knapp, “Big Data,” Journal of Electronic Resources in Medical Libraries 10, no. 4 (2013): 215–22; Daniel Goldberg, Miriam Olivares, Zhongxia Li, and Andrew G. Klein, “Maps and GIS Data Libraries in the Era of Big Data and Cloud Computing,” Journal of Map and Geography Libraries 10, no. 1 (2014): 100–122,
  51. For example, the Library of Congress collected all public tweets from 2010 to 2017. See Gayle Osterberg, “Update on the Twitter Archive at the Library of Congress,” Library of Congress Blog, December 26, 2017,
  52. One such example is a library’s potential role in building critical data capabilities in local communities. See John Carlo Bertot, Brian S. Butler, and Diane M. Travis, “Local Big Data: The Role of Libraries in Building Community Data Infrastructures,” in Proceedings of the 15th Annual International Conference on Digital Government Research (New York: ACM, 2014), 17–23. More ideas have also been proposed in Liz Lyon, “The Informatics Transform: Re-engineering Libraries for the Data Decade,” International Journal of Digital Curation 7, no. 1 (2012): 126–38,
  53. Joe Jacobson, “Building a Fab for Synthetic Biology: Joe Jacobson Keynote,” Solid Conference 2015, YouTube video, 13:11, posted by O’Reilly, June 29, 2015,
  54. Roy D. Sleator, “The Story of Mycoplasma Mycoides JCVI-Syn1.0,” Bioengineered Bugs 1, no. 4 (2010): 229–30; Daniel G. Gibson, John I. Glass, Carole Lartigue, Vladimir N. Noskov, Ray-Yuan Chuang, Mikkel A. Algire, Gwynedd A. Benders, et al., “Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome,” Science 329, no. 5987 (July 2010): 52–56,
  55. Each DNA molecule consists of two strands that wind around each other like a twisted ladder, widely known as a “double helix.” A base pair is two chemical bases bonded to one another forming a rung of the DNA ladder. There are four bases/nucleotides present in DNA: adenine (A), cytosine (C), guanine (G), and thymine (T). See “Base Pair,” National Human Genome Research Institute, accessed October 20, 2019,
  56. Kostas Vavitsas, “Synthetic E. coli Pushes the Limits of Gene Synthesis,” PLOS Synthetic Biology Community (blog), May 22, 2019; Julius Fredens, Kaihang Wang, Daniel de la Torre, Louise F. H. Funke, Wesley E. Robertson, Yonka Christova, Tiongsun Chia, et al., “Total Synthesis of Escherichia coli with a Recoded Genome,” Nature 569, no. 7757 (May 2019): 514–18,
  57. See “Applying Moore’s Law to Gene Synthesis,” Synthetic Technologies, accessed October 19, 2019; Emily Leproust, “Beyond the $1K Genome: DNA ‘Writing’ Comes Next,” TechCrunch, September 18, 2015,
  58. “Synthetic Biology Explained,” BIO, accessed October 19, 2019,
  59. “Help: Synthetic Biology,” Registry of Standard Biological Parts, accessed October 19, 2019,
  60. Wake Forest Baptist Medical Center, “Scientists Prove Feasibility of ‘Printing’ Replacement Tissue,” news release, February 15, 2016,
  61. Ricardo Pires, “What Exactly Is Bioink?—Simply Explained,” All3DP, November 26, 2018,
  62. Farai Mashambanhaka, “What Is 3D Bioprinting?—Simply Explained,” All3DP, November 28, 2018,
  63. Newcastle University Press Office, “First 3D Printed Human Corneas,” news release, May 30, 2018,
  64. “Towards Programmable Biology (toProB),” satellite workshop at European Conference on Artificial Life, York, UK, July 20–24, 2015,
  65. Joy Ito, “Why Bio Is the New Digital: Joy Ito Keynote,” Solid Conference 2015, YouTube video, 11:45, posted by O’Reilly, June 25, 2015,
  66. Ellen D. Jorgensen and Daniel Grushkin, “Engage with, Don’t Fear, Community Labs,” Nature Medicine 17, no. 4 (2011): 411,
  67. Bohyun Kim, “Biohackerspace, DIYbio, and Libraries,” ACRL TechConnect (blog), February 10, 2015,
  68. “Local Groups,” DIYbio, accessed October 19, 2019,
  69. At the workshop that I took in 2015 at BUGSS (Baltimore Underground Science Space), a biohackerspace in Baltimore, participants used template-less PCR (also called polymerase cycling assembly or assembly PCR) to assemble the oligonucleotides into the full-length Gene 68 of a virus called “mycobacteriophage” and amplified it. By the end of the full workshop, participants had synthesized this one gene (Gene 68) of the mycobacteriophage and combined it with the rest of the phage genome to create a semi-synthetic phage, which should be able to infect bacteria as a natural phage does. See Baltimore Underground Science Space, “BUGSS: Build-a-Gene 2015,” OpenWetWare, 2015,



Published by ALA TechSource, an imprint of the American Library Association.