Google I/O, that glistening moment when developers galore descend upon San Francisco to hear prognostications, occurred in mid-May. The keynote speech offered some insight into Google’s vision for the next decade. Admittedly, Google I/O 2017 is an exercise in marketing synergy and the willing suspension of disbelief. The keynote had the feel of equal parts TED talk, Home Shopping Network, dad jokes, and Nickelodeon’s “You Can’t Do That on Television,” with a soupçon of Svengali. But if you look past the hokey jokes and the corporate name-drops, there were some useful harbingers of our possible future.
And why look to Google I/O for futurecasting? As Sundar Pichai said, Google “uses technical insights to solve problems at scale for deep engagement.” If you can’t imagine the scale of Google, think of it this way: 1.2 billion images are uploaded to Google Photos every day. With about 35,000 museums in the United States, every collection in the country could be uploaded in a week or so.
Google is masterful at understanding first-world problems and addressing them. Much of the undertone of Google I/O was that Google is helping cure the stresses of the era (despite the fact that those stresses grew from tech like Google’s). With the power of scale, the company is poised to continue making civilization-wide changes. (Think I am being hyperbolic? Reflect on the diffusion of the phrase “Google it.”)
Overall, Google I/O was all about artificial intelligence, in which machines perform actions that once required human thinking. If the mobile era was about touch, AI is about sight and voice. AI is becoming more human in its meaning-making, particularly in its seamless understanding of visual and textual data. The shift to AI will bring major social changes. Think about the changes that came with mobile: when a new platform is introduced, people’s modes of interacting change until those practices become naturalized.
So, what practices will become natural for our future visitors?
- Computers will be able to read images as well as text: Google Lens will make machine reading of images increasingly sophisticated. For example, your phone will be able to “read” signs, turning pictures into text. In other words, images, not just text, will be understood and acted upon. What does this mean for museums? The answer is twofold. First, museums will have ever more robust tools to read images. Second, visitors will expect handheld technology to make sense of the world seamlessly. They will not want keyboards, QR codes, or any other barrier in the way of knowledge acquisition.
- You will talk to computers and they will talk back: Google Home now misunderstands spoken words only 4.9 percent of the time, down from 8 percent two years ago. Soon, a variety of tools will respond to voice commands. What does this mean for museums? Again, bye-bye keyboard commands. If you want to find the fiercest dino, you will expect to ask a technology tool and expect that tool to respond correctly in audio. (Unless you are in an art museum. In that case, it will tell you that, sadly, they’ve got no dinos.)
- Google Maps will go granular: The Visual Positioning Service (VPS) helps move AR forward. This tool was demonstrated in terms of shopping, where you will be able to locate any item on a mapped shelf in a warehouse store. What does this mean for museums? There are several possibilities. First, virtual collection record-keeping might change collections management and collections access. Next, think about gallery wayfinding. VPS can calculate a person’s position to within a few centimeters. Instead of using text to point people to a certain basket among 100 baskets, your tool will be able to point them to the exact basket that defines the genre.
- The artificial world will feel pretty real: Overall, computing experiences will feel more like the real world. Already, many students are enjoying AR. Google Expeditions is a classroom app through which students can experience coral reefs. What does this mean for museums? First, our visitors of the future will be raised with AR as part of their everyday experience. They may therefore expect it of museums, and if we implement AR, they will have high expectations. Alternately, they might choose museums precisely for their authenticity. I suspect the future of AR in museums will include both really good AR and deliberately AR-free experiences.
- Promiscuous content will be the norm: YouTube already fosters a culture of creative, iterative, and interrogative content. And other tools, like Google Seurat, are making it easier to render 3D scenes for VR. In other words, AR/VR will become ever easier to implement, even for citizen-technologists. Everyone will be doing it. What does this mean for museums? This, to me, is the most exciting point. Tools that were once in the hands of only a few are becoming available to many, ever more quickly. Visitors of the future will not only live in a milieu suffused with artificial intelligence but will also be creators of such content. In other words, imagine your crowdsourced Instagram beta project crossed with robots, Pokémon, or Jurassic Park. Alright, kidding. My point is that you can’t imagine a specific outcome of these technologies for our field. However, we should expect that technology will become ever more seamlessly human in its behavior, and our visitors will expect our tools to follow suit.
This is what struck me about Google I/O. What struck you? And where will that tech take our field in the future?