It's a busy few weeks for me.
Three weeks ago I attended a GDE Summit and Google Cloud Next.
Two weeks ago, I was honored as one of the Voice AI 100 at the 10th Project Voice conference.
Next week I will be attending my eleventh Google I/O - my tenth as a GDE.
More on all of these shortly, and on how they illustrate my past 15 years as a developer.
But first, I think about what last week marked. Because 13 years ago last week, I walked into Google's offices on the top floor of Chelsea Market in NYC and unboxed my first pair of Google Glass. (The first blue-framed Glass in NYC.) And that event has shaped me, and how I think about the role of personal computing, to this day.
Although I was already a GDE at that time, I became the first Glass GDE. I gave dozens of presentations about how to develop for Glass - and how we needed to think about developing for it. I got over my fear of public speaking, leaned into the experience of working with people, started wearing my trademark blue shirts, and met some amazing folks in art and engineering. I wrote a book about it, too.
Most of all, I began to digest what a post-smartphone interface would be like. When I spoke about Google Glass at Augmented World Expo NYC in 2014, I saw lots of demonstrations of goggles and AR popping off phone screens, and I didn't think that was it.
Instead, the message I advocated was that the AR and VR worlds had much to learn from our experiences developing for Glass - concepts such as "there when you need it, out of the way when you don't". I also said that the future of Glass had much to learn from AR and VR. Not in what they were showing, but in that our devices needed to be more contextual and understand the environment we were working in. Head-mounted wearables had a unique feature no other device did - they could "see" the same perspective we did without any action on our part.
At Google I/O in 2016, the first at Shoreline Amphitheater, a reporter saw I was wearing Glass and asked me what I thought about the keynote earlier that day. He expected me to talk about the new augmented reality platform that Google had announced. But I wasn't interested in that. I saw what I realized was truly the next generation of the Google Glass interface - Google Assistant and the Google Home.
Google Assistant, and the Voice First interfaces I was now helping people understand, let me refine the message I had delivered at AWE a couple of years earlier. Voice agents needed context to work, but they mostly remained silent partners until we asked them something. On Google Home devices, they were mostly passive, ubiquitous presences in the world we lived in.
The interface was also new. My message at the time was that, ever since digital computers first became available, we had to teach people how to use them - what holes to punch, what keys to press, how to use a mouse, or what swiping gestures were necessary on our phones.
For the first time, assistants like Google Assistant and Alexa were turning that around. Now we were teaching computers how to understand us. They weren't perfect, and there were still many lessons we needed to figure out, such as discovery and monetization, but these interfaces were taking bold new steps toward figuring out those answers.
Personally and professionally, this was a time when I continued to expand and grow. I didn't just give presentations; I collaborated on a weekly podcast, participated in the frequent Voice Lunch discussions, and held weekly office hours. When Glass was discontinued, that started my move into wearables in general, and then into becoming a Google Assistant GDE.
But as Google lost interest in Assistant, and Amazon struggled with the future of Alexa, I knew it was time for me to find the next generation of interfaces. As I started to explore the world of LLMs, I realized that they were taking many of the concepts we had in voice and bringing them to everyone - and to far more modalities than voice alone.
I became, briefly, an AI GDE as conversational interfaces started to take off. It was clear to me that the agents we were beginning to talk about were the evolution of the agents we had been discussing in the voice world. And it was no surprise that we were talking about "context windows" and how important context was to an LLM's ability to work with our queries.
It was also clear that, while text was the default modality for these conversations, it was just a stepping stone. Voice was a clear next step. Incorporating images was another. Perhaps we had learned some of the lessons I had been advocating for?
I was hopeful. At I/O 2022, a whole 10 years after Glass launched, Google was talking about using AI to "bridge the physical and digital worlds" - using the context of what you could see in front of you to help with your search queries.
"If only," I thought, "they had some... glasses... or something to make that easier."
We saw the first tease of that at I/O in 2024 in a demonstration of Project Astra, where glasses were able to answer questions about the context they were "seeing". At I/O 2025, it went two steps further - we were told this technology would be part of the forthcoming Android XR, and we could try on and test a prototype!
But there were many unanswered questions. Most importantly in my mind - how would developers tap into this interface? Glass and Assistant were notable because they were platforms, allowing developers to build on the new interface that was available. Would Android XR let us seamlessly ask Gemini a question and get it answered by our app, all through voice? Or would it force a clunky "launch" and a change in interaction model? Would we have a discovery model? Could we monetize our apps to pay for their development? Had we learned the lessons yet?
My conferences these past few weeks tell the tale of my quest to answer that final question. The GDE Summit let me connect with developers across different fields, cross-pollinating ideas and reminding me of the journey I started nearly 15 years ago. Cloud Next reminds me of the underlying workhorse that AI, LLMs, and agents are bringing to the table. Project Voice reminds me of the people who delivered that next-generation interface to millions of households, and of the small role I played in it.
And I/O?
That reminds me of the future. The next step.
Next week we will see if Google has truly learned the lessons from Glass and Assistant and AI. We'll see if they let us do ambient and ubiquitous computing in a whole new way. We'll learn, I hope, when these devices will be available for everyone. And, perhaps most importantly, we'll learn if they'll come with blue frames.
We'll hear and see next week. And I'll give voice to my thoughts then.
Acknowledgements
Along this journey, I've walked alongside many amazing people. Some pointed me in new directions. Some collaborated in shared understanding. Any list I give would be entirely inadequate, and likely missing a few who should be included, but I wanted to try to mention some. Google and the Google Developer Expert program, as a whole, who have provided great opportunities to attend many of these conferences. Jonathan Beri, who invited me to my first I/O in 2012. Jen Tong and Timothy Jordan, my mentors during the Glass years. Jessica Earley-Cha, one of my mentors during the Assistant years. Jason Salas, my co-author. Mark Tucker, my podcast co-host. Gerwin Sturm, Steven Gray, Linda Lawton, Denis Valasek, Noble Ackerson, and Mike Wolfson, my fellow GDEs who helped me explore these new worlds. And, most of all, my family - my parents who started me on this path with computers decades ago, and my child who keeps me grounded every day.


