Predicting the “digital superpowers” we could have by 2030

By Louis Rosenberg | Published: 2025-01-08 17:20:00 | Source: The Future – Big Think

It’s 2025, the year mainstream computing begins to shift from a race to build increasingly powerful tools to a race to deliver increasingly powerful abilities. The difference between a tool and an ability is subtle but profound. From the first hammers to the latest quantum computers, tools are external artifacts that help us humans overcome our organic limitations. Humanity’s ingenious tools have dramatically expanded the scope of what we can accomplish as individuals, teams, and civilizations.
Abilities are different. We experience abilities from a first-person perspective, as self-embodied skills that feel internal and instantly accessible to our conscious minds. For example, language and mathematics are human-made technologies that we install in our brains and carry with us throughout our lives, expanding our abilities to think, create, and collaborate. They are genuine superpowers, yet they feel so ingrained in our existence that we rarely think of them as technologies at all.
“Augmented mentality”
Unlike our verbal and mathematical superpowers, the next wave of superpowers will require some hardware, but we will still experience them as self-embodied skills that we carry with us throughout our lives. These abilities will emerge from the convergence of artificial intelligence, augmented reality, and conversational computing. They will be unleashed by context-aware AI agents loaded into body-worn devices that see what we see, hear what we hear, experience what we experience, and give us enhanced abilities to perceive and interpret our world. I refer to this new technology trend as the augmented mentality, and I predict that by 2030 most of us will live our lives with context-aware AI agents that bring digital superpowers into our daily experiences.
Most of these superpowers will be delivered through AI-powered glasses with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who don’t like glasses. For example, there will be earbuds with built-in cameras, a reasonable alternative if you don’t have long hair. We will whisper to these smart devices, and they will whisper back, giving us recommendations, directions, spatial reminders, directional cues, touch payments, and other verbal and perceptual content that will guide us through our days like an all-knowing alter ego.
How will our superpowers be revealed?
Consider this common scenario: you’re walking downtown and notice a store across the street. You wonder: what time does it open? So you grab your phone and type (or say) the name of the store. You quickly find the opening hours on its website and perhaps review other information about the store as well. That is the basic tool-use model of computing that prevails today.
Now, let’s look at how the big tech companies are moving toward a capability model of computing:
Stage 1: You wear AI-powered glasses that see what you see, hear what you hear, and process your surroundings through a large multimodal language model. Now, when you notice that store across the street, you simply whisper to yourself, “I wonder when it opens?” and a voice instantly rings in your ears: “10:30 AM.”
That may seem like only a slight shift from asking your phone to look up a store by name, but it will be profound. The reason is that the context-aware AI agent will share your personal reality. It won’t just track your location like GPS; it will see what you see, hear what you hear, and pay attention to what you pay attention to. This will make it feel far less like a tool and much more like an internal ability tied directly to your first-person experience.
Additionally, the interaction will not be one-way, with us always asking the AI agent for help. The agent will often be proactive, asking us questions based on the context of our world (listen to this fun audio example). And when an AI whispering in our ears asks us a question, we will often answer simply by nodding our heads yes or shaking them no. It will feel so natural and seamless that we may not consciously realize we have answered; it will feel like deliberating with ourselves.
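To make the stage-1 interaction concrete, the loop described above can be sketched as a simple pipeline. This is purely illustrative: the `Context` structure and `multimodal_answer` call are hypothetical stand-ins for a real device SDK and hosted multimodal model, not any actual product API.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A snapshot of the wearer's first-person reality."""
    image: bytes    # current camera frame from the glasses
    audio: str      # transcript of the wearer's whispered query
    location: tuple # (latitude, longitude) from the device's GPS

def multimodal_answer(ctx: Context) -> str:
    """Stand-in for a call to a large multimodal language model.

    A real agent would send the frame, transcript, and location to a
    hosted model; here we fake the store-hours example from the text.
    """
    if "when" in ctx.audio.lower() and "open" in ctx.audio.lower():
        return "10:30 AM"
    return "Sorry, I didn't catch that."

def agent_loop(ctx: Context) -> str:
    # 1. The glasses see what you see and hear what you hear (ctx).
    # 2. The model grounds its answer in that shared context.
    # 3. The reply is whispered back through the earpiece.
    return multimodal_answer(ctx)

reply = agent_loop(Context(image=b"<jpeg frame>",
                           audio="I wonder when it'll open?",
                           location=(40.7128, -74.0060)))
print(reply)  # → 10:30 AM
```

The key design point the sketch captures is that the query never names the store: the shared first-person context (the camera frame and location), not a typed search string, is what grounds the answer.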
Stage 2: By 2030, we will no longer need to whisper to the AI agents that travel through life with us. Instead, you will be able to simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that mouthing will be deployed because it is more private, more robust in noisy spaces, and, most importantly, will feel more personal, internal, and self-embodied.
Stage 3: By 2035, you may not even need to mouth the words. That is because the AI will learn to interpret the signals in our muscles with such precision that we will merely need to think about mouthing words to convey our intent. You will be able to focus your attention on any item or activity in your world, think something, and useful information will ring out from your AI glasses like an all-knowing alter ego in your head.
Of course, these abilities will go beyond simply wondering about the items and activities around you. That is because an onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a coworker approaches you down the hall and you can’t quite remember her name, the AI will sense your hesitation and whisper, “Jenny, from quantum computing.”
Or when you pick up a box of cereal in a store and are curious about the carbs, or wonder whether it’s cheaper at Walmart, the answers will ring in your ears or appear visually. These devices could even give you superpowers to assess the emotions on other people’s faces, predict their moods, goals, or intentions, and coach you during conversations in real time to make you more persuasive, appealing, or charismatic (see this video example).
As AI-powered glasses add mixed-reality features that blend virtual visual content seamlessly into our surroundings, these devices will give us outright superpowers, like x-ray vision. For example, a device with access to a digital model of your home could use it to let you look “through” walls and instantly find studs, pipes, or wires.
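As a toy illustration of that “x-ray vision” idea, the digital model of a home can be thought of as a lookup structure the glasses query at the point you are looking at. Everything here is hypothetical: the wall name, the data layout, and the `find_behind_wall` helper are invented for the sketch.

```python
# Hypothetical digital model of one wall: feature positions in inches
# from the wall's left edge (studs at standard 16-inch on-center spacing).
home_model = {
    "living_room_north_wall": {"studs_in": [0, 16, 32, 48, 64],
                               "pipes_in": [40]},
}

def find_behind_wall(wall: str, x_in: float, tolerance: float = 1.0):
    """Return hidden features within `tolerance` inches of gaze point x_in."""
    hits = []
    for kind, positions in home_model[wall].items():
        for p in positions:
            if abs(p - x_in) <= tolerance:
                hits.append((kind, p))
    return hits

# The wearer looks at a spot 32.5 inches along the wall:
print(find_behind_wall("living_room_north_wall", 32.5))
# → [('studs_in', 32)]
```

In a real system the gaze point would come from the headset’s tracking and the result would be rendered as an overlay; the sketch only shows the underlying lookup.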
I know some people will question my prediction of mass adoption by 2030, but I don’t make these claims lightly. Having focused on technologies that augment our reality and expand human abilities for over 30 years, I can say without a doubt that the mobile computing market is about to head in this direction in a very big way.
Over the past 12 months, two of the world’s most influential and innovative companies, Meta and Google, have revealed their goal of giving us superpowers. Meta took its first big step by adding context-aware AI to its Ray-Ban glasses and by showing off its Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now very well positioned to capitalize on its significant investments in AI and XR and become a major player in the mobile computing market, and it will likely do so by selling us superpowers we can’t resist.
Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless, context-aware content. It also announced a partnership with Samsung to bring new glasses and headsets to market. With more than a 70% share of the mobile operating system market and an increasingly strong AI presence in Gemini, Google is well positioned to become a leading provider of technology-enabled superpowers over the next 18 months.
But what about the risks?
In the famous words of the 1962 Spider-Man comic, “With great power comes great responsibility.” That wisdom applies literally here, since it is about superpowers. The difference is that the primary responsibility will fall not on the consumers who receive these technological powers, but on the companies that provide them and the regulators that oversee them.
After all, when wearing AI-powered augmented reality glasses, each of us could find ourselves living in a new reality in which technologies controlled by third parties can selectively alter what we see and hear, while AI-generated voices whisper consequential advice and guidance into our ears. While the intentions may be positive, the potential for abuse is equally profound.
To avoid the worst outcomes, my most important recommendation for both consumers and manufacturers is to adopt a subscription business model. If the arms race to sell superpowers is won by whichever company can provide the most amazing new abilities for a reasonable monthly fee, we will all benefit. If, instead, the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears, consumers could easily be manipulated throughout their daily lives.
I know some people find the concept of AI-powered glasses unsettling or even scary and can’t imagine wanting or needing such products. I understand the sentiment, but by 2030 the superpowers these devices give us will no longer seem optional; ultimately, going without them could put us at a social and cognitive disadvantage. It is now up to industry and regulators to ensure that these new abilities are rolled out in ways that are not intrusive, invasive, manipulative, or dangerous. That will require careful planning and oversight.





