It started like so many other things in my life do: with a slightly concerning obsession with sugar in its many forms.

While I was researching how to make the perfect gooey fudge brownies at home (my delivery bills pointing to a dissonance in the food-finance-fitness axis that my waistline is still ignoring), I decided to give Gemini a shot, describing what I wanted in the brownies and how to achieve that gooey but chewy consistency. ChatGPT is my usual recipe go-to, but I am catholic in my information sources, if not in religion.

And Gemini gave me a very nice recipe, throwing in casually at the end, “Since you’re lactose-sensitive, you might want to substitute the milk with oat milk.” How sweet, I thought. At least the AI assistant remembers, even if pushy relatives at parties with their kheer don’t. Also, I had mentioned the fact much earlier, so it was nice to see the memory window expanding.

But Gemini’s “Personal Intelligence” really and literally came home when I was hunting for the email address of the nodal officer of a telecom provider. While I was asking it to identify the TRAI regulations that the company’s inaction and erroneous billing broke, it did some “personalising” and “workspace connecting” to provide me with a “smoking gun” to help prove my point in the complaint, in the form of previous emails and bills.

It also combed through my Drive and Docs for any ancillary contracts and documents.

This is very, very cool. It is also deeply, quietly terrifying, if you stop to think about it.

It, of course, knows and remembers the field I’m in and my job experiences, my browsing habits, my medicines and supplement stack, my attempts at designing my websites and my pivot to professional services. It knows my likes and dislikes, the choices I’ll make, the decisions I won’t.

It’s not the only one.

ChatGPT is my mirror image, as it surely is yours. Claude is, well, I still haven’t made up my mind about Claude. And Perplexity is the only real professional in the bunch, one who doesn’t get personal (Gemini) or cocky (GPT).

But Gemini is truly its own thing. Because it sits right at the nexus of the invisible web that Google’s services have woven around our lives, from business and personal communications to questions, consumerism, and content.

Again. It knows my likes and dislikes, the choices I’ll make, the decisions I won’t.

And that, more than anything else, is what “Personal Intelligence” actually represents. Not just smarter search. Not just faster answers. Not just better chat. Memory. Context. Continuity. The quiet stitching together of the fragmented digital versions of ourselves that we have spent the last fifteen years scattering across apps, tabs, services, and devices.

For most of the internet’s history, every interaction was transactional. You searched, you clicked, you left. Even when platforms tracked you, the experience still felt episodic. Personal Intelligence changes that. The assistant is no longer responding to a question. It is responding to a person.

The shift is subtle but profound. Search gave way to answers. Answers gave way to assistance. Assistance is now giving way to delegation. And delegation, eventually, gives way to agency. Not human agency. Platform agency. The ability to act on your behalf, using your patterns, your history, your context, and increasingly, your probability profile.

On paper, every major AI company is trying to build this. In reality, Google is uniquely positioned to make it ambient rather than optional.

Because Google is not just an AI company. It is the closest thing modern digital life has to an operating system.

Your email likely runs through Gmail. Your documents live in Drive. Your meetings sit in Calendar. Your navigation runs through Maps. Your video consumption flows through YouTube. Even if your phone doesn’t run Android, at least some of its apps and functions come from Google. Your questions, purchases, curiosities, symptoms, recipes, anxieties, and 2 a.m. existential spirals have probably, at some point, passed through Search.

Personal Intelligence is not being bolted onto this ecosystem. It is emerging from it. And that distinction matters.

Because when an AI assistant is built on top of a product, you can opt out. When it is built into the infrastructure layer, opting out starts to feel like opting out of modern digital life itself.

And here is the uncomfortable part. Most people will not want to opt out. Because the experience, when it works, is genuinely useful. It reduces friction. It compresses decisions. It lowers cognitive load. It remembers things you forgot to remember. It anticipates needs you have not articulated yet. It makes small parts of life easier, faster, smoother, and less mentally expensive.

Human beings are extremely good at trading abstract long-term risks for immediate tangible convenience. We do it with food, with finance, with attention, with health, and with technology. Personal Intelligence sits squarely in that tradition. It is not forcing surveillance on anyone. It is offering relief. And relief is very persuasive.

Which is why the real risk here is not surveillance in the dystopian sense. The real risk is probability shaping.

When a system knows your history, your tendencies, your constraints, your patterns of behaviour, and your historical decision trees, it does not need to control you. It only needs to shape the likelihood field around you. Suggest the slightly more probable option. Surface the slightly more relevant result. Highlight the slightly more likely purchase. Nudge the slightly more likely behaviour.

Individually, each nudge is trivial. Collectively, over years, they become behavioural gravity.

And this is where the conversation inevitably moves toward commerce. Because once a system understands not just what you search for, but how you live, when you spend, when you hesitate, when you impulse buy, when you research, when you delay, and when you regret, advertising stops being about targeting and starts being about timing, context, and life state.

Search advertising historically monetised intent. Personal Intelligence has the potential to monetise pre-intent. The moment before you even decide you need something. The moment where a system knows that based on your calendar, your messages, your travel patterns, your health searches, and your spending history, you are statistically likely to need something soon.

And crucially, it can surface that suggestion in a way that feels like help, not interruption.

Which is why Gemini feels different from standalone AI tools. Not necessarily smarter. Not necessarily more creative. But more embedded. More contextual. More present.

The future of digital interfaces will likely not feel like interacting with software. It will feel like interacting with something that understands you. Not perfectly. Not emotionally. But statistically. Behaviourally. Pattern-wise.

And statistically accurate understanding is often enough to feel like familiarity.

Which brings me back, oddly enough, to brownies.

Because what felt like a helpful memory about lactose sensitivity was also a demonstration of a system building a longitudinal model of me. Not just what I ask. But who I am, how I behave, and what I am likely to need next.

The future of computing is not just about answering questions faster. It is about quietly participating in decisions earlier.

And the question we are all going to have to answer, sooner rather than later, is not whether these systems know us. They will. Increasingly well. Increasingly invisibly.

The question is how comfortable we are with systems that do not just respond to our choices, but help shape them.

Because the future will not announce itself as surveillance. It will arrive as convenience. As personalisation. As help. As something that remembers that you cannot have lactose, even when your relatives forget. And yes, I realise that after years of worrying about internet cookies, and then being told not to, the real thing we should probably be thinking about is who is still baking them.

(Originally published on Substack)