top of page

AI on Your Phone: The Privacy Apocalypse You Didn’t See Coming

  • Writer: Matyas Koszegi
    Matyas Koszegi
  • Jul 25
  • 4 min read

Buckle up, folks. We’re about to take a rollercoaster ride through the dystopian nightmare that is AI on your phone. Forget the charmingly incompetent chatbots of yesteryear. Today’s AI assistants are like that overly enthusiastic intern who’s been given the keys to the kingdom—except instead of just spilling coffee, they’re spilling your data.


An image about a phone that has written "AI" on it

Imagine, if you will, an all-powerful digital entity that scans every app, every website, and every message you send. It’s not science fiction; it’s the reality of AI-enabled phones. Google, Apple, Meta, and Microsoft are in a frenzied race to embed an omniscient AI agent into every corner of your digital life. Their goal? To create an unconstrained super-user that can see everything, do everything, and—oh, by the way—upload all your precious data to the cloud.


You might think you have a choice in this matter. Spoiler alert: you don’t. The tech giants are employing the classic “boiling frog” technique, gradually ramping up AI access until it becomes an irrevocable default. The next OS update could be the one that installs an AI agent you can’t uninstall or contain—just like those other default apps you’ve learned to tolerate.


The pitch is always the same: “Imagine an AI that summarizes your emails, responds to your texts, and books your appointments!” Sounds dreamy, right? Until you realize that to do all this, the AI needs privileged access to every process, every application, and all content on your device. Your end-to-end encrypted messages? Fair game. Your browsing history? Open season. Your credit card info? Why not?


Here’s the kicker: whatever the AI on your phone processes will end up in the cloud. Companies like Google and OpenAI reserve the right to use your private data to train their models and let human reviewers annotate and review your information. They retain your data alongside identifiers like your location, device info, or even your IP address. In other words, your phone’s AI is acting like a virus, controlling system permissions, accessing other apps, and manipulating the information you see.


If the privacy invasion wasn’t bad enough, there’s the small matter of security. All large AI models are vulnerable to so-called “prompt injection attacks.” This is just a fancy term for giving an AI a prompt that convinces it to do something malicious. Imagine telling an AI to summarize a website, only for it to scan invisible text or tiny fonts containing a malicious prompt. Suddenly, your AI assistant is installing malware, visiting fraudulent websites, and handing over your credit card info.


Prompt injection attacks are scalable and can be automated. They can occur anywhere—inside subtitles in a YouTube video, in an invisible frame on a website, in an unskippable ad on a streaming service, or even through inaudible frequencies from an audio source. The possibilities are as endless as they are terrifying. And the worst part? There’s no way to completely mitigate this threat other than not using AI at all.


Apple users, don’t get too comfortable. Despite Apple’s lofty claims of prioritizing privacy, their AI implementations are just as invasive. Apple’s three-tier approach—local models, cloud-based models with supposed privacy guarantees, and delegation to OpenAI’s ChatGPT—ultimately surrenders your private data to the same privacy-invasive infrastructure as everyone else.


Apple’s AI attempts have been notoriously poor, not because of any commitment to privacy, but because they lack the talent and tech to compete. Their AI group internally earned the nickname “aimless,” and their corporate leadership had no clear vision on what to do with AI and machine learning. In short, Apple’s AI failures have nothing to do with privacy guarantees and everything to do with incompetence.


So, what can you do?


The only private messengers that actively resist AI encroachment and I also trust are Signal and Session. If you’re using WhatsApp or any Meta-owned app, you’re already exposed. For emails, Tuta Mail and Proton Mail are your best bets. Tuta Mail has no AI integration whatsoever, while Proton Mail offers optional AI features that can run locally. They recently started Lumo, which is promising. Alternatively, there is Venice.


If you want to run AI, do it locally. Tools like Jan.ai, Open WebUI, or even running your own AI server with GPUs can give you control over your data. For office workflows, Libre Office is an open-source, always-private alternative to AI-infested software.


AI is implemented system-wide, not just in individual apps. If you truly want to escape the AI overlords, you can’t use anything from Microsoft Windows, Apple iOS, or Macs, or even Android. It’s time to replace Windows with Linux (I recommend Linux Mint to start out) and ditch iOS and Android for GrapheneOS. It might sound scary, but trust me, it’s easier than you think.


The relentless push for AI to be in every single thing in existence is setting back decades of progress in privacy and security. The cost of AI is not just paid for by the stock market and investors’ money—it’s paid for by our privacy, our security, and our resources. But there are solutions. By making conscious choices about the apps and operating systems we use, we can fight back against the AI overlords and reclaim our digital lives.


So, are you ready to take the plunge and join the resistance? The future of your privacy depends on it.


Comments


bottom of page