
North Korean Censorship: Mirror for Our Own Digital Surveillance?

  • Writer: Matyas Koszegi
  • Aug 27
  • 5 min read

The Reel We’re Talking About


A recent BBC‑style Instagram Reel shows a smuggled North Korean smartphone that auto‑censors Korean words, takes a screenshot every five minutes, and hands those images over to the regime’s secret police. The woman speaking paints a vivid picture of a state‑run “big brother” that rewrites language on the fly and watches every glance. Sounds terrifying, right? Absolutely. But before we point solely at the hermit kingdom, let’s take a quick detour to the other side of the globe, specifically the United Kingdom (and, by extension, the wider Western tech ecosystem). We have our own brand of digital snooping; it’s just dressed up in a very different wardrobe.


A picture about surveillance in North Korea and the UK
Image created by me

Windows Recall


In 2024 Microsoft announced Windows Recall for its Copilot+ PCs. The feature captures periodic snapshots of your screen and uses on‑device AI to build a searchable visual timeline of everything you’ve done. Microsoft stresses that the snapshots stay local and encrypted, but they still amount to a visual log of what you were working on, sometimes even when you’re not actively using the PC. Those images can contain confidential documents, passwords typed in plain sight, or even a glimpse of your child’s homework. Just like with the North Korean phone, the user initially had little control over when the snapshots happened or how they were stored; it took a public backlash for Microsoft to make the feature opt‑in. The difference is that the “authorities” here are a multinational corporation that can (theoretically) hand the data over to law enforcement under a warrant, or use it to fine‑tune its own products and algorithms.


Apple and Google: The “Shameless” Data Harvesters


Apple collects device identifiers, location data (reportedly even when “Precise Location” is off), Siri recordings, and health metrics via HealthKit. Apple markets itself as the “privacy champion,” and its App Store Review Guidelines do require developers to disclose data usage, but those disclosures are buried in legalese, and Apple still aggregates data for its own services. Google collects search queries, Android device IDs, voice recordings, Chrome browsing history, and “Web & App Activity” (and for years it scanned Gmail content for ad targeting). Even if you turn off “Web & App Activity,” Google reportedly retains copies for debugging purposes. Chrome’s former “Data Saver” mode routed traffic through Google servers, effectively creating a man‑in‑the‑middle that could log everything. Both giants run massive telemetry pipelines that resemble the screenshot‑capture logic in the North Korean demo: collect, store, analyze, monetize. The only difference is the scale and the legal veneer.


Big Data & Predictive Policing


Western governments love to brag about “data‑driven decision making.” In the UK, the government’s National Data Strategy encourages the pooling of CCTV footage, facial‑recognition scans, and social‑media metadata to predict “crime hotspots.” The result is that citizens become the subjects of algorithmic profiling without ever seeing a single warning pop up on their phone. No “comrade” label is slapped on you, but the effect is the same: you’re silently tagged, tracked, and judged by an unseen authority.


Online Safety Act (UK) – The Well‑Intentioned Censor


The Online Safety Act promises to protect children from harmful content, but its enforcement powers let regulators order platforms to remove or block material deemed “illegal” or “harmful.” The backdoor potential arises because platforms must embed automated filters capable of scanning billions of posts per day, and those filters inevitably store metadata: who posted what, when, and to whom. The act’s effect mirrors the North Korean phone’s warning (“this word can only be used to describe your siblings”). Instead of “comrade,” you get a generic “content removed for policy violation” notice, yet the underlying mechanism, automatic linguistic policing, is eerily similar. And don’t forget that you have to censor yourself as well: you can’t simply write words like “sex” or “drugs”; you have to mask them with at least an asterisk so your content doesn’t get flagged.


Age Verification Schemes


From gambling sites to adult‑content platforms, the UK and other governments are pushing mandatory age‑verification solutions that rely on government‑issued IDs scanned and stored by private vendors and biometric checks (facial recognition) that create a permanent link between your face and your online identity. The result is a centralized repository of your age, appearance, and browsing habits—exactly the kind of dossier a totalitarian regime would love to have, only now it lives in a corporate data lake.


Putting It All Together: A Satirical Checklist


  • The North Korean phone automatically replaces words such as “oppa” with “comrade” and warns that the word may only be used for siblings; social‑media platforms today auto‑flag politically sensitive terms, filtering your language before you even hit “post.”
  • The phone takes a screenshot every five minutes, saved for the authorities; Windows Recall and macOS screen recording for diagnostics perform a comparable function, logging visual records of your work, home, and late‑night streaming.
  • Both systems store data centrally (government servers in one case, cloud backups and corporate analytics in the other), meaning a single breach can expose a treasure trove of personal information.
  • Users receive warnings in both contexts: the phone tells you “only for siblings,” while platforms display a vague “content removed for policy violation.”
  • Neither system offers a genuine opt‑out, effectively forcing you to consent.


Why Should We Care?


First, the normalization of surveillance: when we accept big‑tech data collection as “just the way things work,” we desensitize ourselves to the creeping erosion of privacy. Second, the power asymmetry: corporations and governments hold massive datasets that can influence elections, shape public opinion, and even blackmail individuals. Third, the precedent for future laws: today’s “safety” measures can become tomorrow’s justification for broader censorship; just ask the architects of the Online Safety Act.


What Can We Do?


  • Audit your devices: disable Windows Recall, turn off location history, and regularly delete old screenshots.
  • Use privacy‑first alternatives: Brave for browsing and search, Signal or Session for messaging, Proton for email, calendar, and cloud storage, and a Linux‑based operating system if you’re comfortable with it.
  • Demand transparency: push for clear, plain‑language privacy notices and genuine opt‑out mechanisms.
  • Support legislation that protects rather than surveils: back groups advocating for strong data‑minimization laws and judicial oversight of AI‑driven content filters.
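For the Windows Recall step above, a minimal sketch of how to switch it off: on Windows 11 builds that ship Recall, it is exposed as an optional Windows feature and can be removed with DISM from an elevated Command Prompt. This assumes an up‑to‑date 24H2 Copilot+ build; the exact feature name may differ on other builds, so check first.

```shell
:: Run as Administrator. First, check whether the Recall feature exists on this build:
Dism /Online /Get-FeatureInfo /FeatureName:Recall

:: If it is listed as Enabled, disable it so Windows stops taking and indexing snapshots:
Dism /Online /Disable-Feature /FeatureName:Recall
```

You can also toggle snapshot saving per‑user under Settings > Privacy & security, but the DISM route removes the feature system‑wide.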


Closing Thought


The North Korean phone is a stark reminder that technology itself is neutral; it’s the intent and governance behind it that determine whether it becomes a tool for liberation or oppression. The irony isn’t lost on me that the West, which prides itself on “freedom of speech,” now runs its own version of a language‑policing, screenshot‑snatching apparatus, only wrapped in sleek branding and sold as “personalisation.” So the next time you scroll past a glossy ad promising “better experiences through smarter data,” remember: somewhere, a screenshot is probably being taken, a word is being flagged, and a corporation is quietly adding another line to your ever‑growing digital dossier. Stay skeptical, stay witty, and, most importantly, stay private.


If you like my content, consider buying me a coffee. It keeps me posting. Cheers!

