
Elon Musk's "X Corp" is back at it. The company's latest X-themed product is XChat, a messaging app built for X users to securely chat with one another. The app is currently available to preorder on the iOS App Store with an April 17 release date, and advertises itself as an end-to-end encrypted chat app free from ads or tracking. That sounds like a great pitch, especially if you're someone who frequently messages other X users. The problem is, the pitch doesn't seem entirely accurate.
As Mashable's Jack Dawes highlights, XChat's app privacy policies are a bit out of alignment with its promises. If you scroll to the "App Privacy" section of XChat's App Store page, you'll see that the app has declared it may collect the following data points, and link them to your identity:
Location
Contacts
Search History
Usage Data
Contact Info
User Content
Identifiers
Diagnostics
X Corp also says it may collect additional "User Content," but that this data is not linked to you. Regardless, this is a laundry list of information the so-called "private" chat app is taking from you, and linking to your identity. Even if XChat is entirely end-to-end encrypted, it seems rather disingenuous to claim the app has zero tracking when its privacy policy says it can take any and all of these data points from you. I wouldn't feel particularly private if I knew XChat was scraping my contacts, location, and usage data, even if it didn't have access to the messages themselves. By comparison, Signal, one of the more popular secure chat apps, only collects contact info from its users—and doesn't link that data to the user themselves.
XChat does claim to come with some key features offered by other mainstream chat apps. That includes editing or deleting messages for everyone in the chat, blocking screenshots, sending disappearing messages, cross-platform calling, and large group chats. (The App Store listing shows a group chat with 481 members.)
As the app is meant for X users to communicate with one another, you do need an X account to use XChat. That means the app likely won't pop off the same way other messaging apps have, but it may attract existing X users who have a number of contacts they already chat with in DMs. We'll see whether that's the case when the app launches later this week, but I imagine any privacy-minded users may prefer to seek alternative arrangements.
Gmail is one of—if not the—most popular email platform in the world. But it's not the favorite for users who care about their privacy. Google doesn't offer end-to-end encryption (E2EE) for basic Gmail users, instead opting for "Transport Layer Security" (TLS). This provides security in transit, but doesn't help once the message reaches its destination. While TLS is better than nothing, it doesn't offer the same level of security as E2EE, which scrambles messages for everyone other than the sender, recipients, and whoever else has the decryption key. As such, privacy-minded users often look elsewhere for their email needs, like Proton Mail.
But Google does offer more advanced encryption for some users—namely, work or school Workspace accounts. There's Secure/Multipurpose Internet Mail Extensions (S/MIME), which, like E2EE, encrypts emails in transit and in the sender's and recipients' inboxes. But it comes with the drawback of Google having a decryption key as well. In theory, Google could decrypt your emails—or, if Google were successfully hacked, an attacker could use the key to decrypt your emails. That's where client-side encryption (CSE) comes in: Here, the administrator of a Google Workspace plan holds that decryption key, not Google, which means decryption is only possible within the organization.
If your company has a Workspace plan, this is the encryption to use if you want your email as secure as possible. But the main issue up to this point is that CSE has only been available on desktop. When at your computer, you could take advantage of encrypted Gmail, but when on the go, the mobile Gmail app didn't support it. According to Google, the only way to access CSE emails on mobile was to rely on extra apps and email portals.
That's all changing now. On Thursday, Google announced it is now rolling out CSE support for the iOS and Android Gmail apps. Going forward, you can write and read E2EE emails directly within Gmail, no matter how you access the app. Plus, you'll be able to send E2EE emails to anyone, even if they don't have Gmail.
Google says that if your recipient has Gmail, they'll simply be able to open the message in their inbox. If they have a different email address (e.g., Outlook, Yahoo, iCloud, Proton, etc.), they'll still be able to read the email, but they'll need to open it in their device's browser. However, be careful when sending messages with CSE, as not everything you send is encrypted end-to-end. According to Google's help page on CSE, the body of the email will have total encryption, but the header, subject, timestamps, and recipients will not have additional encryption.
The admin of your organization will need to enable CSE for iOS and Android on their end before you see the option in your app. Once that happens, choose "Compose," then select "Message security," which has a lock icon. Under "Additional encryption," choose "Turn on." Then, craft your email as you normally would.
You might have heard about Signal, the encrypted chat app the U.S. government infamously used to discuss war plans last year. (Yikes.) But while the app is no alternative to a dedicated SCIF, it is a good option for the rest of us to communicate more securely. Signal uses end-to-end encryption (E2EE), which, very simply, means that messages are "scrambled" in transit, and can only be "unscrambled" by the sender and the recipient or recipients. If you're in a Signal chat, you'll be able to read incoming messages just like you would any other chat app—if you're an attacker, and intercept that message, all you'll find is a jumble of code.
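The "scrambling" can be illustrated with a one-time pad, the simplest cipher with this property. To be clear, Signal's actual protocol is far more sophisticated (it involves key ratcheting, authentication, and forward secrecy); this Python sketch only shows why an intercepted ciphertext is useless without the key:

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with a random key byte (a one-time pad)."""
    assert len(key) >= len(message), "the pad must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decrypting is the same operation.
    return encrypt(ciphertext, key)

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by sender and recipient

ciphertext = encrypt(message, key)
print(ciphertext)                # an interceptor sees only this jumble of bytes
print(decrypt(ciphertext, key))  # prints b'meet at noon'
```

Anyone who captures the ciphertext in transit learns nothing useful without the key, which in an end-to-end scheme never leaves the participants' devices.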
E2EE makes it difficult for anyone without your unlocked device (or your unlocked Signal app) to read your Signal messages—difficult, not impossible. That's part of the reason the chat app is not an option for government officials (though no third-party chat app would be). But it's also a good reminder that no matter who you are, your secure chats are not impervious to outside forces. If someone wants to break into your chats, they might find a way to do so.
Case in point: As reported by 404 Media, the FBI recently extracted incoming Signal messages from a defendant's iPhone. The user had even deleted the app off their device, which only added another hurdle for investigators. You would think that by deleting the app itself, your encrypted messages would be protected. As it turns out, however, the FBI didn't need to access the Signal app at all. While they weren't able to retrieve the defendant's outgoing messages, they were able to scrape incoming messages from the iPhone's push notification database. (I've been covering iPhones for nearly a decade, and I wasn't aware that iOS even had a push notification database—though I suppose it makes sense, given that alerts exist in Notification Center until you manually open or dismiss them.)
This revelation comes from a case involving a group allegedly vandalizing property and setting off fireworks at the ICE Prairieland Detention Facility. One officer involved in the altercation was shot in the neck. According to a supporter of the defendants in this case who took notes during the trial, the court learned that any app that has permission to show previews and alerts on the Lock Screen will save those previews to the internal memory of the user's iPhone. As such, the FBI was able to obtain messages the defendant had received, even though those messages were set to disappear in the app, and the app had been cleared from the device.
Again, this is not a security hole exclusive to Signal: Any app that displays an alert on your Lock Screen has this vulnerability. The FBI probably had plenty of other notifications to sift through as well, from any app the defendant had running on their iPhone. Think about the alerts you might have sitting in Notification Center right now: texts, reminders, news bulletins, purchases, DMs, etc. All of that could be fodder for anyone with the surveillance tech to root through your iPhone—locked or not.
If you use Signal, you actually have an advantage here, now that you know about this vulnerability. Signal has a setting that blocks the content of messages from appearing in their notifications. That way, even if someone accesses your alerts, all they'll see is you received a Signal message—not who sent it or what it contains.
To turn it on, open Signal, tap your profile in the top-left corner, then hit "Settings." Under Notification Content, choose "No Name or Content" to strip all identifying data from the alert. You can compromise here and choose "Name Only" if you want to know who a message is from before you open it—just remember, an intruder may also see you received a message from that person if they scrape your iPhone's notifications.
When you download an app from the App Store or Play Store, how much research do you do ahead of time? Do you look into who makes the app, and where that company is based? Do you scan the app's privacy policy to make sure your data is handled responsibly? You might not, but, as it turns out, the FBI wants you to.
The FBI issued a warning last Tuesday concerning "foreign-developer mobile applications (apps)." (Thank you, FBI, for that clarification.) The FBI's thesis is this: Many of the most popular apps in the U.S. aren't developed here—instead, they're often developed and maintained by foreign companies. Now, these discussions can verge dangerously close to xenophobic, especially considering the U.S.'s current administration, but some of the FBI's concerns are legitimate. The FBI's chief issue is with the security laws of countries like China, which the FBI says could allow China's government to access U.S. user data. This was one of the concerns that led to the TikTok ban, and why there is now a majority-U.S. ownership of the platform.
In its PSA, the FBI highlights how some apps will encourage you to invite friends or contacts to use the app as well. The companies behind those apps can then store that contact information, including names, email addresses, phone numbers, user IDs, and home addresses. Even if you, personally, don't use the app, or share your contact info with the app, someone else who does have your contact information may share it themselves. The FBI also points to the privacy policies of some apps, which admit that data is stored on China-based servers for "as long as the developers deem necessary." Finally, some apps may contain malware that exploits security vulnerabilities in your devices' operating systems. The FBI highlights that this malware can run programs in the background without your knowledge, designed to steal your data.
The PSA walks through a number of steps you can take to protect your data and protect your devices—regardless of whether or not you're using apps developed out of the U.S. That includes the following:
Disabling data sharing whenever you can
Downloading apps from official app stores, as opposed to unregulated online marketplaces
Changing and updating your passwords frequently
Installing updates when they become available
Reading terms of service and license agreements when downloading apps
The FBI also encourages you to file a report with the IC3 if you believe your data has been compromised.
The FBI's tips above are actually generally useful, but none is necessarily groundbreaking. These are pretty standard best practices for cybersecurity—though changing your passwords frequently without reason isn't as widely recommended anymore. Follow these tips, though, and you'll help protect your data as you engage with the internet.
It's a bit impractical to ask Americans to abstain from, or even be wary of, foreign-developed apps. Yes, other countries have different security laws than the U.S., but the U.S.'s current laws allow companies to scrape our data for profit. If not, Meta and Google would be hurting for business. The FBI isn't concerned about American companies having access to Americans' data, of course; just foreign governments.
I understand the logic, but I don't think it's something that you, as an individual American with a smartphone, need to be all that worried about. Instead, I think your concern should be more general: Rather than worry where an app was developed, look into what data that app wants. It doesn't matter if the app is American, Chinese, or made by a company based somewhere else: If the app is asking for a whole bunch of data, don't hand it over without reason. If you're using a messaging app and want to be able to sync your contacts, that's one thing; if your meditation app wants your contacts, it's probably best to deny the request.
Malware is definitely one of the biggest points of concern right now, especially as bad actors exploit some major vulnerabilities in platforms like iOS. While issues with malware are highlighted in this PSA, I think that's where the FBI should be focusing its attention. Downloading an app from a random site on the internet, or from a dubious listing on the App Store or Play Store, can compromise your device and its data. It doesn't really matter where the app is from: Doing a bit of research before hitting "install" can protect you from a major headache in the future.
At least 25 million people have had their personal information stolen in a major hack on business services company Conduent. The data breach itself isn't new—it was initially disclosed in January 2025, and Conduent has already notified millions of individuals whose data was compromised in the incident. However, the breach is now believed to be larger in scale than previously reported, possibly among the largest to affect healthcare.
Conduent is a New Jersey-based business processing outsourcing (BPO) company that provides services like printing, payment, and document and claims processing to state and federal government agencies as well as large commercial and transportation organizations. According to the company's 2025 annual report, these offerings include disbursement of benefits, such as food assistance and child support, and administration of government healthcare programs (like Medicaid). For large corporations, services include workplace and unemployment benefits management.
Conduent was spun off from Xerox in 2017 and now employs around 51,000 people worldwide.
In January 2025, Conduent suffered an outage that was later confirmed to be the result of a "cybersecurity incident." The disruption lasted several days, during which agencies across the U.S. were unable to process some benefit payments. While the breach was discovered in January, hackers reportedly gained access to Conduent's systems months earlier on October 21, 2024. The Safepay ransomware gang later took credit for the attack.
While Conduent confirmed in April 2025 that client information had been stolen in the breach, it didn't begin notifying affected individuals until October. According to those notices, the compromised data included names, Social Security numbers, dates of birth, health insurance policy information, and medical information.
The scope of the breach continues to grow, but the total number of individuals affected currently sits around 25 million. The greatest impact appears to be in Texas and Oregon, though residents in California, Delaware, Maine, Massachusetts, New Hampshire, and New Mexico have also received notices. (For reference, the total number of users impacted by the 2024 ransomware attack on Change Healthcare is now estimated at 190 million.)
If you receive a notice saying your information was compromised, you should take every precaution to secure your identity: At a minimum, ensure your credit is frozen, and set up a one-year fraud alert on your credit files to prevent someone from applying for credit using your information. None of the notices we've seen have offered any type of credit monitoring or identity theft protection services to affected individuals, but you could utilize these services as well.
At this point—given the ubiquity of data breaches and information compromise—you should be keeping a close eye on your credit report and financial accounts at all times to quickly catch anything suspicious. If you do find fraudulent activity, report it to your bank and/or credit issuer immediately, and file an identity theft report.
Smart glasses aren't just the stuff of Hollywood anymore: You can buy a pair right now. Devices like Ray-Ban Metas come equipped with speakers, a microphone, embedded cameras, and connectivity to your smartphone—all in a package that largely looks like a normal pair of glasses. That's great for enthusiasts who want a hands-free smartphone experience when out and about, but not so great for anyone who dislikes the idea of invisible cameras everywhere.
There are two sides to these privacy worries. One is the personal angle. Many of us don't want the people around us shoving their smartphone cameras in our faces when we're out in public, but at least then we'd know we're being recorded. These embedded cameras are tough to spot unless you know what you're looking for, which means there's a feeling of always being watched by anyone walking past wearing glasses. On the other hand, there's the larger privacy concern that comes with the territory of a huge company like Meta. Just last week, we learned the company plans to bring facial recognition tech to its Ray-Ban and Oakley smart glasses with a feature called "Name Tag," which would give the wearer insights into the people they encounter using Meta AI. Taken together, smart glasses pose an unprecedented privacy and security risk for those of us living our lives, when both our neighbors and law enforcement have the ability to spy on us without our knowledge.
Of course, what can you do? If these glasses are legal, and they're relatively inconspicuous, how can you protect yourself from the average Ray-Ban Meta-wearing Joe? By the time you get close enough to tell whether or not they're wearing smart glasses, you're already in view of the camera.
Enter "Nearby Glasses," a new app that spills the beans on smart glasses wearers near your location. As reported by 404 Media, the app is made by developer Yves Jeanrenaud, and scans for smart glasses' "distinctive Bluetooth signatures" (also known as "advertising frames") to identify them in your immediate area. Jeanrenaud was able to use a directory of Bluetooth Low Energy (BLE) manufacturers to build a list of smart glasses the app can scan for, including devices from Meta, Luxottica Group S.p.A, and Snap. If the app spots one, it sends you a push notification.
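Under the hood, the detection step amounts to checking each Bluetooth advertisement's manufacturer ID against a watchlist. The Python sketch below illustrates that matching logic only; the IDs and the `SMART_GLASSES_VENDORS` table are hypothetical placeholders, not real Bluetooth SIG assignments, and a real app would collect advertisements with a platform BLE scanner rather than a hardcoded list:

```python
# Hypothetical watchlist: manufacturer ID -> vendor name.
# (Placeholder values; a real app would use the Bluetooth SIG's
# assigned company identifiers.)
SMART_GLASSES_VENDORS = {
    0x0001: "Meta",
    0x0002: "Luxottica Group S.p.A",
    0x0003: "Snap",
}

def flag_smart_glasses(advertisements):
    """Return vendor names for any advertisement whose manufacturer
    ID appears on the watchlist."""
    hits = []
    for adv in advertisements:
        vendor = SMART_GLASSES_VENDORS.get(adv.get("manufacturer_id"))
        if vendor:
            hits.append(vendor)
    return hits

# Simulated scan results: one pair of glasses, one unrelated device.
scan = [
    {"manufacturer_id": 0x0002, "rssi": -60},
    {"manufacturer_id": 0x004C, "rssi": -45},  # not on the watchlist
]
print(flag_smart_glasses(scan))  # prints ['Luxottica Group S.p.A']
```

When a match is found, an app like this would fire a push notification instead of printing.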
The app can't currently distinguish between smart glasses and mixed reality headsets, however. As such, you may get an alert saying there are smart glasses nearby when the app has actually picked up the Bluetooth signals from a Meta Quest headset. That said, those headsets are much easier to spot than smart glasses, and are far less likely to be worn inconspicuously in public spaces.
Nearby Glasses is available for Android today, on both the Play Store and GitHub. Jeanrenaud says an iOS port "is in the making."
I'm a bit of a broken record when it comes to personal security on the internet: Make strong passwords for each account; never reuse any passwords; and sign up for two-factor authentication whenever possible. With these three steps combined, your general security is pretty much set. But how you make those passwords matters just as much as making each strong and unique. As such, please don't use an AI program to generate your passwords.
If you're a fan of chatbots like ChatGPT, Claude, or Gemini, it might seem like a no-brainer to ask the AI to generate passwords for you. You might like how they handle other tasks, so it might make sense that something seemingly so high-tech yet accessible could produce secure passwords for your accounts. But LLMs (large language models) are not necessarily good at everything, and generating good passwords just so happens to be one of their weak spots.
As highlighted by Malwarebytes Labs, researchers recently investigated AI-generated passwords, and evaluated their security. In short? The findings aren't good. Researchers tested password generation across ChatGPT, Claude, and Gemini, and discovered that the passwords were "highly predictable" and "not truly random." Claude, in particular, didn't fare well: Out of 50 prompts, the bot was only able to generate 23 unique passwords. Claude gave the same password as an answer 10 times. The Register reports that researchers found similar flaws with AI systems like GPT-5.2, Gemini 3 Flash, Gemini 3 Pro, and even Nano Banana Pro. (Gemini 3 Pro even warned the passwords shouldn't be used for "sensitive accounts.")
The thing is, these results seem good on the surface. They look uncrackable because they're a mix of numbers, letters, and special characters, and password strength checkers might say they're secure. But these generations are inherently flawed, whether because they are repeated results or because they follow a recognizable pattern. Researchers evaluated the "entropy" of these passwords, or the measure of unpredictability, with both "character statistics" and "log probabilities." If that all sounds technical, the important thing to note is that the results showed entropies of 27 bits and 20 bits, respectively. Character statistics tests look for an entropy of 98 bits, while log-probability estimates look for 120 bits. You don't need to be an expert in password entropy to know that's a massive gap.
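To get a feel for the character-statistics measurement, here is a rough Python sketch (an illustrative simplification, not the researchers' actual methodology): it pools the characters of a password bank, computes the Shannon entropy of their distribution, and scales by password length. A bank of patterned, repetitive generations scores far lower than the same number of truly random characters:

```python
import math
import secrets
import string
from collections import Counter

def char_entropy_bits(passwords):
    """Rough character-statistics entropy estimate: Shannon entropy
    (bits per character) of the pooled character distribution,
    multiplied by average password length."""
    pooled = "".join(passwords)
    counts = Counter(pooled)
    total = len(pooled)
    bits_per_char = -sum(
        (n / total) * math.log2(n / total) for n in counts.values()
    )
    return bits_per_char * (total / len(passwords))

# A bank of patterned, repeated "generations" versus the same number
# of characters drawn at random from the full printable character set.
patterned = ["Pa$$w0rd2024!"] * 10
charset = string.ascii_letters + string.digits + string.punctuation
random_bank = [
    "".join(secrets.choice(charset) for _ in range(13)) for _ in range(10)
]

print(char_entropy_bits(patterned))    # low: narrow, repeated alphabet
print(char_entropy_bits(random_bank))  # much higher for the same lengths
```

A crude estimate like this doesn't even penalize the fact that the patterned bank repeats the exact same string, so the real gap is worse than it shows.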
Hackers can use these limitations to their advantage. Bad actors can run the same prompts as researchers (or, presumably, end users) and collect the results into a bank of common passwords. If chatbots repeat passwords in their generations, it stands to reason that many people might be using the same passwords generated by those chatbots—or trying passwords that follow the same pattern. If so, hackers could simply try those passwords during break-in attempts, and if you used an LLM to generate your password, it might match. It's tough to say what that exact risk is, but to be truly secure, each of your passwords should be totally unique. Potentially using a password that hackers have in a word bank is an unnecessary risk.
It might seem surprising that a chatbot wouldn't be good at generating random passwords, but it makes sense based on how they work. LLMs are trained to predict the next token, or data point, that should appear in a sequence. In this case, the LLM is trying to choose the characters that make the most sense to appear next, which is the opposite of "random." If the LLM has passwords in its training data, it may incorporate that into its answer. The password it generates makes sense in its "mind," because that's what it's been trained on. It isn't programmed to be random.
Meanwhile, traditional password managers are not LLMs. Instead, they are designed to produce a truly random sequence, by taking cryptographic bits and converting them into characters. These outputs are not based on existing training data and follow no patterns, so the chances that someone else out there has the same password as you (or that hackers have it stored in a word bank) is slim. There are plenty of options out there to use, and most password managers come with secure password generators.
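That principle is easy to sketch with Python's secrets module, which draws from the operating system's cryptographically secure randomness source. This mirrors the general approach, though any given password manager's internals will differ:

```python
import secrets
import string

# The pool of characters a generated password may use.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Pick every character with a CSPRNG, so the output follows
    no pattern and carries full entropy for its length and alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'k;R9v}Qx...' (different on every run)
```

With 94 possible symbols per position, a 20-character password like this carries roughly 20 × log2(94) ≈ 131 bits of entropy, comfortably past the thresholds the researchers were looking for.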
But you don't even need one of these programs to make a secure password. Just pick two or three "uncommon" words, mix a few of the characters up, and presto: You have a random, unique, and secure password. For example, you could take the words "shall," "murk," and "tumble," and combine them into "sH@_llMurktUmbl_e." (Don't use that one, since it's no longer unique.)
If you're looking to boost your personal security even further, consider passkeys whenever possible. Passkeys combine the convenience of passwords with the security of 2FA: With passkeys, your device is your password. You use its built-in authentication to log in (face scan, fingerprint, or PIN), which means there's no password to actually create. Without the trusted device, hackers won't be able to break into your account.
Not all accounts support passkeys, which means they aren't a universal solution right now. You'll likely need passwords for some of your accounts, which means abiding by proper security methods to keep things in order. But replacing some of your passwords with passkeys can be a step up in both security and convenience—and avoids the security pitfalls of asking ChatGPT to make your passwords for you.
Another Google tool is biting the dust: The company's dark web monitoring tool, launched in March 2023, will be shut down on Feb. 16. According to Google, feedback on the feature suggested it "didn't provide helpful next steps"—so while it alerted users when their data was out in the wild, it wasn't clear what to do about it. Now, Google is shifting its focus from the dark web monitoring tool to features like its online Security Check-Up and passkey protection. In other words, instead of flagging when your account credentials appear in a data breach, Google wants to make sure that your accounts stay safe even if a breach has occurred.
There are reasons why you should be keeping an eye on dark web chatter, however—and there are tools to take over the monitoring job now that Google has backed out.
Essentially, the dark web is made up of online spaces that you can't get to just by pointing your browser at a web address. You need specialist software and a little bit of technical know-how to find your way into the dark web and to navigate around it. It's largely hidden from the world at large via encryption and rerouting. Why all the secrecy? The dark web is used to evade both law enforcement and ruling powers, so it's the perfect place to carry out somewhat illicit activities as well as get around the machinations of oppressive surveillance states. It's a place where hackers and whistleblowers alike can gather.
Speaking of hackers, dumps of information from data breaches will often find their way onto the dark web, to be traded or given away for free. Whether it's your email address, phone number, Social Security number, or passwords, if this data has been exposed by a hack, you're much more likely to find it on the dark web than on Reddit.
Dark web monitoring tools, like the one Google just shut down, are intended to give you a heads up if your details have appeared in a data dump. You can then do something about it, whether it's getting in touch with your bank to check for any signs of identity theft, or changing the password for your email service.
Having a dedicated tool for the task saves you from having to trawl the dark web yourself—which isn't particularly easy or pleasant—and while Google might be closing down its monitoring service, you've got several alternatives you can turn to instead.
Proton is a favorite among privacy enthusiasts, and the privacy-focused company also has a Dark Web Monitoring tool of its own. You do need a paid plan to access it, though, from $12.99 a month or $119.88 a year, which includes multiple perks across all of Proton's products. You can find it in the Security and privacy side panel in the Proton Mail app.
Proton uses a variety of intelligence datasets in its dark web sweep, and looks out for details including email addresses, usernames, dates of birth, physical addresses, and government IDs. The leaks will be categorized in terms of how urgently action needs to be taken, and Proton doesn't give your data to third parties.
Trend Micro has a Data Leak Checker that covers the dark web, which you can use without paying anything or even signing up for an account—though you can only check for mentions of your email address or phone number in leaks. For more comprehensive scans and alerts, you can sign up for a premium account, from $9.99 a month or $49.99 a year—and there's lots more included besides dark web monitoring.
Keeper Security takes the same approach with BreachWatch: You can run a quick scan for breaches including your email address without paying or signing up, but if you want anything more advanced (including proactive notifications) then you need to sign up for $24.99 a year. The feature can be added to any of Keeper's other paid-for plans too.
If you currently pay for a security product, such as a password manager or a VPN, you may well find that dark web monitoring is included—so check through your existing subscriptions. For example, the Surfshark Alert dark web monitoring tool comes as part of the Surfshark One VPN bundle, with pricing from $17.95 a month or $40.68 a year.
Some iOS users are getting an extra layer of privacy when it comes to how their location data is shared. Limit Precise Location is a new setting that prevents some Apple devices from broadcasting specific locations to cell carriers.
Precise location sharing is useful, even essential, in some cases, such as when you're navigating with your maps app. But you may not want to constantly be sending your exact address to your phone provider, where it could be used for malicious purposes. If you enable Limit Precise Location, your iOS device will share your general area instead.
As TechCrunch points out, precise location sharing introduces a whole host of privacy and security risks. Cell carriers have been targeted by hackers, compromising sensitive customer data. Surveillance vendors and law enforcement agencies may also use location information broadcast via cellular networks for the purposes of real-time and ongoing tracking.
Users already have the option to disable precise location sharing at the app level on both iOS and Android for apps that don't need GPS coordinates to function—which is most of them. This allows you to prevent companies from receiving (and selling) your exact location data when a general location is sufficient. Limit Precise Location won't change these app-specific settings.
For now, the feature is available only on select Apple models—the iPhone Air, iPhone 16e, and iPad Pro (M5) Wi-Fi + Cellular—running iOS 26.3 with a limited number of global carriers:
U.S.: Boost Mobile
UK: EE, BT
Germany: Telekom
Thailand: AIS, True
Apple says that even with this setting enabled, emergency responders will still be able to pinpoint exact location during an emergency call.
If you have a supported device with a partner carrier, go to Settings > Cellular and tap Cellular Data Options (you may need to select the specific line under SIMs if you have more than one). Scroll down and toggle on Limit Precise Location.
Before you head out to a protest, take some precautions to protect your privacy and both the physical and digital security of any device you bring along. The most secure option, of course, is to leave your phone at home, but you can also lock things down to minimize the risk that your data will be accessible to law enforcement or someone who gets hold of your device.
Thankfully, both iOS and Android have built-in device encryption if you're using a passcode, meaning that your device's data cannot be accessed when it is locked. (On Android, go to Settings > Security to ensure Encrypt Disk is enabled). You'll want to maximize this protection with the following privacy settings.
At an absolute minimum, you'll want to disable biometric access, such as face and fingerprint authentication, on your device in favor of a passcode or PIN. As the Electronic Frontier Foundation notes, this minimizes the risk of being physically forced to unlock your device and may provide stronger legal protections against compelled decryption.
On iOS, go to Settings > Face ID & Passcode and toggle off iPhone Unlock. You can also set up a stronger passcode—a custom numeric or alphanumeric code—under Change Passcode. On Android, you'll find the option to delete your fingerprint in favor of your PIN or screen lock pattern under Settings > Security & Privacy > Device Unlock > Fingerprint.
Again, the best option to prevent your location from being tracked is to coordinate any details in advance and leave your phone at home. If you must bring it along, keep it off unless you absolutely need to use it.
You can turn on Airplane Mode in advance, as well as disable Bluetooth, wifi, and location services, which keeps your device from transmitting your location. However, note that some apps may still be able to store GPS data and transmit it when an internet connection is available—so again, the safest bet is to keep your device off for the duration.
Airplane Mode can be enabled (and wifi and Bluetooth disabled) in your device's settings or quick access menu. On Android, go to Settings > Location to disable location services and turn off Location History in your Google account. On iOS, head to Settings > Privacy & Security > Location Services to disable locations entirely.
Temporarily disable notifications and screen previews so that if someone gets your device, they won't be able to glean any information from your lock screen. You can adjust these options under Settings > Notifications on iOS and Settings > Apps & notifications > Notifications on Android.
Minimize your screen lock time to as short a period as possible so that your screen turns off when you're not actively using it and will require authentication to reopen. On iOS, go to Settings > Display & Brightness > Auto-Lock and select 30 seconds. The exact path on Android may vary, but typically you'll find this under Settings > Display or Lock Screen.
Know that most devices have camera access from the lock screen, so you can take photos or record video without actually unlocking your device.
App pinning (Android) and Guided Access (iOS) are features that prevent others from navigating through your phone beyond a specific app or screen. This allows you to use an essential feature on your device while locking the rest behind your PIN or passcode. You can enable this preemptively, and if someone grabs your device, they won't be able to snoop around.
You can find this setting on Android under Security or Security & location > Advanced > App pinning and on iOS under Settings > Accessibility > Guided Access.
You can also lock your SIM card with a PIN to prevent unauthorized use of your device or SIM card, including access to two-factor authentication codes sent via SMS. The PIN will be required any time your phone restarts or if someone tries to use your SIM card in another device. On iOS, go to Settings > Cellular, select your SIM, and tap SIM PIN. On Android, you'll find this under Settings > Security > More security settings (the exact path varies by device).
This step will vary depending on what you keep on your phone and your risk tolerance, but you may want to consider signing out of your social media accounts and deleting apps that contain or allow access to sensitive data.
On iOS, you can also lock or hide specific apps: the former requires an extra authentication step to open apps on your home screen, while the latter sends apps to a hidden folder that also requires authentication to unlock. Touch and hold an app icon to bring up the quick actions menu, then tap Require Face ID/Require Passcode.
On Android, you can set up a "private space" to lock apps behind your pattern, PIN, or password. Apps are hidden from the launcher and recent views as well as quick search. Go to Settings > Security & privacy > Private space, authenticate with your screen lock, and tap Set up > Got it.
Both iOS and Android have strict device-level security modes that significantly limit access to certain app and web features and block changes to settings. Both were designed for journalists, activists, and other users who handle sensitive data and may be targeted by sophisticated attackers. These settings are overkill for day-to-day use but add a potentially helpful layer of security in high-risk situations.
Enable Lockdown Mode on iOS via Settings > Privacy & Security > Lockdown Mode. On Android, turn on Advanced Protection under Settings > Security & privacy > Advanced Protection.
While the above steps are largely about securing your data during a protest, you should also follow best practices for protecting privacy (yours and others') after the fact. If you plan to post photos or videos, utilize blurring tools to block faces and other unique identifying features, and scrub file metadata, which includes information like photo location. You can do this by taking a screenshot of the image to post or sending a copy to yourself in Signal, which automatically strips metadata. Signal also has a photo blurring tool, or you can blur in your device's default photo editing app.
Hackers continue to find ways to sneak malicious extensions into the Chrome Web Store. This time, the two offenders impersonate an add-on that lets users chat with ChatGPT and DeepSeek while browsing other websites, all while exfiltrating that data to threat actors' servers.
On the surface, the two extensions identified by Ox Security researchers look pretty benign. The first, named "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI," has a Featured badge and 2.7K ratings with over 600,000 users. "AI Sidebar with Deepseek, ChatGPT, Claude and more" appears verified and has 2.2K ratings with 300,000 users.
However, these add-ons are actually sending AI chatbot conversations and browsing data directly to threat actors' servers. This means that hackers have access to plenty of sensitive information that users share with ChatGPT and DeepSeek as well as URLs from Chrome tabs, search queries, session tokens, user IDs, and authentication data. Any of this can be used to conduct identity theft, phishing campaigns, and even corporate espionage.
Researchers found that the extensions impersonate legitimate Chrome add-ons developed by AITOPIA that add a sidebar to any website with the ability to chat with popular LLMs. The malicious capabilities stem from a request for consent for "anonymous, non-identifiable analytics data." Threat actors are using Lovable, a web development platform, to host privacy policies and infrastructure, obscuring their processes.
Researchers also found that if you uninstalled one of the extensions, the other would open in a new tab in an attempt to trick users into installing that one instead.
If you've added AI-related extensions to Chrome, go to chrome://extensions/ and look for the malicious impersonators. Hit Remove if you find them. As of this writing, the extensions identified by Ox no longer appear in the Chrome Web Store.
As I've written about before, malicious extensions occasionally evade detection and gain approval from browser libraries by posing as legitimate add-ons, even earning "Featured" and "Verified" tags. Some threat actors playing the long game will convert extensions to malware several years after launch. This means you can't blindly trust ratings and reviews, even if they've been accrued over time.
To minimize risk, you should always vet browser extensions carefully (even those that appear legit) for obvious red flags, like misspellings in the description and a large number of positive reviews accumulated in a short time. Head to Google or Reddit to see if anyone has identified the add-on as malicious or found any issues with the developer or source. Make sure you're downloading the right extension—threat actors often try to confuse users with names that appear similar to popular add-ons.
Finally, you should regularly audit your extensions and remove those that aren't essential. Go to chrome://extensions/ to see everything you have installed.
There are many good reasons to get a VPN (Virtual Private Network) app installed on your phone or laptop: They make it harder for anyone else to track your browsing, they keep your data safe when you're on public wifi networks, and they even let you spoof your location so you can access geolocation-locked content.
You'll also find plenty of choice when it comes to VPNs. Our own guides to the best paid VPNs and the best free VPNs show the wealth of impressive apps out there, and even when you narrow down the criteria, you've still got lots of options to pick from—see our recommendations for the best free VPNs for Android.
So what exactly should you be looking for when it comes to choosing the right VPN for you? These are the features and selling points that you'll see mentioned when you're browsing VPN comparisons, and what they mean (and once you've built up a shortlist from these criteria, then you can look at the prices and extras).
One of the downsides of loading up a VPN is that your browsing speed can suffer as your data gets pinged around multiple servers across the globe. Ideally, you want all the protection that a VPN offers without too much of a hit on download and upload rates (no matter how many other people are using the same VPN).
Unfortunately, this isn't really something you can gauge just by looking at VPN listings and ads, as most VPNs will claim to be the fastest. Either read benchmark tests put together by publications and authors you trust (watch out for sponsored content), or make use of as many free trials as you can and do some testing yourself.
Your VPN of choice needs to reroute your internet traffic somewhere, and how many servers a particular VPN has around the world can make a substantial difference to speed and availability. It's also going to determine where in the world you can pretend to be, of course, if you want to jump to another country virtually.
Broadly speaking, the more servers the better, though as with VPN speeds you may have to do some testing of your own to check reliability and transfer rates. Look for server locations close to you (for speed), and outside of heavily censored or surveilled countries (for privacy), and check any technical specs that are given for them.
Something else to look out for is split tunneling, or the ability to send only some of your internet traffic through a VPN. This means you get better speeds (and less security and privacy) on data that's not so important, such as when you're just reading the news or learning a language. It's a feature that many of the best VPNs now offer.
Another feature worth checking for is a kill switch. It sounds rather dramatic, but it's simply a feature that shuts down your internet connection if the data encryption somehow fails—cutting you off from the internet, but preventing your connection and data from being exposed. Again, this is now fairly common, but not every VPN offers it.
You should only consider VPNs that have clear no-logs policies (no browsing data is permanently retained) or zero-logs policies (supposedly even stricter, covering more data). Don't take the VPN's word for it, though: Look for third-party audits from independent security companies, carried out regularly, to verify these claims.
If these logs are retained, they might be sold to data brokers, or pulled by law enforcement agencies—so check the individual privacy policies for details of what happens when you're connected to your VPN. Some VPNs go above and beyond when it comes to letting you stay anonymous: Mullvad VPN lets you pay by cash through the post, for example.
A VPN protocol is the way that the VPN connects to the internet at large: It makes a major difference to speed and security, and you'll often see it mentioned in VPN listings. However, as important as it is, it's not something that's easy to compare across different VPN services—most VPNs will simply say their protocol of choice is the best.
Once you've got a shortlist of VPNs together, do some background reading on the protocols they use: Look for independent assessments of their security and transparency, technical benchmarks, and protocols that have been open sourced so they can be analyzed. OpenVPN and WireGuard are two well-regarded protocols, for example.
VPN companies are bound by the laws and regulations of the country that they're based in—so it's a good idea to look for ones based in places where surveillance regulation and government monitoring is less strict. If necessary, check the VPN's policies on how it deals with data requests from the authorities and law enforcement in its local region.
It's also worth weighing a VPN company's reputation: How does it make money? What other services does it offer? What's its record with data breaches? This is much more important with a VPN than it is with your streaming music provider, for example, because you're trusting it with all of your online data while you're connected.
Generally speaking, it's worth paying for a VPN, as you're giving it so much responsibility in terms of your online access and security. The paid options are almost always going to give you a faster and more reliable service, and if you regularly make use of a VPN then the monthly fee is well worth the investment.
It is, however, worth looking for services that offer free trials and money-back guarantees (usually within 30 days) if you're not satisfied. Not only does this reflect well on the VPN company, it means you can see if the VPN suits your needs—and check how fast its servers are—before signing up for any kind of payment plan.
There's very little privacy on the internet: Data brokers collect tons of information about you and your online activity and sell it to anyone interested in marketing to you. California residents have gained more control over their personal data than those in other states since the passage of the California Consumer Privacy Act (CCPA) in 2018, and they now have a one-stop shop for requesting that their information be removed from hundreds of data brokers registered with the state (and any that do so in the future).
California isn't the only state to enact stronger consumer privacy laws in recent years, but its Delete Requests and Opt-Out Platform (DROP) is the first of its kind. The tool is live now, though brokers won't begin processing submissions until August. Here's what to do now if you live in California—and some options for removing your information from data brokers if you don't.
To get started with DROP, you'll need to confirm that you are, in fact, a California resident by verifying personal information via California Identity Gateway or signing in with Login.gov credentials. To be eligible, you must either live in California or be domiciled in the state even if you live elsewhere temporarily. (This is based on the location of your primary residence, where you are registered to vote, and which state issued your driver's license.)
You will then be able to create and submit a deletion request. You'll need to provide some personal data, which will be used to match your request with records held by data brokers. Data types include names, date of birth, zip codes, email addresses, phone numbers, Mobile Advertising IDs (MAIDs), and vehicle identification numbers (VINs). You can enter multiples of everything except your date of birth and update your request at a later time—if you get a new car or change your email, for example.
While you can begin submitting requests now, know that data brokers won't actually begin processing them until August 2026 and could take up to 90 days from then to delete your data. If they find a match, they are required to delete all of the information they have about you, though there are some exceptions, such as data available through public records or provided directly to a business.
Once processing begins later this year, you'll be able to track the status of your request on the DROP platform.
If you don't reside in California and therefore don't qualify for DROP, all is not lost—though you will have to invest a bit more time and/or money to remove your information from data broker sites than simply mass deleting via a single request.
To start opting out of data collection, download Consumer Reports' donation-based Permission Slip app, which tracks where your data can be found and follows up on removal requests. You can try to manually opt out by identifying data brokers and going directly to their sites, but this can be tedious, and there are a handful of other paid services that will do it for you. (None are perfect, nor do they guarantee 100% success.)
We also have a guide to blocking companies from tracking your online activities, which can help mitigate the problem somewhat before it begins.
The internet has become a vital tool for human connection, but it comes with its fair share of risks, the biggest of which concern your privacy and security. With big tech giants hungry for every ounce of your data they can get and scammers looking to target you every day, you need to take a few precautions to protect your online privacy and security. There's no foolproof approach to either, and unfortunately, the onus is on you to take care of your data.
Before you start looking for a VPN or ways to delete your online accounts, you should take a moment to understand your privacy and security needs. Once you do, it'll be a lot easier to take a few proactive steps to safeguard your privacy and security on the internet. Sadly, there's no "set it and forget it" solution for this, but I'm here to walk you through some useful hacks that can apply to whatever risks you might be facing.
When you install an app on your phone, you'll often be bombarded with pop-ups asking for permission to access your contacts, location, notifications, microphone, camera, and many other things. Some are necessary, while most are not. The formula I use is to deny every permission unless it's absolutely necessary to the app's core function. Similarly, when you're creating a profile anywhere online, you should avoid giving out any personal information unless it's absolutely necessary.
You don't have to use your legal name, real date of birth, or an email address with your real name on most apps you sign up for. Some sites also still use antiquated password recovery methods such as security questions that ask for your mother's maiden name. Even in these fields, you don't have to reveal the truth. Every bit of information that you put on the internet can potentially be exposed in a breach. It's best to use information that's either totally or partially fake to safeguard your privacy.
If your personal information is easily available on Google, and you want to get it removed, you can send Google a request to remove it. Check Google's support page for how to remove results to see specific instructions for your case. For most people, the simplest way to remove results about yourself is to go to Google's Results About You page, sign in, and follow the instructions on screen.
Most modern email services let you create unlimited aliases, which means that you don't need to reveal your primary email address each time you sign up for a new service. Instead of signing up with realemail@gmail.com, you can use something like realemail+sitename@gmail.com. Gmail lets you create unlimited aliases using this method, and you can use that to identify who leaked your data. If you suddenly start getting a barrage of spam to a particular alias, you'll know which site sold your data.
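As a rough illustration, plus-addressing is simple enough to generate and decode with a couple of lines of Python (the helper names here are my own, not part of any mail service's API):

```python
def make_alias(address: str, tag: str) -> str:
    """Build a plus-address alias, e.g. realemail+sitename@gmail.com."""
    local, domain = address.split("@", 1)
    return f"{local}+{tag}@{domain}"

def alias_tag(address: str):
    """Recover the tag from an alias, revealing which site leaked it."""
    local, _ = address.split("@", 1)
    return local.split("+", 1)[1] if "+" in local else None
```

If spam starts arriving at realemail+shop@gmail.com, the tag "shop" tells you exactly which signup leaked or sold your address.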
When you take a photo, the image file contains a lot of information about you. By default, all cameras store EXIF (Exchangeable Image File Format) data, which logs when the photo was taken, which camera was used, and the camera settings. You should remove EXIF data from photos before posting them on the internet. If you're using a smartphone to take photos, it'll also log the location of each image, which can be used to track you. While social media sites may sometimes remove location and EXIF data from your pictures, you cannot always rely on these platforms to protect your privacy for you.
You should take a few steps to strip EXIF data before uploading images. The easiest way to get started is to disable location access for your phone's camera app. On both iPhone and Android, you can open the Settings app, navigate to privacy settings or permissions, and deny location access to the Camera. This means you won't be able to search for a location in your photos app to find all photos taken there, and you'll also lose out on some fun automated slideshows that Apple and Google create. However, it also means that your privacy is protected. You can also use apps to quickly hide faces and strip metadata from photos.
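To make the idea concrete, here's a minimal sketch of how EXIF stripping works at the byte level: the metadata lives in a JPEG's APP1 segment, which can simply be dropped while copying the rest of the file. This is an illustration only (it assumes a well-formed file with no padding between segments); use a dedicated tool for real photos.

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Drop EXIF (APP1) segments from a JPEG byte string.

    Simplified sketch: assumes a well-formed JPEG with no fill bytes
    between segments.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: image data follows, copy verbatim
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        segment = jpeg[i:i + 2 + length]
        # Keep every segment except APP1 blocks carrying "Exif" metadata
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

Image dimensions and compression tables survive untouched; only the metadata segment disappears, which is why stripped photos still open normally.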
While you're at it, don't forget that screenshots can also leak sensitive information about you. Some types of malware steal sensitive information from screenshots, so be sure to periodically delete those, too.
Nearly every single AI tool is mining your data to improve its services. Sometimes, this means it's using everything you type or upload. At other times, it could be using things you've written, photos or videos you've posted, or any other media you've ever uploaded to the internet, to train its AI models. There's not much you can do about mass data scraping off the internet, but you can and should be careful with your usage of AI tools. You can sometimes stop AI tools from perpetually using your data, but relying on these companies to honor those settings toggles is like relying on Meta to keep your data private. It's best to avoid revealing any personal information to any AI service, regardless of how strong a connection you feel with it. Just assume that anything you send to an AI service can, and probably will, be used to train AI models or even be sold to advertising companies.
Yes, big companies like Facebook or TikTok can track you even if you don't have an account with them. Data brokers collect vast troves of information about your internet visits, and sell it to advertisers or literally anyone who's willing to pay. To limit the damage, you can start by following Lifehacker's guide to blocking companies from tracking you online. Next, you can go ahead and opt out of data collection by data brokers. If that's not enough, you can also use services that remove your personal information from data broker sites.
Now, I'm sure some of you are thinking that using a VPN will protect you from most of the tracking on the internet. That may be true in some cases, but using a VPN 24/7 is not the right approach for most people. For starters, it just routes all your traffic via the VPN company's servers, which means that you need to place your trust in the company's promises not to log your information, and its ability to keep your data safe and private. It also won't protect you from the types of data leaks that might happen from, say, publicly posting photos tagged with location data.
Many VPN providers claim to be able to protect you, but there are downsides to consider. Some companies such as Mullvad and Proton VPN have earned a solid reputation for privacy, but using a VPN all the time can create more problems than it solves. Your internet speed slows down a lot, streaming services may not work properly, and lots of sites may not load at all because they block VPN IP addresses. In most cases, you'll probably be better off if you use adblockers and an encrypted DNS instead.
For most people, ad blockers are a good privacy tool. Even though Google is cracking down on ad blockers, there are ways to get around those restrictions. I highly recommend using uBlock Origin, which also has a mobile version now. Once you've settled on a good ad blocker, you should consider also using a good DNS service to filter out trackers, malware, and phishing sites on a network level.
Having a DNS service is like having a privacy filter for all your internet traffic, whether it's on your phone, laptop, or even your router. I've been using NextDNS for a few years, but you can also try AdGuard DNS or ControlD. All of these services have a generous free tier, but you can optionally pay a small annual fee for more features.
Almost all apps these days send telemetry data to remote servers. This isn't too much of a problem if you only use apps from trusted sources, and can help with things like automatic software updates. But malicious apps or even poorly managed ones may be more open with your data than you would like.
You can restrict some of that by using a good firewall app, which lets you monitor incoming and outgoing internet traffic from your device and block apps from sending unwanted data to the internet. Blocking these requests can hamper some useful features, like those automatic app updates, but it can also stop apps from unnecessarily sending data to online servers. There are some great firewall apps for Mac and for Windows, and you should definitely consider using one for better online privacy.
I've probably said this a million times, but I will repeat my advice: use a good password manager. You may think it's a bit annoying, but this single step is the easiest way to greatly improve your security on the internet. Password managers can take the hassle of remembering passwords away from you, and they'll also generate unique passwords that are hard to crack. Both Bitwarden and Apple Passwords (which ships with your Mac, iPhone, and iPad) are free to use, and excellent at their job. Go right ahead and start using them today. I guarantee that you won't regret it.
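If you ever need a one-off strong password outside your manager, the generation step itself is simple: pick characters uniformly with a cryptographically secure random source (Python's secrets module, never the random module). A minimal sketch:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, and one digit."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

At 20 characters drawn from a ~94-symbol alphabet, the result is far beyond any realistic guessing attack, which is exactly what password managers generate for you automatically.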
We’ve been using passwords to protect our various accounts for a few decades now, and, to be honest, we’re not very good at it. Many of us use the same simple, easy-to-remember passwords for all of our accounts—convenient for logging in, but horrible for security. Not only will a bad actor (or computer) be able to guess that password easily, but they’ll also try it against your other accounts. Before you know it, you have multiple breaches, some of which may involve financial or private information.
There are a number of steps you can take to beef up your password security, of course. First, you can use a complex and unique password for each of your accounts, making sure to never reuse a password. A well-made password can be impossible for a human to guess, and virtually impossible for a computer to guess. But even if a company loses your password in a data breach, using two-factor authentication (2FA) can protect you further. Without a trusted device that either generates or receives a 2FA code, your password becomes essentially useless to hackers. And since you didn’t repeat passwords, they can’t try it on your other accounts. That’s what makes this combo a winning strategy.
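The codes that authenticator apps generate are typically TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time window, truncated to six digits. The whole mechanism fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Because the code depends on a secret that never leaves your device plus the current time, a stolen password alone isn't enough to log in; the remaining weakness is that the six-digit code itself can still be phished in real time.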
But many, if not most, of us aren’t using this winning strategy. Many are still at risk, or putting their organizations at risk, with insecure authentication measures. As such, there’s a push for consumers to adopt a new form of authentication, something that combines the convenience of passwords with the security of 2FA, all without you needing to remember a thing: passkeys.
Passkeys are a (relatively) new authentication method that offer a similar experience to passwords without actually involving a password of any kind. The measure relies on something called public key cryptography: When you create a new account with a passkey, or you create a passkey for your existing account, a “key pair” is generated. One of these keys is public, and is stored by the company that runs the account in question. This key is not a secret, and, theoretically, could be stolen or lost in a breach. However, the other key is a secret. This private key is stored on your device—such as a smartphone, tablet, or computer—and is what is used to actually authenticate your identity.
To create the passkey, you simply need to use your device’s built-in authentication method. That might mean a face scan, a fingerprint scan, or a PIN. Once you successfully authenticate yourself, the passkey is established. To log in in the future, you simply authenticate with one of those same three methods. If it goes through, the system then checks with the account that holds the public key to confirm your identity, and you're in—no password required.
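The challenge-response idea behind this flow can be sketched with a toy Schnorr-style signature. The group parameters below are deliberately tiny and insecure, and the function names are my own; real passkeys (WebAuthn/FIDO2) use standardized curves such as P-256 or Ed25519. The point is only the shape of the protocol: the website keeps the public key, while your device keeps the private key and proves possession of it without ever revealing it.

```python
import hashlib
import secrets

# Toy parameters, NOT secure: a real deployment uses ~256-bit groups.
P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup (a square mod P)

def keygen():
    private = secrets.randbelow(Q - 1) + 1   # secret, stays on your device
    public = pow(G, private, P)              # stored by the website
    return private, public

def _challenge(r: int, message: str) -> int:
    return int.from_bytes(hashlib.sha256(f"{r}|{message}".encode()).digest(), "big")

def sign(private: int, message: str):
    k = secrets.randbelow(Q - 1) + 1         # fresh nonce per signature
    r = pow(G, k, P)
    e = _challenge(r, message)
    s = (k + e * private) % Q
    return e, s

def verify(public: int, message: str, signature) -> bool:
    e, s = signature
    # g^s * y^(-e) recovers g^k only when s was built from the matching key
    r = pow(G, s, P) * pow(public, (Q - e % Q) % Q, P) % P
    return _challenge(r, message) == e
```

The server only ever sees the public key and the signature over its login challenge, which is why a breach of the server's database leaks nothing that lets an attacker sign in as you.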
Your passkeys are securely stored on your devices, typically in a “vault” such as a keychain or password manager. Apple generates and stores passkeys in iCloud Keychain, for example. If you use a password manager, like Bitwarden or 1Password, you can create and store passkeys there. Any device that has access to that password manager can then also access the passkey for authentication.
However, you don't need to log into your accounts on the device that contains the passkey. If you're using a different device, say a friend's computer or a tablet that doesn't contain the passkey, you will have the option to use your trusted device to authenticate. For example, say you want to check your bank account on your PC, but your account uses a passkey stored on your iPhone. You can choose to authenticate using the passkey device, which will trigger the account's site to present a QR code. You can scan the QR code on your iPhone, authenticate using Face ID, Touch ID, or your PIN, and you'll log in. This is also how the feature works when signing into accounts on devices that don't store passkeys directly, like a PlayStation 5.
The short answer? Yes. Passkeys are an extremely secure authentication method. Not only are they way more secure than passwords, they're even more secure than 2FA. 2FA is great, and certainly better than using a password alone, but it is possible for attackers to steal the authentication codes—especially when these codes are SMS-based. This can be as sophisticated as hacking into the platforms that send your codes, or as simple as a phishing scheme: Scammers can pose as representatives of the account in question, and trick you into sharing your 2FA codes with them. As such, 2FA, while secure, has an inherent phishing flaw.
Passkeys don't have this flaw. You can't be tricked into giving over one of your passkeys, nor can a hacker steal it from your device. The system won't prompt you to authenticate unless you are visiting the exact domain for the platform, which means scammers can't create dummy sites that trick you into logging in: The passkey process will simply not start. Importantly, signing in via a passkey requires the trusted device to be physically close to the device you're logging into. As such, a hacker can't send you an image of a QR code, trick you into scanning it, and then convince you to authenticate to log in. Unless you're in the same room as the hacker, they're not getting your passkey.
One of the most common concerns regarding passkeys is what happens when you lose the device the passkey is stored on. After all, if the secret key is kept only on your smartphone, what happens if it is lost, stolen, or breaks?
As it turns out, there are a few possibilities here. First, it is true there is a risk of losing the passkey for good should you lose access to the trusted device. If you choose to store your passkeys on a physical security key, like a YubiKey, losing or breaking the key will mean losing your passkey. However, depending on the account, you may have recovery options—such as answering security questions to prove your identity. This will be case-dependent, of course: If your account only has a passkey set up, and that passkey is only stored on one device, you may lose access to the account. Check if your accounts offer recovery options, or even backup authentication measures. Some accounts may still have you create a password, even if you opt into passkeys, because of this possibility.
But more importantly, you don’t need to keep your passkeys to just one device. There are secure protocols that allow you to sync your passkeys between different devices. For example, if you create a passkey on your iPhone, iCloud Keychain securely syncs that passkey to your other connected Apple devices as well, such as an iPad and Mac. That way, when you want to log into your account on any of these devices, the option to authenticate with your passkey will be available on any—you just need to use Face ID, Touch ID, or present your PIN, and you’re in.
At this time, no. This is probably passkeys' biggest drawback. Unlike passwords, which you can export to other password managers, passkeys are stuck to the service they're generated with. If you set up a passkey for your Google Account on your iPhone, you won't be able to directly transfer it to, say, an Android device. If your passkey lives in Bitwarden, you can't transfer it to Google Password Manager. As such, you should try to create passkeys on the platform you most widely use. If you're fully in the Apple ecosystem, Apple's iCloud Keychain will work well for you. But if you have a mix of devices from different manufacturers, you'd be better off creating passkeys on a cross-platform password manager. You can always authenticate with your iPhone, of course, but the true convenience of passkeys is quickly logging in on a device that already contains the passkey.
That doesn't mean you need to keep this service forever, however: You can set up new passkeys for existing accounts on other services, so you can securely get rid of your old passkey devices. However, make sure to keep the old device until you have the passkey established on a new one. If something goes wrong, and you're not able to set up a new passkey on another device, you'll need the old device to confirm your identity—unless you have an alternative authentication option, like a password.
Passkeys aren't perfect: In practice, they can be a bit complicated, especially when working across different devices. But at their best, they offer both convenience and security. If you aren't particularly tech savvy, or if you're not totally entrenched in one tech company's ecosystem, it might be a bit too early to go all-in on passkeys. But passkeys can keep your accounts safe and secure, so long as you understand these other weaknesses.
You may be doing everything you can to protect your privacy online—using tools like multi-factor authentication, a secure password manager, and a VPN—but unfortunately, not all privacy-focused apps and services are actually doing what they promise. In its November fraud and scam advisory, Google is warning users about VPN apps and extensions that appear legitimate but are actually vectors for malware.
A VPN, or virtual private network, makes your internet activity much more difficult to track by routing your traffic through a different connection rather than your regular internet service provider (ISP). This allows you to hide your IP address and location, obscure your browsing data, and protect your information and devices from bad actors.
According to Google, malicious VPNs (posing as real ones) are delivering infostealers, remote access trojans, and banking trojans to user devices once installed, allowing hackers to access sensitive personal data like browsing history, financial credentials, and cryptocurrency wallet information. This means that an app you rely on to keep your information private could be doing the exact opposite. Cybercriminals are capitalizing on user trust in these services, creating apps that look and feel like legitimate VPNs but are actually dangerous spyware.
As with any app or extension, only download or install a VPN from an official source like the Google Play store. While malicious programs do sometimes sneak through, it's typically safer and more reliable than sideloading through a messaging app or other unvetted site.
In January 2025, Google launched a VPN verification process to help users identify trustworthy VPN apps in the Google Play store. To earn a "verified" badge, VPN apps have to undergo a Mobile Application Security Assessment (MASA) Level 2 validation and opt into independent security reviews. Badges are awarded only to VPNs that have been published for at least 90 days and reach 10,000 installs and 250 user reviews.
Of course, this system isn't perfect either: As TechRadar reported earlier this year, a popular (free) Chrome VPN extension earned a badge and was later discovered to be spying on users. That's why you should rely on a reputable VPN service—which means you'll likely have to pay for it. Free VPNs are far more likely to be a privacy nightmare, and any app that sounds too good to be true probably is. You aren't going to get unlimited traffic at no cost without sacrificing something.
Finally, review VPN permissions carefully, and allow the minimum access possible for the app or extension to function. (You should do this with any app you download, and you should audit apps regularly to remove unnecessary permissions.) You can check your VPN service's support pages to find out which permissions are essential—this should not include access to your contacts, camera, microphone, or photos, for example.
Age verification is coming to app stores in Texas, meaning that users could soon be required to provide some form of identification in order to download anything from the Google Play and Apple App stores, regardless of the app's content.
Earlier this week, Gov. Greg Abbott signed the Texas App Store Accountability Act, which is set to take effect at the beginning of next year. The new law, which purports to be about keeping children safer online, has significant implications for user privacy and data security.
The Texas law will require Google and Apple to verify the age of all users before they download any app through their app stores, even if the app has no sensitive or age-specific content. Parents will have to provide consent for minors to download apps or make purchases, and app stores will have to confirm that parents or guardians have the legal authority to make those decisions for their children. App stores will also have to share which age categories users fall into (child, young teen, older teen, or adult) with app developers.
While the specifics are yet to be determined, that means Google and Apple will have to collect some form of user identification, whether that's a driver's license, passport, or other government-issued ID, or biometric data, such as a facial scan, for anyone using their app stores in Texas. Even more documentation will be required for parents proving legal guardianship of minor users.
Utah passed a similar bill earlier this year making app stores responsible for centralizing age verification, and while its requirements are slightly less onerous, they're not much better when it comes to your privacy.
Privacy experts—as well as both Apple and Google—have raised alarms about the implications of age verification, noting that requiring all users to turn over sensitive personal information included in data-rich documents that can prove your age is a form of digital surveillance. It creates an identifiable record of online activity and increases the risk that the data will be used, shared, or sold (unlike physical ID checks, which are momentary and impermanent).
Age verification also presents security concerns around how sensitive user data is collected and stored. Data breaches are a fact of life in 2025, and individuals may have very little (if any) visibility into whether and how their information is used and stored, and little recourse if it is compromised.
Aaron Mackey, free speech and transparency litigation director at the Electronic Frontier Foundation (EFF), notes that the Texas law doesn't have any built-in protections for user data, such as minimizing what is collected and transmitted and for how long it is retained. Plus, there are risks present in the likelihood that app stores will utilize third-party verification services to comply with the requirements, meaning data is available to multiple parties.
The EFF and the ACLU also argue that online age verification requirements violate users' First Amendment rights, as they may make protected free speech inaccessible—if adults don't have a valid form of identification, or facial recognition inaccurately estimates age, or minors can't get parental consent—or force people to choose between shielding their privacy and being online.
"If I have to provide this level of personal information because the government mandates it just to download an app from an app store, I'm going to be significantly worried about what happens to my data, and I might just decide to not actually download the app or even use this app store," Mackey says.
How far would you go to keep yourself private online? There’s little doubt that advances in technology over the past three decades have eroded traditional concepts around privacy and security: It was once unthinkable to voluntarily invite big companies to track your every move and decision—now, we happily let them in exchange for the digital goods and services we rely on (or are hopelessly addicted to).
Most people these days either tolerate these privacy intrusions or outright don’t care about them. But there’s a growing movement that believes it’s time to claim our privacy back. Some are working piecemeal, blocking trackers and reducing permissions where they can, while not totally ditching modern digital society as a whole. Others, however, are as hardcore as can be—a modern equivalent of "going off the grid."
We put out a call looking for the latter—people who are going to great lengths to protect their privacy in today’s mass surveillance world. We received a number of insightful, fascinating, and unique situations, but for this piece, I want to highlight four specific perspectives: "Ed," "Jane," "Mark," and "Jay."
The first respondent, I’ll call Ed, since their privacy journey began with the Edward Snowden leaks: “I'd known something was likely up…as early as 2006[.] I remember headlines about AT&T possibly spying, but high school me didn't take it too seriously at the time. The Snowden leaks, when I was in college, really opened my eyes. Ever since, I've taken steps to protect my privacy.”
Ed says the biggest step they’ve taken towards a digitally private life has been their Proton account. If you’re not aware, Proton is a company that offers apps designed for privacy. Their email service, Proton Mail, is the most famous of the company’s products, but Proton makes other apps as well. Ed uses many of them, including Proton VPN, Proton Calendar, and Proton Drive. Ed pays for Proton Ultimate, which costs them nearly $200 every two years (a new account is now billed yearly at $119.88). You don’t have to pay for Proton, but your experience is much more limited. That’s not totally dissimilar to Google’s offerings, which give you more features if you pay, but most people can definitely get by with a free Google Account. I'm not so sure the reverse is true.
Speaking of Google, Ed does have a Google Account, but rarely logs into it, and doesn’t keep anything attached to it—Ed stores all files, for example, in Proton Drive or Tresorit (another end-to-end encrypted service).
Ed uses SimpleLogin for throwaway email addresses. That’s not just for the times Ed wants to avoid giving their email address to someone. According to them, they use an alias anytime an organization asks for their email, and frequently delete it when it’s no longer useful. Each online purchase gets its own alias, and that alias is deleted once the purchase is complete. Whenever Ed travels, they use an alias for any flights, hotels, and rental cars they use. Once the trip is up, they delete the alias. If one of those aliases receives a spam message, they delete it as well.
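Ed's one-alias-per-purpose habit is easy to picture as a tiny helper. The sketch below is hypothetical: real aliasing services like SimpleLogin create and delete aliases through their own apps and APIs, and `alias.example` is a placeholder domain. It just illustrates the pattern of a labeled, disposable address for each merchant or trip.

```python
import secrets
import string

# Placeholder domain for illustration; a real aliasing service
# (e.g. SimpleLogin) would supply its own domains via its app or API.
ALIAS_DOMAIN = "alias.example"

def make_alias(purpose: str) -> str:
    """Build a labeled, disposable address like 'hotelbooking-x7k2f9@alias.example'."""
    # Keep only lowercase letters from the purpose label.
    slug = "".join(c for c in purpose.lower() if c in string.ascii_lowercase)
    # Random token so each alias is unique and unguessable.
    token = "".join(
        secrets.choice(string.ascii_lowercase + string.digits) for _ in range(6)
    )
    return f"{slug}-{token}@{ALIAS_DOMAIN}"

# One alias per trip or purchase; delete it once it has served its purpose.
print(make_alias("Hotel Booking"))
```

The label makes it obvious which organization leaked or sold the address if spam starts arriving, which is exactly why Ed deletes an alias the moment it turns noisy.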
Ed’s smartphone of choice is iPhone, and although Apple arguably has the best reputation for privacy in big tech, Ed is no fan: “Apple is no bastion of privacy of course, but they seem to be the least-worst of the big tech companies.” Ed doesn’t use iCloud for any backups: Any iPhone files are kept in Tresorit.
That iPhone, of course, contains apps. But each app is there for a reason, and no app gets a permission unless it requires it: “I'm ruthless about apps and app permissions. If I'm not going to use the app regularly, I uninstall it. I grant only those permissions I think the app reasonably needs.” Ed protects their mobile internet traffic with Proton VPN, and only accesses the web via Firefox Focus, a special version of Firefox designed for privacy.
Location services are always off on Ed’s iPhone, unless they’re using Apple Maps for navigation. Once they arrive at their destination, Ed disables location services again. They also have an interesting trick for getting back home without revealing their actual address: “Additionally, when I'm navigating home, I don't enter my home address. I enter the address down the street just as an extra layer so I'm not entering my actual home address…I'll end navigation and turn off location while still driving…if I know the rest of the way home myself."
Most of us deal regularly (if not daily) with spam calls. Not Ed: They use the “Silence Unknown Callers” setting on iOS to send all numbers not in the Contacts app to voicemail. Ed then reviews all voicemails, and if a caller didn’t leave a message, blocks the number. Our initial call out for this piece referenced how using a VPN can sometimes block incoming phone calls, but Ed isn’t bothered by that: “Since most calls these days are scams or telemarketing, and most people I do want to talk to aren't going to call me anyway, I see this as more of a feature than a bug.”
For their desktop computing needs, Ed uses Windows. They admit they aren’t privacy experts when it comes to Microsoft’s OS, but they do what they can, including changing all privacy settings and uninstalling all programs they don’t use. (That includes OneDrive and Edge.) They also run a clean version of Windows 11 after following Lifehacker’s guide. Firefox is their go-to PC browser, and they use a variety of extensions, including:
ClearURLs: removes trackers from links.
Decentraleyes: blocks data requests from third-party networks.
Disconnect: blocks trackers from "thousands" of third-party sites.
Firefox Multi-Account Containers: separates your browsing into siloed "containers" to isolate each session from one another.
PopUpOFF: blocks pop-ups, overlays, and cookie alerts.
Privacy Badger: blocks invisible trackers.
Proton VPN: Proton's Firefox add-on for its VPN.
uBlock Origin: popular content blocker.
Ed didn’t say how much of an impact this array of extensions and settings has on their browsing, save for YouTube, which they admit does sometimes give them trouble. However, Ed has workarounds: “When YouTube wants me to 'sign in to confirm you're not a bot,' changing VPN servers usually does the trick.” Ed also uses the audio challenges for reCAPTCHA prompts, rather than the pictures, since they don’t want to help train Google’s “braindead AI.”
Ed deleted all their social media accounts, including Facebook, X, Instagram, and LinkedIn. Though they’ve never had TikTok installed on their phone, they will watch it in Firefox when a friend sends them a video.
While Edward Snowden may have kicked off Ed’s interest in personal privacy, "Jane" has many strong beliefs motivating their desire for privacy. They are concerned about data brokers and Meta’s practices of tracking internet activity, and how these companies build profiles based on that data to sell to third parties; they’re concerned about the possibility of telecommunication companies tracking our locations via cellular towers; they worry about US law enforcement and agencies reviewing citizens’ social media accounts and tracking people. Their focus on privacy is fueled by true concern for their own well-being, not only the value of privacy as a concept.
Jane uses a VPN on all of their devices. Instead of Proton, however, Jane opts for Mullvad. They enable ad and tracker blocking, as well as a kill switch, which blocks your internet if you lose connection with the VPN—thus protecting your connection from being leaked out of the secure network.
I’m a big advocate for strong and unique passwords and proper password management, but Jane definitely beats me when it comes to secure credentials. Jane uses six- to eight-word passphrases generated by diceware, a method that chooses words based on dice rolls. Something like this diceware generator will roll a die five times, then find a word in a word bank based on that five-digit number. You can repeat this as many times as you want to come up with a passphrase built from random words. Jane saves all of their passphrases to a password manager, except for the ones for important accounts, like their bank. They commit those to memory, just in case someone breaches their password manager.
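The dice-roll mechanics behind Jane's passphrases can be sketched in a few lines of Python. This is an illustrative toy, not a real diceware tool: a genuine wordlist (like the EFF's large wordlist) has 7,776 entries, one per possible five-roll outcome, while the sample word bank here is a small stand-in.

```python
import secrets

# Toy word bank for illustration only. A real diceware list (like the
# EFF large wordlist) has 7,776 words -- one for every possible outcome
# of five six-sided dice (6^5).
WORDS = [
    "apple", "brick", "cloud", "dune", "ember", "frost",
    "gravel", "harbor", "ivory", "jigsaw", "kettle", "lantern",
]

def roll_die() -> int:
    """One roll of a fair six-sided die, using a CSPRNG."""
    return secrets.randbelow(6) + 1

def diceware_word(word_bank: list[str]) -> str:
    """Roll five dice and map the five-digit result to a word."""
    rolls = [roll_die() for _ in range(5)]
    # Treat the rolls as a base-6 number (digits 1-6 become 0-5).
    index = 0
    for r in rolls:
        index = index * 6 + (r - 1)
    # Modulo only because the toy bank is smaller than 7,776 words;
    # a full list indexes directly, keeping every word equally likely.
    return word_bank[index % len(word_bank)]

def passphrase(n_words: int = 6, sep: str = "-") -> str:
    """Join n random words into a passphrase, e.g. 'brick-dune-...'."""
    return sep.join(diceware_word(WORDS) for _ in range(n_words))

print(passphrase())
```

The appeal of the technique is that the entropy is easy to reason about: with a full 7,776-word list, each word adds about 12.9 bits, so a six-word passphrase carries roughly 77 bits while staying memorable.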
Jane doesn’t just use Mullvad’s VPN: They also use the Mullvad browser, which has those protections built in. Mullvad’s strict privacy settings break persistent logins on websites, so any sites Jane wants to stay logged in on are kept in the Brave browser. For both Mullvad and Brave, Jane uses uBlock Origin.
“From time-to-time I do run into sites that will block access due to being on a VPN or blocking ads and trackers. Instead of disabling [my] VPN completely, switching my connection to one of Mullvad's rented servers instead of ones they own usually helps. Barring that, I occasionally go into [uBlock Origin] and temporarily whitelist a needed [URL] ([ReCAPTCHA] etc). This works for me to get around site blocks most of the time.”
Jane uses a Mac, and configured macOS based on various privacy guides. But instead of an iPhone, Jane opts for a Google Pixel. That might surprise readers who assumed hardcore privacy enthusiasts would break away from Google entirely. But Jane doesn’t run stock Android: Instead, they installed GrapheneOS, an open-source OS designed for privacy, on their Pixel. Jane configured the Pixel so that following a restart, it only unlocks with a seven-word diceware passphrase; for general use, they use a fingerprint scan and a six-digit PIN. If they don’t unlock their Pixel for a while, the phone automatically reboots to put it back into this “First Unlock” state. They also keep airplane mode on at all times to disable the phone’s radio communications, but maintain a wifi connection, with timed automatic Bluetooth and wireless disabling.
Jane also deleted all their social media accounts after downloading all data associated with those platforms.
“Mark” is perhaps the least hardcore of the respondents in this story, but that makes their experience both interesting and relatable. Unlike most of the people we spoke to, Mark is still on Facebook and Instagram. That’s due to their job, which requires them to be on those platforms, but they’ve been “systematically” deleting everything they can from their 19-year Facebook history and saving the data to an external hard drive. Mark doesn’t follow anything that isn’t relevant to their job, and only uses Facebook and Instagram inside the DuckDuckGo browser. They don’t react to posts they see, and thanks to their privacy tactics, Facebook doesn’t show them relevant ads anymore. “If there is an ad I'm actually interested in I'll search it up in a different browser rather than click it.”
Mark has had four Google Accounts in their time online, and has deleted two so far. Like Facebook, they have to use Google for their job, but they delegate all their work to Chrome. All other browsing runs through Firefox, DuckDuckGo, or Tor. The latter is perhaps best known for being the browser of choice for browsing the dark web, but what makes it great for that is also what makes it a great choice for private browsing.
Unlike others in this story, Mark hasn’t de-Googled themselves completely. In addition to using Chrome for work, Mark has a phone mask through Google, and has their contacts, calendar, and maps tied to the company—though they are moving away from Google as much as they can. They've been running through their old emails to find and delete outdated accounts they no longer use. Any accounts they do need now use an email mask that forwards to a Mailfence account, an encrypted email service.
Mark was the only respondent to talk about entertainment in relation to privacy: “I've also been switching to physical media over streaming, so buying CDs and DVDs, locally as much as possible. I'm lucky to have a local music store and a local bookstore...one of the owners of our bookstore wrote a book on how to resist Amazon and why. Any book I want, I can either order through them or on Alibris. For music, I use our local record store and Discogs.”
When shopping online, Mark uses a credit card mask, but still uses the card itself when shopping in person. They want to start using a credit card mask in retail locations, as Janet Vertesi, an associate professor of sociology at Princeton University, does, but they haven’t quite gotten there yet.
What really piqued my interest most about Mark, however, wasn’t their perspective on their own privacy concerns, but the concerns around the privacy of their kids: “They each have a Gmail, two of them have Snapchat. Their schools use Gaggle and Google to spy on them. I don't even know how to start disconnecting them from all this...I was a kid during the wild west of the internet and this feels like getting back to my roots. My kids are end users who understand apps and touchscreens, not torrenting their music or coding a basic website. (Is this my version of "I drank out of the garden hose"?) I feel like Big Data has its grip on the kids already and I don't have a guidebook on navigating that as a parent.”
Mark’s current focus on their kids’ privacy includes deleting their health data from their local health system. That’s in part due to a data breach impacting the health system, but also the language about autism from Robert F. Kennedy Jr., the current Secretary of Health and Human Services.
"Jay's" origin story with personal privacy dates back to 2017. That year, Equifax suffered a major hack, where nearly 148 million Americans had sensitive data stolen and weren’t notified about the breach for months. Jay was frustrated: You don’t choose to give your data to Equifax, or any credit bureau, and yet so many people lost their data. They also felt that companies were not properly held responsible for these events, and lawmakers were simply too out of touch to do what was necessary to protect citizens’ privacy, so they took it upon themselves to protect their own data.
Ever since this incident, Jay freezes their credit: “It was frustratingly difficult back then, but nowadays, it is very easy (it just requires an account, which I use a burner email for)...The freeze will not allow anyone to pull credit for large purchases in your name, even if they have your social security number (and because of the data breach, someone probably does). I decided I wanted to pursue some privacy for the things I do have a choice over.”
From here, Jay de-Googled their life, including both Google Search and YouTube. They’ve found no issue with using alternative search engines, and, in fact, see Google getting worse, as it tries to show you results based on what it thinks it knows about you, not what is most relevant to your actual query: “The internet was supposed to be a place you went to find information, not where you became the information that companies take instead."
Jay uses tools to prevent fingerprinting, where companies identify you and track you across the internet, but worries that going too far with things like ad blockers puts a target on your back as well. Jay chooses to pick “a couple of effective tools,” and runs with those.
For their smartphone needs, Jay goes with Apple. Like Ed, Jay doesn’t believe Apple is perfect, and even considers their privacy policies a bit of a gimmick, but sees them as the better alternative to Android. Jay likes the security of the App Store, and the array of privacy features in both Safari and Apple Accounts as a whole. They highlight Safari’s “Advanced Tracking and Fingerprinting Protection” feature, which helps block trackers as you browse the web; iCloud’s Private Relay, which hides your IP address; and “Hide My Email,” which generates email aliases you can share with others without giving your true email address away.
Most of us are plagued with spam calls, but following the Robinhood data breach in 2021, Jay started receiving a flood of them. They decided to change their phone number and made a point of never sharing it with businesses. For the times they need to give out their number to parties they don’t trust, they use a number generated by My Sudo, which, for $20 per year, gives them a VoIP (Voice over Internet Protocol) phone number. It works with most services that rely on SMS, but it won’t function for two-factor authentication. (Which is fine, seeing as SMS-based 2FA is the weakest form of secondary authentication.) My Sudo lets you change your number for an additional $1, so if Jay’s number ever was compromised or started receiving too much spam, they could swap it.
Jay, like many respondents, deleted all social media services: “It has its place in society for a lot of people, and is no doubt a great way to connect. However, I found that the fear of deleting it was a lot worse than actually deleting it. The people you care about won’t forget you exist.” That said, Jay doesn't mind any of the obstacles this lifestyle does throw their way: “It is a challenging topic, as most people consider you a little bit 'out there' if you take steps to make your life a little less convenient, but more private. The modern world sells you convenience, while pretending it is free, and harvesting your data for so much more than you actually get out of your relationship to them.”
There's no one way to tackle personal privacy. Every one of the respondents to our query had something unique about their approach, and many had different motivations behind why they were so concerned about their privacy.
There are plenty of common through lines, of course. Most privacy people love Proton, which makes sense. Proton seems to be the only company that offers a suite of apps most closely resembling Google's while also prioritizing privacy. If you want your email, calendar, word processor, and even your VPN all tied up nicely under one privacy-focused umbrella, that's Proton.
But not everyone wants an ecosystem, either. That's why you see respondents using other VPNs, like Mullvad, or other private storage options, like Tresorit. These apps and services exist—they might just not be owned by one company, like Apple or Google (or Proton).
Google and Meta are another commonality, in that most privacy enthusiasts ditch them entirely. Some, like Mark, haven't been able to fully shake off these data-hungry companies; in Mark's case, that's because they need the platforms for work. And while hardcore privacy people delete their Google and Meta accounts, most of the rest of us have trouble de-Googling and de-Metaing our digital lives.
In general, though, the keys to privacy success include the following: Use a VPN to protect your internet traffic; prioritize privacy in your web browser, both through the browser itself, as well as extensions that block ads and protect your traffic; shield your sensitive information whenever possible, by using email aliases, alternate phone numbers, or credit card masks; use strong and unique passwords for all accounts, and store those passwords in a secure password manager; use two-factor authentication whenever possible (perhaps passkeys, when available); and stick to end-to-end encrypted chat apps to communicate with others. While there's always more you can do, that's the perfect storm to keep your digital life as private as reasonably possible.
Some might read through the examples here and see steps that are too much effort to be worth it. It might seem out of reach to ditch Gmail and Instagram, break certain websites, and force your friends and family to learn new numbers and email addresses to protect your privacy, especially if you don't feel your privacy has that much of an impact on your life. But even if you aren't sold on the concept of privacy itself, there are real-world results from sticking with these methods. Jay no longer receives spam calls and texts; Mark no longer sees ads that are freakishly relevant to their likes. It's a lifestyle change, to be sure, but it's not just to serve some concept of privacy. You can see results by changing the way you interact with the internet, all without having to actually disconnect from the internet, and, by extension, the world at large.