How to Easily Harden Signal

Modifying Signal stock settings to increase privacy.

*Legal disclaimer: I am not a lawyer and this is not legal advice. I am an artist and designer, I am not claiming this is the whole truth or the only truth about this subject; the things I say here are based on my experience and research. Also, I am not advocating any of this information be used in the perpetration of a crime, and I am not instructing, soliciting or condoning the perpetration of a crime. Also, I have not been paid or sponsored by any of the services mentioned herein.*

Signal is a very popular encrypted messaging app made by Open Whisper Systems. It’s probably the most widely adopted encrypted messaging platform right now. Signal is a well rounded, secure and easy to use application. Its purpose is to provide end-to-end encryption for its users, and it is pretty much unparalleled in its balance between usability and security. Signal shares many features with more common messengers that aren't security and privacy oriented, which makes it approachable for those who don't have the capacity, energy or time to devote to learning more complex software with a less functional UI. These qualities make Signal a great option, though Signal is far from the most bulletproof secure and private communication platform (it's still pretty damn good). I have some issues with specific functions of Signal, but it’s undeniably the strongest contender for its audience.

Signal has been used by activists, organizers and others seeking privacy for years, but it has become much more widely adopted this year. This is really exciting because it means more people can communicate within Signal’s encrypted ecosystem.

In the privacy tech world, “hardening” means fortifying a pre-existing program against security and privacy threats. Basically, hardening is changing settings and configuration to get the most out of an app, software or device. We can easily use Signal with all its default configurations, but we will get more security and value from Signal by making a few changes to the settings to “harden" the app. These are my recommendations for settings in Signal. As with all things privacy and security related, there are infinite interpretations of what the “best” way to do something is, so I’m not saying these are the “best” Signal settings — they’re just what I use based on my experience and research. Take each with a grain of salt and tailor your use to your threat model first and foremost. These settings may appear different to iPhone users, as iOS allows slightly less customization. These are pulled from Signal version 4.76.3 running on CalyxOS.


>App Access

• Screen Lock - ON
• Screen Lock Inactivity timeout - 30s-1m
◇ I always keep screen lock on. I recommend 1 minute or 30 second timeout so that whenever you set your phone down for a moment it locks. This will inhibit an adversary from accessing your Signal messages even if your phone is unlocked.
• Screen Security (Block screenshots in recents list and inside app) - ON
◇ I always keep screen security on. This is intended to prevent bad actors from screenshotting/recording anything inside Signal. The only time I ever would turn this off is if I absolutely had to screenshot something within Signal, and I would immediately turn it back on when I was done.
• Incognito Keyboard (request keyboard to disable personalized learning) - ON
◇ Incognito keyboard is a must. I don’t want any other app/process seeing what I type in Signal as that would defeat the purpose of using Signal. This setting basically just disables your phone keyboard’s ability to record and learn from your keystrokes.


• Always relay calls - ON
◇ I prefer to relay all Signal calls through the Signal servers, even though I also use other methods to obfuscate my IP. This will ideally prevent an adversary from exploiting Signal to learn your IP. This only applies to calls made from within Signal, it doesn’t route all your phone calls through Signal.
• Read Receipts - OFF
◇ I never use read receipts. I don't think it's helpful/necessary to share more information about what I am doing with any person I talk to. Not a really big deal in terms of security risk, but in terms of privacy I generally try to operate by the philosophy that sharing less information is always better.
• Typing Indicators - OFF
◇ Again, I don't think it's helpful/necessary to share more information about what I am doing with any person I talk to. If I'm typing, they'll know when they get the message lol.
• Generate Link Previews - OFF
◇ Link previews can seem helpful and convenient, but they’re a possible attack surface. Shortened links can also expose you to malware and IP data collection. It’s generally best practice to never click a redirect/shortened link without using a tool to view what it redirects to. Web-based redirect checkers will show you the link you’re being redirected to, meaning your IP is never exposed to the redirect service.
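If you’re curious what a redirect check actually does under the hood, here’s a rough Python sketch (the names here are mine, not any particular tool’s). It reads a shortened link’s redirect target without ever visiting the destination site — though note the HEAD request still reveals your IP to the shortener itself, which is why the web-based checker services do the lookup for you:

```python
# Sketch: read where a shortened link points without following the
# redirect. The unfollowed 301/302 surfaces as an HTTPError, and its
# Location header is the destination.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects so we only ever see the Location header."""
    def redirect_request(self, *args, **kwargs):
        return None

def resolve_redirect(url):
    """Return a redirect's Location header, or None if there is no redirect."""
    opener = urllib.request.build_opener(NoRedirect)
    req = urllib.request.Request(url, method="HEAD")
    try:
        opener.open(req)
    except urllib.error.HTTPError as e:
        # urllib raises the unfollowed redirect as an "error"; read its target
        return e.headers.get("Location")
    return None

# Usage (needs network): resolve_redirect("https://bit.ly/...")
```

This only shows the mechanics; in practice a hosted checker keeps even the shortener from seeing your IP.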

>Sealed Sender

Sealed sender is an interesting option in Signal that’s not offered by many of its peers. It means Signal can deliver messages without the sender’s identity being exposed to Signal’s servers in transit. This is basically like sending an envelope with no return address. One of my main issues with Signal is that it relies on phone numbers to identify users. I greatly prefer when platforms allow the user to set a unique username or identify with a random ID number. Sealed sender is Signal’s way of partially circumventing that vulnerability for the moment, until they figure out a way to avoid the use of phone numbers entirely. I keep sealed sender on for everyone, so I can use this function of Signal to communicate.

• Display Indicators - ON
◇ This tells you if a message you received was delivered using sealed sender.
• Allow from anyone - ON
◇ This setting is up to you - I keep it on because it means anyone can have the option of more private/secure way to message me, but do what works best for you.

>Signal PIN

I highly recommend setting a PIN for Signal. The PIN is used to lock Signal when you’re not using it, but also for registration lock, which prevents someone from using a SIM swapping attack to register your number with Signal so they can receive messages intended for you. This is crucial. Signal has gotten into some controversy for storing PINs on their servers for verification, but all in all I think the benefits of a PIN on Signal are worth the very slight risk.
• Change your PIN (N/A)
◇ This option is obviously only relevant if you want to change your PIN. If you feel your PIN has been compromised or you accidentally used the same PIN for a different account/device, it might be time to change it.
• PIN Reminders - ON
◇ These will help you remember your PIN. It should be something you've never used for any other account/device, which can be tricky to remember. Having consistent reminders which won't lock you out of your device is a helpful way to memorize.
• Registration Lock - ON
◇ This is intended to prevent another person from registering a Signal account with your phone number. If someone were to attempt to register an account with your number by spoofing it or SIM swapping, registration lock would require them to input your PIN, thus adding another layer of protection.
DO NOT use a PIN you use for your phone lock screen or anything else. It should be completely unique to Signal. I recommend using an 8-digit PIN, as it will have more entropy, meaning it will be harder to crack via brute force. Four digits is better than nothing, but longer is always preferable.
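The “8 digits beats 4” advice is just keyspace arithmetic, which a few lines of Python can show (the guess rate below is a made-up figure for illustration, not a measurement of any real attack on Signal):

```python
# Back-of-the-napkin brute-force math for numeric PINs.
# GUESSES_PER_SECOND is a hypothetical attacker rate, purely illustrative.

GUESSES_PER_SECOND = 1_000_000

for digits in (4, 6, 8):
    keyspace = 10 ** digits  # numeric-only PIN: 10 choices per digit
    seconds = keyspace / GUESSES_PER_SECOND
    print(f"{digits}-digit PIN: {keyspace:>12,} combinations, "
          f"~{seconds:,.2f}s to exhaust at {GUESSES_PER_SECOND:,} guesses/s")
```

Every extra digit multiplies the attacker’s work by ten; going from 4 to 8 digits makes exhaustive guessing 10,000 times more expensive.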



>Notifications

• Show - Name Only
◇ I never want Signal to show the content of a message in a notification, because that would make the content available outside the app, to the OS and other potential threats, so I use the “name only” or “no name or message” option for this setting. I always turn off lock screen notifications in my OS too, because I don’t want any content to show when my phone is locked.

General recommendations


Keep Signal updated (this doesn't just apply to Signal, but to pretty much any application). As developers find bugs, they fix them and roll out new updates. These updates aren't always just new features; they often include fixes to security issues and bugs which could otherwise compromise the intended use of the application.

A note about phone numbers

Signal recommends against using a burner number to register your Signal account, but I would personally never use a real number which is registered to my name for Signal. I honestly at this point wouldn’t even use a number tied to my actual SIM card for Signal. This is because your phone number can expose data about you, and can be compromised by SIM swapping attacks. If you have a phone number that is registered to your real name (or that of a family member or friend) or if you pay your phone bill with a card/account tied to your real name, do not count on a Signal account registered to that number to be anonymous. If you use your real number with Signal, you’re tying all the data your cell provider has about you to your Signal account.

Signal says not to use a burner number because you might lose control of the number, so someone else could register with the same number or you could lose access to recovery options. Here’s my plain and simple solution: just DON’T lose control of the number. Treat this differently than you would a one-and-done burner number, make sure you keep the number for as long as you will be using it for Signal.

You can do this in a couple of different ways. You could use a burner SIM card number, but I wouldn’t recommend it because that number will be permanently tied to the IMEI of whatever phone you put the SIM card in. What I recommend instead is using a VOIP number. There are a few options for VOIP numbers, but I’d recommend MySudo or Hushed for those who don’t want to go through the process of setting up a Twilio number. Twilio is the cheapest of the three, but requires much more involvement.

No matter what VOIP service you choose, make sure you do not use any real personally identifying information or payment methods, because that would defeat the purpose. I really only recommend Hushed and MySudo because they’ll work with anonymous payment methods, though it’s worth noting MySudo won’t work without the Play Store or Apple App Store, so if you use a de-Googled Android ROM you’re currently unable to use MySudo.

Use a randomly generated fake name (not one you thought of) and pay with either anonymized cryptocurrency or a Visa gift card bought with cash. Most services will not allow you to set up an automatically recurring subscription on a prepaid Visa gift card, so you will have to manually reload minutes/months/data when you need to. Signal only requires the phone number for registration purposes, so you don’t need to keep the VOIP app on your phone after you’ve finished registering. Signal will not use that number’s data or minutes at all after it’s registered — the number basically just becomes an ID.

Set Signal as your default messaging application (Android only)

On Android, I would recommend setting Signal as your default messaging app. You will still be able to text people who do not have Signal from the app, but it will tell you if someone you normally text via SMS has Signal, and you can move to communicating with them that way. Unfortunately, Apple doesn't allow anything to replace the stock messenger on iPhones. If someone you talk to has Signal, the send button will be blue and a lock will show beside each message. If you want to text them via SMS but you normally talk via Signal, and you've made Signal your stock messenger, long press the send button and it should turn grey or give you the option to send the message via SMS.

Use ephemeral/self destructing messages

If you’re talking about something sensitive/private (or just all the time if you feel like it) make use of Signal’s ephemeral message function. E2EE doesn’t really matter if someone gets ahold of either endpoint device, as the message has to be unencrypted for you to read it. Ephemeral messages offer a solution to this. It’s worth noting that in 2018 a security researcher found that while using Signal’s macOS desktop client, copies of disappearing messages were stored in plaintext in the macOS notification bar. This issue seems to be entirely dependent on notifications being on, and doesn’t really have to do with the disappearing message function of Signal directly — but regardless I would recommend keeping up to date on the issues raised in Signal’s GitHub repo if you rely on Signal for extremely sensitive communications (or pick something you have more direct control over).

Be smart, be safe.

I think it’s just generally best practice to never trust software (or hardware for that matter) 100%. I don’t think paranoia is the solution, but I do think having a little wariness and being a little careful is always a good idea. Also remember that even if your messages are encrypted, it could always be a different person than you think you’re talking to who is holding the device on the other end. The person you’re talking to could be coerced, being held against their will or their device could simply be compromised.

What happens to your phone if you get arrested? Part 1: What is a UFED?

*Legal disclaimer: I am not a lawyer and this is not legal advice. I am an artist and designer, I am not claiming this is the whole truth or the only truth about this subject; the things I say here are based on my experience and research. Also, I am not advocating any of this information be used in the perpetration of a crime, and I am not instructing, soliciting or condoning the perpetration of a crime.*

Getting arrested throws a person into a world of uncertainty. The experience of being totally subjected to the power of the state is overwhelming and it can be hard to think clearly about anything when you are getting arrested. It’s certainly difficult to remember what your rights are and what careful steps you have to take to preserve them. Most people know not to talk to the police because it can be self-incriminating, but most people don’t think about the ways their smartphone can be even worse than a loud mouth. One step we can take to reduce harm is to think about mobile device security before anyone gets arrested, and implement protective measures as a regular part of our life.

So what is a UFED, and what does it have to do with getting arrested?

The short answer is that a UFED is an incredibly invasive tool which can crack your phone’s encryption and extract tons of data about you, your contacts, your online habits and your communications. UFED stands for Universal Forensic Extraction Device. It’s a product line made by Cellebrite, an Israeli tech company. There are multiple devices in the UFED line, including both hardware devices and software meant to be used on designated forensic analysis computers. The current UFED lineup includes the following products:

  • UFED Ultimate is the “industry standard” forensic extraction device according to Cellebrite. It’s a software-only version of the UFED meant to be used as an application on a designated computer for forensic analysis. UFED Ultimate can bypass pattern, password and PIN locks on most devices, extract logical, file system and physical data (including deleted data) and then reassemble that data into human-readable reports. Most major metropolitan police departments have purchased UFED Ultimate in the years between 2016 and 2020.

  • UFED 4PC is another software-only version, meant to be used as an application on a designated computer for forensic analysis. Most police stations in the US use this at a bare minimum if they don’t have the dedicated hardware devices.

  • The UFED Touch2 is a portable touch screen device, almost like a small tablet, which “enables comprehensive extraction capabilities anywhere, whether in the lab, a remote location, or in the field.” These kinds of portable options are often preferred because the sooner forensic images can be taken, the less there is a chance that a person could have an associate of theirs remotely wipe their phone.

  • The UFED Touch2 Ruggedized is the same as the Touch2 but made to withstand harsher environmental conditions.

  • The UFED Ruggedized Panasonic Laptop is a pre-configured UFED-specific laptop meant to expand the functions of the basic software and the Touch2 devices to give even more forensics options.

Beyond the basic UFED line listed above, Cellebrite also makes lots of more advanced options for departments and agencies willing to shell out a little more money:

  • UFED Cloud is a software suite which can extract both public and private domain information from online sources. It can rip and decrypt social media data, instant messaging, cloud file storage such as Google Drive and iCloud, and other web based content. Cellebrite even goes as far as to say that one of the problems UFED Cloud aims to solve is service providers delaying meeting subpoena demands for private information. The implication here is that UFED Cloud is a workaround for police and investigators who don’t want to wait for the bothersome legal process of obtaining a warrant and a subpoena for a cloud provider their suspect is using.

  • Cellebrite Premium is an add-on software which increases the UFED line’s power to crack encryption. Cellebrite Premium gives the user the option to crack all current devices running any version of iOS up to the latest one, all Samsung flagship devices and many other Android phones.

  • Cellebrite Responder is a software which is intended for “police stations, correction facilities, border control checkpoint, or on-the-go” situations, according to Cellebrite. It’s likely the software used by ICE and CBP to make forensic copies of phones of people entering the US, even those who haven’t been arrested. This software is meant to be used in real-time, whereas UFEDs generally are used to generate a report which is reviewed later. The suggested use cases in Cellebrite’s product overview include quickly confirming if a person is a threat in a triage scenario, using their location data to see where a person has been before allowing them to cross a border, and using real-time information to confirm a person’s claims while they are being interrogated.

  • Cellebrite Macquisition is a tool intended to crack, extract data from and otherwise exploit Mac computers. Cellebrite says Macquisition “is the first and only solution to create physical decrypted images of Apple’s latest Mac computers utilizing the Apple T2 Chip.” Macquisition is basically a UFED streamlined just for Apple computers; it extracts files, email, chat, address book and other data. It can also extract data from RAM.

Cellebrite offers more products than just these, but these are the products I think are most relevant to this discussion. Their suite of products together makes a formidable force. The UFED suite can crack the stock encryption of pretty much any of the most common phones. Cellebrite says they can crack any iPhone up to the 11, all Samsung flagship phones and most other Android phones. The only real solution I can see (which would maintain some degree of usability) is running a custom ROM on an Android phone with heavy encryption enabled and using ephemeral communications. Short of doing that, using encrypted ephemeral communications should provide some additional layer of protection. If you’re using Signal or a similar platform which allows setting a PIN for the app, I’d also recommend that you set an 8+ digit PIN which locks the app after one minute of inactivity. Make sure this PIN is different from your device PIN, and preferably not the same as any other PIN you use anywhere else.

So what data can they really get with these things, why should I care?

UFEDs can extract call logs, texts, app data, contacts, all account credentials that have been logged into on the phone, all wifi networks connected to by the phone, Bluetooth connection logs, voicemails, deleted messages and more. They also support data extraction from thousands of apps, meaning they can pull DMs, posts, history and other data from inside individual apps. All that data provides a lot of information a prosecutor could piece together to try to make a convincing case. This kind of data is often taken as objective truth, and that perception makes it easier to fabricate narratives with said data. In many cases there is only enough data to form conjecture, but that doesn’t stop a narrative from being built around extracted data.

This data can also be used to target people in your network. If one person is arrested at a protest, but they’ve been communicating with all the organizers of the protest via an insecure device with stock encryption, all the people they have talked to will likely get a door knock from LE. Your lack of security could mean trouble for the people you love, it’s all intertwined.

Additionally, if you have an iPhone and the UFED is unable to extract meaningful data from your phone itself, Cellebrite has also built in the capability to decrypt and decode iCloud data provided by Apple. iCloud backups include pretty much everything that would be found on your phone otherwise. Apple will turn over any data the police request, and their encryption guarantees mean nothing when LE has access to UFEDs.

Why does this matter if you don’t think you have anything incriminating on your phone?

1. You don’t know what is or is not incriminating. There are 30,000+ pages of federal law. The laws are always changing, and we’re currently seeing active criminalization of protest, civil disobedience and dissent.

2. They can extract your contacts and social circle to investigate them as well. This could lead to your contacts being subject to door knocks, raids, detention and general harassment and intimidation by LE.

3. The data extracted from your phone could be used retroactively against you or the people you talk to on your phone.

Your privacy is intrinsically linked to everyone you interact with.

But… who really has these? Is there really a risk for me?

Yes, there really is. I couldn’t find any list of agencies that have UFED technology, so I did some research to find primary sources (mostly FOIA request documents). This is far from a complete list of US agencies with known UFED purchases, it’s just those that I could come up with in a few hours of research. I’ll probably add more to this list as I have time. Many of these original documents show purchases of between two and ten UFEDs in a single year alone. I think it’s safe to assume that if this many medium sized metropolitan police departments have multiple UFEDs, probably pretty much every single local department in the US has them. Special thanks to journalists who submit FOIA requests for uncovering this data, agencies only disclose this kind of info if citizens submit requests so we wouldn’t know about it otherwise!

Alameda County District Attorney's Office -

Baltimore County Police Department -

California DOJ -

Charlotte-Mecklenburg Police Department -

Chicago Police Department -
2020

Colorado State Police -


Delaware State Police (Criminal Intelligence and Homeland Security Section) -

Houston Police Department -

Iowa Department of Public Safety (Division of Criminal Investigation) -

Kansas City Police Department -

Maryland Department of State Police -

Mesa Police Department -

Minneapolis Police Department -

Nebraska State Patrol -
2015 (page 58)

New Jersey State Police -

New Mexico Attorney General -

New Mexico High Intensity Drug Trafficking Area - Las Cruces

North Carolina DPS (Division of Prisons Administration) -

Oklahoma City Police Department -

Riverside County Sheriff's Department -- Annual budget of $75,000 for Cellebrite contracts and devices
2013 (page 43)

County of San Diego Sheriff's Department -
2013 (page 49)
-- Used in District Attorney's Office
-- Sheriff's Department Gang Task Forces
-- Regional Computer Forensic Laboratory
-- High Intensity Drug Trafficking Border Crime Suppression Team

San Antonio Police Department -

San Jose Police Department -

San Leandro Police Department -
2014 $14,082.99

Tucson Police Department -

Washington State Patrol -

This post was originally written by MW and published on the clearnet (warning) at
This blog is mirrored on tor at writeas7pm7rcdqg.onion/m-w/
This blog is mirrored on the clearnet (warning) at
If you’d like to support further writing, subscribe via MW's paid substack, or make donations via BTC to MW's wallet at
With love and in solidarity,

Compartmentalization P. 1 - Email

Compartmentalizing email addresses allows us to have more privacy.

Compartmentalization in the context of digital privacy means creating separate “compartments” for different parts of our lives to reduce the potential harm caused by an attack or leaked data. This writing is about email compartmentalization specifically, but compartmentalization could also mean having different phone numbers, different phones entirely or even different computers for different purposes – it’s a method which can be applied in many different ways.

The compartmentalization approach works on the premise that we will all have our security or privacy compromised somehow, eventually. Essentially every time we create an online account, input our information in a checkout page, or otherwise give our personally identifying information to a company, we have to accept the inevitability that our data will be leaked or breached. Compartmentalization is a way to reduce possible harm because it allows us to contain those individual incidents and keep them from affecting all our accounts and devices. Compartmentalization is basically a direct application of the old idiom “don’t put all your eggs in one basket.”

Email compartmentalization means using different alias email addresses as a way to protect your personally identifiable information (PII). There are several reasons why I’m recommending compartmentalizing email addresses as a primary privacy practice.

1. Credential Stuffing Attacks

It seems like every couple weeks there’s another major data breach, with companies offering only canned apologies and little accountability. Since the start of 2019 there have been at least 42 major data breaches affecting companies including Facebook, DoorDash, Microsoft, Capital One, and even possibly the US Census Bureau. If you’re online at all, your data is most likely in publicly available breaches. Credential stuffing is when an adversary uses breached credentials to try to attack other accounts owned by the same person. If you use the same email and password combination and that login information is leaked by even one company, any person can use those credentials to log in to all your other accounts with zero hacking knowledge. If you’re wondering if your credentials have appeared in a breach, I’d suggest checking with Have I Been Pwned, a service which collects and indexes breach data to help people stay informed about where their personal information has appeared online.

Using separate credentials to login to each of our accounts allows us to compartmentalize parts of our life from one another, so when our credentials are leaked or accounts breached it has as little destructive effect as is possible. In addition to having different email addresses, every single account we have should have a completely unique password, and furthermore, it should be a password that cannot be remembered (we will get more into password management at a later date).
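A password that “cannot be remembered” takes only a few lines to generate with Python’s standard secrets module (a password manager does the same job with less friction; this just shows the idea):

```python
# Generate a unique, unmemorable password per account using the
# standard library's cryptographically secure random source.
import secrets
import string

def make_password(length=24):
    """Return a random password drawn from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One fresh password per compartment -- never reused anywhere.
creds = {site: make_password() for site in ("email", "bank", "forum")}
```

Each account gets its own throwaway string, so one leaked credential can never be stuffed into another account.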

2. Data Brokerage Companies

Data brokerage companies create and sell profiles of every single person who uses the internet. They track us across nearly all websites, even if we use plugins to block tracker scripts, even if we are careful. They assemble profiles by tying together information we didn’t even know we were giving out with information like our email addresses which we did willingly give out. These profiles make for incredibly invasive ad targeting, and invaluable resources for adversaries. If we are careful and isolate our compartments from one another effectively, we can deny them the ability to piece together personally identifying information about us.

When your email address is the same across all accounts you hold, those accounts are all tied together to create a detailed and accurate profile of you and your behavior. These companies don’t just sell this data to advertisers; the Secret Service buys location data, which would otherwise require a warrant to obtain, straight from data brokerage companies to circumvent constitutional rights. Compartmentalizing email addresses won’t solve this problem, but it does make it much more difficult for data scraping to automate a clear profile of your online activity. These companies know more than is imaginable about us, why give them any more?

3. Isolating content

If your work and personal emails are compartmentalized, there’s less chance of accidentally sending a confidential work document to a personal contact or a private email to a work contact. If you’re a person who works with particularly sensitive documents, an accidental breach of that data could mean a lawsuit. More generally, if you’re a person who wants/needs to keep an identity private from your family, employer or otherwise, email compartmentalization can reduce the risk of accidentally crossing your streams of communication.

4. Control and comfort

If we implement compartmentalization down to the level that every single service has a unique email address, we can feel totally comfortable giving them that address. I don’t particularly trust that any business is going to keep my data safe on their servers, but I am also a working class person and sometimes companies offer further discounts for email subscribers. I would never feel comfortable giving a company my personal email, but by using a service like AnonAddy, which allows me to make a unique email for each account I create, I can give Starbucks an email address I will never use for anything else and feel totally comfortable doing so. We have been made to feel we have no control over our data, but there are some concrete things we can do to regain data autonomy. This is one step in that direction.
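To illustrate the idea, here’s a sketch of per-service alias generation. The address format, account name and domain below are placeholders I made up; a service like AnonAddy generates and manages real aliases for you:

```python
# Sketch of per-service email aliasing. The domain and account name are
# illustrative placeholders, not a real aliasing service's format.
import secrets

def make_alias(service, account="user", domain="anonaddy.example"):
    """Return a service-specific alias with a random, unguessable tag."""
    token = secrets.token_hex(4)  # 8 hex chars so aliases can't be guessed
    return f"{service.lower()}.{token}@{account}.{domain}"

print(make_alias("Starbucks"))  # one address used for Starbucks and nothing else
```

Because every service sees a different address, a leak from one of them never links back to the rest of your accounts, and a spammed alias can simply be deleted.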

Here’s an example to emphasize the importance of compartmentalization.

Alex uses the same credentials for everything they do. They log in to their online classes, their bank account, their shopping accounts on store websites and all the apps on their phone using the same email and password. Alex's password is leaked in a breach of a smaller service they use, and it appears in plaintext all over the internet in pastes of the breach data. Alex has a dedicated adversary who is trying to access and exploit their accounts. This person could be an investigator, stalker or other attacker—it doesn’t matter who they are or what their skill level is. Because Alex uses the same password with all their accounts, when this adversary finds their credentials in the breach data online, they can use the credentials to log in to any of their accounts. This is especially pernicious because it won't raise the same flags as brute force hacking or other exploits – it basically looks like Alex is logging in because the adversary knows the correct login credentials.

Even if Alex were to find out their data had been leaked before an adversary did, it would be somewhat of a nightmare to change every single account password and manage that scale of crisis response. If Alex had compartmentalized their credentials, the problem would have ended at the breach of the first account. That's the only account they would need to change the password for because that's the only account which the adversary would be able to gain access to with the leaked credentials.

I put off compartmentalizing my email for a long time because it sounds tedious and daunting, but it’s not nearly as bad as it sounds. We will get to other kinds of compartmentalization in the future (hint: compartmentalizing devices, sandboxing apps, virtual machines and Qubes OS), but for email the following methods work well.

Base Level Email Compartmentalization

The first, most basic kind of compartmentalization I would recommend is what I consider the base level email compartmentalization. By base level, I mean this is something I think every single person should do, something I think we should teach our kids when they set up their first email accounts, and something which requires very little change in behavior. In fact, I think many people already do this even for reasons other than privacy.

Base level email compartmentalization means having separate email addresses (compartments) for personal, commercial and work emails. These are the compartments I've found to work best for me but if you feel like you'd rather organize your base level compartments a different way, do what works for you. Some other compartments I've heard people find helpful are financial, school, and dating specific email addresses. For people who need to be able to dictate their presentation different ways in different circles, compartmentalization could offer increased safety by isolating conversations, app accounts, purchases, subscriptions and other communications which could involuntarily out a person if they were made public. Compartmentalizing can offer us some degree of freedom and comfort in participating online in ways we don’t always feel safe presenting in meatspace. Generally, if you’re subscribed to, logged in to or otherwise using a service which would jeopardize your safety if your name publicly appeared associated with it (as in a data leak/breach), I would recommend creating a new email compartment just for that. Moreover, any person who has sensitive communications could make a more specific compartment for comms about that thing only, such as “protest organizing,” “labor union membership,” “anarchist forums,” or “torrenting sites” to name a few.

This means you would have distinct email addresses for each of your compartments. For me, hypothetically, I could have the following compartments (these are not real email addresses, they’re made up as an example - please don’t email them).

Commercial compartment:

Work compartment:

Personal compartment:

Sensitive compartment 1 (for p2p filesharing):

Sensitive compartment 2 (for organizing):

If you need to use your name in your email for work or personal use, that should be fine as long as you maintain good separation between your compartments. However, I would not recommend using your actual name for sensitive compartments or commercial email, as that would be personally identifying information, and the point is for those accounts not to be tied to your actual identity. For my sensitive compartments, I would recommend using randomly generated strings of letters and numbers to make the email addresses as untraceable as possible.
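If you want random local parts for your sensitive compartments, a cryptographically secure generator is the right tool for the job. Here is a minimal sketch using Python’s standard library; the domain `example.com` is a placeholder, not a real provider, and the function name is mine, not from any tool mentioned here.

```python
import secrets
import string

# Characters that are safe in the local part of an email address.
ALPHABET = string.ascii_lowercase + string.digits

def random_address(length=16, domain="example.com"):
    """Generate a random, non-identifying address for a sensitive compartment."""
    local = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return f"{local}@{domain}"

print(random_address())  # something like k3v9x2m8q1z7p4w6@example.com
```

Because `secrets` draws from the operating system’s secure randomness, the resulting address carries no trace of your identity, unlike a “clever” handle you might invent yourself.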

For added obfuscation one could use different email providers for each compartment, but that won't be necessary for most users. The use of separate email providers mainly would be to deter an adversary from trying to connect the two accounts to find out they're owned by the same person. I would definitely suggest using a different email provider if you're making a compartment for highly sensitive activities, especially those which might put you at risk of legal action.

This base level compartmentalization means we can give our “commercial” email to a store for a receipt without worrying that our work inbox will now be bombarded by spam emails and phishing attempts. We don’t have to worry that our personal email (which will never be given to a company or posted publicly) will become a target for adversaries. Base level email compartmentalization contains the damage of any single leak to one compartment.

Individualized Compartmentalization

The second level of email compartmentalization is individualized compartmentalization. This means using an individual email address for each individual service/communication, as in a dedicated address used only for your Starbucks account. This is clearly a little more overhead and upkeep, but worth it for total control. I would absolutely not recommend creating hundreds of different email accounts to individualize your compartments, as it could be very easy to lose track of them. Instead, try a service like AnonAddy for individualization. I also want to specifically state that I do not recommend using a secondary forwarding service like AnonAddy for anything really sensitive like your primary social media accounts or banking accounts. Use a base level compartment for those, so you know you have control over it. This tool is mainly for use with commercial accounts which don’t have such high stakes. Frankly, if I lost access to my 7-Eleven account I wouldn’t panic.

The reason I recommend doing this through a service like AnonAddy is convenience. There are a number of other ways to do this, and similar services out there if you’re interested in looking around, but I’ve found AnonAddy to be the middle ground between convenience and privacy. Lots of hardcore privacy advocates seem to recommend doing everything with the most difficult and cryptic tools possible, but I think that if we are going to really have sustained, long term privacy, we have to find a middle ground between convenience and security: if something is so inconvenient that I don’t want to use it, or I am going to use it wrong, it doesn’t make a difference that I am using it at all. I also want to note that I am not an affiliate, employee, or otherwise paid by AnonAddy in any way; this is not an advertisement – I just like the service. AnonAddy allows you to create an infinite number of alias emails, all of which forward to a specific inbox. It also allows GPG encryption for forwarded emails, custom domains and self-hosting. AnonAddy is also open source.

AnonAddy works by letting you create unlimited aliases at your own subdomain, which you choose when you register, and forwarding everything sent to them. If you have a base level compartment for commercial emails, you can have your individualized AnonAddy aliases forward to that inbox. The coolest thing about AnonAddy, in my opinion, is that these aliases can be created on-the-fly, meaning that if you’re asked for your email on the spot, you can tell them any combination of words, and if they email that address it will forward to you. If I am checking out at a bodega, and they tell me they’ll give me 50% off my purchase if I sign up for emails, I can make up an address on the spot, and when they email that address it will forward to my base level commercial compartment in my actual email inbox. As long as everything after the @ is correct, it doesn’t matter what comes before it.
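The on-the-fly behavior described above is essentially catch-all routing: anything at your alias domain forwards to one real inbox. Here is a toy model of that logic; `ALIAS_DOMAIN` and `BASE_INBOX` are made-up placeholders, not real addresses or AnonAddy internals.

```python
# Toy model of catch-all alias forwarding. The domain and inbox below
# are hypothetical examples, not real accounts.
ALIAS_DOMAIN = "yoursubdomain.example.com"
BASE_INBOX = "commercial-compartment@example.com"

def route(incoming_address):
    """Forward mail addressed to any local part at ALIAS_DOMAIN to the base inbox.

    Only the domain has to match; the local part (everything before the @)
    can be invented on the spot, which is what makes this usable at a
    checkout counter.
    """
    local, _, domain = incoming_address.partition("@")
    if domain.lower() == ALIAS_DOMAIN and local:
        return BASE_INBOX  # one of our aliases: deliver to the real inbox
    return None            # not our domain (or empty local part): ignore

print(route("bodega-discount@yoursubdomain.example.com"))  # forwards
print(route("someone@unrelated.example.net"))              # ignored
```

The design point is that the alias service, not you, absorbs the bookkeeping: every invented local part maps to the same compartment without any per-address setup.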

Individualized compartmentalization is really useful for commercial/spam email signups and for untrusted contacts. If you’re a journalist who doesn’t want to give out your actual email, but you need to give people a means of contact, AnonAddy (or similar services) can protect your actual email address. I would not recommend using a forwarding service for banking/financial accounts, important social media, work accounts or school accounts. This is because it adds an extra intermediate layer, which increases both your potential loss of control and your attack surface.

In summary, I recommend creating base level compartments by registering individual email addresses for each compartment. Once you’ve created your base compartments, I recommend using AnonAddy, or a similar service, to create individual email addresses for every single account you have, every list-serve you’re signed up for, and even, in some cases, individual contacts.

TL;DR + Simple guide.

Compartmentalization means having separate “compartments” or identities for each part of your life. This segments your digital presence in a way that makes it harder for companies, adversaries and attackers to pwn you.

Your email address and passwords appear in breaches, so you should have a different email and password for each account you have. If you recycle passwords and emails, you are putting yourself at risk.
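You can check whether a password has already appeared in breach data without ever sending the password anywhere, using the k-anonymity scheme of the Pwned Passwords API: only the first five hex characters of the SHA-1 hash leave your machine. This sketch leaves the network call out; `response_text` stands in for the API response body.

```python
import hashlib

def hash_parts(password):
    """Split the SHA-1 hash into the 5-char prefix sent to the API
    and the suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def appears_in_response(suffix, response_text):
    """Each line of the API body is 'SUFFIX:COUNT'; match locally."""
    for line in response_text.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

In practice you would fetch `https://api.pwnedpasswords.com/range/<prefix>` and pass the body to `appears_in_response`; a match means that password has been seen in a breach and should never be reused.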

I use two methods for compartmentalization.

1. Base level compartmentalization – this means having separate email addresses for each compartment. This could look something like the following (these are not real email addresses, they’re made up as an example - please don’t email them).

Work compartment: – this is the email I would use for emailing an employer, clients or registering work accounts with software/services.

Personal compartment: – this is the email I would use for emailing my family, partner and my friends about non-work topics. This is also the email I would use if I was signing up for social media accounts.

Commercial compartment: – this is the email I would use for signing up for accounts with online stores, email receipts from purchases at shops and other uses of email which are likely to generate spam. This compartment could also be used for throwaway social media accounts.

Sensitive compartment 1 (for p2p filesharing): - this is the email I would use for filesharing trackers, so those accounts are not tied to my other accounts. For sensitive compartments, I would recommend using randomly generated strings of letters and numbers to make the email addresses as untraceable as possible.

Sensitive compartment 2 (for organizing): - this is the email I would hypothetically use for any communications about political organizing. For sensitive compartments, I would recommend using randomly generated strings of letters and numbers to make the email addresses as untraceable as possible.

2. Individualized compartmentalization – this means having separate email addresses for each service/account, even within the same base level compartment. To do this, I use a service called AnonAddy, which allows you to register unlimited alias email addresses which forward to your actual base level email account. This protects your actual, base level, email address so it’s not leaked into the public sphere. This could look like the following.

Commercial base level compartment:

Boba shop individual compartment:

QuickTrip individual compartment:

Starbucks individual compartment:

Work base level compartment:

Client 1 compartment:

Client 2 individual compartment:

Client 3 individual compartment:

This post was originally written by MW and published on the clearnet at
This post is found on tor at http://writeas7pm7rcdqg.onion/m-w/we-are-all-connected
This blog is found on tor at http://writeas7pm7rcdqg.onion/m-w/
This post is mirrored on the clearnet at
This blog is mirrored on the clearnet at
If you’d like to support further writing, subscribe via MW's paid substack, or make donations via BTC to 3PnjHL8kwGaTFbgYoBtKLUasKqv2khJq4R
With love and in solidarity,

We are all connected all the time.

An argument for privacy as mutual aid.

*Legal disclaimer: I am not a lawyer and this is not legal advice. I am an artist and designer, I am not claiming this is the whole truth or the only truth about this subject; the things I say here are based on my experience and research. Also, I am not advocating any of this information be used in the perpetration of a crime, and I am not instructing, soliciting or condoning the perpetration of a crime.*


Social Network: I am not using this term to refer to a specific social networking platform (i.e. Facebook or Twitter). I am using it in its broader sense, to mean a network of people who have interpersonal relationships with one another, who talk to each other and who interact with each other.
Mutual aid: Mutual aid means the cooperative and reciprocal distribution of services and resources for mutual benefit. In other words, people supporting each other at the same time. Mutual aid means we help each other instead of waiting for a charity, government or outside entity to help us. Mutual aid is based on sharing the abundance we have, and it directly counteracts the scarcity mindset which is fundamental to the maintenance of capitalism.
Privacy: this may seem like an obvious one, but I think it’s important to distinguish the specific way I am using the word privacy in this context. Privacy means the ability to keep information and data out of the reach of anybody who is not specifically granted access. Having privacy means that you get to decide who knows what about you.
Anonymity: Anonymity means a person’s identity is not disclosed or known. Anonymity does not guarantee privacy, in fact anonymity is often used as a tool when privacy is not an option. For example, a whistleblower who is leaking secret documents might desire that they remain anonymous but would not want the documents to stay private, as that would defeat the purpose of leaking them; the point is that they enter the public sphere. Anonymity can be an important component of privacy and privacy can help us maintain anonymity, but they are distinct ideas. I will expand more on this in future writing.
Encrypted/Unencrypted: Encrypted means a file, message or other piece of data/info is locked so that it becomes unreadable without the correct decryption key. Both static files on a device and data which is in motion, like an email, web traffic or instant message, can be encrypted. End-to-end encryption (E2EE) is a term used to describe data in motion which is encrypted the whole time it is traveling. Only the sender and recipient’s devices have unencrypted versions of that data, no intermediate person would be able to read it. One of my upcoming posts this month will be a deeper dive into the nuances of encryption because it’s too big of a topic to discuss adequately here. Encrypted does not mean inherently secure or private as there are lots of different kinds of encryption and it’s a term which is often misused by companies to represent their product as secure, private or anonymous when it is not.
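To make the encrypted/unencrypted distinction concrete, here is a toy one-time pad (XOR with a random key as long as the message). This is a teaching sketch only; real messengers like Signal use vetted protocols, never hand-rolled XOR.

```python
import secrets

def xor_bytes(data, key):
    """XOR two equal-length byte strings. Encrypting and decrypting
    are the same operation because XOR is its own inverse."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # random key, same length as message

ciphertext = xor_bytes(message, key)   # unreadable without the key
recovered = xor_bytes(ciphertext, key) # the key recovers the plaintext

print(recovered)  # b'meet at noon'
```

The ciphertext is what an eavesdropper sees in transit; without the key it is indistinguishable from random noise, which is the whole point of encrypting data in motion.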

Privacy as Mutual Aid.

If there’s one thing that’s clear to me after spending countless hours since mid-June poring over the contents of recently leaked internal police documents, it’s that our digital networks intrinsically link us: if one person’s privacy and security are compromised, they become a conduit for attacks on everyone else in their social network. Furthermore, the state is aware of this and actively exploiting our social groups. We can no longer afford to ignore it; privacy practices must be a part of our digital etiquette. Corporate tech overlords, state surveillance apparatuses, and data harvesting companies all have their hands in our pockets literally all the time. In 2020, this means more than just inconvenience or uneasiness; it can literally mean the difference between life and death, freedom and detention, as numerous documented cases have shown. The only way any one of us can begin to shake the looming threat of surveillance is if we do it together, because we are all connected all the time. Privacy can and should be viewed as a form of radical mutual aid. It’s not about you or me as individuals; privacy is about preserving our collective freedom: from each according to their abilities, to each according to their need.

It has been so encouraging to see so many people become interested in the concept of mutual aid as the pandemic has changed our relationships to one another. Prior to this year, the term mutual aid was rarely heard outside anarchist spaces; now it’s a topic of discussion in the mainstream media, and that’s exciting. Mutual aid has clearly taken root as a core part of the growing anti-authoritarian movement in the US and across the globe. My hope in writing this is that the same gains made by applying mutual aid theory to things like grocery distribution and community care can be made in the realm of privacy and security culture.

Common Struggle.

One of the most important parts of mutual aid is the basis of common struggle. The fundamental understanding that we are in this together is often a guiding principle in anti-authoritarian work, and it gives us reason to coordinate mutual aid efforts. We certainly experience different intersections of oppression, and liberation is absolutely not uniform or homogeneous, but operating from the basis that our different kinds of struggle are intertwined with one another means we can work together to liberate each other simultaneously. If you’re not a member of the state or the wealthy, you are subject to their policing, oppression and control. If you’re subject to their policing, oppression and control, you’re subject to their surveillance. If you’re subject to their surveillance, we’re in this together. Simple as that. As long as we divide our own struggles, as long as we feel we aren’t responsible to one another, as long as we fail to see ourselves as connected to our communities and social networks, we are compromising our own ability to fight together.

The most common rationalization which prevents us from pursuing more private, anonymous and secure digital habits is the idea that a person who has nothing to hide should not be afraid of surveillance. This argument is widely employed by governments to justify state surveillance; the state tells us that invasion of our privacy is only a threat if we are Doing Something Wrong. I am using capitals here to make the distinction between actually doing something which is morally wrong, and Doing Something Wrong as defined by the state. The argument could be made for abolition of the moralistic stance that anything is morally wrong, but I’ll leave that argument for others to have. For almost ten years we’ve known that the NSA employs dragnet surveillance to collect every single conversation they can get their hands on, yet they tell us we shouldn’t be worried unless we’re Doing Something Wrong. We’re trained to willingly accept invasive surveillance, and further, to be excited at the prospect that said surveillance might lead to the capture of the bad guys (those who are Doing Something Wrong).

Who decides who the bad guys are, and how do we know we’re not them? What happens when we begin to understand that the bad guys are a moving target created by the state to preserve the legitimacy of the state? I am not advocating that we all feel that we’re actually always doing something wrong, but I am advocating that we acknowledge that in the eyes of the state everyone is potentially Doing Something Wrong. If we imagine ourselves as people who are not Doing Something Wrong, and thereby do not see digital surveillance as harmful, we endanger those in our networks who do find surveillance to be a risk, even if they make efforts to lock down their own privacy. In relation to privacy, common struggle means that we have to acknowledge surveillance is a risk to all of us, not just those who are Doing Something Wrong. We may experience different risks from lack of privacy, just as we experience different kinds of oppression, but we must acknowledge that we are all harmed or made at risk of harm by any one of us not having autonomous control over our data; therefore privacy must be a mutual aid effort.

None of us can know all the laws.

The most glaring problem with the “nothing to hide” argument is that the state constantly creates, rewrites and alters the definition of Doing Something Wrong. There simply is no possible way for us to know if we are doing something wrong, because there are too many laws and the laws are in constant flux, not to mention the fact that most cases are basically totally up to the discretion of the individuals involved in the criminal (in)justice system. Lawyers, judges and even supreme court justices have acknowledged that there are basically just too many laws for any one person to be aware of. This is clear reason to believe that we simply can never be certain we aren't Doing Something Wrong. Supreme Court Justice Stephen Breyer's statement in a 1994 court case explains this well.

“First, the complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law, make it difficult for anyone to know, in advance, just when a particular set of statements might later appear (to a prosecutor) to be relevant to some such investigation.”

There are nearly 30,000 pages of federal law. It is highly likely, if not certain, that something in your email inbox, call logs, text messages or search history could be construed to be in violation of one of the more than ten thousand federal laws. The 1986 Computer Fraud and Abuse Act (CFAA), which is the primary document upon which computer crime prosecutions are based, is so vague and riddled with loopholes that it allows a prosecutor to find many commonplace computer habits criminal. The point I'm getting to here is that even if only those who are Doing Something Wrong have something to hide, that still implicates pretty much all of us, because we can never be certain we aren’t Doing Something Wrong. This argument has been made repeatedly in favor of privacy practices, but somehow the “nothing to hide” myth remains central to the persistent justification of surveillance.

Having something to hide ≠ doing something wrong.

Another point of weakness in the “nothing to hide” myth is the idea that having something to hide is contingent upon Doing Something Wrong. Even if at this point you still feel you definitely are not Doing Something Wrong, you still have something to hide. Suppose, for example, you have saved credit card information logged in an account on an app. If that card information is unencrypted on the app’s servers (which it would likely be legally required to be if the proposed EARN IT Act is signed into law), their servers could be hacked and your card info could be easily leaked publicly. In the event your card information shows up in a list of breached card credentials, you find out very quickly that you do indeed have something to hide. Privacy, anonymity and encryption are necessary for everyone in everyday interactions. Just as escrow isn’t only for dark web markets, encryption isn’t reserved for secretive military comms; it’s pretty much everywhere in our digital world. You don’t have to be Doing Something Wrong to want privacy, and your desire not to publish your credit card info illustrates this perfectly.

The same argument could be made for any private information we store digitally. If you aren’t comfortable with all of your emails being published in your town’s newspaper for all to see, you have something to hide. If you don’t want to publicly post the content of your phone’s gallery, you have something to hide. If you send a text or picture to an intimate partner that you wouldn’t send to your whole contact list, you have something to hide. If you are a businessperson and you don’t want audio recordings of all of your meetings mailed to your competitors, you have something to hide. The list could go on and on, but hopefully these examples make clear how unjust it is to equate having something to hide with Doing Something Wrong. It’s also important to note that having something to hide shouldn’t be a source of shame for us either; the myths which are used to justify surveillance and invasion of privacy often make us feel ashamed that we desire privacy. We all have something to hide, which means we deserve transparency from the companies we trust with our data, and I feel we owe it to ourselves and each other to invest in privacy practices.

We have to care, even if we don’t feel we’re the target.

Even if you feel you don’t have anything to hide, and you’ve done nothing wrong, your privacy is directly tied to that of everyone you interact with. Even if you are totally comfortable publishing your own credit card information, home address, browser history, nude photographs of yourself, your medical records, your texts and emails etc, your network includes people who would not do the same. We owe it to each other to determine whether our privacy practices are increasing or reducing risk for ourselves and our networks. We have the opportunity to leverage our knowledge to reduce harm instead of creating it.

Imagine this scenario. Alex is a person who feels he has nothing to hide; Alex feels that even if someone were to get their hands on his data, it wouldn’t matter. He doesn’t see himself as a criminal, he doesn’t think he has broken any laws, and he doesn’t have anything he feels is really private on any of his digital devices.

Alex has a friend named Sarah who is seriously concerned about her privacy. Sarah is an organizer with a group that advocates for human rights. Her work is legal and should be uncontroversial, but fascist groups keep targeting Sarah’s organization with harassment, death threats and intimidation.

Alex communicates with Sarah through the stock text messaging app, as he does not want to download any encrypted chat platforms because it’s a hassle. He has her number and her email saved in his contacts under her full name. Alex uses a free contact management app from the app store which backs up his contacts to a cloud server, so if he loses his phone he will still have all his contacts. That app stores the contacts on its servers in cleartext, meaning it’s unencrypted. After a few months of using the app, Alex gets an email from the developers saying that they’ve been hacked and 23 million contacts were breached. Covve, a popular contact management app, had exactly that number of contacts leaked just three months ago. Sarah’s contact information was in that breach, and the breach data is publicly available online. Fascists who are trying to disrupt the organizing Sarah is involved in find her phone number and email online because of the breach. This information can now be used to conduct different kinds of attacks: credential stuffing, account exploitation, SIM swapping, blackmail, etc.

Despite the fact that Sarah uses only apps with zero knowledge end-to-end encryption, never publicly shares her email or phone number, and works tirelessly to make sure she is private and protected online, her information still wound up leaked. Alex’s disregard for his own privacy ultimately ended up endangering his friend. This hypothetical situation serves to illustrate one of the infinite ways our own practices are directly tied to the safety and security of everyone we digitally communicate with.

Solidarity, Not Charity.

Mutual aid is not charity. By recognizing our common struggle we can begin to counteract the paternalistic structures of charity. Mutual aid means we (those who are in the struggle) are helping each other rather than waiting for those in power, who impose harm and struggle upon us, to alleviate that harm through charity. The fundamental structure of charity is built to help them feel righteous and to validate their privilege. The idea of counteracting charity by building solidarity fits perfectly into the conversation about privacy as mutual aid; nearly all the infrastructure of the internet and just about every piece of digital technology was created collaboratively by open source developers or based on Free and Open Source Software (FOSS). The world of big tech benefits from our lack of knowledge about our digital devices, and the iPhone is a beloved talking point for those arguing in favor of capitalism, but the reality is that we wouldn’t have most of the technology we do have if it weren’t for collaboration, open source ideology and some pretty anti-capitalist practices in early computer development.

Most, if not all, privacy researchers and advocates would recommend always using FOSS applications. Open source means the code which makes up the software is published for anyone to read. While I may not be able to understand any of the code that makes up an app like Signal, for example, there are certainly researchers and companies which do independent audits of open source privacy oriented software to make sure there are no bugs, backdoors or flaws which would threaten the privacy of the user. If a developer or company publishes a closed source app and claims it is private, encrypted or secure, we can never really be sure their claims are true because we do not know what’s happening in the code.

Using only FOSS applications also means that when there are inevitable bugs and flaws they are caught quickly and fixed. This is also why it’s good digital etiquette to update apps as soon as new versions are released, as they often contain patches for bugs in the code.

For me, journeying into the world of digital privacy has been incredibly empowering because it has given me more understanding of and control over my devices and digital presence. We don’t have to beg tech companies like Facebook and Google to make their platforms safer and more private, and they wouldn’t do it even if we did! We don’t have to trust closed source proprietary software with our sensitive data, asking that they *pretty please* keep our data safe. We have the power to make ourselves more secure, and to reduce harm for those in our networks in doing so; making our own digital practices more private and secure is an act of mutual aid.

This post was originally written by MW and published on the clearnet (warning) at

This post is mirrored on tor at writeas7pm7rcdqg.onion/m-w/we-are-all-connected

This blog is mirrored on tor at writeas7pm7rcdqg.onion/m-w/

This post is mirrored on the clearnet (warning) at

This blog is mirrored on the clearnet (warning) at

If you’d like to support further writing, subscribe via MW's paid substack, or make donations via BTC to MW's wallet at 3PnjHL8kwGaTFbgYoBtKLUasKqv2khJq4R

With love and in solidarity,


Surveillance, dissent, privacy, digital freedom.

Mapping Watchtowers is a space for me to build privacy guides based on my own experiences, discuss privacy/surveillance related news, write about some of my influences and expand the dialogue I aim to foster in my artwork. The two primary motivations for creating this space are my desire to be free from the alluring algorithmic feeds of social media (which feature ever expanding censorship and ad-based surveillance) and my desire to create more sustainable relationships with my audience.

If you’re interested in supporting my work, learning more about accessible anti-surveillance tools/tips/tricks or just getting regular content about surveillance, dissent, privacy and digital freedom please consider subscribing or supporting.

