Mirror of https://github.com/PrivSec-dev/privsec.dev (synced 2024-12-23 05:11:34 -05:00)

Merge branch 'main' of https://github.com/PrivSec-dev/privsec.dev
Signed-off-by: Tommy <contact@tommytran.io>

Commit de62a1708a

@@ -4,13 +4,13 @@ tags: ['knowledge base', 'security']
author: Tommy
---

**Multi-factor authentication** is a security mechanism that requires additional verification beyond your username (or email) and password. This usually comes in the form of a one-time passcode, a push notification, or plugging in and tapping a hardware security key.

## Common protocols

### Email and SMS MFA

Email and SMS MFA are examples of the weaker MFA protocols. Email MFA is not great, as whoever controls your email account can typically both reset your password and receive your MFA verification. SMS, on the other hand, is problematic due to the lack of any kind of encryption, making it vulnerable to sniffing. [SIM swap](https://en.wikipedia.org/wiki/SIM_swap_scam) attacks, if carried out successfully, will allow an attacker to receive your one-time passcode while locking you out of your own account. In certain cases, websites or services may also allow the user to reset their account login by calling them using the phone number used for MFA, which could be faked with a [spoofed caller ID](https://en.wikipedia.org/wiki/Caller_ID_spoofing).

Only use these protocols when they are the only option you have, and be very careful with SMS MFA, as it could actually worsen your security.

@@ -18,9 +18,9 @@ Only use these protocols when it is the only option you have, and be very carefu

Push confirmation MFA is typically a notification sent to an app on your phone asking you to confirm new account logins. This method is a lot better than SMS or email, since an attacker typically wouldn't be able to get these push notifications without having an already logged-in device.

Push confirmation in most cases relies on a third-party provider like [Duo](https://duo.com/). This means that trust is placed in a server that neither you nor your service provider controls. A malicious push confirmation server could compromise your MFA or profile you based on which websites and accounts you use with the service.

Even if the push notification application and server are provided by a first party, as is the case with Microsoft login and [Microsoft Authenticator](https://www.microsoft.com/en-us/security/mobile-authenticator-app), there is still a risk of you accidentally tapping on the confirmation button.
### Time-based One-time Password (TOTP)

@@ -32,7 +32,7 @@ If you have a [Yubikey](https://www.yubico.com/), you should store the "shared s

Unlike [WebAuthn](#fido-fast-identity-online), TOTP offers no protection against [phishing](https://en.wikipedia.org/wiki/Phishing) or reuse attacks. If an adversary obtains a valid code from you, they may use it as many times as they like until it expires (generally 60 seconds + grace period).
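
To make that concrete, here is a minimal sketch of how a TOTP code is derived (the standard RFC 6238 construction, using only the Python standard library; the Base32 secret below is a placeholder, never hard-code a real one). The code is purely a function of the shared secret and the current time, so anyone who phishes it can replay it within the validity window:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a Base32 shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period               # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, "sha1").digest()       # HOTP uses HMAC-SHA1 by default
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret, for illustration only
```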

Despite its shortcomings, we consider TOTP better and safer than push confirmations.
### Yubico OTP

@@ -8,20 +8,20 @@ The first task a person should do when taking steps to protect their privacy and
## Defining a threat

To make a threat model, we must first define a threat. A common mistake made by people who are just getting into the privacy space is to define the threat as "big-tech companies." There is a fundamental problem with this definition:

Why do we not trust "big-tech companies," but then shift our trust to "small-tech companies"? What happens if those "small-tech companies" turn out to be malicious? What happens when our favorite "small-tech company" becomes successful and grows exponentially? **The proper way to define the threat here is the "service provider," not "big-tech."**

Generally, there are four primary threats a person would want to protect themselves from:

- A service provider spying on their users
- Cross-site/service tracking and data sharing, a.k.a. "mass surveillance"
- An app developer spying on users through their malicious software
- A hacker trying to get into the users' computers

A typical person would have several of these threats in their threat model. Some of these threats may weigh more than others. For example, a software developer would have a hacker stealing their source code, signing keys, and secrets as their primary threat, but beyond that they would also want privacy from the websites they visit and so on. Likewise, an average Joe may have mass surveillance and service providers as their primary threats, but beyond that they also need decent security to prevent a hacker from stealing their data.

For whistleblowers, the threat model is much more extreme. Beyond what is mentioned above, they also need anonymity. Beyond hiding what they do and what data they have, and not getting hacked by hackers or governments, they also have to hide who they are.
## Privacy from service providers

@@ -29,9 +29,9 @@ In most setups, our "private" messages, emails, social interactions are typicall

With end-to-end encryption, you can alleviate this issue by encrypting communications between you and your desired recipients before they are even sent to the server. The confidentiality of your messages is guaranteed, so long as the service provider does not have access to the private keys of either party.
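
As a rough illustration of that property, the sketch below (assuming the third-party PyNaCl library; the names and message are made up) encrypts a message to the recipient's public key before anything is handed to a server, so the provider only ever sees ciphertext:

```python
# pip install pynacl -- a conceptual sketch, not a full messaging protocol
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key *before* uploading anything.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# The service provider only stores and relays `ciphertext`.
# Only Bob, holding bob_sk, can decrypt it:
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```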

In practice, the effectiveness of different end-to-end encryption implementations varies. Applications such as Signal run natively on your device, and every copy of the application is the same across different installations. If the service provider were to backdoor their application in an attempt to steal your private keys, that could later be detected using reverse engineering.

On the other hand, web-based end-to-end encryption implementations such as Proton Mail's webmail or Bitwarden's web vault rely on the server dynamically serving JavaScript code to the browser to handle cryptographic operations. A malicious server could target a specific user and send them malicious JavaScript code to steal their encryption key, and it would be extremely hard for the user to ever notice such a thing. Even if the user does notice the attempt to steal their key, it would be incredibly hard to prove that it is the provider trying to do so, because the server can choose to serve different web clients to different users.

Therefore, when relying on end-to-end encryption, you should choose to use native applications over web clients whenever possible.

@@ -50,7 +50,7 @@ Your goals should be to segregate your online identities from each other, to ble

Instead of relying on privacy policies (which are promises that could be violated), try to obfuscate your information in such a way that it is very difficult for different providers to correlate data with each other and build a profile on you. This could come in the form of using encryption tools like Cryptomator prior to uploading your data to cloud services, using prepaid cards or cryptocurrency to protect your credit/debit card information, using a VPN to hide your IP address from websites and services on the internet, etc. The privacy policy should only be relied upon as a last resort, when you have exhausted all of your options for true privacy and need to put complete trust in your service provider.
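
As a small illustration of the "encrypt before you upload" idea, here is a sketch assuming the third-party `cryptography` package (the file names are hypothetical, and this is not a substitute for a purpose-built tool like Cryptomator): only the encrypted blob ever leaves your machine, while the key stays local.

```python
# pip install cryptography -- illustrative sketch of client-side encryption
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key local, e.g. in your password manager
f = Fernet(key)

with open("tax-return.pdf", "rb") as src:       # hypothetical local file
    encrypted = f.encrypt(src.read())

with open("tax-return.pdf.enc", "wb") as dst:   # this blob is what gets uploaded
    dst.write(encrypted)

# Later, after downloading the blob back from the cloud provider:
with open("tax-return.pdf.enc", "rb") as blob:
    original = f.decrypt(blob.read())
```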

Bear in mind that companies can hide their ownership or share your information with data brokers, even if they are not in the advertising business. Thus, it makes little sense to solely focus on the "ad-tech" industry as a threat in your threat model. Rather, it makes a lot more sense to protect yourself from service providers as a whole, and any kind of corporate surveillance threat that most people are concerned about will be thwarted along with the rest.
## Limiting Public Information

@@ -83,10 +83,10 @@ As a beginner, you may often fall into some bad practices while making a threat

- Heavy reliance on privacy policies
- Blindly shifting trust from one service provider to another
- Heavy reliance on badness enumeration for privacy instead of systematically solving the problem
- Blindly trusting open-source software

As discussed, focusing solely on advertising networks and relying only on privacy policies does not make for a sensible threat model. When switching away from a service provider, try to determine what the root problem is and see if your new provider has any technical solution to it. For example, you may not like Google Drive, as it means giving Google access to all of your data. The underlying problem here is the lack of end-to-end encryption, which you can solve by using an encryption tool like Cryptomator or by switching to a provider that offers it out of the box, like Proton Drive. Blindly switching from Google Drive to a provider that does not provide end-to-end encryption, such as the Murena Cloud, does not make sense.

Badness enumeration cannot provide any privacy guarantee and should not be relied upon against real threat actors. While things like ad blockers may help block the low-hanging fruit of common tracking domains, they are trivially bypassed by simply using a new domain that is not on common blacklists, or by proxying third-party tracking code through the first-party domain. Likewise, antivirus software may help you quickly detect common malware with known signatures, but it can never fully protect you from said threat.
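
A toy sketch of why that is (all domains here are made up): a blocklist can only match what it already knows about, so a renamed or first-party-proxied tracker sails straight through.

```python
# Enumerating badness: anything not on the list is implicitly allowed.
blocklist = {"tracker.example-ads.com", "metrics.example-ads.com"}  # hypothetical entries

def is_blocked(domain: str) -> bool:
    return domain in blocklist

print(is_blocked("tracker.example-ads.com"))    # True  -- known tracker, blocked
print(is_blocked("telemetry.example-ads.net"))  # False -- same tracker, new domain
print(is_blocked("news.example-site.com"))      # False -- tracking script proxied first-party
```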

Another thing to keep in mind is that open-source software is not automatically private or secure. Malicious code can be sneaked into the package by the developer of the project, contributors, library developers, or the person who compiles the code. Beyond that, a piece of open-source software may sometimes have worse security properties than its proprietary counterpart. An example of this would be traditional Linux desktops lacking verified boot, system integrity protection, or full system access control for apps when compared to macOS. When doing threat modeling, it is vital that you evaluate the privacy and security properties of each piece of software being used, rather than just blindly trusting it because it is open source.

@@ -10,7 +10,7 @@ When you buy an Android phone, the device's default operating system often comes

This problem could be solved by using a custom Android-based operating system that does not come with such invasive integration. Unfortunately, many custom Android-based operating systems often violate the Android security model by not supporting critical security features such as AVB, rollback protection, firmware updates, and so on. Some of them also ship [`userdebug`](https://source.android.com/setup/build/building#choose-a-target) builds which expose root over [ADB](https://developer.android.com/studio/command-line/adb) and require [more permissive](https://github.com/LineageOS/android_system_sepolicy/search?q=userdebug&type=code) SELinux policies to accommodate debugging features, resulting in a further increased attack surface and weakened security model.
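
If you want to verify what kind of build a device is running, a small sketch like the one below (assuming `adb` is installed and an authorized device is connected with USB debugging enabled) queries the relevant system properties; a production build should report `user`, and `ro.debuggable` should be `0`:

```python
# Sketch: query Android build properties over adb (requires adb in PATH and an authorized device).
import subprocess

def getprop(name: str) -> str:
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("Build type:", getprop("ro.build.type"))   # expect "user" on production builds
print("Debuggable:", getprop("ro.debuggable"))   # expect "0" on production builds
```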

When choosing a custom Android-based operating system, you should make sure that it upholds the Android security model. Ideally, the custom operating system should have substantial privacy and security improvements to justify adding yet another party to trust.
## Baseline Security

@@ -46,7 +46,7 @@ It would be much better if you just stick to the stock operating system (which g
### Chromium Webview Updates

Android comes with a system [webview](https://developer.android.com/reference/android/webkit/WebView), a component that many apps rely on as part of their activity layout. It effectively behaves like a minimal browser, opening arbitrary websites and running untrusted code from the internet. Thus, it is very important that this component is consistently kept up to date.
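
To see which WebView provider and version a device is actually using, a quick sketch along these lines (again assuming `adb` access; the exact `dumpsys` output format varies between Android versions) can help:

```python
# Sketch: print the current WebView provider reported by the WebView updater service.
import subprocess

output = subprocess.run(
    ["adb", "shell", "dumpsys", "webviewupdate"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    # Look for the line naming the current WebView package and its version.
    if "Current WebView package" in line:
        print(line.strip())
```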

Some Android-based operating systems, including ones like CalyxOS, often fall behind on security updates for this component. In particular, this has gotten so bad that they actually fell behind by [3 months](https://github.com/privacyguides/privacyguides.org/pull/548#issuecomment-1018245074) back in January 2022 and [2 months](https://github.com/privacyguides/privacyguides.org/pull/1378) in June 2022. It is a good indication that these operating systems cannot keep up with security updates and should not be used.

@@ -58,7 +58,7 @@ End users should be using the production `user` builds, and any distributions th
### SELinux in Enforcing Mode

[SELinux](https://source.android.com/security/selinux) is a critical part of the Android security model, with the Linux kernel enforcing confinement for all processes, including system processes running as root.

In order for a system to be secure, it must have SELinux in Enforcing mode, accompanied by fine-grained SELinux policies.
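
You can check this quickly on a connected device; a minimal sketch (assuming `adb` access) is shown below, and the expected output on a properly configured system is `Enforcing`:

```python
# Sketch: check whether SELinux is enforcing on a connected Android device.
import subprocess

mode = subprocess.run(
    ["adb", "shell", "getenforce"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"SELinux mode: {mode}")   # "Enforcing" is what you want to see
```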

@@ -87,7 +87,11 @@ Currently, Google Pixel phones are the only devices that meet GrapheneOS's [hard
### DivestOS

[DivestOS](https://divestos.org/) is a great aftermarket operating system for devices that have gone end-of-life or are near end-of-life. Note that this is a harm reduction project, run by one developer on a best-effort basis, and you should not buy a new device just to run DivestOS.

Being a soft-fork of [LineageOS](https://lineageos.org/), DivestOS inherits many [supported devices](https://divestos.org/index.php?page=devices&base=LineageOS) from LineageOS. It has signed builds, making it possible to have [verified boot](https://source.android.com/security/verifiedboot) on some non-Pixel devices.

@@ -102,4 +106,4 @@ It comes with substantial hardening over AOSP. DivestOS has automated kernel vul

- GrapheneOS's per-network full [MAC randomization](https://en.wikipedia.org/wiki/MAC_address#Randomization) option on version 17.1 and higher
- Automatic reboot/Wi-Fi/Bluetooth [timeout options](https://grapheneos.org/features)

With that being said, DivestOS is not without its faults. The developer does not have all of the devices he is building for, and for a lot of them he simply publishes the builds blind without actually testing them. Firmware update support [varies](https://gitlab.com/divested-mobile/firmware-empty/-/blob/master/STATUS) across devices. DivestOS also takes a very long time to update to a new major Android release, and actually took longer than CalyxOS did, as mentioned [above](#firmware-updates).