Access reports back from its digital security workshop at the regional conference of the Pan Africa chapter of the International Lesbian, Gay, Bisexual, Trans and Intersex Association - Pan Africa ILGA, or PAI.
The 4th USENIX Workshop on Free and Open Communications on the Internet is calling for papers. The workshop will be held on August 18, 2014 and seeks to bring together researchers and practitioners working on means to study, detect, or circumvent Internet censorship.
Past iterations of the workshop featured a number of great Tor research papers including a proposal for cloud-based onion routing, our OONI design paper, an analysis of how the GFW blocks Tor, a censorship analyzer for Tor, and a proposal for latency reduction in Tor circuits.
Given the past success of the FOCI series, we are looking forward to another round of great Tor papers.
Paper submissions are due on Tuesday, May 13, 2014, 11:59 p.m. PDT.
Last month Telstra became the first non-U.S. telco to release a regular report on government and law enforcement requests for user data.
Welcome to the fifteenth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.
New beta version of Tor Browser 3.6
The second beta version of the next major Tor Browser release is out. The main highlight of version 3.6 is the seamless integration of pluggable transports in the browser.
The update is important to users already using version 3.6-beta1 as it contains an updated OpenSSL to address potential client-side vectors for CVE-2014-0160 (also known as “Heartbleed”).
See the release announcement to learn more. Enjoy the update, and report any bugs you find.
Key rotation at every level
The “Heartbleed” issue forces system administrators to treat the private keys of any network-facing application affected by the bug as compromised. As Tor has no shortage of private keys in its design, a serious number of new keys have to be generated.
Roger Dingledine prompted relay operators to get new identity keys, “especially from the big relays, and we’ll be happier tolerating a couple of bumpy days while the network recovers”. Switching to a new relay identity key means that the relay is seen as new to the authorities again: they will lose their Guard status and bandwidth measurement. It seems that a number of operators followed the advice, as the network lost around 1 Gbit/s of advertised capacity between April 7th and April 10th.
To smooth the way should such a massive migration of RSA1024 relay keys ever be needed again, Nick Mathewson wrote proposal 230. The proposal describes a mechanism for relays to advertise their old identity to directory authorities and clients.
Directory authorities can currently tie a relay’s nickname to its identity key with the Named flag. That feature proved to be less helpful than it seemed, and can subject its users to impersonation attacks. As relays switch to new identity keys, those who keep the same name will lose their Named flag for the next six months. So now seems a good time to “throw out the Named and Unnamed flags entirely”. Sebastian Hahn acted on the idea and started a draft proposal.
How should potentially compromised relays which have not switched to a new key be handled? On April 8th, grarpamp observed that more than 3000 relays had been restarted — hopefully to use the fixed version of OpenSSL. It is unknown how many of those relays have switched to a new key since. Andrea Shepard has been working on a survey to identify them. What is known, however, is which relays are unfortunately still vulnerable. Sina Rabbani has set up a visible list for guards and exits. To protect Tor users, directory authority operators have started to reject descriptors for vulnerable relays.
The identity keys for directory authorities are kept offline, but they are used to certify medium-term signing keys. Roger Dingledine’s analysis reports that “two (moria1 and urras) of the directory authorities were unaffected by the openssl bug, and seven were affected”.
At the time of writing, five of the seven affected authorities had new signing keys. In the meantime, Nick and Andrea have been busy writing code to prevent the old keys from being accepted by Tor clients.
Changing the relay identity keys of the directory authorities has not been done so far “because current clients expect them to be at their current IP:port:fingerprint and would scream in their logs and refuse to connect if the relay identity key changes”. The specification of the missing piece of code to allow a smoother transition has been written by Nick Mathewson in proposal 231.
Finally, hidden service operators are also generating new keys. Unfortunately, this forces every user of the service to update the address in their bookmarks or configuration.
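The reason a key change forces an address change is that a (v2) hidden service address is derived directly from the service’s public key: the first 80 bits of the SHA-1 hash of the DER-encoded RSA public key, base32-encoded and lowercased. A minimal sketch of the derivation (the key bytes below are a placeholder, not a real key):

```python
import base64
import hashlib

def onion_address(public_key_der: bytes) -> str:
    # First 80 bits (10 bytes) of SHA-1 over the DER-encoded public key,
    # base32-encoded and lowercased, yields the 16-character v2 address.
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Placeholder bytes standing in for a real DER-encoded RSA public key:
addr = onion_address(b"example DER-encoded RSA public key")
```

Any new key therefore yields an entirely new, unrelated .onion name, which is why users have to update their bookmarks by hand.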
As Roger summarized it: “fun times”.
More monthly status reports for March 2014
CVE-2014-0160 prompted Anthony Basile to release version 20140409 of Tor-ramdisk. OpenSSL has been updated and so has the kernel. Upgrading is strongly recommended.
David Fifield released new browser bundles configured to use the meek transport automatically. These bundles “use a web browser extension to make the HTTPS requests, so that the TLS layer looks like Firefox” — because it is Firefox. Meek is a promising censorship circumvention solution, so please try them!
The Tails developers announced that Tchou’s proposal is the winner of the recent Tails logo contest: “in the coming days we will keep on fine-tuning it and integrating it in time for Tails 1.0. So don’t hesitate to comment on it.”
Andrew Lewman reported on his week in Stockholm for the Civil Rights Defenders’ Defender’s Days, where he trained activists and “learned more about the situation in Moldova, Transnistria, Burma, Vietnam, and Bahrain”.
Andrew also updated the instructions for mirror operators wishing to have their sites listed on the Tor Project website. Thanks to Andreas Reich, Sebastian M. Bobrecki, and Jeremy L. Gaddis for running new mirrors!
Alan Shreve requested feedback on “Shroud”, a proposal for “a new system to provide public hidden services […] whose network location cannot be determined (like Tor hidden services) but are accessible by any client on the internet”.
Tor help desk roundup
Users often ask for steps they can take to maximize their anonymity while using Tor. Tips for staying anonymous when using Tor are visible on the download page.
News from Tor StackExchange
Jack Gundo uses Windows 7 with the built-in firewall and wants to block all traffic except Tor traffic. Guest suggested that on a closed-source system one can never be sure that all traffic really is blocked, so the original poster might be better off using a router which does the job. Another possible solution is PeerBlock, which also allows you to block all traffic from a machine.
Broot uses obfs3 to route OpenVPN traffic and can’t get obfsproxy running because the latest version only implements SOCKS4. Yawning Angel answered that version 0.2.7 of obfsproxy uses SOCKS5 and works with OpenVPN. However there is a bug that needs to be worked around.
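As a rough sketch of the setup described in the answer (the port number, bridge address, and exact obfsproxy invocation here are illustrative assumptions, not tested values): run obfsproxy as a local SOCKS5 listener and point OpenVPN at it with its `socks-proxy` directive.

```
# Hypothetical example: obfsproxy 0.2.7+ exposing a local SOCKS5 listener
#   obfsproxy obfs3 socks 127.0.0.1:10194
#
# Relevant OpenVPN client config lines:
socks-proxy 127.0.0.1 10194   # route OpenVPN through the local obfsproxy
remote bridge.example.com 443 # obfs3-wrapped server endpoint (placeholder)
proto tcp-client              # SOCKS proxying requires TCP, not UDP
```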
This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, qbi, Roger Dingledine, Karsten Loesing and the Tails team.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
Access provides a timeline and analysis of the Heartbleed vulnerability
The internet affects every individual in the world, whether directly or indirectly. For example, a medical professional in Goma, Congo might go online to read and post reviews of currently available medication, which can influence the treatment they recommend to a patient, whether or not that patient has affordable internet access. Since the internet affects everyone, African citizens who are aware of internet governance discussions expect African stakeholders to engage in them.
Last October, in the aftermath of the revelations of mass government surveillance, the government of Brazil and the Internet Corporation for Assigned Names and Numbers (ICANN) announced a joint initiative that would bring together government, industry, civil society, and academia in a meeting in Brazil in April 2014 to discuss the future of internet governance. This evolved to become the Global Multistakeholder Meeting on the Future of Internet Governance, better known as NetMundial, an initiative of 12 governments -- Argentina, France, Ghana, Germany, India, Indonesia, South Africa, South Korea, Tunisia, Turkey, and the United States have since joined Brazil -- with representatives of civil society, academia, and the technical community participating in various planning committees.
This release is an important security update over 3.6-beta-1. This release updates OpenSSL to version 1.0.1g, to address potential client-side vectors for CVE-2014-0160.
The browser itself does not use OpenSSL, and is not vulnerable to this CVE. However, this release is still considered an important security update, because it is theoretically possible to extract sensitive information from the Tor client sub-process.
Here is the complete changelog since 3.6-beta-1:
- All Platforms
- Update OpenSSL to 1.0.1g
- Bug 9010: Add Turkish language support.
- Bug 9387 testing: Disable JS JIT, type inference, asmjs, and ion.
- Update fte transport to 0.2.12
- Update NoScript to 2.6.8.19
- Update Torbutton to 1.6.8.1
- Update Tor Launcher to 0.2.5.3
- Bug 9665: Localize Tor's unreachable bridges bootstrap error
- Backport Pending Tor Patches:
- Bug 11286: Fix fte transport launch error
A list of frequently encountered known issues with the Tor Browser can be found on our bugtracker. Please check that list and help us diagnose and arrive at solutions for those issues before contacting support.
On April 8, the Court of Justice of the European Union (CJEU) ruled on the Data Retention Directive and invalidated this controversial European law.
In the wake of the ongoing revelations about NSA surveillance, Access releases an infographic measuring how the leading four reform proposals stack up against the International Principles on the Application of Human Rights to Communications Surveillance.
Yesterday, ahead of the April 10th hearing “Should The Department Of Commerce Relinquish Direct Oversight Over ICANN?”, the Open Technology Institute at New America Foundation, Public Knowledge, Access, Center for Democracy & Technology, Freedom House, and Human Rights Watch sent a letter to the House Judiciary Committee restating their support for the NTIA’s decision to transition key Internet domain name functions to the global multi-stakeholder community, as well as the organizations’ concerns regarding the DOTCOM Act. The DOTCOM Act is a piece of legislation introduced this past March that would require a Government Accountability Office review and report prior to the NTIA transition, a process that could take up to a year.
Today, Access and our partners sent a letter demanding an immediate investigation into what appears to be U.S. government complicity in silencing political speech in Mexico.
Welcome to the fourteenth issue of Tor Weekly News in 2014, the weekly newsletter that covers what’s happening in the Tor community.
The Heartbleed Bug and Tor
OpenSSL bug CVE-2014-0160, also known as the Heartbleed bug, “allows anyone on the Internet to read the memory of systems protected by the vulnerable versions of the OpenSSL software”, potentially enabling the compromise of information including “user names and passwords, instant messages, emails, and business critical documents and communication”. Tor is one of the very many networking programs that use OpenSSL to communicate over the Internet, so within a few hours of the bug’s disclosure Roger Dingledine posted a security advisory describing how it affects different areas of the Tor ecosystem.
“The short version is: upgrade your openssl”. Tor Browser users should upgrade as soon as possible to the new 3.5.4 release, which includes OpenSSL 1.0.1g, fixing the vulnerability. “The browser itself does not use OpenSSL…however, this release is still considered an important security update, because it is theoretically possible to extract sensitive information from the Tor client sub-process”, wrote Mike Perry.
Those using a system Tor should upgrade their OpenSSL version and manually restart their Tor process. For relay operators, “best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys”, and for hidden service administrators, “to move to a new hidden-service address at your convenience”. Clients, relays, and services using an older version of OpenSSL, including Tails, are not affected by this bug.
For mobile devices, Nathan Freitas called for immediate testing of Orbot 13.0.6-beta-3, which not only upgrades OpenSSL but also contains a fix for the transproxy leak described by Mike Perry two weeks ago, in addition to smaller fixes and improvements from 13.0.6-beta-1 and subsequently. You can obtain a copy of the .apk file directly from the Guardian Project’s distribution page.
Ultimately, “if you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle.” Be sure to read Roger’s post in full for a more detailed explanation if you are unsure what this bug might mean for you.
A hall of Tor mirrors
Users the world over are increasingly aware of Tor’s leading reputation as a well-researched and -developed censorship circumvention tool — and, regrettably, so are censorship authorities. Events such as last month’s (short-lived) disruption of access to the main Tor Project website from some Turkish internet connections have reaffirmed the need for multiple distribution channels that users can turn to during a censorship event in order to acquire a copy of the Tor Browser, secure their browsing, and beat the censors. One of the simplest ways of ensuring this is to make a copy of the entire website and put it somewhere else.
Recent days have seen the establishment of a large number of new Tor website mirrors, for which thanks must go to Max Jakob Maass, Ahmad Zoughbi, Darren Meyer, Piratenpartei Bayern, Bernd Fix, Florian Walther, the Electronic Frontier Foundation (on a subdomain formerly housing the Tor Project’s official site), the Freedom of the Press Foundation, Caleb Xu, George Kargiotakis, and Tobias Markus, as well as to all the mirror operators of longer standing.
If you’d like to participate in the effort to render blocking of the Tor website even more futile, please see the instructions for running a mirror, and then come to the tor-mirrors mailing list to notify the community!
Mission Impossible: Hardening Android for Security and Privacy
On the Tor Blog, Mike Perry posted another large and comprehensive hacking guide, this time describing “the installation and configuration of a prototype of a secure, full-featured, Android telecommunications device with full Tor support, individual application firewalling, true cell network baseband isolation, and optional ZRTP encrypted voice and video support.” The walkthrough covers hardware selection and setup, recommended software, Google-free backups, and disabling the built-in microphone of a Nexus 7 tablet (with a screwdriver).
As it stands, following this guide may require a certain level of patience, but as Mike wrote, “it is our hope that this work can be replicated and eventually fully automated, given a good UI, and rolled into a single ROM or ROM addon package for ease of use. Ultimately, there is no reason why this system could not become a full fledged off the shelf product, given proper hardware support and good UI for the more technical bits.”
Mike has already added to and improved parts of the guide following contributions from users in the comments beneath the post. If you would like to work (or already are working) at the cutting-edge of research into mobile device security and usability, take a look at Mike’s suggestions for future work at the bottom of the guide, and please share your ideas with the community.
More monthly status reports for March 2014
The wave of regular monthly reports from Tor project members for the month of March continued, with submissions from Arlo Breault, Colin Childs, George Kadianakis, Michael Schloh von Bennewitz, Philipp Winter, and Kevin Dyer.
David Goulet announced the seventh release candidate for Torsocks 2.0.0, the updated version of the wrapper for safely using network applications with Tor. “Nothing major, fixes and some code refactoring went in”, said David. Please review, test, and report any issues you find.
Nathan Freitas posted a brief analysis of the role played by Orbot in the recent Turkish internet service disruption: “it might be good to think about Turkey’s Twitter block as a “censorship-lite” event, not unlike the UK or Indonesia, and then figure out how we can encourage more adoption.”
Jann Horn drew attention to a potential issue caused by some Tor relays sending out globally-sequential IP IDs. Roger Dingledine linked to an academic paper connected with the same question, while Daniel Bilik suggested one method of preventing this from happening on FreeBSD. Exactly how significant this issue is (or is not) for the Tor network is very much an open question; further research into which operating systems it affects, and how it might be related to known attacks against anonymity, would be very welcome.
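As a toy illustration of why globally sequential IP IDs matter (this is not the methodology of the cited paper): if a host uses a single global 16-bit counter for the IP ID field, the difference between the IDs seen in two probe responses reveals how many packets it sent to everyone else in between — the classic side channel behind idle scanning and traffic-volume estimation.

```python
# Toy sketch: inferring intervening traffic from a global IP ID counter.
def ipid_gap(first_id: int, second_id: int) -> int:
    """Packets sent between two probe responses, accounting for the
    16-bit wraparound of the IP ID field."""
    return (second_id - first_id) % 65536

# Probes answered with IDs 1000 and then 1057 imply the host emitted
# ~56 packets to third parties in between (57, minus the second reply).
```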
As part of their current campaign to fund usable encryption tools (including Tor) for journalists, the Freedom of the Press Foundation published a blog post on the “little-known” Tails operating system, featuring quotes from three of the journalists most prominently associated with the recent Snowden disclosures (Laura Poitras, Glenn Greenwald, and Barton Gellman) attesting to the important role Tails has played in their ability to carry out their work. If you’re impressed by what you read, please donate to the campaign — or become a Tails contributor!
Two Tor-affiliated projects — the Open Observatory of Network Interference and Tails — have each submitted a proposal to this year’s Knight News Challenge. The OONI proposal involves further developing the ooni-probe software suite and deploying it in countries around the world, as well as working on analysis and visualization of the data gathered, in collaboration with the Chokepoint Project; while Tails’ submission proposes to “improve Tails to limit the impact of security flaws, isolate critical applications, and provide same-day security updates”. Voting is limited to the Knight Foundation’s trustees, but feel free to read each submission and leave your comments for the developers.
Robert posted a short proposal for “a prototype of a next-generation Tor control interface, aiming to combine the strengths of both the present control protocol and the state-of-the-art libraries”. The idea was originally destined for this year’s GSoC season, but in the end Robert opted instead to “get some feedback and let the idea evolve.”
Following last week’s progress on the Tor website redesign campaign, William Papper presented a functioning beta version of the new download page that he and a team of contributors have been building. Have a look, and let the www-team list know what works and what doesn’t!
Michael Schloh von Bennewitz began work on a guide to configuring a virtual machine for building the Tor Browser Bundle, and another to building with Gitian.
Tor help desk roundup
Tor Browser users often try to set a proxy when they don’t need to. Many users think they can circumvent website bans or get additional security by doing this. Discussion on clarifying the tor-launcher interface is taking place on the bug tracker.
News from Tor StackExchange
The question “Why does GnuPG show the signature of Erinn Clark as not trusted?” got the best rating. When a user verified the downloaded copy of Tor Browser Bundle, GnuPG showed Erinn’s signature as not-trusted. Jens Kubieziel explained the trust model of GnuPG in his answer, and gapz referred to the handbook.
The following questions need better answers: “How to validate certificates?”; “Why does Atlas sometimes show a different IP address from https://check.torproject.org?”; “Site login does not persist”; and “My Atlas page is blank”.
If you know good answers to these questions, please help the users of Tor StackExchange.
This issue of Tor Weekly News has been assembled by harmony, Matt Pagan, qbi, Lunar, Roger Dingledine, and Karsten Loesing.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
This release updates only OpenSSL to version 1.0.1g, to address potential client-side vectors for CVE-2014-0160.
The browser itself does not use OpenSSL, and is not vulnerable to this CVE. However, this release is still considered an important security update, because it is theoretically possible to extract sensitive information from the Tor client sub-process.
Here is the changelog:
- All Platforms
- Update OpenSSL to 1.0.1g
Today’s historic decision reopened the debate on the necessity and proportionality of communications data retention in the EU and around the world.
A new OpenSSL vulnerability on 1.0.1 through 1.0.1f is out today, which can be used to reveal memory to a connected client or server.
If you're using an older OpenSSL version, you're safe.
Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today. If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle.
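The affected range is mechanical to check. A minimal sketch, assuming the plain `1.0.1` plus single-letter-suffix versioning OpenSSL used at the time (FIPS or vendor-patched version strings would need extra handling):

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """True for OpenSSL 1.0.1 through 1.0.1f; 1.0.1g and the older
    0.9.8/1.0.0 branches are unaffected."""
    if not version.startswith("1.0.1"):
        return False
    suffix = version[len("1.0.1"):]
    # "1.0.1" itself and suffixes "a".."f" fall in the vulnerable range.
    return suffix == "" or ("a" <= suffix <= "f")
```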
Here are our first thoughts on what Tor components are affected:
- Clients: The browser part of Tor Browser shouldn't be affected, since it uses libnss rather than openssl. But the Tor client part is: Tor clients could possibly be induced to send sensitive information like "what sites you visited in this session" to your entry guards. If you're using TBB we'll have new bundles out shortly; if you're using your operating system's Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor. [update: the bundles are out, and you should upgrade]
- Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you're at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor's multi-hop design means that attacking just one relay in the client's path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys. (You will need to update your MyFamily torrc lines if you run multiple relays.) [update: we've cut the vulnerable relays out of the network]
- Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn't allow an attacker to identify the location of the hidden service [edit: if it's your entry guard that extracted your key, they know where they got it from]. Also, an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
- Directory authorities: In addition to the keys listed in the "relays and bridges" section above, Tor directory authorities might leak their medium-term authority signing keys. Once you've updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don't rotate that yet. We'll see how this unfolds and try to think of a good solution there.
- Tails is still tracking Debian oldstable, so it should not be affected by this bug.
- Orbot looks vulnerable; they have some new packages available for testing.
- The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades. Maybe we'll need to throw away our torproject SSL web cert and get a new one too.
Access, an international organization committed to extending and defending the rights of internet users worldwide, is encouraged by recent votes that will help secure an open internet. Yesterday, the European Union voted 534-23 in favor of network neutrality, and just last week the Brazilian Congress also voted to protect the internet as part of a larger "internet bill of rights."
Today the European Parliament voted on the European Telecoms Single Market proposal, a major legislative achievement protecting net neutrality that will have a crucial impact on how European users experience the internet for generations.
Updates: See the Changes section for a list of changes since initial posting.
The future is here, and ahead of schedule. Come join us, the weather's nice.
This blog post describes the installation and configuration of a prototype of a secure, full-featured, Android telecommunications device with full Tor support, individual application firewalling, true cell network baseband isolation, and optional ZRTP encrypted voice and video support. ZRTP does run over UDP, which is not yet possible to send over Tor, but we are able to send SIP account login and call setup over Tor independently.
The SIP client we recommend also supports dialing normal telephone numbers, but that is also UDP+ZRTP, and the normal telephone network side of the connection is obviously not encrypted.
Aside from a handful of binary blobs to manage the device firmware and graphics acceleration, the entire system can be assembled (and recompiled) using only FOSS components. However, as an added bonus, we will describe how to handle the Google Play store as well, to mitigate the two infamous Google Play Backdoors.
Android is the most popular mobile platform in the world, with a wide variety of applications, including many applications that aid in communications security, censorship circumvention, and activist organization. Moreover, the core of the Android platform is Open Source, auditable, and modifiable by anyone.
Unfortunately though, mobile devices in general and Android devices in particular have not been designed with privacy in mind. In fact, they've seemingly been designed with nearly the opposite goal: to make it easy for third parties, telecommunications companies, sophisticated state-sized adversaries, and even random hackers to extract all manner of personal information from the user. This includes the full content of personal communications with business partners and loved ones. Worse still, by default, the user is given very little in the way of control or even informed consent about what information is being collected and how.
This post aims to address this, but we must first admit we stand on the shoulders of giants. Organizations like Cyanogen, F-Droid, the Guardian Project, and many others have done a great deal of work to try to improve this situation by restoring control of Android devices to the user, and to ensure the integrity of our personal communications. However, all of these projects have shortcomings and often leave gaps in what they provide and protect. Even in cases where proper security and privacy features exist, they typically require extensive configuration to use safely, securely, and correctly.
This blog post enumerates and documents these gaps, describes workarounds for serious shortcomings, and provides suggestions for future work.
It is also meant to serve as a HOWTO to walk interested, technically capable people through the end-to-end installation and configuration of a prototype of a secure and private Android device, where access to the network is restricted to an approved list of applications, and all traffic is routed through the Tor network.
It is our hope that this work can be replicated and eventually fully automated, given a good UI, and rolled into a single ROM or ROM addon package for ease of use. Ultimately, there is no reason why this system could not become a full fledged off the shelf product, given proper hardware support and good UI for the more technical bits.
The remainder of this document is divided into the following sections:
- Hardware Selection
- Installation and Setup
- Google Apps Setup
- Recommended Software
- Device Backup Procedure
- Removing the Built-in Microphone
- Removing Baseband Remnants
- Future Work
- Changes Since Initial Posting
If you truly wish to secure your mobile device from remote compromise, it is necessary to carefully select your hardware. First and foremost, it is absolutely essential that the carrier's baseband firmware is completely isolated from the rest of the platform. Because your cell phone baseband does not authenticate the network (in part to allow roaming), any random hacker with their own cell network can exploit such baseband backdoors and use them to install malware on your device.
While there are projects underway to determine which handsets actually provide true hardware baseband isolation, at the time of this writing there is very little public information available on this topic. Hence, the only safe option remains a device with no cell network support at all (though cell network connectivity can still be provided by a separate device). For the purposes of this post, the reference device is the WiFi-only version of the 2013 Google Nexus 7 tablet.
For users who wish to retain full mobile access, we recommend obtaining a cell modem device that provides a WiFi access point for data services only. These devices do not have microphones and in some cases do not even have fine-grained GPS units (because they are not able to make emergency calls). They are also available with prepaid plans, for rates around $20-30 USD per month, for about 2GB/month of 4G data. If coverage and reliability are important to you, though, you may want to go with a slightly more expensive carrier. In the US, T-Mobile isn't bad, but Verizon is superb.
To increase battery life of your cell connection, you can connect this access point to an external mobile USB battery pack, which typically will provide 36-48 hours of continuous use with a 6000mAh battery.
The total cost of a Wifi-only tablet with cell modem and battery pack is only roughly USD $50 more than the 4G LTE version of the same device.
In this way, you achieve true baseband isolation, with no risk of audio or network surveillance, baseband exploits, or provider backdoors. Effectively, this cell modem is just another untrusted router in a long, long chain of untrustworthy Internet infrastructure.
Note, however, that even if the cell unit does not contain a fine-grained GPS, you still sacrifice location privacy while using it. Over an extended period of time, it will be possible to make inferences about your physical activity, behavior and personal preferences, and your identity, based on cell tower use alone.
We will focus on the installation of Cyanogenmod 11 using Team Win Recovery Project, both to give this HOWTO some shelf life, and because Cyanogenmod 11 features full SELinux support (Dear NSA: What happened to you guys? You used to be cool. Well, some of you. Some of the time. Maybe. Or maybe not).
The use of Google Apps and Google Play services is not recommended due to security issues with Google Play. However, we do provide workarounds for mitigating those issues, if Google Play is required for your use case.
Installation and Setup: ROM and Core App Installation
With the 2013 Google Nexus 7 tablet, installation is fairly straightforward. In fact, it is actually possible to install and use the device before associating it with a Google Account in any way. This is a desirable property, because by default, the otherwise mandatory initial setup process of the stock Google ROM sends your device MAC address directly to Google and links it to your Google account (all without using Tor, of course).
The official Cyanogenmod installation instructions are available online, but with a fresh out of the box device, here are the key steps for installation without activating the default ROM code at all (using Team Win Recovery Project instead of ClockWorkMod).
First, on your desktop/laptop computer (preferably Linux), perform the following:
- Download the latest CyanogenMod 11 release (we used cm-11-20140308-SNAPSHOT-M4)
- Download the latest Team Win Recovery Project image (we used 2.7.0.0)
- Download the F-Droid package (we used 0.63)
- Download the Orbot package from F-Droid (we used 13.0.5)
- Download the Droidwall package from F-Droid (we used 1.5.7)
- Download the Droidwall Firewall Scripts attached to this blogpost
- Download the Google Apps for Cyanogenmod 11 (optional)
Because the download integrity for all of these packages is abysmal, here is a signed set of SHA256 hashes I've observed for those packages.
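Checking those hashes before flashing is worth the extra minute. Here is the workflow, demonstrated on a throwaway file so it is self-contained; for the real packages, substitute the downloaded APK/zip names and the GPG-signed hash list (the `gpg --verify` step shown in the comment assumes you have imported the signing key):

```shell
# Demonstration of the hash-check workflow on a stand-in file.
workdir=$(mktemp -d)
cd "$workdir"

printf 'fake package contents' > FDroid.apk   # stand-in for a real download

# With the real signed list, verify the signature first, e.g.:
#   gpg --verify hashes.txt.asc
# Here we generate the hash list ourselves for demonstration purposes:
sha256sum FDroid.apk > hashes.txt

# The actual check: prints "FAILED" and exits non-zero on any mismatch.
sha256sum -c hashes.txt
```

Only flash packages whose hashes check out against the signed list.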
Once you have all of those packages, boot your tablet into fastboot mode by holding the Power button and the Volume Down button during a cold boot. Then, attach it to your desktop/laptop machine with a USB cable and run the following commands, as root:
apt-get install android-tools-adb android-tools-fastboot
fastboot devices
fastboot oem unlock
fastboot flash recovery openrecovery-twrp-2.7.0.0-flo.img
After the recovery firmware is flashed successfully, use the volume keys to select Recovery and hit the power button to reboot the device (or power it off, and then boot holding Power and Volume Up).
Once Team Win boots, go into Wipe and select Advanced Wipe. Select all checkboxes except for USB-OTG, and slide to wipe. Once the wipe is done, click Format Data. After the format completes, issue these commands from your Linux root shell:
adb start-server
adb push cm-11-20140308-SNAPSHOT-M4-flo.zip /sdcard/
adb push gapps-kk-20140105-signed.zip /sdcard/ # Optional
After this push process completes, go to the Install menu, and select the Cyanogen zip, and optionally the gapps zip for installation. Then click Reboot, and select System.
After rebooting into your new installation, skip all CyanogenMod and Google setup, disable location reporting, and immediately disable WiFi and turn on Airplane mode.
Then, go into Settings -> About Tablet, scroll to the bottom, and tap the greyed-out Build number five times until developer mode is enabled. Then go into Settings -> Developer Options and turn on USB Debugging.
After that, run the following commands from your Linux root shell:
adb install FDroid.apk
adb install org.torproject.android_70.apk
adb install com.googlecode.droidwall_157.apk
You will need to approve the ADB connection for the first package, and then they should install normally.
VERY IMPORTANT: Whenever you finish using adb, always remember to disable USB Debugging and restore Root Access to Apps only. While Android 4.2+ ROMs now prompt you to authorize an RSA key fingerprint before allowing a debugging connection (thus mitigating adb exploit tools that bypass screen lock and can install root apps), you still risk additional vulnerability surface by leaving debugging enabled.
Installation and Setup: Initial Configuration
After the base packages are installed, go into the Settings app, and make the following changes:
- Location Access -> Off
- Language & Input =>
- Spell Checker -> Android Spell Checker -> Disable Contact Names
- Disable Google Voice Typing
- Android Keyboard (AOSP) =>
- Disable AOSP next-word suggestion (do this first!)
- Auto-correction -> Off
- Backup & reset =>
- Enable Back up my data (just temporarily, for the next step)
- Uncheck Automatic restore
- Disable Backup my data
- Privacy -> Privacy Guard =>
- Enabled by default
- Settings (three dots) -> Show Built In Apps
- Enable Privacy Guard for every app with the following exceptions:
- Config Updater
- Google Account Manager (long press)
- Modify Settings -> Off
- Wifi Change -> Off
- Data Change -> Off
- Google Play Services (long press)
- Location -> Off
- Modify Settings -> Off
- Draw on top -> Off
- Record Audio -> Off
- Wifi Change -> Off
- Google Play Store (long press)
- Location -> Off
- Send SMS -> Off
- Modify Settings -> Off
- Data change -> Off
- Google Services Framework (long press)
- Modify Settings -> Off
- Wifi Change -> Off
- Data Change -> Off
- PIN screen Lock
- Allow Unknown Sources (For F-Droid)
- Encrypt Tablet
After that last step, your tablet will reboot and encrypt itself. It is important to do this step early, as I have noticed that additional apps and configuration tweaks can make this process fail later on. You can and should also change the boot password to be different from the screen unlock PIN later on, both to keep shoulder surfers from learning the password that protects your data at rest, and to allow the use of a much longer (and non-numeric) password that you would prefer not to type every time you unlock the screen.
To do this, open the Terminal app, and type the following commands:
su
vdc cryptfs changepw NewMoreSecurePassword
Watch for typos! That command does not ask you to re-type that password for confirmation.
Installation and Setup: Disabling Invasive Apps and Services
Before you configure the Firewall or enable the network, you likely want to disable at least a subset of the following built-in apps and services, by using Settings -> Apps -> All, and then clicking on each app and hitting the Disable button:
- Face Unlock
- Google Backup Transport
- Google Calendar Sync
- Google One Time Init
- Google Partner Setup
- Google Contacts Sync
- Google Search
- Market Feedback Agent
- News & Weather
- One Time Init
- Picasa Updater
- Sound Search for Google Play
Ok, now let's install the firewall and Tor support scripts. Go back into Settings -> Developer Options, enable USB Debugging, and change Root Access to Apps and ADB. Then, unzip android-firewall.zip on your laptop and run the included install-firewall.sh script.
The firewall installation provides several key scripts with functionality that is currently impossible to achieve with any app (including Orbot):
- It installs a userinit script to block all network access during boot.
- It disables "Google Captive Portal Detection", which involves connection attempts to Google servers upon WiFi association (these requests are made by the Android Settings UID, which should normally be blocked from the network, unless you are first registering for Google Play).
- It contains a Droidwall script that configures Tor transproxy rules to send all of your traffic through Tor. These rules include a fix for a Linux kernel Tor transproxy packet leak issue.
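The general shape of such transproxy rules is worth sketching, if only for review purposes. The following is a hedged approximation, not the contents of the attached script: it assumes Orbot's usual TransPort (9040) and DNSPort (5400) defaults and a made-up tor UID, and it prints the iptables commands rather than executing them, since they would need to run as root on the device:

```shell
# emit_transproxy_rules prints iptables commands for review (or for piping
# to a root shell on the device). Ports 9040/5400 and the UID are assumptions;
# the authoritative rules live in the attached firewall-tor.sh.
emit_transproxy_rules() {
  tor_uid="$1"
  # Let the tor daemon itself reach the network directly.
  echo "iptables -t nat -A OUTPUT -m owner --uid-owner $tor_uid -j RETURN"
  # Redirect everything else's DNS and TCP into Tor's transproxy ports.
  echo "iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5400"
  echo "iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040"
  # The kernel transproxy leak fix: drop packets the connection tracker
  # considers INVALID, which can otherwise slip past the REDIRECT.
  echo "iptables -A OUTPUT -m conntrack --ctstate INVALID -j DROP"
}

emit_transproxy_rules 1001
```

The last rule is the interesting one: it is the class of fix referred to above for the Linux kernel transproxy packet leak.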
- The main firewall-tor.sh Droidwall script also includes an input firewall, to block all inbound connections to the device. It also fixes a Droidwall permissions vulnerability.
- It installs an optional script to allow the Browser app to bypass Tor for logging into WiFi captive portals.
- It installs an optional script to temporarily allow network adb access when you need it (if you are paranoid about USB exploits, which you should be).
- It provides an optional script to allow the UDP activity of LinPhone to bypass Tor, to allow ZRTP-encrypted Voice and Video SIP/VoIP calls. SIP account login/registration and call setup/signaling can be done over TCP, and Linphone's TCP activity is still sent through Tor with this script.
Note that with the exception of the userinit network blocking script, installing these scripts does not activate them. You still need to configure Droidwall to use them.
We use Droidwall instead of Orbot or AFWall+ for five reasons:
- Droidwall's app-based firewall and Orbot's transproxy are known to conflict and reset one another.
- Droidwall does not randomly drop transproxy rules when switching networks (Orbot has had several of these types of bugs).
- Unlike AFWall+, Droidwall is able to auto-launch at "boot" (though still not before the network and Android Services come online and make connections).
- AFWall+'s "fix" for this startup data leak problem does not work on Cyanogenmod (hence our userinit script instead).
- Aside from the permissions issue fixed by our firewall-tor.sh script, AFWall+ provides no additional security fixes over the stock Droidwall.
To make use of the firewall scripts, open up Droidwall and hit the config button (the vertical three dots), go to More -> Set Custom Script. Enter the following:
. /data/local/firewall-tor.sh
#. /data/local/firewall-adb.sh
#. /data/local/firewall-linphone.sh
#. /data/local/firewall-capportal.sh
Note that these scripts have been installed into a read-only root directory. Because they are run as root, installing them to a world-writable location like /sdcard/ would be extremely unwise.
Later, if you want to enable one of network adb, LinPhone UDP, or captive portal login, go back into this window and remove the leading comment ('#') from the appropriate lines (this is obviously one of the many aspects of this prototype that could benefit from a real UI).
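For reference, the toggle is just flipping a leading '#', which can also be done mechanically with sed. This sketch operates on a hypothetical local copy of the script text; in practice, Droidwall stores the custom script in its own preferences, so you would normally edit it through the Droidwall UI:

```shell
# custom-script.txt is a stand-in copy of the Droidwall custom script text.
script=custom-script.txt
printf '%s\n' \
  '. /data/local/firewall-tor.sh' \
  '#. /data/local/firewall-adb.sh' \
  '#. /data/local/firewall-linphone.sh' \
  '#. /data/local/firewall-capportal.sh' > "$script"

# Enable network adb by stripping the leading '#' from that line:
sed -i 's|^#\. /data/local/firewall-adb.sh|. /data/local/firewall-adb.sh|' "$script"

# Disabling it again is the reverse substitution:
#   sed -i 's|^\. /data/local/firewall-adb.sh|#. /data/local/firewall-adb.sh|' "$script"
cat "$script"
```

Whichever way you edit it, re-apply the Droidwall rules afterwards so the change takes effect.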
Then, configure the apps you want to allow to access the network. Note that the only Android system apps that must access the network are:
- CM Updater
- Downloads, Media Storage, Download Manager
Orbot's network access is handled via the main firewall-tor.sh script. You do not need to enable full network access to Orbot in Droidwall.
The rest of the apps you can enable at your discretion. They will all be routed through Tor automatically.
Once the Droidwall is configured, you can enable Orbot. Do not grant Orbot superuser access. It still opens the transproxy ports you need without root, and Droidwall is managing installation of the transproxy rules, not Orbot.
You are now ready to enable Wifi and network access on your device. For vulnerability surface reduction, you may want to use the Advanced Options -> Static IP to manually enter an IP address for your device to avoid using dhclient. You do not need a DNS server, and can safely set it to 127.0.0.1.
If you installed the Google Apps zip, you need to do a few things now to set it up, and to further harden your device. If you opted out of Google Apps, you can skip to the next section.
Google Apps Setup: Initializing Google Play
The first time you use Google Play, you will need to enable three apps in Droidwall: "Google Account Manager, Google Play Services...", "Settings, Dev Tools, Fused Location...", and "Google Play" itself.
If you do not have a Google account, your best bet is to find open wifi to create one, as Google will often block accounts created through Tor, even if you use an Android device.
After you log in for the first time, you should be able to disable the "Google Account Manager, Google Play Services..." and the "Settings..." apps in Droidwall, but your authentication tokens in Google Play may expire periodically. If this happens, you should only need to temporarily enable the "Google Account Manager, Google Play Services..." app in Droidwall to obtain new ones.
Google Apps Setup: Mitigating the Google Play Backdoors
If you do choose to use Google Play, you need to be very careful about how you allow it to access the network. In addition to the risks associated with using a proprietary App Store that can send you targeted malware-infected packages based on your Google Account, it has at least two major user experience flaws:
- Anyone who is able to gain access to your Google account can silently install root or full permission apps without any user interaction whatsoever. Once installed, these apps can retroactively clear what little installation notification and UI-based evidence of their existence there was in the first place.
- The Android Update Process does not inform the user of changes in permissions of pending update apps that happen to get installed after an Android upgrade.
The first issue can be mitigated by ensuring that Google Play does not have access to the network when not in use, by disabling it in Droidwall. If you do not do this, apps can be installed silently behind your back. Welcome to the Google Experience.
For the second issue, you can install the SecCheck utility, to monitor your apps for changes in permissions during a device upgrade.
Google Apps Setup: Disabling Google Cloud Messaging
If you have installed the Google Apps zip, you have also enabled a feature called Google Cloud Messaging.
The Google Cloud Messaging Service allows apps to register for asynchronous remote push notifications from Google, as well as send outbound messages through Google.
Notification registration and outbound messages are sent via the app's own UID, so using Droidwall to disable network access by an app is enough to prevent outbound data, and notification registration. However, if you ever allow network access to an app, and it does successfully register for notifications, these notifications can be delivered even when the app is once again blocked from accessing the network by Droidwall.
These inbound notifications can be blocked by disabling network access to the "Google Account Manager, Google Play Services, Google Services Framework, Google Contacts Sync" in Droidwall. In fact, the only reason you should ever need to enable network access by this service is if you need to log in to Google Play again if your authentication tokens ever expire.
If you would like to test your ability to control Google Cloud Messaging, there are two apps in the Google Play store that can help with this. GCM Test allows for simple send and receive pings through GCM. Push Notification Tester will allow you to test registration and asynchronous GCM notification.
Ok, now that we have locked down our Android device, it's time for the fun bit: secure communications!
We recommend the following apps from F-Droid:
Xabber is a full Java implementation of XMPP, and supports both OTR and Tor. Its UI is a bit more streamlined than Guardian Project's ChatSecure, and it does not make use of any native code components (which are more vulnerable to code execution exploits than pure Java code). Unfortunately, this means it lacks some of ChatSecure's nicer features, such as push-to-talk voice and file transfer.
Despite better protection against code execution, it does have several insecure default settings. In particular, you want to make the following changes:
- Notifications -> Message text in Notifications -> Off (notifications can be read by other apps!)
- Accounts -> Integration into system accounts -> Off
- Accounts -> Store message history -> Don't Store
- Security -> Store History -> Off
- Security -> Check Server Certificate
- Chat -> Show Typing Notifications -> Off
- Connection Settings -> Auto-away -> Disabled
- Connection Settings -> Extended away when idle -> Disabled
- Keep Wifi Awake -> On
- Prevent sleep Mode -> On
Offline Calendar is a hack to allow you to create a fake local Google account that does not sync to Google. This allows you to use the Calendar App without risk of leaking your activities to Google. Note that you must exempt both this app and Calendar from Privacy Guard for it to function properly.
LinPhone is a FOSS SIP client that supports TCP TLS signaling and ZRTP. Note that neither TLS nor ZRTP are enabled by default. You must manually enable them in Settings -> Network -> Transport and Settings -> Network -> Media Encryption.
ostel.co is a free SIP service run by the Guardian Project that supports only TLS and ZRTP, but does not allow outdialing to normal PSTN telephone numbers. While Bitcoin has many privacy issues of its own, the Bitcoin community maintains a couple of lists of "trunking" providers that allow you to obtain a PSTN phone number in exchange for Bitcoin payment.
OsmAnd is a free offline mapping tool. While the UI is a little clunky, it does support voice navigation and driving directions, and is a handy, private alternative to Google Maps.
The VLC port in F-Droid is a fully capable media player. It can play mp3s and most video formats in use today. It is a handy, private alternative to Google Music and other closed-source players that often report your activity to third party advertisers. VLC does not need network access to function.
We do not yet have a port of Tor Browser for Android (though one is underway -- see the Future Work section). Unless you want to use Google Play to get Chrome, Firefox is your best bet for a web browser that receives regular updates (the built in Browser app does not). HTTPS-Everywhere and NoScript are available, at least.
Bitcoin might not be the most private currency in the world. In fact, you might even say it's the least private currency in the world. But, it is a neat toy.
The Launch App Ops app is a simple shortcut into the hidden application permissions editor in Android. A similar interface is available through Settings -> Privacy -> Privacy Guard, but a direct shortcut to edit permissions is handy. It also displays some additional system apps that Privacy Guard omits.
The Permissions app gives you a view of all Android permissions, and shows you which apps have requested a given permission. This is particularly useful to disable the record audio permission for apps that you don't want to suddenly decide to listen to you. (Interestingly, the Record Audio permission disable feature was broken in all Android ROMs I tested, aside from Cyanogenmod 11. You can test this yourself by revoking the permission from the Sound Recorder app, and verifying that it cannot record.)
In addition to being supercute, CatLog is an excellent Android monitoring and debugging tool. It allows you to monitor and record the full set of Android log events, which can be helpful in diagnosing issues with apps.
OS Monitor is an excellent Android process and connection monitoring app, that can help you watch for CPU usage and connection attempts by your apps.
Intent Intercept allows you to inspect and extract Android Intent content without allowing it to get forwarded to an actual app. This is useful for monitoring how apps attempt to communicate with each other, though be aware it only covers one of the mechanisms of inter-app communication in Android.
Now that your device is fully configured and installed, you probably want to know how to back it up without sending all of your private information directly to Google. While the Team Win Recovery Project will back up all of your system settings and apps (even if your device is encrypted), it currently does not back up the contents of your virtualized /sdcard. Remembering to do a couple adb pulls of key directories can save you a lot of heartache should you suffer some kind of data loss or hardware failure (or simply drop your tablet on a bridge while in a rush to catch a train).
The backup.sh script uses adb to pull your Download and Pictures directories from the /sdcard, as well as pulls the entire TWRP backup directory.
Before you use that script, you probably want to delete old TWRP backup folders so as to only pull one backup, to reduce pull time. These live in /sdcard/TWRP/BACKUPS/, which is also known as /storage/emulated/0/TWRP/BACKUPS in the File Manager app.
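That pruning can itself be scripted: keep the newest backup directory and delete the rest. Here is a sketch, demonstrated on a throwaway directory tree so it is safe to run anywhere; on the device, the equivalent path would be under /sdcard/TWRP/BACKUPS/ as described above:

```shell
# Demonstration tree standing in for a TWRP BACKUPS directory, with three
# backup folders whose modification times mirror their backup dates.
backups=$(mktemp -d)
mkdir -p "$backups/2014-04-01" "$backups/2014-04-08" "$backups/2014-04-15"
touch -d '2014-04-01' "$backups/2014-04-01"
touch -d '2014-04-08' "$backups/2014-04-08"
touch -d '2014-04-15' "$backups/2014-04-15"

cd "$backups"
# List directories newest-first, skip the first (newest), delete the rest.
ls -1t | tail -n +2 | xargs -r rm -rf
ls -1
```

Double-check the directory you are in before pasting anything involving `rm -rf` into a device shell.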
To use this script over the network without a USB cable, enable both USB Debugging and ADB Over Network in your developer settings. The script does not require you to enable root access from adb, and you should not enable it. Note that a backup takes quite a while to run, especially if you are using network adb.
Prior to using network adb, you must edit your Droidwall custom scripts to allow it (by removing the '#' in the #. /data/local/firewall-adb.sh line you entered earlier), and then run the following commands from a non-root Linux shell on your desktop/laptop (the ADB Over Network setting will tell you the IP and port):
killall adb
adb connect ip:5555
Network adb also has the advantage of not requiring root on your desktop/laptop.
VERY IMPORTANT: Don't forget to disable USB Debugging, as well as the Droidwall adb exemption when you are done with the backup!
If you would really like to ensure that your device cannot listen to you even if it is exploited, it turns out it is very straight-forward to remove the built-in microphone in the Nexus 7. There is only one mic on the 2013 model, and it is located just below the volume buttons (the tiny hole).
To remove it, all you need to do is pop off the back panel (this can be done with your fingernails, or a tiny screwdriver), and then you can shave the microphone right off that circuit board, and reattach the panel. I have done this to one of my devices, and it was subsequently unable to record audio at all, without otherwise affecting functionality.
You can still use apps that require a microphone by plugging in a headphone headset that contains a built-in mic (these cost around $20, and you can get them from nearly any consumer electronics store). I have also tested this, and was still able to make a Linphone call from a device with the built-in microphone removed, but with an external headset. Note that the 2012 Nexus 7 does not support these combination microphone+headphone jacks (and it has a secondary microphone as well). You must have the 2013 model.
The 2013 Nexus 7 Teardown video can give you an idea of what this looks like before you try it (Opt-In to HTML5 to view in Tor Browser without flash). Again you do not need to fully disassemble the device - you only need to remove the back cover.
Pro-Tip: Before you go too crazy and start ripping out the cameras too, remember that you can cover the cameras with a sticker or tape when not in use. I have found that regular old black electrical tape applies seamlessly, is non-obvious to casual onlookers, and is easy to remove without smudging or gunking up the lenses. Better still, it can be removed and reapplied many times without losing its adhesive.
There is one more semi-hardware mod you may want to make, though.
It turns out that the 2013 Wifi Nexus 7 does actually have a partition that contains a cell network baseband firmware on it, located on the filesystem as the block device /dev/block/platform/msm_sdcc.1/by-name/radio. If you run strings on that block device from the shell, you can see all manner of CDMA and GSM log messages, comments, and symbols are present in that partition.
According to ADB logs, Cyanogenmod 11 actually does try to bring up a cell network radio at boot on my WiFi-only Nexus 7, but fails because it is disabled. Even if the hardware is otherwise identical for manufacturing reasons, Asus and Google have a strong economic incentive to make it extremely difficult to activate the baseband, since they sell the WiFi-only version for $100 less. If it were easy to re-enable the baseband, HOWTOs would exist (which they do not seem to, at least not yet), and they would cut into LTE device sales.
Even so, since we lack public schematics for the Nexus 7 to verify that cell components are actually missing or hardware-disabled, it may be wise to wipe this radio firmware as well, as defense in depth.
To do this, open the Terminal app, and run:
su
cd /dev/block/platform/msm_sdcc.1/by-name
dd if=/dev/zero of=./radio
I have wiped that partition while the device was running without any issue, or any additional errors from ADB logs.
Note that an anonymous commenter also suggested it is possible to disable the baseband of a cell-enabled device using a series of Android service disable commands, and by wiping that radio block device. I have not tested this on a device other than the WiFi-only Nexus 7, though, so proceed with caution. If you try those steps on a cell-enabled device, you should archive a copy of your radio firmware first by doing something like the following from the dev directory that contains the radio firmware block device:
dd if=./radio of=/sdcard/radio.img
If anything goes wrong, you can restore that image with:
dd if=/sdcard/radio.img of=./radio
In addition to streamlining the contents of this post into a single additional Cyanogenmod installation zip or alternative ROM, the following problems remain unsolved.
Future Work: Better Usability
While arguably very secure, this system is obviously nowhere near usable. Here are some potential improvements to the user interface, based on a brainstorming session I had with another interested developer.
First of all, the AFWall+/Droidwall UI should be changed to be a tri-state: It should allow you to send app traffic over Tor, over your normal internet connection, or block it entirely.
Next, during app installation from either F-Droid or Google Play (this is an Intent another addon app can actually listen for), the user should be given the chance to decide if they would like that app's traffic to be routed over Tor, use the normal Internet connection, or be blocked entirely from accessing the network. Currently, the Droidwall default for new apps is "no network", which is a great default, but it would be nice to ask users what they would like to do during actual app installation.
Moreover, users should also be given a chance to edit the app's permissions upon installation as well, should they desire to do so.
The Google Play situation could also be vastly improved, should Google itself still prove unwilling to improve the situation. Google Play could be wrapped in a launcher app that automatically grants it network access prior to launch, and then disables it upon leaving the window.
A similar UI could be added to LinPhone. Because the actual voice and video transport for LinPhone does not use Tor, it is possible for an adversary to learn your SIP ID or phone number, and then call you just for the purposes of learning your IP. Because we handle call setup over Tor, we can prevent LinPhone from performing any UDP activity, or divulging your IP to the calling party, prior to user approval of the call. Ideally, we would also want to somehow inform the user of the fact that incoming calls can be used to obtain information about them, at least prior to accepting their first call from an unknown party.
Future Work: Find Hardware with Actual Isolated Basebands
Related to usability, it would be nice if we could have a serious community effort to audit the baseband isolation properties of existing cell phones, so we all don't have to carry around these ridiculous battery packs and sketch-ass wifi bridges. There is no engineering reason why this prototype could not be just as secure as a single piece of hardware. We just need to find the right hardware.
A random commenter claimed that the Galaxy Nexus might actually have exactly the type of baseband isolation we want, but the comment was from memory, and based on software reverse engineering efforts that were not publicly documented. We need to do better than this.
Future Work: Bug Bounty Program
If there is sufficient interest in this prototype, and/or if it gets transformed into a usable addon package or ROM, we may consider running a bug bounty program where we accept donations to a dedicated Bitcoin address, and award the contents of that wallet to anyone who discovers a Tor proxy bypass issue or remote code execution vulnerability in any of the network-enabled apps mentioned in this post (except for the Browser app, which does not receive security updates).
Future Work: Port Tor Browser to Android
The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This will greatly improve the privacy of your web browsing experience on the Android device over both Firefox and Chrome. We look forward to helping them in any way we can with this effort.
Future Work: WiFi MAC Address Randomization
It is actually possible to randomize the WiFi MAC address on the Google Nexus 7. The closed-source root app Mac Spoofer is able to modify the device MAC address using Qualcomm-specific methods in such a way that the entire Android OS becomes convinced that this is your actual MAC.
However, doing this requires installation of a root-enabled, closed-source application from the Google Play Store, which we believe is extremely unwise on a device you need to be able to trust. Moreover, this app cannot be autorun on boot, and your MAC address will also reset every time you disable the WiFi interface (which is easy to do accidentally). It also supports using only a single, manually entered MAC address.
Hardware-independent techniques (such as the Terminal command busybox ifconfig wlan0 hw ether <mac>) appear to interfere with the WiFi management system and prevent it from associating. Moreover, they do not cause the Android system to report the new MAC address (visible under Settings -> About Tablet -> Status).
Obviously, an Open Source F-Droid app that properly resets (and automatically randomizes) the MAC every time the WiFi interface is brought up is badly needed.
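Until then, the generation half of the problem is straightforward to sketch: a valid random MAC needs the locally-administered bit set and the multicast bit clear in the first octet. Actually applying the address to wlan0 remains the hard, device-specific part, as noted above:

```shell
# Generate a random, locally-administered, unicast MAC address.
random_mac() {
  # Six random bytes from /dev/urandom, hex-encoded (12 lowercase hex chars).
  bytes=$(od -An -N6 -tx1 /dev/urandom | tr -d ' \n')
  first=$(printf '%s' "$bytes" | cut -c1-2)
  rest=$(printf '%s' "$bytes" | cut -c3-12)
  # Force locally-administered (0x02 set) and unicast (0x01 clear) bits.
  first=$(printf '%02x' $(( (0x$first & 0xFC) | 0x02 )))
  # Insert the colon separators.
  printf '%s%s\n' "$first" "$rest" | sed 's/../&:/g; s/:$//'
}

random_mac
```

A hypothetical F-Droid app would run this logic (or its Java equivalent) on every WiFi up-event and then apply the result using the device-appropriate method.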
Future Work: Disable Probes for Configured Wifi Networks
The Android OS currently probes for all of your configured WiFi networks while looking for open WiFi to connect to. Configured networks should not be probed for explicitly unless activity for their BSSID is seen. The xda-developers forum has a limited fix to change scanning behavior, but users report that it does not disable the active probing behavior for any "hidden" networks that you have configured.
Future Work: Recovery ROM Password Protection
An unlocked recovery ROM is a huge vulnerability surface for Android. While disk encryption protects your applications and data, it does not protect many key system binaries and boot programs. With physical access, it is possible to modify these binaries through your recovery ROM.
The ability to set a password for the Team Win recovery ROM, stored in such a way that a simple "fastboot flash recovery" would overwrite it, would go a long way toward improving device security: at least then it would become evident to you if your recovery ROM had been replaced, due to the absence of the password.
It may also be possible to restore your bootloader lock as an alternative, but then you lose the ability to make backups of your system using Team Win.
Future Work: Disk Encryption via TPM or Clever Hacks
Unfortunately, even disk encryption and a secure recovery firmware is not enough to fully defend against an adversary with an extended period of physical access to your device.
Cold Boot Attacks are still very much a reality against any form of disk encryption, and the best way to eliminate them is through hardware-assisted secure key storage, such as through a TPM chip on the device itself.
It may also be possible to mitigate these attacks by placing key material in SRAM memory locations that will be overwritten as part of the ARM boot process. If these physical memory locations are stable (and for ARM systems that use the SoC SRAM to boot, they will be), rebooting the device to extract key material will always end up overwriting it. Similar ARM CPU-based encryption defenses have also been explored in the research literature.
Future Work: Download and Build Process Integrity
Beyond the download integrity issues mentioned above, better build security is also deeply needed by all of these projects. A Gitian descriptor that is capable of building Cyanogenmod and arbitrary F-Droid packages in a reproducible fashion is one way to go about achieving this property.
Future Work: Removing Binary Blobs
If you read the Cyanogenmod build instructions closely, you can see that it requires extracting the binary blobs from some random phone, and shipping them out. This is the case with most ROMs. In fact, only the Replicant Project seems concerned with this practice, but regrettably it does not support any WiFi-only devices. This is rather unfortunate, because no matter what they do with the Android OS on existing cell-enabled devices, they will always be stuck with a closed-source, backdoored baseband that has direct access to the microphone, if not the RAM and the entire Android OS.
Kudos to them for finding one of the backdoors though, at least.
- Updated firewall scripts to fix Droidwall permissions vulnerability.
- Updated Applications List to recommend VLC as a free media player.
- Mention the Guardian Project's planned Tor Browser port (called OrFox) as Future Work.
- Mention disabling configured WiFi network auto-probing as Future Work
- Updated the firewall install script (and the android-firewall.zip that contains it) to disable "Captive Portal detection" connections to Google upon WiFi association. These connections are made by the Settings service user, which should normally be blocked unless you are Activating Google Play for the first time.
- Updated the Executive Summary section to make it clear that our SIP client can actually also make normal phone calls, too.
- Document removing the built-in microphone, for the truly paranoid folk out there.
- Document removing the remnants of the baseband, or disabling an existing baseband.
- Update SHA256SUM of FDroid.apk for 0.63
- Remove multiport usage from firewall-tor.sh script (and update android-firewall.zip).
- Add pro-tip to the microphone removal section: Don't remove your cameras. Black electrical tape works just fine, and can be removed and reapplied many times without smudges.
- Update android-firewall.zip installation and documentation to use /data/local instead of /etc. CM updates will wipe /etc, of course. Woops. If this happened to you while updating to CM-11-M5, download that new android-firewall.zip and run install-firewall.sh again as per the instructions above, and update your Droidwall custom script locations to use /data/local.
- Update the Future work section to describe some specific UI improvements.
- Update the Future work section to mention that we need to find hardware with actual isolated basebands. Duh. This should have been in there much earlier.
Today marks the first in what is likely to be a series of congressional hearings called in response to the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) historic announcement of its intent to transition key Internet domain name functions (DNS) to the global multistakeholder community. In advance of today’s hearing, Access, along with the Center for Democracy & Technology, Freedom House, Human Rights Watch, The Open Technology Institute at New America Foundation, and Public Knowledge have sent a letter to Congress expressing our support for the proposed transition.
Welcome to the thirteenth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.
Tor Project website redesign takes two steps forward
Andrew Lewman put out two calls for help with the ongoing Tor Project
If you’d like to give the website redesign further momentum, please see the dedicated project page on the wiki for open tickets and advice on how to contribute, then come to the www-team mailing list and join in!
QR codes for bridge addresses
Since most pocket computers (sometimes called “phones”) and laptops began incorporating cameras, QR codes have become a ubiquitous way to enter short sequences of data into our devices. URLs are the canonical example, but the process also works for Bitcoin addresses or OpenPGP fingerprints.
Bridges are the standard tool for circumventing filters that prevent access to the Tor network. Users currently enter bridge addresses in Tor by copy/pasting from the BridgeDB web page or auto-responder email. But manually giving IP addresses and fingerprints to Orbot on keyboard-less devices is an error-prone process.
QR codes might be a solution to this problem. They could also enable peer-to-peer exchange among friends, or circumvention strategies involving IPv6 addresses and paper. According to Isis Lovecruft, adding QR codes to the BridgeDB web interface would be easy. Would any reader feel like hacking Orbot or the Tor Launcher Firefox extension (see relevant documentation and API)?
Client identification in hidden service applications
Applications behind hidden services currently cannot easily differentiate between client connections. Tor will make a different local TCP connection for each connections it receives, but the software is unable to tell if they are coming from the same circuit. Harry SeventyOne felt the latter would be useful to enable applications for diagnostic log analysis, identifying traffic trends, rate-limiting or temporarily blocking operations coming from the same client.
Harry sent a very rough patch to the Tor development mailing which enables circuit distinction by using a different source IP address from the IPv4 localhost pool (
127.0.0.0/8) for each circuit. Nick Mathewson liked the idea and gave several comments about the preliminary patch. Hopefully this work will make the life of hidden service operators easier in the future.
Monthly status reports for March 2014
The wave of regular monthly reports from Tor project members for the month of March has begun. Georg Koppen released his report first, followed by reports from Pearl Crescent, Damian Johnson, Sherief Alaa, Nick Mathewson, Matt Pagan, Lunar, and Karsten Loesing.
Lunar also reported help desk statistics.
An extensive guide to hacking on Tor Browser was posted to the Tor Project’s wiki by Mike Perry. Among other things, it covers the browser’s build instructions, design principles and testing procedures, as well as a summary of how browser team members organize and communicate. If you’d like to get involved in Tor Browser development, please take a look!
Nicholas Hopper followed up on George Kadianakis’ research on switching to a single guard. He used Aaron Johnson’s TorPS simulator to find out the “typical” bandwidth for a client. The conclusions match George’s: a single guard and a bandwidth cutoff of 2 Mbit/s would improve over the current situation. George subsequently sent an initial draft proposal to start the formal process.
BridgeDB version 1.6 was deployed on March 26th. Thanks to Isis Lovecruft, users should now be able to solve the CAPTCHA again. A custom solution is now used instead of Google’s reCAPTCHA services which will give more flexibility in the future.
John Brooks presented Torsion, “a ready-to-use hidden service instant messaging client”. “I’m looking for people to try it out, validate my ideas and implementation, and help plan the future”, wrote John. You can consult the design documentation and build instructions on Github; please share your comments with the community!
Martin Weinelt shared a plugin that generates graphs in the Munin network monitoring tool from data provided by Tor, using Stem. “At the moment it supports a connection graph, getting its data from orconn-status. More graphs are possible, but not yet implemented. Ideas are welcome,” wrote Martin.
Amid the ongoing censorship of internet services in Turkey, there were reports that the Tor Project’s website was unavailable over connections supplied by some Turkish ISPs. Feel free to try one of the mirrors!
Karsten Loesing published a draft of a guide to running a blog over a Tor hidden service using the Jekyll static site generator. “The intended audience are bloggers who can handle a terminal window but who don’t know the typical pitfalls of securely setting up a web server over a hidden service”, he wrote. However, the guide is in its first stages, and “may contain severe problems harming your privacy!” Feedback on its content, wording, and layout would be greatly appreciated.
Yawning Angel called for help with testing obfsclient 0.0.2, a C++ implementation of the obfs3 and ScrambleSuit pluggable transports: “This is mostly a bug fix release that addresses issues found in testing/actual use […] Questions, comments, feedback appreciated as always.”
Michael Rogers has been “working on a messaging app that uses Tor hidden services to provide unlinkability (from the point of view of a network observer) between users and their contacts”. But as “users know who their contacts are”, the mutual anonymity provided by hidden services is not a requirement. Michael asked how hidden services performance could be improved for this use case.
On the Tor Blog, Sukhbir Singh posted a round-up of the various methods by which users can download and run the Tor Browser, covering download mirrors, GetTor, bridge address distribution, and pluggable transports usage. If you’re having trouble acquiring or using a copy of the Tor Browser, please look here for links and guidance.
Mike Perry discovered “that the Linux kernel appears to have a leak in how it applies transproxy rules to the TCP CLOSE_WAIT shutdown condition under certain circumstances”. Be sure to look at Mike’s email if you use Tor’s TransProxy feature. velope later improved the original mitigating firewall rule.
As part of the ongoing project to rewrite the Tor Weather service, Sreenatha Bhatlapenumarthi and Karsten Loesing collaborated to produce a Python script that enables it to determine whether or not relay operators have fulfilled the requirements for a free Tor T-shirt.
Lukas Erlacher announced the avaibility of OnionPy, “a Python wrapper for OnionOO with support for transparently caching OnionOO replies in memcached”. It should be useful to the on-going rewrite of the Tor Weather service.
The deadline for submissions to the Tails logo contest passed on March 31st; you can review all of the proposed designs, from the minimalist to the psychedelic, on the Tails website.
Tor help desk roundup
The help desk often gets confusing reports that after being directed to download the latest Tor Browser version by a flashing TorBrowserButton, users still sometimes see a message that their Tor Browser is out of date. This happens when the new Tor Browser version was installed over the previous one. Fortunately the underlying bug will be fixed in the next Tor Browser release. We recommend extracting each Tor Browser update to an empty directory rather than overwriting the old one, to prevent similar unexpected behaviors. The longer-term solution for issues like this is an auto-updating Tor Browser.
News from Tor StackExchange
Tor’s StackExchange site is doing a self-evaluation. If you have an account, please log in and evaluate the questions as well as their answers. It helps to improve the answers and the site in general.
Furthermore, if you happen to visit the site, check the list of unanswered questions. If you know an answer, please share your knowledge with the people.
This issue of Tor Weekly News has been assembled by Lunar, harmony, David Fifield, Matt Pagan, qbi and Karsten Loesing.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report
important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
Below is a collection of resources that will help you get Tor up and running. We also discuss alternative approaches of downloading the Tor Browser Bundle and provide mirrors for all these resources in case torproject.org is blocked.
To start with, please look at Bundle Downloads and determine the best way for you to download the Tor Browser Bundle. After you have downloaded the bundle and before you install/extract it, you should also verify it to make sure the bundle you downloaded is genuine and has not been tampered with; this step is optional but recommended.
We have screencasts (video guides) that will help you with the installation and verification process on Windows, Linux and OS X.
Text guide for signature verification
GetTor is a program for serving the Tor Browser Bundle through email. This is particulary useful if you cannot access torproject.org or any other mirrors.
To request a bundle from GetTor, send a blank email to firstname.lastname@example.org. GetTor will then respond with links to the Tor Browser Bundle for all platforms.
Note: GetTor was earlier restricted to requests from Gmail and Yahoo!. This is no longer the case and you can request for bundles from any email address, including Outlook.
If you are unable to reach the Tor network after installation (Tor Launcher starts, however the green progress bar stops), you need to use bridges.
One way to find public bridge addresses is to send an email (from a Gmail or a Yahoo! address) to email@example.com with the line 'get bridges' by itself in the body of the mail.
You can also acquire bridges by visiting https://bridges.torproject.org/. If you see that this page is offline, please wait for a few minutes and try again.
1. Launch the Tor Browser Bundle
2. Click "Configure"
3. Click "Next" until you reach a page that reads "If this computer's Internet connection is censored, you will need to obtain and use bridge relays"
4. Enter the bridges you received from one of the methods above into the text box
5. Click "Connect"
If you find that using standard bridges fails for you, you can try using the 3.6-beta-1 bundle located on the same downloads page listed above. These bundles included integrated pluggable transport support, and are useful in areas where standard bridges are blocked.
To activate pluggable transports in the 3.6-beta-1 bundle, follow the bridge directions above, however simply select "obfs3" or "fte" when you reach the bridge configuration page (instead of entering bridge addresses yourself).
Still need help? If you have any questions, trouble connecting to Tor network, or need to talk to a human, please contact our support team at:
firstname.lastname@example.org for English
email@example.com for Arabic
firstname.lastname@example.org for Spanish
email@example.com for Farsi
firstname.lastname@example.org for French
email@example.com for Mandarin
Written in collaboration with Colin Childs
Hardware Security Modules (aka Smartcards, chipcards, etc) provide a secure way to store and use cryptographic keys, while actually making the whole process a bit easier. In theory, one USB thumb drive like thing could manage all of the crypto keys you use in a way that makes them much harder to steal. That is the promise. The reality is that the world of Hardware Security Modules (HSMs) is a massive, scary minefield of endless technical gotchas, byzantine standards (PKCS#11!), technobabble, and incompatibilities. Before I dive too much into ranting about the days of my life wasted trying to find a clear path through this minefield, I’m going to tell you about one path I did find through to solve a key piece of the puzzle: Android and Java package signing.
For this round, I am covering the Aventra MyEID PKI Card. I bought a SIM-sized version to fit into an ACS ACR38T-IBS-R smartcard reader (it is apparently no longer made, and the ACT38T-D1 is meant to replace it). Why such specificity you may ask? Because you have to be sure that your smartcard will work with your reader, and that your reader will have a working driver for you system, and that your smartcard will have a working PKCS#11 driver so that software can talk to the smartcard. Thankfully there is the OpenSC project to cover the PKCS#11 part, it implements the PKCS#11 communications standard for many smartcards. On my Ubuntu/precise system, I had to install an extra driver,
libacr38u, to get the ACR38T reader to show up on my system.
So let’s start there and get this thing to show up! First we need some packages. The OpenSC packages are out-of-date in a lot of releases, you need version 0.13.0-4 or newer, so you have to add our PPA (Personal Package Archive) to get current versions, which include a specific fix for the Aventra MyEID: (fingerprint:
F50E ADDD 2234 F563):
sudo add-apt-repository ppa:guardianproject/ppa
sudo apt-get update
sudo apt-get install opensc libacr38u libacsccid1 pcsc-tools usbutils
First thing, I use
lsusb in the terminal to see what USB devices the Linux kernel sees, and thankfully it sees my reader:
Bus 005 Device 013: ID 072f:9000 Advanced Card Systems, Ltd ACR38 AC1038-based Smart Card Reader
Next, its time to try
pcsc_scan to see if the system can see the smartcard installed in the reader. If everything is installed and in order, then
pcsc_scan will report this:
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
0: ACS ACR38U 00 00
Thu Mar 27 14:38:36 2014
Reader 0: ACS ACR38U 00 00
Card state: Card inserted,
ATR: 3B F5 18 00 00 81 31 FE 45 4D 79 45 49 44 9A
pcsc_scan cannot see the card, then things will not work. Try re-seating the smardcard in the reader, make sure you have all the right packages installed, and if you can see the reader in
lsusb. If your smartcard or reader cannot be read, then
pcsc_scan will report something like this:
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
Waiting for the first reader...
Moving right along… now
pcscd can see the smartcard, so we can start playing with using the OpenSC tools. These are needed to setup the card, put PINs on it for access control, and upload keys and certificates to it. The last annoying little preparation tasks are finding where
opensc-pkcs11.so is installed and the “slot” for the signing key in the card. These will go into a config file which
jarsigner need. To get this info on Debian/Ubuntu/etc, run these:
$ dpkg -S opensc-pkcs11.so
$ pkcs11-tool --module /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so \
Slot 0 (0xffffffffffffffff): Virtual hotplug slot
Slot 1 (0x1): ACS ACR38U 00 00
token label : MyEID (signing)
token manufacturer : Aventra Ltd.
token model : PKCS#15
token flags : rng, login required, PIN initialized, token initialized
hardware version : 0.0
firmware version : 0.0
serial num : 0106004065952228
This is the info needed to put into a
jarsigner need in order to talk to the Aventra HSM. The name, library, and slot fields are essential, and the description is helpful. Here is how the
opensc-java.cfg using the above information looks:
name = OpenSC
description = SunPKCS11 w/ OpenSC Smart card Framework
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
slot = 1
Now everything should be ready for initializing the HSM, generating a new key, and uploading that key to the HSM. This process generates the key and certificate, puts them into files, then uploads them to the HSM. That means you should only run this process on a trusted machine, certainly with some kind of disk encryption, and preferably on a machine that is not connected to a network, running an OS that has never been connected to the internet. A live CD is one good example, I recommend Tails on a USB thumb drive running with the secure persistent store on it (we have been working here and there on making a TAILS-based distro specifically for managing keys, we call it CleanRoom).
First off, the HSM needs to be initialized, then set up with a signing PIN and a “Security Officer” PIN (which means basically an “admin” or “root” PIN). The signing PIN is the one you will use for signing APKs, the “Security Officer PIN” (SO-PIN) is used for modifying the HSM setup, like uploading new keys, etc. Because there are so many steps in the process, I’ve written up scripts to run thru all of the steps. If you want to see the details, read the scripts. The next step is to generate the key using
openssl and upload it to the HSM. Then the HSM needs to be “finalized”, which means the PINs are activated, and keys cannot be uploaded. Don’t worry, as long as you have the SO-PIN, you can erase the HSM and re-initialize it. But be careful! Many HSMs will permanently self-destruct if you enter in the wrong PIN too many times, some will do that after only three wrong PINs! As long as you have not finalized the HSM, any PIN will work, so play around a lot with it before finalizing it. Run the init and key upload procedure a few times, try signing an APK, etc. Take note: the script will generate a random password for the secret files, then echo that password when it completes, so make sure no one can see your screen when you generate the real key. Alright, here goes!
code $ git clone https://github.com/guardianproject/smartcard-apk-signing
code $ cd smartcard-apk-signing/Aventra_MyEID_Setup
Aventra_MyEID_Setup $ ./setup.sh
Edit pkcs15-init-options-file-pins to put in the PINs you want to set:
Aventra_MyEID_Setup $ emacs pkcs15-init-options-file-pins
Aventra_MyEID_Setup $ ./setup.sh
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to erase card.
PIN [Security Officer PIN] required.
Please enter PIN [Security Officer PIN]:
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to create PKCS #15 meta structure.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to generate key.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to generate key.
next generate a key with ./gen.sh then ./finalize.sh
Aventra_MyEID_Setup $ cd ../openssl-gen/
openssl-gen $ ./gen.sh
Usage: ./gen.sh "CertDName" 
"/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddressfirstname.lastname@example.org"
openssl-gen $ ./gen.sh "/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddressemail@example.com"
Generating key, be patient...
2048 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
e is 65537 (0x10001)
subject=/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddressfirstname.lastname@example.org
Getting Private key
writing RSA key
Your HSM will prompt you for 'Security Officer' aka admin PIN, wait for it!
Enter destination keystore password:
Entry for alias 1 successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or cancelled
Key fingerprints for reference:
The public files are: certificate.pem publickey.pem request.pem
The secret files are: secretkey.pem certificate.p12 certificate.jkr
The passphrase for the secret files is: fTQ*he-[:y+69RS+W&+!*0O5i%n
openssl-gen $ cd ../Aventra_MyEID_Setup/
Aventra_MyEID_Setup $ ./finalize.sh
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to delete object(s).
Your HSM is ready for use! Put the secret key files someplace encrypted and safe!
Now your HSM should be ready for use for signing. You can try it out with
keytool to see what is on it, using the signing PIN not the Security Officer PIN:
smartcard-apk-signing $ /usr/bin/keytool -v \
> -providerClass sun.security.pkcs11.SunPKCS11 \
> -providerArg opensc-java.cfg \
> -providerName SunPKCS11-OpenSC -keystore NONE -storetype PKCS11 \
Enter keystore password:
Keystore type: PKCS11
Keystore provider: SunPKCS11-OpenSC
Your keystore contains 1 entry
Alias name: 1
Entry type: PrivateKeyEntry
Certificate chain length: 1
Owner: EMAILADDRESSemail@example.com, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Issuer: EMAILADDRESSfirstname.lastname@example.org, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Serial number: aa6887be1ec84bde
Valid from: Fri Mar 28 16:41:26 EDT 2014 until: Mon Aug 12 16:41:26 EDT 2041
Signature algorithm name: SHA1withRSA
And let’s try signing an actual APK using the arguments that Google recommends, again, using the signing PIN:
smartcard-apk-signing $ /usr/bin/jarsigner -verbose \
> -providerClass sun.security.pkcs11.SunPKCS11 \
> -providerArg opensc-java.cfg -providerName SunPKCS11-OpenSC \
> -keystore NONE -storetype PKCS11 \
> -sigalg SHA1withRSA -digestalg SHA1 \
> bin/LilDebi-release-unsigned.apk 1
Enter Passphrase for keystore:
Now we have a working, but elaborate, process for setting up a Hardware Security Module for signing APKs. Once the HSM is setup, using it should be quite straightforward. Next steps are to work out as many kinks in this process as possible so this will be the default way to sign APKs. That means things like figuring out how Java can be pre-configured to use OpenSC in the Debian package, as well as including all relevant fixes in the
opensc packages. Then the ultimate is to add support for using HSMs in Android’s generated build files like the
ant that is generated by
android update project. Then people could just plug in the HSM and run
ant release and have a signed APK!
Welcome to the twelfth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.
Tor 0.2.5.3-alpha is out
Nick Mathewson cut a new release of the Tor development branch on March 23rd: “Tor 0.2.5.3-alpha includes all the fixes from 0.2.4.21. It contains two new anti-DoS features for Tor relays, resolves a bug that kept SOCKS5 support for IPv6 from working, fixes several annoying usability issues for bridge users, and removes more old code for unused directory formats.”
This release also marks the first step toward the stabilization of Tor 0.2.5, as from now on “no feature patches not already written will be considered for inclusion”.
The source is available at the usual location, as are updated binary packages.
Tails 0.23 is out…
…but many Tails users are already running it. Now that incremental upgrades have been turned on by default with the previous release, users of Tails on USB sticks have been able to enjoy the process of a smooth upgrade in three clicks.
Tails will now do “MAC spoofing” by default. To hide the hardware address used on the local network, Tails will now use a randomized address by default. This will help prevent the tracking of one’s geographical location across networks. For more information about MAC spoofing, why it matters, and when it might be relevant to turn it off, be sure to read the very well-written documentation.
Another important feature is the integrated support for proxies and Tor bridges. This should be of immense help to users of Tails on censored networks. The integration is done using the Tor Launcher extension, familiar to everyone who has used recent versions of the Tor Browser.
For examples of smaller features and bugfixes: Tor, obfsproxy, I2P, Pidgin and the web browser have been upgraded, a 64-bit kernel is used on most systems to pave the way for UEFI support, documentation is now accessible from the greeter, and the “New identity” option in the browser is available again.
The next Tails release is scheduled for April 29th and will be 1.0. For this important milestone in 5 years of intense work, the Tails team is still looking for a logo.
New Tor Browser releases
The Tor Browser team put out two new releases based on Firefox 24.4.0esr. Version 3.5.3 is meant as a safe upgrade for every Tor Browser user. Among other changes, the new version contains an updated Tor, a fix for a potential freeze, a fix for the Ubuntu keyboard issue and a way to prevent disk leaks when watching videos.
On top of the preceding changes, version 3.6-beta-1 is the culmination of a months-long effort to seamlessly integrate pluggable transports into the Tor Browser. In the network settings, users can now choose “Connect with provided bridges” and select from “obfs3”, “fte” or “flashproxy”. Entering custom bridges is also supported and will work for direct, obfs2 and obfs3 bridges.
Other usability changes include wording improvements in the connection wizard, translatable Tor status messages, and the use of disk image (DMG) instead of ZIP archives for Mac OS X.
Please upgrade, in any case, and consider helping iron out the remaining issues in the 3.6 branch.
Since the 3.5 release, “Tor Browser Bundle is more like a standalone browser and less like a bundle”. This led the Tor Browser team to plan to “rename it to just ‘Tor Browser’ everywhere”.
Alex reported an “important case about Tor relay operators” which came to court in Athens, Greece on March 18th. The defendant, a Tor relay operator, was acquitted after proving that the IP address used for criminal activity was in fact a Tor relay.
James Valleroy wrote to tor-relays asking the best way to configure the FreedomBox as a Tor bridge. Lance Hathaway explained about pluggable transports and Roger Dingledine mentioned the potential issues of relaying a bridge and a hidden service at the same time.
A Tor exit operator recently held an Ask Me Anything on Reddit, which was quite successful, generating over 800 upvotes, 478 comments, and being read by thousands. The most popular questions were focused on how to improve the use of Tor, the legality of exit nodes, discussions on hidden services, the workings of Tor, and many other topics related to privacy and security.
Tor help desk roundup
Users sometimes want to know how to transfer their bookmarks from an old Tor Browser to an updated one. Mozilla provide instructions on how to do this on their website.
The new Tor Browser releases were again prevented from working properly by WebRoot Internet Security. The error message is “Couldn’t load XPCOM”. Users need to disable WebRoot, whitelist the appropriate Tor Browser files, and more importantly contact WebRoot support to warn them that their product is breaking the Tor Browser and, to the best of Tor support’s knowledge, Firefox stable releases. Ideally, WebRoot should test new releases before harming Tor users. See #11268 if you want to help.
News from Tor StackExchange
uighur1984 wanted to set up a hidden service for their public-facing website and decided that the HiddenServiceDir should be the same like the DocumentRoot of the website. This led to some problems with access rights. Sam Whited clarified that both directories should be separated: the data in the HiddenServiceDir doesn't contain any actual data from the website, but only the keys and other information from the hidden service.
Gondalse shot a video showing policemen torturing a citizen. The release of the video led to a trial, and Gondalse fears that someone might try to track the owner of the video down. Jens Kubieziel pointed out some OPSEC rules, and showed which problems can lead to deanonymization.
This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, harmony, qbi, Jesse Victors, and Karsten Loesing.
An interesting turn of events (which we are very grateful for!)
Get press kit and more at: https://guardianproject.info/press
GOOGLE EXECUTIVE CHAIRMAN ERIC SCHMIDT AWARDS GUARDIAN PROJECT A “NEW DIGITAL AGE” GRANT
The Guardian Project is amongst the 10 chosen grantee organizations to be awarded a $100,000 digital age grant due to its extensive work creating open source software to help citizens overcome government-sponsored censorship.
NEW YORK, NY (March 10, 2014)—Ten non-profits in the U.S. and abroad
have been named recipients of New Digital Age Grants, funded through a
$1 million donation by Google executive chairman Eric Schmidt. The
Guardian Project is one of two New York City-based groups receiving an
The New Digital Age Grants were established to highlight organizations
that use technology to counter the global challenges Schmidt and
Google Ideas Director Jared Cohen write about in their book THE NEW
DIGITAL AGE, including government-sponsored censorship, disaster
relief and crime fighting. The book was released in paperback on March 4.
“The recipients chosen for the New Digital Age Grants are doing some
very innovative and unique work, and I’m proud to offer them this
encouragement,” said Schmidt. “Five billion people will encounter the
Internet for the first time in the next decade. With this surge in the
use of technology around the world—much of which we in the West take
for granted—I felt it was important to encourage organizations that
are using it to solve some of our most pressing problems.”
Guardian Project founder, Nathan Freitas, created the project based on
his first-hand experience working with Tibetan human rights and
independence activists for over ten years. Today, March 10th, is the
55th anniversary of the Tibetan Uprising Day against Chinese
occupation. “I have seen first hand the toll that online censorship,
mobile surveillance and digital persecution can take on a culture,
people and movement,” said Freitas. “I am elated to know Mr. Schmidt
supports our effort to fight back against these unjust global trends
through the development of free, open-source mobile security apps.”
Many of the NDA grantees, such as Aspiration, Citizen Lab and OTI,
already work with the Guardian Project on defending digital rights,
training high-risk user groups and doing core research and development
of anti-censorship and surveillance defense tools and training.
The New Digital Age Grants are being funded through a private donation
by Eric and Wendy Schmidt.
About the Guardian Project
The Guardian Project is a global collective of software developers
(hackers!), designers, advocates, activists and trainers who develop
open source mobile security software and operating system
enhancements. They also create customized mobile devices to help
individuals communicate more freely and protect themselves from
intrusion and monitoring. The effort specifically focuses on users who
live or work in high-risk situations, and who often face constant
surveillance and intrusion attempts into their mobile devices and communications.
Since it was founded in 2009, the Guardian Project has developed more
than a dozen mobile apps for Android and iOS with over two million
downloads and hundreds of thousands of active users. In the last five
years the Guardian Project has partnered with prominent open source
software projects, activist groups, NGOs, commercial partners and
news organizations to support their mobile security software
capabilities. This work has been made possible with funding from
Google, UC Berkeley with the MacArthur Foundation, Avaaz, Internews,
Open Technology Fund, WITNESS, the Knight Foundation, Benetech, and
Free Press Unlimited. Through work on partner projects like The Tor
Project, Commotion mesh and StoryMaker, we have received indirect
funding from both the US State Department through the Bureau of
Democracy, Human Rights and Labor Internet Freedom program, and the
Dutch Ministry of Foreign Affairs through HIVOS.
The Guardian Project is very grateful for this personal donation and
is happy to have its work recognized by Mr. Schmidt. This grant will
allow us to continue our work on ensuring users around the world have
access to secure, open and trustworthy mobile messaging services. We
will continue to improve reliability and security of ChatSecure for
Android and iOS and integrate the OStel voice and video calling
services into the app for a complete secure communications solution.
We will support the work of the new I.M.AWESOME (Instant Messaging
Always Secure Messaging) Coalition focused on open-standards,
decentralized secure mobile messaging, and voice and video
communications. Last, but not least, we will improve device testing,
support and outreach to global human rights defenders, activists and
journalists, bringing the technology that the Guardian Project has
developed to the people that need it most.
About the NDA Recipients
Aspiration in San Francisco, CA, provides deep mentorship to build
tech capacity supporting Africa, Asia and beyond. Their NDA grant will
grow their capacity-building programs for the Global South, increasing
technical capacity to meet local challenges.
C4ADS, a nonprofit research team in Washington, DC, is at the cutting
edge of unmasking Somali pirate networks, Russian arms-smuggling
rings, and other illicit actors entirely through public records. Their
data-driven approach and reliance on public documents has enormous
potential impact, and the grant will help with their next big project.
The Citizen Integration Center in Monterrey, Mexico has developed an
innovative public safety broadcast and tipline system on social media.
Users help their neighbors—and the city—by posting incidents and
receiving alerts when violence is occurring in their communities. The
grant will help them broaden their reach.
The Citizen Lab at the Munk School of Global Affairs at the University
of Toronto, Canada, is a leading interdisciplinary laboratory
researching and exposing censorship and surveillance. The grant will
support their technical reconnaissance and analysis, which uniquely
combines experts and techniques from computer science and the social sciences.
The Guardian Project, based in New York City, develops open-source
secure communication tools for mobile devices. ChatSecure and OStel,
their encrypted messaging, voice and video communication services,
which are both built on open standards, have earned the trust of tens
of thousands of users in repressively-censored environments, and the
grant will advance their work.
The Igarapé Institute in Rio de Janeiro, Brazil, focuses on violence
prevention and reduction through technology. Their nonprofit work on
anti-crime projects combines the thoughtfulness of a think tank with
the innovative experimentation of a technology design shop. The grant
will support their research and development work.
KoBo Toolbox in Cambridge, MA, allows fieldworkers in far-flung
conflict and disaster zones to easily gather information without
active Internet connections. The grant will help them revamp their
platform to make it easier and faster to deploy.
The New Media Advocacy Project in New York, NY, is a nonprofit
organization developing mobile tools to map violence and
disappearances in challenging environments. The grant will allow them
to refine their novel, interactive, video-based interfaces.
The Open Technology Institute at the New America Foundation in
Washington, DC, advances open architectures and open-source
innovations for a free and open Internet. The grant will assist their
work with the Measurement Lab project to objectively measure and
report Internet interference from repressive governments.
Portland State University in Portland, OR, is leading ground-breaking
research on network traffic obfuscation techniques, which improve
Internet accessibility for residents of repressively-censored
environments. The grant will support the research of Professor Tom
Shrimpton and his lab, who—with partners at the University of
Wisconsin and beyond—continue to push the boundaries with new
techniques like Format Transforming Encryption.
The HTTPS protocol is based on TLS and SSL, which are standard ways to negotiate encrypted connections. There is a lot of complexity in these protocols and many configuration options, but luckily most of them can be ignored since the defaults are fine. There are, however, some settings worth tweaking to ensure that as many connections as possible use reliable encryption ciphers while providing forward secrecy. A connection with forward secrecy protects past transactions even if the server’s HTTPS private key/certificate is stolen or compromised. This protects your users from large-scale network observers that can store all traffic for later decryption, like governments, ISPs, and telecoms. From the server operator’s point of view, it means less risk of leaking users’ data, since even if the server is compromised, past network traffic will probably not be decryptable.
In my situation, I was using our development site, https://dev.guardianproject.info, as my test bed. It runs Apache 2.2 and OpenSSL 1.0.1 on Ubuntu 12.04 LTS (precise), which means some of the options are more limited since this is an older release. On Debian, Ubuntu and other Debian derivatives, you’ll only need to edit
/etc/apache2/mods-available/ssl.conf. There are more paranoid resources for perfectly configuring your TLS, but we’re not ready to drop support for old browsers that only support SSLv3, and not TLS at all. So I went with this line to enable SSLv3 and TLSv1.0 and newer:
SSLProtocol all -SSLv2
With TLS connections, the client and the server each present a list of the encryption ciphers they support, in order of preference. This enables the client and server to choose a cipher that both support. Normally, the client’s list takes precedence over the server’s, but that can be changed so that the server’s preferences are used instead. Unfortunately, it seems that Microsoft Internet Explorer (IE) ignores this and always uses the client’s preference first. Here’s how to make Apache request that the server preferences are preferred:
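In Apache’s mod_ssl, the directive that does this is:

```apache
SSLHonorCipherOrder on
```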
Next up is tweaking the server’s preference list to put ciphers that enable forward secrecy first (don’t worry if you don’t follow all of the rationale here; my aim is to walk through the process). This is done in most web servers using OpenSSL-style cipher lists. I started out with what Mozilla recommends, then pared down the list to remove AES-256 ciphers, since AES-128 is widely regarded to be faster, quite strong, and perhaps more resistant to timing attacks than AES-256. I also chose to remove RC4-based ciphers, since RC4 might already be broken and will only get worse with time. RC4 has historically been used to mitigate the “BEAST” attack, but that mitigation is mostly happening in the clients now. With that, I ended up with this cipher list (it should be all one line in your config file):
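The exact list did not survive in this copy; a cipher list matching the description above (forward-secrecy ECDHE/DHE suites first, AES-128 only, RC4 and anonymous suites excluded) might look like:

```apache
SSLCipherSuite "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!aNULL:!eNULL:!RC4:!3DES:!MD5"
```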
One thing to make sure is that all of these ciphers are supported on your system. You can get the list of supported ciphers from openssl ciphers. I used this command line to get them in a nice, alphabetized list:
openssl ciphers | sed 's,:,\n,g' | sort
Lastly, we want to set the HSTS header to tell the browser to always use HTTPS. To enforce this, a header is added to the collection of HTTP headers delivered when connecting to the HTTPS site. This header tells the client browser to always connect to the current domain using HTTPS. It includes an expiration date (aka max-age) after which the client browser will again allow HTTP connections to that domain. The server might then again redirect the HTTP connection to HTTPS, and again the client will get the HSTS header, and use only HTTPS until the expiration date comes around again. To include this header in your Apache server, add this line:
Header add Strict-Transport-Security "max-age=15768000;includeSubDomains"
Now you can check the results of your work with Qualys’ handy SSL Test. You can see the result of my efforts here: https://www.ssllabs.com/ssltest/analyze.html?d=dev.guardianproject.info. A- is not bad. I tried for a good long while to get IE to use FS (Forward Secrecy) ciphers, but failed. IE does not respect the server-side cipher preferences. My guess is that the only way to get IE to use FS ciphers is to make a custom cipher list that does not include anything but FS ciphers and serve that only to IE. I know it is possible because bitbucket.com got an A+ for doing it. For a quick way to check out the cipher lists and HSTS header, look at iSEC Partners’ sslyze.
This is only a quick overview of the process to outline the general concepts. To find out more I recommend reading the source articles for this post, including specific directions for nginx and lighttpd:
Intents. It also provides the framework for reusing large chunks of apps based on the Activity class. Intents are the messages that make the requests, and
Activitys are the basic chunks of functionality in an app, including its interface. This combination allows apps to reuse large chunks of functionality while keeping the user experience seamless and fluent. For example, an app can send an Intent to request a camera Activity to prompt the user to take a picture, and that process can feel integrated into the original app that made the request. Another common use of this paradigm is choosing account information from the contacts database (aka the People app). When a user is composing a new email, they will want to select who the message gets sent to. Android provides both the contacts database and a nice overlay screen for finding and selecting the person to send to. This combination is an Activity provided by Android. The message that the email program sends in order to trigger that Activity is an Intent.
As usual, one of the downsides of flexibility is increased security risk. This is compounded in the Android system by rules that will automatically export an Activity to receive Intents from any app when certain conditions are met. If an Activity is exported for any app to call, it is possible for apps to send malicious Intents to that Activity. Some Intents are meant to be public, and others are exported as a side effect. Either way, at the very least, it is necessary to sanitize the input that an Activity receives. On the other side of the issue, if an app is trusting another app to provide a sensitive service for it, then malware can pose as the trusted app and receive sensitive data from the trusting app. An app does not need to request any permissions in order to set itself up as a receiver of Intents.
Android, of course, does provide some added protections for cases like this. For very sensitive situations, an
Activity can be set up to only receive
Intents from apps that meet certain criteria. Android permissions can restrict other apps from sending
Intents to any given exported
Activity. If a separate app wants to send an
Intent to an
Activity that has been set with a permission, then that app must include that permission in its manifest, thereby publishing that it is using that permission. This provides a good way to publish a permission-guarded API while leaving it relatively open for other apps to use. Other kinds of controls can be based on two aspects of an app that the Android system enforces to remain the same: the package name and the signing key. If either of those changes, then Android considers it a different app altogether. The strictest control is handled by the “protection level”, which can be set to only allow either the system or apps signed by the same key to send
Intents to a given
Activity. These security tools are useful in many situations, but leave lots of privacy-oriented use cases uncovered.
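As an illustrative sketch (the package and permission names here are hypothetical), an exported Activity protected by a signature-level permission is declared in the manifest along these lines:

```xml
<!-- Define a permission that only apps signed with the same key can hold -->
<permission
    android:name="org.example.reporter.permission.SUBMIT_MEDIA"
    android:protectionLevel="signature" />

<!-- Only apps holding that permission may send Intents to this Activity -->
<activity
    android:name=".SubmitMediaActivity"
    android:exported="true"
    android:permission="org.example.reporter.permission.SUBMIT_MEDIA" />
```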
There are some situations that need more flexibility without opening things up entirely. The first simple example is provided by our app Pixelknot: it needs to send pictures through services that will not mess up the hidden data in the images. It has a trusted list of apps it will send to, based on apps that have proven to pass the images through unchanged. When the user goes to share the image from Pixelknot to a cloud storage app, the user will be prompted to choose from a list of installed apps that match the whitelist in Pixelknot. We could have implemented a permission and asked lots of app providers to implement it, but it seems a mammoth task to get large companies like Dropbox and Google to include our specific permission.
There are other situations that require even tighter restrictions than are available. The first example here comes from our OpenPGP app for Android. Gnu Privacy Guard (GPG) provides cryptographic services to any app that requests them. When an app sends data to GPG to be encrypted, it needs to be sure that the data is actually going to GPG and not to some malware. For very sensitive situations, the Android-provided package name and signing key might not be enough to ensure that the correct app is receiving the unencrypted data. Many Android devices are still unpatched against the master key bugs, and people using Android in China, Iran, and other places where the Play Store is not allowed don’t get the exploit scanning provided by Google. Telecoms around the world have proved to be bad at updating the software for the devices that they sell, leaving many security problems unfixed. Alternative Android app stores are a very popular way to get apps, and so far the ones that we have seen provide minimal security and no malware scanning. In China, Android is very popular, so this represents a lot of Android users.
Another potential use case revolves around a media reporting app that relies on other apps to provide images and video as part of regular reports. This could be something like a citizen journalist editing app or a human rights reporting app. The Guardian Project develops a handful of apps designed to create media in these situations: ObscuraCam, InformaCam, and a new secure camera app in the works that we are contributing to. We want InformaCam to work as a provider of verifiable media to any app. It generates a package of data that includes a cryptographic signature so that its authenticity can be verified. That means that the apps that transport the InformaCam data do not need to be trusted in order to guarantee the integrity of the uploaded InformaCam data. Therefore it does not make sense in this case for InformaCam to grant itself permissions to access other apps’ secured
Activitys. It would add to the maintenance load of the app without furthering the goals of the InformaCam project. Luckily there are other ways to address that need.
The inverse of this situation is not true. The reporting app that gathers media and sends it to trusted destinations has higher requirements for validating the data it receives via
Intents. If verifiable media is required, then this reporter app will want to only accept incoming media from InformaCam. Well-known human rights activists are often the target of custom malware designed to extract information from their phones. In this example, a malware version of InformaCam could be designed to track all of the media that the user is sending to the human rights reporting app. To prevent this, the reporter app will want to only accept data from a list of trusted apps. When the user tries to feed media from the malware app to the reporting app, it would be rejected, alerting the user that something is amiss. If a reporting app wants to receive data only from InformaCam, it needs to have some checks set up to enforce that. The easiest way for the reporting app to implement those checks would be to add an Android permission to the receiving
Activity. But that requires the sending app, which in the example above is InformaCam, to implement the reporting app’s permission. Using permissions works for tailored interactions, but InformaCam aims to bring tighter security to all relevant interactions, so we need a different approach. While InformaCam could include some specific permissions, the aim is to have a single method that supports all the desired interactions. Having a single method here means less code to audit, less complexity, and fewer places for security bugs.
We have started auditing the security of communication via
Intents, while also working on various ideas to address the issues laid out so far. This will include laying out best practices and defining gaps in the Android architecture. We plan on building the techniques that we find useful into reusable libraries, to make it easy for others to also have more flexible and trusted interactions. When are the standard checks not enough? If the user has a malware version of an app that exploits the master key bugs, then the signature on the app will still appear valid. If a check is based only on a package name, malware could use any given package name. Android enforces that only one app with a given package name can be installed at a time, but if the genuine app is not already installed, nothing prevents a malware version with the same package name from being installed instead.
One approach is pinning: an app stores the expected hash or signing key of the apps it will accept Intents from, and the installed app is then compared to the pre-stored pinned value. This kind of pinning allows for checks like the
Signature permission level, but based on a key that the app developer can select and include in the app. The built-in
Signature permissions are fixed to the signing key of the currently running app.
TOFU/POP means Trust-On-First-Use/Persistence Of Pseudonym. In this model, popularized by SSH, the user marks a given hash or signing key as trusted the first time they use the app, without extended checks of that app’s validity. That mark then describes a “pseudonym” for that app, since there is no verification process, and that pseudonym is remembered for comparison in future interactions. One big advantage of TOFU/POP is that the user has control over which apps to trust, and that trust relationship is created at the moment the user takes an action to start using the app that needs to be trusted. That makes it much easier to manage than using Android permissions, which must be managed by the app’s developer. A disadvantage is that the initial trust is basically a guess, which leaves open a route for malware to get in. The problem of installing good software, and avoiding malware, is outside the scope of securing inter-app communication; secure app installation is best handled by the process that actually installs the software, as the Google Play Store and F-Droid do.
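As a rough sketch of the TOFU/POP idea (plain Java, with the Android plumbing omitted and all names hypothetical), an app could store a digest of another app’s signing certificate on first contact and require it to match on every later interaction:

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Minimal TOFU/POP pin store: remembers a digest of each app's signing
// certificate the first time it is seen, and rejects mismatches afterwards.
public class AppPinStore {
    private final Map<String, byte[]> pins = new HashMap<>();

    private static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    // First use: store the digest (trust on first use) and accept.
    // Later uses: accept only if the digest matches the stored pin.
    public boolean checkOrPin(String packageName, byte[] signingCert) throws Exception {
        byte[] digest = sha256(signingCert);
        byte[] pinned = pins.get(packageName);
        if (pinned == null) {
            pins.put(packageName, digest); // establish the "pseudonym"
            return true;
        }
        // MessageDigest.isEqual is a constant-time comparison
        return MessageDigest.isEqual(pinned, digest);
    }
}
```

On Android the certificate bytes would come from PackageManager; the point of the sketch is only the first-use-pin-then-compare logic.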
To build on the InformaCam example, in order to set up a trusted data flow between InformaCam and the reporting app, custom checks must be implemented on both the sender and the receiver. The sender, InformaCam, should be able to send to any app, but it should then remember the app that it is configured to send to and make sure it is really only sending to that app. It would then use TOFU/POP with the app hash as the data point. The receiver, the reporting app, should only accept incoming data from apps that it trusts. The receiver then includes a pin for the signing key, or, if the app is being deployed to unpatched devices, the pin can be based on the hash to work around master key exploits. From there on out, the receiving app checks against the stored app hashes or signing keys. For less security-sensitive situations, the receiver can rely on TOFU/POP the first time that an app sends media.
There are various versions of these ideas floating around in various apps, and we have some in the works. We are working now to hammer out which of these ideas are the most useful, then we will be focusing our development efforts there. We would love to hear about any related effort or libraries that are out there. And we are also interested to hear about entirely different approaches than what has been outlined here.
Note: A big discussion topic of 2013 was about how hard cryptography and security is for average people, journalists and others. With that in mind, we’d like to sub-title this post “Making Mobile Crypto Easy for Eyewitnesses”, as the InformaCam software and process described below includes the full gamut of security and cryptography tools all behind a streamlined, and even attractive application user experience we are quite proud of….
One of the primary goals of the InformaCam project (now in public beta!) is to create an environment where, when it comes to photos and video captured on smartphones, people and organizations can trust what they see. Faked photos and videos, whether intended to be humorous or malicious, are all too common online, especially in times of crisis. Thus, the software that has been developed works to ensure that the full, complete, original photo or video captured of an event can safely reach the people who need to see it, without first being filtered, modified, cropped, trimmed or otherwise manipulated.
There are four ways this is achieved:
- At point of capture, secure storage and analysis of the media file itself to begin a chain of custody, create a means of verifying media pixel values directly, and defend against tampering by malicious apps.
- Gather corroborating metadata points using the device’s built-in sensors to establish an environmental context.
- Use a secure method of transmission to a secure repository to continue chain of custody, and to defend against network surveillance, intrusion and filtering.
- Provide a means, using open tools, to verify media was not tampered with and to view and analyze corroborating metadata.
Let’s dig deeper into each of these links of the verification chain.
Secure Storage and Analysis
When InformaCam is activated, it begins to actively monitor the device for any new photos or videos captured by the built-in camera software. InformaCam does not support importing already-captured photos or videos; it must actively detect that a new photo or video has been captured by the active camera software on the device. As soon as it detects a new capture, it begins the following ingest process:
- Import the media file into an encrypted storage system, on the device, but only accessible by the InformaCam app. This ensures the file is not modified by any other application on the device.
- Generate and securely store a cryptographic hash value, or checksum, of the pixels of the media file, either the single photo or collectively for all the frames of the video. Any change to the pixels of the media files (“photoshopping”, removal of frames, editing, or other modifications) would result in a change to the hash value.
- Delete the source photo or video from its original location on the device’s shared storage, to keep it hidden from plain view in high-risk situations. Since the file has been imported into encrypted storage, this copy is no longer needed and is ultimately not trustworthy.
With this three step process we have, as near as possible to the time and place of capture, ensured we have the media file in a secure storage location, and have generated a unique hash value to verify the file against later.
The hash value, which is just a short series of hexadecimal characters, can also be immediately shared to a third-party using email, text messaging, Twitter or other public notary system. The sooner it can be in the “public record” the better, to establish that the media file existed in this exact state at this time. This concept of a notary is important, and one we seek to develop more, to ensure the notary is also a trusted, tamper-proof service.
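The pixel hash described above can be sketched in plain Java (the class and method names here are ours for illustration; InformaCam’s actual implementation may differ):

```java
import java.security.MessageDigest;

// Sketch of the device-side media hash: a SHA-256 digest of the raw
// media bytes (e.g. the pixel values of a photo), rendered as lowercase
// hex so it can be shared via SMS, email, Twitter, or another notary.
public class MediaHash {
    public static String sha256Hex(byte[] mediaBytes) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(mediaBytes);
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

Any change to the input bytes, however small, produces a completely different digest, which is what makes the later server-side re-computation a meaningful check.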
Corroborating Sensor Metadata
Secure Repository Submission
When the owner of the device running InformaCam with the media file on it decides to share it with an organization for verification and use, they can send it using InformaCam’s built-in Secure Share mechanism. This enables the media file and embedded metadata to be directly sent to an InformaCam Repository over a secure connection. While the connection uses the public internet, it is sent directly between the device and the repository inside of a secure, tamper proof tunnel powered by software known as Tor. This connection is configured using an InformaCam Trusted Definition configuration file which contains the necessary network addresses and credentials.
The secure repository is expected to be run on a Linux server that is properly secured with strong access controls, firewalls, encrypted disk storage, and all other available mechanisms well known for securing desktop or server systems. It should not be placed on the public Internet, but only exposed through the Tor network connection. It should be hosted in a location that can be physically secured by the organization, as much as possible, and that could not be accessed without the organization being aware. This means that third party data centers should not be used, as access to these machines by law enforcement or malicious hackers can be accomplished without notice to the customers.
However, as long as the media hash value itself is maintained in a secure manner, possibly even printed out and stored in an offline physically secure system, the state of the media file itself can be easily verified using common tools.
Open Verification and Analysis Tools
Once the media and metadata have been received in the secure repository, the organization managing it can use the InformaCam Analyzer and Dashboard software to process and verify the media file. All of the steps below are done automatically by the software, but can also be done manually by a competent, trained technician. These are the steps taken:
- Export the J3M corroborating metadata from the media file. It will be encrypted to the organization’s public cryptographic key, so it will first need to be decrypted, and also the signature of the data verified against the sender’s public key, which the organization previously obtained. This step is accomplished using the free and open-source GnuPG software tools.
- Run the media verification process on the photo or video file. This is accomplished using a tool in the InformaCam Analyzer software, which also includes the free and open-source FFmpeg media engine software. The cryptographic hash function is run again, this time on the server side and not on the device, and the resulting hash value from the pixel values is displayed. This must match the hash value generated on the device, which should have been shared via private or public notary (SMS, email, Twitter, etc.), and is also stored in the J3M metadata obtained in step #1.
- View the J3M metadata directly or import into the InformaCam Dashboard system for verification. The metadata will include information such as GPS location, cellular network location, nearby bluetooth and wifi devices, compass headings, altitude, temperature and more. This data can be used to match against the time and place the media claims to be from.
Four Ways, In Summary
Through the four ways described above, the InformaCam system works to capture and safeguard both media and metadata at all points along the way, between the device and the repository. Cryptographic functions and features provide much of the power behind this, but relying on mathematics alone does not tell the whole story. By combining the corroborating metadata and open tools for analysis, we ensure that the context of the photo or video, and the means to verify the entire package, are also readily available as part of the verification process.
Over the past couple of years, Android has included a central database for managing information about people, known as the
ContactsContract (that’s a mouthful). Android then provides the People app and reusable interface chunks to choose contacts that work with all the information in the
ContactsContract database. Any time that you are adding an account in the Settings app, you are setting up this integration. You can see it with Google services, Skype, Facebook, and many more. This system has a lot of advantages, including:
- a unified user experience for finding and managing data about people
- apps can launch common interface dialogs and screens for working with that database without having to write custom versions (launching them via Intents)
- streamlined methods for building custom UIs based on the contacts database
With our work porting GnuPG to Android, we want Gnu Privacy Guard for Android to be fully integrated into the Android experience. Gnu Privacy Guard registers itself as a handler for all OpenPGP file and data types in Android, so users can work with these files using standard Android methods like Share/Send buttons. Or users can start by finding the person to encrypt to in the People app, then choosing the file. These flows make it intuitive to Android users, and means we have to write less code because it taps into existing Android systems. With the past release, v0.2, we laid the foundations for having the GnuPG keyring integrated into this contacts database. The next release, v0.3 will improve contacts integration a lot.
One of the concerns that has been voiced about integrating with the
ContactsContract database is that all the data put there will then be uploaded to other accounts, such as the phone’s Google account. As far as we can tell, there is no automatic syncing of data between accounts in the
ContactsContract; instead, it is a system of individual, local databases. We have not confirmed with a code audit whether there is any data leakage from
ContactsContract, and would love to hear more information on that. There is a layer of matching rules for locally merging those local databases into a single, unified view of that data. A good example of this unified data view in action is the built-in People app. It will show data from all of the local databases, and it will link profiles together in a single view based on programmatic rules that look at email addresses, names, etc. In any case, Gnu Privacy Guard only syncs one way. It treats the GnuPG keyring as canonical and clones the GnuPG keyring contacts to the
ContactsContract whenever a sync is run. The sync process never reads from the
ContactsContract, and currently no data is ever imported from it. So at the very least, the ContactsContract should not serve as a point to inject data into the GnuPG keyring.
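The one-way behavior described above can be sketched as a toy model. Everything below (the `Keyring` and `ContactsStore` classes, the `sync` function) is invented for illustration; it is not the actual Gnu Privacy Guard code, just a model of the "keyring is canonical, contacts are a clone" rule:

```python
# Toy model of the one-way sync described above: the GnuPG keyring is
# canonical, and the contacts store is overwritten from it on every sync.
# All names here are invented for the example.

class Keyring:
    def __init__(self):
        self.keys = {}  # email -> fingerprint

    def add(self, email, fingerprint):
        self.keys[email] = fingerprint

class ContactsStore:
    def __init__(self):
        self.entries = {}

def sync(keyring, contacts):
    """Clone keyring data into the contacts store; never read from it."""
    contacts.entries = dict(keyring.keys)

keyring = Keyring()
keyring.add("alice@example.com", "9F0FE587374BBE81")

contacts = ContactsStore()
# Data injected into the contacts store by some other app...
contacts.entries["mallory@example.com"] = "DEADBEEFDEADBEEF"

sync(keyring, contacts)
# After a sync, only keyring data remains: injected entries never flow
# back into the keyring, and are overwritten in the contacts view.
print(contacts.entries)
print(keyring.keys)
```

The point of the model is the direction of the arrow: `sync` writes into the contacts store and never reads from it, so nothing another app plants there can reach the keyring.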
One unexplored idea is for apps that need crypto to use only the standard Android contacts API to fetch crypto identity information like public keys and fingerprints. For example, the PGP-capable email app K-9 could look up OpenPGP info at the same time it is looking in the contacts database for email addresses. It probably even makes sense for K-9 to offload more to an OpenPGP provider, and have K-9 just query that provider for whether a signing key is available, whether the recipient has a PGP key, and so on.
It is also tempting to think about using a similar technique for storing other types of keys like OTR keys for secure chat. The hard part is that OTR has no method built-in to the key for verifying whether that key is trusted. OpenPGP has key signing and the Web-of-Trust, with all of its issues, but the OpenPGP security model is designed around untrusted methods of moving public key data around. Using the contacts database for moving around public key material for later verification will work equally well for OTR, OpenPGP, etc.
On a similar note, we are also working with Dominik Schürmann and the K-9 devs to create a common Android API for a generic OpenPGP provider. This is similar to the contacts system in recent versions of Android in that there is a single, central contacts system that any app can tap into for managing data related to people.
We have decided to go with Dominik Schürmann’s approach of using an AIDL API to an Android Service. AIDL does have some downsides, mostly around being overcomplicated, but it is the main Android method for inter-process communication with Services, so we are stuck with it, more or less. The beautiful thing is that this arrangement will make it possible for apps to fully offload crypto handling to the Service, including all the required GUI bits like passphrase prompting, progress dialog overlays, key selection, etc.
As an example of how this idea would work, we can look at K-9 email again. If an incoming email includes a public key or fingerprint, either can be sent to the OpenPGP provider for importing. An OPENPGP4FPR: URI will trigger downloading the public key from a keyserver. A public key contained in an attached file will be received by the OpenPGP provider via the Android file associations, which then prompts the user to import it. When K-9 goes to send an OpenPGP-encrypted email to that new key, it checks the ContactsContract to see whether the recipient has an OpenPGP key. If so, it sends the email to the OpenPGP provider to be encrypted. The OpenPGP provider can then look up which key to use in its local keyring by using the recipient’s email address. If there are multiple keys for that email address, it prompts the user to choose. It could also base its choice on the OpenPGP trust level for that key.
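The key-selection step just described could look something like the sketch below. The function names and data structures are hypothetical, not the actual provider API; the sketch only shows the decision logic:

```python
# Hypothetical sketch of an OpenPGP provider choosing an encryption key
# by recipient email address, as described above.

def find_keys(keyring, email):
    """Return all keys in the local keyring matching an email address."""
    return [k for k in keyring if k["email"] == email]

def choose_key(keyring, email, prompt_user):
    matches = find_keys(keyring, email)
    if not matches:
        return None               # no key: caller falls back or errors out
    if len(matches) == 1:
        return matches[0]         # unambiguous: encrypt without prompting
    return prompt_user(matches)   # several keys: ask the user (or use trust level)

keyring = [
    {"email": "alice@example.com", "fingerprint": "AAAA", "trust": "full"},
    {"email": "alice@example.com", "fingerprint": "BBBB", "trust": "unknown"},
]

# A prompt that prefers the most trusted key, standing in for a user dialog.
pick_trusted = lambda ks: max(ks, key=lambda k: k["trust"] == "full")
print(choose_key(keyring, "alice@example.com", pick_trusted)["fingerprint"])
```

The design choice worth noting is that the prompt is only reached when the email address is ambiguous; the common single-key case never interrupts the user.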
These are currently all ideas for how GnuPG can be integrated into Android. Some of these are implemented and ready for you to try out on your device. The common OpenPGP provider idea is still very much a work in progress.
For the past two years, we have been thinking about how to make it easier for anyone to achieve private communications. One particular focus has been the “security tokens” that are required to make private communications systems work; internally, we call this research area Portable Shared Security Tokens, aka PSST. All of the privacy tools we are working on require “keys” and “signatures”, to use the language of cryptography, and these are the core of what “security tokens” are. One thing we learned a lot about is how to portray and discuss tools for private or anonymous communications with people who just want to communicate and are not interested in technical discussion. This is becoming a central issue among a lot of people working to make usable privacy tools.
The widely established way of talking about privacy tools comes from the lingo of the underlying methods: cryptography, networking, etc. We talk about public and private keys, signing, validation, verification, key exchange, certificates, and fingerprints. In order for cryptography to work, keys need to be marked as verified or not. Few computer users understand what these terms refer to; even highly technical people who regularly use encryption do not know the meaning of all these things, nor should they. This is a low-level detail that is not important to how the vast majority of users understand privacy in computers. Keys and verification are far too abstract to be generally understandable, and what other kind of key has a fingerprint? Even more so, few people can tell you the difference between validation and verification when it comes to keys, signatures, and certificates. The software should not be exposing all this, but instead should minimize the complexity as much as possible and provide as simple a user experience as possible.
Defining the Concepts that Define the Experience
A key part of defining that simple user experience is defining the core concepts the software is organized around. In our discussions, we mostly talked about the ideas of identity and trust, though some discussion of verifying identity seemed unavoidable. Talking about identity and trust is a lot more relevant to day-to-day life, i.e. knowing that a message came from the person you think it did, and trusting that it was private. It is most direct to talk about establishing a trusted connection to another person, but that is not something crypto can ever promise, because there is still the analog gap between the person and the device. These core ideas must represent what is technically possible, so we searched for widely understood concepts that map well to the technical limitations: “a private conversation”, “a trusted app”, “verifiable video”.

Diving in deeper, we concluded that the balance point between technical accuracy and widely understandable lingo is to talk about trusting the device, not the person. The technology can provide trusted connections between devices, and that is pretty close to how people experience digital communications. There is the laptop, the mobile phone, the net cafe, the friend’s computer, the computer at work, and so on. When I look at my phone to see a message from a friend, it is easy to picture that friend typing the message out on their device, though it does take some conscious effort. The hard part is that as we communicate more and more with our devices, there is less and less separation in our minds between talking in person, via voice, or by sending text. This is a point to focus on when designing the experience of private, secure communications software.
Let the Software Handle It!
There is a forming consensus in the world of usable security: focus on automating as much as possible, then figure out how best to tailor the experience of the essential parts that cannot be automated. The hard part will remain explaining the limitations of a given privacy tool.
At Guardian Project, we work a lot on incremental progress, so many of our projects are focused on specific, narrow improvements. With ChatSecure and KeySync, we were able to automate one small part of the whole process, cryptographic identity portability, which provides the foundation for private communications and verifiable media. Allowing users to sync their trust profiles between desktop and mobile makes it much more likely that users will have fully verified OTR conversations when chatting on their devices and laptops.
With Gnu Privacy Guard for Android (GPGA), we have made it easy to import keys via QR code as well as openpgp4fpr: URLs (a standard defined in conjunction with the Monkeysphere project). We are also working on a common method of using NFC for OpenPGP key signing in conjunction with OpenPGP Keychain. Even little things like optimizing support for standard file extensions can go a long way toward making things easier, so GPGA automatically sets itself up to receive files with the standard OpenPGP MIME types (application/pgp-signature) as well as the corresponding file extensions (.asc, etc.). That way a user can just click on one of these files, and GPGA will walk them through the whole process, doing as much as possible automatically.
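On Android, this kind of registration is declared with intent filters in the app manifest. The snippet below is a simplified sketch of what such a declaration might look like; the activity name and the exact set of filters are illustrative, not copied from the GPGA source:

```xml
<!-- Illustrative sketch: register an activity for OpenPGP MIME types
     and the .asc file extension so "open with" offers this app. -->
<activity android:name=".ImportFileActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="application/pgp-signature" />
        <data android:mimeType="application/pgp-keys" />
        <data android:mimeType="application/pgp-encrypted" />
    </intent-filter>
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:scheme="file" android:pathPattern=".*\\.asc" />
    </intent-filter>
</activity>
```

Once filters like these are in place, Android routes matching files from email attachments, file managers, and the Downloads view to the app automatically.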
Another interesting idea that is a big step in this direction is “secure introductions”. The idea is to automatically share trusted identity information when securely communicating with multiple people. For example, whenever you send a signed, encrypted email to multiple people, the email program should include the key fingerprints of each recipient in that email. Then the email program of each person receiving that email should automatically mark those keys as verified if the sender’s key is trusted and the signature is valid. There is not a meaningful amount of detail leaked in this interaction, since the existence of all the recipients’ keys and email addresses is already present in a secure email. The tricky part is figuring out how to make it harder for someone to use this maliciously to spread false identity information while keeping things as automatic as possible. This is very much a long-term research idea: there are no widespread implementations of it.
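The receiving side's rule can be sketched in a few lines. This is an invented illustration of the idea, not an implementation from any mail client; the function and the keyring shape are assumptions for the example:

```python
# Toy sketch of the "secure introduction" rule described above:
# fingerprints embedded in a signed email are marked verified only if
# the sender's own key is already trusted and the signature checks out.

def apply_introductions(keyring, sender, signature_valid, introduced_fprs):
    """keyring maps fingerprint -> {"verified": bool}; sender is a fingerprint."""
    if not signature_valid:
        return                  # tampered or unsigned: ignore everything
    if not keyring.get(sender, {}).get("verified"):
        return                  # an untrusted sender cannot introduce anyone
    for fpr in introduced_fprs:
        keyring.setdefault(fpr, {})["verified"] = True

keyring = {"AAAA": {"verified": True}, "BBBB": {"verified": False}}

# A trusted sender introduces CCCC; an untrusted one tries to introduce DDDD.
apply_introductions(keyring, "AAAA", True, ["CCCC"])
apply_introductions(keyring, "BBBB", True, ["DDDD"])

print(keyring.get("CCCC"))  # marked verified
print(keyring.get("DDDD"))  # still absent: introduction rejected
```

The guard clauses are the whole point: trust only propagates outward from keys that are already verified, which is what limits how far a malicious introduction can spread.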
(Note: Originally this post had a title claiming 300 Million WeChat users… that would have included iOS and Android, and we don’t know if the WeChat iOS app also includes SQLCipher encryption or not. That said, there are 50-100M Google Play downloads of WeChat for Android, which does not include all of the users inside China)
Through some of our own recent sleuthing, Citizen Lab’s research into “Asia Chats” security, and now this detailed look at WeChat security from Emaze.com, it has come to light that WeChat for Android uses SQLCipher for local data encryption. We co-developed SQLCipher for Android with Zetetic, and have been working to promote its adoption among Android developers who need to protect data stored locally on a device. While many people would point to Android’s Full Disk Encryption feature as a solution, only a small percentage of users ever enable it, and even then, once a device is unlocked, all data is accessible to someone looking to extract it. With SQLCipher, the application can ensure its own data is encrypted, so that once the app is closed, the data is secured.
Now, as with most things WeChat, the actual implementation of SQLCipher is not ideal, using a short key generated in part from the device’s ID and some sort of server-provided token. Still, at least they tried, and SQLCipher is evidently considered stable enough to be used by the over 300 million WeChat users around the world. Who knows, maybe the devs are on our developer list or the SQLCipher list, and we can help them improve their implementation using CacheWord!
The biggest irony is that I gave a lightning talk at Google IO 2013 highlighting my concern about the rapid growth of WeChat, and its parent company’s and country’s poor record on human rights, free speech, and generally defending their users. With its growth beyond the borders of China, WeChat is the first major mobile service from behind the Great Firewall to be exported and adopted by non-Chinese users.
My part starts at about 17:00 in, and runs for about 5 minutes…
So, for now, I raise a toast to the Android developers at Tencent/WeChat, who at least took a shot at providing local message encryption in their app. May they continue to endeavor to defend their users’ privacy and security as best they can, considering their circumstances.
More from the emaze-ing post below…
WeChat locally stores application data in an encrypted SQLite database named “EnMicroMsg.db”. This database is located in the “MicroMsg” subfolder inside the application’s data directory (typically something

The database is encrypted using SQLCipher, an open source extension for SQLite that provides full database encryption. The encryption password is derived from the “uin” parameter (see previous sections) combined with the device identifier through a custom function. More precisely, the key generation function leverages the mangle() function shown in the previous Python snippet. The actual database encryption key can be generated through the following pseudo-code:
password = mangle(deviceid + uin)[:7]
Here deviceid is the value returned by the Android API function TelephonyManager.getDeviceId(). What follows is a sample SQLCipher console session that demonstrates how the EnMicroMsg.db database can be decrypted.
$ sqlcipher EnMicroMsg.db
sqlite> PRAGMA key = 'b60c8e4';
sqlite> PRAGMA cipher_use_hmac = OFF;
CREATE TABLE conversation (unReadCount INTEGER, status INT, …
CREATE TABLE bottleconversation (unReadCount INTEGER, status INT, …
CREATE TABLE tcontact (username text PRIMARY KEY, extupdateseq long, …
It is also worth pointing out that, as the key generation algorithm truncates the password to 7 hex characters, it would not be so difficult for motivated attackers who are able to get the encrypted database to brute force the key, even without knowing the uin or the device identifier.
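The quoted post's point about the truncated key is easy to quantify: 7 hex characters give only 16^7 possible passwords. The snippet below only counts the keyspace and estimates search time under an assumed guess rate; it does not reproduce the mangle() function or attack a real database:

```python
# The truncated 7-hex-character password described above gives a tiny
# keyspace compared to even a modest random passphrase.
keyspace = 16 ** 7          # 7 hex characters: 268,435,456 candidates
print(keyspace)

# At an assumed (conservative) 100,000 key-derivation attempts per
# second, an exhaustive search of the whole keyspace finishes in about:
seconds = keyspace / 100_000
print(round(seconds / 3600, 1), "hours")   # well under a day on one machine
```

In other words, even without the uin or device ID, the password itself is small enough to enumerate directly, which is exactly the weakness the excerpt describes.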
Now that you can have a full GnuPG on your Android device with Gnu Privacy Guard for Android, the next step is getting the keys you need onto your device and into Gnu Privacy Guard. We have tried to make this as easy as possible without compromising privacy, and have implemented a few approaches while working on others. There are a few ways to get this done right now.
Gnu Privacy Guard registers itself with Android as a handler for all the standard OpenPGP MIME types (application/pgp-signature), as well as all of the OpenPGP and GnuPG file extensions (.bin). This means that users just have to share a file to Gnu Privacy Guard using any of the standard Android methods: these files can be launched from an email attachment, opened from the SD card using a file browser, clicked in the Downloads view, etc.
So if you want to quickly send your whole public keyring from your laptop to your mobile device, you can just grab the database file directly from GnuPG and copy it to your SD card. Here is how:
- plug your device into your laptop via USB so you can copy files to the SD card
- find your GnuPG home folder (on GNU/Linux and Mac OS X, your public keyring will be in ~/.gnupg/pubring.gpg; on Windows it is
- in your GnuPG home folder, copy pubring.gpg to your device’s SD card
- unmount and unplug your device
- on your device, open your favorite file manager app (OI File Manager, Astro, etc)
- go to the SD card
- long-click on pubring.gpg and share it to Gnu Privacy Guard
- click OK on the Import Keys dialog
After that, Gnu Privacy Guard will do the rest. Give it some time to sync to the Contacts database, and then you’ll see that all of your keys from your desktop are now in your People app and listed in Gnu Privacy Guard itself. You can now encrypt files to any of those keys, or verify files signed by any of those keys. Here are a couple of screenshots to illustrate key points in the process, using OI File Manager:
There are many ways to get the keyring files like pubring.gpg to your device: you can also share the keyring files via email, chat, or even services like Dropbox or Google Drive. Then once the files are on your device, you can import them using the same procedure as above. But keep in mind that you are sending your whole collection of secure contacts to that service, which will have full access to read it. If you have any worries about leaking your keyring to anyone, then a good method is to copy it directly to the SD card.
You can also search and download keys via the public pool of OpenPGP keyservers. If you already know someone’s keyid or fingerprint, you can search using that. Otherwise, you can search based on name or email address. But be careful! Downloading a key from a keyserver does not give you a key you can trust. Anyone can upload a key to the keyservers, and they can make that key have any name or email address. Downloading from the keyservers is a convenient way to download a key, but you must verify the key’s fingerprint with the person you are trying to find. In conjunction with the Monkeysphere project, we developed a standard URI scheme for sending OpenPGP key fingerprints. For example, you can find my key ID here:
openpgp4fpr:9F0FE587374BBE81. This provides a clickable way to get an OpenPGP key. On an Android device with Gnu Privacy Guard installed, you can click on this link to download my key from the keyservers. This URI scheme also works well in QR Codes. Scan this QR Code on your device with an app like Barcode Scanner, and click Open Browser, and Gnu Privacy Guard will download my key to your device.
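A handler for these URIs only needs to strip the scheme and sanity-check the hex before heading to the keyservers. Here is a minimal sketch (an illustration, not the GPGA implementation):

```python
# Minimal sketch of parsing an openpgp4fpr: URI into a key ID or
# fingerprint, as described above. A real handler would then fetch the
# key from a keyserver and ask the user to verify the fingerprint.

def parse_openpgp4fpr(uri):
    scheme, sep, rest = uri.partition(":")
    if scheme.lower() != "openpgp4fpr" or not sep:
        raise ValueError("not an openpgp4fpr URI")
    fpr = rest.strip().upper()
    if not fpr or any(c not in "0123456789ABCDEF" for c in fpr):
        raise ValueError("fingerprint is not hexadecimal")
    return fpr

print(parse_openpgp4fpr("openpgp4fpr:9F0FE587374BBE81"))
```

Because the scheme carries only a fingerprint, not the key itself, the URI stays short enough to fit comfortably in a QR code, which is exactly why it pairs well with scanning apps.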
There are other ideas out there that we also want to support. For example, OpenPGP Keychain includes a way to transmit the whole public key via NFC. This lets people swap keys directly from phone to phone without having internet access at all. But NFC is quite slow at transmitting data, so the devices need to be held together for a while until the whole key is received. NFC could instead be used to rapidly transmit an openpgp4fpr: URI, and then the whole public key would be fetched from a keyserver, but that requires internet access and also leaks a bit of metadata to the internet. A better technique would be to transmit the entire public key over Bluetooth, using NFC to set up the Bluetooth session. We’re also looking at ways to do this via WiFi and Bonjour (mDNS) local service advertisements.
Ostel.co began as an R&D effort sponsored by The Guardian Project. The question: is a peer-to-peer secure voice and video call network possible to build with open Internet standards and open source software? Two years and tens of thousands of users later, the answer is a resounding YES!
The Guardian Project will continue to support the Open Secure Telephony Network (OSTN). This open source project aims to make it as simple as possible for anyone to stand up their own secure VoIP backend with a custom domain. OSTN provides a best-practices guide for building your own application stack, and a federated network of VoIP services. The more operators who host their own domain, the larger the global federated infrastructure becomes, freeing users from carrier control and ensuring call security. There are also ongoing automation projects to make hosting your own domain easier: for example, Docker repositories, Chef cookbooks, and even a guide for the Raspberry Pi!
If you would like to get started with free calls over ostel.co, register for an account and use any of the supported client applications. If you would like support building your own secure VoIP backend, check out the docs, hang out in the #guardianproject IRC channel, and email email@example.com. We look forward to growing the network!
In September, I was pleased to present a talk on the importance of making cryptography and privacy technology accessible to the masses at TED’s Montréal event. In my 16-minute talk, I discussed threats to Internet freedom and privacy, political perspectives, as well as the role open technologies such as Cryptocat can play in this field.
Independent Cryptocat server operators:
We’re issuing a mandatory update for Cryptocat server configuration. Specifically, the ejabberd XMPP server configuration must be updated to include support for mod_ping.
We’re doing this to give upcoming Cryptocat versions better connection handling and to introduce a new auto-reconnect feature! Cryptocat versions 2.1.14 and above will not connect to servers without this configuration update. Cryptocat 2.1.14 is expected to be released within the coming weeks.
This morning we began pushing Cryptocat 2.1.13, a big update, to all Cryptocat-compatible platforms (Chrome, Safari, Firefox and OS X). This update brings many new features and improvements, as well as some small security fixes. The full change log is available in our code repository, but we’ll also list the big new things below. The update is still being pushed, so it may take around 24 hours to become available in your area.
First things first: encrypted group chat in Cryptocat 2.1.13 is not backwards compatible with any prior version. Encrypted file sharing and private one-on-one chat will still work, but we still strongly recommend that you update and also remind your friends to update as well. Also, the block feature has been changed to an ignore feature — you can still ignore group chat messages from others, but you cannot block them from receiving your own.
New feature: Authenticate with secret questions!
An awesome new feature we’re proud to introduce is secret question authentication, via the SMP protocol. Now, if you are unable to authenticate your friend’s identity using fingerprints, you can simply ask them a question to which only they would know the answer. They will be prompted to answer; if the answers match, a cryptographic process known as SMP will ensure that your friend is properly authenticated. We hope this new feature will make it easier to authenticate your friends’ identities, which can be time-consuming when you’re in a group conversation with five or more friends. This feature was designed and implemented by Arlo Breault and Nadim Kobeissi.
New Feature: Message previews
Another exciting new feature is message previews: Messages from buddies you’re not currently chatting with will appear in a small blue bubble, allowing you to quickly preview messages you’re receiving from various parties, without switching conversations. This feature was designed by Meghana Khandekar at the Cryptocat Hackathon and implemented by Nadim Kobeissi.
We’ve addressed a few security issues. The first is a recurring issue where Cryptocat users could send group chat messages to only some participants of a group chat and not to others. This issue had popped up before, and we hope we won’t have to address it again; in a group chat scenario, it turns out that resolving this kind of situation is more difficult than previously thought.
The second issue relates to private chat accepting unencrypted messages from non-Cryptocat clients. We’ve chosen to make Cryptocat refuse to display any unencrypted messages it receives, and to drop them instead.
Finally, we’ve added better warnings. In case of suspicious cryptographic activity (such as bad message authentication codes or reuse of initialization vectors), Cryptocat will display a general warning regarding the offending user.
More improvements and fixes
This is a really big update, and there are many more improvements and small bug fixes spread all around Cryptocat. We’ve fixed an issue that prevented Windows users from sending encrypted ZIP file transfers, made logout messages more reliable, added timestamps to join/part messages, and made Cryptocat for Firefox a lot snappier. These are only a handful of the many small improvements and fixes in Cryptocat 2.1.13.
We hope you enjoy it! It should be available as an update for your area within the next 24 hours.
We’re excited to announce the new Cryptocat Encrypted Chat Mini Guide! This printable, single-page two-sided PDF lets you print out, cut up and staple together a small guide you can use to introduce friends, colleagues and anyone else to the differences between regular instant messaging and encrypted chat, how Cryptocat works, why fingerprints are important, and Cryptocat’s current limitations. Download the PDF and print your own!
The goal of the Cryptocat Mini Guide is to quickly explain to anyone how Cryptocat is different, focusing on an easy-to-understand cartoon approach while also communicating important information such as warnings and fingerprint authentication.
Special thanks go to Cryptocat’s Associate Swag Coordinator, Ingrid Burrington, for designing the guide and getting it done. The Cryptocat Mini Guide was one of the many initiatives that started at last month’s hackathon, and we’re very excited to see volunteers come up with fruitful initiatives. You’ll be seeing this guide distributed at conferences and other events where Cryptocat is present. And don’t forget to print your own — we even put dashed lines where you’re supposed to cut with scissors.
Open Source Veteran Bdale Garbee Joins FreedomBox Foundation Board
NEW YORK, March 10, 2011-- The FreedomBox Foundation, based here, today announced that Bdale Garbee has agreed to join the Foundation's board of directors and chair its technical advisory committee. In that role, he will coordinate development of the FreedomBox and its software.
Garbee is a longtime leader and developer in the free software community. He serves as Chief Technologist for Open Source and Linux at Hewlett Packard, is chairman of the Debian Technical Committee, and is President of Software in the Public Interest, the non-profit organization that provides fiscal sponsorship for the Debian GNU/Linux distribution and other projects. In 2002, he served as Debian Project Leader.
"Bdale has excelled as a developer and leader in the free software community. He is exactly the right person to guide the technical architecture of the FreedomBox," said Eben Moglen, director of the FreedomBox Foundation.
"I'm excited to work on this project with such an enthusiastic community," said Garbee. "In the long-term, this may prove to be most important thing I'm doing right now."
The Foundation's formation was announced in Brussels on February 4, and it is actively seeking funds; it recently raised more than $80,000 in less than fifteen days on Kickstarter.
About the FreedomBox Foundation
The FreedomBox project is a free software effort that will distribute computers that allow users to seize control of their privacy, anonymity and security in the face of government censorship, commercial tracking, and intrusive internet service providers.
Eben Moglen is Professor of Law at Columbia University Law School and the Founding Director of the FreedomBox Foundation, a new non-profit incorporated in Delaware. It is in the process of applying for 501(c)(3) status. Its mission is to support the creation and worldwide distribution of FreedomBoxes.
For further information, contact Ian Sullivan at firstname.lastname@example.org or see http://freedomboxfoundation.org.
Cryptocat’s first ever hackathon event was a great success. With the collaboration of OpenITP and the New America NYC office, we were able to bring together dozens of individuals, including programmers, designers, technologists, journalists, and privacy enthusiasts from around the world, to share a weekend of discussions, workshops, and straight old-fashioned Cryptocat hacking in New York City.
During this weekend, we organized a coding track, led by myself, Nadim, as well as a journalist security track that was led by Carol Waters of Internews, with the participation of the Guardian Project. The coding track brought together volunteer programmers, documentation writers and user interface designers in order to work on various open issues as well as suggest new features, discover and fix bugs, and contribute to making our documentation more readable.
Many people showed up, with many great initiatives and ideas. Off the top of my head, I remember Meghana Khandekar, of the New York School of Visual Arts, who contributed ideas for user interface improvements. Steve Thomas and Joseph Bonneau helped with discovering, addressing, and discussing encryption-related bugs and improvements. Griffin Boyce, from the Open Technology Institute, helped organize the hackathon and contributed the first working build of Cryptocat for newer Opera browsers. Ingrid Burrington worked on hand-outable Cryptocat quick-start guides. David Huerta and Christopher Casebeer further contributed some code-level and design-level usability improvements. I worked on implementing a user interface for SMP authentication in Cryptocat.
We were very excited to have a team of medical doctors and developers figuring out a Cryptocat-based app for sharing medical records while fully respecting privacy laws. The team was looking to implement a medium for comparing X-ray images over Cryptocat encrypted chat, among other medical field related features.
Update: The hackathon is over, and you can find out what happened (and see photos) at our report!
Cryptocat, in collaboration with OpenITP, will be hosting the very first Cryptocat Hackathon weekend in New York City, on the weekend of the 17th and 18th of August 2013.
Join us on August 17-18 for the Cryptocat Hackathon and help empower people worldwide by improving useful tools and discussing the future of making privacy accessible. This two day event will take place at the OpenITP offices, located on 199 Lafayette Street, Suite 3b, New York City. Please RSVP on Eventbrite or email email@example.com.
The Cryptocat Hackathon will feature two tracks to accommodate the diversity of the attendees:
Coding Track with Nadim
Join Nadim in discussing the future of Cryptocat and contributing towards our efforts for the next year. Multi-Party OTR, encrypted video chat using WebRTC, and more exciting topics await your helping hands!
Journalist Security Track with Carol and the Guardian Project
Join Carol in a hands-on workshop for journalists on how to protect your digital security and privacy in your working environment. The Guardian Project will also be swooping in to discuss mobile security, introducing tools and solutions. Carol Waters is a Program Officer with Internews’ Internet Initiatives, and focuses on digital and information security issues. The Guardian Project builds open source mobile apps to protect the privacy and security of all of mankind.
Who should attend?
Hackers, designers, journalists, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available ranging from Q&A to communications.
Day 1 (August 17)
10:00: Introduction and planning
11:00: Some hacking
1:00 – 5:00: Split into two tracks:
Coding track with Nadim
Journalist security track with Carol Waters

Day 2 (August 18)
10:00: Some hacking
1:00 – 4:00: Split into two tracks:
Coding track with Nadim
Journalist security track with Carol
4:00 – 5:00: Closing notes and roundtable
24 hours after last month’s critical vulnerability in Cryptocat hit its peak controversy point, I was scheduled to give a talk at SIGINT2013, organized in Köln by the Chaos Computer Club. After the talk, we held a 70-minute Q&A in which I answered questions even from Twitter. 70 minutes!
In the 45-minute talk, I discuss the recent bug, how we plan to deal with it, what it means, as well as Cryptocat’s overall goals and progress:
In the 70-minute Q&A that followed, I answer every question ranging from the recent bug to what my favourite TV show is:
I’m really pleased with these videos since they present a channel into how the project is dealing with security issues as well as our current position and future plans. If you’re interested in Cryptocat, they are worth watching.
Additionally, I recently gave a talk about Cryptocat at Republika in Rijeka, and will be at OHM2013 in Amsterdam as part of NoisySquare, where there will be Cryptocat talks, workshops and more. See you there!
In the unlikely event that you are using a version of Cryptocat older than 2.0.42, please update to the latest version immediately to fix a critical security bug in group chat. We recommend updating to the 2.1.* branch, which at time of writing is the latest version. We apologize unreservedly for this situation. (Post last updated Sunday July 7, 2:00PM UTC)
A few weeks ago, a volunteer named Steve Thomas pointed out a vulnerability in the way key pairs were generated for Cryptocat’s group chat. The vulnerability was quickly resolved and an update was pushed. We sincerely thank Steve for his invaluable effort.
The vulnerability meant that conversations held over Cryptocat’s group chat function between versions 2.0 and 2.0.42 (2.0.42 not included) were significantly easier to crack via brute force. This period covered approximately seven months, and group conversations held during those seven months should be considered vulnerable.
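To illustrate the general class of mistake (this is an illustrative sketch, not Cryptocat’s actual code), compare the keyspace of a secret built from random bytes with one mistakenly built from random decimal digit characters; the second is dramatically easier to brute-force:

```python
import math

# 32 random bytes give a keyspace of 2^256.
bits_from_bytes = 32 * 8

# 32 random decimal digits ('0'-'9') give only 10^32 possibilities,
# which is roughly 2^106 -- a far smaller keyspace.
bits_from_digits = math.log2(10 ** 32)

print(bits_from_bytes)          # 256
print(round(bits_from_digits))  # 106
```

Each lost bit of entropy halves the attacker’s work, so a reduction on this scale can turn an infeasible attack into a practical one.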
Once Steve reported the vulnerability, it was fixed immediately and the update was pushed. We’ve thanked Steve and added his name to our Cryptocat Bughunt page’s wall of fame.
In our update log for Cryptocat 2.0.42, we had noted that the update fixed a security bug:
- IMPORTANT: Due to changes to multiparty key generation (in order to be compatible with the upcoming mobile apps), this version of Cryptocat cannot have multiparty conversations with previous versions. However private conversations still work.
- Fixed a bug found in the encryption libraries that could partially weaken the security of multiparty Cryptocat messages. (This is Steve’s bug.)
The first item, which made some changes in how keys were generated, did break compatibility with previous versions. But contrary to what Steve wrote in his blog post on the matter, this has nothing at all to do with the vulnerability he reported, which we were able to fix without breaking compatibility.
Since Steve published his blog post, we felt it would be useful to publish an additional post clarifying the matter. While Steve’s post does indeed point to a significant vulnerability, we want to make sure it does not also lead to inaccuracies being reported.
Private chats are not affected: Private queries (1-on-1) are handled over the OTR protocol, and are therefore completely unaffected by this bug. Their security was not weakened.
Our SSL keys are safe: For some reason, there are rumors that our SSL keys were compromised. To the best of our knowledge, this is not the case. All Cryptocat data is still passed over SSL, which offers a small layer of protection that may help with this issue. Of course, that does not in any way change the fact that, due to our blunder, seven months of conversations were easier to crack. This is still a real mistake. We should also note that our SSL setup has supported forward secrecy for the past couple of weeks. We’ve rotated our SSL keys as a precaution.
One more small note: Much has been said about a line of code in our XMPP library that supposedly is a sign of bad practice — this line is not used for anything security-sensitive. It is not a security weakness. It came as part of the third-party XMPP library that Cryptocat uses.
Finally, an apology: Bad bugs happen all the time in all projects. At Cryptocat, we’ve undertaken the difficult mission of trying to bridge the gap between accessibility and security. This will never be easy. We will always make mistakes, even ten years from now. Cryptocat is not any different from any of the other notable privacy, encryption and security projects, in which vulnerabilities get pointed out on a regular basis and are fixed. Bugs will continue to happen in Cryptocat, and they will continue to happen in other projects as well. This is how open source security works. We’ve added a bigger warning to our website about Cryptocat’s experimental status.
Every time there has been a security issue with Cryptocat, we have been fully transparent, fully accountable, and have taken full responsibility for our mistakes. We will make mistakes dozens, if not hundreds, of times in the coming years, and we only ask you to be vigilant and careful. This is the process of open source security. On behalf of the Cryptocat project, team members and volunteers, I apologize unreservedly for this vulnerability, and sincerely and deeply thank Steve Thomas for pointing it out. Without him, we would have been a lot worse off, and so would our users.
We are continuing in the process of auditing all aspects of Cryptocat’s development, and we assure our users that security remains something we are constantly focused on.
Today, with Cryptocat nearing 65,000 regular users, the Cryptocat project releases “Cryptocat: Adopting Accessibility and Ease of Use as Security Properties,” a working draft which brings together the past year of Cryptocat research and development.
We document the challenges we have faced, both cryptographic and social, and the decisions we’ve taken in order to attempt to bring encrypted communications to the masses.
The full paper is available for download here from the public scientific publishing site, arXiv.
Excerpts of the introduction from our paper:
Cryptocat is a Free and Open Source Software (FOSS) browser extension that uses web technologies to provide easy-to-use, accessible, encrypted instant messaging to the general public. We aim to investigate how best to leverage the accessibility and portability offered by web technologies in order to give encrypted instant messaging a better opportunity to permeate at a social level. We have found that encrypted communications, while in many cases technically well implemented, suffer from a lack of usage because they are unappealing and inaccessible to the “average end-user”.
Our position is that accessibility and ease of use must be treated as security properties. Even if a cryptographic system is technically sound, it does not secure user privacy unless it also addresses the problem of accessibility. Our goal is to investigate the feasibility of implementing cryptographic systems in highly accessible mediums, and to address the technical and social challenges of making encrypted instant messaging accessible and portable.
In working with young and middle-aged professionals in the Middle East region, we have discovered that desktop OTR clients suffer from serious usability issues, which are sometimes exacerbated by language differences and a lack of cultural integration (the technology was frequently described as “foreign”). In one case, an activist who was fully trained to use Pidgin-OTR neglected to do so, citing usability difficulties, and as a direct consequence encountered a life-threatening situation at the hands of a national military in the Middle East and North Africa region.
These circumstances have led us to the conclusion that ease of use and accessibility must be treated as security properties, since their absence results in security compromises with consequences similar to the ones experienced due to cryptographic breaks.
A frequent question we get here at Cryptocat is: “why don’t you add a buddy lists feature so I can keep track of whether my friends are on Cryptocat?” The answer: metadata.
If you’ve been following the news at all for the past week, you’ll have heard the outrageous reports of Internet surveillance by the NSA. While those reports suggest that the NSA may not have complete access to content, they still allow the agency access to metadata. If we were talking about phone surveillance, for example, metadata would be the time you made calls, which numbers you called, how long your calls lasted, and even where you placed your calls from. This circumstantial data can be collected en masse to paint very clear surveillance pictures of individuals or groups of individuals.
At Cryptocat, we not only want to keep your chat content to yourself, but we also want to distance ourselves from your metadata. In this post we’ll describe what metadata you’re giving to Cryptocat servers, what’s done with it, and what parts of it can be seen by third parties, such as your Internet service provider. We assume we are dealing with a Cryptocat XMPP server with a default configuration, served over SSL.
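As a rough illustration (the field names here are hypothetical, not Cryptocat’s actual logging), this sketch shows the kind of connection metadata a chat server can observe even when message contents are encrypted on the client:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ObservableMetadata:
    """Metadata visible to an XMPP server even though message
    *contents* are encrypted client-side."""
    connected_at: datetime   # when you joined the server
    source_ip: str           # the address you connected from
    room_name: str           # which conversation you joined
    nickname: str            # the name you picked
    message_sizes: list = field(default_factory=list)  # size/timing patterns

record = ObservableMetadata(
    connected_at=datetime(2013, 7, 4, 12, 0),
    source_ip="198.51.100.7",   # example address from the RFC 5737 test range
    room_name="lobby",
    nickname="alice",
    message_sizes=[412, 96, 1330],
)
print(record.room_name, len(record.message_sizes))
```

Even without any plaintext, fields like these are enough to establish who talked to whom, when, and how much, which is exactly why minimizing server-side metadata matters.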
Reminder: No software is likely to be able to provide total security against state-level actors. While Cryptocat offers useful privacy, we remind our users not to trust Cryptocat, or any computer software, with extreme situations. Cryptocat is not a magic bullet and does not protect from all threats.
Who has your metadata?
OpenITP is happy to announce the hire of Nadim Kobeissi as Special Advisor, starting in June 2013. Kobeissi is best known for starting Cryptocat, one of the world's most popular encrypted chat applications.
Based in Montreal, Kobeissi specializes in cryptography, user interfaces, and application development. He has done original research on making encryption more accessible across languages and borders, and on improving the state of web cryptography. He has also led initiatives for Internet freedom and against Internet surveillance. He has a B.A. in Political Science and Philosophy from Concordia University, and is fluent in English, French, and Arabic.
As Special Advisor, Kobeissi will collaborate with OpenITP staff to improve and promote Cryptocat, advise on security and encryption matters, and organize developer meetings.
Hacking to Empower Accessible Privacy Worldwide
Join us on August 17-18 for the Cryptocat Hackathon and help empower activists worldwide by improving useful tools and discussing the future of making privacy accessible. This two day event will take place at the OpenITP offices, located on 199 Lafayette Street, Suite 3b, New York City.
Cryptocat provides the easiest, most accessible way for an individual to chat while maintaining their privacy online. It is free software that aims to provide an open, accessible instant messaging environment that encrypts conversations and works right in your browser.
Who Should Attend?
Hackers, designers, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available ranging from Q&A to communications.
To RSVP, please visit http://www.eventbrite.com/event/6904608871 or email nadim AT crypto DOT cat.
10:00: Presentation of the projects
5:00: End of day
Collateral Freedom: A Snapshot of Chinese Users Circumventing Censorship, just released today, documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship, and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long-term improvements in Internet freedom in China.
This study was conducted by David Robinson, Harlan Yu and Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund.
The report found that the circumvention tools that work best for Chinese users are technologically diverse, but are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Survey respondents rely not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.
The most widely used tool in the survey—GoAgent—runs on Google’s cloud hosting platform, which also hosts major consumer online services and provides background infrastructure for thousands of other web sites. The Great Firewall sometimes slows access to this platform, but purposely stops short of blocking the platform outright. The platform is engineered in a way that limits the regime’s ability to differentiate between the circumventing activity it would like to prohibit, and the commercial activity it would like to allow. A blanket block would be technically feasible, but economically disruptive, for the Chinese authorities. The next most widely used circumvention solutions are VPNs, both free and paid—networks using the same protocols that nearly all the Chinese offices of multinational firms rely on to connect securely to their international headquarters. Again, blocking all traffic from secure VPNs would be the logical way to make censorship effective—but it would cause significant collateral harm.
Instead, the authorities steer a middle course, sometimes choosing to disrupt VPN traffic (and commerce) in the interest of censorship, and at other times allowing VPN traffic (and circumvention) in the interest of commerce. The Chinese government is implementing policies that will improve its ability to segment circumvention-related uses of VPNs from business-related uses, including heightened registration requirements for VPN providers and users.
Respondents to the survey were categorically more likely to rely on these commercially widespread technologies and platforms than they were to use special purpose anti-censorship systems with relatively little commercial footprint, such as Freegate, Ultrasurf, Psiphon, Tor, Puff or simple web proxies. Many of the respondents have used these non-commercial tools in the past—but most have now stopped. The most successful tools today don’t make the free flow of sensitive information harder to block—they make it harder to separate from traffic that the Chinese government wishes to allow.
The report found that most users of circumvention software are in what we call the “versatility-first” group: they seek a fast and robust connection, are willing to install and configure special software, and (perhaps surprisingly) do not base their circumvention decisions on security or privacy concerns. To the extent that circumvention software developers and funders wish to help these users, the study found that they should focus on leveraging business infrastructure hosted in relatively freedom-respecting jurisdictions, because the Chinese government has greater reason to allow such infrastructure to operate.
The report provided five practical suggestions:
- Map the circumvention technologies and practices of foreign businesses in China.
- Engage with online platform providers who serve businesses in censored countries.
- Investigate the collateral freedom dynamic in other countries.
- Diversify development efforts to match the diversity of user needs.
- Make HTTPS a corporate social responsibility issue.
Techno-Activism Third Mondays (TA3M) is an informal meetup that occurs on the same date in many cities worldwide. It is designed to connect techno-activists and hacktivists who work on or with circumvention tools, and/or are interested in anti-censorship and anti-surveillance technology. Currently, TA3M meetups are held in New York, San Francisco, Amsterdam, and Madison, Wisconsin, with Boston and Seattle being planned for the near future.
For information about upcoming events, please visit: http://wiki.openitp.org/events:techno-activism_3rd_mondays
- Provide networking opportunities for people in the techno-activism and circumvention tools communities.
- Provide individuals with a space for collaborative problem solving, meeting new friends, and recruiting for projects.
- Introduce newbies to the community in order to diversify the circumvention tech community.
We are currently looking to expand the list of cities where #TA3M is hosted. If you are interested in hosting your own 3rd Mondays, contact SandraOrdonez [at] openitp [dot] org.
Want to see your blog on this planet? It's maintained by James Vasile (james AT openitp DOT org). Get in touch and let him know you want to join!
If you find Planeteria.org or the free software on which it runs useful, please help support this site.
Posts are copyright their respective authors. Click through to see each site's terms for redistribution.