Welcome to the twenty-ninth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tails 1.1 is out!

Tails, the Debian-based live system that protects its users’ communications by ensuring they are all sent through the Tor network, has been updated. This new 1.1 release reminds Tails users of the distribution’s roots in Debian: Tails is now based on the current stable version of Debian, dubbed “Wheezy”.

This means that almost all software components have been updated. One noticeable example is the desktop environment: the user experience of GNOME 3 in fallback mode should be similar to previous Tails versions, but things will look a bit different from how they used to.

One of the most keenly-awaited features of this new version is support for UEFI firmware. Mac users now only have to press the Alt key while booting their computer to start Tails from a DVD or USB stick. The same goes for owners of computers displaying “Windows 8” stickers. Speaking of Windows 8, the camouflage mode has been updated to mimic it instead of the now-discontinued Windows XP.

This new release also contains security fixes, and minor tweaks over the previous versions.

Because of the newly-introduced support for UEFI and the amount of upgraded software, incremental upgrades will not be offered for Tails 1.1. A full upgrade is needed through the Tails Installer. The safest method for upgrading Tails sticks is to go through a freshly burned DVD. Be sure to have a look at the list of known issues to learn about other oddities that might happen in the process.

PETS 2014

The fourteenth Privacy Enhancing Technologies Symposium was held in Amsterdam, Netherlands, on July 16-18, 2014. A wide range of research in privacy enhancing technologies was presented, much of it of relevance to Tor. Keynotes were given by Martin Ortlieb, Senior User Experience Researcher in Privacy at Google, and William Binney, a former NSA employee.

Some papers focusing on Tor include:

Also announced at PETS was the 2014 PET Award for Outstanding Research in Privacy Enhancing Technologies, for A Scanner Darkly: Protecting User Privacy From Perceptual Applications by Suman Jana, Arvind Narayanan, and Vitaly Shmatikov. The winner of the best student paper award at PETS was I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis by Brad Miller, Ling Huang, A. D. Joseph, and J. D. Tygar.

Prior to PETS, there was a Tor meet-up which Moritz Bartl reported as a great success. Hopefully there will also be such an event at the 2015 PETS, to be held in Philadelphia, US, in the week of June 29, 2015.

Miscellaneous news

txtorcon, the Tor control protocol implementation for the Twisted framework, received a new minor release. Version 0.10.1 fixes “a couple bugs introduced along with the endpoints feature in 0.10.0”.

Roger Dingledine posted an official reaction to the cancellation of a proposed talk at the upcoming Black Hat 2014 conference dealing with possible deanonymization attacks on Tor users and hidden services.

Tor ships with a sample webpage that can be used by exit node operators to identify their system as such to anyone wishing to identify the source of Tor traffic. Operators most often copy and adapt this template to the local situation. Mick Morgan discovered that his version was out of sync and contained broken links. “If other operators are similarly using a page based on the old template, they may wish to update”, Mick advised.

Michael Rogers, one of the developers of Briar, announced a new mailing list for discussing peer-to-peer communication systems based on Tor hidden services. As Briar and other such systems might be “running into similar issues”, a shared place to discuss them seemed worthwhile.

Karsten Loesing and Philipp Winter are looking for front-end web developers: “We are looking for somebody to fork and extend one of the two main Tor network status websites Atlas or Globe”, writes Karsten. Both websites currently need love and new maintainers. Please reach out if you want to help!

The database which holds Tor bridges, usually called BridgeDB, is able to give out bridge addresses through email. This feature was recently extended to make the email autoresponder support more bridge types, which required introducing new keywords that must be used in the initial request. Matthew Finkel is looking for feedback on the current set of commands and how they could be improved.
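For illustration, the autoresponder is driven by plain-text commands placed in the body of an email to BridgeDB. The address below is the real one; treat the exact keywords as examples, since the command set was being revised at the time:

```
To: bridges@bridges.torproject.org
Subject: (ignored)

get help
get transport obfs3
```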

Lunar wrote a detailed report on his week at the Libre Software Meeting in Montpellier, France. The report covers the booth jointly held with Nos Oignons, his talk in the security track, and several contacts made with other free software projects.

Here’s another round of mid-term reports from Google Summer of Code students: Amogh Pradeep on Orbot and Orfox improvements, Israel Leiva on the GetTor revamp, Quinn Jarrell on the pluggable transport combiner, Juha Nurmi on the ahmia.fi project, Marc Juarez on website fingerprinting defenses, and Daniel Martí on incremental updates to consensus documents.

Tim Retout announced that apt-transport-tor 0.2.1 has entered Debian unstable. This package enables APT to download Debian packages through Tor.
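In practice this works by prefixing an APT source with tor+; a minimal sketch of a sources.list entry (the mirror hostname is illustrative) looks like:

```
# /etc/apt/sources.list: fetch packages over Tor once apt-transport-tor is installed
deb tor+http://http.debian.net/debian unstable main
```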

Atlas can now also be used to search for Tor bridges. In the past, Atlas was only able to search for relays. This was made possible thanks to a patch developed by Dmitry Eremin-Solenikov.

Thanks to Tim Semeijn and Tobias Bauer for setting up new mirrors of the Tor Project’s website and its software.

Tor help desk roundup

Some Linux users have experienced missing dependency errors when trying to install Tor Browser from their operating system’s software repositories. Tor Browser should only be installed from the Tor Project’s website, and never from a software repository. In other words, using apt-get or yum to install Tor Browser is discouraged. Downloading and verifying Tor Browser from the Tor Project website allows users to keep up with important security updates as they are released.
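As a sketch of that verification step, the commands look roughly like the following; the version number and key ID shown are illustrative, so check the signing-keys page on torproject.org for the current ones:

```
# import the Tor Browser signing key, then check the downloaded tarball's signature
gpg --keyserver pool.sks-keyservers.net --recv-keys 0x416F061063FEE659
gpg --verify tor-browser-linux64-3.6.3_en-US.tar.xz.asc tor-browser-linux64-3.6.3_en-US.tar.xz
```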

News from Tor StackExchange

user3224 wants to log in to their Google, Microsoft, etc. accounts and wonders whether those services will learn their real name and other personal information. Roya and mirimir explained that if someone logs into an already personalized account, Tor can’t anonymize that user. Instead it might be wise to use Tor to register a pseudonym, and also to use an anonymity-focused operating system like Tails or Whonix.

escapologybb has set up a Raspberry Pi that serves as a SOCKS proxy for the internal network. While everyone can use it, escapologybb asks what the security implications are and whether this lowers the overall anonymity. If you know a good answer, please share your knowledge with the users of Tor StackExchange.

This issue of Tor Weekly News has been assembled by Lunar, Steven Murdoch, harmony, Philipp Winter, Matt Pagan, qbi, and Karsten Loesing.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-07-23 12:00:00

On July 14th, the European Union and the United States kicked off the sixth round of negotiations of what could be the world’s largest trade pact — the Transatlantic Trade and Investment Partnership (TTIP). The negotiations, which have been taking place for more than a year, are about opening markets on both sides of the Atlantic for exchange in goods, services, investment, and public procurement.

Access | Access Blog | 2014-07-23 07:14:19

Tails, The Amnesic Incognito Live System, version 1.1, is out.

All users must upgrade as soon as possible: this release fixes numerous security issues.


Notable user-visible changes include:

  • Rebase on Debian Wheezy
    • Upgrade literally thousands of packages.
    • Migrate to GNOME3 fallback mode.
    • Install LibreOffice instead of OpenOffice.
  • Major new features
    • UEFI boot support, which should make Tails boot on modern hardware and Mac computers.
    • Replace the Windows XP camouflage with a Windows 8 camouflage.
    • Bring back VirtualBox guest modules, installed from Wheezy backports. Full functionality is only available when using the 32-bit kernel.
  • Security fixes
    • Fix write access to boot medium via udisks (ticket #6172).
    • Upgrade the web browser to 24.7.0esr-0+tails1~bpo70+1 (Firefox 24.7.0esr + Iceweasel patches + Torbrowser patches).
    • Upgrade to Linux 3.14.12-1 (fixes CVE-2014-4699).
    • Make persistent file permissions safer (ticket #7443).
  • Bugfixes
    • Fix quick search in Tails Greeter's Other languages window (Closes: ticket #5387)
  • Minor improvements
    • Don't install Gobby 0.4 anymore. Gobby 0.5 has been available in Debian since Squeeze, now is a good time to drop the obsolete 0.4 implementation.
    • Require a bit less free memory before checking for upgrades with Tails Upgrader. The general goal is to avoid displaying "Not enough memory available to check for upgrades" too often due to over-cautious memory requirements checked in the wrapper.
    • Whisperback now sanitizes attached logs better with respect to DMI data, IPv6 addresses, and serial numbers (ticket #6797, ticket #6798, ticket #6804).
    • Install the BookletImposer PDF imposition toolkit.

See the online Changelog for technical details.

Known issues

I want to try it or to upgrade!

Go to the download page.

Note that for this release there are some special actions needed when upgrading from ISO and automatically upgrading from Tails 1.1~rc1.

What's coming up?

The next Tails release is scheduled for September 2.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

Support and feedback

For support and feedback, visit the Support section on the Tails website.

Tor Blog | The Tor Blog blogs | 2014-07-22 19:05:54

This past March, people from all over the world gathered in San Francisco for RightsCon. Access’ annual conference brings together activists, corporate leaders, programmers, representatives from various governments, and experts in law and policy working on a range of issues at the intersection of technology and human rights.

Access | Access Blog | 2014-07-22 16:49:29

Today Access, together with 20 digital and civil rights organisations, sent the following letter (linked here and below) to E.U. Commissioners Michel Barnier and Cecilia Malmström to bring their attention to an infringement of E.U. law by the United Kingdom through the adoption of the Data Retention and Investigatory Powers Act (“DRIP”).

Access | Access Blog | 2014-07-22 13:28:47

As posted by Roger on the Tor-Talk mailing list:

Hi folks,

Journalists are asking us about the Black Hat talk on attacking Tor that got cancelled. We're still working with CERT to do a coordinated disclosure of the details (hopefully this week), but I figured I should share a few details with you earlier than that.

1) We did not ask Black Hat or CERT to cancel the talk. We did (and still do) have questions for the presenter and for CERT about some aspects of the research, but we had no idea the talk would be pulled before the announcement was made.

2) In response to our questions, we were informally shown some materials. We never received slides or any description of what would be presented in the talk itself beyond what was available on the Black Hat Webpage.

3) We encourage research on the Tor network along with responsible disclosure of all new and interesting attacks. Researchers who have told us about bugs in the past have found us pretty helpful in fixing issues, and generally positive to work with.

Tor Blog | The Tor Blog blogs | 2014-07-21 22:41:21

A report by the top UN human rights authority condemns government surveillance practices for the "lack of accountability for arbitrary or unlawful interference in the right to privacy."

Access | Access Blog | 2014-07-18 19:10:50

This week the U.K. Parliament adopted the “Data Retention and Investigatory Powers Act,” or DRIP, a bill that would dramatically expand the government’s surveillance powers. In the runup to the vote, the GCHQ, civil service, and coalition and opposition leaders showed a flagrant disregard for parliamentary procedure and failed to allow an informed and public debate. The result is a terrible bill that would treat all citizens, in the U.K. and abroad, as surveillance targets.

Access | Access Blog | 2014-07-18 14:25:23

This week Access submitted comments to the FCC urging it to use its full authority to reclassify broadband internet access service as a telecommunications service under Title II of the Telecommunications Act — the only viable way the agency can safeguard the values that enabled the internet to become a global force for commerce, culture, free expression, and innovation.

Access | Access Blog | 2014-07-18 14:07:03

Access Senior Policy Counsel Peter Micek questioned the company on transparency reports, surveillance, and sweeping data retention legislation making its way rapidly through the UK Parliament.

Access | Access Blog | 2014-07-18 07:14:59

Access is pleased to announce the first release of Digital First Aid Kit, created in collaboration with a number of civil society and digital security organizations, including Hivos, Frontline Defenders, Electronic Frontier Foundation, the Computer Incident Response Center Luxembourg, Virtual Road, Internews, and Global Voices.

Access | Access Blog | 2014-07-17 18:18:58

Hello front-end web developers!

We are looking for somebody to fork and extend one of the two main Tor network status websites Atlas or Globe.

Here's some background: both the Atlas and the Globe website use the Onionoo service as their data back-end and make that data accessible to mere humans. The Onionoo service is maintained by Karsten. Atlas was written by Arturo as proof-of-concept for the Onionoo service and later maintained (but not extended) by Philipp. Globe was forked from Atlas by Christian who improved and maintained it for half a year, but who unfortunately disappeared a couple of weeks ago. That leaves us with no actively maintained network status website, which is bad.
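To make the data flow concrete, here is a minimal sketch in Python of consuming a document in the shape of Onionoo's summary endpoint. The field names (n, f, a, r) follow the Onionoo protocol, but the sample values are invented:

```python
import json

# A trimmed, invented response in the shape of Onionoo's /summary document.
sample = json.loads("""
{
  "relays_published": "2014-07-17 16:00:00",
  "relays": [
    {"n": "ExampleRelay", "f": "ABCD1234", "a": ["198.51.100.7"], "r": true},
    {"n": "SleepyRelay",  "f": "EF567890", "a": ["203.0.113.9"],  "r": false}
  ],
  "bridges_published": "2014-07-17 16:00:00",
  "bridges": []
}
""")

def running_relays(summary):
    """Return the nicknames of relays marked as currently running ("r": true)."""
    return [relay.get("n", "Unnamed") for relay in summary["relays"] if relay.get("r")]

print(running_relays(sample))  # -> ['ExampleRelay']
```

A front-end like Atlas or Globe does essentially this on the client side, rendering such fields into searchable pages.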

Want to help out?

Here's how: Globe has been criticized for having too much whitespace, which makes it less useful on smaller screens. But we hear that the web technology behind Globe is superior to the one behind Atlas (we're no front-end web experts, so we can't say for sure). A fine next step could be to fork Globe and tidy up its design to work better on smaller screens. And there are plenty of steps after that if you look through the tickets in the Globe and Atlas components of our bug tracker. Be sure to present your fork on the tor-dev@ mailing list early to get feedback. You can just run it on your own server for now.

The long-term goal would be to have one or more people working on a new network status website to replace Atlas and Globe. We'd like to wait with that step until such a new website is maintained for a couple of weeks or even months though. And even then, we may keep Atlas and Globe running for a couple more months. But eventually, we'd like to shut them down in favor of an actively maintained website.

Let us know if you're interested, and we're happy to provide more details and discuss ideas with you.

Tor Blog | The Tor Blog blogs | 2014-07-17 17:13:15

In a scathing new report, the UN High Commissioner for Human Rights warns that mass surveillance is “emerging as a dangerous habit rather than an exceptional measure” and that “the very existence of a mass surveillance programme…creates an interference with privacy.” The commissioner also slams judicial review processes, writing that in many countries they “amounted...to an exercise in rubber-stamping.”

Access | Access Blog | 2014-07-17 15:20:31

Welcome to the twenty-eighth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Roundup of research on incentives for running Tor relays

As an hors-d’œuvre to the now-ongoing Privacy Enhancing Technologies Symposium, Rob Jansen wrote a long blog post covering the last five years of research on incentives for running Tor relays.

Rob introduces the topic by describing the current “volunteer resource model” and mentions that it “has succeeded so far: Tor now consists of over 5000 relays transferring between 4 and 5 GiB/s in aggregate”. Rob lists several possible reasons why volunteers run relays right now. They are all intrinsic motivations: current operators run relays because they really want to.

Will relying only on volunteers limit the growth of the Tor network in the future? There are already not-for-profit organizations operating relays based on donations, but growing them too much would also be problematic. Another area being explored is extrinsic motivation: making Tor clients faster when someone runs a relay, or giving a financial reward (in one currency or another) for the service. One can legitimately ask whether such rewards are suitable for Tor at all, and Rob raises plenty of legitimate concerns about how they would interact with the current set of volunteers.

The problem keeps interesting researchers, and Rob details no fewer than six schemes: the oldest are PAR and Gold Star, which introduced anonymity problems; BRAIDS, where double spending of rewards is prevented without leaking timing information; LIRA, which focused on scalability; TEARS, where a publicly auditable e-cash protocol reduces the reliance on trusted parties; and finally the (not ideally named) TorCoin, which introduces the idea of a crypto-currency based on “proof-of-bandwidth”.

Rob details the novel ideas and drawbacks of each scheme, so be sure to read the original blog post for more details. After this roundup, Rob highlights that “recent research has made great improvements in the area of Tor incentives”. But that is only the technical side, as “it is unclear how to make headway on the social issues”.

“Tor has some choices to make in terms of how to grow the network and how to position the community during that growth process”, concludes Rob. So let’s have that conversation.

Defending against guard discovery attacks with layered rotation time

Guard nodes are a key component of a Tor client’s anonymity. Once an attacker gains knowledge of which guard node is being used by a particular client, putting the guard node under monitoring is likely the last step before finding a client’s IP address.

George Kadianakis has restarted the discussion on how to slow down guard discovery of hidden services by exploring the idea of “keeping our middle nodes more static”. The idea is to slow down the attacks based on repeated circuit destruction by reusing the same “middle nodes for 3-4 days instead of choosing new ones for every circuit”. Introducing this new behavior will slow down the attack, but George asks “are there any serious negative implications?”
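The proposed behavior can be sketched as follows. This is a toy model, not Tor's actual path-selection code; the class name, set size, and the three-day period are all illustrative:

```python
import random
import time

ROTATION_PERIOD = 3 * 24 * 3600  # reuse the same middle nodes for ~3 days

class MiddleNodePicker:
    """Toy sketch: keep a small static set of middle nodes and only rotate it
    after ROTATION_PERIOD, instead of picking fresh middles for every circuit."""

    def __init__(self, all_nodes, set_size=3, now=time.time):
        self.all_nodes = list(all_nodes)
        self.set_size = set_size
        self.now = now          # injectable clock, handy for testing
        self.current = []
        self.expires_at = 0.0

    def middles(self):
        # Rotate the static set only once the period has elapsed.
        if self.now() >= self.expires_at:
            self.current = random.sample(self.all_nodes, self.set_size)
            self.expires_at = self.now() + ROTATION_PERIOD
        return self.current

picker = MiddleNodePicker(["middle-%d" % i for i in range(20)])
first = picker.middles()
# Within the rotation period, every new circuit reuses the same middles,
# which is exactly what slows down attacks based on repeated circuit destruction.
assert picker.middles() == first
```

The trade-off George and others discuss is visible here: the attacker must now wait out the rotation period rather than forcing a fresh middle choice per circuit, but those few middles see far more of one client's traffic.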

The idea is not new, as Paul Syverson pointed out: “Lasse and I suggested and explored the idea of layered guards when we introduced guards”. He adds “there are lots of possibilities here”.

George worries that middle nodes would then “always see your traffic coming through your guard (assuming a single guard per client)”. Ian Goldberg added “the exit will now know that circuits coming from the same middle are more likely to be the same client”. Restricting the change to only hidden services and not every client means that it will be “easy for an entry guard to learn whether a client has static middle nodes or not”.

As George puts it in the latest message in the thread: “As always, more research is needed…” Please help!

More monthly status reports for June 2014

The wave of regular monthly reports from Tor project members for the month of June continued, with submissions from Michael Schloh von Bennewitz and Andrew Lewman.

Arturo Filastò reported on behalf of the OONI team, while Roger Dingledine submitted the SponsorF report.

Miscellaneous news

The various roadmaps that came out of the 2014 summer dev. meeting have been transcribed in a joint effort by George Kadianakis, Yawning Angel, Karsten Loesing, and an anonymous person. Most items will probably be matched with a ticket soon.

The Tor Project is hiring a financial controller. This is a part time position, approximately 20 hours per week, at the office in Cambridge, Massachusetts.

The Tails developers announced the creation of two new mailing lists. “If you are a designer, UX/UI expert or beginner” interested in the theory and practice of designing user interfaces for Tails, the tails-ux list is for you, while the tails-project list is dedicated to “the ‘life’ of the project”; however, “technical questions should stay on tails-dev”.

Alan kicked off the aforementioned tails-ux mailing list by announcing progress on Tails’ initial login screen. The new set of mockups is visible on the corresponding blueprint.

More mockups! Nima Fatemi produced some for a possible browser-based Tor control panel, incorporating features that were lost with the removal of Vidalia from the Tor Browser, such as the world map with Tor circuit visualizations. “How would you perfect that image? What’s missing?”, asked Nima, hoping “to inspire people to start hacking on it”.

Meanwhile, Sean Robinson has been working on a new graphical Tor controller called Syboa. Sean’s “primary motivation for Syboa was to replace TorK, so it looks more like TorK than Vidalia”. Sean announced that he will not have time for further development soon, but that he would answer questions.

Juha Nurmi submitted the weekly status report for the ahmia.fi GSoC project.

Thanks to the University of Edinburgh’s School of Informatics, funcube.fr, Stefano Fenoglio, IP-Connect, Justin Ramos, Jacob Henner from Anatomical Networks, and Hackabit.nl for running mirrors of the Tor Project website!

Tor help desk roundup

Users often ask for assistance setting up Tor Cloud instances. Sina Rabbani is taking over the maintenance of Tor Cloud and is working on updating the packages and documentation. Until new documentation on using the up-to-date images and Amazon Web Services interface lands, users not already familiar with AWS may want to use a different virtual server provider to host their bridges.

Easy development tasks to get involved with

The setup scripts of the Flashproxy and Obfsproxy pluggable transports attempt to download and build the M2Crypto library if it is not already installed. We’d really like to avoid this, and instead have the setup script fail if not all libraries needed for building Flashproxy are present. The ticket that describes this bug also outlines a possible workaround that disables all downloads during the setup process. If you know a bit about setuptools and want to turn this description into a patch and test it, please give it a try.
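One way such a check could look (a hedged sketch, not the actual patch from the ticket) is to probe for the required modules up front and report an error instead of letting setuptools fetch anything:

```python
import importlib.util

REQUIRED = ["M2Crypto"]  # build dependencies setup.py should refuse to download

def check_build_deps(modules):
    """Return the subset of modules that cannot be found on this system."""
    missing = []
    for name in modules:
        if importlib.util.find_spec(name) is None:
            missing.append(name)
    return missing

missing = check_build_deps(REQUIRED)
if missing:
    # A real setup.py would sys.exit() here instead of printing.
    print("error: missing build dependencies: %s\n"
          "install them with your package manager rather than "
          "letting setup.py fetch them" % ", ".join(missing))
```

Failing fast like this keeps the build reproducible and avoids surprise network access during installation.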

This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, Karsten Loesing, and George Kadianakis.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-07-16 12:00:00

On June 25, U.S. Attorney General Eric Holder announced the Obama administration is seeking to extend to EU citizens several privacy protections in U.S. law, which today are only available to U.S. citizens and permanent residents. If the U.S. Congress follows through and passes legislation to this effect, Europeans will gain access to U.S. courts for certain privacy offences, for the first time.

Access | Access Blog | 2014-07-16 08:45:10

Today, Access sent a letter to Members of Parliament in the United Kingdom calling for the Data Retention and Investigatory Powers Act currently being rushed through under "emergency" procedures to be thrown out for its failure to comply with international human rights norms, the EU Charter of Fundamental Rights, and the recent decision of the Court of Justice of the EU invalidating the EU Data Retention Directive.

Access | Access Blog | 2014-07-15 15:12:37

Access and a coalition of digital rights groups, companies, and security experts submitted a letter to U.S. President Obama urging him to pledge to veto the Cybersecurity Information Sharing Act (CISA), and any other bill that includes similar provisions that would hurt our basic right to privacy.

Access | Access Blog | 2014-07-15 13:30:04

There has been a considerable amount of work in the area of Tor incentives since the last post on the topic in 2009 (over 5 years ago!). This post will give an overview of some of the major new approaches, including two new designs that I co-authored that will appear at the Workshop on Hot Topics in Privacy Enhancing Technologies later this week. Before getting to those, I'll give background on Tor and discuss some of the social issues to help form a foundation of understanding.

Tor’s volunteer resource model

The Tor network consists of a set of relays that provide bandwidth and other resources to forward traffic for Tor users. Anyone in any part of the world can contribute to Tor by downloading the Tor software and configuring it to operate in relay mode. In fact, the Tor network is currently composed exclusively of such voluntarily-operated relays.

There are many reasons existing relay operators might contribute to Tor. They may contribute because they really want a privacy-enhancing communication tool like Tor to exist, they believe in Tor’s philosophy of open source software and operational transparency, and they want Tor to succeed at its goals. They may wish to be part of the Tor community and the social movement to provide privacy-enhanced communication that connects people in all parts of the world. Tor may be practically useful to them for communication with others or to retrieve content that may otherwise be unavailable to them due to censorship or other network interference. Or they may be technically interested in the Tor software or network and the associated research problems raised by large, distributed systems. Finally, some may be interested for adversarial reasons, e.g., collecting information on the uses of the network or the content being transferred. All of these reasons provide intrinsic motivation for operators to contribute - they don’t expect any direct compensation for their contributions.

The volunteer approach has succeeded so far: Tor now consists of over 5000 relays transferring between 4 and 5 GiB/s in aggregate. For the most part, this is "organic" growth obtained through community outreach, where volunteers first have personal contact with existing community members and become inspired to help.

Whatever their reason for volunteering, relay operators not only contribute resources from the physical machines upon which their relays are executed, but also contribute the time involved with configuring, updating, and adjusting their relays as both the software and network mature. In many cases, operators also contribute monetarily through the direct purchase of dedicated machines or special fast connections to Internet Service Providers (ISPs), and exit relay operators spend social energy on maintaining relationships with their ISP (to make sure the ISP understands that requests coming from their machines are from Tor).

However, because many people that are otherwise excited about or interested in Tor are incapable or unwilling to incur these expenses, there are far fewer Tor relays than Tor clients (users that use Tor to access content, e.g. via the Tor Browser Bundle, without directly contributing resources to the network). Expanding the set of relays could have many benefits: Tor could become faster, more reliable, and more secure while at the same time distributing trust among a larger and more diverse set of distributed peers. A larger network that is transferring a larger quantity and more diverse types of data will, in general, make it more difficult for an adversary to determine who is talking to whom. This means a safer Tor for everyone.

Incentives and motivation... say whuuuht?

There are many social and technical challenges involved with expanding the set of Tor relays. We focus here on the social issues surrounding how to encourage more people to run a relay in the first place by providing incentives (i.e., rewards) for relay operators, and later focus on how to do this without leaking information or otherwise hurting Tor’s anonymity.

An important social issue to consider is that of maintaining the existing community of operators and how rewards could harm it. The presumption here is that existing relay operators are motivated to run relays for their own reasons - the act of contributing to Tor itself has intrinsic value to them. Then the question is: what would happen if we start rewarding operators through the use of an incentive system?

The answer isn’t clear, and there are at least a couple of forks in the cognitive road with many more questions along the way. First, what would happen to the existing operators who used to feel good about volunteering but are now compensated with a reward that may have extrinsic value? Although the new reward may provide some extrinsic value, will it lower their original intrinsic value enough to cause them to lose their motivation to contribute? Will they continue their contributions but begin to expect the reward, such that if the reward is removed later they would no longer be motivated even though originally (before the reward) they were happy to contribute? Second, is the new group of operators attracted to Tor by the reward somehow less desirable than the existing group, because they don’t care as much about Tor or are more likely to stop contributions and leave the system more quickly than the existing volunteers? Does the fact that they have higher extrinsic motivation than intrinsic make them better or worse operators? Will they be less likely to incur the costs of saying "no" when people ask them to log or turn over traffic? Will they in fact be willing to sell logs if it increases their rewards? Third, how will the new group of operators affect the existing group? If their intrinsic motivation was low enough before the reward that they didn’t contribute, but high enough with the reward that they do contribute, are these new contributors going to somehow shift the "community spirit" into something that no longer has the value that it once had? Will this "crowd out" the intrinsically motivated individuals and cause existing operators to leave (a widely acknowledged theory)?

The answers to these questions are not clear, though speculations abound. One response is that we will never know what will happen until we try it, but if we try it we may not like the result. Another is that if we try it and it succeeds, great; if it starts going badly, we can adapt and adjust.

Researchers and other Tor community members have been interested in the Tor incentive problem because a well-designed incentive scheme has the potential to greatly improve the network (we will discuss several technical designs below). However, because of the many open social questions, it has been challenging for the Tor community to come to a consensus on the path forward. On one hand and as discussed above, many people already run Tor relays and provide valuable service, and the network and bandwidth graphs imply some amount of success in network growth. It would be tragic if the introduction of an experimental reward scheme drove away the existing participants. On the other hand, the number of Tor clients still vastly exceeds the number of relays, and Tor’s bandwidth capacity remains the limiting factor for Tor’s growth.

A simplified model of reasoning

If we consider that relay operators with a higher intrinsic motivation to contribute are somehow more desirable to the network and the community because they care more deeply about Tor and the social movement, then we can consider there to be a trade-off between reward value and the desirability of the operator. The lower the extrinsic reward value is, the higher the intrinsic value a potential new operator must possess in order for the total value to be high enough to motivate her to contribute. The higher the extrinsic reward value is, the lower the intrinsic value may be to still attract the new operator. Under this model, as the value of the reward increases, the number of individuals willing to contribute also increases while their "quality" decreases.
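As a toy illustration of this trade-off (all names, numbers, and the threshold are invented for this sketch, not part of any real scheme), we can model an individual as contributing whenever their intrinsic value plus the extrinsic reward crosses a fixed motivation threshold:

```python
# Hypothetical sketch of the simplified model above: a person contributes
# when intrinsic + extrinsic value meets a motivation threshold.
# All values are illustrative.

THRESHOLD = 10.0  # total value needed before someone runs a relay

def will_contribute(intrinsic: float, extrinsic_reward: float) -> bool:
    """An operator contributes when combined value meets the threshold."""
    return intrinsic + extrinsic_reward >= THRESHOLD

def contributors(population, extrinsic_reward):
    """Return the intrinsic values of everyone who would contribute."""
    return [v for v in population if will_contribute(v, extrinsic_reward)]

# A toy population with varying intrinsic motivation.
population = [2.0, 5.0, 8.0, 11.0]

# With no reward, only the most intrinsically motivated contribute.
assert contributors(population, 0.0) == [11.0]

# Raising the reward grows the pool, but the marginal contributors are
# exactly those with the lowest intrinsic motivation.
assert contributors(population, 5.0) == [5.0, 8.0, 11.0]
```

Raising the reward monotonically grows the contributor pool while lowering the average intrinsic motivation of the newcomers, which is exactly the "quality" trade-off described above.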

Please note that this is a very simplified model of reasoning; it does not necessarily reflect reality, and not everyone will agree with it. There are many other values in play here as well; for example, motivations change over time as the world changes, and incentive schemes that require significant protocol modifications are more difficult to implement, test, and deploy. Nonetheless, this simplified model may help sort out the issues, and it seems to be consistent with the fact that the Tor community has thus far preferred to grow the network through intrinsically motivated relay operators.

Social rewards

Tor’s volunteer approach has generally been on the conservative side of our simplified model, where individuals with higher intrinsic motivations to contribute to Tor are preferred. New operators have been recruited through social and community-oriented efforts such as explaining how great Tor is and why they should care. This works well for people who are already intrinsically motivated about Tor, such as this guy who tattooed a Tor onion on his arm.

Other relay recruitment approaches for intrinsically motivated individuals include Torservers.net and the Noisebridge Tor Exit Node Project: both operate as independent nonprofit organizations for those people who would like to contribute to the Tor network but for one reason or another are not in a position to operate relays themselves. As discussed in a 2011 post, it is preferable that people who can run their own fast exit relays do so. But the approach of turning donations into fast exit capacity allows those who cannot run their own relays to still contribute to improving the performance and exit capacity of the Tor network.

The more recent EFF Tor Challenge is somewhere on the same side of the spectrum, except it actually offers small rewards as incentives. EFF explains Tor’s importance and users are encouraged to contribute to the network by running a relay. The challenge offers prizes to relay operators as an incentive: their name and Twitter handle published in a list of contributors, a limited-edition sticker for running a relay for 12 months, and a t-shirt if that relay is fast (bandwidth of 1 MB/s or greater after 12 months).

Other social rewards are possible here as well. A post on the tor-dev mailing list proposes that Tor host a simple profile page for each relay that will list and celebrate the relay’s total bandwidth contribution over time, and recognize the operator for running special Tor bridge or exit relays. Relatively simple ideas like this may have the potential to attract intrinsically-motivated individuals without requiring extrinsic rewards or a lot of changes to Tor itself.

The approaches in this category are relatively low-risk/low-reward schemes: even though the new operators are receiving recognition or a reward for running a relay, the reward is of such little value that there is little risk that existing contributors who were not rewarded will leave. At the same time, new operators lose little if they decide to shut down their new relays.

On the other side of the spectrum, researchers have explored more "radical" approaches that do not require individuals with high intrinsic motivation (though they are welcome, too!) but do change Tor’s design. Researchers explore these designs because they are generally more technically interesting and have the potential to produce a much larger set of relays. We now shift to discussing these latest research proposals.

Review of early research on Tor incentive system designs - Gold Star and PAR

The 2009 post discussed two incentive papers that were new at the time: Payment for Anonymous Routing from PETS 2008 (PAR) and Building Incentives into Tor from FC 2010 (the "Gold Star" scheme).

In the Gold Star scheme, the Tor directory authorities measure the bandwidth of Tor relays and assign a special flag (a "gold star") to the fastest 7/8 of relays. Then whenever those relays send traffic through Tor, they choose the other fast gold star relays to form a fast gold star path. Relays then prioritize traffic on these gold star paths. The incentive here is that if you run a relay, you will get faster service when using Tor as a client. The main drawback to this approach is that only relays are able to get gold stars and priority service. This means that all relays that are part of a gold star path know for certain that the initiator of that traffic is someone from the small list of gold star relays. Because the set of gold star relays would be smaller than the set of all Tor users by several orders of magnitude, the anonymity for gold star relays would be significantly harmed.
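The flag-assignment step is simple to sketch. The following is an illustration of the idea only, not Tor's actual directory authority code; relay names and bandwidth numbers are invented:

```python
# Sketch of the Gold Star assignment rule: rank relays by measured
# bandwidth and flag the fastest 7/8 of them. Not real directory code.

def assign_gold_stars(relay_bandwidth: dict) -> set:
    """Return the set of relays in the fastest 7/8 by bandwidth."""
    ranked = sorted(relay_bandwidth, key=relay_bandwidth.get, reverse=True)
    cutoff = (len(ranked) * 7) // 8
    return set(ranked[:cutoff])

relays = {"r1": 100, "r2": 90, "r3": 80, "r4": 70,
          "r5": 60, "r6": 50, "r7": 40, "r8": 10}
stars = assign_gold_stars(relays)
assert len(stars) == 7          # fastest 7 of 8 relays are flagged
assert "r8" not in stars        # the slowest 1/8 gets no gold star
```

The anonymity problem follows directly from this rule: prioritized traffic can only originate from the flagged set, which is tiny compared to the full user population.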

In PAR, all users (not just relays) are able to be part of the incentive system. PAR has an honest-but-curious centralized entity called a "bank" that manages digital tokens called "coins". Users can purchase coins from the bank and then pay the relays while using Tor; the relays then deposit the coins back into the bank and the bank checks to make sure the coin is valid and that it has never been spent before (i.e., it has not been "double spent"). The main novel idea explored in this work is how to include digital payments into each Tor circuit in a way that prevents the relays from learning the client’s identity from the payment itself.
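The bank's double-spend check is the bookkeeping core of the scheme. Here is a minimal sketch of that check alone (class and serial names are invented; the real protocol uses blind signatures so the bank cannot link coins to the clients who withdrew them, which this sketch ignores):

```python
# Minimal sketch of PAR's honest-but-curious bank checking coin
# validity and double spending. Cryptographic details omitted.

class Bank:
    def __init__(self):
        self.issued = set()     # serials the bank has signed
        self.deposited = set()  # serials already redeemed

    def withdraw(self, serial: str) -> None:
        self.issued.add(serial)

    def deposit(self, serial: str) -> bool:
        """Accept a coin only if it is valid and not yet spent."""
        if serial not in self.issued or serial in self.deposited:
            return False
        self.deposited.add(serial)
        return True

bank = Bank()
bank.withdraw("coin-1")
assert bank.deposit("coin-1") is True    # first deposit succeeds
assert bank.deposit("coin-1") is False   # double spend is rejected
assert bank.deposit("coin-2") is False   # unknown coin is rejected
```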

Adding real money into Tor opens a host of new legal and technical questions, which Roger already briefly discussed. For example, real money might shift Tor into a different legal category (see e.g. the EU discussions of what is a "service provider" and which ones are obliged to do data retention), or change the liability situation for relay operators.

The main design challenge we learned from the PAR design is that the timing of when the client withdraws the coins and the relays deposit them creates a trade-off between the ability to detect double spending and link a client to its coins. If the client withdraws some coins and a few seconds later a relay starts depositing coins, how much information does the bank gain? If the relay waits for some time interval to deposit the coins to hide this timing information, then it becomes possible for the client to double spend those coins during that interval. The longer the relay waits before depositing, the harder it is for the bank to link the coins but the easier it is for the client to cheat. A rational relay will deposit immediately to secure its payment, which leads to the worst situation for anonymity. If we assume the relays will deposit immediately, then it is up to the client to hold coins for some random amount of time after purchasing them from the bank, to disrupt potential attempts to link withdrawals to deposits at the cost of flexibility in usage.

In review, both of these proposals have anonymity problems: the Gold Star scheme because the assignment of rewards to relays is public and identifies traffic as originating from the relatively small set of clients of fast relay operators; and PAR because the timing of the withdrawal by the client and the deposit by the relay may leak information about the client’s traffic. In PAR, coins may be held by the client and the relay longer to disrupt this timing information, but this trades off flexibility in usage and the speed at which double spending can be detected. So what have these papers taught us? We have learned about some requirements that a Tor incentive scheme should probably fulfill: both clients and relays should be able to receive the incentive so that neither group stands out; and the timing of payments should not allow cheating and should not enable linkability.

Preventing double-spending without leaking timing information - BRAIDS

Recruiting New Tor Relays with BRAIDS was the followup research proposal presented at CCS in 2010. One of our goals in BRAIDS was to eliminate the trade-off between double-spending and linkability. To achieve this, we designed a new type of digital token which we called a "relay-specific ticket". Tickets were still issued by a trusted, centralized entity - the ticketmaster (we called it a "bank" in the paper, but BRAIDS was not designed to handle real money). Clients choose which relays they want to use, and receive tickets from the ticketmaster that are valid only at the chosen relays while hiding the chosen relay information from the ticketmaster (using partially-blind signatures). Clients then form a Tor circuit with the chosen relays and send the tickets to receive traffic priority (using a differentiated services scheduler). Each ticket will then provide traffic priority through the chosen relay for a specific number of bytes. Because each ticket a relay receives is only valid at that relay and no other, the relay could prevent double spending locally without contacting the ticketmaster.
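The key property is that double-spend detection becomes a purely local check at each relay. This toy sketch shows only that local check (names are illustrative; the partially-blind signatures that bind a ticket to a relay without revealing the choice to the ticketmaster are omitted):

```python
# Sketch of why relay-specific tickets allow local double-spend
# prevention, as in BRAIDS. Signature verification is omitted.

class Relay:
    def __init__(self, relay_id: str):
        self.relay_id = relay_id
        self.seen = set()  # ticket serials already redeemed here

    def redeem(self, ticket_relay_id: str, serial: str) -> bool:
        """Grant priority only for fresh tickets bound to this relay."""
        if ticket_relay_id != self.relay_id:
            return False   # ticket is only valid at a different relay
        if serial in self.seen:
            return False   # already spent here: local double-spend check
        self.seen.add(serial)
        return True

r = Relay("relayA")
assert r.redeem("relayA", "t1") is True
assert r.redeem("relayA", "t1") is False  # no ticketmaster contact needed
assert r.redeem("relayB", "t2") is False  # bound to another relay
```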

Another feature of BRAIDS is that tickets are distributed freely in small amounts to any client that asks, but only to one client per IP address, and only once per distribution round. If clients change their mind about which relays they want to use for a circuit, they may contact the ticketmaster to exchange their old tickets for new ones following a time schedule. The exchange process is also used by relays to turn the tickets they received from clients into new usable tickets.

One main drawback to BRAIDS is that even though all users are able to get a small amount of tickets for free, relays are able to accumulate a much larger stash because they receive the free tickets AND the tickets sent to them by other clients. This means that relays stand out when using tickets to download large flows because it is less likely that a normal client would have been able to afford it. Another major drawback is that the exchange process is somewhat inefficient. Relays will exchange received tickets for new ones which they can use themselves, and clients that didn't spend their tickets (e.g., because their Tor usage was low or their chosen relays became unavailable) must exchange them or lose them. This leads to the ticketmaster exchanging all system tickets over every exchange interval. (The ticket validity interval is split into [spend, relay exchange, client exchange], so clients that don't spend in the "spend" time-frame must wait until the "client exchange" time-frame to get new tickets. Increasing the interval lengths makes the system slightly less flexible to rapid changes. The longer the intervals, the more tickets will "pile up" for later processing by the ticketmaster.)

BRAIDS showed us the power of relay-specific tickets but unveiled the scalability problems associated with a trusted, centralized entity.

Improving scalability - LIRA

LIRA: Lightweight Incentivized Routing for Anonymity was published at NDSS 2013. LIRA still uses a centralized entity to manage the incentives, but LIRA only requires incentive management for the relays (thousands) instead of for all system users (millions) like BRAIDS. The way we achieve this is through the use of a lottery system, where clients simply guess a random number for every circuit to receive priority on that circuit (traffic is prioritized using a differentiated services scheduler as in BRAIDS) with tunable probability. The lottery is set up with special cryptography magic such that relays are rewarded with guaranteed winning guesses to the lottery; relays are allotted winners according to the amount of bandwidth they contributed.
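The tunable-probability intuition can be sketched with a plain hash check: a guess "wins" when its hash falls below a threshold derived from the target probability. This is only the lottery intuition, not LIRA's actual construction, which uses cryptography that lets relays hold guaranteed winners:

```python
# Toy version of a tunable lottery: a guess wins when its SHA-256
# digest, viewed as an integer, falls below p * 2**256.
import hashlib

def wins(guess: bytes, p: float) -> bool:
    digest = int.from_bytes(hashlib.sha256(guess).digest(), "big")
    return digest < int(p * 2**256)

# Edge cases: every guess wins at p = 1, none win at p = 0.
assert wins(b"any-guess", 1.0) is True
assert wins(b"any-guess", 0.0) is False
```

This also makes the cheating drawback concrete: nothing stops a client from generating fresh guesses (new circuits) until one happens to win.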

LIRA is more efficient and scalable than any earlier scheme. However, LIRA’s main drawback is that probabilistic guessing reduces flexibility for clients wanting to receive continuous priority over time, and creates a potential for cheating the system because it enables clients to continuously create new circuits and new guesses until a correct guess is found. Another problem is that a secure bandwidth measurement scheme is required to ensure that relays don’t receive rewards without actually contributing to Tor (this wasn't necessary in BRAIDS because the clients sent rewards (tickets) directly to the relays); secure bandwidth measurement is still an open research problem. Finally, LIRA still relies on a trusted central entity to manage the lottery.

Reducing reliance on trusted parties - TEARS

From Onions to Shallots: Rewarding Tor Relays with TEARS will be presented at HotPETs later this week. The main goal in TEARS is to remove the reliance on a central entity to manage the incentives. The central entities in the above schemes (let’s generalize them as "banks") are referred to as "semi-trusted", because there are several ways a malicious bank could misbehave - for example, the bank could refuse service, could extort users by demanding extra fees in order to process their requests, or could "inflate" the digital currency (coins, tickets, guesses, etc.) by printing its own tokens for a profit.

TEARS draws upon the decentralized Bitcoin design to address these concerns by using a *publicly auditable* e-cash protocol that prevents the bank from misbehaving and getting away with it. The bank in TEARS consists of a group of semi-trusted servers, such as the Tor directory servers (as opposed to Bitcoin’s distributed proof-of-work lottery), only a quorum of which need to function correctly for the overall system to keep working. The e-cash cryptography used here is publicly auditable and every interaction with the bank is conducted over a public communication channel (such as the Bitcoin blockchain itself). The security guarantee is that any form of misbehavior on the part of the bank servers leaves a trail of evidence that can be discovered by anyone watching. This approach is reminiscent of systems like Certificate Transparency and OpenTransactions.

In addition to a decentralized bank, TEARS offers a new two-level token architecture to facilitate rewarding relays for their bandwidth contributions without hurting anonymity. First, a decentralized process audits relays’ bandwidth and informs the bank of the results. The bank mints new "shallots" (anonymous, auditable e-cash) for each relay based on their contributions and deposits them into the relay accounts on their behalf. Separately, the bank’s monetary policy may allow it to mint new shallots and distribute them to users, e.g. using mechanisms suggested in BRAIDS but more commonly known to Bitcoin users as faucets. Second, shallots are transferable among users and may be redeemed for relay-specific "PriorityPasses", which are then used to request traffic priority from Tor relays (again, as in BRAIDS). PriorityPasses are relay-specific so that relays can immediately and locally prevent double spending without leaking information to any other entity; a similar feature is present in BRAIDS’ tickets. However, a novel feature of PriorityPasses is that they are non-transferable and become useless after being spent at a relay -- this reduces overhead associated with exchanges, and ensures that the process of requesting traffic priority does not harm anonymity because the act of redeeming a shallot for a PriorityPass will be unlinkable to any later transaction. There is a question of how many PriorityPasses can be spent in one circuit before it is suspicious that a client has so many, so the size of the faucets and how they distribute shallots will play a key role in anonymity. Anonymity is also tied to how relays decide to distribute their shallots to clients, either via a faucet or through a third-party market.
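The two-level token flow can be sketched as follows. This is an illustration of the shallot-to-PriorityPass structure only (function names and the dict representation are invented; the real design uses publicly auditable e-cash):

```python
# Sketch of TEARS' two-level tokens: transferable shallots are redeemed
# for relay-specific, single-use PriorityPasses. Crypto omitted.
import itertools

_serial = itertools.count()

def redeem_shallot(relay_id: str) -> dict:
    """Exchange one shallot for a PriorityPass bound to one relay."""
    return {"relay": relay_id, "serial": next(_serial), "spent": False}

def spend_pass(relay_id: str, pp: dict) -> bool:
    """A pass works once, and only at the relay it names."""
    if pp["relay"] != relay_id or pp["spent"]:
        return False
    pp["spent"] = True
    return True

pp = redeem_shallot("relayA")
assert spend_pass("relayA", pp) is True
assert spend_pass("relayA", pp) is False  # useless after being spent
assert spend_pass("relayB", redeem_shallot("relayA")) is False
```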

TEARS was designed to operate inside the existing Tor network and thus does not significantly change the fundamentals of Tor’s design. The decentralized bank and bandwidth measurement components do not alter the way clients choose circuits. Clients and relays that want to use or support TEARS, however, would need to support new protocols for interacting with the bank and logic for handling shallots, PriorityPasses, and traffic priority.

TEARS still relies on a "bandwidth measuring" component that can accurately and robustly determine when a relay has indeed contributed useful bandwidth. While the e-cash system in TEARS is designed to be publicly auditable, the existing mechanisms for measuring bandwidth still require trusted authorities to probe relays.

Bandwidth measurement - TorCoin

A TorPath to TorCoin - Proof-of-Bandwidth Altcoins for Compensating Relays is the other paper to be presented at HotPETs this week. TorCoin addresses the bandwidth measurement problem with a different approach -- an altcoin (a Bitcoin alternative) based on a novel "proof-of-bandwidth" (rather than proof-of-work) mechanism called TorPath, in which the relays (and endpoints) of a circuit effectively mine for new coins whenever they successfully transfer a batch of packets. In TorCoin, a group of "assignment" authorities are responsible for generating a list of circuits (using a shuffle protocol) and assigning them to clients. Bandwidth proofs are then constructed as the circuit is used such that the client mines the TorCoin and then transfers part of it to each of the circuit’s participants.

Like TEARS, TorCoin distributes the process of rewarding relays for their bandwidth contributions. Bandwidth measurement is done directly as part of the distributed mining process and provides strong guarantees. Also, by utilizing a group of assignment authorities that may have more information about the system or underlying network, there is a lot of potential for generating more secure paths for clients than clients are able to generate for themselves.

TorCoin still has some issues to work out; it may be possible to fix some of the smaller issues with protocol modifications, but some of the larger issues don’t have obvious solutions.

One drawback to TorCoin is that it requires the group of collectively-trusted assignment authorities (you have to trust only that some threshold/quorum number of them are correct) to generate and assign circuits to clients. This is a similar trust model to the current Tor directory authorities. In practice, the assignment authorities cause availability issues: if a majority of the assignment authorities are unreachable, e.g. due to DoS or censorship, then the system is unusable to the clients because they won’t be able to generate circuits. This is also somewhat true of the directory authorities; however, directory information can be signed and then mirrored and distributed by other relays in the system, whereas assignment authorities are required to always be online and available to new clients. TorCoin clients contact the assignment authorities in order to build new circuits, whereas in Tor they can build as many circuits as they need once the directory information is retrieved (from the directory authorities or from directory mirrors).

TorCoin as written has significant security issues. Because relay assignment is not based on bandwidth, it is easier for an adversary to get into the first and last position on a circuit and break anonymity. This can be done through a sybil attack by adding an arbitrary number of bad relays and joining them to the network without actually providing bandwidth. Because the protocol reveals which ephemeral keys are attached to the assigned circuits, an adversary can confirm when it has compromised a circuit (has malicious nodes in the correct positions) without needing to do any statistical correlation attack (it can match up the ephemeral keys assigned to its malicious relays to the ones posted in the circuit assignment list).

The formation of relay/client groups is not discussed and is similarly vulnerable to sybil attacks where the adversary can completely subvert the coin mining process. This can be done by registering an arbitrary number of relays and clients with the assignment servers, such that a large majority of circuits created by the assignment process will contain malicious relays in all positions and be assigned to a malicious client. This malicious collective of nodes can then “pretend” to send bytes through the circuit without actually doing so, and gain an advantage when mining coins. The paper suggests using a persistent guard, which means the adversary only needs malicious relays in the middle and exit positions of its sybil client circuits, greatly increasing the probability of a full compromise (or requiring far fewer nodes to achieve the same probability of compromise as without persistent guards). (The sybil clients only have to get 2 of their relays in the circuit instead of 3, reducing the probability from f^3 to f^2 for malicious fraction f.) Further, even if some relays on a circuit are honest, it is not rational for them to refuse to sign proofs of bandwidth that have been exaggerated (too high) by other relays. It will only benefit a relay to ignore proof-of-bandwidth checks, giving it an advantage over completely honest nodes in the TorCoin mining process.
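To put numbers on the f^3 versus f^2 observation: with a circuit of three positions and an adversary controlling a fraction f of selections per position, a full compromise needs all three positions malicious; with the adversary's own sybil clients pinning a malicious persistent guard, only the middle and exit positions must be won.

```python
# Worked numbers for the persistent-guard observation: probability of a
# fully compromised circuit for malicious selection fraction f.

def p_compromise(f: float, positions_needed: int) -> float:
    return f ** positions_needed

f = 0.1
assert p_compromise(f, 3) == 0.1 ** 3  # all 3 positions must be won
assert p_compromise(f, 2) == 0.1 ** 2  # guard already malicious

# With f = 0.1, pinning the guard makes compromise ten times more likely.
assert p_compromise(f, 2) / p_compromise(f, 3) > 9.9
```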

There are a variety of unaddressed practical deployment issues as well. It is not clear how to do load balancing with TorCoin alone - no one should be able to determine how many bytes were sent by any of the circuit members or where the payments for mined coins are being sent (anonymous TorCoin transactions are necessary for anonymity). Exit policies are not discussed, and there is no clear way to support them or to let a client request a specific exit port. It is not clear how to ensure that a relay is not chosen for the same circuit twice, or that two relays from the same relay family are not on the same circuit. Finally, it is not clear how to link ephemeral circuit keys to TorCoin addresses so that payments may be sent from clients to relays without revealing client identity.

Don't misunderstand my ranting here - I think that the TorCoin idea is great (I am a co-author after all). It has the potential to motivate new researchers and developers to start thinking about solutions to problems that Tor is interested in, particularly bandwidth measurement. However, the limitations need to be clear so that we don't start seeing production systems using it and claiming to provide security without working through *at least* the issues mentioned above. The current paper was an initial concept in its early stages, and I expect the system to improve significantly as it is further developed.

(For completeness, we should also point out that TorCoin will need a new name if anybody decides to build it. The Tor trademark FAQ says it's fine for research paper designs to use the Tor mark, but if it becomes actual software then it will surely confuse users and the rest of the Tor community about whether it's written by or endorsed by Tor.)

Note that TorCoin and TEARS are at least somewhat complementary, since TEARS *needs* a proof-of-bandwidth scheme, and TorCoin *provides* one. However, they’re also not directly compatible. TorCoin requires a substantial change to both Bitcoin and to Tor (or to put it another way, it would be a system that only partially resembles Bitcoin and only partially resembles Tor). On the other hand, TEARS leaves Tor's circuit-finding mechanism intact, and the token protocol in TEARS is closer to traditional anonymous e-cash systems than to Bitcoin.

For better or for worse?

As outlined above, recent research has made great improvements in the area of Tor incentives. More work is needed to show the feasibility and efficiency of the decentralized approaches, and a secure bandwidth measurement scheme would help fill a critical piece missing from TEARS. Recent improvements to the way Tor schedules sockets and circuits would be necessary to correctly provide traffic priority (see #12541), and then a differentiated services scheduler that is able to prioritize traffic by classes (already described in and prototyped for the BRAIDS, LIRA, and TEARS papers) would also be needed.

Unfortunately, it is unclear how to make headway on the social issues. A small scale rollout of an "experimental build" to those relays who want to support new incentive features could be one way to test a new approach without committing to anything long term.

One question that is often raised is: if an incentive scheme rewards relays for providing bandwidth, then won’t everyone just pick the same cheapest hosting provider and Tor will lose location diversity? This question is largely addressed in the discussion of what constitutes a useful service in the TEARS paper (the TEARS paper and appendices also give useful commentary on many of the common problems and design decisions to make when designing an incentive scheme). Basically, it can be addressed in the monetary policy, e.g., in addition to rewarding relays for bandwidth, the bank could also assign weights that could be used to adjust the rewards based on certain flags that relays possess, or the geographic location in which they operate. This could be adjusted over time so that there would be a higher incentive to run relays in parts of the world where none exist, or to prefer exit relays over other positions, etc. Note, though, that it is unclear exactly what the "correct" utility function should be and when/how it should be adjusted. Note that Torservers.net similarly rewards relays for location diversity (see here).
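A weighted reward policy of this kind is easy to sketch. The weight tables and function below are entirely hypothetical, invented only to illustrate the idea that a bank could scale bandwidth rewards by relay flags and location:

```python
# Hedged sketch of a monetary policy that weights bandwidth rewards by
# relay flags and region. All weights and names are illustrative.

FLAG_WEIGHTS = {"Exit": 1.5, "Guard": 1.2}
REGION_WEIGHTS = {"underserved": 2.0, "common": 1.0}

def reward(bandwidth: float, flags: set, region: str) -> float:
    flag_w = max((FLAG_WEIGHTS.get(f, 1.0) for f in flags), default=1.0)
    return bandwidth * flag_w * REGION_WEIGHTS.get(region, 1.0)

# An exit relay in an underserved region earns more per unit of
# bandwidth than a plain middle relay in a common location.
assert reward(100, {"Exit"}, "underserved") == 300.0
assert reward(100, set(), "common") == 100.0
```

Tuning these weights over time is exactly the open "utility function" question noted above.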

Another point to make here is that most of these approaches have nothing to do with giving out or transferring real dollars. The tokens in most of these schemes are useful only to receive traffic priority in Tor. Will there be third-party markets that form around the exchange of the tokens? Sure. And the tokens may be speculated on. But at the end of the day, the tokens would still only provide prioritized traffic. Depending on the configuration of the priority scheduler, the difference between priority traffic and normal traffic may not be that extreme. It is conceivable that the tokens would not be worth nearly enough to compensate an operator for the ISP connection, much less the overhead involved with updating the software, maintaining the machine, and talking with the ISP -- and in that case we are still on the more conservative side of the social incentive discussions above.

Tor has some choices to make in terms of how to grow the network and how to position the community during that growth process. I hope that this post, and the research presented herein, will at least help the community understand some of the options that are available.

All the best, ~Rob

[Thanks to Roger Dingledine, Bryan Ford, Aaron Johnson, Andrew Miller, and Paul Syverson for input and feedback on this post.]

Tor Blog | The Tor Blog blogs | 2014-07-14 19:37:34

On July 9th, the Grand Chamber of the European Court of Human Rights, its very highest body and one of the top human rights courts in the world, held a hearing on the landmark case of Delfi, an Estonian news outlet held liable for comments posted by users.

Access | Access Blog | 2014-07-11 19:44:47

The U.S. passed an intelligence budget authorization law on Monday, continuing the troubling practice of keeping the intelligence budget classified, while offering modest transparency reform and whistleblower protections.

Access | Access Blog | 2014-07-11 13:44:57

From the 10th to the 13th of June, the World Summit on the Information Society (WSIS) held the capstone event of its ten year review, the WSIS+10 High Level Event. Two important documents were endorsed at this event, a statement on the implementation of the WSIS Outcomes enshrined in the 2005 Tunis Agenda, and a second document articulating a vision beyond 2015.

Access | Access Blog | 2014-07-10 13:58:22

Welcome to the twenty-seventh issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

On being targeted by the NSA

Das Erste has published an article and supporting material showing how the NSA explicitly targets Tor and Tails users through the XKEYSCORE Deep Packet Inspection system. Several other media outlets picked up the news, and it was also discussed in various threads on the tor-talk mailing list (1, 2, 3, 4, 5, 6, 7).

The Tor Project’s view has been reposted on the blog. To a comment that said “I felt like i am caught in the middle of a two gigantic rocks colliding each other”, Roger Dingledine replied: “You’re one of the millions of people every day who use Tor. And because of the diversity of users […], just because they know you use Tor doesn’t mean they know why you use Tor, or what you do with it. That’s still way better than letting them watch all of your interactions with all websites on the Internet.”

More monthly status reports for June 2014

The wave of regular monthly reports from Tor project members for the month of June continued, with submissions from Georg Koppen, Lunar, Noel David Torres Taño, Matt Pagan, Colin C., Arlo Breault, and George Kadianakis.

Mike Perry reported on behalf of the Tor Browser team.

Miscellaneous news

An Austrian Tor exit node operator interpreted their conviction in a first ruling as judging them “guilty of complicity, because he enabled others to transmit content of an illegal nature through the service”. Moritz Bartl from Torservers.net commented: “We strongly believe that it can be easily challenged. […] We will definitely try and find some legal expert in Austria and see what we can do to fight this.”

Linus Nordberg is expanding the idea of public, append-only, untrusted log à la Certificate Transparency to the Tor consensus. Linus submitted a new draft proposal to the tor-dev mailing list for reviews.

Miguel Freitas reported that twister — a fully decentralized P2P microblogging platform — was now able to run over Tor. As Miguel wrote, “running twister on top of Tor was a long time goal, […] the Tor support allows a far more interesting threat model”.

Google Summer of Code students have sent a new round of reports after the mid-term: Israel Leiva on the GetTor revamp, Amogh Pradeep on Orbot and Orfox improvements, Mikhail Belous on the multicore tor daemon, Daniel Martí on incremental updates to consensus documents, Sreenatha Bhatlapenumarthi on the Tor Weather rewrite, Quinn Jarrell on the pluggable transport combiner, Noah Rahman on Stegotorus enhancements, Marc Juarez on website fingerprinting defenses, Juha Nurmi on the ahmia.fi project, and Zack Mullaly on the HTTPS Everywhere secure ruleset update mechanism.

sajolida, tchou and Giorgio Maone from NoScript drafted a specification for a Firefox extension to download and verify Tails.

Tor help desk roundup

One way to volunteer for Tor is to run a mirror of the Tor Project website. Instructions are available for anyone wanting to run a mirror. Mirrors are useful for those who, for one reason or another, cannot access or use the main Tor Project website. Volunteers who have successfully set up a synced mirror can report their mirror to the tor-mirrors mailing list to get it included in the full mirrors list.

Easy development tasks to get involved with

ooniprobe is a tool for conducting network measurements that are useful for detecting network interference. When ooniprobe starts it should perform checks to verify that the config file is correct. If that is not the case, it should fail gracefully at startup. The ticket indicates where this check should be added to the ooniprobe codebase. If you’d like to do some easy Python hacking, be sure to give this ticket a try.
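The shape of such a startup check might look like the following. This is not actual ooniprobe code; the required keys, messages, and function names are invented purely to illustrate "validate the config, then fail gracefully":

```python
# Hypothetical sketch of a config check at startup that fails
# gracefully with readable messages instead of crashing mid-run.
import sys

REQUIRED_KEYS = {"collector", "test_deck"}

def validate_config(config: dict) -> list:
    """Return a list of human-readable problems; empty if config is fine."""
    return ["missing required option: " + k
            for k in sorted(REQUIRED_KEYS - config.keys())]

def startup(config: dict) -> None:
    problems = validate_config(config)
    if problems:
        sys.exit("invalid config: " + "; ".join(problems))

assert validate_config({"collector": "x", "test_deck": "y"}) == []
assert validate_config({}) == ["missing required option: collector",
                               "missing required option: test_deck"]
```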

This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, and Karsten Loesing.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-07-09 12:00:00

As quoted in the original article on Das Erste:

We've been thinking of state surveillance for years because of our work in places where journalists are threatened. Tor's anonymity is based on distributed trust, so observing traffic at one place in the Tor network, even a directory authority, isn't enough to break it. Tor has gone mainstream in the past few years, and its wide diversity of users -- from civic-minded individuals and ordinary consumers to activists, law enforcement, and companies -- is part of its security. Just learning that somebody visited the Tor or Tails website doesn't tell you whether that person is a journalist source, someone concerned that her Internet Service Provider will learn about her health conditions, or just someone irked that cat videos are blocked in her location.

Trying to make a list of Tor's millions of daily users certainly counts as widescale collection. Their attack on the bridge address distribution service shows their "collect all the things" mentality -- it's worth emphasizing that we designed bridges for users in countries like China and Iran, and here we are finding out about attacks by our own country. Does reading the contents of those mails violate the wiretap act? Now I understand how the Google engineers felt when they learned about the attacks on their infrastructure.

Tor Blog | The Tor Blog blogs | 2014-07-03 23:44:16

Welcome to the twenty-sixth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor Weekly News turns one

The very first issue of Tor Weekly News was released on July 3rd last year. Since then, we have been able to provide you news about the Tor community every week (except one).

Tor Weekly News is a community newsletter, so let’s all appreciate everyone who contributed so far: Andreas Jonsson, bastik, Colin, Damian Johnson, David Fifield, David Stainton, dope457, Georg Koppen, George Kadianakis, harmony, Jacob Appelbaum, Jesse Victors, Johannes Fürmann, Karsten Loesing, Kostas Jakeliūnas, Lunar, luttigdev, malaparte, Matt Pagan, Mike Perry, moskvax, murb, Nick Mathewson, Nicolas Vigier, nicoo, Nima, Paul Feitzinger, Peter Palfrader, Philipp Winter, Phoul, qbi, ra, rey, Roger Dingledine, Sandeep, sqrt2, the Tails developers, velope, whabib, Yawning, and several anonymous contributors.

Join us! The Tor community is always growing and there are always interesting topics to report about!

2014 Summer Tor meeting

Dedicated Tor contributors are having a five day meeting this week in Paris. Expect less online activity while keyboards are put away in favor of unmediated human interactions.

Pictures of post-it-note-based brainstorming sessions can already be seen online, and more minutes should be coming soon.

Unfortunately, due to several factors, there will be no widely open event around meeting this time.

Tails user experience experiments

Tails is experimenting on how to improve its user experience.

u. reported on the first Tails UX experiments session. Five people attended, trying to realize three different missions: “create a new encrypted document of your choice […], and save it to Tails, using persistence”, “find out the number of Tails downloads this month, and pass on this information using GPG via email”, “find one or more images [… and] clean up these files to erase any metadata”.

Some of what has been learned by watching users has already been converted into concrete bugs and enhancement proposals. For the rest, read the detailed and insightful report!

In the meantime, the first dialog window that appears when using Tails — also known as “the greeter” — is being redesigned. A first round of test images is now ready for your feedback.

Monthly status reports for June 2014

While Kevin Dyer sent out his report for May, the wave of regular monthly reports from Tor project members for the month of June has started. Damian Johnson released his report first, followed by reports from Pearl Crescent, Nick Mathewson, Karsten Loesing, and Sherief Alaa.

Lunar reported on behalf of the help desk.

Miscellaneous news

Lunar shared some highlights on a trip to Calafou, near Barcelona, to attend Backbone 409, an event for “projects actively building infrastructures for a free Internet from an anti-capitalist point of view”. Topics under discussion included hosting websites in the face of legal threats; secure operating systems; and the logistics of running a Torservers.net partner organization.

Juha Nurmi submitted a status report for the ahmia.fi Google Summer of Code project.

Nusenu warned users of the Tor Project’s RPM repository that an updated package available in the official Fedora repo will cause their tor to stop working, and set out two ways in which they can solve the problem.

starlight gave an account of their experience running a tor relay using versions of OpenSSL and libevent that had been hardened with AddressSanitizer.

While the fteproxy pluggable transport has been integrated into the Tor Browser, documentation on how to set up bridges was lacking. Colin fixed this by taking the time to document how to set up FTE bridges.

George Kadianakis gave an insightful answer to Rick Huebneron’s questions about the status of the “UpdateBridgesFromAuthority” feature, which should allow bridge users to automatically update the IP address of their bridge when it changes. The feature is currently turned off by default, as several problems currently prevent it from being useful. Have a look at George’s summary if you want to scratch that itch.

Tor help desk roundup

The help desk has been asked about the “ethics” behind Tor. Tor’s technical design decisions are laid out in the various design documents, but to understand the social and cultural motivations for the Tor Project, videos like Roger’s talk at Internet Days, or Jake and Roger’s talks at the Chaos Communications Congress in 2011 and 2013 are good resources.

This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, and Rob Jansen.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-07-02 12:00:00

We now have an official FDroid app repository that is available via three separate methods, to guarantee access to a trusted distribution channel throughout the world! To start with, you must have FDroid installed. Right now, I recommend using the latest test release since it has support for Tor and .onion addresses (earlier versions should work for non-onion addresses):


In order to add this repo to your FDroid config, you can either click directly on these links on your devices and FDroid will recognize them, or you can click on them on your desktop, and you will be presented with a QR Code to scan. Here are your options:

From here on out, our old FDroid repo (https://guardianproject.info/repo) is considered deprecated and will no longer be updated. It will eventually be removed. Update to the new one!

Also, if you missed it before, all of our test builds are also available for testing only via FDroid. Just remember, the builds in the test repo are only debug builds, not fully trusted builds, so use them for testing only.

Automate it all!

This setup has three distribution channels that are all mirrors of a repo that is generated on a fully offline machine. This is only manageable because of lots of new automation features in the fdroidserver tools for building and managing app repos. You can now set up a USB thumb drive as the automatic courier for shuffling the repo from the offline machine to an online machine. The repo is generated, updated, and signed using fdroid update, then those signed files are synced to the USB thumb drive using fdroid server update. Then the online machine syncs the signed files from that USB thumb drive to multiple servers via SSH and Amazon S3 with a single command: fdroid server update. The magic is in setting up the config options and letting the tools do the rest.

New Repo Signing Key

For part of this, I’ve completed the process of generating a new, fully offline fdroid signing key. So that means there is a new signing key for the FDroid repo, and the old repo signing key is being retired.

The fingerprints for this signing key are:

Owner: EMAILADDRESS=root@guardianproject.info, CN=guardianproject.info, O=Guardian Project, OU=FDroid Repo, L=New York, ST=New York, C=US
Issuer: EMAILADDRESS=root@guardianproject.info, CN=guardianproject.info, O=Guardian Project, OU=FDroid Repo, L=New York, ST=New York, C=US
Serial number: a397b4da7ecda034
Valid from: Thu Jun 26 15:39:18 EDT 2014 until: Sun Nov 10 14:39:18 EST 2041
Certificate fingerprints:
 MD5:  8C:BE:60:6F:D7:7E:0D:2D:B8:06:B5:B9:AD:82:F5:5D
 SHA1: 63:9F:F1:76:2B:3E:28:EC:CE:DB:9E:01:7D:93:21:BE:90:89:CD:AD
 SHA256: B7:C2:EE:FD:8D:AC:78:06:AF:67:DF:CD:92:EB:18:12:6B:C0:83:12:A7:F2:D6:F3:86:2E:46:01:3C:7A:61:35
 Signature algorithm name: SHA1withRSA
 Version: 1

Guardian Project | The Guardian Project | 2014-07-01 00:26:39

On Saturday, Xordern released a new post entitled IP Leakage of Mobile Tor Browsers. As the title says, the post documents flaws in mobile browser apps, such as Orweb and Onion Browser, both of which automatically route communication traffic over Tor. While we appreciate the care the author has taken, he does make the mistake of using the term “security” to lump the need for total anonymity together with the needs of anti-censorship, anti-surveillance, circumvention, and local device privacy. We do understand the seriousness of this bug, but at the same time, it is not an issue encountered regularly in the wild.

Here are thoughts on the three specific issues covered:

1) HTML5 Multimedia: This is a known issue which is not present on all Android devices, but is definitely something to be concerned about if you access sites with HTML5 media player content on them. To us, it is a bug in Android, and not in Orweb, since all of the appropriate APIs are called when the browser is configured to proxy. However, it is a problem, and our solution remains to either use the transparent proxying feature of Orbot, or to use the Firefox Privacy configuration we provide here: https://guardianproject.info/apps/firefoxprivacy

2) Downloads leak: This is a new issue and one we are trying to reproduce on our end. If proxied downloads are indeed not working, we will issue a fix shortly. Again, when using Firefox configured in the manner we prescribe, downloads are proxied properly.

3) Unique Headers: The inclusion of a unique HTTP header issue in this list is confusing, because it has nothing to do with IP leakage. We have never claimed that a mobile browser can be 100% anonymous, and defending against full fingerprinting of browsers based on headers is something beyond what we are attempting to do at this point.

At this point, we still recommend Orweb for most people who want a very simple solution for a browser that is proxied through Tor. This will defeat mass traffic surveillance, network censorship, filtering by your mobile operator, work or school, and more. Orweb also keeps little data cached on the local system, and so protects your browser history against physical inspection and analysis of your device. HOWEVER, if you do visit sites that have HTML5 media players in them, then we recommend you do not use Orweb, and again, that you use Firefox with our Privacy-Enhanced Configuration.

If you are truly worried about IP leakage, then you MUST root your phone, and use Orbot’s Transparent Proxying feature. This provides the best defense against leaking of your real IP. Even further, if you require even more assurance than that, you should follow Mike Perry’s Android Hardening Guide, which uses AFWall firewall in combination with Orbot, to block traffic to apps, and even stops Google Play from updating apps without your permission.

Finally, the best news is that we are making great progress on a fully privacy-by-default version of Firefox, under the project named “Orfox”. This is being done in partnership with the Tor Project, as a Google Summer of Code effort, along with the Orweb team. We aim to use as much of the same code that Tor Browser does to harden Firefox in our browser, and are getting close to an alpha release. If you are interested in testing the first prototype build, which addresses the HTML5 and download leak issues, you can find it here: https://guardianproject.info/releases/FennecForTor_GSoC_prototype.apk and track the project here: https://github.com/guardianproject/orfox

Guardian Project | The Guardian Project | 2014-06-30 16:43:51

Welcome to the twenty-fifth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the community around Tor, the “fine-meshed net”.

Tor is out

Tor was released, fixing “a wide variety of remaining issues in the Tor 0.2.5.x release series, including a couple of DoS issues, some performance regressions, a large number of bugs affecting the Linux seccomp2 sandbox code, and various other bugfixes”, in Nick Mathewson’s words. Among the major security improvements is an adjustment to the way Tor decides when to close TLS connections, which “should improve Tor’s resistance against some kinds of traffic analysis, and lower some overhead from needlessly closed connections”.

You can download the source tarball, or install the package by following the instructions for your system. This release is also now available in the Debian and Tor Project repositories.

Debian Wheezy’s tor version to be updated

Following a suggestion by Peter Palfrader, Debian developers are preparing to update the version of tor found in the Debian stable repositories from to. Among the chief motives for doing so is that “about a quarter of the Tor network (just considering the relays, not any clients), is on, presumably because they run Debian stable. If they all upgraded to the 0.2.4.x tree, the network as a whole would become a lot more secure as 0.2.4.x allows clients to use stronger crypto for connections built through these nodes.” Other benefits, including the various measures taken to defend against OpenSSL vulnerabilities discovered earlier this year, make this an attractive proposal.

The update will be shipped in the forthcoming point release (7.6) of Debian Wheezy, on July 12th.

Miscellaneous news

Building on the May release of experimental Tor Browsers hardened with AddressSanitizer (ASan), Georg Koppen announced a new set of experimental Linux builds that include both AddressSanitizer and Undefined Behaviour Sanitizer (UBSan), asking for testing and feedback. See Georg’s message for download and build instructions, as well as a couple of known issues.

Nick Mathewson reminded Tor users, relay operators, and especially hidden service administrators that tor’s 0.2.2 series is no longer supported, and many features will soon stop working entirely; if you are affected, then please upgrade!

Several of Tor’s Google Summer of Code students submitted their regular progress reports: Daniel Martí on the implementation of consensus diffs, Mikhail Belous on the multicore tor daemon, Juha Nurmi on the ahmia.fi project, Zack Mullaly on the HTTPS Everywhere secure ruleset update mechanism, Amogh Pradeep on the Orbot+Orfox project, Sreenatha Bhatlapenumarthi on the Tor Weather rewrite, Marc Juarez on the link-padding pluggable transport development, Israel Leiva on the GetTor revamp, Quinn Jarrell on the pluggable transport combiner, Kostas Jakeliunas on the BridgeDB Twitter Distributor, and Noah Rahman on Stegotorus security enhancement.

Researchers from the Internet Geographies project at the Oxford Internet Institute produced a cartogram of Tor users by country, using archived data freely available from the Tor Project’s own Metrics portal, along with an analysis of the resulting image. “As ever more governments seek to control and censor online activities, users face a choice to either perform their connected activities in ways that adhere to official policies, or to use anonymity to bring about a freer and more open Internet”, they conclude.

Andrew Lewman reported that users with email addresses at Yahoo and AOL have been removed from the tor-relays mailing list, as these addresses have been bouncing list emails.

Thanks to the FoDT.it webteam and Maxanoo for running mirrors of the Tor Project’s website!

fr33tux shared the slides for a French-language presentation on Tor, delivered at Université de technologie Belfort-Montbéliard. The source code (in the LaTeX markup language) is also available: “feel free to borrow whatever you want from it!”

Thanks to Ximin Luo, the server component of Flashproxy is now available in Debian in the “pt-websocket” package.

A couple of weeks ago, Roger Dingledine wondered “how many relays are firewalling certain outbound ports (and thus messing with connectivity inside the Tor network)”. ra has just published the results of a three-week-long test of the interconnectivity between 6730 relays. Contacting the operators of problematic relays is probably the next step for those who wish to keep the network at its best.

George Kadianakis slipped on his storyteller costume to guide us through layers of the Tor core, motivated by the quest for knowledge. That accursed riddle, “Why does Roger have so many guards?”, now has an answer. Be prepared for a “beautiful stalagmite” and the “truly amazing” nature of Tor!

Tor help desk roundup

If the Tor Browser stalls while “loading the network status”, please double-check that the system clock is accurate; the same goes for the timezone and daylight saving time settings. Tor needs an accurate clock in order to prevent several classes of attacks on its protocol. It won’t work properly when the local time does not match the one used by other network participants.

Easy development tasks to get involved with

When the tor daemon is configured to open a SOCKS port on a public address, it warns about this possible configuration problem twice: once when it reads the configuration file, and a second time when it opens the listener. One warning should be enough. We had a friendly volunteer two years ago who sketched out possible fixes and even wrote a patch, but then concluded that his patch had a problem and went away. If you’re up to some digging into tor’s configuration file handling, and want to clean up a two-year-old patch potentially to be included in tor 0.2.6, please find the details in the ticket. It’s tagged as easy, so how hard can it be?
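The fix the ticket asks for amounts to a classic warn-once pattern. Here is an illustrative sketch of the idea in Python (tor itself is written in C, and this is not its actual code): remember which warnings have already been emitted, and suppress repeats of the same one.

```python
# Keys of warnings that have already been emitted once.
_emitted = set()

def warn_once(key, message, log=print):
    """Emit a warning only the first time `key` is seen.

    Returns True if the warning was emitted, False if suppressed.
    """
    if key in _emitted:
        return False
    _emitted.add(key)
    log("Warning: %s" % message)
    return True
```

With this in place, both the config-parsing path and the listener-opening path can call `warn_once("public-socks", ...)` and the user sees the message only once.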

This issue of Tor Weekly News has been assembled by harmony, Lunar, Matt Pagan, Karsten Loesing, and Roger Dingledine.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-06-25 12:00:00


We just released Lil’ Debi 0.4.7 into the Play Store and f-droid.org. It is not really different from the 0.4.6 release, except that it has a new, important property: the APK contents can be reproduced on other machines to the extent that the APK signature can be swapped between the official build and builds that other people have made from source, and the result will still be installable. This is known as a “deterministic build” or “reproducible build”: the build process is deterministic, meaning it runs the same way each time, and that results in an APK that is reproducible by others using only the source code. There are some limitations: for example, it has to be built using similar versions of OpenJDK 1.7 and the other build tools. But this process should work on any recent version of Debian or Ubuntu. Please try the process yourself, and let us know whether or not you can verify it:

The ultimate goal here is to make a process that reproduces the APK exactly, bit-for-bit, so that anyone who runs the process will end up with an APK that has the exact same hash sum. As far as I can tell, the only thing that needs to be fixed in Lil’ Debi’s process is the timestamps in the ZIP format that is the APK container.
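To illustrate why ZIP timestamps break bit-for-bit reproducibility, here is a small Python sketch (not part of Lil’ Debi’s actual build process) that pins every entry to a fixed date and a fixed ordering, so that two runs over the same inputs produce byte-identical archives:

```python
import io
import zipfile

# The earliest timestamp the ZIP format can represent.
FIXED_DATE = (1980, 1, 1, 0, 0, 0)

def make_zip(entries):
    """Build a ZIP in memory with fixed timestamps, ordering, and
    permissions, so the output depends only on the input contents."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(entries):          # deterministic entry order
            info = zipfile.ZipInfo(name, date_time=FIXED_DATE)
            info.external_attr = 0o644 << 16  # fixed Unix permissions
            zf.writestr(info, entries[name])
    return buf.getvalue()
```

Running `make_zip` twice on the same files, even on different machines or days, yields identical bytes, which is exactly the property the normal zip tools lose by recording each file’s modification time.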

There are a number of other parallel efforts. The Tor Project has written a lot about their process for reproducible builds for the Tor Browser Bundle. Debian has made some progress in fixing the package builders to make the process deterministic.

Guardian Project | The Guardian Project | 2014-06-09 20:41:34

The latest Orbot is out! It will appear soon on Google Play, and is available by direct download from the link below:
Android APK: https://guardianproject.info/releases/orbot-latest.apk
(PGP Sig)

The major improvements for this release are:

  • Uses the latest Tor stable version
  • Fix for recent OpenSSL vulnerabilities
  • Addition of Obfuscated Bridges 3 (Obfs3) support
  • Switch from Privoxy to Polipo (semi-experimental)

and much more… see the CHANGELOG link below for all the details.

The tag commit message was “updating to 14.0.0 build 100!”

and the full CHANGELOG is here: https://gitweb.torproject.org/orbot.git/blob_plain/81bd61764c2c300bd1ba1e4de5b03350455470c1:/CHANGELOG

Guardian Project | The Guardian Project | 2014-06-08 03:45:17

One thing we are very lucky to have is a good community of people willing to test out unfinished builds of our software. That is a very valuable contribution to the process of developing usable, secure apps. So we want to make this process as easy as possible while keeping it as secure and private as possible. To that end, we have set up an FDroid repository of apps generated from the test builds that our build server generates automatically every time we publish new code.

After this big burst of development focused on FDroid, it has become clear that FDroid has lots of promise for becoming a complete solution for the whole process of delivering software from developers to users. We have tried other ways of delivering test builds like HockeyApp and Google Play’s Alpha and Beta channels and have found them lacking. The process did not seem as easy as it should be. And of course, both of them leave a lot to be desired when it comes to privacy of the users. So this is the first step in hopefully a much bigger project.

To use our new test build service, first install FDroid by downloading it from the official source: https://f-droid.org. Then using a QR Code scanner like Barcode Scanner, just scan the QR Code below, and send it to FDroid Repositories. You can also browse to this page on your Android device, and click the link below to add it to FDroid:


You can also use our test repo via an anonymized connection using the Tor Hidden Service (as of this moment, that means downloading an official FDroid v0.69 test build). Just get Orbot and turn it on, and the following .onion address will automatically work in FDroid, as long as you have a new enough version (0.69 or later).


Guardian Project | The Guardian Project | 2014-06-06 21:17:01

We’re making the Internet more secure, by taking part in #ResetTheNet https://resetthenet.org

Guardian Project | The Guardian Project | 2014-06-04 23:07:14

FreedomBox version 0.2

For those of you who have not heard through the mailing list or in the project's IRC channel (#freedombox on http://www.oftc.net/), FreedomBox has reached the 0.2 release. This second release is still intended for developers but represents a significant maturation of the components we have discussed here in the past and a big step forward for the project as a whole.

0.2 features

Plinth, our user interface tool, is now connected to a number of running systems on the box including PageKite, an XMPP chat server, local network administration if you want to use the FreedomBox as a home router, and some diagnostic and general system configuration tools. Plinth also has support for downloading and installing ownCloud.

Additionally, the 0.2 release installs Tor and configures it as a bridge. This default configuration does not actually send any of your traffic through Tor or allow those sending traffic over Tor to enter the public net using your connection. Acting as a bridge simply moves data around within the Tor network, much like adding an additional participant to a game of telephone. The more bridges there are in the Tor network, the harder it is to track where that traffic actually comes from.

Availability and reach

As discussed previously, one of the ways we are working to improve privacy and security for computer users is by making the tools we include in FreedomBox available outside of particular FreedomBox images or hardware. We are working towards that goal by adding the software we use to the Debian community Linux distribution upon which the FreedomBox is built. I am happy to say that Plinth, PageKite, ownCloud, as well as our internal box configuration tool freedombox-setup are now all available in the Jessie version of Debian.

In addition to expanding the list of tools available in Debian we have also expanded the range of Freedom-maker, the tool that builds full images of FreedomBox to deploy directly onto machines like our initial hardware target, the DreamPlug. Freedom-maker can now build images for the DreamPlug, the VirtualBox blend of virtual machines, and the Raspberry Pi. Now developers can test and contribute to FreedomBox using anything from a virtual machine to one of the more than two million Raspberry Pis out there in the world.

The future

Work has really been speeding up on the FreedomBox in 2014 and significant work has been done on new cryptographic security tools for a 0.3 release. As always, the best places to find out more are the wiki, the mailing list and the IRC channel.

FreedomBox | news | 2014-05-12 21:07:40

Hardware Security Modules (aka Smartcards, chipcards, etc) provide a secure way to store and use cryptographic keys, while actually making the whole process a bit easier. In theory, one USB thumb drive like thing could manage all of the crypto keys you use in a way that makes them much harder to steal. That is the promise. The reality is that the world of Hardware Security Modules (HSMs) is a massive, scary minefield of endless technical gotchas, byzantine standards (PKCS#11!), technobabble, and incompatibilities. Before I dive too much into ranting about the days of my life wasted trying to find a clear path through this minefield, I’m going to tell you about one path I did find through to solve a key piece of the puzzle: Android and Java package signing.

For this round, I am covering the Aventra MyEID PKI Card. I bought a SIM-sized version to fit into an ACS ACR38T-IBS-R smartcard reader (it is apparently no longer made, and the ACT38T-D1 is meant to replace it). Why such specificity, you may ask? Because you have to be sure that your smartcard will work with your reader, that your reader will have a working driver for your system, and that your smartcard will have a working PKCS#11 driver so that software can talk to the smartcard. Thankfully there is the OpenSC project to cover the PKCS#11 part; it implements the PKCS#11 communications standard for many smartcards. On my Ubuntu/precise system, I had to install an extra driver, libacr38u, to get the ACR38T reader to show up on my system.

So let’s start there and get this thing to show up! First we need some packages. The OpenSC packages are out-of-date in a lot of releases; you need version 0.13.0-4 or newer, so you have to add our PPA (Personal Package Archive) to get current versions, which include a specific fix for the Aventra MyEID (fingerprint: F50E ADDD 2234 F563):

sudo add-apt-repository ppa:guardianproject/ppa
sudo apt-get update
sudo apt-get install opensc libacr38u libacsccid1 pcsc-tools usbutils

First thing, I use lsusb in the terminal to see what USB devices the Linux kernel sees, and thankfully it sees my reader:

$ lsusb
Bus 005 Device 013: ID 072f:9000 Advanced Card Systems, Ltd ACR38 AC1038-based Smart Card Reader

Next, it’s time to try pcsc_scan to see if the system can see the smartcard installed in the reader. If everything is installed and in order, then pcsc_scan will report this:

$ pcsc_scan 
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau 
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
0: ACS ACR38U 00 00

Thu Mar 27 14:38:36 2014
Reader 0: ACS ACR38U 00 00
  Card state: Card inserted, 
  ATR: 3B F5 18 00 00 81 31 FE 45 4D 79 45 49 44 9A

If pcsc_scan cannot see the card, then things will not work. Try re-seating the smartcard in the reader, make sure you have all the right packages installed, and check that you can see the reader in lsusb. If your smartcard or reader cannot be read, then pcsc_scan will report something like this:

$ pcsc_scan 
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau 
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
Waiting for the first reader...

Moving right along… now pcscd can see the smartcard, so we can start using the OpenSC tools. These are needed to set up the card, put PINs on it for access control, and upload keys and certificates to it. The last annoying little preparation tasks are finding where opensc-pkcs11.so is installed and the “slot” for the signing key in the card. These will go into a config file which keytool and jarsigner need. To get this info on Debian/Ubuntu/etc., run these:

$ dpkg -S opensc-pkcs11.so
opensc: /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
$ pkcs11-tool --module /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so \
>     --list-slots
Available slots:
Slot 0 (0xffffffffffffffff): Virtual hotplug slot
Slot 1 (0x1): ACS ACR38U 00 00
  token label        : MyEID (signing)
  token manufacturer : Aventra Ltd.
  token model        : PKCS#15
  token flags        : rng, login required, PIN initialized, token initialized
  hardware version   : 0.0
  firmware version   : 0.0
  serial num         : 0106004065952228

This is the info needed to put into an opensc-java.cfg file, which keytool and jarsigner need in order to talk to the Aventra HSM. The name, library, and slot fields are essential, and the description is helpful. Here is how opensc-java.cfg looks using the above information:

name = OpenSC
description = SunPKCS11 w/ OpenSC Smart card Framework
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
slot = 1

Now everything should be ready for initializing the HSM, generating a new key, and uploading that key to the HSM. This process generates the key and certificate, puts them into files, then uploads them to the HSM. That means you should only run this process on a trusted machine, certainly with some kind of disk encryption, and preferably on a machine that is not connected to a network, running an OS that has never been connected to the internet. A live CD is one good example, I recommend Tails on a USB thumb drive running with the secure persistent store on it (we have been working here and there on making a TAILS-based distro specifically for managing keys, we call it CleanRoom).

HSM plugged into a laptop

HSM plugged into a laptop

First off, the HSM needs to be initialized, then set up with a signing PIN and a “Security Officer” PIN (which means basically an “admin” or “root” PIN). The signing PIN is the one you will use for signing APKs; the “Security Officer PIN” (SO-PIN) is used for modifying the HSM setup, like uploading new keys, etc. Because there are so many steps in the process, I’ve written up scripts to run through all of the steps. If you want to see the details, read the scripts. The next step is to generate the key using openssl and upload it to the HSM. Then the HSM needs to be “finalized”, which means the PINs are activated and keys can no longer be uploaded. Don’t worry: as long as you have the SO-PIN, you can erase the HSM and re-initialize it. But be careful! Many HSMs will permanently self-destruct if you enter the wrong PIN too many times, some after only three wrong PINs! As long as you have not finalized the HSM, any PIN will work, so play around a lot with it before finalizing it. Run the init and key upload procedure a few times, try signing an APK, etc. Take note: the script will generate a random password for the secret files, then echo that password when it completes, so make sure no one can see your screen when you generate the real key. Alright, here goes!

$ git clone https://github.com/guardianproject/smartcard-apk-signing
$ cd smartcard-apk-signing/Aventra_MyEID_Setup
Aventra_MyEID_Setup $ ./setup.sh 
Edit pkcs15-init-options-file-pins to put in the PINs you want to set:
Aventra_MyEID_Setup $ emacs pkcs15-init-options-file-pins
Aventra_MyEID_Setup $ ./setup.sh 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to erase card.
PIN [Security Officer PIN] required.
Please enter PIN [Security Officer PIN]: 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to create PKCS #15 meta structure.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to generate key.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to generate key.
next generate a key with ./gen.sh then ./finalize.sh
Aventra_MyEID_Setup $ cd ../openssl-gen/
openssl-gen $ ./gen.sh 
Usage: ./gen.sh "CertDName" [4096]
  for example:
  "/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info"
openssl-gen $ ./gen.sh "/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info"
Generating key, be patient...
2048 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
e is 65537 (0x10001)
Signature ok
subject=/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info
Getting Private key
writing RSA key
Your HSM will prompt you for 'Security Officer' aka admin PIN, wait for it!
Enter destination keystore password:  
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
[Storing keystore]
Key fingerprints for reference:
MD5 Fingerprint=90:24:68:F3:F3:22:7D:13:8C:81:11:C3:A4:B6:9A:2F
SHA1 Fingerprint=3D:9D:01:C9:28:BD:1F:F4:10:80:FC:02:95:51:39:F4:7D:E7:A9:B1
SHA256 Fingerprint=C6:3A:ED:1A:C7:9D:37:C7:B0:47:44:72:AC:6E:FA:6C:3A:B2:B1:1A:76:7A:4F:42:CF:36:0F:A5:49:6E:3C:50
The public files are: certificate.pem publickey.pem request.pem
The secret files are: secretkey.pem certificate.p12 certificate.jkr
The passphrase for the secret files is: fTQ*he-[:y+69RS+W&+!*0O5i%n
openssl-gen $ cd ../Aventra_MyEID_Setup/
Aventra_MyEID_Setup $ ./finalize.sh 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to delete object(s).
Your HSM is ready for use! Put the secret key files someplace encrypted and safe!

Now your HSM should be ready for use for signing. You can try it out with keytool to see what is on it, using the signing PIN, not the Security Officer PIN:

smartcard-apk-signing $ /usr/bin/keytool -v \
>     -providerClass sun.security.pkcs11.SunPKCS11 \
>     -providerArg opensc-java.cfg \
>     -providerName SunPKCS11-OpenSC -keystore NONE -storetype PKCS11 \
>     -list
Enter keystore password:  

Keystore type: PKCS11
Keystore provider: SunPKCS11-OpenSC

Your keystore contains 1 entry

Alias name: 1
Entry type: PrivateKeyEntry
Certificate chain length: 1
Owner: EMAILADDRESS=test@guardianproject.info, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Issuer: EMAILADDRESS=test@guardianproject.info, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Serial number: aa6887be1ec84bde
Valid from: Fri Mar 28 16:41:26 EDT 2014 until: Mon Aug 12 16:41:26 EDT 2041
Certificate fingerprints:
	 MD5:  90:24:68:F3:F3:22:7D:13:8C:81:11:C3:A4:B6:9A:2F
	 SHA1: 3D:9D:01:C9:28:BD:1F:F4:10:80:FC:02:95:51:39:F4:7D:E7:A9:B1
	 SHA256: C6:3A:ED:1A:C7:9D:37:C7:B0:47:44:72:AC:6E:FA:6C:3A:B2:B1:1A:76:7A:4F:42:CF:36:0F:A5:49:6E:3C:50
	 Signature algorithm name: SHA1withRSA
	 Version: 1


And let’s try signing an actual APK using the arguments that Google recommends, again using the signing PIN:

smartcard-apk-signing $ /usr/bin/jarsigner -verbose \
>     -providerClass sun.security.pkcs11.SunPKCS11 \
>     -providerArg opensc-java.cfg -providerName SunPKCS11-OpenSC \
>     -keystore NONE -storetype PKCS11 \
>     -sigalg SHA1withRSA -digestalg SHA1 \
>     bin/LilDebi-release-unsigned.apk 1
Enter Passphrase for keystore: 
   adding: META-INF/1.SF
   adding: META-INF/1.RSA
  signing: assets/busybox
  signing: assets/complete-debian-setup.sh
  signing: assets/configure-downloaded-image.sh
  signing: assets/create-debian-setup.sh
  signing: assets/debian-archive-keyring.gpg
  signing: assets/debootstrap.tar.bz2
  signing: assets/e2fsck.static
  signing: assets/gpgv
  signing: assets/lildebi-common

Now we have a working, if elaborate, process for setting up a Hardware Security Module for signing APKs. Once the HSM is set up, using it should be quite straightforward. The next steps are to work out as many kinks in this process as possible so that this can become the default way to sign APKs. That means things like figuring out how Java can be pre-configured to use OpenSC in the Debian package, as well as including all relevant fixes in the pcscd and opensc packages. The ultimate goal is to add support for HSMs in the build files that Android generates, like the build.xml for ant produced by android update project. Then people could just plug in the HSM, run ant release, and have a signed APK!
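
As a sketch of that pre-configuration: the opensc-java.cfg file passed to -providerArg in the keytool and jarsigner commands above is a standard SunPKCS11 provider config. Assuming OpenSC’s PKCS#11 module is installed at the usual Debian path (adjust the library path for your system), it would look something like:

name = OpenSC
description = OpenSC PKCS#11 module via SunPKCS11
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so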

Guardian Project | The Guardian Project | 2014-03-28 20:54:39

An interesting turn of events (which we are very grateful for!)


Diana Del Olmo, diana@guardianproject.info
Nathan Freitas (in Austin / SXSW) +1.718.569.7272

Get press kit and more at: https://guardianproject.info/press



The Guardian Project is among the ten grantee organizations chosen to receive a $100,000 New Digital Age grant for its extensive work creating open source software to help citizens overcome government-sponsored censorship.

image courtesy of telegraph.co.uk

NEW YORK, NY (March 10, 2014)—Ten non-profits in the U.S. and abroad
have been named recipients of New Digital Age Grants, funded through a
$1 million donation by Google executive chairman Eric Schmidt. The
Guardian Project is one of two New York City-based groups receiving an award.

The New Digital Age Grants were established to highlight organizations
that use technology to counter the global challenges Schmidt and
Google Ideas Director Jared Cohen write about in their book THE NEW
DIGITAL AGE, including government-sponsored censorship, disaster
relief and crime fighting. The book was released in paperback on March 4.

“The recipients chosen for the New Digital Age Grants are doing some
very innovative and unique work, and I’m proud to offer them this
encouragement,” said Schmidt. “Five billion people will encounter the
Internet for the first time in the next decade. With this surge in the
use of technology around the world—much of which we in the West take
for granted—I felt it was important to encourage organizations that
are using it to solve some of our most pressing problems.”

Guardian Project founder, Nathan Freitas, created the project based on
his first-hand experience working with Tibetan human rights and
independence activists for over ten years. Today, March 10th, is the
55th anniversary of Tibetan Uprising Day, the 1959 uprising against Chinese
occupation. “I have seen first hand the toll that online censorship,
mobile surveillance and digital persecution can take on a culture,
people and movement,” said Freitas. “I am elated to know Mr. Schmidt
supports our effort to fight back against these unjust global trends
through the development of free, open-source mobile security
software.”
Many of the NDA grantees, such as Aspiration, Citizen Lab and OTI,
already work with the Guardian Project on defending digital rights,
training high-risk user groups and doing core research and development
of anti-censorship and surveillance defense tools and training.

The New Digital Age Grants are being funded through a private donation
by Eric and Wendy Schmidt.

About the Guardian Project

The Guardian Project is a global collective of software developers
(hackers!), designers, advocates, activists and trainers who develop
open source mobile security software and operating system
enhancements. They also create customized mobile devices to help
individuals communicate more freely and protect themselves from
intrusion and monitoring. The effort specifically focuses on users who
live or work in high-risk situations, and who often face constant
surveillance and intrusion attempts into their mobile devices and
communication streams.

Since it was founded in 2009, the Guardian Project has developed more
than a dozen mobile apps for Android and iOS with over two million
downloads and hundreds of thousands of active users. In the last five
years the Guardian Project has partnered with prominent open source
software projects, activist groups, NGOs, commercial partners and
news organizations to support their mobile security software
capabilities. This work has been made possible with funding from
Google, UC Berkeley with the MacArthur Foundation, Avaaz, Internews,
Open Technology Fund, WITNESS, the Knight Foundation, Benetech, and
Free Press Unlimited. Through work on partner projects like The Tor
Project, Commotion mesh and StoryMaker, we have received indirect
funding from both the US State Department through the Bureau of
Democracy, Human Rights and Labor Internet Freedom program, and the
Dutch Ministry of Foreign Affairs through HIVOS.

The Guardian Project is very grateful for this personal donation and
is happy to have its work recognized by Mr. Schmidt. This grant will
allow us to continue our work on ensuring users around the world have
access to secure, open and trustworthy mobile messaging services. We
will continue to improve the reliability and security of ChatSecure for
Android and iOS and integrate the OStel voice and video calling
services into the app for a complete secure communications solution.
We will support the work of the new I.M.AWESOME (Instant Messaging
Always Secure Messaging) Coalition focused on open-standards,
decentralized secure mobile messaging, and voice and video
communications. Last, but not least, we will improve device testing,
support and outreach to global human rights defenders, activists and
journalists, bringing the technology that the Guardian Project has
developed to the people that need it most.

About the NDA Recipients

Aspiration in San Francisco, CA, provides deep mentorship to build
tech capacity supporting Africa, Asia and beyond. Their NDA grant will
grow their capacity-building programs for the Global South, increasing
technical capacity to meet local challenges.

C4ADS, a nonprofit research team in Washington, DC, is at the cutting
edge of unmasking Somali pirate networks, Russian arms-smuggling
rings, and other illicit actors entirely through public records. Their
data-driven approach and reliance on public documents has enormous
potential impact, and the grant will help with their next big project.

The Citizen Integration Center in Monterrey, Mexico has developed an
innovative public safety broadcast and tipline system on social media.
Users help their neighbors—and the city—by posting incidents and
receiving alerts when violence is occurring in their communities. The
grant will help them broaden their reach.

The Citizen Lab at the Munk School of Global Affairs at the University
of Toronto, Canada, is a leading interdisciplinary laboratory
researching and exposing censorship and surveillance. The grant will
support their technical reconnaissance and analysis, which uniquely
combines experts and techniques from computer science and the social sciences.

The Guardian Project, based in New York City, develops open-source
secure communication tools for mobile devices. ChatSecure and OStel,
their encrypted messaging, voice and video communication services,
both built on open standards, have
earned the trust of tens of thousands of users in
repressively-censored environments, and the grant will advance their
technical development.

The Igarapé Institute in Rio de Janeiro, Brazil, focuses on violence
prevention and reduction through technology. Their nonprofit work on
anti-crime projects combines the thoughtfulness of a think tank with
the innovative experimentation of a technology design shop. The grant
will support their research and development work.

KoBo Toolbox in Cambridge, MA, allows fieldworkers in far-flung
conflict and disaster zones to easily gather information without
active Internet connections. The grant will help them revamp their
platform to make it easier and faster to deploy.

The New Media Advocacy Project in New York, NY, is a nonprofit
organization developing mobile tools to map violence and
disappearances in challenging environments. The grant will allow them
to refine their novel, interactive, video-based interfaces.

The Open Technology Institute at the New America Foundation in
Washington, DC, advances open architectures and open-source
innovations for a free and open Internet. The grant will assist their
work with the Measurement Lab project to objectively measure and
report Internet interference from repressive governments.

Portland State University in Portland, OR, is leading ground-breaking
research on network traffic obfuscation techniques, which improve
Internet accessibility for residents of repressively-censored
environments. The grant will support the research of Professor Tom
Shrimpton and his lab, who—with partners at the University of
Wisconsin and beyond—continue to push the boundaries with new
techniques like Format Transforming Encryption.

Guardian Project | The Guardian Project | 2014-03-10 16:22:34

The HTTPS protocol is based on TLS and SSL, which are standard ways to negotiate encrypted connections. There is a lot of complexity in the protocols and lots of config options, but luckily most of the options can be ignored since the defaults are fine. There are some things worth tweaking, though, to ensure that as many connections as possible use reliable encryption ciphers while providing forward secrecy. A connection with forward secrecy protects past transactions even if the server’s HTTPS private key/certificate is later stolen or compromised. This protects your users from large-scale network observers that can store all traffic for later decryption, like governments, ISPs, telecoms, etc. From the server operator’s point of view, it means less risk of leaking users’ data: even if the server is compromised, past network traffic will probably not be decryptable.

In my situation, I was using our development site, https://dev.guardianproject.info, as my test bed; it runs Apache 2.2 and OpenSSL 1.0.1 on Ubuntu 12.04 LTS (precise), so some of the options are more limited since this is an older release. On Debian, Ubuntu and other Debian derivatives, you’ll only need to edit /etc/apache2/mods-available/ssl.conf. There are more paranoid resources for perfecting your TLS configuration, but we’re not ready to drop support for old browsers that only support SSLv3 and not TLS at all. So I went with this line to enable SSLv3 and TLSv1.0 and newer:

SSLProtocol all -SSLv2

With TLS connections, the client and the server each present a list of encryption ciphers representing what they support, in order of preference. This enables the client and server to choose a cipher that both support. Normally, the client’s list takes precedence over the server’s, but on most servers that can be changed. Unfortunately, it seems that Microsoft Internet Explorer (IE) ignores this and always uses the client’s preference first. Here’s how to make Apache prefer the server’s cipher order:

SSLHonorCipherOrder on

Next up is tweaking the server’s preference list to put ciphers that enable forward secrecy first (don’t worry if you don’t understand the rationale that follows; my aim is to walk through the process). This is done in most web servers using openssl-style cipher lists. I started out with what Mozilla recommends, then pared down the list to remove AES-256 ciphers, since AES-128 is widely regarded as faster, quite strong, and perhaps more resistant to timing attacks than AES-256. I also chose to remove RC4-based ciphers, since RC4 might already be broken and will only get worse with time. RC4 has historically been used to mitigate the “BEAST” attack, but that attack is now mostly mitigated in clients. So with that I ended up with this cipher list (it should be all one line in your config file):
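
An illustrative SSLCipherSuite line consistent with these constraints (forward-secrecy ECDHE/DHE suites first, AES-128 only, RC4 and other weak ciphers excluded); the exact list from the original post was not preserved here, so treat this as a sketch and verify it against your own OpenSSL build:

SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5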


One thing to make sure of is that all of these ciphers are supported on your system. You can get the list of supported ciphers from openssl ciphers. I used this command line to get them in a nice, alphabetized list:

openssl ciphers | sed 's,:,\n,g' | sort

Lastly, we want to set the HSTS header to tell the browser to always use HTTPS. To enforce this, a header is added to the collection of HTTP headers delivered when connecting to the HTTPS site. It tells the client browser to always connect to the current domain using HTTPS, and it includes an expiration date (aka max-age) after which the client browser will again allow plain HTTP connections to that domain. The server can then redirect the HTTP connection to HTTPS again, the client will receive the HSTS header again, and it will use only HTTPS until the expiration date comes around again. To include this header in your Apache server, add this line:

Header add Strict-Transport-Security "max-age=15768000;includeSubDomains"
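
Since HSTS only takes effect once the browser has seen the header over HTTPS, the plain-HTTP side of the site should redirect to HTTPS. A minimal Apache sketch for the site used in this example (adjust ServerName for your own site):

<VirtualHost *:80>
    ServerName dev.guardianproject.info
    Redirect permanent / https://dev.guardianproject.info/
</VirtualHost>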

Now you can check the results of your work with Qualys’ handy SSL Test. You can see the result of my efforts here: https://www.ssllabs.com/ssltest/analyze.html?d=dev.guardianproject.info. A- is not bad. I tried for a good long while to get IE to use FS (forward secrecy) ciphers, but failed: IE does not respect the server-side cipher preferences. My guess is that the only way to get IE to use FS ciphers is to make a custom cipher list that includes nothing but FS ciphers and serve it only to IE. I know it is possible, because bitbucket.com got an A+ for doing it. For a quick way to check cipher lists and the HSTS header, look at iSEC Partners’ sslyze.

This is only a quick overview of the process to outline the general concepts. To find out more I recommend reading the source articles for this post, including specific directions for nginx and lighttpd:

Guardian Project | The Guardian Project | 2014-02-13 00:14:59

Activity1 sending an Intent that either Activity2 or Activity3 can handle.

Android provides a flexible system of messaging between apps in the form of Intents. It also provides a framework for reusing large chunks of apps based on the Activity class. Intents are the messages that make the requests, and Activitys are the basic chunks of functionality in an app, including its interface. This combination allows apps to reuse large pieces of functionality while keeping the user experience seamless and fluent. For example, an app can send an Intent to request a camera Activity to prompt the user to take a picture, and that process can feel integrated into the app that made the request. Another common use of this paradigm is choosing account information from the contacts database (aka the People app). When a user is composing a new email, they will want to select who the message gets sent to. Android provides both the contacts database and a nice overlay screen for finding and selecting the person to send to. That combination is an Activity provided by Android; the message that the email program sends in order to trigger that Activity is an Intent.

As usual, one of the downsides of flexibility is increased security risk. This is compounded in the Android system by rules that will automatically export an Activity to receive Intents from any app, when certain conditions are met. If an Activity is exported for any app to call, it is possible for apps to send malicious Intents to that Activity. Many Intents are meant to be public and others are exported as a side effect. Either way, at the very least, it is necessary to sanitize the input that an Activity receives. On the other side of the issue, if an app is trusting another app to provide a sensitive service for it, then malware can pose as the trusted app and receive sensitive data from the trusting app. An app does not need to request any permissions in order to set itself up as a receiver of Intents.

Activity/Service hijacking: watch out for the little devil in the system

Android, of course, does provide some added protections for cases like this. For very sensitive situations, an Activity can be set up to only receive Intents from apps that meet certain criteria. Android permissions can restrict other apps from sending Intents to any given exported Activity. If a separate app wants to send an Intent to an Activity that has been protected with a permission, then that app must include the permission in its manifest, thereby publishing that it uses it. This provides a good way to publish an API gated by a permission while leaving it relatively open for other apps to use. Other kinds of controls can be based on two aspects of an app that the Android system enforces to remain the same: the package name and the signing key. If either of those changes, then Android considers it a different app altogether. The strictest control is handled by the “protection level”, which can be set to only allow the system, or apps signed by the same key, to send Intents to a given Activity. These security tools are useful in many situations, but they leave lots of privacy-oriented use cases uncovered.
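
To make that concrete, here is a hypothetical manifest sketch (the permission and component names are invented for illustration): the providing app declares a permission and guards its exported Activity with it, and any caller must list the permission in its own manifest, publishing that it uses it.

<!-- In the providing app's AndroidManifest.xml: declare the permission.
     protectionLevel="normal" keeps the API open to any app that declares it;
     "signature" is the strictest level, limiting it to apps signed with
     the same key. -->
<permission android:name="info.example.permission.USE_CRYPTO"
            android:protectionLevel="normal" />

<activity android:name=".EncryptActivity"
          android:exported="true"
          android:permission="info.example.permission.USE_CRYPTO" />

<!-- In a calling app's AndroidManifest.xml: request the permission. -->
<uses-permission android:name="info.example.permission.USE_CRYPTO" />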

There are some situations that need more flexibility without opening things up entirely. The first simple example is provided by our app Pixelknot: it needs to send pictures through services that will not mess up the hidden data in the images. It has a trusted list of apps it will send to, based on apps that have proven to pass the images through unchanged. When the user goes to share the image from Pixelknot to a cloud storage app, the user is prompted to choose from a list of installed apps that match the whitelist in Pixelknot. We could have implemented a permission and asked lots of app providers to implement it, but it seems a mammoth task to get lots of large companies like Dropbox and Google to include our specific permission.

There are other situations that require even tighter restrictions than are available. The first example here comes from our OpenPGP app for Android, Gnu Privacy Guard (GPG), which provides cryptographic services to any app that requests them. When an app sends data to GPG to be encrypted, it needs to be sure that the data is actually going to GPG and not to some malware. For very sensitive situations, the Android-provided package name and signing key might not be enough to ensure that the correct app is receiving the unencrypted data. Many Android devices are still unpatched against the master key bugs, and people using Android in China, Iran, etc., where the Play Store is not allowed, don’t get the exploit scanning provided by Google. Telecoms around the world have proved to be bad at updating the software for the devices they sell, leaving many security problems unfixed. Alternative Android app stores are a very popular way to get apps, and so far the ones we have seen provide minimal security and no malware scanning. In China, where Android is very popular, this represents a lot of Android users.

Another potential use case revolves around a media reporting app that relies on other apps to provide images and video as part of regular reports. This could be something like a citizen journalism editing app or a human rights reporting app. The Guardian Project develops a handful of apps designed to create media in these situations: ObscuraCam, InformaCam, and a new secure camera app in the works that we are contributing to. We want InformaCam to work as a provider of verifiable media to any app. It generates a package of data that includes a cryptographic signature so that its authenticity can be verified. That means the apps that transport the InformaCam data do not need to be trusted in order to guarantee the integrity of the uploaded data. Therefore it does not make sense in this case for InformaCam to grant itself permissions to access other apps’ secured Activitys; that would add to the maintenance load of the app without furthering the goals of the InformaCam project. Luckily there are other ways to address that need.

The inverse of this situation is not true. The reporting app that gathers media and sends it to trusted destinations has higher requirements for validating the data it receives via Intents. If verifiable media is required, then this reporter app will want to accept incoming media only from InformaCam. Well-known human rights activists are often the target of custom malware designed to get information from their phones. For this example, a malware version of InformaCam could be designed to track all of the media that the user is sending to the human rights reporting app. To prevent this, the reporter app will want to accept data only from a list of trusted apps. When the user tries to feed media from the malware app to the reporting app, it would be rejected, alerting the user that something is amiss. If a reporting app wants to receive data only from InformaCam, it needs to have some checks set up to enforce that. The easiest way to implement those checks would be to add an Android permission to the receiving Activity, but that requires the sending app, InformaCam in the example above, to implement the reporting app’s permission. Using permissions works for tailored interactions; InformaCam aims to bring tighter security to all relevant interactions, so we need a different approach. While InformaCam could include some specific permissions, the aim is to have a single method that supports all the desired interactions. Having a single method means less code to audit, less complexity, and fewer places for security bugs.

We have started auditing the security of communication via Intents, while also working on various ideas to address the issues laid out so far. This will include laying out best practices and identifying gaps in the Android architecture. We plan on building the techniques that we find useful into reusable libraries, to make it easy for others to have more flexible and trusted interactions. When are the standard checks not enough? If the user has a malware version of an app that exploits the master key bugs, then the signature on the app will appear valid. If a check is based only on a package name, malware could claim any given package name: Android enforces that only one app with a given package name can be installed at a time, but if the genuine app is not installed, nothing prevents the malware version from being installed under that name.

TOFU/POP: delicious vegan treat and clever software interaction!

The strictest possible checks can be based on the hash of the whole APK, while tracking the signing key of a given APK is also often useful. These two data points are the most reliable ways to verify a given app. They can be tracked in two different ways: pinning and trust-on-first-use (TOFU/POP). Pinning means that a verified hash or signing key for each app that needs to be trusted is included in the app that must trust them. The trusting app can then verify whatever it sends Intents to or receives Intents from: the installed app is compared to the pre-stored, pinned value. This kind of pinning allows for checks like the Signature permission level, but based on a key that the app developer can select and include in the app; the built-in Signature permissions are fixed to the signing key of the currently running app.
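
As a minimal sketch of the pinning idea (the class and constant names here are illustrative, not from any Guardian Project code), the check boils down to hashing the other app’s signing certificate bytes and comparing against a value baked into the trusting app. On Android the certificate bytes would come from PackageManager.getPackageInfo(pkg, PackageManager.GET_SIGNATURES); the hash-and-compare part is plain Java:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SignaturePin {
    // Pinned SHA-256 of the trusted app's signing certificate, baked in
    // at build time. This hex value is a placeholder, not a real pin.
    static final String PINNED =
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824";

    // Hex-encode the SHA-256 digest of the certificate bytes.
    static String sha256Hex(byte[] certBytes) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(certBytes)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-256 is always available
        }
    }

    // On Android, certBytes would be signature.toByteArray() taken from
    // PackageManager.getPackageInfo(pkg, PackageManager.GET_SIGNATURES).
    static boolean isTrusted(byte[] certBytes) {
        return MessageDigest.isEqual(
                sha256Hex(certBytes).getBytes(), PINNED.getBytes());
    }
}
```

If neither the hash nor the signing key of the installed app matches the pin, the Intent exchange is refused.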

TOFU/POP means Trust-On-First-Use/Persistence Of Pseudonym. In this model, popularized by SSH, the user marks a given hash or signing key as trusted the first time they use the app, without extended checks of that app’s validity. That mark then establishes a “pseudonym” for the app, since there is no verification process, and the pseudonym is remembered for comparison in future interactions. One big advantage of TOFU/POP is that the user has control over which apps to trust, and that trust relationship is created at the moment the user takes an action to start using the app that needs to be trusted. That makes it much easier to manage than Android permissions, which must be managed by the app’s developer. A disadvantage is that the initial trust is basically a guess, which leaves an opening for malware to get in. But the problem of installing good software and avoiding malware is outside the scope of securing inter-app communication; secure app installation is best handled by the process that actually installs the software, as the Google Play Store or F-Droid does.
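
At its core, TOFU/POP is just a persisted map from an app’s identity to the hash or key seen on first contact. A toy sketch in plain Java (names are mine; persistence and the user confirmation prompt are omitted):

```java
import java.util.HashMap;
import java.util.Map;

public class TofuStore {
    // package name -> hash or signing-key fingerprint seen on first use
    private final Map<String, String> pseudonyms = new HashMap<>();

    // Returns true if the app is trusted: either this is the first
    // contact (trust on first use, recording the pseudonym) or the
    // value matches what was recorded (persistence of pseudonym).
    // A real implementation would persist the map and prompt the user
    // on first use rather than trusting silently.
    public boolean checkOrTrust(String packageName, String hash) {
        String known = pseudonyms.putIfAbsent(packageName, hash);
        return known == null || known.equals(hash);
    }
}
```

A mismatch on a later interaction is the signal that the app behind the pseudonym has changed, and the exchange should be refused.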

To build on the InformaCam example: in order to set up a trusted data flow between InformaCam and the reporting app, custom checks must be implemented on both the sender and the receiver. The sender, InformaCam, should be able to send to any app, but it should then remember the app it is configured to send to and make sure it is really only sending to that app; it would use TOFU/POP with the APK hash as the data point. The receiver, the reporting app, should only accept incoming data from apps that it trusts. The receiver includes a pin for the signing key, or, if the app is being deployed to unupdated devices, the pin can be based on the APK hash to work around master key exploits. From there on out, the receiving app checks against the stored app hashes or signing keys. For less security-sensitive situations, the receiver can rely on TOFU/POP the first time an app sends media.

There are various versions of these ideas floating around in various apps, and we have some in the works. We are working now to hammer out which of these ideas are the most useful, then we will be focusing our development efforts there. We would love to hear about any related effort or libraries that are out there. And we are also interested to hear about entirely different approaches than what has been outlined here.

Guardian Project | The Guardian Project | 2014-01-21 18:51:57

In September, I was pleased to present a talk on the importance of making cryptography and privacy technology accessible to the masses at the TEDx Montréal event. In my 16-minute talk, I discussed threats to Internet freedom and privacy, political perspectives, and the role open technologies such as Cryptocat can play in this field.

The talk is available here, on the TEDx YouTube channel.

CryptoCat | Cryptocat Development Blog | 2013-10-19 16:43:32

Independent Cryptocat server operators:

We’re issuing a mandatory update for Cryptocat server configuration. Specifically, the ejabberd XMPP server configuration must be updated to include support for mod_ping.

Click here for Cryptocat server setup instructions, including the updated configuration for ejabberd.
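
For illustration only (the linked setup instructions remain authoritative): in the Erlang-style ejabberd.cfg used by ejabberd 2.x, the change amounts to adding one entry to the modules list:

{modules, [
  %% ...existing modules...
  {mod_ping, []}
]}.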

We’re doing this in order to give upcoming Cryptocat versions better connection handling, and to introduce a new auto-reconnect feature! Cryptocat versions 2.1.14 and above will not connect to servers without this configuration update. Cryptocat 2.1.14 is expected to be released within the coming weeks.

CryptoCat | Cryptocat Development Blog | 2013-09-11 19:38:58

This morning, we’ve begun to push Cryptocat 2.1.13, a big update, to all Cryptocat-compatible platforms (Chrome, Safari, Firefox, and OS X). This update brings many new features and improvements, as well as some small security fixes and improvements. The full change log is available in our code repository, but we’ll also list the big new things below. The update is still being pushed, so it may take around 24 hours for it to become available in your area.

Important notes

First things first: encrypted group chat in Cryptocat 2.1.13 is not backwards compatible with any prior version. Encrypted file sharing and private one-on-one chat will still work, but we strongly recommend that you update, and remind your friends to update as well. Also, the block feature has been changed to an ignore feature — you can still ignore group chat messages from others, but you cannot block them from receiving your own.

New feature: Authenticate with secret questions!

Secret question authentication (SMP)

An awesome new feature we’re proud to introduce is secret question authentication, via the SMP protocol. Now, if you are unable to authenticate your friend’s identity using fingerprints, you can simply ask them a question to which only they would know the answer. They will be prompted to answer — if the answers match, a cryptographic process known as SMP will ensure that your friend is properly authenticated. We hope this new feature will make it easier to authenticate your friends’ identities, which can be time-consuming in a conversation with five or more friends. This feature was designed and implemented by Arlo Breault and Nadim Kobeissi.
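The protocol behind this feature is the Socialist Millionaire Protocol (SMP), which lets two parties check whether their secret answers match without revealing the answers themselves. Below is a toy single-process sketch of the core SMP arithmetic; the real OTR implementation exchanges these values over the wire, uses a 1536-bit MODP group, and adds zero-knowledge proofs, so the prime, generator, and function names here are illustrative only:

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; toy-sized stand-in for OTR's 1536-bit group
G = 5

def h(answer: str) -> int:
    """Reduce a secret answer to a group element."""
    return int.from_bytes(hashlib.sha256(answer.encode()).digest(), "big") % P

def inv(x: int) -> int:
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(x, P - 2, P)

def rand() -> int:
    return secrets.randbelow(P - 2) + 1

def smp_equal(answer_a: str, answer_b: str) -> bool:
    """Both sides learn only whether the answers match, never the answers."""
    x, y = h(answer_a), h(answer_b)
    a2, a3, b2, b3 = rand(), rand(), rand(), rand()
    # Each side publishes g^a2, g^a3 (resp. g^b2, g^b3); both derive shared bases:
    g2 = pow(G, a2 * b2, P)
    g3 = pow(G, a3 * b3, P)
    s, r = rand(), rand()
    Pa, Qa = pow(g3, s, P), pow(G, s, P) * pow(g2, x, P) % P  # Alice's blinded secret
    Pb, Qb = pow(g3, r, P), pow(G, r, P) * pow(g2, y, P) % P  # Bob's blinded secret
    # Each side raises Qa/Qb to its own exponent and shares the result:
    Rab = pow(Qa * inv(Qb) % P, a3 * b3, P)
    # Rab equals Pa/Pb iff the g2^(x-y) term vanishes, i.e. iff x == y:
    return Rab == Pa * inv(Pb) % P
```

The key property: `smp_equal("blue", "red")` fails, but a mismatched run leaks nothing about either answer beyond the fact of the mismatch, because each secret is blinded by fresh random exponents.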



New Feature: Message previews

Message previews

Another exciting new feature is message previews: Messages from buddies you’re not currently chatting with will appear in a small blue bubble, allowing you to quickly preview messages you’re receiving from various parties, without switching conversations. This feature was designed by Meghana Khandekar at the Cryptocat Hackathon and implemented by Nadim Kobeissi.





Security improvements

Better warnings for participants.

We’ve addressed a few security issues. The first is a recurring issue where Cryptocat users could send group chat messages to only some participants of a group chat and not to others. This issue had popped up before, and we hope we won’t have to address it again; in a group chat scenario, it turns out that resolving this kind of situation is more difficult than previously thought.

The second issue relates to private chat accepting unencrypted messages from non-Cryptocat clients. We’ve chosen to make Cryptocat refuse to display any unencrypted messages it receives, dropping them instead.

Finally, we’ve added better warnings. In case of suspicious cryptographic activity (such as bad message authentication codes or reuse of initialization vectors), Cryptocat will display a general warning regarding the offending user.

More improvements and fixes

This is a really big update, and there are many more improvements and small bug fixes spread all around Cryptocat. We’ve fixed an issue that prevented Windows users from sending encrypted ZIP file transfers, made logout messages more reliable, added timestamps to join/part messages, made Cryptocat for Firefox a lot snappier… these are only a handful of the many small improvements and fixes in Cryptocat 2.1.13.

We hope you enjoy it! It should be available as an update for your area within the next 24 hours.

CryptoCat | Cryptocat Development Blog | 2013-09-04 16:56:25

We’re excited to announce the new Cryptocat Encrypted Chat Mini Guide! This printable, single-page two-sided PDF lets you print out, cut up and staple together a small guide you can use to introduce friends, colleagues and anyone else to the differences between regular instant messaging and encrypted chat, how Cryptocat works, why fingerprints are important, and Cryptocat’s current limitations. Download the PDF and print your own!

The goal of the Cryptocat Mini Guide is to quickly explain to anyone how Cryptocat is different, focusing on an easy-to-understand cartoon approach while also communicating important information such as warnings and fingerprint authentication.

Special thanks go to Cryptocat’s Associate Swag Coordinator, Ingrid Burrington, for designing the guide and getting it done. The Cryptocat Mini Guide was one of the many initiatives that started at last month’s hackathon, and we’re very excited to see volunteers come up with fruitful initiatives. You’ll be seeing this guide distributed at conferences and other events where Cryptocat is present. And don’t forget to print your own — we even put dashed lines where you’re supposed to cut with scissors.

CryptoCat | Cryptocat Development Blog | 2013-09-01 20:31:44

Open Source Veteran Bdale Garbee Joins FreedomBox Foundation Board

NEW YORK, March 10, 2011-- The FreedomBox Foundation, based here, today announced that Bdale Garbee has agreed to join the Foundation's board of directors and chair its technical advisory committee. In that role, he will coordinate development of the FreedomBox and its software.

Garbee is a longtime leader and developer in the free software community. He serves as Chief Technologist for Open Source and Linux at Hewlett Packard, is chairman of the Debian Technical Committee, and is President of Software in the Public Interest, the non-profit organization that provides fiscal sponsorship for the Debian GNU/Linux distribution and other projects. In 2002, he served as Debian Project Leader.

"Bdale has excelled as a developer and leader in the free software community. He is exactly the right person to guide the technical architecture of the FreedomBox," said Eben Moglen, director of the FreedomBox Foundation.

"I'm excited to work on this project with such an enthusiastic community," said Garbee. "In the long term, this may prove to be the most important thing I'm doing right now."

The Foundation's formation was announced in Brussels on February 4, and it is actively seeking funds; it recently raised more than $80,000 in less than fifteen days on Kickstarter.

About the FreedomBox Foundation

The FreedomBox project is a free software effort that will distribute computers that allow users to seize control of their privacy, anonymity and security in the face of government censorship, commercial tracking, and intrusive internet service providers.

Eben Moglen is Professor of Law at Columbia University Law School and the Founding Director of the FreedomBox Foundation, a new non-profit incorporated in Delaware. It is in the process of applying for 501(c)(3) status. Its mission is to support the creation and worldwide distribution of FreedomBoxes.

For further information, contact Ian Sullivan at press@freedomboxfoundation.org or see http://freedomboxfoundation.org.

FreedomBox | news | 2013-08-21 18:44:58

Cryptocat Hackathon: Day 1

Cryptocat’s first ever hackathon event was a great success. With the collaboration of OpenITP and the New America NYC office, we were able to bring together dozens of individuals, including programmers, designers, technologists, journalists, and privacy enthusiasts from around the world, to share a weekend of discussions, workshops and straight old-fashioned Cryptocat hacking in New York City.

During the weekend, we organized a coding track, led by myself, Nadim, as well as a journalist security track led by Carol Waters of Internews, with the participation of the Guardian Project. The coding track brought together volunteer programmers, documentation writers and user interface designers to work on open issues, suggest new features, discover and fix bugs, and make our documentation more readable.

Ingrid Burrington's work-in-progress Cryptocat Quick Start Guide.

Many people showed up, with many great initiatives and ideas. Off the top of my head, I remember Meghana Khandekar, of the New York School of Visual Arts, who contributed ideas for user interface improvements. Steve Thomas and Joseph Bonneau helped with discovering, addressing and discussing encryption-related bugs and improvements. Griffin Boyce, from the Open Technology Institute, helped with organizing the hackathon and contributed the first working build of Cryptocat for newer Opera browsers. Ingrid Burrington participated by working on hand-outable Cryptocat quick-start guides. David Huerta and Christopher Casebeer further contributed some code-level and design-level usability improvements. I worked on implementing a user interface for SMP authentication in Cryptocat.

We were very excited to have a team of medical doctors and developers figuring out a Cryptocat-based app for sharing medical records while fully respecting privacy laws. The team was looking to implement a medium for comparing X-ray images over Cryptocat encrypted chat, among other features for the medical field.

Cryptocat Hackathon: Day 1

The journalist security track gave a handful of journalists and privacy enthusiasts the opportunity for expert hands-on training in techniques that can help them maintain their privacy and the privacy of their sources online and offline. In addition, with the help of the Guardian Project, we were able to introduce apps such as Gibberbot and OSTel for secure mobile communications.

We were very pleased with the success of the first Cryptocat hackathon. Code was written, bugs were fixed, food was shared, and prize Cryptocat t-shirts were won. I sincerely thank OpenITP and New America NYC for their organizational aid, and my friend Griffin Boyce for helping me carry food, set up tables and chairs, and generally make sure people were comfortable. And finally, an equally big thanks to all the people who showed up and helped improve Cryptocat. Without any of these people, such a great hackathon would have never happened. Watch out for more hackathons in D.C., San Francisco, and Montréal!

Cryptocat Hackathon

Update: The hackathon is over, and you can find out what happened (and see photos) at our report!

Cryptocat, in collaboration with OpenITP, will be hosting the very first Cryptocat Hackathon weekend in New York City, on the weekend of the 17th and 18th of August 2013.

Join us on August 17-18 for the Cryptocat Hackathon and help empower people worldwide by improving useful tools and discussing the future of making privacy accessible. This two day event will take place at the OpenITP offices, located on 199 Lafayette Street, Suite 3b, New York City. Please RSVP on Eventbrite or email events@crypto.cat.


The Cryptocat Hackathon will feature two tracks to accommodate the diversity of the attendees:

Coding Track with Nadim

Join Nadim in discussing the future of Cryptocat and contributing towards our efforts for the next year. Multi-Party OTR, encrypted video chat using WebRTC, and more exciting topics await your helping hands!

Journalist Security Track with Carol and the Guardian Project

Join Carol in a hands-on workshop for journalists on how to protect your digital security and privacy in your working environment. The Guardian Project will also be swooping in to discuss mobile security, introducing tools and solutions. Carol Waters is a Program Officer with Internews’ Internet Initiatives, and focuses on digital and information security issues. The Guardian Project builds open source mobile apps to protect the privacy and security of all of mankind.

Who should attend?

Hackers, designers, journalists, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available ranging from Q&A to communications.



Day 1: Saturday, August 17

10:00 Introduction and planning

11:00 Some hacking

12:00 Lunch!

1:00 – 5:00 Split into two tracks:

Coding track with Nadim

Journalist security track with Carol Waters

Day 2: Sunday, August 18

10:00 Some hacking

12:00 Lunch!

1:00 – 4:00 Split into two tracks:

Coding track with Nadim

Journalist security track with Carol

4:00 – 5:00 Closing notes and roundtable

CryptoCat | Cryptocat Development Blog | 2013-08-07 14:48:00

24 hours after last month’s critical vulnerability in Cryptocat hit its peak controversy point, I was scheduled to give a talk at SIGINT2013, organized in Köln by the Chaos Computer Club. After the talk, we held a 70-minute Q&A in which I answered questions even from Twitter. 70 minutes!

In the 45-minute talk, I discuss the recent bug, how we plan to deal with it, what it means, as well as Cryptocat’s overall goals and progress:

In the 70-minute Q&A that followed, I answer every question ranging from the recent bug to what my favourite TV show is:

I’m really pleased with these videos since they present a channel into how the project is dealing with security issues as well as our current position and future plans. If you’re interested in Cryptocat, they are worth watching.

Additionally, I recently gave a talk about Cryptocat at Republika in Rijeka, and will be at OHM2013 in Amsterdam as part of NoisySquare, where there will be Cryptocat talks, workshops and more. See you there!

CryptoCat | Cryptocat Development Blog | 2013-07-23 17:24:14

In the unlikely event that you are using a version of Cryptocat older than 2.0.42, please update to the latest version immediately to fix a critical security bug in group chat. We recommend updating to the 2.1.* branch, which at time of writing is the latest version. We apologize unreservedly for this situation. (Post last updated Sunday July 7, 2:00PM UTC)

What happened?

A few weeks ago, a volunteer named Steve Thomas pointed out a vulnerability in the way key pairs were generated for Cryptocat’s group chat. The vulnerability was quickly resolved and an update was pushed. We sincerely thank Steve for his invaluable effort.

The vulnerability meant that any conversations held over Cryptocat’s group chat function between versions 2.0 and 2.0.42 (2.0.42 not included) were easier to crack via brute force. The period between 2.0 and 2.0.42 covered approximately seven months, and group conversations held during that time were significantly easier to crack.
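This post doesn't spell out the flaw (Steve's own write-up, mentioned below, does), but the general failure mode is a key generator that silently draws each key element from a much smaller range than intended, shrinking the brute-force search space. A hypothetical illustration, not Cryptocat's actual code:

```python
import math
import secrets

KEY_LEN = 32

def weak_key():
    # Hypothetical flawed generator: each element drawn from only 10 values
    return [secrets.randbelow(10) for _ in range(KEY_LEN)]

def strong_key():
    # Intended generator: each element drawn from the full byte range
    return [secrets.randbelow(256) for _ in range(KEY_LEN)]

# Effective key strength in bits: log2 of the number of possible keys
weak_bits = KEY_LEN * math.log2(10)     # ~106 bits
strong_bits = KEY_LEN * math.log2(256)  # 256 bits
```

Both generators produce keys of the same length, which is why such a bug is easy to miss; but an attacker searching 10^32 keys instead of 2^256 faces a task roughly 2^150 times smaller, which is what "easier to crack via brute force" means in practice.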

Once Steve reported the vulnerability, it was fixed immediately and the update was pushed. We’ve thanked Steve and added his name on our Cryptocat Bughunt page’s wall of fame.

In our update log for Cryptocat 2.0.42, we had noted that the update fixed a security bug:

  • IMPORTANT: Due to changes to multiparty key generation (in order to be compatible with the upcoming mobile apps), this version of Cryptocat cannot have multiparty conversations with previous versions. However private conversations still work.
  • Fixed a bug found in the encryption libraries that could partially weaken the security of multiparty Cryptocat messages. (This is Steve’s bug.)

The first item, which made some changes in how keys were generated, did break compatibility with previous versions. But contrary to what Steve wrote in his blog post on the matter, this has nothing at all to do with the vulnerability he reported, which we were able to fix without breaking compatibility.

Due to Steve’s publishing of his blog post, we felt it would be useful to publish an additional blog post clarifying the matter. While the blog post published by Steve does indeed point to a significant vulnerability, we want to make sure it does not also cause inaccuracies to be reported.

Private chats are not affected: Private queries (1-on-1) are handled over the OTR protocol, and are therefore completely unaffected by this bug. Their security was not weakened.

Our SSL keys are safe: For some reason, there are rumors that our SSL keys were compromised. To the best of our knowledge, this is not the case. All Cryptocat data still passes over SSL, which offers a small layer of protection that may help with this issue. Of course, that does not in any way change the fact that, due to our blunder, seven months of conversations were easier to crack. This is still a real mistake. We should also note that our SSL setup has implemented forward secrecy for the past couple of weeks. We’ve rotated our SSL keys as a precaution.

One more small note: Much has been said about a line of code in our XMPP library that supposedly is a sign of bad practice — this line is not used for anything security-sensitive. It is not a security weakness. It came as part of the third-party XMPP library that Cryptocat uses.

Finally, an apology: Bad bugs happen all the time in all projects. At Cryptocat, we’ve undertaken the difficult mission of trying to bridge the gap between accessibility and security. This will never be easy. We will always make mistakes, even ten years from now. Cryptocat is not any different from any of the other notable privacy, encryption and security projects, in which vulnerabilities get pointed out on a regular basis and are fixed. Bugs will continue to happen in Cryptocat, and they will continue to happen in other projects as well. This is how open source security works. We’ve added a bigger warning to our website about Cryptocat’s experimental status.

Every time there has been a security issue with Cryptocat, we have been fully transparent, fully accountable and have taken full responsibility for our mistakes. We will fail dozens, if not hundreds, of times more in the coming years, and we only ask you to be vigilant and careful. This is the process of open source security. On behalf of the Cryptocat project, team members and volunteers, I apologize unreservedly for this vulnerability, and sincerely and deeply thank Steve Thomas for pointing it out. Without him, we would have been a lot worse off, and so would our users.

We are continuing in the process of auditing all aspects of Cryptocat’s development, and we assure our users that security remains something we are constantly focused on.

CryptoCat | Cryptocat Development Blog | 2013-07-04 12:04:48

Today, with Cryptocat nearing 65,000 regular users, the Cryptocat project releases “Cryptocat: Adopting Accessibility and Ease of Use as Security Properties,” a working draft which brings together the past year of Cryptocat research and development.

We document the challenges we have faced, both cryptographic and social, and the decisions we’ve taken in order to attempt to bring encrypted communications to the masses.

The full paper is available for download here from the public scientific publishing site, arXiv.


Excerpts of the introduction from our paper:

Cryptocat is a Free and Open Source Software (FL/OSS) browser extension that makes use of web technologies in order to provide easy to use, accessible, encrypted instant messaging to the general public. We aim to investigate how to best leverage the accessibility and portability offered by web technologies in order to allow encrypted instant messaging an opportunity to better permeate on a social level. We have found that encrypted communications, while in many cases technically well-implemented, suffer from a lack of usage due to their being unappealing and inaccessible to the “average end-user”.

Our position is that accessibility and ease of use must be treated as security properties. Even if a cryptographic system is technically highly qualified, securing user privacy is not achieved without addressing the problem of accessibility. Our goal is to investigate the feasibility of implementing cryptographic systems in highly accessible mediums, and to address the technical and social challenges of making encrypted instant messaging accessible and portable.

In working with young and middle-aged professionals in the Middle East region, we have discovered that desktop OTR clients suffer from serious usability issues which are sometimes further exacerbated due to language differences and lack of cultural integration (the technology was frequently described as “foreign”). In one case, an activist who was fully trained to use Pidgin-OTR neglected to do so citing usability difficulties, and as a direct consequence encountered a life-threatening situation at the hands of a national military in the Middle East and North Africa region.

These circumstances have led us to the conclusion that ease of use and accessibility must be treated as security properties, since their absence results in security compromises with consequences similar to the ones experienced due to cryptographic breaks.

Cryptocat is designed to leverage highly accessible mediums (the web browser) in order to offer an easy to use encrypted instant messaging interface accessible indiscriminately to all cultures, languages and age groups. Cryptocat clients are available as Free Software browser extensions written in JavaScript and HTML5.

CryptoCat | Cryptocat Development Blog | 2013-06-24 14:02:02

A frequent question we get here at Cryptocat is: “why don’t you add a buddy lists feature so I can keep track of whether my friends are on Cryptocat?” The answer: metadata.

If you’ve been following the news at all for the past week, you’ll have heard of the outrageous reports of Internet surveillance by the NSA. While those reports suggest that the NSA may not have complete access to content, they still allow the agency access to metadata. If we were talking about phone surveillance, for example, metadata would be the times you made calls, the numbers you called, how long your calls lasted, and even where you placed your calls from. This circumstantial data can be collected en masse to paint a very clear surveillance picture of individuals or groups of individuals.

At Cryptocat, we not only want to keep your chat content to yourself, but we also want to distance ourselves from your metadata. In this post we’ll describe what metadata you’re giving to Cryptocat servers, what’s done with it, and what parts of it can be seen by third parties, such as your Internet service provider. We assume we are dealing with a Cryptocat XMPP server with a default configuration, served over SSL.

Reminder: No software is likely to be able to provide total security against state-level actors. While Cryptocat offers useful privacy, we remind our users not to trust Cryptocat, or any computer software, with extreme situations. Cryptocat is not a magic bullet and does not protect from all threats.

Who has your metadata?


Cryptocat does not ever store your metadata or share it with anyone under any circumstances. Always be mindful of your metadata — it’s part of your privacy, too! For our default server, we also have a privacy policy, which we recommend you look over.

CryptoCat | Cryptocat Development Blog | 2013-06-08 17:46:54

OpenITP is happy to announce the hire of Nadim Kobeissi as Special Advisor starting in June 2013. Kobeissi is best known for starting Cryptocat, one of the world's most popular encrypted chat applications.

Based in Montreal, Kobeissi specializes in cryptography, user interfaces, and application development. He has done original research on making encryption more accessible across languages and borders, and improving the state of web cryptography. He has also led initiatives for Internet freedom and against Internet surveillance. He has a B.A. in Political Science and Philosophy from Concordia University, and is fluent in English, French, and Arabic.

As Special Advisor, Kobeissi will collaborate with OpenITP staff to improve and promote Cryptocat, advise on security and encryption matters, and organize developer meetings.

You can find him on Twitter at @kaepora and @cryptocatapp.

OpenITP | openitp.org | 2013-05-30 19:58:46

Hacking to Empower Accessible Privacy Worldwide

Join us on August 17-18 for the Cryptocat Hackathon and help empower activists worldwide by improving useful tools and discussing the future of making privacy accessible. This two day event will take place at the OpenITP offices, located on 199 Lafayette Street, Suite 3b, New York City.

Cryptocat provides the easiest, most accessible way for an individual to chat while maintaining their privacy online. It is free software that aims to provide an open, accessible instant messaging environment that encrypts conversations and works right in your browser.

Who Should Attend?

Hackers, designers, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available ranging from Q&A to communications.

To RSVP, please visit http://www.eventbrite.com/event/6904608871 or email nadim AT crypto DOT cat.



Day 1: Saturday, August 17

10:00 Presentation of the projects

11:00 Brainstorm

12:00 Lunch

1:00 Hack

5:00 End of day

Day 2: Sunday, August 18

10:00 – 5:00 Hacking


OpenITP | openitp.org | 2013-05-30 15:38:05

Collateral Freedom: A Snapshot of Chinese Users Circumventing Censorship, just released today, documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship — and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long term improvements in Internet freedom in China.

This study was conducted by David Robinson, Harlan Yu and Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund.

Read Report

The report found that the circumvention tools that work best for Chinese users are technologically diverse, but are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Survey respondents rely not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.

The most widely used tool in the survey—GoAgent—runs on Google’s cloud hosting platform, which also hosts major consumer online services and provides background infrastructure for thousands of other web sites. The Great Firewall sometimes slows access to this platform, but purposely stops short of blocking the platform outright. The platform is engineered in a way that limits the regime’s ability to differentiate between the circumventing activity it would like to prohibit, and the commercial activity it would like to allow. A blanket block would be technically feasible, but economically disruptive, for the Chinese authorities. The next most widely used circumvention solutions are VPNs, both free and paid—networks using the same protocols that nearly all the Chinese offices of multinational firms rely on to connect securely to their international headquarters. Again, blocking all traffic from secure VPNs would be the logical way to make censorship effective—but it would cause significant collateral harm.


Instead, the authorities steer a middle course, sometimes choosing to disrupt VPN traffic (and commerce) in the interest of censorship, and at other times allowing VPN traffic (and circumvention) in the interest of commerce. The Chinese government is implementing policies that will improve its ability to segment circumvention-related uses of VPNs from business-related uses, including heightened registration requirements for VPN providers and users.

Respondents to the survey were categorically more likely to rely on these commercially widespread technologies and platforms than they were to use special purpose anti-censorship systems with relatively little commercial footprint, such as Freegate, Ultrasurf, Psiphon, Tor, Puff or simple web proxies. Many of the respondents have used these non-commercial tools in the past—but most have now stopped. The most successful tools today don’t make the free flow of sensitive information harder to block—they make it harder to separate from traffic that the Chinese government wishes to allow.

The report found that most users of circumvention software are in what we call the “versatility-first” group: they seek a fast and robust connection, are willing to install and configure special software, and (perhaps surprisingly) do not base their circumvention decisions on security or privacy concerns. To the extent that circumvention software developers and funders wish to help these users, the study found that they should focus on leveraging business infrastructure hosted in relatively freedom respecting jurisdictions, because the Chinese government has greater reason to allow such infrastructure to operate.

The report provided five practical suggestions:

  1. Map the circumvention technologies and practices of foreign businesses in China.
  2. Engage with online platform providers who serve businesses in censored countries.
  3. Investigate the collateral freedom dynamic in other countries.
  4. Diversify development efforts to match the diversity of user needs.
  5. Make HTTPS a corporate social responsibility issue.



OpenITP | openitp.org | 2013-05-20 15:49:52