One step closer: USA FREEDOM Act moves US toward greater compliance with human rights law

Access | Access Blog | 2014-08-21 17:46:25

For the past four months, the European Commission has been conducting a public consultation on corporate social responsibility (CSR) to which Access responded. Unfortunately, we found little to praise in the Commission’s efforts on CSR thus far, and have many qualms with the limited consultation process.

Access | Access Blog | 2014-08-21 16:39:51

Welcome to the thirty-third issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the community around Tor, Aphex Twin’s favorite anonymity network.

Tor Browser 3.6.4 and 4.0-alpha-1 are out

Erinn Clark took to the Tor Blog to announce two new releases by the Tor Browser team. The stable version (3.6.4) contains fixes for several new OpenSSL bugs; since Tor should only be vulnerable to one of them, and “as this issue is only a DoS”, it is not considered a critical security update. This release also brings Tor Browser users the fixes that log warnings about the RELAY_EARLY traffic confirmation attack explained last month. Please be sure to upgrade as soon as possible.

Alongside this stable release, the first alpha version of Tor Browser 4.0 is now available. Among the most exciting new features of this series is the inclusion of the meek pluggable transport. In contrast to the bridge-based transports already available in Tor Browser, meek relies on a principle of “too big to block”, as its creator David Fifield explained: “instead of going through a bridge with a secret address, you go through a known domain (www.google.com for example) that the censor will be reluctant to block. You don’t need to look up any bridge addresses before you get started”. meek currently supports two “front domains”, Google and Amazon Web Services; it may therefore be especially useful for users behind extremely restrictive national or local firewalls. David posted a fuller explanation of meek, and how to configure it, in a separate blog post.
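The trick at the heart of meek, “domain fronting”, can be sketched in a few lines: the hostname the censor can observe (in the DNS query and the TLS SNI field) names the front domain, while the real destination travels only inside the encrypted HTTP Host header. The following is an illustration of the idea only, not meek’s actual code, and the backend hostname is made up:

```python
# Illustrative sketch of "domain fronting" as used by meek. The name visible
# to a censor (DNS lookup, TLS SNI) is the front domain; the true destination
# appears only in the Host header, which is encrypted inside TLS.

def build_fronted_request(front_domain, hidden_backend, path="/"):
    """Return (visible_name, http_request) for a domain-fronted fetch."""
    visible_name = front_domain  # what DNS and the TLS SNI field reveal
    http_request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_backend}\r\n"  # only readable after TLS decryption
        f"Connection: close\r\n"
        f"\r\n"
    )
    return visible_name, http_request

# Hypothetical backend name, for illustration only.
visible, request = build_fronted_request("www.google.com",
                                         "meek-reflect.example.net")
```

The censor sees only `visible`; the front server decrypts the TLS layer and forwards the request according to the Host header.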

This alpha release also “paves the way to [the] upcoming autoupdater by reorganizing the directory structure of the browser”, as Erinn wrote. This means that users upgrading from any previous Tor Browser series cannot extract the new version over their existing Tor Browser folder, or it will not work.

You can consult the full list of changes and bugfixes for both versions in Erinn’s post, and download the new releases themselves from the Tor website.

The Tor network no longer supports designating relays by name

Since the very first versions of Tor, relay operators have been able to specify “nicknames” for their relays. Such nicknames were initially meant to be unique across the network, and operators of directory authorities would manually “bind” a relay identity key after verifying the nickname. The process became formalized with the “Named” flag introduced in the 0.1.1 series, and later automated with the 0.2.0 series. If a relay held a unique nickname for long enough, the authority would recognize the binding, and subsequently reserve the name for half a year.

Nicknames are useful because humans are not very good at thinking in long strings of random bits. Initially, they made it possible to understand what was happening in the network more easily, and to designate a specific relay in an abbreviated way. Having two relays in the network with the same nickname is not really problematic when one is simply looking at a list of nodes in Globe, as relays can always be differentiated by their IP addresses or identity keys.

But complications arise when nicknames are used to specify one relay to the exclusion of another. If the wrong relay gets selected, it can become a security risk. Even though real efforts have been made to improve the situation, properly enforcing uniqueness has always been problematic, and a burden for the few directory authorities that handle naming.

Back in April, the “Heartbleed” bug forced many relays to switch to a new identity key, thus losing their “Named” flag. Because this meant that anyone designating relays by their nickname would now have a hard time continuing to do so, Sebastian Hahn decided to use the opportunity to get rid of the idea entirely.

This week, Sebastian wrote: “Code review down to 0.2.3.x has shown that the naming-related code hasn’t changed much at all, and no issues were found which would mean a Named-flag free consensus would cause any problems. gabelmoo and tor26 have stopped acting as Naming Directory Authorities, and — pending any issues — will stay that way.”

This means that although you can still give your relay a nickname in its configuration file, designating relays by nickname for any other purpose (such as telling Tor to avoid using certain nodes) has now stopped working. “If you — in your Tor configuration file — refer to any relay by name and not by identity hash, please change that immediately. Future versions of Tor will not support using names in the configuration at all”, warns Sebastian.
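For anyone updating an old configuration file, the change means switching from nickname-based lines to $-prefixed identity fingerprints. A hypothetical before/after (the fingerprint below is invented for illustration):

```
# torrc: old style, no longer supported
#ExcludeNodes somerelaynickname

# new style, designating the relay by its identity hash
ExcludeNodes $ABCD1234ABCD1234ABCD1234ABCD1234ABCD1234
```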

Miscellaneous news

meejah announced the release of version 0.11.0 of txtorcon, a Twisted-based Python controller library for Tor. This release brings several API improvements; see meejah’s message for full release notes and instructions on how to download it.

Mike Perry posted an overview of a recent report put together by iSEC Partners and commissioned by the Open Technology Fund to explore “current and future hardening options for the Tor Browser”. Among other things, Mike’s post addresses the report’s immediate hardening recommendations, latest thoughts on the proposed Tor Browser “security slider”, and longer-term security development measures, as well as ways in which the development of Google Chrome could inform Tor Browser’s own security engineering.

Nick Mathewson asked for comments on Trunnel, “a little tool to automatically generate binary encoding and parsing code based on C-like structure descriptions” intended to prevent “Heartbleed”-style vulnerabilities from creeping into Tor’s binary-parsing code in C. “My open questions are: Is this a good idea? Is it a good idea to use this in Tor? Are there any tricky bugs left in the generated code? What am I forgetting to think of?”, wrote Nick.
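The class of bug Trunnel is meant to rule out is the missing or wrong hand-written length check, as in “Heartbleed”. As a rough illustration of the discipline that generated parsers enforce, here is a sketch in Python rather than the C that Trunnel actually emits:

```python
import struct

def parse_length_prefixed(buf):
    """Parse a record of the form: u16 length, followed by `length` payload
    bytes. Every read is bounds-checked before it happens, which is the
    property that mechanically generated parsing code can guarantee."""
    if len(buf) < 2:
        raise ValueError("truncated: no length field")
    (length,) = struct.unpack_from(">H", buf, 0)
    if len(buf) < 2 + length:
        # A "Heartbleed"-style parser would trust `length` and over-read here.
        raise ValueError("truncated: payload shorter than declared length")
    return buf[2:2 + length]
```

Hand-writing checks like these for every field of every message is exactly the tedious, error-prone work a generator can take over.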

George Kadianakis followed up his journey to the core of what Tor does when trying to connect to entry guards in the absence of a network connection with another post running through some possible improvements to Tor’s behavior in these situations: “I’m looking forward to some feedback on the proposed algorithms as well as improvements and suggestions”.

Arturo Filastò requested feedback on some proposed changes to the format of the “test deck” used by ooni-probe, the main project of the Open Observatory of Network Interference. “A test deck is basically a way of telling it ‘Run this list of OONI tests with these inputs and by the way be sure you also set these options properly when doing so’… This new format is supposed to overcome some of the limitations of the old design and we hope that a major redesign will not be needed in the near future”, wrote Arturo.

Tor’s importance to users who are at risk, for a variety of reasons, makes it an attractive target for creators of malware, who distribute fake or modified versions of Tor software for malicious purposes. Following a recent report of a fake Tor Browser in circulation, Julien Voisin carried out an investigation of the compromised software, and posted a detailed analysis of the results. To ensure you are protected against this sort of attack, make sure you verify any Tor software you download before running it!

Arlo Breault submitted a status report for July.

As the annual Google Summer of Code season draws to a close, Tor’s GSoC students are submitting their final reports. Israel Leiva reported on the revamp of GetTor, Marc Juarez on the framework for website fingerprinting countermeasures, Juha Nurmi on ahmia.fi, Noah Rahman on Stegotorus enhancement, Amogh Pradeep on Orbot+Orfox, Daniel Martí on consensus diffs, Mikhail Belous on the multicore tor daemon work, Zack Mullaly on the secure ruleset updater for HTTPS Everywhere, and Quinn Jarrell on Fog, the pluggable transport combiner.

Tor help desk roundup

The help desk has been asked if it is possible to set up an anonymous blog using Tor. The Hyde project, developed by Karsten Loesing, documents the step-by-step process of using Tor, Jekyll, and Nginx to host an anonymous blog as a hidden service.
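The Tor side of such a setup comes down to a couple of torrc lines mapping a hidden service to the local web server; the paths and port numbers here are illustrative, and the Hyde documentation covers the full process:

```
# torrc: expose a local Nginx instance as a hidden service
HiddenServiceDir /var/lib/tor/blog_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the service’s .onion address can be read from the `hostname` file inside the HiddenServiceDir.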

News from Tor StackExchange

The Tor StackExchange site is looking for another friendly and helpful moderator. Moderators need to take care of flagged items (spam, me-too comments, etc.), and are liaisons between the community and StackExchange’s community team. So, if you’re interested, have a look at the theory of moderation and post an answer to the question at the Tor StackExchange Meta site.


This issue of Tor Weekly News has been assembled by Lunar, harmony, David Fifield, qbi, Matt Pagan, Sebastian Hahn, Ximin Luo, and dope457.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-08-20 12:00:00

In May, the Open Technology Fund commissioned iSEC Partners to study current and future hardening options for the Tor Browser. The Open Technology Fund is the primary funder of Tor Browser development, and it commissions security analysis and review for all of the projects that it funds as a standard practice. We worked with iSEC to define the scope of the engagement to focus on the following six main areas:

  1. Review of the current state of hardening in Tor Browser
  2. Investigate additional hardening options and instrumentation
  3. Perform historical vulnerability analysis on Firefox, in order to make informed vulnerability surface reduction recommendations
  4. Investigate image, audio, and video codecs and their respective library's vulnerability history
  5. Review our current about:config settings, both for vulnerability surface reduction and security
  6. Review alternate/obscure protocol and application handlers


The complete report is available in the iSEC publications github repo. All tickets related to the report can be found using the tbb-isec-report keyword. General Tor Browser security tickets can be found using the tbb-security keyword.

Major Findings and Recommendations

The report had the following high-level findings and recommendations.

  • Address Space Layout Randomization is disabled on Windows and Mac

  • Due to our use of cross-compilation and non-standard toolchains in our reproducible build system, several hardening features have ended up disabled. We knew about the Windows issues prior to this report, and should have a fix for them soon. However, the MacOS issues are news to us, and appear to require that we build 64-bit versions of the Tor Browser for full support. The parent ticket for all basic hardening issues in Tor Browser is bug #10065.

  • Participate in Pwn2Own

  • iSEC recommended that we find a sponsor to fund a Pwn2Own reward for bugs specific to Tor Browser in a semi-hardened configuration. We are very interested in this idea and would love to talk with anyone willing to sponsor us in this competition, but we're not yet certain that our hardening options will have stabilized with enough lead time for the 2015 contest next March.

  • Test and recommend the Microsoft Enhanced Mitigation Experience Toolkit on Windows

  • The Microsoft Enhanced Mitigation Experience Toolkit is an optional toolkit that Windows users can run to further harden Tor Browser against exploitation. We've created bug #12820 for this analysis.

  • Replace the Firefox memory allocator (jemalloc) with ctmalloc/PartitionAlloc

  • PartitionAlloc is a memory allocator designed by Google specifically to mitigate common heap-based vulnerabilities by hardening free lists, creating partitioned allocation regions, and using guard pages to protect metadata and partitions. Its basic hardening features can be picked up by using it as a simple malloc replacement library (as ctmalloc). Bug #10281 tracks this work.

  • Make use of advanced PartitionAlloc features and other instrumentation to reduce the risk from use-after-free vulnerabilities

  • The iSEC vulnerability review found that the overwhelming majority of vulnerabilities to date in Firefox were use-after-free, followed closely by general heap corruption. In order to mitigate these vulnerabilities, we would need to make use of the heap partitioning features of PartitionAlloc to actually ensure that allocations are partitioned (for example, by using the existing tags from Firefox's about:memory). We will also investigate enabling assertions in limited areas of the codebase, such as the refcounting system, the JIT and the Javascript engine.

Vulnerability Surface Reduction (Security Slider)

A large portion of the report was also focused on analyzing historical Firefox vulnerability data and other sources of large vulnerability surface for a planned "Security Slider" UI in Tor Browser.

The Security Slider was first suggested by Roger Dingledine as a way to make it easy for users to trade off between functionality and security, gradually disabling features ranked by both vulnerability count and web prevalence/usability impact.

The report makes several recommendations along these lines, but a brief distillation can be found on the ticket for the slider.

At a high level, we plan for four levels in this slider. "Low" security will be the current Tor Browser settings, with the addition of JIT support. "Medium-Low" will disable most of the JIT, and make HTML5 media click-to-play via NoScript. "Medium-High" will disable the rest of the JIT, will disable JS on non-HTTPS url bar origins, and disable SVG. "High" will fully disable Javascript, block remote fonts via NoScript, and disable all media codecs except for WebM (which will remain click-to-play).

The Long Term

A web browser is a very large and complicated piece of software, and while we believe that the privacy properties of Tor Browser are better than those of every other web browser currently available, it is very important to us that we raise the bar to successful code execution and exploitation of Tor Browser as well.

We are very eager to see the deployment of sandboxing support in Firefox, which should go a long way to improving the security of Tor Browser as well. To improve security for their users, Mozilla has recently shifted 10 engineers into the Electrolysis project, which provides the groundwork for producing a multiprocess sandbox architecture for the desktop Firefox. This will allow them to provide a Google Chrome style security sandbox for website content, to reduce the risk from software vulnerabilities, and generally impede exploitability.

Until that time, we will also be investigating providing hardened builds of Tor Browser using the AddressSanitizer and Virtual Table Verification features of newer GCC releases. While this will not eliminate all vectors of memory corruption-based exploitation (in particular, the hardening properties of AddressSanitizer are not as good as those provided by SoftBounds+CETS for example, but that compiler is not yet production-ready), it should raise the bar to exploitation. We are hopeful that these builds in combination with PartitionAlloc and the Security Slider will satisfy the needs of our users who require high security and who are willing to trade performance and usability in order to get it.

We also hope to include optional application-wide sandboxes for Tor Browser as part of the official distribution.

Why not Google Chrome?

It is no secret that in many ways, both we and Mozilla are playing catch-up to reach the level of code execution security provided by Google Chrome, and in fact closely following the Google Chrome security team was one of the recommendations of the iSEC report.

In particular, Google Chrome benefits from a multiprocess sandboxing architecture, as well as several further hardening options and innovations (such as PartitionAlloc).

Unfortunately, our budget for the browser project is still very constrained compared to the amount of work that is required to provide the privacy properties we feel are important, and Firefox remains a far more cost-effective platform for us for several reasons. In particular, Firefox's flexible extension system, fully scriptable UI, solid proxy support, and its long Extended Support Release cycle all allow us to accomplish far more with fewer resources than we could with any other web browser.

Further, Google Chrome is far less amenable to supporting basic web privacy and Tor-critical features (such as solid proxy support) than Mozilla Firefox. Initial efforts to work with the Google Chrome team saw some success in terms of adding APIs that are crucial to addons such as HTTPS-Everywhere, but we ran into several roadblocks when it came to Tor-specific features and changes. In particular, several bugs required for basic proxy-safe Tor support for Google Chrome's Incognito Mode ended up blocked for various reasons. The worst offender on this front is the use of the Microsoft Windows CryptoAPI for certificate validation, without any alternative. This bug means that certificate revocation checking and intermediate certificate retrieval happen outside of the browser's proxy settings, and are subject to alteration by the OEM and/or the enterprise administrator. Worse, beyond the Tor proxy issues, the use of this OS certificate validation API means that the OEM and enterprise also have a simple entry point for installing their own root certificates to enable transparent HTTPS man-in-the-middle, with full browser validation and no user consent or awareness. All of this is before even considering the need for defenses against third party tracking and fingerprinting to prevent the linking of Tor activity to non-Tor usage, defenses which would also be useful for the wider non-Tor userbase.

While we'd love for this situation to change, and are open to working with Google to improve things, at present it means that our only option for Chrome is to maintain an even more invasive fork than our current Firefox patch set, with much less likelihood of a future merge than with Firefox. As a ballpark estimate, maintaining such a fork would require somewhere between 3 and 5 times the engineering staff and infrastructure we currently have at our disposal, in addition to the ramp-up time to port our current feature set over.

Unless either our funding situation or Google's attitude towards the features we require changes, Mozilla Firefox will remain the best platform for us to demonstrate that it is in fact possible to provide true privacy by design for the web for those who want it. It is very distressing that this means playing catch-up and forcing our users to make usability tradeoffs in exchange for improved browser security, but we will continue to do what we can to improve that situation, both with Mozilla and with our own independent efforts.

Tor Blog | The Tor Blog blogs | 2014-08-18 23:13:16

Wikimedia appears to be taking strong actions to protect user data from surveillance and censorship.

Access | Access Blog | 2014-08-18 19:14:41

In a recent interview, former National Security Agency contractor and whistleblower Edward Snowden revealed that the Syrian government was not to blame for a nationwide internet blackout on Nov. 29, 2012: the NSA was.

Access | Access Blog | 2014-08-15 20:19:48

The recently released 4.0-alpha-1 version of Tor Browser includes meek, a new pluggable transport for censorship circumvention. meek tunnels your Tor traffic through HTTPS, and uses a technique called “domain fronting” to hide the fact that you are communicating with a Tor bridge—to the censor it looks like you are talking to some other web site. For more details, see the overview and the Child’s Garden of Pluggable Transports.

You only need meek if your Internet connection is censored so that you can’t use ordinary Tor. Even then, you should try other pluggable transports first, because they have less overhead. My recommended order for trying transports is:

  1. obfs3
  2. fte
  3. scramblesuit
  4. meek
  5. flashproxy

Use meek if other transports don’t work for you, or if you want to help development by testing it. I have been using meek for my day-to-day browsing for a few months now.

All pluggable transports have some overhead. You might find that meek feels slower than ordinary Tor. We’re working on some tickets that will make it faster in the future: #12428, #12778, #12857.

At this point, there are two different backends supported. meek-amazon makes it look like you are talking to an Amazon Web Services server (when you are actually talking to a Tor bridge), and meek-google makes it look like you are talking to the Google search page (when you are actually talking to a Tor bridge). It is likely that both will work for you. If one of them doesn’t work, try the other.

These instructions and screenshots are for the 4.0-alpha-1 release. If they change in future releases, they will be updated at https://trac.torproject.org/projects/tor/wiki/doc/meek#Quickstart.

How to use meek

First, download a meek-capable version of Tor Browser for your platform and language.

Verify the signature and run the bundle according to the instructions for Windows, OS X, or GNU/Linux.

On the first screen, where it says Which of the following best describes your situation?, click the Configure button.

Tor Network Settings “Which of the following best describes your situation?” screen with the “Configure” button highlighted.

On the screen that says Does this computer need to use a proxy to access the Internet?, say No unless you know you need to use a proxy. meek supports using an upstream proxy, but most users don’t need it.

Tor Network Settings “Does this computer need to use a proxy to access the Internet?” screen with “No” selected.

On the screen that says Does this computer's Internet connection go through a firewall that only allows connections to certain ports?, say No. As an HTTPS transport, meek only uses web ports, which are allowed by most firewalls.

Tor Network Settings “Does this computer's Internet connection go through a firewall that only allows connections to certain ports?” screen with “No” selected.

On the screen that says Does your Internet Service Provider (ISP) block or otherwise censor connections to the Tor Network?, say Yes. Saying Yes will lead you to the screen for configuring pluggable transports.

Tor Network Settings “Does your Internet Service Provider (ISP) block or otherwise censor connections to the Tor Network?” screen with “Yes” selected.

On the pluggable transport screen, select Connect with provided bridges and choose either meek-amazon or meek-google from the list. Probably both of them will work for you, so choose whichever feels faster. If one of them doesn’t work, try the other. Then click the Connect button.

Tor Network Settings “You may use the provided set of bridges or you may obtain and enter a custom set of bridges.” screen with “meek-google” selected.

If it doesn’t work, you can write to the tor-dev mailing list, or to me personally at dcf@torproject.org, or file a new ticket.
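For users who configure Tor directly rather than through the graphical settings, the same choice can also be expressed in torrc. The lines below are an illustrative sketch only: the plugin path and bridge parameters vary by platform and release, and the canonical values are kept on the meek wiki page mentioned above.

```
# torrc sketch (illustrative values; check the meek wiki for current ones)
UseBridges 1
ClientTransportPlugin meek exec /path/to/meek-client
Bridge meek 0.0.2.0:1 url=https://meek-reflect.appspot.com/ front=www.google.com
```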

Tor Blog | The Tor Blog blogs | 2014-08-15 15:19:24

Welcome to the thirty-second issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Torsocks 2.0 is now considered stable

Torsocks is a wrapper program that will force an application’s network connections to go through the Tor network. David Goulet released version 2.0.0, blessing the new codebase as stable after more than a year of efforts.

David’s original email highlighted several reasons for a complete rewrite of torsocks. Among the issues were maintainability, error handling, thread safety, and the lack of a proper compatibility layer for multiple architectures. The new implementation addresses all these issues while staying about the same size as the previous version (4,000 lines of C according to sloccount), and test coverage has been vastly extended.

Torsocks comes in handy when a piece of software does not natively support the use of a SOCKS proxy. In most cases, the new version may be safer, as torsocks will prevent DNS requests and non-torified connections from happening.

Integrators and power users should watch their steps while migrating to the new version. The configuration file format has changed, and some applications might behave differently as more system calls are now restricted.

Next generation Hidden Services and Introduction Points

When Tor clients need to connect to a Hidden Service, the first step is to create a circuit to its “Introduction Point”. There, the Tor client serving the Hidden Service is waiting, through another circuit, to agree on a “Rendezvous Point”; the communication then continues through circuits connecting to this freshly selected Tor node.

This general design is not subject to any changes in the revision of hidden services currently being worked on. But there are still some questions left unanswered regarding the best way to select Introduction Points. George Kadianakis summarized them as: “How many IPs should an HS have? Which relays can be IPs? What’s the lifetime of an IP?”

For each of these questions, George collected possible answers and assessed whether or not they could respond to several attacks identified in the past. Anyone interested should help with the research needed and join the discussion.

In the meantime, Michael Rogers is also trying to find ways to improve hidden service performance in mobile contexts. One way to do so would be to “keep the set of introduction points as stable as possible”. However, a naive approach to doing so would ease the job of attackers trying to locate a hidden service. The idea would be to always use the same guard and middle node for a given introduction point, but this might also open the doors to new attacks. Michael suggests experimenting with the recently published Java research framework to gain a better understanding of the implications.

More status reports for July 2014

The wave of regular monthly reports from Tor project members for the month of July continued, with submissions from Andrew Lewman, Colin C., and Damian Johnson.

Roger Dingledine sent out the report for SponsorF. Arturo Filastò described what the OONI team was up to. The Tails team covered their activity for June and July.

Miscellaneous news

Two Tor Browser releases are at QA stage: 4.0-alpha-1 including meek and a new directory layout, and 3.6.4 for security fixes.

The recent serious attack against Tor hidden services was also a Sybil attack: a large number of malicious nodes joined the network at once. This led to a renewal of interest in detecting Sybil attacks against the Tor network more quickly. Karsten Loesing published some code computing similarity metrics, and David Fifield has explored visualizations of the consensus that made the recent attack visible.
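As a toy illustration of what such a similarity metric can look for (this is not Karsten’s actual code), one can flag batches of relays that joined the consensus at the same time from neighboring addresses:

```python
from collections import defaultdict

def suspicious_groups(relays, min_size=3):
    """Group relays by /16 network and first-seen date; large groups that
    appeared together from nearby addresses are Sybil candidates.
    `relays` is a list of dicts with "address" and "first_seen" keys
    (a toy schema, not the real consensus format)."""
    groups = defaultdict(list)
    for r in relays:
        net16 = ".".join(r["address"].split(".")[:2])
        groups[(net16, r["first_seen"])].append(r["address"])
    return {k: v for k, v in groups.items() if len(v) >= min_size}

# Synthetic data: three relays joining at once from one /16 stand out.
relays = [
    {"address": "10.1.0.1", "first_seen": "2014-07-01"},
    {"address": "10.1.0.2", "first_seen": "2014-07-01"},
    {"address": "10.1.0.3", "first_seen": "2014-07-01"},
    {"address": "192.0.2.7", "first_seen": "2013-02-11"},
]
flagged = suspicious_groups(relays)
```

Real detection works on richer features (bandwidth, uptime, nickname patterns), but the shape of the computation is the same: cluster, then look for clusters that are too large or too uniform.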

Gareth Owen sent out an update about the Java Tor Research Framework. This prompted a discussion with George Kadianakis and Tim about the best way to perform fuzz testing on Tor. Have a look if you want to comment on Tim’s approaches.

Thanks to Daniel Thill for running a mirror of the Tor Project website!

ban mentioned a new service collecting donations for the Tor network. OnionTip, set up by Donncha O’Cearbhaill, will collect bitcoins and redistribute them to relay operators who put a bitcoin address in their contact information. As the redistribution is currently done according to the consensus weight, Sebastian Hahn warned that this might encourage people to “cheat the consensus weight” because that now means “more money from oniontip”.
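Sebastian’s worry follows directly from the arithmetic of the payout: if donations are split in proportion to consensus weight, inflating a relay’s weight inflates its share. A toy sketch of such a proportional split (not OnionTip’s actual code):

```python
def split_donation(total, weights):
    """Divide `total` (e.g. satoshis) among relays in proportion to their
    consensus weight. Toy model: a real payout would also need rounding
    and dust handling."""
    total_weight = sum(weights.values())
    return {relay: total * w / total_weight for relay, w in weights.items()}

# A relay reporting 3x the weight receives 3x the payout.
payouts = split_donation(1000, {"relayA": 300, "relayB": 100})
```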

Juha Nurmi sent another update on the ahmia.fi GSoC project.

News from Tor StackExchange

arvee wants to redirect some TCP connections through Tor on OS X; Redsocks should help to route packets for port 443 over Tor. mirimir explained that given the user's pf configuration, the setting “SocksPort 8888” was probably missing.

meee asked a question and offered a bounty for an answer: the circuit handshake entry in Tor’s log file contains some numbers, and meee wants to know what their meaning is: “Circuit handshake stats since last time: 1833867/1833868 TAP, 159257/159257 NTor.”

Easy development tasks to get involved with

The bridge distributor BridgeDB usually gives out bridges by responding to user requests via HTTPS and email. A while ago, BridgeDB also gave out bridges to a very small number of people who would then redistribute bridges using their social network. We would like to resume sending bridges to these people, but only if BridgeDB can be made to send them via GnuPG-encrypted emails. If you’d like to dive into the BridgeDB code and add support for GnuPG-encrypted emails, please take a look at the ticket and give it a try.


This issue of Tor Weekly News has been assembled by Lunar, qbi, Karsten Loesing, harmony, and Philipp Winter.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-08-13 12:00:00

The fourth pointfix release of the 3.6 series is available from the Tor Browser Project page and also from our distribution directory.

This release features an update to OpenSSL to address the latest round of OpenSSL security issues. Tor Browser should only be vulnerable to one of these issues - the null pointer dereference. As this issue is only a DoS, we are not considering this a critical security update, but users are advised to upgrade anyway. This release also features an update to Tor to alert users of the RELAY_EARLY attack via a log message, and a fix for a hang that was happening to some users at startup/Tor network bootstrap.

Here is the complete changelog for 3.6.4:

  • Tor Browser 3.6.4 -- All Platforms
    • Update Tor to 0.2.4.23
    • Update Tor launcher to 0.2.5.6
    • Update OpenSSL to 1.0.1i
    • Backported Tor Patches:
      • Bug 11654: Properly apply the fix for malformed bug11156 log message
      • Bug 11200: Fix a hang during bootstrap introduced in the initial
        bug11200 patch.
    • Update NoScript to 2.6.8.36
      • Bug 9516: Send Tor Launcher log messages to Browser Console
    • Update Torbutton to 1.6.11.1
      • Bug 11472: Adjust about:tor font and logo positioning to avoid overlap
      • Bug 12680: Fix Torbutton about url.

In addition, we are also releasing the first alpha of the 4.0 series, available for download on the extended downloads page.

This alpha paves the way to our upcoming autoupdater by reorganizing the directory structure of the browser. This means that in-place upgrades from Tor Browser 3.6 (by extracting/copying over the old directory) will not work.

This release also features Tor 0.2.5.6, and some new defaults for NoScript to make the script permissions for a given url bar domain automatically cascade to all third parties by default (though this may be changed in the NoScript configuration).

  • Tor Browser 4.0-alpha-1 -- All Platforms
    • Ticket 10935: Include the Meek Pluggable Transport (version 0.10)
      • Two modes of Meek are provided: Meek over Google and Meek over Amazon
    • Update Firefox to 24.7.0esr
    • Update Tor to 0.2.5.6-alpha
    • Update OpenSSL to 1.0.1i
    • Update NoScript to 2.6.8.36
      • Script permissions now apply based on URL bar
    • Update HTTPS Everywhere to 5.0development.0
    • Update Torbutton to 1.6.12.0
      • Bug 12221: Remove obsolete Javascript components from the toggle era
      • Bug 10819: Bind new third party isolation pref to Torbutton security UI
      • Bug 9268: Fix some window resizing corner cases with DPI and taskbar size.
      • Bug 12680: Change Torbutton URL in about dialog.
      • Bug 11472: Adjust about:tor font and logo positioning to avoid overlap
      • Bug 9531: Workaround to avoid rare hangs during New Identity
    • Update Tor Launcher to 0.2.6.2
      • Bug 11199: Improve behavior if tor exits
      • Bug 12451: Add option to hide TBB's logo
      • Bug 11193: Change "Tor Browser Bundle" to "Tor Browser"
      • Bug 11471: Ensure text fits the initial configuration dialog
      • Bug 9516: Send Tor Launcher log messages to Browser Console
    • Bug 11641: Reorganize bundle directory structure to mimic Firefox
    • Bug 10819: Create a preference to enable/disable third party isolation
    • Backported Tor Patches:
      • Bug 11200: Fix a hang during bootstrap introduced in the initial
        bug11200 patch.
  • Tor Browser 4.0-alpha-1 -- Linux Changes
    • Bug 10178: Make it easier to set an alternate Tor control port and password
    • Bug 11102: Set Window Class to "Tor Browser" to aid in Desktop navigation
    • Bug 12249: Don't create PT debug files anymore

The list of frequently encountered known issues is also available in our bug tracker.

Tor Blog | The Tor Blog blogs | 2014-08-12 21:48:07

Wikipedia’s vision is “a world in which every single human being can freely share in the sum of all knowledge.” It’s a value that we at Access share. So we were shocked last week when the Wikimedia Foundation, which supports and hosts Wikipedia, turned its back on the greatest driver of open access to information the world has ever known, the open internet.

Access | Access Blog | 2014-08-08 18:52:41

In response to stories in the New York Times, ProPublica, and the Guardian reporting that the National Security Agency (“NSA”) was undermining encryption standards, the Visiting Committee on Advanced Technology (“VCAT”) released a report calling for increased transparency and internal expertise at the National Institute of Standards and Technology (“NIST”). The VCAT reviews and makes recommendations regarding general policy for NIST. The VCAT formed a Committee of Visitors (“COV”) in mid-April to review the relationship between NIST and the NSA.

Access | Access Blog | 2014-08-07 15:50:43

The U.S. cannot so easily ignore its responsibilities under international law and norms, or turn a blind eye to the activities of its corporations abroad.

Access | Access Blog | 2014-08-07 13:52:00

Access received a response from the European Commission acknowledging a notification of infringement sent two weeks ago in a letter to Michel Barnier, the Commissioner in charge of Enterprise and Industry. The complaint addresses the United Kingdom’s breach of E.U. law through its adoption of the Data Retention and Investigatory Powers (DRIP) Act on 18 July 2014.

Access | Access Blog | 2014-08-06 14:13:15

Welcome to the thirty-first issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor and the RELAY_EARLY traffic confirmation attack

Roger Dingledine ended several months of concern and speculation in the Tor community with a security advisory posted to the tor-announce mailing list and the Tor blog.

In it, he gave details of a five-month-long active attack on operators and users of Tor hidden services that involved a variant of the so-called “Sybil attack”: the attacker signed up “around 115 fast non-exit relays” (now removed from the Tor network), and configured them to inject a traffic header signal consisting of RELAY_EARLY cells to “tag” any hidden service descriptor requests received by malicious relays — a tag which could then be picked up by other bad nodes acting as entry guards, in the process identifying clients which requested information about a particular hidden service.

The attack is suspected to be linked to a now-cancelled talk that was due to be delivered at the BlackHat security conference. There have been several fruitful and positive research projects involving theoretical attacks on Tor’s security, but this was not among them. Not only were there problems with the process of responsible disclosure, but, as Roger wrote, “the attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name)”, thereby “[putting] users at risk indefinitely into the future”.

On the other hand, it is important to note that “while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here”. In other words, the tagging mechanism used in this case is the innovation; the other element of the attack is a known weakness of low-latency anonymity systems, and defending against it is a much harder problem.

“Users who operated or accessed hidden services from early February through July 4 should assume they were affected” and act accordingly; in the case of hidden service operators, this may mean changing the location of the service. Accompanying the advisory were two new releases for both the stable and alpha tor branches (0.2.4.23 and 0.2.5.6-alpha); both include a fix for the signal-injection issue that causes tor to drop circuits and give a warning if RELAY_EARLY cells are detected going in the wrong direction (towards the client), and both prepare the ground for clients to move to single entry guards (rather than sets of three) in the near future. Relay operators should be sure to upgrade; a point-release of the Tor Browser will offer the same fixes to ordinary users. Nusenu suggested that relay operators regularly check their logs for the new warning, “even if the attack origin is not directly attributable from a relay’s point of view”. Be sure to read the full security advisory for a fuller explanation of the attack and its implications.

Why is bad-relays a closed mailing list?

Damian Johnson and Philipp Winter have been working on improving the process of reporting bad relays. The process starts by having users report odd behaviors to the bad-relays mailing list.

Only a few trusted volunteers receive and review these reports. Nusenu started a discussion on tor-talk advocating for more transparency. Nusenu argues that an open list would “likely get more confirm/can’t confirm feedback for a given badexit candidate”, and that it would allow worried users to act faster than operators of directory authorities.

Despite being “usually on the side of transparency”, Roger Dingledine described being “stuck” on the issue, “because the arms race is so lopsidedly against us”.

Roger explains: “we can scan for whether exit relays handle certain websites poorly, but if the list that we scan for is public, then exit relays can mess with other websites and know they’ll get away with it. We can scan for incorrect behavior on various ports, but if the list of ports and the set of behavior we do is public, then again relays are free to mess with things we don’t look for.”

A better future with more transparency probably lies in adaptive test systems run by multiple volunteer groups. Until they come into existence, as a small improvement, Philipp Winter wrote that it was probably safe to publish why relays were disabled, through “a short sentence along the lines of ‘running HTTPS MitM’ or ‘running sslstrip’”.

Monthly status reports for July 2014

Time for monthly reports from Tor project members. The July 2014 round was opened by Georg Koppen, followed by Philipp Winter, Sherief Alaa, Lunar, Nick Mathewson, Pearl Crescent, George Kadianakis, Matt Pagan, Isis Lovecruft, Griffin Boyce, Arthur Edelstein, and Karsten Loesing.

Lunar reported on behalf of the help desk and Mike Perry for the Tor Browser team.

Miscellaneous news

Anthony G. Basile announced a new release of tor-ramdisk, an i686 or x86_64 uClibc-based micro Linux distribution whose only purpose is to host a Tor server. Version 20140801 updates Tor to version 0.2.4.23, and the kernel to 3.15.7 with Gentoo’s hardened patches.

meejah has announced a new command-line application. carml is a versatile set of tools to “query and control a running Tor”. It can do things like “list and remove streams and circuits; monitor stream, circuit and address-map events; watch for any Tor event and print it (or many) out; monitor bandwidth; run any Tor control-protocol command; pipe through common Unix tools like grep, less, cut, etcetera; download TBB through Tor, with pinned certs and signature checking; and even spit out and run xplanet configs (with router/circuit markers)!” The application is written in Python and uses the txtorcon library. meejah describes it as early-alpha and warns that it might contain “serious, anonymity-destroying bugs”. Watch out!

Only two weeks left for the Google Summer of Code students, and the last round of reports but one: Juha Nurmi on the ahmia.fi project, Marc Juarez on website fingerprinting defenses, Amogh Pradeep on Orbot and Orfox improvements, Zack Mullaly on the HTTPS Everywhere secure ruleset update mechanism, Israel Leiva on the GetTor revamp, Quinn Jarrell on the pluggable transport combiner, Daniel Martí on incremental updates to consensus documents, Noah Rahman on Stegotorus enhancements, and Sreenatha Bhatlapenumarthi on the Tor Weather rewrite.

The Tails team is looking for testers to solve a possible incompatibility in one of the recommended installation procedures. If you have a running Tails system, a spare USB stick and some time, please help. Don’t miss the recommended command-line options!

The Citizen Lab Summer Institute took place at the University of Toronto from July 28 to 31. The event brought together policy and technology researchers who focus on Internet censorship and measurement. A lot of great work was presented including but not limited to a proposal to measure the chilling effect, ongoing work to deploy Telex, and several projects to measure censorship in different countries. Some Tor-related work was also presented: Researchers are working on understanding how the Tor network is used for political purposes. Another project makes use of TCP/IP side channels to measure the reachability of Tor relays from within China.

The Electronic Frontier Foundation wrote two blog posts to show why Tor is important for universities and how universities can help the Tor network. The first part explains why Tor matters, gives several examples of universities already contributing to the Tor network, and outlines a few reasons for hosting new Tor nodes. The second part gives actual tips on where to start, and how to do it best.

Tor help desk roundup

Users occasionally ask if there is any way to set Tor Browser as the default browser on their system. Currently this is not possible, although it may become possible in a future Tor Browser release. In the meantime, Tails provides another way to prevent accidentally opening hyperlinks in a non-Tor browser.

Easy development tasks to get involved with

Tor Launcher is the Tor controller shipped with Tor Browser, written in JavaScript. Starting with Firefox 14, the "nsILocalFile" interface has been deprecated and replaced with the "nsIFile" interface. What we should do is replace all instances of "nsILocalFile" with "nsIFile" and see if anything else needs fixing to make Tor Launcher still work as expected. If you know a little bit about Firefox extensions and want to give this a try, clone the repository, make the necessary changes, run "make package", and tell us whether something broke in interesting ways.


This issue of Tor Weekly News has been assembled by Lunar, harmony, Matt Pagan, Philipp Winter, David Fifield, Karsten Loesing, and Roger Dingledine.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-08-06 12:00:00

Today marks the first public beta of ChatSecure v13.2, an important update to the user interface, networking code, and overall stability. We’ve spent the last six months tracking down crashes, memory leaks, and performance issues, and have reached a stable, functional point which we want to share for public use. Reliability and simplicity are the goals as we move towards v14 in the next few months.

This beta also features a new account setup wizard that we are eager for feedback on. Our goal is to give new users a much simpler experience when setting up ChatSecure to connect to existing accounts or create new ones. We have also provided a “one-click burner” option to quickly create throwaway accounts that always require Tor and OTR encryption, for chatting with a single contact or even just a single conversation.


We have also removed some features (for now), with the goal of stripping down the experience and then building it back up again. For example, there is now ONE contact list, which merges all contacts from all accounts together. It can be easily searched, and you don’t have to worry about which account is active: you just select the person you want to communicate with, and we know which account they are associated with.

We have also removed the ability to manually set presence and status (for now), while we re-think a bit more how they should work in a mobile context. The vast majority of our users do not change either value anyway, but we do know that smartly managing online vs. away status, especially if you are logged in to the same account from multiple locations, is important. Expect an update here shortly; we’d love to have your feedback and fresh ideas on mobile presence.

You can currently access the beta directly via APK download (below), through our F-Droid Test Build “Nightlies” Repo, or through our Google+ Community Beta Access. We will roll out to our release repos and to Google Play once we get through our initial feedback on the beta.

Download ChatSecure v13.2 Beta 1 Now

APK: https://guardianproject.info/releases/ChatSecure-v13.2.0-BETA-1.apk

PGP Sig: https://guardianproject.info/releases/ChatSecure-v13.2.0-alpha-10.apk.asc


The source is tagged here: https://github.com/guardianproject/ChatSecureAndroid/releases/tag/13.2.0-beta-1

The release includes fixes from our completed v13 milestone, and our v14 milestone “Armadillo’s Agram”, which you can view on our project tracker (https://dev.guardianproject.info/projects/gibberbot/).


Guardian Project | The Guardian Project | 2014-08-05 15:35:54

On Friday, July 25th, the German Government raised concerns over the current chapter on the controversial Investor-State Dispute Settlement (“ISDS”) included in the trade agreement between the EU and Canada - known as CETA - currently being discussed on both sides of the Atlantic. This announcement is indicative of the growing resistance to ISDS in trade agreements taking place in the European Union at the moment.

Access | Access Blog | 2014-08-05 09:34:01

Following up on our research on secure Intent interactions, we are now announcing the first working version of the TrustedIntents library for Android. It provides methods for checking whether the sending and receiving apps of any Intent match a specified set of trusted app providers. It does this by “pinning” to the signing certificate of the APKs: the developer includes in the app a “pin” containing the signing certificate to trust, and TrustedIntents then checks Intents against the configured certificate pins. The library includes pins for the Guardian Project and Tor Project signing certificates. It is also easy to generate a pin using our new utility Checkey (available in our FDroid repo and in Google Play).

Checkey displaying the signing certificate of ChatSecure

We hope to make this process as dead simple as possible by providing developers with this library. TrustedIntents is currently set up as an “Android Library Project”, but it could easily be a jar too; the code is currently quite simple. The plan is to add more convenience methods and also support for TOFU/POP in addition to pinning. For usage examples, check out TrustedIntentsExample and the test project under the test/ subdir of the TrustedIntents library source repo.

Checkey includes a simple method for generating the certificate pins. The pin takes the form of a Java subclass of ApkSignaturePin, which provides all the needed utility functions. To create the pin file, first install the app whose certificate you want to trust; be sure to get it from a trusted source, since you are going to be trusting the signing certificate of the APK that you have installed. Launch Checkey and select that app in the list, and you will see the certificate details show up at the top. To generate the .java file for pinning Intents, select Generate Pin from the menu and send the resulting file to yourself. That file is the pin: include it in your project, then load it into TrustedIntents in onCreate() or wherever is appropriate:

TrustedIntents ti = TrustedIntents.get(context);
// Register the pin class generated by Checkey for later Intent checks
ti.isTrustedSigner(MySigningCertificatePin.class);

How to generate a pin file with Checkey

Gathering all the edge cases

One of the things I’ve focused on in the TrustedIntents library is thinking through all the possible edge cases and how to check for them. It is rare that the main part of a security check algorithm fails; it’s almost always the edge cases that are the gotcha.

One example: TrustedIntents should properly check all signing certificates on an APK. From what I’ve seen, it is rare that APKs are signed by more than one certificate, but the spec allows for it, and there might be exploits related to not handling that case.

Another thing is that TrustedIntents uses the same method that the Android code uses for comparing signatures: it does a byte-by-byte comparison of the signature byte arrays. Some apps are already doing something similar based on the hash of the signing certificate (i.e. the “fingerprint”). The Android technique will also be faster than hashing, since the hash algorithm has to read the whole signature byte array anyway.
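To illustrate the difference between the two checks, here is a sketch in Python rather than the library’s Java; the function names are made up for the example and are not part of TrustedIntents:

```python
import hashlib

def signatures_match(sig_a: bytes, sig_b: bytes) -> bool:
    # Android-style check: compare the raw signature byte arrays
    # directly (== on bytes compares every byte).
    return sig_a == sig_b

def fingerprints_match(sig_a: bytes, sig_b: bytes) -> bool:
    # Fingerprint-style check: hash each signature, then compare the
    # digests; the hash still has to read every byte of the input.
    return hashlib.sha256(sig_a).digest() == hashlib.sha256(sig_b).digest()
```

Both checks give the same answer for well-formed inputs; the direct comparison simply skips the hashing step.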

We’d love to have feedback, flames, comments, etc on any and all of this. Let us know how it works for you!

Guardian Project | The Guardian Project | 2014-07-31 03:29:23

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
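As a rough sketch of the idea (illustrative Python, not actual Tor code; the cell framing is greatly simplified), encoding a hidden service name as a pattern of relay vs. relay-early cells might look like:

```python
# Illustrative only: one bit of the name per cell, with RELAY_EARLY
# standing for a 1 bit and RELAY for a 0 bit.

def encode_signal(name):
    """Turn each bit of the name into a cell type."""
    cells = []
    for byte in name.encode("ascii"):
        for i in range(7, -1, -1):
            cells.append("RELAY_EARLY" if (byte >> i) & 1 else "RELAY")
    return cells

def decode_signal(cells):
    """Reassemble the name from the observed cell pattern."""
    out = bytearray()
    for i in range(0, len(cells), 8):
        byte = 0
        for cell in cells[i:i + 8]:
            byte = (byte << 1) | (cell == "RELAY_EARLY")
        out.append(byte)
    return out.decode("ascii")

# A malicious HSDir sends the pattern down the circuit; a colluding
# entry guard, seeing inbound RELAY_EARLY cells, recovers the name.
assert decode_signal(encode_signal("example.onion")) == "example.onion"
```

The key point is that any relay on the path can read this tag, since it lives in the cell headers rather than the encrypted payload.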

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
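To see why that capacity share matters, here is a back-of-the-envelope calculation (illustrative Python; it assumes guards are drawn independently in proportion to capacity, which simplifies Tor’s real path-selection logic):

```python
# Not Tor's real guard-selection code: a simplified model where each
# guard is chosen with probability equal to the attacker's share of
# guard capacity.

def p_malicious_guard(malicious_fraction, num_guards):
    """Chance that at least one entry guard belongs to the attacker."""
    return 1 - (1 - malicious_fraction) ** num_guards

# With ~6.4% of guard capacity and three entry guards (the default at
# the time), a client had roughly an 18% chance of picking at least one
# attacking guard; with a single guard, exposure drops to the raw 6.4%.
print(round(p_malicious_guard(0.064, 3), 3))  # 0.18
print(round(p_malicious_guard(0.064, 1), 3))  # 0.064
```

This is part of the motivation for reducing the number of entry guards per client.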

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guard...

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

Tor Blog | The Tor Blog blogs | 2014-07-30 13:00:00

Welcome to the thirtieth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor Browser 3.6.3 is out

A new pointfix release for the 3.6 series of the Tor Browser is out. Most components have been updated and a couple of small issues fixed. Details are available in the release announcement.

The release includes important security updates from Firefox. Be sure to upgrade! Users of the experimental meek bundles have not been forgotten.

New Tor stable and alpha releases

Two new releases of Tor are out. The new 0.2.5.6-alpha release “brings us a big step closer to slowing down the risk from guard rotation, and fixes a variety of other issues to get us closer to a release candidate”.

Once directory authorities have upgraded, they will “assign the Guard flag to the fastest 25% of the network”. Some experiments showed that “for the current network, this results in about 1100 guards, down from 2500.”

The complementary change to moving the number of entry guards down to one is the introduction of two new consensus parameters. NumEntryGuards and NumDirectoryGuards will respectively set the number of entry guards and directory guards that clients will use. The default for NumEntryGuards is currently three, but this will allow a reversible switch to one in the near future.
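A minimal sketch of how such a consensus parameter with a client-side default behaves (illustrative Python, not Tor’s implementation; the helper name is made up):

```python
# Clients honor the value the directory authorities publish in the
# consensus, falling back to a built-in default when it is absent.

def num_entry_guards(consensus_params):
    return consensus_params.get("NumEntryGuards", 3)

print(num_entry_guards({}))                     # 3 (built-in default)
print(num_entry_guards({"NumEntryGuards": 1}))  # 1 (after the switch)
```

Because the switch lives in the consensus rather than in client code, it can be flipped back without shipping a new release.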

Several important fixes have been backported to the stable branch in the 0.2.4.23 release. Source packages are available at the regular location. Binary packages have already landed in Debian (unstable, experimental), and the rest should follow shortly.

Security issue in Tails 1.1 and earlier

Several vulnerabilities have been discovered in I2P which is shipped in Tails 1.1 and earlier. I2P is an anonymous overlay network with many similarities to Tor. There was quite some confusion around the disclosure process of this vulnerability. Readers are encouraged to read what the Tails team has written about it.

Starting I2P in Tails normally requires a click on the relevant menu entry. Once started, the security issues can lead to the deanonymization of a Tails user who visits a malicious web page. As a matter of precaution, the Tails team recommends removing the “i2p” package each time Tails is started.

I2P has fixed the issue in version 0.9.14. It is likely to be included in the next Tails release, but the team is also discussing implementing more in-depth protections that would be required in order to keep I2P in Tails.

Reporting bad relays

“Bad” relays are malicious, misconfigured, or otherwise broken Tor relays. As anyone is free to volunteer bandwidth and processing power to spin up a new relay, users can encounter such bad relays once in a while. Getting them out of everyone’s circuits is thus important.

Damian Johnson and Philipp Winter have been working on improving and documenting the process of reporting bad relays. “While we do regularly scan the network for bad relays, we are also dependent on the wider community to help us spot relays which don’t act as they should” wrote Philipp.

When observing unusual behaviors, one way to learn about the current exit relay before reporting it is to use the Check service. This method can be inaccurate and tends to be a little bit cumbersome. The good news is that Arthur Edelstein is busy integrating more feedback on Tor circuits being used directly into the Tor Browser.

Miscellaneous news

The Tor Project, Inc. has completed its standard financial audit for the year 2013. IRS Form 990, Massachusetts Form PC, and the Financial Statements are now available for anyone to review. Andrew Lewman explained: “we publish all of our related tax documents because we believe in transparency. All US non-profit organizations are required by law to make their tax filings available to the public on request by US citizens. We want to make them available for all.”

CJ announced the release of orWall (previously named Torrific), a new Android application that “will force applications selected through Orbot while preventing unchecked applications to have network access”.

The Thali project aims to use hidden services to host web content. As part of the effort, they have written a cross-platform Java library. “The code handles running the binary, configuring it, managing it, starting a hidden service, etc.” wrote Yaron Goland.

Gareth Owen released a Java-based Tor research framework. The goal is to enable researchers to try things out without having to deal with the full Tor source. “At present, it is a fully functional client with a number of examples for hidden services and SOCKS. You can build arbitrary circuits, build streams, send junk cells, etc.” wrote Gareth.

Version 0.2.3 of BridgeDB has been deployed. Among other changes, owners of riseup.net email accounts can now request bridges through email.

The first candidate for Orbot 14.0.5 has been released. “This update includes improved management of the background processes, the ability to easily change the local SOCKS port (to avoid conflicts on some Samsung Galaxy and Note devices), and the fancy new notification dialog, showing your current exit IPs and country” wrote Nathan Freitas.

While working on guard nodes, George Kadianakis realized that “the data structures and methods of the guard nodes code are not very robust”. Nick Mathewson and George have been busy trying to come up with better abstractions. More brains working on the problem would be welcome!

Mike Perry posted “a summary of the primitives that Marc Juarez aims to implement for his Google Summer of Code project on prototyping defenses for Website Traffic Fingerprinting and follow-on research”. Be sure to have a look if you want to help prevent website fingerprinting attacks.

A new draft proposal “for making all relays also be directory servers (by default)” has been submitted by Matthew Finkel. Among the motivations, Matthew wrote: “In a network where every router is a directory server, the profiling and partitioning attack vector is reduced to the guard (for clients who use them), which is already in a privileged position for this. In addition, with the increased set size, relay descriptors and documents are more readily available and it diversifies the providers.” This change might make the transition to a single guard safer. Feedback welcome!

Noah Rahman reported on the progress of the Stegotorus Google Summer of Code project.

Tor help desk roundup

A number of Iranian Tor users have reported that Tor no longer works out of the box in Iran, and the Tor Metrics portal shows a corresponding drop in the number of directly-connecting users there. Collin Anderson investigated the situation and reported that the Telecommunication Company of Iran had begun blocking the Tor network by blacklisting connections to Tor’s directory authorities. Tor users can circumvent this block by getting bridges from BridgeDB and entering the bridge addresses they receive into their Tor Browser.
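Entering bridges by hand means adding the lines received from BridgeDB to the torrc; a sketch with placeholder addresses (use the ones BridgeDB actually sends you):

```
# torrc sketch: connect through bridges instead of public relays
# (the addresses below are placeholders)
UseBridges 1
Bridge 192.0.2.1:443
Bridge 192.0.2.2:9001
```

Tor Browser users can enter the same lines through the network settings dialog instead of editing the file directly.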


This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, harmony, and Philipp Winter.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Blog | The Tor Blog blogs | 2014-07-30 12:00:00

Speaking at Vodafone’s annual shareholder meeting in London on Tuesday, July 29, Access Senior Policy Counsel Peter Micek challenged the company to take a greater role in stopping government surveillance.

Access | Access Blog | 2014-07-29 23:14:49

Access urges expedient passage of law to reform NSA surveillance, but warns that additional reforms are needed.

Access | Access Blog | 2014-07-29 14:03:27

To help bridge the substantial differences in how user privacy is protected on the two sides of the Atlantic, the Safe Harbor was established to enable U.S. companies to lawfully transfer data without running afoul of EU data protection law. To make use of the Safe Harbor, companies voluntarily adhere to a set of principles, with oversight from the Federal Trade Commission (FTC), though to date enforcement of corporate policies and practices has been limited.

Access | Access Blog | 2014-07-29 07:49:55

We now have a wiki page which explains how bad relays should be reported to the Tor Project. A bad relay can be malicious, misconfigured, or otherwise broken. Once such a relay is reported, a subset of vigilant Tor developers (currently Roger, Peter, Damian, Karsten, and I) first tries to reproduce the issue. If it's reproducible, we attempt to get in touch with the relay operator and work on the issue together. However, if the relay has no contact information or we cannot reach the operator, we will resort to assigning flags (such as BadExit) to the reported relay which instructs clients to no longer use the relay in the future. In severe cases, we are also able to remove the relay descriptor from the network consensus which effectively makes the relay disappear. To get an idea of what bad behavior was documented in the past, have a look at this (no longer maintained) wiki page or these research papers.

We regularly scan the network for bad relays using exitmap but there are several other great tools such as Snakes on a Tor, torscanner, tortunnel, and DetecTor. We are also dependent on the wider community to help us spot relays which don't act as they should. So if you think that you stumbled upon a bad relay while using Tor, please report it to us by sending an email to bad-relays@lists.torproject.org. To find out which relay is currently being used as your exit relay, please visit our Check service. Just tell us the relay's IP address (Check tells you what your IP address appears to be) and the behavior you observed. Then, we can begin to investigate!
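A report is most useful when it pins down the relay and the behavior in one place; a sketch of putting one together before mailing it (the IP address, behavior, and timestamp below are placeholders; use the exit IP that the Check service showed you):

```shell
# Sketch: assemble a bad-relay report for bad-relays@lists.torproject.org.
# All values are placeholders for illustration.
exit_ip="203.0.113.7"
behavior="certificate on https://example.com differed from the one seen without Tor"
{
  printf 'Exit relay IP: %s\n' "$exit_ip"
  printf 'Observed behavior: %s\n' "$behavior"
  printf 'Date/time (UTC): %s\n' '2014-07-29 12:00'
} > bad-relay-report.txt
cat bad-relay-report.txt
```

The resulting text can then be pasted into an email to the bad-relays list.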

Tor Blog | The Tor Blog blogs | 2014-07-28 23:07:14

2013 was a great year for Tor. The increasing awareness of the lack of privacy online, increasing Internet censorship around the world, and general interest in encryption has helped continue to keep us in the public mind. As a result, our supporters have increased our funding to keep us on the leading edge of our field; this, of course, means you. We're happy to have more developers, advocates, and support volunteers. We're encouraged as the general public talks about Tor to their friends and neighbors. Join us as we continue to fight for your privacy and freedom on the Internet!

After completing the standard audit, our 2013 state and federal tax filings are available. We publish all of our related tax documents because we believe in transparency. All US non-profit organizations are required by law to make their tax filings available to the public on request by US citizens. We want to make them available for all.

Part of our transparency is simply publishing the tax documents for your review. The other part is publishing what we're working on in detail. We hope you'll join us in furthering our mission (a) to develop, improve and distribute free, publicly available tools and programs that promote free speech, free expression, civic engagement and privacy rights online; (b) to conduct scientific research regarding, and to promote the use of and knowledge about, such tools, programs and related issues around the world; (c) to educate the general public around the world about privacy rights and anonymity issues connected to Internet use.

All of this means you can look through our source code, including our design documents, and all open tasks, enhancements, and bugs available on our tracking system. Our research reports are available as well. From a technical perspective, all of this free software, documentation, and code allows you and others to assess the safety and trustworthiness of our research and development. On another level, we have a 10 year track record of doing high quality work, saying what we're going to do, and doing what we said.

Internet privacy and anonymity are more important, and more rare, than ever. Please help keep us going by getting involved, donating, or advocating for a free Internet with privacy, anonymity, and keeping control of your identity.

Tor Blog | The Tor Blog blogs | 2014-07-26 20:09:41

Update: SMS finally unblocked in Central African Republic

Access | Access Blog | 2014-07-25 15:30:46

USA FREEDOM Act likely to be considered on Senate floor. Here's a re-cap of the path the bill has taken to get to this point.

Access | Access Blog | 2014-07-24 13:50:26

Access and other groups introduce updates to International Principles one year after their introduction.

Access | Access Blog | 2014-07-23 20:38:22

We now have an official FDroid app repository that is available via three separate methods, to guarantee access to a trusted distribution channel throughout the world! To start with, you must have FDroid installed. Right now, I recommend using the latest test release since it has support for Tor and .onion addresses (earlier versions should work for non-onion addresses):

https://f-droid.org/repo/org.fdroid.fdroid_710.apk

In order to add this repo to your FDroid config, you can either click directly on these links on your devices and FDroid will recognize them, or you can click on them on your desktop, and you will be presented with a QR Code to scan. Here are your options:

From here on out, our old FDroid repo (https://guardianproject.info/repo) is considered deprecated and will no longer be updated. It will eventually be removed. Update to the new one!

Also, if you missed it before, all of our test builds are also available for testing only via FDroid. Just remember, the builds in the test repo are only debug builds, not fully trusted builds, so use them for testing only.

Automate it all!

This setup has three distribution channels that are all mirrors of a repo that is generated on a fully offline machine. This is only manageable because of lots of new automation features in the fdroidserver tools for building and managing app repos. You can now set up a USB thumb drive as the automatic courier for shuffling the repo from the offline machine to an online machine. The repo is generated, updated, and signed using fdroid update, then those signed files are synced to the USB thumb drive using fdroid server update. Then the online machine syncs the signed files from that USB thumb drive to multiple servers via SSH and Amazon S3 with a single command: fdroid server update. The magic is in setting up the config options and letting the tools do the rest.
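The knobs for this workflow live in the repo's config.py; a sketch of the mirroring setup described above (paths and bucket name are made up; the option names are from the fdroidserver documentation):

```
# config.py sketch for an offline/online mirrored FDroid repo
# (all values below are illustrative)

# on the offline machine: where `fdroid server update` syncs the signed
# repo, i.e. the USB courier drive
local_copy_dir = "/media/usb-courier/fdroid"

# on the online machine: pull from the courier drive first, then push out
sync_from_local_copy_dir = True

# mirror targets reached by a single `fdroid server update`
serverwebroot = "fdroid@repo.example.org:/var/www/fdroid"
awsbucket = "example-fdroid-repo"
```

With these set, the offline and online halves of the process each reduce to the one command the post describes.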

New Repo Signing Key

For part of this, I’ve completed the process of generating a new, fully offline fdroid signing key. So that means there is a new signing key for the FDroid repo, and the old repo signing key is being retired.

The fingerprints for this signing key are:

Owner: EMAILADDRESS=root@guardianproject.info, CN=guardianproject.info, O=Guardian Project, OU=FDroid Repo, L=New York, ST=New York, C=US
Issuer: EMAILADDRESS=root@guardianproject.info, CN=guardianproject.info, O=Guardian Project, OU=FDroid Repo, L=New York, ST=New York, C=US
Serial number: a397b4da7ecda034
Valid from: Thu Jun 26 15:39:18 EDT 2014 until: Sun Nov 10 14:39:18 EST 2041
Certificate fingerprints:
 MD5:  8C:BE:60:6F:D7:7E:0D:2D:B8:06:B5:B9:AD:82:F5:5D
 SHA1: 63:9F:F1:76:2B:3E:28:EC:CE:DB:9E:01:7D:93:21:BE:90:89:CD:AD
 SHA256: B7:C2:EE:FD:8D:AC:78:06:AF:67:DF:CD:92:EB:18:12:6B:C0:83:12:A7:F2:D6:F3:86:2E:46:01:3C:7A:61:35
 Signature algorithm name: SHA1withRSA
 Version: 1
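To verify a downloaded repo certificate, its fingerprints can be printed and compared byte-for-byte against the values published above. A sketch using openssl (we generate a throwaway self-signed certificate here as a stand-in for the real repo certificate, so the commands are runnable as-is):

```shell
# Sketch: print a certificate's SHA-256 fingerprint for comparison against
# a published value. The cert generated here is a stand-in; point the
# second command at the real repo certificate instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=repo-fingerprint-demo" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -fingerprint -sha256
```

Only if the printed fingerprint matches the published one exactly should the repo key be trusted.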

Guardian Project | The Guardian Project | 2014-07-01 00:26:39

August 2014: New browser development news here, including Orfox, our Firefox-based browser solution: https://lists.mayfirst.org/pipermail/guardian-dev/2014-August/003717.html

On Saturday, a new post was released by Xordern entitled IP Leakage of Mobile Tor Browsers. As the title says, the post documents flaws in mobile browser apps, such as Orweb and Onion Browser, both of which automatically route communication traffic over Tor. While we appreciate the care the author has taken, he does make the mistake of using the term “security” to lump together the need for total anonymity with the needs of anti-censorship, anti-surveillance, circumvention, and local device privacy. We do understand the seriousness of this bug, but at the same time, it is not an issue encountered regularly in the wild.

Here are thoughts on the three specific issues covered:

1) HTML5 Multimedia: This is a known issue which is not present on 100% of Android devices, but is definitely something to be concerned about if you access sites with HTML5 media player content on them. To us, it is a bug in Android, and not in Orweb, since all of the appropriate APIs are called when the browser is configured to proxy. However, it is a problem, and our solution remains to either use the transparent proxying feature of Orbot, or to use the Firefox Privacy configuration we provide here: https://guardianproject.info/apps/firefoxprivacy

2) Downloads leak: This is a new issue and one we are trying to reproduce on our end. If proxied downloads are indeed not working, we will issue a fix shortly. Again, using Firefox configured in the manner we prescribe, downloads would be proxied properly.

3) Unique Headers: The inclusion of a unique HTTP header issue in this list is confusing, because it has nothing to do with IP leakage. We have never claimed that a mobile browser can be 100% anonymous, and defending against full fingerprinting of browsers based on headers is something beyond what we are attempting to do at this point.

At this point, we still recommend Orweb for most people who want a very simple solution for a browser that is proxied through Tor. This will defeat mass traffic surveillance, network censorship, filtering by your mobile operator, work or school, and more. Orweb also keeps little data cached on the local system, and so protects against physical inspection and analysis of your device to retrieve your browser history. HOWEVER, if you do visit sites that have HTML5 media players in them, then we recommend you do not use Orweb, and again, that you use Firefox with our Privacy-Enhanced Configuration.

If you are truly worried about IP leakage, then you MUST root your phone, and use Orbot’s Transparent Proxying feature. This provides the best defense against leaking of your real IP. Even further, if you require even more assurance than that, you should follow Mike Perry’s Android Hardening Guide, which uses AFWall firewall in combination with Orbot, to block traffic to apps, and even stops Google Play from updating apps without your permission.

Finally, the best news is that we are making great progress on a fully privacy-by-default version of Firefox, under the project named “Orfox”. This is being done in partnership with the Tor Project, as a Google Summer of Code effort, along with the Orweb team. We aim to use as much of the same code that Tor Browser does to harden Firefox in our browser, and are getting close to an alpha release. If you are interested in testing the first prototype build, which addresses the HTML5 and download leak issues, you can find it here: https://guardianproject.info/releases/FennecForTor_GSoC_prototype.apk and track the project here: https://github.com/guardianproject/orfox

Guardian Project | The Guardian Project | 2014-06-30 16:43:51

determinism

We just released Lil’ Debi 0.4.7 into the Play Store and f-droid.org. It is not really different from the 0.4.6 release, except that it has a new, important property: the APK contents can be reproduced on other machines to the extent that the APK signature can be swapped between the official build and builds that other people have made from source, and the result will still be installable. This is known as a “deterministic build” or “reproducible build”: the build process is deterministic, meaning it runs the same way each time, and that results in an APK that is reproducible by others using only the source code. There are some limitations to this; for example, it has to be built using similar versions of OpenJDK 1.7 and the other build tools. But this process should work on any recent version of Debian or Ubuntu. Please try the process yourself, and let us know whether or not you can verify it:

The ultimate goal here is to make a process that reproduces the APK exactly, bit-for-bit, so that anyone who runs the process will end up with an APK that has the exact same hash sum. As far as I can tell, the only thing that needs to be fixed in Lil’ Debi’s process is the timestamps in the ZIP format that is the APK container.
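The timestamp problem, and the fix, can be demonstrated with any archive format; a sketch using GNU tar's mtime normalization, analogous to what normalizing the APK's ZIP timestamps would achieve (requires GNU tar 1.28 or newer for --sort):

```shell
# Sketch: with timestamps, owners, and file order pinned, two "builds" of
# the same content produce bit-identical archives.
mkdir -p repro-demo/build
echo 'hello' > repro-demo/build/app.txt
tar --sort=name --owner=0 --group=0 --numeric-owner \
    --mtime='2014-01-01 00:00:00 UTC' \
    -cf repro-demo/out1.tar -C repro-demo build
touch repro-demo/build/app.txt   # "rebuild": mtime changes, content does not
tar --sort=name --owner=0 --group=0 --numeric-owner \
    --mtime='2014-01-01 00:00:00 UTC' \
    -cf repro-demo/out2.tar -C repro-demo build
sha256sum repro-demo/out1.tar repro-demo/out2.tar  # identical hashes
```

Without the --mtime flag, the second archive would hash differently even though its contents are byte-for-byte the same, which is exactly the situation with the APK container.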

There are a number of other parallel efforts. The Tor Project has written a lot about their process for reproducible builds for the Tor Browser Bundle. Debian has made some progress in fixing the package builders to make the process deterministic.

Guardian Project | The Guardian Project | 2014-06-09 20:41:34

The latest Orbot is out soon on Google Play, and by direct download from the link below:
Android APK: https://guardianproject.info/releases/orbot-latest.apk
(PGP Sig)

The major improvements for this release are:

  • Uses the latest Tor 0.2.4.22 stable version
  • Fix for recent OpenSSL vulnerabilities
  • Addition of Obfuscated Bridges 3 (Obfs3) support
  • Switch from Privoxy to Polipo (semi-experimental)

and much more… see the CHANGELOG link below for all the details.

The tag commit message was “updating to 14.0.0 build 100!”
https://gitweb.torproject.org/orbot.git/commit/81bd61764c2c300bd1ba1e4de5b03350455470c1

and the full CHANGELOG is here: https://gitweb.torproject.org/orbot.git/blob_plain/81bd61764c2c300bd1ba1e4de5b03350455470c1:/CHANGELOG

Guardian Project | The Guardian Project | 2014-06-08 03:45:17

One thing we are very lucky to have is a good community of people willing to test out unfinished builds of our software. That is a very valuable contribution to the process of developing usable, secure apps. So we want to make this process as easy as possible while keeping it as secure and private as possible. To that end, we have set up an FDroid repository of apps generated from the test builds that our build server generates automatically every time we publish new code.

After this big burst of development focused on FDroid, it has become clear that FDroid has lots of promise for becoming a complete solution for the whole process of delivering software from developers to users. We have tried other ways of delivering test builds like HockeyApp and Google Play’s Alpha and Beta channels and have found them lacking. The process did not seem as easy as it should be. And of course, both of them leave a lot to be desired when it comes to privacy of the users. So this is the first step in hopefully a much bigger project.

To use our new test build service, first install FDroid by downloading it from the official source: https://f-droid.org. Then using a QR Code scanner like Barcode Scanner, just scan the QR Code below, and send it to FDroid Repositories. You can also browse to this page on your Android device, and click the link below to add it to FDroid:

dev.guardianproject.info

You can also use our test repo via an anonymized connection using the Tor Hidden Service (as of this moment, that means downloading an official FDroid v0.71 build). Just get Orbot and turn it on, and the following .onion address will automatically work in FDroid, as long as you have a new enough version (0.69 or later).

k6e4p7yji2rioxbm.onion

Guardian Project | The Guardian Project | 2014-06-06 21:17:01

We’re making the Internet more secure, by taking part in #ResetTheNet https://resetthenet.org

Guardian Project | The Guardian Project | 2014-06-04 23:07:14

FreedomBox version 0.2

For those of you who have not heard through the mailing list or in the project's IRC channel (#freedombox on http://www.oftc.net/), FreedomBox has reached the 0.2 release. This second release is still intended for developers but represents a significant maturation of the components we have discussed here in the past and a big step forward for the project as a whole.

0.2 features

Plinth, our user interface tool, is now connected to a number of running systems on the box including PageKite, an XMPP chat server, local network administration if you want to use the FreedomBox as a home router, and some diagnostic and general system configuration tools. Plinth also has support for downloading and installing ownCloud.

Additionally, the 0.2 release installs Tor and configures it as a bridge. This default configuration does not actually send any of your traffic through Tor or allow those sending traffic over Tor to enter the public net using your connection. Acting as a bridge simply moves data around within the Tor network, much like adding an additional participant to a game of telephone. The more bridges there are in the Tor network, the harder it is to track where that traffic actually comes from.
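In torrc terms, a non-exit bridge like the one FreedomBox configures boils down to a few lines; a sketch (option names per the tor manual, values illustrative):

```
# torrc sketch: relay traffic within the Tor network as a bridge,
# but never act as an exit (illustrative values)
BridgeRelay 1
ORPort 9001
ExitPolicy reject *:*
PublishServerDescriptor bridge
```

The reject-everything exit policy is what guarantees that no Tor traffic leaves the public net through your connection.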

Availability and reach

As discussed previously, one of the ways we are working to improve privacy and security for computer users is by making the tools we include in FreedomBox available outside of particular FreedomBox images or hardware. We are working towards that goal by adding the software we use to the Debian community Linux distribution upon which the FreedomBox is built. I am happy to say that Plinth, PageKite, ownCloud, as well as our internal box configuration tool freedombox-setup are now all available in the Jessie version of Debian.

In addition to expanding the list of tools available in Debian, we have also expanded the range of Freedom-maker, the tool that builds full images of FreedomBox to deploy directly onto machines like our initial hardware target, the DreamPlug. Freedom-maker can now build images for the DreamPlug, VirtualBox virtual machines, and the Raspberry Pi. Now developers can test and contribute to FreedomBox using anything from a virtual machine to one of the more than two million Raspberry Pis out there in the world.

The future

Work has really been speeding up on the FreedomBox in 2014 and significant work has been done on new cryptographic security tools for a 0.3 release. As always, the best places to find out more are the wiki, the mailing list and the IRC channel.

FreedomBox | news | 2014-05-12 21:07:40

Hardware Security Modules (aka smartcards, chipcards, etc.) provide a secure way to store and use cryptographic keys, while actually making the whole process a bit easier. In theory, one USB-thumb-drive-like thing could manage all of the crypto keys you use in a way that makes them much harder to steal. That is the promise. The reality is that the world of Hardware Security Modules (HSMs) is a massive, scary minefield of endless technical gotchas, byzantine standards (PKCS#11!), technobabble, and incompatibilities. Before I dive too much into ranting about the days of my life wasted trying to find a clear path through this minefield, I’m going to tell you about one path I did find through to solve a key piece of the puzzle: Android and Java package signing.

For this round, I am covering the Aventra MyEID PKI Card. I bought a SIM-sized version to fit into an ACS ACR38T-IBS-R smartcard reader (it is apparently no longer made, and the ACT38T-D1 is meant to replace it). Why such specificity, you may ask? Because you have to be sure that your smartcard will work with your reader, that your reader will have a working driver for your system, and that your smartcard will have a working PKCS#11 driver so that software can talk to it. Thankfully there is the OpenSC project to cover the PKCS#11 part: it implements the PKCS#11 communications standard for many smartcards. On my Ubuntu/precise system, I had to install an extra driver, libacr38u, to get the ACR38T reader to show up on my system.

So let’s start there and get this thing to show up! First we need some packages. The OpenSC packages are out-of-date in a lot of releases; you need version 0.13.0-4 or newer, so you have to add our PPA (Personal Package Archive, fingerprint: F50E ADDD 2234 F563) to get current versions, which include a specific fix for the Aventra MyEID:

sudo add-apt-repository ppa:guardianproject/ppa
sudo apt-get update
sudo apt-get install opensc libacr38u libacsccid1 pcsc-tools usbutils

First thing, I use lsusb in the terminal to see what USB devices the Linux kernel sees, and thankfully it sees my reader:

$ lsusb
Bus 005 Device 013: ID 072f:9000 Advanced Card Systems, Ltd ACR38 AC1038-based Smart Card Reader

Next, it’s time to try pcsc_scan to see if the system can see the smartcard installed in the reader. If everything is installed and in order, then pcsc_scan will report this:

$ pcsc_scan 
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau 
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
0: ACS ACR38U 00 00

Thu Mar 27 14:38:36 2014
Reader 0: ACS ACR38U 00 00
  Card state: Card inserted, 
  ATR: 3B F5 18 00 00 81 31 FE 45 4D 79 45 49 44 9A
[snip]

If pcsc_scan cannot see the card, then things will not work. Try re-seating the smartcard in the reader, make sure you have all the right packages installed, and check that you can see the reader in lsusb. If your smartcard or reader cannot be read, then pcsc_scan will report something like this:

$ pcsc_scan 
PC/SC device scanner
V 1.4.18 (c) 2001-2011, Ludovic Rousseau 
Compiled with PC/SC lite version: 1.7.4
Using reader plug'n play mechanism
Scanning present readers...
Waiting for the first reader...

Moving right along… now pcscd can see the smartcard, so we can start playing with using the OpenSC tools. These are needed to setup the card, put PINs on it for access control, and upload keys and certificates to it. The last annoying little preparation tasks are finding where opensc-pkcs11.so is installed and the “slot” for the signing key in the card. These will go into a config file which keytool and jarsigner need. To get this info on Debian/Ubuntu/etc, run these:

$ dpkg -S opensc-pkcs11.so
opensc: /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
$ pkcs11-tool --module /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so \
>     --list-slots
Available slots:
Slot 0 (0xffffffffffffffff): Virtual hotplug slot
  (empty)
Slot 1 (0x1): ACS ACR38U 00 00
  token label        : MyEID (signing)
  token manufacturer : Aventra Ltd.
  token model        : PKCS#15
  token flags        : rng, login required, PIN initialized, token initialized
  hardware version   : 0.0
  firmware version   : 0.0
  serial num         : 0106004065952228

This is the info needed to put into a opensc-java.cfg, which keytool and jarsigner need in order to talk to the Aventra HSM. The name, library, and slot fields are essential, and the description is helpful. Here is how the opensc-java.cfg using the above information looks:

name = OpenSC
description = SunPKCS11 w/ OpenSC Smart card Framework
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
slot = 1

Now everything should be ready for initializing the HSM, generating a new key, and uploading that key to the HSM. This process generates the key and certificate, puts them into files, then uploads them to the HSM. That means you should only run this process on a trusted machine, certainly with some kind of disk encryption, and preferably on a machine that is not connected to a network, running an OS that has never been connected to the internet. A live CD is one good example; I recommend Tails on a USB thumb drive running with the secure persistent store on it (we have been working here and there on making a Tails-based distro specifically for managing keys; we call it CleanRoom).

HSM plugged into a laptop

First off, the HSM needs to be initialized, then set up with a signing PIN and a “Security Officer” PIN (which means basically an “admin” or “root” PIN). The signing PIN is the one you will use for signing APKs; the “Security Officer PIN” (SO-PIN) is used for modifying the HSM setup, like uploading new keys, etc. Because there are so many steps in the process, I’ve written up scripts to run through all of the steps. If you want to see the details, read the scripts.

The next step is to generate the key using openssl and upload it to the HSM. Then the HSM needs to be “finalized”, which means the PINs are activated and keys can no longer be uploaded. Don’t worry: as long as you have the SO-PIN, you can erase the HSM and re-initialize it. But be careful! Many HSMs will permanently self-destruct if you enter the wrong PIN too many times; some will do that after only three wrong PINs! As long as you have not finalized the HSM, any PIN will work, so play around a lot with it before finalizing it. Run the init and key upload procedure a few times, try signing an APK, etc.

Take note: the script will generate a random password for the secret files, then echo that password when it completes, so make sure no one can see your screen when you generate the real key. Alright, here goes!

code $ git clone https://github.com/guardianproject/smartcard-apk-signing
code $ cd smartcard-apk-signing/Aventra_MyEID_Setup
Aventra_MyEID_Setup $ ./setup.sh 
Edit pkcs15-init-options-file-pins to put in the PINs you want to set:
Aventra_MyEID_Setup $ emacs pkcs15-init-options-file-pins
Aventra_MyEID_Setup $ ./setup.sh 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to erase card.
PIN [Security Officer PIN] required.
Please enter PIN [Security Officer PIN]: 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
About to create PKCS #15 meta structure.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to generate key.
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to generate key.
next generate a key with ./gen.sh then ./finalize.sh
Aventra_MyEID_Setup $ cd ../openssl-gen/
openssl-gen $ ./gen.sh 
Usage: ./gen.sh "CertDName" [4096]
  for example:
  "/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info"
openssl-gen $ ./gen.sh "/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info"
Generating key, be patient...
2048 semi-random bytes loaded
Generating RSA private key, 2048 bit long modulus
.......................................+++
..................................................+++
e is 65537 (0x10001)
Signature ok
subject=/C=US/ST=New York/O=Guardian Project Test/CN=test.guardianproject.info/emailAddress=test@guardianproject.info
Getting Private key
writing RSA key
Your HSM will prompt you for 'Security Officer' aka admin PIN, wait for it!
Enter destination keystore password:  
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
[Storing keystore]
Key fingerprints for reference:
MD5 Fingerprint=90:24:68:F3:F3:22:7D:13:8C:81:11:C3:A4:B6:9A:2F
SHA1 Fingerprint=3D:9D:01:C9:28:BD:1F:F4:10:80:FC:02:95:51:39:F4:7D:E7:A9:B1
SHA256 Fingerprint=C6:3A:ED:1A:C7:9D:37:C7:B0:47:44:72:AC:6E:FA:6C:3A:B2:B1:1A:76:7A:4F:42:CF:36:0F:A5:49:6E:3C:50
The public files are: certificate.pem publickey.pem request.pem
The secret files are: secretkey.pem certificate.p12 certificate.jkr
The passphrase for the secret files is: fTQ*he-[:y+69RS+W&+!*0O5i%n
openssl-gen $ cd ../Aventra_MyEID_Setup/
Aventra_MyEID_Setup $ ./finalize.sh 
Using reader with a card: ACS ACR38U 00 00
Connecting to card in reader ACS ACR38U 00 00...
Using card driver MyEID cards with PKCS#15 applet.
Found MyEID
About to delete object(s).
Your HSM is ready for use! Put the secret key files someplace encrypted and safe!

Now your HSM should be ready for signing. You can try it out with keytool to see what is on it, using the signing PIN, not the Security Officer PIN:

smartcard-apk-signing $ /usr/bin/keytool -v \
>     -providerClass sun.security.pkcs11.SunPKCS11 \
>     -providerArg opensc-java.cfg \
>     -providerName SunPKCS11-OpenSC -keystore NONE -storetype PKCS11 \
>     -list
Enter keystore password:  

Keystore type: PKCS11
Keystore provider: SunPKCS11-OpenSC

Your keystore contains 1 entry

Alias name: 1
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: EMAILADDRESS=test@guardianproject.info, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Issuer: EMAILADDRESS=test@guardianproject.info, CN=test.guardianproject.info, O=Guardian Project Test, ST=New York, C=US
Serial number: aa6887be1ec84bde
Valid from: Fri Mar 28 16:41:26 EDT 2014 until: Mon Aug 12 16:41:26 EDT 2041
Certificate fingerprints:
	 MD5:  90:24:68:F3:F3:22:7D:13:8C:81:11:C3:A4:B6:9A:2F
	 SHA1: 3D:9D:01:C9:28:BD:1F:F4:10:80:FC:02:95:51:39:F4:7D:E7:A9:B1
	 SHA256: C6:3A:ED:1A:C7:9D:37:C7:B0:47:44:72:AC:6E:FA:6C:3A:B2:B1:1A:76:7A:4F:42:CF:36:0F:A5:49:6E:3C:50
	 Signature algorithm name: SHA1withRSA
	 Version: 1


*******************************************
*******************************************
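
The opensc-java.cfg file passed via -providerArg is a standard Java SunPKCS11 provider configuration file. Its contents are not shown in this walkthrough, but a minimal version looks roughly like the following sketch (the module path is an assumption and varies by distribution and architecture):

```ini
# opensc-java.cfg — hypothetical minimal SunPKCS11 configuration.
# "name" becomes the provider suffix, i.e. SunPKCS11-OpenSC.
name = OpenSC
# Path to the OpenSC PKCS#11 module (check your distribution's location).
library = /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
```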

And let’s try signing an actual APK with the arguments that Google recommends, again using the signing PIN:

smartcard-apk-signing $ /usr/bin/jarsigner -verbose \
>     -providerClass sun.security.pkcs11.SunPKCS11 \
>     -providerArg opensc-java.cfg -providerName SunPKCS11-OpenSC \
>     -keystore NONE -storetype PKCS11 \
>     -sigalg SHA1withRSA -digestalg SHA1 \
>     bin/LilDebi-release-unsigned.apk 1
Enter Passphrase for keystore: 
   adding: META-INF/1.SF
   adding: META-INF/1.RSA
  signing: assets/busybox
  signing: assets/complete-debian-setup.sh
  signing: assets/configure-downloaded-image.sh
  signing: assets/create-debian-setup.sh
  signing: assets/debian-archive-keyring.gpg
  signing: assets/debootstrap.tar.bz2
  signing: assets/e2fsck.static
  signing: assets/gpgv
  signing: assets/lildebi-common
[snip]

Now we have a working, if elaborate, process for setting up a Hardware Security Module for signing APKs. Once the HSM is set up, using it should be quite straightforward. The next steps are to work out as many kinks in this process as possible so that this can become the default way to sign APKs. That means things like figuring out how Java can be pre-configured to use OpenSC in the Debian package, as well as including all relevant fixes in the pcscd and opensc packages. The ultimate goal is to add support for using HSMs to Android’s generated build files, like the build.xml for ant that is generated by android update project. Then people could just plug in the HSM, run ant release, and have a signed APK!

Guardian Project | The Guardian Project | 2014-03-28 20:54:39

An interesting turn of events (which we are very grateful for!)

******

FOR IMMEDIATE RELEASE
Diana Del Olmo, diana@guardianproject.info
Nathan Freitas (in Austin / SXSW) +1.718.569.7272
nathan@guardianproject.info

Get press kit and more at: https://guardianproject.info/press

Permalink:
https://docs.google.com/document/d/1kI6dV6nPSd1z3MkxSTMRT8P9DcFQ9uOiNFcUlGTjjXA/edit?usp=sharing

GOOGLE EXECUTIVE CHAIRMAN ERIC SCHMIDT AWARDS GUARDIAN PROJECT A “NEW DIGITAL AGE” GRANT

The Guardian Project is among the ten grantee organizations chosen to receive a $100,000 New Digital Age Grant for its extensive work creating open source software to help citizens overcome government-sponsored censorship.

[Image: Eric Schmidt portrait, courtesy of telegraph.co.uk]

NEW YORK, NY (March 10, 2014)—Ten non-profits in the U.S. and abroad
have been named recipients of New Digital Age Grants, funded through a
$1 million donation by Google executive chairman Eric Schmidt. The
Guardian Project is one of two New York City-based groups receiving an
award.

The New Digital Age Grants were established to highlight organizations
that use technology to counter the global challenges Schmidt and
Google Ideas Director Jared Cohen write about in their book THE NEW
DIGITAL AGE, including government-sponsored censorship, disaster
relief and crime fighting. The book was released in paperback on March 4.

“The recipients chosen for the New Digital Age Grants are doing some
very innovative and unique work, and I’m proud to offer them this
encouragement,” said Schmidt. “Five billion people will encounter the
Internet for the first time in the next decade. With this surge in the
use of technology around the world—much of which we in the West take
for granted—I felt it was important to encourage organizations that
are using it to solve some of our most pressing problems.”

Guardian Project founder, Nathan Freitas, created the project based on
his first-hand experience working with Tibetan human rights and
independence activists for over ten years. Today, March 10th, is the
55th anniversary of the Tibetan Uprising Day against Chinese
occupation. “I have seen first hand the toll that online censorship,
mobile surveillance and digital persecution can take on a culture,
people and movement,” said Freitas. “I am elated to know Mr. Schmidt
supports our effort to fight back against these unjust global trends
through the development of free, open-source mobile security
capabilities.”

Many of the NDA grantees, such as Aspiration, Citizen Lab and OTI,
already work with the Guardian Project on defending digital rights,
training high-risk user groups and doing core research and development
of anti-censorship and surveillance defense tools and training.

The New Digital Age Grants are being funded through a private donation
by Eric and Wendy Schmidt.

About the Guardian Project

The Guardian Project is a global collective of software developers
(hackers!), designers, advocates, activists and trainers who develop
open source mobile security software and operating system
enhancements. They also create customized mobile devices to help
individuals communicate more freely and protect themselves from
intrusion and monitoring. The effort specifically focuses on users who
live or work in high-risk situations, and who often face constant
surveillance and intrusion attempts into their mobile devices and
communication streams.

Since it was founded in 2009, the Guardian Project has developed more
than a dozen mobile apps for Android and iOS with over two million
downloads and hundreds of thousands of active users. In the last five
years the Guardian Project has partnered with prominent open source
software projects, activist groups, NGOs, commercial partners and
news organizations to support their mobile security software
capabilities. This work has been made possible with funding from
Google, UC Berkeley with the MacArthur Foundation, Avaaz, Internews,
Open Technology Fund, WITNESS, the Knight Foundation, Benetech, and
Free Press Unlimited. Through work on partner projects like The Tor
Project, Commotion mesh and StoryMaker, we have received indirect
funding from both the US State Department through the Bureau of
Democracy, Human Rights and Labor Internet Freedom program, and the
Dutch Ministry of Foreign Affairs through HIVOS.

The Guardian Project is very grateful for this personal donation and
is happy to have its work recognized by Mr. Schmidt. This grant will
allow us to continue our work on ensuring users around the world have
access to secure, open and trustworthy mobile messaging services. We
will continue to improve reliability and security of ChatSecure for
Android and iOS and integrate the OStel voice and video calling
services into the app for a complete secure communications solution.
We will support the work of the new I.M.AWESOME (Instant Messaging
Always Secure Messaging) Coalition focused on open-standards,
decentralized secure mobile messaging, and voice and video
communications. Last, but not least, we will improve device testing,
support and outreach to global human rights defenders, activists and
journalists, bringing the technology that the Guardian Project has
developed to the people that need it most.

About the NDA Recipients

Aspiration in San Francisco, CA, provides deep mentorship to build
tech capacity supporting Africa, Asia and beyond. Their NDA grant will
grow their capacity-building programs for the Global South, increasing
technical capacity to meet local challenges.

C4ADS, a nonprofit research team in Washington, DC, is at the cutting
edge of unmasking Somali pirate networks, Russian arms-smuggling
rings, and other illicit actors entirely through public records. Their
data-driven approach and reliance on public documents has enormous
potential impact, and the grant will help with their next big project.

The Citizen Integration Center in Monterrey, Mexico has developed an
innovative public safety broadcast and tipline system on social media.
Users help their neighbors—and the city—by posting incidents and
receiving alerts when violence is occurring in their communities. The
grant will help them broaden their reach.

The Citizen Lab at the Munk School of Global Affairs at the University
of Toronto, Canada, is a leading interdisciplinary laboratory
researching and exposing censorship and surveillance. The grant will
support their technical reconnaissance and analysis, which uniquely
combines experts and techniques from computer science and the social
sciences.

The Guardian Project, based in New York City, develops open-source
secure communication tools for mobile devices. ChatSecure and OSTel,
their encrypted messaging, voice and video communication services,
both built on open standards, have earned the trust of tens of
thousands of users in repressively-censored environments, and the
grant will advance their technical development.

The Igarapé Institute in Rio de Janeiro, Brazil, focuses on violence
prevention and reduction through technology. Their nonprofit work on
anti-crime projects combines the thoughtfulness of a think tank with
the innovative experimentation of a technology design shop. The grant
will support their research and development work.

KoBo Toolbox in Cambridge, MA, allows fieldworkers in far-flung
conflict and disaster zones to easily gather information without
active Internet connections. The grant will help them revamp their
platform to make it easier and faster to deploy.

The New Media Advocacy Project in New York, NY, is a nonprofit
organization developing mobile tools to map violence and
disappearances in challenging environments. The grant will allow them
to refine their novel, interactive, video-based interfaces.

The Open Technology Institute at the New America Foundation in
Washington, DC, advances open architectures and open-source
innovations for a free and open Internet. The grant will assist their
work with the Measurement Lab project to objectively measure and
report Internet interference from repressive governments.

Portland State University in Portland, OR, is leading ground-breaking
research on network traffic obfuscation techniques, which improve
Internet accessibility for residents of repressively-censored
environments. The grant will support the research of Professor Tom
Shrimpton and his lab, who—with partners at the University of
Wisconsin and beyond—continue to push the boundaries with new
techniques like Format Transforming Encryption.

Guardian Project | The Guardian Project | 2014-03-10 16:22:34

In September, I was pleased to present a talk on the importance of making cryptography and privacy technology accessible to the masses at TED’s Montréal event. In my 16-minute talk, I discussed threats to Internet freedom and privacy, political perspectives, as well as the role open technologies such as Cryptocat can play in this field.

The talk is available here, on the TEDx YouTube channel.

CryptoCat | Cryptocat Development Blog | 2013-10-19 16:43:32

Independent Cryptocat server operators:

We’re issuing a mandatory update for Cryptocat server configuration. Specifically, the ejabberd XMPP server configuration must be updated to include support for mod_ping.

Click here for Cryptocat server setup instructions, including the updated configuration for ejabberd.

We’re doing this to give upcoming Cryptocat versions better connection handling, and to introduce a new auto-reconnect feature! Cryptocat versions 2.1.14 and above will not connect to servers without this configuration update. Cryptocat 2.1.14 is expected to be released within the coming weeks.
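
The linked setup instructions are the authoritative reference; as a rough sketch of what the change involves (the interval value here is an assumption, not the required configuration), enabling mod_ping in ejabberd’s classic Erlang-style config means adding it to the modules list:

```erlang
%% In ejabberd.cfg, inside the {modules, [...]} section:
{mod_ping, [{send_pings, true},      %% server actively pings idle clients
            {ping_interval, 60}]}    %% seconds between pings (illustrative)
```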

CryptoCat | Cryptocat Development Blog | 2013-09-11 19:38:58

This morning, we began pushing Cryptocat 2.1.13, a big update, to all Cryptocat-compatible platforms (Chrome, Safari, Firefox and OS X). This update brings many new features and improvements, as well as some small security fixes. The full change log is available in our code repository, but we’ll also list the big new things below. The update is still being pushed, so it may take around 24 hours to become available in your area.

Important notes

First things first: encrypted group chat in Cryptocat 2.1.13 is not backwards compatible with any prior version. Encrypted file sharing and private one-on-one chat will still work, but we strongly recommend that you update and remind your friends to do the same. Also, the block feature has been changed to an ignore feature: you can still ignore group chat messages from others, but you cannot block them from receiving your own.

New feature: Authenticate with secret questions!

Secret question authentication (SMP)

An awesome new feature we’re proud to introduce is secret question authentication, via the SMP protocol. Now, if you are unable to authenticate your friend’s identity using fingerprints, you can simply ask them a question to which only they would know the answer. They will be prompted to answer; if the answers match, a cryptographic process known as SMP will ensure that your friend is properly authenticated. We hope this new feature will make it easier to authenticate your friends’ identities, which can be time-consuming in a conversation with five or more friends. This feature was designed and implemented by Arlo Breault and Nadim Kobeissi.

New Feature: Message previews

Message previews

Another exciting new feature is message previews: Messages from buddies you’re not currently chatting with will appear in a small blue bubble, allowing you to quickly preview messages you’re receiving from various parties, without switching conversations. This feature was designed by Meghana Khandekar at the Cryptocat Hackathon and implemented by Nadim Kobeissi.

Security improvements

Better warnings for participants.

We’ve addressed a few security issues. The first is a recurring issue where Cryptocat users could send group chat messages to only some participants of a group chat and not to others. This issue had popped up before, and we hope we won’t have to address it again; in a group chat scenario, it turns out that resolving this kind of situation is more difficult than previously thought.

The second issue is related to private chat accepting unencrypted messages from non-Cryptocat clients. We’ve chosen to make Cryptocat refuse to display any unencrypted messages it receives, dropping them instead.

Finally, we’ve added better warnings. In case of suspicious cryptographic activity (such as bad message authentication codes or reuse of initialization vectors), Cryptocat will display a general warning regarding the offending user.

More improvements and fixes

This is a really big update, and there are a lot more improvements and small bug fixes spread all around Cryptocat. We’ve fixed an issue that prevented Windows users from sending encrypted ZIP file transfers, made logout messages more reliable, added timestamps to join/part messages, and made Cryptocat for Firefox a lot snappier. These are only a handful of the many small improvements and fixes in Cryptocat 2.1.13.

We hope you enjoy it! It should be available as an update for your area within the next 24 hours.

CryptoCat | Cryptocat Development Blog | 2013-09-04 16:56:25

We’re excited to announce the new Cryptocat Encrypted Chat Mini Guide! This printable, single-page two-sided PDF lets you print out, cut up and staple together a small guide you can use to introduce friends, colleagues and anyone else to the differences between regular instant messaging and encrypted chat, how Cryptocat works, why fingerprints are important, and Cryptocat’s current limitations. Download the PDF and print your own!

The goal of the Cryptocat Mini Guide is to quickly explain to anyone how Cryptocat is different, focusing on an easy-to-understand cartoon approach while also communicating important information such as warnings and fingerprint authentication.

Special thanks go to Cryptocat’s Associate Swag Coordinator, Ingrid Burrington, for designing the guide and getting it done. The Cryptocat Mini Guide was one of the many initiatives that started at last month’s hackathon, and we’re very excited to see volunteers come up with fruitful initiatives. You’ll be seeing this guide distributed at conferences and other events where Cryptocat is present. And don’t forget to print your own — we even put dashed lines where you’re supposed to cut with scissors.

CryptoCat | Cryptocat Development Blog | 2013-09-01 20:31:44

Open Source Veteran Bdale Garbee Joins FreedomBox Foundation Board

NEW YORK, March 10, 2011-- The FreedomBox Foundation, based here, today announced that Bdale Garbee has agreed to join the Foundation's board of directors and chair its technical advisory committee. In that role, he will coordinate development of the FreedomBox and its software.

Garbee is a longtime leader and developer in the free software community. He serves as Chief Technologist for Open Source and Linux at Hewlett Packard, is chairman of the Debian Technical Committee, and is President of Software in the Public Interest, the non-profit organization that provides fiscal sponsorship for the Debian GNU/Linux distribution and other projects. In 2002, he served as Debian Project Leader.

"Bdale has excelled as a developer and leader in the free software community. He is exactly the right person to guide the technical architecture of the FreedomBox," said Eben Moglen, director of the FreedomBox Foundation.

"I'm excited to work on this project with such an enthusiastic community," said Garbee. "In the long term, this may prove to be the most important thing I'm doing right now."

The Foundation's formation was announced in Brussels on February 4, and it is actively seeking funds; it recently raised more than $80,000 in less than fifteen days on Kickstarter.

About the FreedomBox Foundation

The FreedomBox project is a free software effort that will distribute computers that allow users to seize control of their privacy, anonymity and security in the face of government censorship, commercial tracking, and intrusive internet service providers.

Eben Moglen is Professor of Law at Columbia University Law School and the Founding Director of the FreedomBox Foundation, a new non-profit incorporated in Delaware. It is in the process of applying for 501(c)(3) status. Its mission is to support the creation and worldwide distribution of FreedomBoxes.

For further information, contact Ian Sullivan at press@freedomboxfoundation.org or see http://freedomboxfoundation.org.

FreedomBox | news | 2013-08-21 18:44:58

Cryptocat Hackathon: Day 1

Cryptocat’s first-ever hackathon was a great success. With the collaboration of OpenITP and the New America NYC office, we were able to bring together dozens of individuals, including programmers, designers, technologists, journalists, and privacy enthusiasts from around the world, to share a weekend of discussions, workshops and straight old-fashioned Cryptocat hacking in New York City.

During the weekend, we organized a coding track, led by myself (Nadim), as well as a journalist security track led by Carol Waters of Internews, with the participation of the Guardian Project. The coding track brought together volunteer programmers, documentation writers and user interface designers to work on open issues, suggest new features, discover and fix bugs, and contribute to making our documentation more readable.

Ingrid Burrington's work-in-progress Cryptocat Quick Start Guide.

Many people showed up, with many great initiatives and ideas. Off the top of my head, I remember Meghana Khandekar, of the New York School of Visual Arts, who contributed ideas for user interface improvements. Steve Thomas and Joseph Bonneau helped with discovering, addressing and discussing encryption-related bugs and improvements. Griffin Boyce, from the Open Technology Institute, helped with organizing the hackathon and contributed the first working build of Cryptocat for newer Opera browsers. Ingrid Burrington worked on handout-style Cryptocat quick-start guides. David Huerta and Christopher Casebeer further contributed some code-level and design-level usability improvements. I worked on implementing a user interface for SMP authentication in Cryptocat.

We were very excited to have a team of medical doctors and developers figuring out a Cryptocat-based app for sharing medical records while fully respecting privacy laws. The team was looking to implement a medium for comparing X-ray images over Cryptocat encrypted chat, among other medical field related features.

Cryptocat Hackathon: Day 1

The journalist security track gave a handful of journalists and privacy enthusiasts the opportunity for expert hands-on training in techniques that can help them maintain their privacy and the privacy of their sources, online and offline. In addition, with the help of the Guardian Project, we were able to introduce apps such as Gibberbot and OSTel for secure mobile communications.

We were very pleased with the success of the first Cryptocat hackathon. Code was written, bugs were fixed, food was shared, and prize Cryptocat t-shirts were won. I sincerely thank OpenITP and New America NYC for their organizational aid, and my friend Griffin Boyce for helping me carry food, set up tables and chairs, and generally make sure people were comfortable. And finally, an equally big thanks to all the people who showed up and helped improve Cryptocat. Without these people, such a great hackathon would never have happened. Watch out for more hackathons in D.C., San Francisco, and Montréal!

Cryptocat Hackathon

Update: The hackathon is over, and you can find out what happened (and see photos) at our report!

Cryptocat, in collaboration with OpenITP, will be hosting the very first Cryptocat Hackathon weekend in New York City, on the weekend of the 17th and 18th of August 2013.

Join us on August 17-18 for the Cryptocat Hackathon and help empower people worldwide by improving useful tools and discussing the future of making privacy accessible. This two-day event will take place at the OpenITP offices, located at 199 Lafayette Street, Suite 3b, New York City. Please RSVP on Eventbrite or email events@crypto.cat.

Tracks

The Cryptocat Hackathon will feature two tracks to accommodate the diversity of the attendees:

Coding Track with Nadim

Join Nadim in discussing the future of Cryptocat and contributing towards our efforts for the next year. Multi-Party OTR, encrypted video chat using WebRTC, and more exciting topics await your helping hands!

Journalist Security Track with Carol and the Guardian Project

Join Carol in a hands-on workshop for journalists on how to protect your digital security and privacy in your working environment. The Guardian Project will also be swooping in to discuss mobile security, introducing tools and solutions. Carol Waters is a Program Officer with Internews’ Internet Initiatives, and focuses on digital and information security issues. The Guardian Project builds open source mobile apps to protect the privacy and security of all of mankind.

Who should attend?

Hackers, designers, journalists, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available ranging from Q&A to communications.

Schedule

Saturday

10:00 Introduction and planning

11:00 Some hacking

12:00 Lunch!

1:00 – 5:00 Split into two tracks:

Coding track with Nadim

Journalist security track with Carol Waters

Sunday

10:00 Some hacking

12:00 Lunch!

1:00 – 4:00 Split into two tracks:

Coding track with Nadim

Journalist security track with Carol

4:00 – 5:00 Closing notes and roundtable

CryptoCat | Cryptocat Development Blog | 2013-08-07 14:48:00

24 hours after last month’s critical vulnerability in Cryptocat hit its peak controversy point, I was scheduled to give a talk at SIGINT2013, organized in Köln by the Chaos Computer Club. After the talk, we held a 70-minute Q&A in which I answered questions even from Twitter. 70 minutes!

In the 45-minute talk, I discuss the recent bug, how we plan to deal with it, what it means, as well as Cryptocat’s overall goals and progress:

In the 70-minute Q&A that followed, I answer every question ranging from the recent bug to what my favourite TV show is:

I’m really pleased with these videos since they present a channel into how the project is dealing with security issues as well as our current position and future plans. If you’re interested in Cryptocat, they are worth watching.

Additionally, I recently gave a talk about Cryptocat at Republika in Rijeka, and will be at OHM2013 in Amsterdam as part of NoisySquare, where there will be Cryptocat talks, workshops and more. See you there!

CryptoCat | Cryptocat Development Blog | 2013-07-23 17:24:14

In the unlikely event that you are using a version of Cryptocat older than 2.0.42, please update to the latest version immediately to fix a critical security bug in group chat. We recommend updating to the 2.1.* branch, which at the time of writing is the latest version. We apologize unreservedly for this situation. (Post last updated Sunday July 7, 2:00PM UTC)

What happened?

A few weeks ago, a volunteer named Steve Thomas pointed out a vulnerability in the way key pairs were generated for Cryptocat’s group chat. The vulnerability was quickly resolved and an update was pushed. We sincerely thank Steve for his invaluable effort.

The vulnerability meant that any conversations held over Cryptocat’s group chat function between versions 2.0 and 2.0.42 (2.0.42 not included) were significantly easier to crack via brute force. That period covered approximately seven months, so group conversations held during that time were likely vulnerable.
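
This post does not spell out the mechanics of the key-generation flaw. Purely as a hypothetical illustration of how such a bug shrinks the keyspace a brute-force attacker must search, one can compare the entropy of a key drawn uniformly from all byte values with one drawn from a smaller symbol set (the lengths and alphabets below are made up for illustration, not Cryptocat’s actual parameters):

```python
import math

def key_entropy_bits(length, alphabet_size):
    """Entropy in bits of a key of `length` symbols drawn uniformly
    from an alphabet of `alphabet_size` possible symbols."""
    return length * math.log2(alphabet_size)

# A 32-symbol key over all 256 byte values: 256 bits of entropy.
full_bytes = key_entropy_bits(32, 256)

# The same length drawn only from the digits '0'-'9': ~106.3 bits,
# i.e. roughly 2**150 times cheaper to brute-force.
digits_only = key_entropy_bits(32, 10)

print(round(full_bytes, 1), round(digits_only, 1))
```

The general point: a key-generation bug that quietly restricts the symbol set does not change the key length an auditor sees, which is why entropy reductions like this are easy to miss and so damaging once found.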

Once Steve reported the vulnerability, it was fixed immediately and the update was pushed. We’ve thanked Steve and added his name to our Cryptocat Bughunt page’s wall of fame.

In our update log for Cryptocat 2.0.42, we had noted that the update fixed a security bug:

  • IMPORTANT: Due to changes to multiparty key generation (in order to be compatible with the upcoming mobile apps), this version of Cryptocat cannot have multiparty conversations with previous versions. However private conversations still work.
  • Fixed a bug found in the encryption libraries that could partially weaken the security of multiparty Cryptocat messages. (This is Steve’s bug.)

The first item, which made some changes to how keys were generated, did break compatibility with previous versions. But contrary to what Steve has written in his blog post on the matter, this has nothing at all to do with the vulnerability he reported, which we were able to fix without breaking compatibility.

Since Steve published his blog post, we felt it would be useful to publish an additional post clarifying the matter. While Steve’s post does indeed point to a significant vulnerability, we want to make sure it does not also cause inaccuracies to be reported.

Private chats are not affected: Private queries (1-on-1) are handled over the OTR protocol, and are therefore completely unaffected by this bug. Their security was not weakened.

Our SSL keys are safe: For some reason, there are rumors that our SSL keys were compromised. To the best of our knowledge, this is not the case. All Cryptocat data still passed over SSL, which offers a small layer of protection that may help with this issue. Of course, that does not in any way make up for the fact that, due to our blunder, seven months of conversations were easier to crack. This is still a real mistake. We should also note that our SSL setup has had forward secrecy for the past couple of weeks, and we’ve rotated our SSL keys as a precaution.

One more small note: Much has been said about a line of code in our XMPP library that supposedly is a sign of bad practice — this line is not used for anything security-sensitive. It is not a security weakness. It came as part of the third-party XMPP library that Cryptocat uses.

Finally, an apology: Bad bugs happen all the time in all projects. At Cryptocat, we’ve undertaken the difficult mission of trying to bridge the gap between accessibility and security. This will never be easy. We will always make mistakes, even ten years from now. Cryptocat is not any different from any of the other notable privacy, encryption and security projects, in which vulnerabilities get pointed out on a regular basis and are fixed. Bugs will continue to happen in Cryptocat, and they will continue to happen in other projects as well. This is how open source security works. We’ve added a bigger warning to our website about Cryptocat’s experimental status.

Every time there has been a security issue with Cryptocat, we have been fully transparent and fully accountable, and have taken full responsibility for our mistakes. We will make mistakes dozens, if not hundreds, of times more in the coming years, and we only ask you to be vigilant and careful. This is the process of open source security. On behalf of the Cryptocat project, team members and volunteers, I apologize unreservedly for this vulnerability, and sincerely and deeply thank Steve Thomas for pointing it out. Without him, we would have been a lot worse off, and so would our users.

We are continuing to audit all aspects of Cryptocat's development, and we assure our users that security remains a constant focus.

CryptoCat | Cryptocat Development Blog | 2013-07-04 12:04:48

Today, with Cryptocat nearing 65,000 regular users, the Cryptocat project releases “Cryptocat: Adopting Accessibility and Ease of Use as Security Properties,” a working draft which brings together the past year of Cryptocat research and development.

We document the challenges we have faced, both cryptographic and social, and the decisions we’ve taken in order to attempt to bring encrypted communications to the masses.

The full paper is available for download from arXiv, the open-access scientific publishing site.

__________________________________________

Excerpts of the introduction from our paper:

Cryptocat is a Free and Open Source Software (FOSS) browser extension that makes use of web technologies in order to provide easy to use, accessible, encrypted instant messaging to the general public. We aim to investigate how to best leverage the accessibility and portability offered by web technologies in order to allow encrypted instant messaging an opportunity to better permeate on a social level. We have found that encrypted communications, while in many cases technically well-implemented, suffer from a lack of usage due to their being unappealing and inaccessible to the “average end-user”.

Our position is that accessibility and ease of use must be treated as security properties. Even if a cryptographic system is technically highly qualified, securing user privacy is not achieved without addressing the problem of accessibility. Our goal is to investigate the feasibility of implementing cryptographic systems in highly accessible mediums, and to address the technical and social challenges of making encrypted instant messaging accessible and portable.

In working with young and middle-aged professionals in the Middle East region, we have discovered that desktop OTR clients suffer from serious usability issues, which are sometimes further exacerbated by language differences and a lack of cultural integration (the technology was frequently described as “foreign”). In one case, an activist who was fully trained to use Pidgin-OTR neglected to do so, citing usability difficulties, and as a direct consequence encountered a life-threatening situation at the hands of a national military in the Middle East and North Africa region.

These circumstances have led us to the conclusion that ease of use and accessibility must be treated as security properties, since their absence results in security compromises with consequences similar to the ones experienced due to cryptographic breaks.

Cryptocat is designed to leverage highly accessible mediums (the web browser) in order to offer an easy to use encrypted instant messaging interface accessible indiscriminately to all cultures, languages and age groups. Cryptocat clients are available as Free Software browser extensions written in JavaScript and HTML5.

CryptoCat | Cryptocat Development Blog | 2013-06-24 14:02:02

A frequent question we get here at Cryptocat is: “why don’t you add a buddy lists feature so I can keep track of whether my friends are on Cryptocat?” The answer: metadata.

If you’ve been following the news at all for the past week, you’ll have heard the outrageous reports of Internet surveillance by the NSA. While those reports suggest that the NSA may not have complete access to content, the agency still has access to metadata. If we were talking about phone surveillance, for example, metadata would be the times you made calls, which numbers you called, how long your calls lasted, and even where you placed your calls from. This circumstantial data can be collected en masse to paint very clear surveillance pictures of individuals or groups of individuals.
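To make the point concrete, here is a toy sketch (with entirely made-up records; the names and numbers are hypothetical) showing that call metadata alone, with no content at all, already reveals patterns about a person's life:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call-metadata records: (caller, callee, start time, duration
# in seconds). No call *content* appears anywhere in this data.
records = [
    ("alice", "clinic",   datetime(2013, 6, 3,  9, 0),  300),
    ("alice", "clinic",   datetime(2013, 6, 10, 9, 0),  240),
    ("alice", "clinic",   datetime(2013, 6, 17, 9, 5),  280),
    ("alice", "pizzeria", datetime(2013, 6, 8,  19, 30), 60),
    ("bob",   "pizzeria", datetime(2013, 6, 9,  19, 0),  45),
]

# Count how often each caller/callee pair talks. A recurring Monday-morning
# call to a clinic is a strong hint about someone's life, without a single
# word of conversation being overheard.
calls_by_pair = Counter((caller, callee) for caller, callee, _, _ in records)
most_common_pair, count = calls_by_pair.most_common(1)[0]
print(most_common_pair, count)  # ('alice', 'clinic') 3
```

Real traffic analysis is far more sophisticated, but the principle is the same: who talks to whom, when, and how often is itself sensitive.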

At Cryptocat, we not only want to keep your chat content to yourself, but we also want to distance ourselves from your metadata. In this post we’ll describe what metadata you’re giving to Cryptocat servers, what’s done with it, and what parts of it can be seen by third parties, such as your Internet service provider. We assume we are dealing with a Cryptocat XMPP server with a default configuration, served over SSL.

Reminder: No software is likely to be able to provide total security against state-level actors. While Cryptocat offers useful privacy, we remind our users not to trust Cryptocat, or any computer software, with extreme situations. Cryptocat is not a magic bullet and does not protect from all threats.

Who has your metadata?


Cryptocat does not ever store your metadata or share it with anyone under any circumstances. Always be mindful of your metadata — it’s part of your privacy, too! For our default server, we also have a privacy policy, which we recommend you look over.

CryptoCat | Cryptocat Development Blog | 2013-06-08 17:46:54

OpenITP is happy to announce the hire of Nadim Kobeissi as Special Advisor, starting in June 2013. Kobeissi is best known for starting Cryptocat, one of the world's most popular encrypted chat applications.

Based in Montreal, Kobeissi specializes in cryptography, user interfaces, and application development. He has done original research on making encryption more accessible across languages and borders and on improving the state of web cryptography. He has also led initiatives for Internet freedom and against Internet surveillance. He has a B.A. in Political Science and Philosophy from Concordia University, and is fluent in English, French, and Arabic.

As Special Advisor, Kobeissi will collaborate with OpenITP staff to improve and promote Cryptocat, advise on security and encryption matters, and organize developer meetings.

You can find him on Twitter at @kaepora and @cryptocatapp.

OpenITP | openitp.org | 2013-05-30 19:58:46

Hacking to Empower Accessible Privacy Worldwide

Join us on August 17-18 for the Cryptocat Hackathon and help empower activists worldwide by improving useful tools and discussing the future of making privacy accessible. This two-day event will take place at the OpenITP offices, located at 199 Lafayette Street, Suite 3b, New York City.

Cryptocat provides the easiest, most accessible way for an individual to chat while maintaining their privacy online. It is free software that aims to provide an open, accessible instant messaging environment that encrypts conversations and works right in your browser.

Who Should Attend?

Hackers, designers, Internet freedom fighters, community organizers, and netizens. Essentially, anyone interested in empowering activists through these tools. While a big chunk of the work will focus on code, there are many other tasks available, ranging from QA to communications.

To RSVP, please visit http://www.eventbrite.com/event/6904608871 or email nadim AT crypto DOT cat.

Schedule

Saturday

10:00 am Presentation of the projects

11:00 am Brainstorm

12:00 pm Lunch

1:00 pm Hack

5:00 pm End of day

Sunday

10:00 am - 5:00 pm Hacking


OpenITP | openitp.org | 2013-05-30 15:38:05

Collateral Freedom: A Snapshot of Chinese Users Circumventing Censorship, just released today, documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship, and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long-term improvements in Internet freedom in China.

This study was conducted by David Robinson, Harlan Yu and Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund.

Read Report

The report found that the circumvention tools that work best for Chinese users are technologically diverse, but are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Survey respondents rely not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.

The most widely used tool in the survey—GoAgent—runs on Google’s cloud hosting platform, which also hosts major consumer online services and provides background infrastructure for thousands of other web sites. The Great Firewall sometimes slows access to this platform, but purposely stops short of blocking the platform outright. The platform is engineered in a way that limits the regime’s ability to differentiate between the circumventing activity it would like to prohibit, and the commercial activity it would like to allow. A blanket block would be technically feasible, but economically disruptive, for the Chinese authorities. The next most widely used circumvention solutions are VPNs, both free and paid—networks using the same protocols that nearly all the Chinese offices of multinational firms rely on to connect securely to their international headquarters. Again, blocking all traffic from secure VPNs would be the logical way to make censorship effective—but it would cause significant collateral harm.


Instead, the authorities steer a middle course, sometimes choosing to disrupt VPN traffic (and commerce) in the interest of censorship, and at other times allowing VPN traffic (and circumvention) in the interest of commerce. The Chinese government is implementing policies that will improve its ability to segment circumvention-related uses of VPNs from business-related uses, including heightened registration requirements for VPN providers and users.

Respondents to the survey were categorically more likely to rely on these commercially widespread technologies and platforms than they were to use special purpose anti-censorship systems with relatively little commercial footprint, such as Freegate, Ultrasurf, Psiphon, Tor, Puff or simple web proxies. Many of the respondents have used these non-commercial tools in the past—but most have now stopped. The most successful tools today don’t make the free flow of sensitive information harder to block—they make it harder to separate from traffic that the Chinese government wishes to allow.

The report found that most users of circumvention software are in what we call the “versatility-first” group: they seek a fast and robust connection, are willing to install and configure special software, and (perhaps surprisingly) do not base their circumvention decisions on security or privacy concerns. To the extent that circumvention software developers and funders wish to help these users, the study found that they should focus on leveraging business infrastructure hosted in relatively freedom-respecting jurisdictions, because the Chinese government has greater reason to allow such infrastructure to operate.

The report provided five practical suggestions:

  1. Map the circumvention technologies and practices of foreign businesses in China.
  2. Engage with online platform providers who serve businesses in censored countries.
  3. Investigate the collateral freedom dynamic in other countries.
  4. Diversify development efforts to match the diversity of user needs.
  5. Make HTTPS a corporate social responsibility issue.



OpenITP | openitp.org | 2013-05-20 15:49:52