Following on Eben Moglen’s mind-warping series of talks about life after Snowden, the Software Freedom Law Center has invited Bruce Schneier to join Eben for a conversation informed by Bruce’s own analysis of the leaked documents. Bruce is one of the smartest thinkers around when it comes to understanding how security and surveillance operate in the real world. And he is unsurpassed at presenting complicated security concepts even to people who lack his expertise. Between Moglen’s sophisticated thoughts and Bruce’s grounded approach, we’re sure to learn a lot about where we stand and what we can do next!
I set my IRC client to keep logs. Among other things, that means I have a record of all my away messages from the past couple of centuries or so… Some themes emerge, notably a nostalgia for an imagined Paleolithic past:
kfogel is away: http://www.rants.org/2013/12/10/irc_away_messages/
kfogel is away: slowly leaching toxins from bloodstream while in a
state of severely lowered consciousness
kfogel is away: replacing busted USB port replicator so can haz
mousez and keyboardz simultaneouzly. Stay away from the IOGEAR
4-port USB 2.0 hub model GUH285 if you ever need a replicator.
kfogel is away: Upgrading to Lucid. Send a posse if you don't
hear from me in 30 min.
kfogel is away: enumerating the integers
kfogel is away: dealing with some reasonable subset of todo list
kfogel is away: errand in the city that never stops talking about
how it never sleeps
kfogel is away: communing with sessile benthic fronds
kfogel is away: pursuing striated brachiators
kfogel is away: metabolizing
kfogel is away: attending to metabolic requirements
kfogel is away: gym time -- yes, geeks are allowed to exercise,
stop looking at me like that
kfogel is away: bun run
kfogel is away: piano time is the only sacred thing
kfogel is away: deep in concentration
kfogel is away: ululating
kfogel is away: Post Office, possibly including metabolic detour.
kfogel is away: gradually converting oxygen to heat
kfogel is away: Speaking of certs and Apache redirects, it's time
to put my laundry in the dryer and start the next load.
kfogel is away: synthesis is the new creativity
kfogel is away: put the pencil in the suitcase when the whiskey
bottle faces the moon
kfogel is away: pipette herbivore cesium bricolage
kfogel is away: Drinking the blood of innocents.
kfogel is away: Q: How many Semantic Web advocates does it take to
screw in a lightbulb? A: What exactly do you mean by "a"?
kfogel is away: Every odd integer > 5 is the sum of three primes.
That made my day.
kfogel is away: weeping, once again, that "ombudsman" has neither
a sex-neutral form nor a verb form.
kfogel is away: One of the nice things about a downtown Chicago
office is exiting into a Daley Plaza protest most weekdays.
kfogel is away: gallivanting with brachiators
kfogel is away: afk for a bit; ask the NSA if you need to find me
kfogel is away: attentiveness to metabolic needs
kfogel is away: avoiding stobor
kfogel is away: oiling my Turing Machines
kfogel is away: pubpat.org is my hero -- victory for
non-patentable genes at U.S. Supreme Court!
kfogel is away: Literally heading to a restaurant whose motto is
"We Serve People". I am not making this up.
kfogel is away: Correction to previous away message: it might be
"Serving People", sorry
kfogel is away: realizing that public school systems are useful
for teaching children how to handle bullies well, and how to
subvert hierarchical authority structures, therefore they should
kfogel is away: Is that Edward Snowden in a tuxedo, disguised
among the penguins in Antarctica?
kfogel is away: Seeing what that sound is.
kfogel is away: Researching the hallucinogenic properties of
oxygen -- hmm, continuous consumption appears to cause delusions
kfogel is away: fulfilling humankind's millennia-long dream of
flight, albeit in a cramped, commercialized, sadly routinized and
perhaps slightly tawdry way.
kfogel is away: Neither hunting nor gathering.
kfogel is away: avoiding subsidizing further Mesopotamian
kfogel is away: wondering why web sites use Flash in situations
where HTML+CSS+images would actually have been easier
kfogel is away: pondering the futility of empathy in a universe
made mostly of hydrogen
kfogel is away: pining for the fjords
kfogel is away: nostalgia-drenched lunch in NYC Chinatown
kfogel is away: stalking the wild asparagus
kfogel is away: converting sunlight to metabolic energy
kfogel is away: traipsing
kfogel is away: checking in on the progress of my escape tunnel
kfogel is away: flossing pulsars
kfogel is away: off to hear Chicago Schola Antiqua in concert --
I'm sure all of FreeNode writhes in jealousy
kfogel is away: contributing some heat back to the Universe
kfogel is away: consumption of sunlight, indirectly, via organic
kfogel is away: "What do we want?" "TIME TRAVEL!" "When do we
kfogel is away: Los Angeles looks exactly like Los Angeles
kfogel is away: neural network nightly reset
kfogel is away: accepting silver medal for the 200 meter "not
thinking about the Olympics" challenge
kfogel is away: transferring heat from one location to another
kfogel is away: time to pay the cafe fee again -- maybe it'll be
another wheat-based sugary substance this time
kfogel is away: re-spending my misspent youth
kfogel is away: eating arugula in honor of Barack Obama
kfogel is away: improving my Sogdian accent
kfogel is away: converting matter into heat, using only my body
kfogel is away: Converting sunlight into energy, indirectly.
kfogel is away: admiring your gritty urban authenticity even as he
prices you out of your neighborhood.
kfogel is away: luxuriating in the knowledge that no matter how
bad things get, there's always xkcd
kfogel is away: pontificating somewhere, about something
kfogel is away: It's just about time for historical inevitability
to come back into fashion.
kfogel is away: ancient sunlight will now be converted to
particles of pure energy in my bloodstream
kfogel is away: seeking gourd for use in repurposed pagan ritual
kfogel is away: When you've just typed the same phrase three
times, it is time to take a break. When you've just typed the
same phrase three times, it is time to take a break. When you've
just typed the same phrase three times, it is time to FAKEOUT, YOU
THOUGHT YOU KNEW THIS JOKE BUT YOU DON'T.
kfogel is away: just going to start using "friblopen" to avoid the
whole "free"/"libre"/"open" debate
kfogel is away: Paying money to increase my cardiopulmonary
activity level in a socially-approved and non-disruptive manner &
kfogel is away: taking The Jacket for repairs
kfogel is away: pursuing Outsider to galactic core to see what the
big deal is
kfogel is away: weekly spur waxing
kfogel is away: oak-sporting
kfogel is away: getting away from the computer for a bit and
fondly recalling my paleolithic past
Bring your cash and your curiosity to our FIRST EVER HOLIDAY CRAFT FAIR on December 15, from 12-6pm. We’ll offer a free soldering workshop that day, and we’ll host a number of local makers who’ll be selling their wares. Also, our first 100 visitors get a free NYC Resistor holiday ornament!
Anyone from the community also interested in selling at the event is welcome to email us at email@example.com for an open spot, while they last.
Next weekend at NYC Resistor we are teaching a class on the Adafruit FLORA and NeoPixels. These round Arduino-compatible controller boards are a great base for wearable projects like watches, jackets and neckties, as well as holiday decorations. Bring your laptop and we’ll teach you to make the LED ring blink with patterns of your own design. No prior programming experience required. The class fee includes a FLORA board, batteries, cabling, 4 RGB LED pixels and a 16 RGB LED ring.
You are an information portal. Information enters through your senses, like your ears and eyes, and exits through your expressions, like your voice, your drawing, your writing, and your movements.
In order for culture to stay alive, we have to be open, or permeable. According to Wikipedia, Permeance is “the degree to which a material admits a flow of matter or energy.” We are the material through which information flows.
It's through this flow that culture stays alive and we stay connected to each other. Ideas flow in, and they flow out, of each of us. Ideas change a little as they go along; this is known as evolution, progress, or innovation.
But thanks to Copyright, we live in a world where some information goes in, but cannot legally come out.
Often I hear people engaged in creative pursuits ask, “Am I allowed to use this? I don't want to get in trouble.”
In our Copyright regime, “trouble” may include lawsuits, huge fines, and even jail. “Trouble” means violence. “Trouble” has shut down many a creative enterprise. So the threat of “trouble” dictates our choices about what we express.
Copyright activates our internal censors. Internal censorship is the enemy of creativity; it halts expression before it can begin. The question, “am I allowed to use this?” indicates the asker has surrendered internal authority to lawyers, legislators, and corporations.
This phenomenon is called Permission Culture. Whenever we censor our expression, we close a little more and information flows a little less. The less information flows, the more it stagnates. This is known as chilling effects.
I have asked myself: did I ever consent to letting “Permission Culture” into my brain? Why am I complying with censorship? How much choice do I really have about what information goes in and comes out of me?
The answer is: I have some choice regarding what I expose myself to, and what I express, but not total control. I can choose whether to watch mainstream media, for example. And I can choose what information to pass along.
But to be in the world, and to be open, means all kinds of things can and do get in that are beyond my control. I don’t get to choose what goes in based on its copyright status. In fact proprietary images and sounds are the most aggressively rammed into our heads. For example:
“Have a holly jolly Christmas, It’s the best time of the year
“I don’t know if there’ll be snow, but have a cup of cheer
“Have a holly jolly Christmas, And when you walk down the street
“Say hello to friends you know and everyone you meet!”
I hate Christmas music. But because I live in the U.S., and need to leave the house even in the months of November and December, I can't NOT hear it. It goes right through my earholes and into my brain, where it plays over and over ad nauseam.
Here are some of the corporations I could “get in trouble with” for sharing that song and clip in public. I wasn’t consulted by them before having their so-called “intellectual property” blasted into my head as a child, so I didn’t ask their permission to put it in my slide show.
Copyright is automatic and there's no way to opt out. But you can add a license granting some of the permissions copyright automatically takes away. Creative Commons, the most widespread brand of license, allows its users to lift various restrictions of copyright one at a time.
The problem with licenses is that they're based on copyright law. The same threat of violence behind copyright is behind alternative licenses too. Licenses actually reinforce the mechanism of copyright. Everyone still needs to seek permission – it’s just that they get it a little more often.
Like copyright itself, licenses are often too complex for most people to understand. So licenses have the unfortunate effect of encouraging people to pay even MORE attention to copyright, which gives even more authority to that inner censor. And who let that censor into our heads in the first place?
Although I use Free licenses and would appreciate meaningful copyright reform, licenses and laws aren't the solution. The solution is more and more people just ignoring copyright altogether. I want to be one of those people.
A few years ago I declared sovereignty over my own head. Freedom of Speech begins at home. Censorship and “trouble” still exist outside my head, and that’s where they’ll stay – OUTSIDE my head. I’m not going to assist bad laws and media corporations by setting up an outpost for them in my own mind.
I no longer favor or reject works based on their copyright status. Ideas aren't good or bad because of what licenses people slap on them. I just relate to the ideas themselves now, not the laws surrounding them. And I try to express myself the same way.
Like millions of others who don't give a rat's ass about copyright, I hope you join me. Make Art, Not Law.
Nick Bilton's Hatching Twitter tells of four friends who became rivals in the claim to having "founded" Twitter. (A recurring narrative well captured in the 2001 film Startup.com.) Even beyond founding, others can claim to have "invented" something Twitter-like before Twitter, myself included!
Ev Williams conceived of micro-blogging as a way to follow real-time events as they are experienced, and Jack Dorsey conceived of it as broadcasting updates about one's self; unlike them, I was focused on sharing stuff I had done.
Over ten years ago I wrote up what I thought were the important requirements for a busy sponge:
I spend a lot of my time typing things into various interfaces: such as a log of important/useful things I've done during the day, an outline of things I need to do, a list of interesting links and my thoughts on them, web site passwords, proto-ideas and scribbles, annotations/comments on things I've read, and travel information. Some of these things are stored in (different) html pages and some in (different) flat text files, and I use different editors/browsers for these files! I'd like to have a single easy to use interface for entering all these things. This will require a data store/model, an interface, and perhaps some syntactical conventions for easy freeform entry.
This was when I worked at the W3C and was motivated, in part, by our weekly meetings in which we shared our "two minutes." I wanted a way to capture stuff I had done and share it with my colleagues. Once I had a way to capture these events, I naturally created an RSS feed that my peers could subscribe to.
Since then the tool has evolved to do many things for me, most importantly capturing bibliographic data about Web sources for my online ethnographies and histories. For the past couple of years, I've thought I should send some of my stream of busy to Twitter and/or Google+. I prefer Google+, but they've so far refused to create an API for creating Plus status updates. With the semester winding down, I finally gave Twitter a go via the nice command-line tool twidge. Hence, busy now has an option to send an update to Twitter.
from subprocess import call

def yasn_publish(comment, title, url, tag):
    comment_delim = ": " if comment else ""
    comment = comment + comment_delim + title
    comment_room = 140 - len(comment) - len(tag) - len(url)
    if comment_room < 0:  # the comment is too big
        comment = comment[0:-17] + '...'  # url will be shortened to 20 chars
    message = "%s %s #%s" % (comment, url, tag)
    call(['twidge', 'update', '%s' % message])  # tweet via twidge
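For example, a hypothetical invocation (the comment, title, URL, and tag here are all made up for illustration):

yasn_publish("Archived the Nupedia remnants", "busy", "http://example.com/b", "wiki")

This runs twidge update with the message "Archived the Nupedia remnants: busy http://example.com/b #wiki", after truncating the comment if the 140-character budget (less the tag and the URL, which Twitter shortens) would otherwise be exceeded.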
It's been a little over a month since the November 6th fire that destroyed the scanning center building at the Internet Archive. No one was hurt, but as Archive founder Brewster Kahle wrote in a blog post from November 6th (emphasis added):
We lost maybe 20 boxes of books and film, some irreplaceable, most already digitized, and some replaceable. From our point of view this is the worst part. We lost an array of cameras, lights, and scanning equipment worth hundreds of thousands of dollars. Insurance will cover some but not all of this.
The Internet Archive is far more important to the long-term interests of Internet users than, say, Facebook, and they'll make a little go a long way. If you can, please donate to help them recover and grow. I just sent in a check for $200 -- which really means $800 for the Archive, because...
Donations made before 2014 are being matched three-to-one by an anonymous donor!
So, if you can, please give. There are few more obvious calls on the Internet right now!
I can't stress this enough — if you're still wondering about the connection between copyright and civil liberties, nothing could make it clearer than Eben Moglen's four-lecture series Snowden and the Future at Columbia Law School in New York City. The fourth lecture is this coming Wednesday, December 4th, at 4:30pm (Eastern US) in Room 101 of Jerome Greene Hall:
If you are in New York City on Wednesday, we strongly recommend going to that fourth and last lecture. Transcripts of the first three are already online (though I found them worth watching on video). Quoting from the third:
privacy is an ecological rather than a transactional substance
Moglen goes on to explain why, very eloquently. It is a point of prime concern to copyright resisters: when every email, every post in a social network, every online communication among human beings, is subject to surveillance, then the system will always err in one direction: toward over-enforcement of already overly-strong restrictions. Surveillance naturally serves monopoly: the watcher is centralized, the watched decentralized. Thus, for example, it becomes your problem to fight fraudulent takedowns and other censorship, rather than being the censor's problem to justify the restrictions in the first place.
Thursday, 12 Dec: Eben Moglen and Bruce Schneier:
Then on Thursday the 12th at 6:30pm ET, Prof. Moglen will be talking with the renowned security expert Bruce Schneier about what we can learn from the Edward Snowden documents and from the NSA's efforts to weaken global cryptography, and how we can keep free software tools from being subverted. That event is also at Jerome Greene Hall; see here for details.
There is no freedom of thought without freedom of communication, and ultimately there is no freedom of communication without privacy. Privacy means secrecy, anonymity, and autonomy for individuals freely associating.
Monopoly will never argue for this. People have to do it. Copyright restrictions originated in centralized censorship and are increasingly supported by centralized surveillance. No one is analyzing the larger dynamic of surveillance better than Prof. Moglen. If you're in New York this Wednesday and next Thursday, you know where to go.
(Previous post in this series here.)
Later this week I'll be participating in a panel at the New Media in American Literary History Interdisciplinary Symposium. I plan to talk about bibliography and bitrot.
When I began my historical study of Wikipedia I was sad to note that much early material was lost to bit rot. Hence, I was pleased to find remnants of the Nupedia lists (Wikipedia's predecessor) on the Wayback Machine and to create an archive -- by way of wget -- for others. Similarly, the first edits to Wikipedia were considered lost until Tim Starling found old log files from Wikipedia. I used these to reconstruct the first ten thousand contributions, about six weeks' worth of edits, to Wikipedia.
These were nice finds. But these were two steps forward in an otherwise persistent jog backwards. For instance, in 2008 I noted that in my own writing my references to contemporary sources were quickly rotting.
So, doing a quick check-link analysis of the largest mindmap I find the following: 941 of those resources are "OK"; 21 are "404" (no longer there); and 10 "Timeout". So, just within a few years ~2% aren't readily available. For example, the link to Sanger's 2005 information about his (then) new Digital Universe project is already broken; but I must say news sites are the worst.
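A rough tally like that is easy to reproduce. Here is a minimal Python sketch of the idea (the URLs are placeholders, and a real crawl would want politeness delays and retries):

import socket
import urllib.error
import urllib.request

def check(url, timeout=10):
    """Classify a URL as OK, an HTTP error code (e.g. 404), or Timeout."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return "OK"
    except urllib.error.HTTPError as e:
        return str(e.code)
    except (urllib.error.URLError, socket.timeout):
        return "Timeout"  # lump other connection failures in with timeouts

for url in ["http://example.com/", "http://example.com/gone"]:
    print(url, check(url))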
Hence, when I finished my dissertation I took the step of crawling all my sources to create and share an archive.
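(A snapshot crawl of that sort can be done with wget along these lines -- a sketch rather than my exact flags:

wget --mirror --convert-links --page-requisites --adjust-extension --wait=1 http://example.com/source

The result is a local, self-contained copy of each source that can be shared alongside the citations.)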
Others have begun to note this problem as well. Earlier this year Zittrain, Albert, and Lessig wrote of their work in the legal domain.
We found that half of the links in all Supreme Court opinions no longer work. And more than 70% of the links in such journals as the Harvard Law Review (in that case measured from 1999 to 2012), currently don't work. As time passes, the number of non-working links increases.
Hence a number of legal libraries have launched perma.cc, which aims to make the process of archiving and citing online sources much easier and more consistent than my own efforts with wget. I hope this effort spreads well beyond the legal discipline.
In a show of solidarity with our oppressed Meleagris gallopavo brethren, there will be no craft night this Thursday, November 28th. We recommend gathering together with friends and loved ones and sharing a hearty seasonal meal of kale and pine nuts instead. See you all next week!
Last year we wrote about building HID Proxcard RFID tags with attiny85 microcontrollers (based on Micah’s avrfid.s code). The C version only supported classic 26-bit cards, but I recently needed to support the “secure” HID Corporate 1000 35-bit format.
Based on Daniel Smith’s writeup on the format and some digging around, I figured out that the MFG_CODE for this format is 10 bits long with the value 0x005. He also pointed out that the 26-bit firmware had the wrong code — it is not the 20-bit code 0x01002, but is instead the 19-bit code 0x0801, and the bottom bit is part of the parity computation for the card id. If you’re using a HID-branded Proxcard reader, the value that it outputs is the entire data portion, including all of the parity bits, but does not include the MFG_CODE part. If anyone knows of a table of these codes, please let me know!
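To make the parity arithmetic concrete, here is a minimal Python sketch of the classic 26-bit (H10301) layout. This illustrates only the standard format, not the firmware’s actual code, and the Corporate 1000 parity scheme is more involved:

def h10301_bits(facility, card):
    """26-bit stream: even parity, 8-bit facility code, 16-bit card id, odd parity."""
    data = [(facility >> (7 - i)) & 1 for i in range(8)] \
         + [(card >> (15 - i)) & 1 for i in range(16)]
    even = sum(data[:12]) % 2       # bit 1: even parity over the first 12 data bits
    odd = (sum(data[12:]) + 1) % 2  # bit 26: odd parity over the last 12 data bits
    return [even] + data + [odd]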
I’ve updated my firmware with these changes and it works great. Emulating a 35-bit card takes 846 bytes of flash (nine more than the 26-bit cards since the state machine stores one bit per byte), so it might be possible to port this to the attiny10. I’ve also found that the tags work much better with a small capacitor across the two clock pins, as shown in the above photo.
If I had known earlier that Denny Chin was to deliver his decision on the fair use question in the Google Books case, I would have made my way to Madison Avenue to lurk outside the office of the Authors Guild, the plaintiffs. There I might perhaps have heard a pitiful wailing and gnashing of teeth, sounds no doubt echoed in many a lawyer’s chamber around the city. For Denny Chin dropped the bomb on their hopes, and found an affirmative fair use defense for Google’s scanning project. That the result was pronounced in the Federal District Court of what has historically been the centre of the US publishing industry is also noteworthy. But this has been a never-ending saga of litigation, so first let’s recap, then check the reasoning, and lastly ponder the consequences.
1. Google Books comprises three classes of texts from a legal point of view: public domain works which can be made available in their entirety; books which are made available to preview through partner agreements between Google and publishers; and books which were scanned by Google without permission, the searching of which produces small ‘snippets’ of the text as results. This court case concerns the final group of books.
2. The Authors Guild and Association of American Publishers launched their legal action in 2005. In 2008 a settlement was announced by Google; it would subsequently be amended, but the substance was to (a) make a payment to affected authors, (b) pay the plaintiffs’ lawyers, and (c) fund the establishment of a Book Rights Registry. This settlement was eventually rejected on multiple grounds by Denny Chin in early 2011. At this time he was a Federal District Court Judge in New York. Chin was subsequently promoted to the Court of Appeals for the 2nd Circuit, but was able to hold onto several cases from his previous post – including the Google Books case.
3. While the various parties involved attempted to reach a modified agreement which would be acceptable to the courts, Chin set a schedule for litigation of the original copyright infringement action. As the Authors Guild were to put the case for all the authors whose works were copied, they had to get ‘certification’ of the class – basically a decision from the court that it is appropriate for the plaintiff to represent all members of the class and that it has the means to do so. Certification was issued by Chin in May and then appealed by Google in July. Obviously Chin did not hear the appeal of his own decision. The Court of Appeals sent the case back to Chin at the District Court to make a determination on the fair use defense to the charge of copyright infringement, as a decision in Google’s favour would make the certification issue irrelevant.
i. Google got access to the books from participating libraries, who received a digital copy of each of their books in exchange. All texts are processed for optical character recognition (OCR) so that a full word index can be constructed to enable search.
ii. Much emphasis was placed on the restrictions on access to those books scanned without permission, of which only ‘snippets’ are displayed. Each snippet is one eighth of a page and only three snippets are ever returned in the results field. In addition to this limitation, one out of the eight snippets is never displayed, and no snippets are available from one in ten pages. The upshot of all this is that the full text of the book is never displayed to users, even over long periods of time in a fragmentary fashion.
A. Chin found in favour of Google in the fair use determination. He analyzed the facts against the four factors of the fair use test codified in the law, but did so in the shadow of his interpretation of copyright’s ‘very purpose’: “Copyright law seeks to achieve that purpose by providing sufficient protection to authors and inventors to stimulate creative activity, while at the same time permitting others to utilize protected works to advance the progress of the arts and sciences.” (page 16)
B. He then stressed that a key issue was whether the alleged infringement is ‘transformative’:
that is, whether the new work merely “supersedes” or “supplants” the original creation, or whether it instead adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message; it asks, in other words, whether and to what extent the new work is “transformative.” (page 18)
In the recent past this approach has been used to provide the fair use imprimatur for the basic technology of search, in the cases Kelly v Arriba and Perfect 10 v Google.
C. He then applied the four factors in turn (pages 19-25).
- ’the purpose and character of the use’; Chin found the use to be highly transformative, as (a) its cross-corpus index of words in books had quickly become crucial for research, as well as (b) making possible whole new types of research, such as text and data mining based on the quantitative analysis it enables, whilst (c) the service did not offer a competing way to actually read the books. Given all this, it was of less import that Google is a commercial enterprise and undertook the project motivated by profit.
- ‘the nature of the copyrighted work’; most of the books scanned were non-fiction works whereas ‘works of fiction are entitled to greater copyright protection’
- ‘amount and substantiality of the portion used’; Google copies the entirety of the work, and whilst the making of full copies does not exclude the possibility of a fair use finding, this is the only point which Chin felt went against a fair use finding.
- ‘Effect of Use Upon Potential Market or Value’; this is often the determinative part of the analysis. Here the plaintiffs claimed that the value of their works was being undermined, but Chin disagreed. He argued that, given that Google was not selling the scans it produced as part of building the library, what it was effectively doing was helping to build potential sales by making it easier to discover forgotten, lost or neglected works.
The Fair Use analysis is followed by a summary of the social benefits of the service:
In my view, Google Books provides significant public benefits. It advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders. It has become an invaluable research tool that permits students, teachers, librarians, and others to more efficiently identify and locate books. It has given scholars the ability, for the first time, to conduct full-text searches of tens of millions of books. It preserves books, in particular out-of-print and old books that have been forgotten in the bowels of libraries, and it gives them new life. It facilitates access to books for print-disabled and remote or underserved populations. It generates new audiences and creates new sources of income for authors and publishers. Indeed, all society benefits. (page 26)
As far as Chin is concerned the same analysis applies to objections to the libraries’ use of their scanned copies. And that’s that: a knock-out for Google and the libraries in the Southern District of New York.
Momentous as it is, for now this is just a District Court judgement; endorsement by a higher court will be necessary before its full impact is felt. In the short term the decision will surely be appealed. How willing will the 2nd circuit be to reverse one of its own judges, and one who has been sleeping with this litigation for so many years? Does that mean it will go to the Supreme Court?
More broadly, the fact that this went to court means that this defense is now open to others as well. A huge concern with the Google Books settlement was that it was a private agreement granting them an exclusive shield from liability with regard to the corpus of books – the path is now open for others to do the same, like the Internet Archive perhaps. Furthermore the concept of transformative use comes out of this emboldened, and potentially available to others working with different forms of archives, such as moving images for example.
Of course the problem for those who would follow in their footsteps is that the rules are different for Google. Not only do they have the money to fight infinite legal battles, but they have the reach into our habits to make their tools ‘useful’ and ‘socially beneficial’. They benefit from a presumption of legitimacy because of our reliance upon their services. Should this decision survive the coming challenges, the real test for it will be whether it provides a shelter for the next technologists developing tools that upset an incumbent industry.
Battelle Energy Alliance, LLC v. Southfork Security, Inc., raised eyebrows last month when the Chief Judge of the Idaho District Court ordered open source software developer Corey Thuen to surrender his hard drive for imaging, on the grounds that he had once described himself as a “hacker.” “Call yourself a hacker, lose your Fourth Amendment rights,” the headlines went. In fact, there is probably no constitutional issue here, but the case does raise a number of interesting legal issues related to open source software.
The facts and the temporary restraining order
Thuen had been employed at Battelle, the government contractor that operates the Idaho National Laboratory, where he worked on a network analysis tool called “Sophia.” Battelle wanted to license Sophia to utility companies to monitor their industrial control system networks and began soliciting bids from third-party contractors to commercialize the software. Thuen expressed interest in starting his own company to bid on the contract, and Battelle let him take a year’s leave of absence for that purpose. After submitting a bid, however, Thuen’s company Southfork abruptly withdrew it and announced a competing open source project, Visdom. Battelle’s lawsuit alleges that, in doing so, Thuen breached his employment agreement and infringed Battelle’s copyrights in Sophia.
The hard drive seizure came out of an ex parte temporary restraining order (TRO), meaning that the judge heard from one side (Battelle) before granting the order. Ex parte hearings are reserved for extraordinary circumstances, as the court notes in its order, since one-sided hearings tend to yield one-sided results. In this case, the judge decided an ex parte order was warranted on two grounds. First, Thuen had called himself a “hacker” and so was likely to destroy evidence given advance notice of the order. Second, assuming Thuen’s software was based on Sophia’s code (Battelle argued that it must be, because it had been developed in only a few months), making it available as open source would compromise the security of the utilities that used Sophia to defend their networks and endanger the national energy infrastructure. (I know, I know.) Predictably, the court relied on only the most negative definition of “hacker.” As many have noted, Southfork’s website (where the offending reference was found) almost certainly meant that he was a hacker in the more benign sense used by the security and programming communities: an enthusiastic programmer and problem solver (or a white-hat penetration tester).
After more facts come out, the court lifts the TRO
The court expedited the next hearing to give Southfork a chance to respond to the TRO. The order that came from that hearing demonstrates why no-notice TROs are trouble. Turns out, Thuen had already posted Visdom’s source code to Github in July, three months before the order issued. This fact essentially obviated the need to image Thuen’s computer, since the complete source code of Visdom was available on Github for analysis against Sophia’s code. Even if he’d hastily removed the repository, it’s likely that a backup could have been subpoenaed from Github. The existence of the code on Github also, of course, mooted the court’s order not to publish the code. (The court apparently believed Battelle that it wasn’t aware of the Visdom release until after the TRO, which explains why the second order doesn’t contain any profanity.)
The legal standard for a TRO (as for a preliminary injunction) requires the moving party to demonstrate “(1) a likelihood of success on the merits; (2) a likelihood of irreparable harm to the moving party in the absence of preliminary relief; (3) that the balance of equities tips in favor of the moving party; and (4) that an injunction is in the public interest.” In light of the July release of Visdom’s source code, the court was compelled to revisit every element of this analysis.
Having learned that Visdom was written in a different language (Go, rather than C) and had a completely different interface than Sophia, the court rightly concluded that Battelle was far less likely to prove infringement than it had previously thought. Regardless, the court found that Battelle was likely to succeed in its breach-of-contract claims. Thuen had allegedly agreed to work on nothing during his leave of absence except commercializing Sophia; the court held that he likely breached this agreement by building Visdom instead.
In revisiting Battelle’s argument that releasing Visdom would cause irreparable harm to national security, the judge considered testimony (now from both parties) on the age-old “security through obscurity” question: is software more secure if its source is kept secret or if it’s open to inspection and correction? (This isn’t a serious question to most security professionals, but open source security is counterintuitive to people unfamiliar with the topic.) The court noted the conflicting testimony on this issue and also that Battelle hadn’t shown that Sophia was currently in use at any utility companies and decided there was insufficient evidence of an imminent harm to national security.
Finally, the court had learned that Battelle waited five months after learning of Southfork’s plans to release Visdom before it sought the TRO. This delay seriously undermined any claim to urgency, as did the failure of the three months since Visdom’s actual release to yield a single catastrophe for Battelle’s business opportunities.
What happens next
The court’s swing to Southfork’s side on the TRO issue feels almost like repentance for credulously buying Battelle’s line on the TRO. Tellingly, the word “hacker” does not appear in the second order. However, whatever favoritism the judge may have shown Thuen on this latest order, it may not last long. According to the facts laid out by the court, two factors favor Battelle’s case. The first we already discussed: Thuen apparently violated his agreement to work solely on Sophia during his leave. In addition, however, his employment agreement was likely still in effect during his leave. Developers’ employment agreements often contain IP provisions assigning anything related to the employer’s business—even work done on the employee’s own time and computer—to the employer. If that was the case here, Battelle will have a strong argument that it owned Visdom from the beginning. If the employment agreement was silent on outside work, it will be a closer call.
A chance to revisit nonliteral copying?
If the case proceeds to trial, it will raise a really interesting infringement issue. Since Visdom is written in a different language from Sophia, Battelle will be in the possibly novel position of proving cross-language copying. This would mean showing that the non-literal elements of Visdom—its structure and organization, essentially—are similar enough to Sophia’s that it’s infringing. This is an evolving area of law. A cause of action for infringement of nonliteral elements of software was first considered in the 1992 case Computer Associates v. Altai, in which the court acknowledged that such a claim could succeed (but found against the plaintiff). Most recently, it arose in Oracle v. Google, where Oracle asserted copyright in naming and grouping of items in the Java API. Oracle lost, and the judge strongly suggested that the potential grounds for nonliteral infringement claims are even narrower than those first articulated in Altai.
The holding in Oracle rests in part on the theory that an API’s method names and inputs comprise a program’s (or programming language’s) “method of operation” and are thus excluded from copyright under 17 U.S.C. § 102. If Visdom’s internal structure (as opposed to its externally facing API) is sufficiently similar to Sophia’s, Battelle may be able to avoid a similar conclusion—a program need not copy another’s private method names and internal organization to remain compatible with other programs “operating” it. If Thuen literally translated Sophia’s code to a different language, preserving the relationships between the methods as well as their internal functionality, Battelle might have an argument. Short of that, there’s probably no hope for an infringement claim.
No Fourth Amendment issue
A lot of the outrage surrounding this case focuses on the seizure of Thuen’s laptop. As I said above, I think the court was imprudent in granting the TRO, and should have done its homework on the meaning of “hacker” (or pushed Battelle harder for evidence of likely tampering). But in general, civil discovery requests do not implicate the Fourth Amendment, so that issue is unlikely to get litigated here.
I was a teacher in Seattle Public Schools in the early 2000s. I had a generalist certification and a background in puppetry, so even though I was qualified to teach anything K-8, I mostly taught middle school art, with a two-year stint thrown in there teaching K-5.
I'm really happy to be announcing MakerBot Academy. It's our initiative to put a MakerBot in every school in the USA. I'm personally jumpstarting the movement by putting in a chunk of change and Ralph Crump and Autodesk are joining me to empower the next generation with advanced technology.
We also put together a Thingiverse Challenge to design math manipulatives, so that the community can do their part to get materials ready: when teachers get their MakerBots, they'll already have things to make that will improve their classrooms.
I can't wait to see what teachers and students do with MakerBots!
I’d like to share a neat Eagle hack for all the people who have taken our Eagle CAD classes (myself included) and our Eagle-using friends.
BOM-EX is a nifty little ULP (User Language Program) that extends the functionality of the built-in BOM ULP. BOM-EX not only helps you assemble a coherent BOM (Bill of Materials) right from your Eagle schematic, but it also makes it easy to assemble a database of parts, and to associate those parts with part numbers for DigiKey, Mouser, Newark, etc.
I made a nice little script that lets me build my BOM database without ever leaving the DigiKey website…
My usual process for selecting parts goes something like this:
1) Drop a part in the schematic.
2) Check to see if this part (in this size) exists, is available, is affordable, etc.
3) No? Keep looking.
4) When the project is done, go back and find every part again.
5) Assemble a BOM by hand, or spend time cleaning up the output from BOM.ulp.
6) Punch everything in to DigiKey, triple check to see what I forgot, etc.
7) Place the order, hope I didn’t forget anything, hope I ordered enough, etc.
BOM-EX makes things much simpler; I wish I had been using it from the start:
1) Find the part on DigiKey, etc.
2) Drop it into BOM-EX
3) When the project is done, export a BOM file that can be uploaded straight to DigiKey, Mouser, etc.
4) Order your parts with confidence!
But wait, how can we make this even simpler? How about populating our database straight from DigiKey?
First make sure you have BOM-EX set up. You can download the latest version of BOM-EX from Cadsoft’s webpage. Search for BOM-EX and download the latest version (bom-ex156 as of this writing). There’s a nice tutorial for setting it up here, and also a nice PDF included with BOM-EX itself.
Choose a location for your parts database file to live – it can be inside your Eagle project folder, but I recommend keeping it global, so you have an easy-to-reference library of common components for all your projects.
Next, download our Grab-Bag repository and find the python script in the Bom-ex folder. Open the script and change the PARTSDB path to point to where you keep your parts database file.
You can now add parts to your database file straight from the command line:
python addDKPartToBom-ex.py ATMEGA644A-AU-ND
will add the following line to your parts database file:
ATMEGA644A-AU Atmel DK ATMEGA644A-AU-ND IC MCU 8BIT 64KB FLASH 44TQFP 44-TQFP
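For reference, the approach inside the script is roughly the following. This is an illustrative sketch only -- the real addDKPartToBom-ex.py lives in the Grab-Bag repo, and the scraping detail here (pulling a description from the page title) is a stand-in for whatever markup the actual script parses:

#!/usr/bin/env python
# Sketch only: fetch a DigiKey part page and append a line to the parts database.
import sys
import urllib.request
from bs4 import BeautifulSoup

PARTSDB = "/home/you/eagle/partsdb.txt"  # point this at your parts database file

def add_part(dk_number):
    url = "http://www.digikey.com/product-search/en?keywords=" + dk_number
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")
    description = soup.title.string.strip()  # stand-in for the real description field
    with open(PARTSDB, "a") as db:
        # The real database line also carries manufacturer and package columns.
        db.write("%s DK %s %s\n" % (dk_number, dk_number, description))

if __name__ == "__main__":
    add_part(sys.argv[1])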
Note: you may need to install the BeautifulSoup module for Python. You can install it with pip install beautifulsoup4 or easy_install beautifulsoup4. It’s also available as the python-beautifulsoup4 package in recent versions of Debian, Ubuntu, and Fedora, or you can download it from their website.
For extra coolness, drop addDKPartToBom-ex.py into /usr/local/bin (or anywhere else in your PATH), make it executable (chmod +x addDKPartToBom-ex.py), and now you can run it from anywhere.
But wait, there’s more! If you’re on a Mac, grab the Add Parts To Library.workflow file in the repository and double-click it. Select Open With Automator.
If you followed the above instructions, you shouldn’t have to change anything; otherwise, update the “Run Shell Script” box in Automator to reflect the correct location of your addDKPartToBom-ex.py script. Close Automator, double-click on the workflow again, and select Install.
Now, when you’re browsing the DigiKey website, just right-click on the DigiKey part number, select Services, and select Add Part to Library.
Et Voilà, your parts are appended to your BOM-EX parts database, no manual entry required! Enjoy a few hours saved in BOM-hell on your next Eagle project. See this tutorial for more information on using BOM-EX.
Do you have any slick Eagle hacks? Share them in the comments!
The Granny-bag index of celebrity: this picture was taken in early September on the platform of Springpfuhl station in Marzahn (East Berlin). The legend printed on the bag says: “Edward Snowden, a traitor in the USA, but a hero for humanity!”
If you’re trying to install enigmail and icedove on Debian, you might find that the enigmail and icedove package versions conflict. Never fear, just install enigmail all by itself. It will remove icedove if present but then install iceape, which contains iceape mail.
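In other words, just run (as root, or with sudo):

apt-get install enigmail

and let apt remove icedove and pull in iceape on its own.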
Today the Copyright Review Committee in Ireland published its report, ‘Modernising Copyright’ (beware, largish file). As mentioned elsewhere, I too made a submission to the committee. Eoin O’Dell (head of the CRC) posted an announcement of its release here.
Like the consultation paper which preceded it in 2012, the final report looks long at first glance. On closer inspection, however, its analysis is confined to the first one hundred pages; thereafter follow seventy pages of draft legislative proposals, and the last ten pages are the skimmers’ delight – a precis of the report’s contents for the unmotivated.
Given that my nerdy interest in copyright is not equally distributed, I will not pretend to offer a full overview, but will instead focus on the parts which strike me as most salient. These will be dealt with in the order that they appear in the paper, which means that Fair Use comes last; whilst this initially seems strange, it makes sense within the weft and warp of the Report’s own logic.
1. The consultation paper was enthusiastic about the creation of a Copyright Council (CC) to serve as a policy talk shop open to the vying interests at play in the copyright arena, so it is no surprise to see a formal recommendation that it be created.
Its membership is to be drawn from all interested parties, which, it is noted, would distinguish it from superficially similar organisations elsewhere, whose principals tend to be rightsowners, or their licensees, or their friends or whatever.
The CC’s functions are to be many and varied, from promoting ‘awareness’ about copyright to researching the social and cultural consequences of the law, providing insight about technical issues and drawing up codes of best practice; all very worthy indeed.
The prospect of more serious responsibilities for the CC is also held out – as possible operator of the eventual domestic system to manage orphan works, and of collective licensing agreements devised within a potential digital Copyright Exchange.
2. The cost of intellectual property litigation is a common complaint. The Report argues that the District Court should be enabled to hear cases up to its threshold of 15,000 euros. Another source of whinging is the shortage of judges capable of tackling the complexity arising in IP cases; here it is suggested that a dedicated court be established at Circuit Court level.
3. Even on the part of those committed to maintaining the basic structure of copyright, there has been discomfort at the scale of punishments meted out to what are ultimately rather mild defendants; remember Jammie Thomas? (How quickly our martyrs fade back into obscurity.) The report has this to say:
there was a great deal of support in the submissions for the idea that remedies for breaches of copyright should be proportionate, and that civil sanctions (such as injunctions and damages) should be graduated. In this way, at one end of the scale, unintentional breaches would not be met with significant awards of damages, and that, at the other end of the scale, the most serious breaches would be appropriately dealt with by the award, for example, of restitutionary, exemplary or punitive damages.
4. The chapter dedicated to ‘Rightsowners’ contains nothing momentous. The request to make circumvention of digital rights management into an independently actionable form of infringement was rejected. A legislative lunar eclipse creating potentially perpetual copyright in the case of some unpublished works is listed for elimination.
Photographers receive a bone here: they were voluble during the whole process, and have been especially worried that the Orphan Works proposals could be used as cover by their enemies and exploiters (everybody!) to strip the attribution from their work, declare it orphaned, and use it without payment. Although I’m a bit sarcastic about the tone of their contribution, I have some sympathy for them, caught as they are between a market ever more heavily populated by former amateurs (now armed with high-level equipment and the means to get their photos quickly to agencies over the web), an agency business ever more concentrated (Getty etc. munching all the competition), and cost-cutting publishers who really would screw them if they could. To allay their fears the Report argues that metadata should be protected, and the stripping of same punished.
On a related point, however, no change is suggested regarding the use of photographs for news reporting as part of fair dealing. I recall trying to research the logic behind this a year ago and could find no clear explanation, which made me feel a bit dumb. So is it to serve the public interest in access to news? To reduce the costs of reporting? Answers on a postcard please.
5. The real action begins in the section dedicated to ‘Users’. The tone is captured by the first proposed change: fair dealing is to ‘include’ rather than ‘mean’ the exceptions which follow thereafter – consequently the category is to be kept open, available for expansion in the future, in line with further technological change or opportunity.
A range of exceptions permitted under the EU Copyright Directive – but never implemented in Irish law – are then reviewed, and it is recommended that each be integrated into the statute. These include:
- private copies and format-shifting, including into formats for storage ‘in the cloud’
- non-commercial user-generated content
- extended exceptions for educational purposes (this is limited to ‘formal educational establishments’, something which seems flawed to me given the capacity and actuality of self-organised education online, by definition occurring in largely informal environments)
- enhanced exceptions for people with ‘disabilities’
6. The above exceptions are all derived from the language of the EUCD and are thus of unimpeachable pedigree. In the following section on ‘Entrepreneurs and innovation’, the Report moves into more creative territory. The crux of it is the proposal for a new exception for transformative works or uses of otherwise protected works. The opening part of the proposed legislative language is worth quoting:
(1) It is not an infringement of the rights conferred by this Part if the
owner or lawful user of a work (the initial work) derives from it an
innovative work.
(2) An innovative work is an original work which is substantially different
from the initial work, or which is a substantial transformation of the
initial work.
(3) The innovative work must not—
(a) conflict with the normal exploitation of the initial work, or
(b) unreasonably prejudice the legitimate interests of the owner of
the rights in the initial work.
This is then followed by a series of sections limiting its applicability, but the overall design represents something of a breakthrough. As an aside, it seems appropriate to point out that this move is, to my knowledge, based on the rather brilliant work of Prof. Lionel Bently at Cambridge University, who made carefully argued submissions to both the Hargreaves Report in the UK and our Irish iteration. Therein he argued that whilst the reproduction right had been harmonised, leaving little wiggle room, the adaptation right had not, and member states are free to do what they want within the limitations of the Berne Convention. The proposed section 106 integrates the language and logic of the Berne three-step test (the threshold legitimate exceptions must meet), but there is a strong case that this is not as stringent as it might initially seem; otherwise the US’s fair use clause would already have been found in violation of Berne. Anyway, if one is going to read one technical submission in this whole process it should be Bently’s, IMHO.
7. Next up are proposals relating to heritage institutions, not my cup of tea.
8. Lastly, as if to conclude with a crescendo: fair use. And the Committee has decided that Ireland needs it, whilst being at pains to point out that this is a specifically Irish version rather than some US idea baldly imported.
The test as to whether a use qualifies as fair comprises eight criteria and the language is to be found under section 49A.
(a) the extent to which the use in question is analogically similar or related to the other acts permitted by this Part,
(b) the purpose and character of the use in question, including in particular whether
- it is incidental, non-commercial, non-consumptive, personal or transformative in nature, or
- if the use were not a fair use within the meaning of the section, it would otherwise have constituted a secondary infringement of the right conferred by this Part,
(c) the nature of the work, including in particular whether there is a public benefit or interest in its dissemination through the use in question,
(d) the amount and substantiality of the portion used, quantitatively and qualitatively, in relation to the work as a whole,
(e) the impact of the use upon the normal commercial exploitation of the work, having regard to matters such as its age, value and potential market,
(f) the possibility of obtaining the work, or sufficient rights therein, within a reasonable time at an ordinary commercial price, such that the use in question is not necessary in all the circumstances of the case,
(g) whether the legitimate interests of the owner of the rights in the work are unreasonably prejudiced by the use in question, and
(h) whether the use in question is accompanied by a sufficient acknowledgement, unless to do so would be unreasonable or inappropriate.
These eight elements are structured into three groups: the first cluster (three factors) probes for elements which could legitimate the use; the next two criteria touch on general matters; the final group of three tests those elements which would weigh against a finding of fairness.
Overall I think there is a lot to like in this report. It displays some fancy footwork in working within the constraints of the EU copyright acquis whilst responding to a need for flexibility which can serve as an incubator for economic opportunities. Let’s not fool around here: Ireland is still under the Troika and will be dealing with the fallout of the rabid tomcat and its property bubble for a long time to come.
The grand design and originality of ‘Modernising Copyright’ is thus the injection of targeted flexibility into the legal framework – this is no mere echo of the Hargreaves Report in the UK, which backed away from Fair Use out of fear of the uncertainty it would necessarily entail. If the Report’s authors have their way, contested uses in Ireland will first be examined to see if they fit the exceptions spelled out in the EUCD, or checked against the innovation exception if they are derivative works/adaptations. Only if they have fallen at those two fences will the fair use test be their last chance saloon.
Now I’m curious to hear the responses of the various interests involved.
Later there will be time to ponder my reservations: the Report kicked for touch on questions around secondary liability, safe harbours etc and remained silent on the conflicts around enforcement.
And then there’s the politics – will Fine Gael and Labour actually do anything with it or will it just be buried?
This petition was on a table by the doorway at a Starbucks near my house, and the top sheet had even collected a lot of signatures. I wonder what those people thought they were accomplishing. You can click on the photo to get an enlarged version, but here’s what the text says:
To our leaders in Washington DC,
now is the time to come together to:
- Reopen our government to serve the people.
- Pay our debts on time to avoid another financial crisis.
- Pass a bipartisan and comprehensive long-term budget deal by the end of the year.
It’s as though Starbucks CEO Howard Schultz hears someone getting mugged outside his window and shouts “Hey you all down there, quit fighting!”
Memo to Starbucks: the way to solve this crisis is by taking a side. It’s literally true: as their poll numbers have dropped (i.e., as more people have taken sides), the Republicans have started to abandon their demands. When enough of them abandon enough of their hostage-taking ways, the government will re-open, the debt ceiling will be raised, and conversation will be possible. Humiliating defeat is also a bipartisan solution.
If you’re a gigantic publicly-held company and don’t feel you can afford to take a side, then at least don’t put out pointless petitions in favor of unicorns and rainbows and everybody getting along. That’s worse than useless. You might confuse some poor person who hasn’t yet had their morning coffee into thinking they’re actually participating in politics when all they’re doing is donating their name to your misguided and implicitly partisan publicity drive.
Refusal to take a side almost always equals taking one side. In this case, by legitimizing the Republicans’ extortionary tactics, Starbucks is supporting their side. All the people signing that petition are doing so too, but — especially knowing the demographics of Hyde Park, Chicago, where that particular Starbucks is located — they probably don’t think of themselves as doing that. That’s what makes this worse than useless.
I’m not sure how one conspicuously refuses to sign a petition. Maybe cross out one line? Sign your name and then cross over it? What I did at that Starbucks was write a note at the top of the petition about “false equivalency” and how the only constructive action to take here is to take a side. If you stop by a Starbucks today, please do the same :-).
Our laser cutter needs a replacement part and will probably not be working by Monday. You’re still welcome to come over for our regular open Thursday night, but you may not be able to do any laser cutting, so bring your Arduino or your hyperbolic crochet instead.
Our laser has been restored to full operation. Do not attempt to bring your hyperbolic crochet to laser night. We apologize for the inconvenience. Thanks for your patience!
Arrgh, I wish I could go to this!
Eben Moglen is giving a series of talks entitled “Snowden and the Future” on four Wednesday nights, spread across October, November, and December. I’d even fly into New York to attend some of them, but I have choir rehearsal on Wednesday nights (and I’ve already missed rehearsals due to travel, so don’t want to do more of that).
But if you’re in New York, you should go! They’ll be at Columbia Law School, Jerome Greene Hall room 101 (map), from 4:30pm – 5:30pm, on Wednesdays Oct 9th, Oct 30th, Nov 13th, and Dec 4th. More information at snowdenandthefuture.info. The talks will be live-streamed at that site too.
Eben Moglen will be giving a series of four public talks in New York City, entitled "Snowden and the Future", starting Wednesday, October 9th (the other dates are Oct 30th, Nov 13th, and Dec 4th, all Wednesdays).
All talks will take place at Columbia Law School, in room 101 of Jerome Greene Hall (map), from 4:30pm - 5:30pm. For those who can't be there, streaming video of the events as they take place will be available from snowdenandthefuture.info.
Why you should go to these talks:
The connection between copyright restrictions and civil liberties violations is clear and unavoidable. We've written about it here (and here and here and here). It's been the key to the Pirate Party's political success in Europe, and the subject of one of Nina Paley's excellent minute memes. Eben Moglen, the founder and director of the Software Freedom Law Center, is one of the clearest thinkers talking about digital freedom today -- and one of the most inspiring: a previous public lecture of his led directly to the creation of the Freedom Box Foundation. He's also a terrific speaker. You won't be disappointed; go, and bring all your friends.
The surveillance state is aided and enabled by information monopolists who assert that watching people's Internet usage for unauthorized use of copyrighted material is so important that it trumps both privacy concerns and freedom of expression. That's why we keep a close eye on surveillance news here at QuestionCopyright.org, and encourage you to as well.
For more information on these lectures, visit snowdenandthefuture.info.
The rosetta stone for our talk was Fred Turner’s seminal paper Burning Man at Google: a cultural infrastructure for new media production (published by New Media and Society, the same journal that published my and Aram’s paper on The End of Forgetting (preprint)), which Turner also presented at Google, where his talk was recorded.
We tried to connect Burning Man to a central question in education — the question of transference. Do skills learned under simulated conditions transfer over to real-world settings? We started out with the grand question, “What Educates?”, and tried to narrow that down to the question of how we can view commons-based peer production in an educational context. What can Burning Man, and crucially, the Maker Spaces that make Burning Man possible, teach educators about teaching and learning?
Now that we have presented this to CCNMTL, some of the librarians have gotten wind of our talk, and have invited us to re-present it at a tech brownbag lunch later this Fall.
To the evolution!
It is easy to find arguments that social media are wonderful and that they are horrid: that they will enable new heights of connectivity and creativity or rot our brains. I've come to the opinion that while these arguments are interesting, and get a lot of attention, they are of little utility. The technology isn't going away, nor are our easily distractible monkey-minds. What we need is an understanding of how to use digital communications well. This is the task I've set for myself in a new course, Communication in a Digital Age, with the help of Howard Rheingold's excellent Net Smart: How To Thrive Online.
Hexascroller has been a central fixture at NYCR for the past few years, with a few ups and downs. Its replacement, Octoscroller, improves on our classic message alert polygon by having two more sides and two more colors of LEDs.
The userspace application renders images into a shared memory frame buffer, or in this case receives UDP packets containing video images from the Disorient Pyramid transmitter. The PWM algorithm can do between eight and sixteen levels of brightness for each color, producing approximately 12-bit color.
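As a rough sketch of that receive path (not LEDscape's actual code; the port number and frame geometry below are invented for illustration), the userspace side can be little more than a loop that binds a UDP socket and copies each datagram into the frame buffer:

/* Hedged sketch of a UDP frame receiver, assuming one complete
 * 24-bit RGB frame per datagram. PORT, WIDTH, and HEIGHT are
 * placeholder values, not the project's real ones. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define PORT   9999                    /* assumed */
#define WIDTH  128                     /* assumed */
#define HEIGHT 32                      /* assumed */
#define FRAME_BYTES (WIDTH * HEIGHT * 3)

int main(void)
{
    static uint8_t frame[FRAME_BYTES]; /* stands in for the shared
                                          memory frame buffer */
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    for (;;) {
        ssize_t n = recv(s, frame, sizeof frame, 0);
        if (n == FRAME_BYTES) {
            /* a full frame arrived; the PWM/driver stage reads it
               from here on the next scan */
        }
    }
}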
See it in person at MakerFaire in NYC this year and read on for details of how to wire up a driver for the panels, as well as a walkthrough of some of the PRU code.
The brains are a BeagleBone Black running the LEDscape custom PRU firmware. The AM335x CPU in the BBB has two separate realtime microcontrollers built into its die, both with full access to the GPIO lines and cache-coherent access to main memory. This bit of hardware/software allows the user application to simply render into a frame buffer, which is then driven to the panels by the PRU.
The key components of Octoscroller are these 16×32 RGB panels from Adafruit. Unlike the popular WS281x LED strips that have their own PWM hardware built into each pixel, these panels are very inexpensive since they require continuous refresh by an external driver. The Arduino firmware can only drive a single panel and consumes a significant amount of CPU interrupt time to maintain the image. With LEDscape, the BBB can drive up to four chains of four panels each at 0% CPU load.
Powering the panels and the BeagleBone WAS a 5V 10A DC supply, but it turns out that each panel draws up to 2.6 amps in RGB mode and 3.5 amps while displaying full white.
The ten amp supply melted during high-brightness testing and was replaced with a 60 amp open frame supply (eight panels at full white can draw 28 amps on their own). This should be sufficient once octoscroller grows into hexadecascroller…
The panels are built as six parallel shift registers, each with 32 bits, and twelve 16-channel constant current LED drivers. The connectors have six data and six control inputs each: R1, G1, B1, R2, G2 and B2, and A, B, C, CLK, LAT, OE. The three address select lines, A, B, C, select which two rows are currently displayed (0/8, 1/9, 2/10, etc). On each falling edge of the CLK line, a new bit is shifted in on the six data inputs. On the falling edge of the LAT line, the new data is latched and, when OE is held low, displayed. To save on GPIO lines, the PRU shares the control lines between all output chains and only needs the six additional data lines per chain. If the HDMI hardware is disabled then four chains can be driven by the single board.
R1  __XXXXXX..XXX________   Row ABC + 0
G1  __XXXXXX..XXX________
B1  __XXXXXX..XXX________
R2  __XXXXXX..XXX________   Row ABC + 8
G2  __XXXXXX..XXX________
B2  __XXXXXX..XXX________
CLK --_-_-_-.._-_--------   Clock in 32 bits per panel in the chain
OE  _____________----____   Disable output while changing ABC and latch
A   -------------XXX-----   Select new address
B   -------------XXX-----
C   -------------XXX-----
LAT ---------------_-----   Latch new data to output
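To make that sequence concrete, here is a minimal C sketch of one row-pair being clocked out. It is a paraphrase of the protocol described above, not LEDscape code: set_pin() stands in for the real GPIO or PRU register writes, and the chain length is an assumption.

/* Clock one row-pair into a chain of panels. Pin names follow the
 * description above; set_pin() is a placeholder hardware hook. */
#include <stdint.h>

#define PANELS      4                  /* assumed chain length */
#define CHAIN_WIDTH (PANELS * 32)      /* 32 columns per panel */

enum pin { R1, G1, B1, R2, G2, B2, A, B, C, CLK, LAT, OE };

static void set_pin(enum pin p, int level)
{
    /* in real firmware this would write a GPIO/PRU register */
    (void)p; (void)level;
}

/* top/bot hold one on/off bit per color for rows 'row' and 'row + 8' */
void output_row(uint8_t row,
                const uint8_t top[CHAIN_WIDTH][3],
                const uint8_t bot[CHAIN_WIDTH][3])
{
    for (int x = 0; x < CHAIN_WIDTH; x++) {
        set_pin(R1, top[x][0]); set_pin(G1, top[x][1]); set_pin(B1, top[x][2]);
        set_pin(R2, bot[x][0]); set_pin(G2, bot[x][1]); set_pin(B2, bot[x][2]);
        set_pin(CLK, 0);               /* bit shifts in on the falling edge */
        set_pin(CLK, 1);
    }
    set_pin(OE, 1);                    /* blank while changing the address */
    set_pin(A, row & 1);               /* select which row-pair is lit */
    set_pin(B, (row >> 1) & 1);
    set_pin(C, (row >> 2) & 1);
    set_pin(LAT, 0);                   /* falling edge latches the new data */
    set_pin(LAT, 1);
    set_pin(OE, 0);                    /* display the latched row-pair */
}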
The panels also shift out the data to the left as they go, allowing them to be daisy chained. Since the entire row needs to be clocked out before it can be displayed, this limits the maximum desirable number of panels in the chain. The highest reliable bit clock seems to be on the order of 10 MHz, which means that each row of each panel takes about 5usec to output, or 40usec for all eight scan lines. If there are ten panels in series, this would rescan each panel every 400usec, which seems to be the limit of persistence of vision. (2.5KHz, 8:1 duty cycle.)
// If the brightness is less than the pixel number,
// turn off but keep in mind that this is the brightness
// of the previous row, not this one.
// \todo: Test turning OE on and off every other,
// every fourth, every eighth, etc pixel based on
// the current brightness.
LSL     p2, pixel, 1
QBLT    no_blank, bright, p2
DISPLAY_OFF
no_blank:
However, that duty cycle holds only if the pixels are fully on or off. To create variable brightness using PWM requires that the panels be rescanned at a much higher frequency so that there can be more variation between the levels. In practice I’ve found that a maximum of four panels can be chained together and still retain decent performance (eight levels per pixel), or two panels with sixteen brightness levels. My current PWM algorithm uses the !OE line to toggle the display off after a few pixels on the next line have been clocked out. This seems to work fairly well, although I’m sure there are improvements that can be made.
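In C rather than PRU assembly (again a paraphrase of the technique, with the helper functions as assumptions, not real LEDscape symbols), the brightness gate looks roughly like this:

/* Blank the previous row early via !OE while the next row is being
 * shifted in; its on-time becomes proportional to 'brightness'. */
static void shift_next_row_pixel(int pixel)
{
    (void)pixel; /* assumed helper: clock one pixel's data bits */
}

static void set_oe(int level)
{
    (void)level; /* assumed helper: drive the !OE line */
}

void shift_row_with_pwm(int chain_width, int brightness)
{
    for (int pixel = 0; pixel < chain_width; pixel++) {
        shift_next_row_pixel(pixel);
        /* the panel is still lighting the *previous* row here; cut
           it off once the counter passes its brightness threshold */
        if (brightness < pixel * 2)
            set_oe(1);   /* output stays disabled for the rest of
                            this scan */
    }
}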
Want to build your own? We’ll have a class on it sometime soon that will include all the parts and a custom BeagleBone cape to make your own network attached low-res colorful polygonal display device.
There are now circuit boards for driving up to eight chains of eight LED panels each. That’s 64 LED matrices for a mini-jumbotron! Once we have more panels in house, we’ll have an announcement for signups for the class.
Just for the record, I know I could have created the English-language wikipedia entry for LOVEINT myself. But I wanted to see how long it would take from tweeting (and denting) it until someone else creates the entry :-).
Embarrassing — German Wikipedia has an entry for "LOVEINT" but English doesn't. (Yet?) http://t.co/dT6kpKB6uO
— Karl Fogel (@kfogel) September 8, 2013
(I think the only Wikipedia article I’ve actually started is the one on William Binney, which has developed very nicely since then. This time I’m taking the lazy route, though, and calling it an “experiment” since people have more respect for science than for laziness.)
Update: Okay, looks like the deed was done on 12 September 2013, by Wikipedia editor Koavf (Justin A. Knapp). He did it as a redirect to the “2013 mass surveillance disclosures” article, which mentions & defines “LOVEINT”. No idea whether he ever saw this blog post, but anyway, thanks Justin!
This year the Disorient Camp at Burning Man built a 7m tall pyramid with over half a kilometer of LED strips. Several artists developed patterns for the panels, including Disorient founder Leo Villareal and Jacob Joaquin from Fresno Idea Works. Every night there was a party in front of the pyramid, with bicycles blocking the entire Esplanade.
The pyramid was visible from just about everywhere on the playa and served as a great beacon for finding the camp after a long night out. Read more for the technical details of how it was constructed and links to all the source code.
One of the longer patterns shows an evolution from the double helix of DNA, to the game of life, to wiggly, worm-like snakes, before transitioning back to more abstract patterns.
A Toughbook ran all of the pyramidTransmitter code, which rendered the frames to 24-bit bitmaps and sent them over UDP to the network. This machine spent all night in the dust and each morning was covered in several millimeters of fresh playa dust.
Driving each face of the pyramid was a BeagleBone Black running LEDscape, which sliced the images into the individual panels. It then sent the individual pieces over USB to four teensy3 microcontrollers, each of which drove eight WS281x 30 LED/m strips. The tall panels had 5V 40A supplies, the smaller ones used 30A.
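A hedged sketch of that slicing step in C (the face geometry and the even four-band split are assumptions for illustration; the real mapping depends on how the strips snake across each face):

/* Copy one teensy3's horizontal band out of a face-sized frame
 * before shipping it over USB. FACE_W/FACE_H are assumed values. */
#include <stdint.h>

#define FACE_W  96                     /* assumed face width  */
#define FACE_H  64                     /* assumed face height */
#define BANDS   4                      /* one per teensy3 */
#define BAND_H  (FACE_H / BANDS)

void slice_band(const uint8_t face[FACE_H][FACE_W][3], int band,
                uint8_t out[BAND_H][FACE_W][3])
{
    for (int y = 0; y < BAND_H; y++)
        for (int x = 0; x < FACE_W; x++)
            for (int c = 0; c < 3; c++)
                out[y][x][c] = face[band * BAND_H + y][x][c];
    /* 'out' is then written to that teensy3's USB endpoint */
}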
The project was a bi-coastal one — the large panels were soldered and wired in California and much of the low-level software was developed in New York. The boards were designed by Naim and are available under an OSHW license. On site, a crew of volunteers assembled the panels onto the pyramid structure built by the rest of the Disorient camp.
Operating in the harsh conditions of the Black Rock desert is difficult for electronics. We had lots of trouble with the power supplies failing due to the heat and dust; next year perhaps we’ll use more, but lower amperage, sealed DC supplies. The adhesive holding many strips to the pegboard failed. Several of the solder joints broke due to stress during mounting. Screw terminals rattled loose from the 25KW of sound system in the Disorient dome. There was late night soldering while hanging from the rafters of the pyramid. Despite all these problems, the panels and the sixteen thousand LEDs put on quite a show the entire week.
QuestionCopyright.org has pledged $500.00 to the Tupi 2D Animation Software Kickstarter campaign, and we're posting this to help spread the word.
Please join us and the other project backers, with whatever amount you can pledge! Remember, your pledge is only called in if Tupi reaches their $30,000 goal by September 26th.
Tupi is already runnable code. They're on version 0.2 right now, and their goal in this campaign is to reach their 1.0 feature set, including installers for Macintosh and Windows. (It's already packaged for Debian and Ubuntu GNU/Linux; I've installed it.)
Our Artist-in-Residence Nina Paley (who also backed Tupi's campaign personally) explained very well why projects like Tupi are important, in her post "It's 2013. Do you know where my Free vector animation software is?". When you're an artist, you're dependent on your tools — and that means when someone has a monopoly over your tools, they can play havoc with your art and your livelihood. That's exactly what happened with Adobe's Macromedia Flash 8. Read Nina's post for the details, but basically Adobe decided to remove features from their Flash authoring software, in order to sell those features separately in other programs. As Nina points out, the problem with this isn't just the extra expense, it's the increase in workload and production time. And the looming threat that they might do it again in the next version. They can yank the rug out from under their users at any time, and there's nothing the users can do about it, except refuse to upgrade (which becomes less and less feasible as time goes on, of course).
Free, open source programs can't do this to their users, because no one has a monopoly over the software. If one group puts out a version of the software that is missing important features, users will shrug and start using a competing fork that treats them better. It also means that if enough artists need a particular bugfix or improvement in the software, they have a path to make it happen — they don't have to be programmers, as long as they can band together and hire programmers. Users are not vulnerable to arbitrary decisions handed down from management, the way they are with proprietary software. (Of course, the more likely scenario is that artists would band together and just pay Tupi's original development team to make the necessary changes. The fact that the users have the option to go elsewhere is precisely what makes the original authors likely to be responsive to true demand — a free-market ideal that proprietary software is structurally biased against attaining.)
Tupi has another thing going for it: Nina, an extremely experienced animator who knows the major competing proprietary tool very well, has publicly volunteered to test and provide feedback to open source animation projects, including Tupi. (Nina says "Tupi’s strength is its simplicity; it’s great for kids and anyone new to animation. It doesn’t yet have the power I need to produce feature films, but its development is a good thing for all of us. ...")
III.-Les personnes coupables de la contravention définie au I peuvent, en outre, être condamnées à la peine complémentaire de suspension de l’accès à un service de communication au public en ligne pour une durée maximale d’un mois, conformément aux dispositions de l’article L. 335-7-1.
[Persons guilty of an infringement, as defined in section 1, may furthermore be sentenced to the additional punishment of the suspension of access to a service for communication to the public online for a maximum duration of one month, in conformity with the provisions of Article 335-7-1]
Décret n° 2010-695 du 25 juin 2010
Le III de l’article R. 335-5 du même code est abrogé.
[Section III of Article 335-5 of the same law is abrogated]
So goes the beginning of the end of Hadopi. Having begun operation only in 2010, the abolition of France’s newly minted institution for the application of ‘graduated response’ to acts of copyright infringement was foretold in May this year. Closely associated with the persona of one Nicolas Sarkozy, it was predictable that his successor François Hollande would jettison it.
The opportunity arose with the delivery of the Lescure report, prepared by a committee operating under a former head of Canal+. Sections relating to this topic were but a small part of this mammoth document on the future of the media in France, but they provided the necessary ammunition for a government keen to pull the trigger.
The key argument the report made for a policy change was the failure of the Hadopi system to positively affect the take-up of ‘legal offers.’ Over the last year sales of music in France have continued to fall, as have Video On Demand viewings, cinema ticket sales, etc.
The authors made the case for the annulment of the ‘third strike’ – disconnection for repeat offenders – and for a reduction of the fines for infringements. Thus disconnection was officially abolished in a decree published July 9th. Up until then only one person had actually been disconnected – for two weeks – and this occurred in June, after it became clear that the regime was to be abandoned!
But despite being on the brink of disappearance, Hadopi continues sending out mails castigating alleged infringers: 92,000 more in July, bringing the cumulative total to two million since inception. This figure encompasses only initial warnings. In addition, another two hundred thousand letters have been sent to repeat offenders, and seven hundred more cases escalated to the prosecutors. Not that many really, at least when compared with the larger number of threatening legal letters dispatched to their peers in neighbouring Germany, demanding payment of up to 1,200 euros from each unfortunate recipient (with what level of success remains unclear).
Another recommendation in the report – that the fine for infringers be reduced from 1,500 euros to 60 – was not implemented. However, in the sole case where a fine was imposed, the amount came to 150 euros.
What remains of its competencies are to be passed on to the CSA (Conseil supérieur de l’audiovisuel). When this will actually happen remains unclear, so Hadopi limps on for now, on a reduced budget and likely low morale.
Aurélie Filippetti (the new Minister in charge) & Co. were understandably keen to distance themselves from the toxic Hadopi brand, but copyright enforcement initiatives are far from dead – they’ve just changed target. In January the head of the Rights Protection at Hadopi, Mireille Imbert-Quaretta, will deliver proposals regarding measures to target online streaming and direct download providers facilitating large scale infringement. Such sites will be required to filter uploads and weed out unauthorised works. Failure to do so will result in blacklisting by ISPs. Intermediaries providing advertising placement and payment services for sites deemed rogue would also be targeted. And as La Quadrature Du Net pointed out in their press release at the time, the overall Hadopi apparatus in France remains in place, so while the most egregious elements have been killed there is more to do.
Nonetheless this development represents another setback for the copyright industry campaign against users. In addition to France one can add Britain (delays in implementing the DEA), Germany (where the regulations concerning copyright Abmahnungen are due to be reformed) and the US (where the concrete application of ‘six strikes’ looks distinctly vague). So while there is no room for complacency, some modest celebration is in order.
I’d been waiting for this! (N.b.: had inside information it was coming.) The code behind my favorite RSS reader, feedbin.me, has been open sourced. See the announcement, or grab the code from github.com/feedbin/feedbin.
Feedbin is the RSS reader I use every day now. The minimal design is a pleasure: nothing gets between me and the articles I’m trying to read, but at the same time the knobs I need are there when & where I need them. It supports import/export, and has a documented API.
I don’t host my own Feedbin instance, of course. I just use the service run by Feedbin’s author, Ben Ubois, at feedbin.me. At the eminently reasonable price of $3/month, it’s well worth it for me not to have to worry about configuration and hosting administrivia. At the same time, knowing that the code is open source is important: that means it can never be taken away from its users. It means that the investment I make as a user can’t be suddenly rendered obsolete by one party’s decision to yank the rug out from under everyone.
If for some reason Ben Ubois ever shut down his Feedbin commercial service (unlikely), that still wouldn’t mean I’d have to set up my own instance. Someone else would probably do so, and I’d just pay them instead. Or if no one did so immediately, well, that’s a market gap I might be interested in stepping into… but then many others would be having the same thought. Open source is not about doing it yourself; it’s about removing barriers to people doing things for each other.
That’s why it’s important for commercial services like Feedbin to also be open source.
Congratulations to Ben! I hope he gets many new users from among those who feel that commerce and freedom taste better together.
Here’s a screenshot of Feedbin’s three-column layout (feeds, [un]read articles, then the current article in the rightmost large pane):
Yesterday we announced that we closed our merger with Stratasys. We have a lot of work to do so this isn't a finish line, it's a milepost. It's like we've finished a video game level and it feels really good to have completed the level, but now we're on the next level and the terrain is a little different and the challenges are fresh.
In the early days of MakerBot in 2009, after we started the company and before we started shipping, I really thought that MakerBot would be a side project. I met Zach and Adam in the early days of NYCResistor. It promptly became clear that MakerBot wasn't a side project as we started logging 100-hour weeks. We spent the early days lasercutting, banging on keyboards, acquiring parts, and packing boxes. We lived on ramen, which turns out to not be very healthy. I sold my musical instrument collection and started a secret cafe in my apartment to get through that first year before we paid ourselves.
Fast forward to 2013 and, as of yesterday, MakerBot is now a public company as a part of Stratasys. Since 2009 we've shipped an epic amount of machines and now have the capacity to keep cranking them out at our Factory here in Brooklyn. We're getting ready to launch the MakerBot Digitizer which allows people to make 3D models out of physical objects using LASERS.
We didn't get here alone. All of the people who have ever bought a MakerBot, everyone who has ever worked at MakerBot, and everyone who works at MakerBot now helped us get this far. I am proud of the work we've done together and thankful to get to have worked and be working with such smart people. There is a lot of potential energy in the universe and exploring how we use it to empower creative people is our frontier. We've got a lot of work to do and the future looks bright.
(Factory image: Core 77)
In June the Irish High court granted an application by four music companies to order six ISPs to block access to the Pirate Bay web site within thirty days. The decision was widely reported in the press at the time but the written judgement wasn’t published until July. It is notable that the country’s biggest ISP, Eircom, was not amongst the parties subject to the order, because they had agreed to block TPB voluntarily.
This case represents the first action taken on the basis of the amendment to the copyright law last year, controversially enacted by means of statutory instrument. The decision is brief, with McGovern J citing and accepting the analysis of Judge Charleton, whose interpretation of the legislation in 2010 gave rise to the amendment process. It was specified that, were the Pirate Bay to move to another web address, the applicants would not need to apply for a new order, but simply inform the ISPs, who will be implementing the domain blocks at their own expense. IRMA – the trade organisation representing the music industry – has said that they plan to seek similar orders against up to twenty more sites in the near future.
In May an application by Digital Rights Ireland to be appointed amicus curiae was separately rejected by Justice Kelly. Their involvement had been opposed by music industry representatives and the court took the position that DRI could not be regarded as a neutral party, nor were they ‘charged in either domestic or international law with a public role in the area which is the subject of this litigation‘. DRI had argued that their participation was warranted due to the potential for decisions made under the amended legislation to impact on parties not represented in the proceedings. This was rejected.
Ben Kuchera wrote a piece for the Penny Arcade Report about how online harassment drove Phil Fish, developer of the game Fez, to publicly scrap plans for a sequel. The thrust of the piece is that abuse by online commenters is a real force that affects everyone who creates art in public in the 21st century. Around the middle of the article, Kuchera makes a point of saying that online harassment isn’t limited to women. In a section titled “Abuse isn’t localized, rare, or limited to one gender,” Kuchera writes:
This has nothing to do with politics, or gender. I know women who have been threated [sic] physically because of their thoughts on real-time strategy games. I knew men who had their spouses and children threatened, or had racial or sexual harassment thrown their way, because of review scores.
What’s wrong with this? One friend suggested that Kuchera did a non-sexist thing here by calling out specific experiences of both men and women online, but that’s exactly the problem I have with it: it shows us the experience of some women over here, and some men over there, and how they’re more or less the same. But we know that they’re not the same, because women—particularly those who emit even the faintest signal of feminism—face harassment online earlier, more often, more harshly, and in a more gendered way than men do. By comparison, while men with a high profile online inevitably catch abuse, and many men get it occasionally, men’s experience with online harassment is markedly less common and less brutal.
To say “this has nothing to do with gender” denies that difference. And I get that Kuchera meant that every creative person, male and female, who interacts with their audience online will face some abuse. But still it won’t be equal, because the creative woman who manages to garner an online audience will have it worse at every step of the way. It’s a safe bet, for example, that Anita Sarkeesian received more believable threats of rape and murder on her way to getting 6,000 backers for Tropes v. Women in Video Games than Phil Fish received on his way to selling 200,000 copies of Fez (and Sarkeesian didn’t even have a movie made about her). For women, it’s always about gender.
Beyond the suggestion that men’s and women’s experience of harassment online is equivalent, Kuchera’s denial of the role of gender felt defensive because it was so out of place. The first 2/3 of the story is about a man who gave up his project because of harassment. What about that lede suggests that only women face harassment online? It’s a clarification with nothing to clarify. (Or was Kuchera concerned that his readership would believe from Fish’s story that only men faced harassment?) Given Penny Arcade’s reaction to feminist criticism in the past, it’s easy to read the reference (and maybe the whole piece) as a denial that men and women have qualitatively different experiences online.
About fifty million years ago, I encountered a minor bug in the OpenOffice word processor. It was an easy fix, a menu layout problem or something like that, so I thought I’d have a go at patching it. Of course, the first step would be to build the latest development version of the code and see if the bug was still present.
Well, I got stopped on that step. I spent an entire day trying to build OpenOffice, and didn’t succeed. I don’t think I even came close, though it was hard to tell. I eventually concluded that to be an OpenOffice developer, you’d need to first get a Ph.D. in building OpenOffice, and gave up in frustration. It brought home to me the importance of making software easy for developers to build — especially in open source software, where you depend on developers who bring their own energy and who will quickly take that energy elsewhere if it is not rewarded.
Years later, the OpenOffice project forked — well, the actual story is a bit more complicated than that, but basically today there is LibreOffice and Apache OpenOffice. Both are active open source projects, and it’s fair to think of LibreOffice as one of two equally legitimate inheritors of the old OpenOffice mantle in the sense of development continuity. (Do search://apache openoffice libreoffice ”document foundation”/ for the detailed story.)
I happened to be talking to some of the LibreOffice developers recently, and related my build experience from years ago, and how it had turned me off from ever considering OpenOffice development again, and from even considering LibreOffice development after the fork happened. The whole thing had left me scarred: buildability was such an obvious non-priority then that I didn’t see how a project could possibly ever get from there to something a normal mortal might build in finite time.
Wait, it’s gotten better, they said.
I expressed skepticism, but they swore it was true. Really?, I said. Okay, I’ll start from the top of the LibreOffice.org home page and see if I can find my way to useable build instructions, right now, right here, while we’re on the phone.
And you know what? They were right!
$ sudo apt-get update
$ sudo apt-get build-dep libreoffice
$ git clone git://anongit.freedesktop.org/libreoffice/core libreoffice
$ cd libreoffice
$ ./autogen.sh
$ make dev-install
The whole thing built. Without errors. I had working libreoffice debug binaries in six easy, well-documented steps.
That was amazing — it changed my mind about how much a project can improve its build experience if the developers really decide to prioritize it. (Disclaimer: I haven’t tried the same with Apache OpenOffice; it might well be equally easy.)
They asked me if as penance I’d fix another minor bug, since I wasn’t able to fix that menu bug all those years ago, and offered bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1141106 as the victim. This seemed like a completely fair request; I didn’t make any promises but I said I’d take a look. Sadly, I have to admit that I’m not going to fix it any time soon, only due to other commitments. It’s not a hard fix in theory, but verifying that it works everywhere could take some back-and-forth with various bug reporters and testers, since it’s a modification to run-time shell scripts, and right now I need to ruthlessly cut down on small-scale random commitments.
So as an apology for not fixing that bug, I wrote this blog post. Kudos to the LibreOffice team for having given such a complex piece of software such an easy build process. Although by not fixing bug 1141106 I guess I’m contradicting my own claim, still, I think that being so conveniently buildable must be a major ingredient in getting developers in the door, and that this pays off for the project in the long run.
After an additional year of production work, our free-film project "Lunatics!" is back up on Kickstarter. We have a lot more done - some "finished" animation, voice acting and soundtrack mixing, a lot more completed 3D models, including some of the toughest mech modeling, and several characters. We are still 100% free-culture, using CC By-SA license for everything we release, and we're still open-source, making our models and other elements available to the commons. We use only music with By-SA compatible licenses, and we are working entirely with free-software, especially Blender, Kdenlive, and Audacity.
The Kickstarter video starts with our recently completed "teaser" demo video, which is meant to show at least one possible rendering and final animation style for the project (though we're still experimenting). This version is toon-shaded, but lacks outlining -- I'm actually pretty happy with the way that looks. The limited PoV/hyperreal concept for this trailer was originally conceived to minimize the number of 3D assets we'd use (originally it was all PoV and didn't even show the character). However, as the video goes on to show, we actually have quite a few other models, including the Soyuz exterior completed now.
As I outlined in my update on licensing and business models, "Lunatics!" is entirely under the same free CC By-SA 3.0 license as Wikipedia and other bastions of free culture. Unlike several other "free" film projects, we've actually decided to be strict about the music licensing as well -- every piece of music we use is under a By-SA compatible license so that we can release it to you under By-SA.
An important part of our business model -- creator-endorsed post-release sales -- is a concept born right here on QuestionCopyright.org.
We're also part of a growing group of projects relying on and promoting free-software tools like Blender, Kdenlive, Audacity, Inkscape, and Gimp to realize our concept.
For those friends I’m not seeing in Portland this year: sorry to miss you this time!
Going to OSCON is always enjoyable, I always learn new things, and it’s wonderful to catch up with old friends and meet new people… but one can’t do everything everywhere. I wasn’t scheduled to speak this year, and there’s just too much on my plate. So, I decided to skip it this once.
See you next year!
This piece by Bob Ostertag was originally published at On The Commons. We're reprinting it here because it's a great description of exactly how distribution networks are still strongly weighted against free-as-in-freedom. The cost of maintaining the monopoly sidewalk is that freedom can grow only in the cracks — and the increasingly eager auto-detection bots keep "repairing" the cracks, because their masters only see the value of the sidewalk. Bob is an active performing and recording musician, and a long-time friend of QuestionCopyright.org (he was one of our founding Board members). His biography is at the end of this article.
We of course hope Bob makes plenty of money from A Book of Hours on CD Baby — there's no contradiction between freedom-friendly licensing and making money! And yes, we recognize the contradiction between his original Creative Commons Attribution-NonCommercial licensing and the freedom of other artists to pursue the same strategy Bob describes below while incorporating his music in their works. In the usual QCO so-transparent-it's-kind-of-edgy fashion, we'll discuss this with Bob and see if he's open to using truly free licensing while still selling his music on CDBaby and similar sites [Update 2013-08: He was, and we're now working with him to relicense his music under free licenses wherever possible.]. But the outcome of that conversation doesn't affect his message below, which is that right now freedom is much harder work than it needs to be, because the major distribution networks still regard it as a weed, and because the few distributors that prioritize a direct audience-artist financial connection are small and don't have the clout — yet — to change how the sidewalks are maintained.
Update from Bob Ostertag
[This update also comes from the On The Commons article. —QCO Editors] Bob Ostertag’s article (below) about how the music industry makes it increasingly difficult for musicians to share their work online for free got a massive response on On The Commons. Ostertag shares some of the reactions here.
My article seems to have fostered a lot of discussion on OTC, on Facebook, and around the web. Many shared their own experiences with unjustified “take-downs” of their music off the Web. For example, this from Eva Orgidea:
“Last week Youtube added a copyright notice in one of my videos (a mere sound performance I did in a gallery) saying part of the content was owned by Harry Fox agency. I felt so offended, that I disputed the claim and simultaneously deleted the video. I do not want, in any shape or form, to get involved into those huge corporate names… Nevertheless although I removed the video I am aware that if they do not find my dispute legitimate, they will terminate my Youtube account.”
Others wrote in defense of the people at SoundCloud, who they argue are doing their best given the circumstances, and whose dispute resolution system is actually fast and fairly simple. But beyond discussion of which music and video hosting sites are “good guys” or “bad guys,” the bigger point is that there is a strong incentive for “false positives” built into the whole netbot-auto-take-down system. The software that SoundCloud, YouTube, and others across the web now use began as a service used by the big labels to analyze music content directly at CD pressing plants, to prevent unauthorized mass duplication. That system is now applied to everyone. It is strongly in the interest of the big corporate labels to over-detect rather than under-detect. The result is a system in which the interest of the handful of superstars of the world in not missing out on a penny of their millions in royalties trumps the interest of the vast majority of musicians in getting their music heard.
On YouTube, artists the netbots have identified as violating copyright may now be offered a choice of taking down their video or allowing the alleged copyright holder to advertise on the page. The incentive for false positives here is even stronger, since the more false-positive takedown notices get sent, the more free advertising is muscled from people they have no relationship to: legal, artistic, or otherwise.
Finally, there is one clarification in order concerning my account of the YouTube takedown of Jacques Sirot’s video, which we ultimately traced to netbots operating on behalf of the Seeland label associated with the group Negativland. My intention was to use that story to illustrate how easy it is for copyright-policing-by-netbot to get out of hand, not to question the integrity of Seeland or Negativland. I consider Negativland to be innovative artists, trustworthy collaborators, upstanding individuals, and personal friends. When we finally figured out what had happened, the Negativland people were as horrified as I was and acted immediately to resolve the situation.
Why I No Longer Give Away My Music
How the digital music biz makes it difficult for musicians to offer free downloads
In 2006 I gave my music away. That music had previously existed on CDs and LPs (yes, I began making music in the days of vinyl and tape). I moved all of it to the Web, downloadable for free.
Today, seven years later, I see that giving away music for free is not as easy as I had imagined. In some ways, it turns out to be impossible. The reasons why this is so say a lot about creativity, property, and power in a networked world of corporately owned digital commons policed by netbots and stochastic algorithms.
My music is now available under a Creative Commons “Attribution-Non Commercial 2.5 License,” which allows anyone to download it, copy it, remix it, slice and dice it, and so on. They just can’t sell it, or profit from it. If they incorporate it into music of their own, they should note that they did so, and since they used my music as their source material for free, they should not charge for their music either.
But that’s all just words. In the real world, I have no resources with which to enforce those conditions. And as we shall see, the problems I have encountered in this endeavor have been entirely in another direction.
Deciding to do this was the easy part, since the “record business” never worked for me in the first place. This was no big surprise, as the record business never worked for most musicians. What is surprising is how many musicians seem either to not know this or to have forgotten it.
The whole structure of the industry put corporate interests first, musician interests a distant second. Actually, this is not quite true. The biggest stars get taken care of pretty well. Lady Gaga should have no complaints. But many people would be shocked to know how many bands whose names they know and CDs they bought never saw any money from those sales. For musicians like myself making “non-commercial” music which does not fit easily into any genre or marketable category, the situation was hopeless from the start.
My income comes from concerts, not recordings (I have performed internationally since 1978). For most of my audience, before the Internet came along just finding my recordings was a major undertaking. Concerts in various parts of the world were often attended by people who travelled long distances to get to the show, hoping to find some recording for sale which they had heard about but were never able to find.
Enter the Internet. Suddenly, world-wide “distribution” of audio recordings – which had formerly required an infrastructure of pressing plants, trucks, ships, planes, warehouses, retail shops, accountants, lawyers, and more – became instantly available to everyone at the push of a button.
Who needs the “record business”? What was a difficult, tentative, and ultimately impossible decision for big name groups like Radiohead was a no-brainer for me. I wrote an essay called The Professional Suicide of a Recording Musician that was widely read and commented upon. I was invited on to the board of a non-profit called Question Copyright, which is all about trying to create a real digital commons.
The first thing that happened after ‘freeing’ my music was that people began to access it in far greater numbers than previously possible. My first release to bypass the CD stage and go directly to the Internet for free download, w00t, was downloaded about 40,000 times. And the total downloads of all my recordings have gone well over 100,000. (Since numerous sites now offer my music for downloading, I do not have an accurate total).
But I have learned that “accessing” music and actually listening to it are two different things. Free downloading has created a kind of collector or hoarder who is unique to the digital age. In my university classes, I query my students about their downloading habits, and everyone who is deeply into music has figured out how to download music for free, despite the best efforts of the record business to stop them, and has far, far more music downloaded to their laptops and iPods than they will ever have time to listen to in their entire lives. Gigabytes and gigabytes of meaningless data. These same students invariably report that they have actually listened to all the music they paid for.
If a virtual tree falls in a virtual forest and no one opens the file, does it still make a sound? This is a real conundrum. If by “commons” we mean, say, communally owned pastures in England, we are talking about finite resources that were valued as such and cared for accordingly by the surrounding community. But if by “commons” we mean a vast expanse of server farms that seems capable of expanding without meaningful limit, then we are speaking of something very different. Have I cheapened my music by not monetizing its recorded artifact?
For most people for whom new music is an important part of their lives, however, the most relevant commons has become iTunes, Spotify, Pandora and so on – Web sites that allow the user to begin from their favorite music and then link outwards to music that has been somehow identified as similar. College kids and fanatical collectors might work late into the night figuring out how to get their files for free, but for most people, the sites listed above are the main way they discover new music. And these sites do not accept music that is free. They are all about making money. By giving away my music for free, I seem to have shut myself out of the new “commons”.
The Mysterious Case of the Missing Copyright
Jacques Sirot is an independent French artist and film-maker. He used my music as the soundtrack for one of his recent films, as I have made clear he (and everyone else) is free to do. Making sure to dot every i and cross every t, when he posted his film on YouTube he noted: This Creative Commons film uses Bob Ostertag’s music, Say No More, which is distributed with a Creative Commons license; its usage has moreover been personally agreed by the musician.
Yet soon after the film was posted, it was blocked for copyright violation, with a notice that “it may have content that is owned or licensed by IODA [Independent Online Distribution Alliance].” Jacques appealed:
“This video contains elements protected by rights of the author in question, but with the appropriate license or written authorization of the holder of those rights. Bob Ostertag was notified of this use of his music (which he distributes via “Creative Commons”) and granted his authorization. I believe in good faith that the claims described above are not valid, and that I hold the necessary rights to the contents of my video, for reasons cited. I have not knowingly made a false declaration and am not voluntarily using this contestation procedure in an abusive manner to undermine third party rights. I understand that the forwarding of fraudulent contestations can lead to the closure of my YouTube account.”
He received the following reply:
Dear Jacques Sirot,
IODA has reviewed your dispute and reinstated its copyright claim on your video, “TSUNAMI”. For more information, please visit your Copyright Notice page.
The YouTube Team
Working with scholar Sally-Jane Norman, Jacques spent considerable time researching the matter, and eventually contacted me, and I spent hours more looking into it. Finally I figured out what had happened.
Years back, I released some CDs on Seeland, a label run by the notorious media guerrilla group Negativland. Negativland was famously sued for a parody of a song by U2, which made them into icons of free expression and resistance to absurd claims about the reach of copyright. I had left their label years ago when I put my music under the free Creative Commons license. As is often the case with tiny, underfunded labels, there had been a disagreement about accounting, with the Negativland people arguing I owed them money for unsold CDs that were returned by stores. Just the sort of thing that led me to give up on small labels and give away my music. Well, it turned out that, without informing me, Seeland had continued to collect royalties on that music in an effort to recoup what they claimed to be their losses. Through a byzantine circuit of contracts and enforcements, the banishment of Jacques Sirot’s video from YouTube for copyright violation, for using my music which I had given him and everyone else explicit permission to use, was the result of a secret account collecting royalties on my music operated by a label that had built its reputation on resistance to overblown copyright claims!
Kanye West vs. Etienne Noreau-Hebert
There are some Web sites, like YouTube, SoundCloud and BandCamp, which are set up to allow free music and video sharing. But even these are problematic. They are policed by “netbots,” software algorithms that constantly search for sounds allegedly owned by someone or other. My friend Etienne Noreau-Hebert recently uploaded a new work to SoundCloud, to share with others for free, and received back the following reply:
Our automatic content protection system has detected that your sound “121223-Muhamarra-v0.3” may contain the following copyright content: “Love Lockdown (as made famous by Kanye West)” by Future Hit Makers Of America, owned by Big Eye Music. As a result, its publication on your profile has been blocked.
Kanye West, of course, is a major figure in the world of corporate hip hop, with megahit records, movies, a fashion line, and more than 30 million paid digital downloads of his songs. Etienne is an unknown musician making abstract electronic music he would like to share with others for free. There is nothing in his music that sounds even remotely like Kanye West. But some netbot has judged that Etienne has infringed on Kanye’s rights, and so Etienne’s composition is banned from SoundCloud. Perhaps there is someone at SoundCloud to whom Etienne could appeal, if he dug through their web site, sent the emails, waited through various levels of phone robots, etc. Perhaps not. But Etienne is giving away his music for free. Where is he going to get all that time? Or rather, he is trying unsuccessfully to give away his music for free.
Little guys like Etienne are not the only victims of netbot police. The YouTube live stream of Michelle Obama’s speech during the last Democratic Convention was suddenly shut down midsentence by YouTube’s “preemptive content filters,” leaving viewers staring at a black screen with text informing them that: This video contains content from WMG, SME, Associated Press (AP), UMG, Dow Jones, New York Times Digital, The Harry Fox Agency, Inc. (HFA), Warner Chappel, UMPG Publishing and EMI Music Publishing, one or more of whom have blocked it in your country on copyright grounds.
If you want to know who rules the roost on the Internet, that list would be a good place to start. A live webcast of the Hugo Awards for science fiction writers was blocked when a netbot ruled that the stream was showing copyrighted film clips. This was true, but the Hugo Awards had secured permission to use them. No one told the netbots. YouTube’s “preemptive content filters” repeatedly blocked footage from NASA’s Curiosity rover Mars landing, even though the images are in the public domain.
Music for Almost Free
My newest work is A Book of Hours, featuring the extraordinary vocal talents of Shelly Hirsch, Phil Minton, and Theo Bleckman, as well as saxophone legend Roscoe Mitchell. I have decided not to give this one away, but to use a relatively new service with the unlikely name of CDBaby. CDBaby will host the files for download on their site, with all the now-standard ability to share, comment, and so on. But more importantly, they will place the music on iTunes, Pandora, Spotify, and so on. I have to pay them for this service, and they will not accept my work unless I charge for it. I have chosen a very low amount: $1.99 for nearly an hour of music. All my previous works will remain available on my web site for free.
In a way this feels like a retreat from the across-the-board music-for-free stance I have taken for the last seven years. But really I am just trying to keep my head above water in the digital deluge.
Bob Ostertag’s newest work, A Book of Hours, can be purchased here at CD Baby
About Bob Ostertag
Composer, performer, historian, instrument builder, journalist, activist, kayak instructor, Bob Ostertag has published 21 CDs of music, two movies, two DVDs, and three books. His writings on contemporary politics have been published on every continent and in many languages. Electronic instruments of his own design are at the cutting edge of both music and video performance technology. He has performed at music, film, and multi-media festivals around the globe.
His radically diverse collaborators include the Kronos Quartet, avant-gardist John Zorn, heavy metal star Mike Patton, jazz great Anthony Braxton, dyke punk rocker Lynn Breedlove, drag diva Justin Bond, Quebecois film maker Pierre Hébert, and others. He is currently Professor of Technocultural Studies and Music at the University of California at Davis. For more information, BobOstertag.com
This week we have looked at the three main elements of the NSA’s surveillance system: Bulk data collection and the construction of an index for all communications in the country, use of private companies to store and process the content of our domestic data, and partnerships with other government agencies at home and abroad. We have examined all of these elements so that we can try to judge the NSA’s surveillance system based on how it is constructed rather than by the motives and ideals of those currently using it. Now that we have examined the components, it is time to look at the bigger picture.
Technology of Power
Wholesale collection of data, use of private companies as data refineries, and partnerships of mutual convenience with other government surveillance agencies. Those are the functional components of the NSA system, the bits of code out of which it is built. What does that tell us about the system as a whole? We know that tapping into fiber optic lines naturally leads to wholesale data collection. We know that during wholesale data collection it is difficult or impossible to tell just whose data is being collected. We know that possessing all of the data turns what were once external checks and balances, like the prohibition on the NSA collecting US citizen data, into matters of self-policing and internal procedure design. We also know that, having had all of this for a decade, the NSA has sought to increase how much data on US citizens they can search, radically increase how long they can keep data, and expand partnerships with groups that can volunteer information for the system free of any regulations. Now that we know that, we can ask the real question: is this going to be the kind of system we use to police democratic societies for the rest of our lives?
Before you decide, take a minute and watch this talk. The speaker, Malte Spitz, is a member of the German Parliament and used the German freedom of information laws to get a copy of all the “metadata” that his phone company stored about him. You can watch six months of his life reconstructed on that video. Everywhere he went, everyone he talked to, and all the groups he spoke with are captured in that metadata. There is power in being able to reconstruct someone’s life like that. Being able to reconstruct everyone’s lives at once is not just powerful, it is the kind of technology that could keep a government in power. Whether the NSA system was built to chase down terrorists or to disrupt political dissent does not matter. The power of the system matters and how much power we are comfortable giving to the secret operators of such a system matters.
In our names the US government is building a new kind of surveillance system, one that upends all the laws meant to regulate such activity and that is tied directly into the internet connections that will be the primary communication infrastructure for the rest of our lives. We have perhaps the best opportunity we will ever get to examine the actions taken in our names and set new rules for how a democratic society governs itself in this area. Our deliberations and decisions will have wide ranging repercussions. As the price of technology continues to fall there will be many others capable of building similar systems and the choices we make now will set the standard of behavior when that happens.
If we push back and we decide that this kind of monitoring is incompatible with a democratic society, our position as the central hub of the global internet means that we can hold that line for the next generation. If we move in the other direction and commit the center of the network to constant monitoring and recording, what will we say when those same tools are used to prop up the next “Axis of Evil” or suppress the next Arab Spring?
In the technology community “code is law” is said as a reminder that our technologies are governed not by our intentions but by the way they are put together. It is also sometimes said with a note of hope because, while code may be law, we write the code. We determine how our technology is built. It can be hard and it can be complicated, but we need to do it because, if we don’t do it right, someone else will do it wrong.
Every Fourth of July, the New York Times prints the entire Declaration of Independence of the United States on the back page of its main section, in facsimile and in text. I read the whole thing on the subway this morning, just to remind myself what they were thinking of.
I’m pretty sure they were not thinking of a country where the government classifies the extent and nature of its surveillance, and even lies about it when citizens and their representatives ask. The distinction between discussing the overall process of surveillance and discussing individual targets of surveillance is crucial. Edward Snowden informed us about the process; he has been careful not to leak the targets (unless you count revelations of a very general nature, such as that we spy on the governments of our allies). No terrorist knows more today about whether they’re being watched than they did before Snowden’s leaks. Anyone trying to blow something up would naturally assume, and behave as if, they were under surveillance already.
Can anyone point to any real harm to national security from Snowden’s leaks? I have yet to hear of any. The leaks merely informed the citizenry of what we should have been informed of all along. It’s not a question of whether the government should sometimes be able to eavesdrop, or about whether there is rigorous enough judicial review or oversight. It’s that whatever we’re going to do, the policies about when and how we do it are legitimate matters of public debate — and we can’t debate them if we don’t know them. This is about civilian control over the military and intelligence services. Snowden himself said this eloquently enough, as have many others, so I won’t belabor the point.
But there is one slightly different argument I’d like to respond to:
Some people say that, even if in some abstract sense it is right that this information should come out and be debated, Snowden was wrong to leak it because in doing so he violated his oath to guard the secrets he had been entrusted with.
But he had a conflict of oaths: on the one hand, he and those around him were sworn to uphold the Constitution; on the other hand, he’d made a promise to keep secrets secret. What is the right thing to do when you promise to keep a secret, and then the secret they tell you is that some people aren’t keeping their promises?
For those who still don’t feel that conflict as Snowden felt it, on this July Fourth I’d like to point out that George Washington was an officer of the British militia in the American colonies. Well before the American revolution of 1776, he led a military force acting on behalf of the British crown, defending first part and then all of the Virginia colony’s borders. I don’t know enough about colonial militias to know if holding those positions required swearing an oath of loyalty, but it seems likely that it did (the new United States Army itself instituted an oath of allegiance fairly early on during the Revolutionary War). In any case there is at least some conflict in serving in a country’s militia and then leading an army against that same country’s army. But by their nature revolutions involve broken promises. You can read the Declaration of Independence as one long justification for when and why they should be broken (seriously, take a look).
Oaths sometimes conflict with each other, and you don’t always find out how until it’s too late. Then you have to decide what to do. Edward Snowden did the right thing in a difficult situation, and the debate that has ensued is evidence of this.
(It’s interesting that people who fret that Snowden broke his oath don’t seem to get as worked up when people get divorced and thus, in many cases, break their marriage vows. Marriage isn’t about national security… but then again, neither were Snowden’s leaks.)
If you agree, please say so, preferably in public — on your blog, if you have one, or on Facebook, or on Twitter, or on the bumper of your car, or on the back of your laptop. It’s important. There are a lot of people right now, especially politicians who are worried either about being attacked on national security or about losing the trust of the intelligence community, who feel they have to condemn what Snowden did. In some cases they’re sincere; in other cases they sense which way the wind is blowing in their particular environment and they say what they need to say to keep their position. I don’t even blame them, but it’s important that they not be the only voices out there. Say you’re glad to have the information that Snowden leaked. Explain clearly why it’s important that the public be able to talk about these things. Don’t let anyone feel they’re alone in thinking this, and you won’t be alone either.
Happy Fourth of July.
So far this week we have looked at two of the three main components of the NSA’s surveillance system: how the NSA collects raw data from fiber optic cables and uses it to build an index of “metadata” that maps nearly all communications in the country going back to 2001, and how they enlist private companies as data distilleries to hold and process the contents of our domestic data. Today we will finish our tour of the functional elements of the NSA system by looking at how government agencies at home and abroad partner with the NSA, skirting all effective data protection regulation as a result.
Sharing is caring
The NSA is a single government agency. It may be the "largest, most covert, and potentially most intrusive intelligence agency ever" and it may sit at the center of the global communications network, but it is still just one agency and it has limits. They are still nominally prohibited from directly targeting US citizens, which is the only factor limiting which domestic fiber optic cables they can tap with prism splitters. They also lack domestic access to the 7.25% of global internet traffic that never passes through the US in transit. The essential allies for overcoming these obstacles are other government agencies, both at home and abroad.
At home the NSA cooperates directly with numerous government agencies, most importantly the CIA, the FBI, and the little known National Counter Terrorism Center (NCTC). In addition to sharing expertise, connections, and personnel, when these agencies work together they also benefit by skirting laws designed to control just where each of them can operate. The NSA’s intelligence gathering is limited by law to foreign communications. In order to collect and store the phone records of purely domestic phone calls, as we can now confirm they are doing, someone other than the NSA must do the collection. In the case of phone records, the FBI is the one actually requesting records from the phone companies. The same is true of PRISM requests for internet communications. In all cases the NSA is the one who stores and analyzes the data; the intermediary agencies are used as legal cover. The reason for this game of digital hot potato is that data lawfully obtained by one part of the government becomes fair game for other parts of the government to search. So, once the FBI has obtained everyone’s phone records, the NSA no longer feels that the legal prohibitions on collecting data about US citizens apply.
Making it easier for different government agencies to exchange information was one of the main reasons for creating the NCTC in 2003. Initially this information sharing was limited: information about US citizens who were not suspected of any crime could be included but could not be kept for longer than 180 days. Then, in a press release last March, the Attorney General extended that limit from 180 days to five full years. Perhaps unsurprisingly, this is the same length of time the NSA keeps such data on citizens. This one government partnership alone is a significant expansion of the NSA’s surveillance system. The NCTC brings access to all Federal databases, including flight records, financial forms submitted by people seeking federally backed mortgages, the health records of people who sought treatment at Veterans Administration hospitals, and many others. The only restriction on which databases the NCTC may keep is that they must be “reasonably believed” to contain “terrorism information.” With databases this large it seems reasonable to believe they contain everything.
When foreign governments cooperate in surveillance, even these trivial restrictions fade away. Just as we place no restrictions on what the NSA may do with information about non-US citizens, other governments place no restrictions on what their spy agencies can do with information about US citizens. In theory, then, two nations could spy on each other and then exchange information, much like strangers on a train. By accident or by design, this is roughly what happens with the British intelligence agency GCHQ, whom we help access more than 200 fiber optic cables. In return we gain access to the processed metadata they collect. Any data we wish to share with them can be passed through the NCTC. The only difference between our two programs is how long we each keep data: while we keep information on our citizens for up to five years, the UK government stores information on their subjects for a maximum of 30 days.
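To see why this swap defeats domestic surveillance law so neatly, here is a toy sketch in Python. The archive contents and the lookup function are invented for illustration only; nothing here is drawn from the leaked documents.

    # A toy model of the reciprocal arrangement described above.
    # Each agency's archive holds records only about the OTHER
    # country's citizens, so storing them triggers no domestic
    # surveillance law. All entries below are invented.
    archives = {
        "NSA":  {"uk_subject_42": "call and email metadata"},
        "GCHQ": {"us_citizen_17": "call and email metadata"},
    }

    def lookup(partner_agency, citizen):
        """An agency that may not hold records on its own citizens
        simply asks the partner that already holds them."""
        return archives[partner_agency].get(citizen)

    # The NSA learning about a US citizen without ever "collecting"
    # the data itself:
    print(lookup("GCHQ", "us_citizen_17"))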
Tomorrow we will put all these pieces into context and draw some conclusions about what these components mean for the surveillance system as a whole: Part 4 – The End.
Update July 8: Over the weekend we learned more details about the GCHQ cable tapping and now have information about how Australia and other close international partners operate their own monitoring stations. The geographical diversity of these partner nations means that nearly all of the undersea fiber optic cables that tie the internet together are open to unregulated monitoring by one of our partners. As other nations build communication storage capacities to match our own, it will become legally and architecturally possible for this small group of democratic governments to access complete records of all internet communications. As long as nations only store information about each other’s citizens, no domestic surveillance laws will be triggered. As long as the records are complete, each nation will know that any information about their own citizens they wish to access at a later date can simply be requested from a partner.
Yesterday we looked at how the NSA collects raw data from fiber optic cables and uses that to build an index of “metadata” that maps nearly all communications in the country going back to 2001. Today we take a look at the second component of that system: using private companies to store and process the contents of our data.
Distilling Our Data
By tapping into our nation’s fiber optic cables the NSA has built what is likely the largest data collection tool in the world. It is enough to make the Stasi jealous. Processing all this data is an immense task, and no doubt one reason they are building the world’s largest computer. Until that comes online, the NSA relies on an older method they call “contact chaining” to search through everything they collect. Contact chaining starts with a single person: you look through the NSA index of communications to identify every person they have phoned or emailed. From there you search each of those newly identified contacts to see who they have phoned or emailed, proceeding out however many degrees of separation you wish until, we can assume, you invariably end up searching through Kevin Bacon’s address book. If this contact chain includes someone the NSA is interested in, one of the FISA judges instructs that person’s email, social network, and other online account providers to turn over all the information they have about the individual. This collaboration with our largest technology companies is the PRISM program.
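In software terms, contact chaining is nothing more exotic than a breadth-first walk of a graph. Here is a minimal sketch under that assumption; the index structure and the names in it are hypothetical stand-ins, not anything taken from the leaked documents.

    from collections import deque

    def contact_chain(index, seed, max_hops):
        """Breadth-first walk of a communications index.
        index:    dict mapping each person to the set of people they
                  have phoned or emailed (a stand-in for the metadata
                  index described above)
        seed:     the person the search starts from
        max_hops: how many degrees of separation to follow"""
        seen = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            person, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for contact in index.get(person, set()):
                if contact not in seen:
                    seen.add(contact)
                    frontier.append((contact, hops + 1))
        return seen

    # Everyone within two hops of "alice":
    example_index = {"alice": {"bob"}, "bob": {"carol", "dave"}}
    print(contact_chain(example_index, "alice", max_hops=2))

Note how quickly the walk fans out: with even a modest number of contacts per person, a two or three hop chain sweeps in thousands of people who never communicated with the original target.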
Architecturally, using private companies to store data is a powerful strength of the NSA’s system. Data stored by private companies has almost no legal protection against government search, costs the NSA nothing to store, and is kept essentially forever. Perhaps most importantly, because all these tech companies make their money by studying our activities for advertisers, the data they produce to the NSA has already been tagged, cross-referenced, and refined into useful formats. While this form of “share everything” plan might be objectionable to consumers, and no doubt this accounts for some of the current upset over the NSA’s activities, in the normal course of events the technology companies are not even allowed to disclose whether they have received demands from the FISA court, let alone what data may have been turned over.
Put on a happy face
Access to the data warehouses of Google, Facebook, Microsoft, and others fills a vital role in the NSA surveillance system by turning the organizations we trust with our data into informants against us. While many of these companies may participate in PRISM unwillingly (Yahoo, for example, sued the government in secret court to avoid participation), part of the PRISM program is no doubt designed to improve relations with these companies and accustom them to providing information. Such positive relationships with private companies can be quite productive for the NSA. In 2001 it was voluntary cooperation from network operators that enabled the NSA to install all those fiber optic splitters, which operated for four months before the panel of judges charged with overseeing NSA surveillance was informed of the program.
Good relationships also encourage some companies to go beyond merely complying with demands for data and actively make it easier to access such data about customers, as Sprint did when building a web portal for police that made it so easy to search for the location of individual phones that it was used 8 million times in 2008 alone. We now know that there are more than 80 companies voluntarily cooperating with the NSA, including one major US network operator that is steering data from around the US past the NSA splitters. It is unclear whether the NSA is gathering credit card information from one of these voluntary relationships or through PRISM demands.
Maintaining positive relationships with the companies participating in PRISM also goes a long way toward preventing those technology giants from making changes that would reduce the amount of information the NSA can access. These technology companies are as close as we currently have to a civil society infrastructure for digital communications. If they were firmly opposed to the NSA’s activities, they could do significant damage to the NSA’s capabilities simply by changing their own business practices. When faced with a similarly broad government monitoring program in Sweden, internet providers there decided to stop keeping records of user activity so that there would be no information to turn over. Similarly, our own tech companies could decide to keep less information about us, to encrypt more of it by default, or to make other architectural changes that would reduce the volume of information they are required to transmit to the NSA. The $100 million the NSA spent collecting data from private companies between 2001 and 2006 likely helps prevent those kinds of changes.
Yet, no matter how cozy the relationship or how extensive a private company’s resources, to build a truly global surveillance system you need the cooperation of governments: Part 3.
If you have heard anything about the NSA this month, you have heard grand statements and sweeping generalizations. More than likely you have heard a whole gallery of commentators try to relate the news to ideals like “liberty”, “security”, and “privacy”, as if we could all agree on what those ideals mean. In the technology world we have a saying, “code is law”, to remind everyone that the systems we build are not governed by our ideals; they are governed by the practical way we put them together. What the NSA has built is a tool: a system of technology, personnel, and regulations. To judge this tool based on the ideals of those involved or the reasons for its creation is a job for pundits. Us? We know to look at the code.
Prisms, internet giants, and James Bond.
So, what exactly is the “code” of a national surveillance system? Unpacking the avalanche of NSA information this month, we can see three major components of the system: wholesale collection of raw data, use of private companies as data refineries, and collaboration with other spy agencies, including the British NSA equivalent, GCHQ. These three components determine how the system works, what its limitations are, and what it is capable of; they are its “code”, and because each has important ramifications for the system as a whole we will look at them in turn.
Carbon copying the internet
Of all the NSA programs revealed recently, PRISM has gotten perhaps the most press. We will focus on the specifics of this program in the next section, but it is worth mentioning here for its name alone. Have you ever wondered why they would name a data collection program “Prism”? While the actual reasons are still classified, my guess is that the name is an homage to the NSA’s practice of using actual glass prism-like devices for data collection.
Glass is useful for data collection because most internet traffic that travels any distance is converted into patterns of light and sent over fiber optic cables. If you can tap into a fiber optic cable and install a prism-like device, you can split that light, sending part of it further down the line as intended while sending a duplicate copy somewhere else. We learned back in 2006 that the NSA began installing prism-like “splitter” devices in all the major fiber optic cables in the country, building secret rooms at the nation’s leading phone and internet companies to capture copies of everything flowing over the network.
Notice that this approach is only useful when you want to copy everything going over a cable; you cannot, for instance, have the splitter recognize what information is bound for overseas and what is just moving over to the next town. Once you get down to the actual cables all our communications run through, all our data looks the same. This is fundamentally important because the NSA is legally prohibited from monitoring US citizens but, once you tap into the cables, the only way to make sure that you will end up with the particular data you want is to take all of it and look through it later. While the NSA has varied what portions of this information it keeps, and under what legal authority it claims the right to keep them, those changes are governed by internal decisions at the agency, not by the technology of the system itself.
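A rough software analogy makes the architectural point concrete: the splitter is an unconditional tee, and any rule about whose data to keep can only run afterward, over traffic that has already been copied in full. Everything below is an invented illustration, not actual NSA code.

    def splitter(packets, send_downstream, collector):
        """A fiber splitter in software terms: every packet continues
        on its way unchanged AND a duplicate is retained. Nothing at
        this layer knows or cares whose traffic it is."""
        for packet in packets:
            send_downstream(packet)   # traffic flows on as intended
            collector.append(packet)  # a copy stays behind

    def keep_foreign_only(collected, is_us_person):
        """A 'foreign communications only' rule can only be applied
        here, after the fact, to data already collected in full."""
        return [p for p in collected if not is_us_person(p)]

    # Toy usage: everything is copied first, filtered later.
    collected = []
    splitter(["pkt1", "pkt2"], send_downstream=lambda p: None,
             collector=collected)
    foreign = keep_foreign_only(collected, is_us_person=lambda p: False)

Notice that the restriction lives entirely in the second function, which the agency itself writes, runs, and can quietly change. That is what it means for external checks and balances to become matters of internal procedure design.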
Your Permanent Record
It is impossible to say just how much of this raw data the NSA has kept since 2001. Because there are no legal restrictions on storing information about non-US citizens, the recently disclosed documents pay little attention to the issue. We have learned that in Germany alone the NSA collects half a billion records a month. One possible indication of the scale of the data being stored is the new $2 billion data center the NSA is opening this September: estimates are that it will be able to store all the traffic that moves over the internet for years to come.
For US citizens we know that the NSA collected a nearly complete index of all emails sent between 2001 and 2011, when they halted the program for “operational and resource reasons”. This index includes a record of each email sent, who sent it, and what computer network they were on when sending it. They appear to have collected some form of credit card transaction history, likely a list of purchase times, amounts, and merchants. Similarly, the NSA has been collecting records of all phone calls made on US carriers: what numbers they call, how long they talk, and, potentially, where they call from if they are using mobile phones. This sort of communications history for an individual has historically been called a “pen register”, and government agencies normally need a court order to create one. The NSA argues that it is not governed by these rules because it collects data in bulk and only searches through it later, while the older laws were designed for devices that did both at once. This recording of phone activity is still going on today.
In the press this index of everyone’s activity is referred to as “metadata” because it is information about our communications but not the contents of those communications. Storing the contents of our communications would run afoul of wiretapping laws and would require many times more storage than keeping an index does. Until that new data center goes online, such activity might be operationally difficult for the NSA as well as legally treacherous. Instead, the NSA keeps an index of our communications and, whenever they want to see the contents, they request them from the tech companies that run our email and social networks.
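For a concrete sense of what a single entry in such an index might hold, here is a sketch of a pen register style call record. The field names are guesses for illustration; the actual schemas have not been published.

    from dataclasses import dataclass

    @dataclass
    class CallRecord:
        """One hypothetical row in a phone metadata index. Nothing
        here contains the audio or text of any communication."""
        caller: str      # originating number
        callee: str      # number dialed
        start_time: str  # when the call began
        seconds: int     # how long they talked
        cell_tower: str  # coarse location, if placed from a mobile phone

A handful of fields like these per call, accumulated over years, is all it takes to map where someone goes and who they know, without a single wiretap.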
Tomorrow we will look at the role that private companies play in distilling our data: Part 2.