I finally set aside some time to read my old boss's open letter responding to criticisms of the FDL process. I gladly read his discussion of the responsibilities of software freedom license stewardship.
Late last week, the FTP Masters of Debian — who, absent a vote of the Debian developers, make all licensing decisions — posted their ruling that AGPLv3 is DFSG-Free. I was glad to see this issue was finally resolved after months of confusion; the AGPLv3 is now approved by all known FLOSS licensing ruling bodies (FSF, OSI, and Debian).
Our Twitter Kokoda Updates
- tregeagle: CORRECTION we all had injuries EXCEPT DI. Di is a hard ass.
- diemma: Thank you 4 ur support over the weekend. We r in the car on our way home. It has been amazing …
I had yet to mention in my blog that I now co-host a podcast at SFLC. As we launched the podcast last week, I found myself in a classic hacker situation: one project demanding that I write code for a tangentially related project.
Specifically, we needed a way to easily publish show notes and otherwise make the podcast available on the website and in RSS feeds. Fortunately, we already had a few applications we'd written using Django. I looked briefly at django-podcast, but the interface was a bit complicated, and I didn't like its (over)use of templates to do most of the RSS feed generation.
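The approach I prefer is building the feed in code rather than in templates. Stripped of Django specifics, that idea can be sketched with only the standard library; the function name and episode data below are hypothetical, not taken from django-podcast:

```python
from xml.etree import ElementTree as ET

def build_rss(title, link, episodes):
    """Build a minimal RSS 2.0 feed in code, rather than in templates."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        ET.SubElement(item, "link").text = ep["url"]
        # RSS podcast clients find the audio via the enclosure element
        enclosure = ET.SubElement(item, "enclosure")
        enclosure.set("url", ep["audio_url"])
        enclosure.set("type", "audio/ogg")
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Example Podcast", "https://example.org/podcast",
                 [{"title": "Episode 1", "url": "https://example.org/1",
                   "audio_url": "https://example.org/1.ogg"}])
```

Generating the XML programmatically keeps the feed structure valid by construction, which is exactly what template-heavy approaches make fragile.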
Since the release of GPLv3, technology pundits have been opining about how adoption is unlikely, usually citing Linux's still-GPLv2 status as (often their only) example. Even though I'm a pro-GPLv3 (and, specifically, pro-AGPLv3) advocate, I have never been troubled by slow adoption, as long as it remained on a linear upswing from release day onward (which it has).
Expecting only linear growth is a simple proposition, really. Free, Libre and Open Source Software (FLOSS) projects do not always have the most perfectly organized copyright inventories, nor is the project's licensing policy the daily, primary focus of its developers. Indeed, most developers have traditionally seen a licensing decision as something you think about once and never revisit!
Today is International Software Freedom Day. I plan to spend the whole day writing as much Free Software as I can get done. I have read about lots of educational events teaching people how to use and install Free Software, and those sound great. I am glad to read stories about how well the day is being spent by many, and I can only hope to have contributed as much as people who spend the day, for example, teaching kids to use GNU/Linux.
What troubles me, though, is that some events today are sponsored by companies that produce proprietary software. I notice that even the official Software Freedom Day site lists various proprietary (or semi-proprietary) software companies as sponsors. Indeed, I declined an invitation to an event sponsored and hosted by a proprietary software company.
So often, a particular strategy becomes dogma. Copyleft licensing constantly allures us in this manner. Every long-term software freedom advocate I have ever known — myself included — has spent periods of time slipping on the comfortable shoes of belief that copyleft is the central catalyst for software freedom.
Copyleft indeed remains a successful strategy in maximizing software freedom because it backs up a community consensus on software sharing with the protection of the law. However, most people do not comply with the GPL merely because they fear the consequences of copyright infringement. Rather, they comply for altruistic reasons: because it advances their own freedom and the freedom of the people around them.
Twenty-five years ago this month, I had just gotten my first computer, a Commodore 64, and was learning the very basics (quite literally) of programming. Unfortunately for my education, it would be a full eight years before I'd be permitted to see any source code to a computer program that I didn't write myself. I often look back at those eight years and consider that my most formative years of programming learning were wasted, since I was not permitted to study the programs written by the greatest minds.
Fortunately for all the young programmers to come after me, something else was happening in an office at an MIT building in September 1983 that would make sure everyone would have the freedom to study code, and the freedom to improve it and contribute to the global library of software development knowledge. Richard Stallman announced that he would start the GNU project, a complete operating system that would give all its users freedom.
For ten years, I've been building up a bunch of standard advice on GPL compliance. Usually, I've found myself repeating this advice on the phone, again and again, to another new GPL violator who screwed it all up, just like the last one did. In the hopes that we will not have to keep giving this advice one-at-a-time to each violator, my colleagues and I have finally gotten an opportunity to write out in detail our best advice on the subject.
Somewhere around 2004 or so, I thought that all of the GPL enforcement was going to get easier. After Peter Brown, Eben Moglen, David Turner and I had formalized FSF's GPL Compliance Lab, and Dan Ravicher and I had taught a few CLE classes to lawyers in the field, we believed that the world was getting a clue about GPL compliance. Many people did, of course, and we constantly welcome new groups of well-educated people in the commercial space who comply with the GPL correctly and who interact positively with our community.
There has been much chatter and coverage about last week's court decision regarding the Artistic License. Having spent a decade worrying about the Artistic License, I was surprised and relieved to see this decision.
At the OSCON Google Open Source Update, Chris DiBona reiterated his requirement to see significant adoption before code.google.com will host AGPLv3 projects (his words). I asked him to tell us how tall we in the AGPLv3 community need to be to ride this ride, but unfortunately he reiterated only the bar of “significant adoption”. I am therefore redoubling my efforts to encourage projects to switch to the AGPLv3, and for our community to build a list of AGPLv3'd projects, so that we can convince them.
About two hours ago, Harald Welte received the 2008 Open Source Award entitled the Defender of Rights. (Open Source awards are renamed for each individual who receives them.) This award comes on the heels of the FSF Award for the Advancement of Free Software in March. I am glad that GPL enforcement work is now receiving the recognition it deserves.
When I started doing GPL enforcement work in 1999, and even when, two years later, it became a major center of my work (as it remains today), the violations space was a very lonely place to work. During that early period, I and my team at FSF were the only people actively enforcing the GPL on behalf of the Software Freedom Movement. When Harald started gpl-violations.org in 2004, it was a relief to finally see someone else taking GPL violations as seriously as I and my colleagues at the FSF had been for so many years.
The Network Services committee that I alluded to recently in various interviews is now officially public and named: Autonomo.us. (Thanks to one of the committee members, Evan Prodromou, who donated the domain name.) Autonomo.us is officially endorsed by the FSF.
A company called Control Yourself, led by Evan Prodromou (who serves with me and many others on the FSF-endorsed Freedom for Network Services Committee) yesterday launched a site called identi.ca. It's a microblogging service similar to Twitter, but it is designed to respect the rights and freedoms of its users.
I got a phone call yesterday from someone involved with one of the many socially responsible investment houses. It appears that in some (thus far, small) corners of the socially responsible investment community, they've begun the nascent stages of adding “willingness to contribute to FLOSS” to the consideration map of social responsibility. This is an issue that has plagued me personally for many years, and I was excited to receive the call.
Ian Sullivan showed me an article that he read about eavesdropping on Internet telephony calls. I'm baffled by the obsession with this issue on two fronts. First, I am amazed that people want to hand their phone calls over to yet another proprietary vendor (namely Skype), which uses unpublished, undocumented, non-standard protocols and respects your privacy even less than the traditional PSTN vendors. Second, I don't understand why cryptography experts believe we need to develop complicated new technology to solve this problem in the medium term.
Today [18th June] is Download Day!
The third version of Firefox has been released today.
Firefox is synonymous with security, stability and ease of use. If you want to try it out… it is only a small download, and installing is a cinch.
If not, well that’s fine …
I was amazed to be involved in yet another discussion recently regarding the old debate about the scope of the GPL under copyright law. The debate itself isn't amazing — these debates have happened somewhere every six months, almost on cue, since around 1994 or so. What amazed me this time is that some people in the debate believed that the GPL proponents intend to sneakily pursue an increased scope for copyright law. Those who think that have completely misunderstood the fundamental idea behind the GPL.
I'm disturbed by the notion that some believe the goal of the GPL is to expand copyrightability and the inclusiveness of derivative works. It seems that so many forget (or maybe they never even knew) that copyleft was invented to hack copyright — to turn its typical applications to software inside out. The state of affairs that software is controlled by draconian copyright rules is a lamentable reality; copyleft is merely a tool that diffuses the proprietary copyright weaponry.
Sunday, Di, Emma, Chris and I went up and down the Syndicate Ridge. It was a fantastic workout for our legs and a lovely morning's walk. Chris went all ‘Bush Tucker Man’ on us and distributed lots of odd berries which I nibbled, took …
When I started building our apt-mirror, I ran into a problem: the machine was throttled against ubuntu.com's servers, but I had completed much of the download (which took weeks to get multiple distributions). I really wanted to roll out the solution quickly, particularly because service from the remote servers was worse than ever due to the throttling that the mirroring caused. But, with the mirror incomplete, I couldn't easily make the incomplete repositories available.
The solution was simply to let Apache redirect users on to the real servers if the mirror doesn't have the file. The first order of business for that is to rewrite and redirect URLs when files aren't found. This is a straightforward Apache configuration:
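A minimal sketch of such a fallback, using mod_rewrite (the local mirror path and upstream mirror URL here are hypothetical placeholders, not my actual setup):

```apache
# Requires mod_rewrite enabled (a2enmod rewrite on Debian/Ubuntu)
<Directory /srv/mirror/ubuntu>
    RewriteEngine On
    # If the requested path is neither a file nor a directory in the
    # local mirror, send the client to a real upstream mirror instead.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ http://archive.ubuntu.com/ubuntu/$1 [R=302,L]
</Directory>
```

A temporary (302) redirect is the right choice here: once the mirror finishes downloading, clients should come back to the local copy rather than cache the upstream location.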
Working for a small non-profit, everyone has to wear lots of hats, and one that I have to wear from time to time (since no one else here can) is “sysadmin”. One of the perennial rules of system administration is: you can never give users enough bandwidth. The problem is, they eventually learn how fast your connection to the outside is, and then complain any time a download doesn't run at that speed. Of course, if you have a T1 or better, it's usually the other side that's the problem. So, I look to use our extra bandwidth during off hours to cache large pools of data that are often downloaded. With an organization full of Ubuntu machines, the Ubuntu repositories are an important target for caching.
Suppose you have a domain name, example.org, that has a primary MX host (mail.example.org) that does most of the delivery. However, one of the users, who works at example.com, actually gets delivery of <user@example.org> at work (from the primary MX for example.com, mail.example.com). Of course, a simple .forward or /etc/aliases entry would work, but this would pointlessly push email back and forth between the two mail servers — in some cases, up to three pointless passes before the final destination! That's particularly an issue in today's SPAM-laden world. Here's how to solve this waste of bandwidth using Postfix.
This tutorial assumes you have some reasonable background knowledge of Postfix MTA administration. If you don't, this might go a bit fast for you.
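One way to cut out the extra hops is to rewrite the address once, at example.org's primary MX, so the message goes straight to example.com's MX in a single pass. A sketch, with all hostnames and the map path being the hypothetical ones from the scenario above:

```
# On mail.example.org (primary MX for example.org):

# /etc/postfix/virtual -- rewrite the recipient at SMTP time,
# before any local delivery or .forward processing happens:
user@example.org    user@example.com

# /etc/postfix/main.cf -- enable the lookup table:
#   virtual_alias_maps = hash:/etc/postfix/virtual

# Rebuild the hashed map and reload Postfix:
#   postmap /etc/postfix/virtual
#   postfix reload
```

Because virtual_alias_maps rewrites the address during address resolution, mail.example.org relays the message directly toward example.com's MX (mail.example.com) instead of accepting it, delivering it locally, and re-sending it, which is where the pointless extra passes come from.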
I thought the following might be of use to those of you who are still using Apache 2.0 with LDAP and wish to upgrade to 2.2. I found this basic information scattered around online, but I had to search pretty hard for it. Perhaps presenting it in a more straightforward way will help the next searcher find an answer more quickly. It's probably only of interest if you are using LDAP as your authentication system with an older Apache (e.g., 2.0) and have upgraded to 2.2 on an Ubuntu or Debian system (such as upgrading from dapper to gutsy).
When running dapper on my intranet web server with Apache 2.0.55-4ubuntu2.2, I had something like this:
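For illustration, a typical Apache 2.0 mod_auth_ldap stanza of that sort (the location, LDAP host, and DN here are hypothetical stand-ins, not my actual server's values):

```apache
<Location /intranet>
    AuthType Basic
    AuthName "Intranet"
    # Apache 2.0: mod_auth_ldap directive
    AuthLDAPURL ldap://ldap.example.org/ou=People,dc=example,dc=org?uid
    require valid-user
</Location>
```

Under Apache 2.2, mod_auth_ldap was replaced by mod_authnz_ldap, so a stanza like this additionally needs `AuthBasicProvider ldap` (and, if LDAP is your only authorization source, `AuthzLDAPAuthoritative off`) for the same configuration to keep working.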
Many people don't realize that the GPLv3 process actually began long before the November 2005 announcement. For me and a few others, the GPLv3 process started much earlier. Also, in my view, it didn't actually end until this week, when the FSF released the AGPLv3. Today, I'm particularly proud that stet was the first software released under the terms of that license.
In my previous post about Xen, I talked about how easy Xen is to configure and set up, particularly on Ubuntu and Debian. I'm still grateful that Xen remains easy; however, I've lately had a few Xen-related challenges that needed attention. In particular, I've needed to create some surprisingly messy solutions when using vif-route to route multiple IP numbers on the same network through the dom0 to a domU.
I tend to use vif-route rather than vif-bridge, as I like the control it gives me in the dom0. The dom0 becomes a very traditional packet-forwarding firewall that can decide whether or not to forward packets to each domU host. However, I recently found some deep weirdness in IP routing when I use this approach while needing multiple Ethernet interfaces on the domU. Here's an example:
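To make the setup concrete, here is a hypothetical sketch (the interface names, IP numbers, and config path are mine, invented for illustration) of a domU with two vif-route interfaces and the matching routing in the dom0:

```
# /etc/xen/mydomU.cfg (fragment) -- two interfaces handled by vif-route:
vif = [ 'ip=192.168.0.10,vifname=veth-web',
        'ip=192.168.0.11,vifname=veth-mail' ]

# In the dom0, vif-route then needs a host route per domU IP, roughly:
#   ip route add 192.168.0.10/32 dev veth-web
#   ip route add 192.168.0.11/32 dev veth-mail
# and the dom0 must forward packets for the domU:
#   sysctl -w net.ipv4.ip_forward=1
```

With both IPs on the same network, the dom0's own interface answers ARP for them (proxy ARP), and the per-device host routes decide which virtual interface each packet is forwarded down; that interaction is where the routing weirdness tends to show up.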