Looking Back, Looking Forward

Charlie Stross posted an interesting essay today, Reasons To Be Cheerful, recapping some of the great things that have happened in the world over the past decade, primarily in the developing world. A great read, and honestly inspiring/heartwarming for disheartened humanists. It’s easy to get overly cynical about it all; this is a good antidote.

That being said, I don’t think Charlie goes quite far enough. The essay starts framed by the thesis that things in the world haven’t much improved, and besides a few specific counterpoints about disease and the general march of technology, it feels like he gives up on really repudiating that thesis… for the developed world. And it’s easy to see why: in terms of general socio-economic trends, it’s hard to be all that positive. Things are downright unsettling, heading towards dystopian. However, there’s at least one aspect, the very medium we’re commenting on, that is worth, uh, commenting on.

Yes, the interwebbytubes, as Stross puts it, is quite a different place than it was at the beginning of the millennium. We are looking at a 2X adoption growth in developed nations (from plurality to supermajority, if not ubiquity). Worldwide, 2 billion people are now online. Beyond the quantitative changes, the qualitative changes are even more intriguing. In 2000 there was no Web 2.0. Blogging was in its infancy. Most of the things we take for granted online today were not invented yet. Among them: Wikipedia (2001), Facebook (2004), Google Maps (2005), Twitter (2006). I list these in particular because I don’t think there’s a day that goes by where I don’t use these particular services, but I’m sure that others have their own lists. Lest you think that this was a singular period of growth, I’ll throw in that the iPhone (2007) and iPad (2010) have kicked us into another era of hyper-growth that will be just as (if not more) life-changing.

We’re just starting to see what happens when the Internet starts engaging with us in a location/context aware fashion. We’re also starting to see what happens when Internet-style/scale dynamics are applied outside traditional consumer Internet contexts (e.g. Obama Campaign, 2008). On a historical scale, we’re still at the very beginning stages of figuring out what it means to live in a digital, massively inter-networked world, and similarly just starting to get a handle on how that will change society (attention, communications and collaboration in particular).

All that’s a really long way of saying… well, there’s a pretty dang bright spot in the developed world too. One that has the potential of being turned into the shovel we need to dig ourselves out. So, here’s looking to the future. Happy New Year.

Wikileaks, Net Neutrality, Architectures of Participation

This post is mostly a placeholder/notes for further thinking I’ve yet to do about a few related threads that seem connected this past week. Before, but particularly since, my experience working on the 2008 Obama campaign, I’ve been thinking about the most potentially transformative aspect of the technologies we deployed: specifically, the methods and means we provided for self-directed organization and participation.

In the meantime, some things that have caught my attention:

In regards to the capitulation of Net Neutrality, this thread on building an alternative mesh network. I wonder if it’ll come to that?

On Gitmo and normalization of indefinite detention, davidasposted’s sobering analysis of the situation.

And of course, there is Bruce Sterling’s Wikileaks missive – melodramatic, oversweeping, but truly compelling, and a must read (counterpoint).

Also, Julian Assange’s impressively articulate recent interviews, and more information on Bradley Manning’s continued mistreatment.

Winter Songs, Winter Tour

Just a few tracks, new and old, for the cold and dark.

Owen Pallett released the Export Demo EP for download on Soundcloud last week. (awesome!)

Oh, and thanks to FUELTV’s Green Label Experience for getting a whole bunch of songs stuck in my head.

Gawker Passwords, etc.

I have work deadlines, so I haven’t been able to write a well-constructed post about this; however, a few things:

  • To check if you had a Gawker account (there are 1.25M of them, so you might have one even if you didn’t realize it) I recommend: http://gawkercheck.com/. Note: even if your password wasn’t unhashed, consider it compromised. These passwords are encrypted with DES crypt, which is not adequate to stop attackers. The keyspace is too small. For more info on DES (and probably the best post-mortem so far), see this Forbes blog post.
  • This is as good a time as any to manage your passwords properly. A lot of people (including me) are using 1Password. It’s currently available as part of the MacUpdate December 2010 Software Bundle. LastPass also looks like a good solution and is free ($12/yr for mobile support). PwdHash and KeePass are also options.
  • According to the FAQ, Gawker claims to be sending emails eventually (and some people are doing so as well now). What I did last night, and maybe a good thing to do for your friends if you are an uber-geek, is to go through your friends list, grep through the torrent database, and let them personally know if their account has been compromised, especially if the password has been unhashed.
  • Oh, lastly, if you’re a geek w/ your hash and want to check on whether it’s a reused password or not, you can pretty easily fire up a python shell and see if it matches:
    import crypt
    password = 'your_password'
    hash = 'your_hash'
    salt = hash[0:2]  # DES crypt stores the salt as the first two characters
    crypt.crypt(password, salt) == hash  # True if the password matches

    If you’re not sure though, audit your passwords anyway when you have a spare hour or two. You’ll feel better, trust me.
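To put a number on the “keyspace is too small” point above: DES crypt only looks at the first eight characters of a password, so an exhaustive search is bounded and small by modern standards. A quick back-of-the-envelope sketch (the hash rate below is a hypothetical round number for illustration, not a benchmark):

```python
# DES crypt truncates passwords to 8 characters, so an attacker only
# has to cover strings up to that length.
PRINTABLE = 95   # printable ASCII characters
MAX_LEN = 8

# Every candidate password of length 1..8 over printable ASCII:
keyspace = sum(PRINTABLE ** n for n in range(1, MAX_LEN + 1))
print(f"candidate passwords: {keyspace:.3e}")  # roughly 6.7e15

# Assuming a hypothetical cracker doing 100M DES crypts/second:
rate = 100_000_000
print(f"worst-case exhaustive search: ~{keyspace / rate / 86400:.0f} days")
```

And that’s the worst case for a single rig with no dictionary, no rainbow tables, and no parallelism; real attacks do far better, which is why “too small” is an understatement.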

Learning New Things

Today was an average afternoon – taking way too long to accomplish a seemingly trivial task, but looking up and learning a bunch of new things along the way. It seems there should be a better/easier (almost automatic, transparent) way to track the sources (links/pages), process (things tried) and results (code fragments)…

The basic goal in this case was to automate the execution of some JavaScript on a page. Because executing the script caused a page load, it wasn’t a matter of writing the calls into the console. The faster and easier way would have been to write a Greasemonkey or Chrome Extension script (because there were timing issues, the script would have had to write a time-based state file on actions); however, I figured I would see what kind of options were available with a control-script oriented model, as having that handy would be more generally useful in the future (straight mechanize is less and less useful as JS proliferates).

Before getting started, I had to strip out just the lines that I wanted. I always forget how to do it, but that was a simple vim command lookup.
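Since I’ll forget again anyway: the vim idiom for keeping only matching lines is :v/pattern/d (delete every line not matching), and the same filter works from the shell with grep. The file contents and pattern below are made up for illustration; the real page source obviously differed:

```shell
# Stand-in input file for this example:
printf 'var a = 1;\nsubmitForm(); // keep\nvar b = 2;\nretryLoad(); // keep\n' > page.js

# Keep only the lines we care about:
grep 'keep' page.js > wanted.js
cat wanted.js

# vim equivalent, run inside the buffer:
#   :v/keep/d     (delete all lines NOT matching /keep/)
```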

I looked at Selenium and Selenium RC, which probably would have worked fine (and would probably have saved me time in the end), but I didn’t use them because I didn’t want to install extensions and the RC docs weren’t directly linked.

Instead I decided to try out Watir (cross-platform, cross-browser, and my Ruby is rusty so this was a good excuse). I started out with SafariWatir; however, after a bit of poking, I came up with a dead end on executing JavaScript. There’s a scripter object, but even after getting access to it via monkey-patching (did I mention my Ruby-fu sucks?), I was still getting errors and there wasn’t much help in general.

Instead of slogging through a potentially losing battle, I decided to jump ship to FireWatir. FireWatir uses JSSh, which communicates directly via JavaScript to Firefox, so it seemed like it might be a surer thing. My Firefox profiles were corrupted from my last system transfer, so there was a bit of messing with the profile folder until I gave up and started anew, but after that it seemed like I was home free.

Except that when running js_eval, it turns out the scope JSSh puts you in isn’t the document DOM, but rather the browser’s XUL DOM. For whatever reason, I couldn’t find a reference, even with the direct object type references (i.e. getWindows and getBrowser return ChromeWindow objects, which just don’t seem to have docs; introspection via domDump() or inspect() just returned a huge amount of stuff to go through). Luckily, while searching, one of the results that turned up was a StackOverflow question on firewatir + jQuery which answered the question: ChromeWindow.content gets you into the HTMLDocument DOM. I’m a bit mystified why this isn’t in the firewatir or JSSh docs, as this seems to be one of the most common things that people would want to do, but well, that is the life of the developer…
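For posterity, the working pattern looked roughly like this. This is a sketch from memory: the URL and the exact expression passed to js_eval are illustrative, and it obviously needs Firefox running with the JSSh extension listening (port 9997 by default):

```ruby
require 'rubygems'
require 'firewatir'

# Firefox must already be running with JSSh enabled.
browser = FireWatir::Firefox.new
browser.goto('http://example.com/')  # hypothetical page

# js_eval drops you into the XUL/chrome scope. getWindows()[0] is a
# ChromeWindow; its .content property crosses over into the page's
# HTMLDocument DOM, which is what you usually actually want:
title = browser.js_eval('getWindows()[0].content.document.title')
puts title
```

The same .content hop works for calling page-level functions or poking at jQuery, per the StackOverflow answer mentioned above.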