Bashing Through Tedium

I’ve been meaning to do this for a while, but now that I’m working on lots of identical systems, I’ve finally checked my bashrc into source control. Over the past couple of years I’ve accrued some useful tricks. My number one alias is virc, which I’ve now adapted to scm form:

alias virc='svn update ~/.bashrc; vi ~/.bashrc; source ~/.bashrc; svn ci ~/.bashrc -m "virc bashrc auto-update"'

This is, I think, the secret to making sure that tricky 20-command pipe, or the path you type over and over again, actually gets saved. With virc, I’m four letters away from pasting it in and having it immediately available at the prompt. The scm calls make sure I’m editing the latest version and that my changes get checked right back in.
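
If the chained alias ever gets unwieldy, the same idea also works as a function, so a failed step stops the chain; this is just a sketch, assuming ~/.bashrc already lives in an svn working copy:

    # function variant of virc: each step only runs if the previous
    # one succeeded (assumes ~/.bashrc is in an svn working copy)
    virc() {
        svn update ~/.bashrc &&
        vi ~/.bashrc &&
        source ~/.bashrc &&
        svn ci ~/.bashrc -m "virc bashrc auto-update"
    }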

A couple bits of laziness philosophy:

  • single letters – I used to have things like cdwww, but my laziness has progressed to new levels, so now I alias single letters for changing folders. Typically they go like this:

    export w='/var/www'
    alias w="cdd $w"

    The cdd is an alias I have for pushd. I also have some letters bound to relative paths: h is bound to htdocs and l to lib, so I can type w to get to my web folder, then h to get to the htdocs or l to get to the lib folder.

  • hostnames – Same premise as above, but for SSHing. I used to have things like sshrf, but now I use whatever’s shorter (ssh[shortcut] or just [hostname]).

  • tl – Simple shortcuts, like this one for tailing logs, are great for chaining together. If you’re doing a lot of tl | grep ‘foo’, you’ll probably save as much time over the years as with that mega-command you create but rarely use.
  • multiple hosts – Here’s what I use to create host-specific branches (I’ll probably need to change to globbing or regex matching; see the sketch after this list):

    if [ `hostname` == "muffins" ]; then
        export svn='file:///var/svn'
    fi

  • multiple files – For stuff that I don’t want to check in, I have a separate file now:

    if [ -f ~/.mybashrc ]; then
        source ~/.mybashrc
    fi

    The multi-file loading can be nested inside the multi-host conditionals to load files for specific hosts.
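
Pulling the pieces above together, here’s a rough sketch of where I’d like this to end up; the paths, hostnames, and log location below are all hypothetical:

    # single-letter folder jumps, built on pushd
    alias cdd='pushd'
    export w='/var/www'
    alias w="cdd $w"
    alias h='cdd htdocs'    # relative jump from wherever you are
    alias l='cdd lib'
    # chainable log tail (log path is made up)
    alias tl='tail -f /var/log/apache2/error_log'

    # host-specific branches via case globbing instead of an exact match
    case `hostname` in
        muffins*)
            export svn='file:///var/svn'
            # per-host extras can nest right inside the host branch
            if [ -f ~/.muffins-bashrc ]; then
                source ~/.muffins-bashrc
            fi
            ;;
    esac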

Unfortunately, I haven’t seen any good shell-tip sharing sites, but one would be interesting. I got sidetracked here, and it’s late, so maybe to be continued.

Audioscrobbler, Getting Better!

Having used it since its inception, I’ve been unsatisfied with how it actually records and presents my listening habits, but I have to admit that the journaling/community features have improved tremendously. I followed a recommended post, was intrigued by its musical-connection term extraction, and posted my Belated Best of 2005 list to try it out.

The connections require manual linking with UBB-style tags (there’s an AJAX lookup on the right), so it takes a certain amount of dedication to type the songs in (smart autocompletion is no doubt the obvious next step), but I have to say that it’s definitely on the right (and similar…) track. Now the test: will like-minded people be able to find and comment on my post, and get me roped into participating more in the Last.fm community? 🙂

Ahh, PCs, How I’ve Forgotten

Yes, my Mozilla leaks memory like a sieve on my Powerbook, but I’ve really been pretty spoiled by using Macs. I just put together a new gaming rig, which, yes, is quite zippy, but it also does weird, funky things, like, oh, corrupting network data. Apparently, using Nvidia’s network and IDE drivers is bad news for system stability and uncorrupted data (oops!).

Source               MD5
Mac (uncorrupt)      9c0b53c07f2e02fbfcbf46a1264a402a
Firefox on Nvidia    2bd3033e581d816e22011e26d5bffa1a
IE on Nvidia         c801b2b558209fb1197318c90eed0f6a
wget on Nvidia       3358c3f7b06f06e287eade0bf26639fd
Firefox on Marvell   9c0b53c07f2e02fbfcbf46a1264a402a

The Marvell LAN port appears unaffected, since it doesn’t use the Nvidia firewall, which corrupts downloads even when turned off. (Also, the Nvidia tools load a local copy of Apache, which wastes around 15MB of memory…)
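
For reference, this is the kind of quick check behind the table above; a minimal sketch assuming a box with md5sum and a known-good hash (the filename is made up):

    # compare a download against a known-good checksum
    # (hash is the uncorrupted value from the table; filename is hypothetical)
    known='9c0b53c07f2e02fbfcbf46a1264a402a'
    got=`md5sum download.zip | cut -d' ' -f1`
    if [ "$got" = "$known" ]; then
        echo "checksum matches"
    else
        echo "corrupted: got $got"
    fi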

Music Stats

The other day Andy mentioned that it didn’t seem like I had many female artists on my iPod, which is accurate, but it got me wondering what my actual mix was. So I ran a couple of random samples of 25 and 100 songs, and it comes out to about 70% male vocals, 15% female vocals, and 15% instrumental, with a slight edge for female vocals over instrumental pieces. That seems backwards to me, but it occurred to me that the instrumental songs (primarily post-rock and various types of electronic music) are longer on average, so their share of actual listening time is higher than their track count suggests (if they average, say, twice the length of a vocal track, that 15% of tracks works out to roughly a quarter of listening time).

It also occurred to me, while manually counting, that this type of structured tagging and listening-pattern/music analysis would be insanely addictive and potentially useful, but I don’t think anything out there currently does it, and certainly nothing integrated with your music library.

On Engelbart

Doug Engelbart speaking

Tonight I went to Doug Engelbart’s presentation, dubbed “Raising the Collective IQ” and sponsored by Future Salon. I wasn’t sure exactly what to expect, but I’m glad to report that Doug was quite lucid and the subject matter fairly interesting (if slightly vague towards the end).

The talk began with some insightful anecdotes recounting the early experiences that led him into his research, and then centered primarily on discussing capability infrastructure, mostly based on the last paper he published, in 1992: Toward High-Performance Organizations: A Strategic Role for Groupware.

There were definitely insightful comments on scaling these kinds of infrastructure, and the points on human-system and physiological “component capabilities” have been foremost in my mind recently as I think about hacking process and organizational issues, both in corporate organization (ahem) and in social software contexts (ahem).

These points also touch strongly on two books I’ve read and enjoyed recently: Jeff Hawkins’ On Intelligence, which describes human intelligence in terms of a memory-prediction neurological model, and Douglas Rushkoff’s latest book, Get Back in the Box, which has more than a few great a-ha moments.

Here’s a quote from one of Engelbart’s slides on capability infrastructure:

Consider that human capability (individual or collective) depends upon an integrated infrastructure of component capabilities.

That being said, the talk wasn’t all sunshine and roses. An implication of Engelbart’s talk was that the concepts he has been talking about, the Dynamic Knowledge Repository and his CoDIAK/bootstrap model, haven’t been implemented yet. But when one looks at the tools currently being developed, tools that are obviously serving to augment collective intelligence, it seems to me that that’s not the case. Engelbart continues to talk about first steps and the different models needed to implement DKRs, and seems to dismiss things like the Wikipedia and, well, the Internet.

When you consider all the components that make up the modern online experience: search engines and related portal tools, IM, social networks, community sites, forums, blogs, feed readers, etc., the amount of added mental bandwidth is pretty astounding. Sure, it’s loose and messy and a work in progress, but it seems to be exactly what Engelbart espouses. And like the command line of his AUGMENT system, yes, it has a harsh learning curve, but when you master it, there’s a proportionate payoff.

I’m not sure if this seeming blind spot is an artifact of “worse is better” thinking, or if it’s something else. Apparently, Engelbart has been using AUGMENT every day for the past 40 years. I’m sure that, in comparison, the response time and hypertext capabilities of the Web must be off-putting, but again, worse is better prevails, because the Web was the one thing that AUGMENT wasn’t: open.

I didn’t get to ask this question, but it seems ironic to me, especially in light of Engelbart’s praise of open source early in the talk, that AUGMENT, despite being in continuous use for the past 40 years, has never been released or cloned. Can you imagine the improvements that could have been made if AUGMENT itself had adhered to the CoDIAK model and been collaboratively improved? Instead, we have the web, which did exactly that: a resource that has been incrementally and collaboratively developed over the past decade, and has turned into the world’s largest information repository and communication tool.

Doug Engelbart is a visionary, his work and writing from as far as 40 years back still have incredible relevance, his goals are laudable, and it was a real treat to hear him talk. I guess I’m just puzzled, and a bit disappointed, that he doesn’t see CoDIAK and NICs when they appear (at least to me) to be sitting right there. Maybe it’s one of those paradigmatic issues he talks about.

I’m online!

I finally got my net connection set up at home yesterday morning, and an unfortunate power outage led me to revisit my router setup tonight. It turns out that OpenWRT has been going through a lot of changes over the past year, with the latest being a series of “white russian” releases.

Since I wanted to set up a captive portal anyway, I reinstalled and have been futzing around for the past couple of hours, which, while not horribly productive, has been educational and fun, and, most importantly, has not resulted in me bricking anything.

Right now it’s looking like WifiDog is my best bet for generating a captive portal, although I haven’t looked at how the ACLs are set up. I’m thinking that with the proper firewall (ebtables) and encryption setup, it might be possible to have both an open AP and something somewhat secure, though I might be better off giving up on that or just running a separate AP for the secure side.
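
For what it’s worth, here’s a minimal sketch of the kind of ebtables isolation rules I have in mind; the interface names are guesses and will vary by router and build:

    # drop bridged frames between wireless clients, and from wireless
    # to the wired LAN; routed traffic out the WAN side is unaffected
    # (eth1 = wireless, eth0 = wired LAN; both names are hypothetical)
    ebtables -A FORWARD -i eth1 -o eth1 -j DROP
    ebtables -A FORWARD -i eth1 -o eth0 -j DROP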

Worth Posting

The MyWeb integration convinced me to switch my homepage to Yahoo! Search, but my Firefox toolbar still defaults to Google, and I have to admit, I still usually like the latter’s results better. So I thought it’d be worth noting a search where Yahoo! definitively did better:

What’s interesting to note, as with my MyWeb experience, is that it wasn’t the pure algorithmic results that did better, but the Yahoo! Shortcut that answered my question. I love these sorts of things (Google pioneered search-as-cmdline and still parses some things better), but I think the YubNub-like Shortcuts are just getting started in terms of fulfilling their potential. My idea of the new hotness: linking your shortcuts into your MyWeb community and then auto-promoting useful shortcuts based on a simple Ajax ‘was this helpful’ feedback mechanism.

End of the Year, Beginning of the Year

It’s been a hectic few months, so I’m spending this New Year’s Eve wrapping things up and getting a head start on others. I just finished making some last-minute year-end contributions (EFF, FSF, Wikipedia, Media Matters, and the Red Cross, among others), and I’ve spent the past couple of days cleaning and reorganizing my OmniOutliner todo list. Next I’m hoping to migrate my notes files into the appropriate wikis (I’ve decided that this is the year I move all my permanent online reference/storage off of single machines and onto something theoretically more stable and easier to back up). I’ve also been ignoring this site recently, but as I get settled in (I’ve finally got my [fingers-crossed] permanent residence up north), I’m hoping to have the time and energy to try something new here.

Here’s to 2006.