Why audiophiles are suckers

Here’s a direct quote from Evan Cornog’s Slate article, Portable Audio for Snobs:

Even using lossless files, none of the players sounded quite as good to me as the same music on CD played on a $50 Discman. But portable audio has to involve trade-offs, after all. Given the limited disk space of all these players, a lossless format is a reasonable compromise between low-quality, small MP3s and uncompressed files. If you value sound quality over convenience, buy a $50 CD player and take the money you saved to buy better headphones and a headphone amp.

Let’s have a laugh and read that again: Given the limited disk space of all these players, a lossless format is a reasonable compromise between low-quality, small MP3s and uncompressed files. So, how are the bits different between an uncompressed file and a losslessly compressed file? (But the quads, man. It needs more quads.)
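
Just to belabor the point, here’s a throwaway sketch (generic zlib compression standing in for an actual audio codec like FLAC or Apple Lossless): a lossless round trip hands you back the original bits exactly.

    import java.io.ByteArrayOutputStream;
    import java.util.Arrays;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class LosslessDemo {
        public static void main(String[] args) throws Exception {
            byte[] original = "pretend these bytes are PCM audio samples".getBytes("US-ASCII");

            // Lossless compression: smaller on disk...
            Deflater deflater = new Deflater();
            deflater.setInput(original);
            deflater.finish();
            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            byte[] buf = new byte[256];
            while (!deflater.finished()) {
                compressed.write(buf, 0, deflater.deflate(buf));
            }

            // ...but decompression restores every last bit
            Inflater inflater = new Inflater();
            inflater.setInput(compressed.toByteArray());
            ByteArrayOutputStream restored = new ByteArrayOutputStream();
            while (!inflater.finished()) {
                restored.write(buf, 0, inflater.inflate(buf));
            }

            // Same bits in, same bits out: that is what "lossless" means
            System.out.println(Arrays.equals(original, restored.toByteArray())); // prints: true
        }
    }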

Computing History

I stopped by Tom Jennings’ lecture on computer hardware at Machine yesterday, which was worthwhile not so much for the description of assembly (sort of muddled) as for the tangents and historical bits.

For example, I had no idea that core memory was manufactured primarily by Filipino seamstresses… (actually, all the early memory types are pretty interesting. Like acoustic delay line memory. Crazy.)

Some things that I’ll have to follow up on myself: assembly programming in relation to advanced features (handling out-of-order execution, pipelining, parallel processing), and the historical development of synchronously clocked architectures.

I’m sure it seemed like a good idea at the time…

You’ll have to excuse the lack of posting or other productivity lately… I’ve been programming Java.

*buh-dum-dum*

This is for work, as I’m not usually so masochistic. Right now we’re crunching to push out a uPortal-based student portal (to replace the stopgap I wrote).

Now before I rant, there are some good things about Java. Mainly that Eclipse is a pretty stupendous IDE. Advanced code insight, library exploration, and a real live debugger are great things (oh, and try out that refactoring functionality – really slick). But then there’s actually writing Java, where the things that make it good in theory don’t hold up in practice.

Oh, it starts innocently enough. Some patterns, interfaces, oh, and factories for everything, and then some property files… my God, the property files… And then all of a sudden, you’re on the other side of the line. You’ve just spent 20 minutes boxing/unboxing primitives, creating new Long objects just so you can increment (all those convenient data structures that require Objects, you know), or you’ve spent another half an hour chasing down wrappers and managers and inherited classes when all you want is to write out a simple parameter (ugh, I don’t even want to start bitching about XSLT right now)… Is this supposed to be easier?
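
To make the boxing gripe concrete, here’s roughly the kind of ceremony I mean, assuming a pre-autoboxing (pre-Java 5) setup where the collections only hold Objects and Long is immutable (the class name and key are made up purely for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class BoxingPain {
        public static void main(String[] args) {
            Map counts = new HashMap(); // pre-generics: it stores Objects, not longs
            String key = "hits";

            // All I want is counts[key]++ ...
            Long boxed = (Long) counts.get(key);                   // cast back out of Object
            long value = (boxed == null) ? 0L : boxed.longValue(); // unbox
            counts.put(key, new Long(value + 1));                  // re-box a brand-new Long

            System.out.println(counts.get(key)); // prints: 1
        }
    }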

Anyway, I can see who David is selling Rails to when he talks about build[ing] real-world applications in less lines of code than other frameworks spend setting up their XML configuration files. (On the other hand, I’ve been running into code scaling problems of my own w/ PHP, especially wrt handling complex front-end code. Obviously I don’t have the answers, but I’ve started playing w/ Rails, and I’m not sure that it’s the answer, especially on the JS integration end).

Well, that wasn’t much of a rant. Too tired to care that much. Some links to tools I’ve been using lately:

  • TestXSLT – I wouldn’t go as far as saying it makes playing around and learning XSLT fun, but it’s definitely the best OS X XSLT tool I’ve found so far
  • skEdit – I’ve been trying this out, decent but not really astounding. I’d rather have a Mac port of TopStyle
  • DbVisualizer – a tad slow, but good for digging around
  • iSQL-Viewer – another useful JDBC DB tool

Media Convergence

Yesterday Tom posted three articles on a really rough proposal for an Apple Media Hub. I’m not up for writing anything long (or coherent) right now, so some random thoughts:

  • I see a much more direct link between a media center (playing/organizing digital media) and both the acquisition/organization of personal data and the cataloguing of physical media
    • automatic population of a Delicious Library type application based on media usage (might as well catalogue as you play, right?)
    • As the media hub will likely be many people’s first file server, easy and automatic syncing and remote backups seem like obvious features, especially if it’s acting as a gateway as well (automatic encrypted backups of my Quicken files would rock)
  • Watching the finger contortions a non-techy friend went through to do what he wanted w/ his PVR and HDTV setup really opened my eyes 1) to how people have taken to the power these new appliances have given them, but also 2) to how much the interfaces suck
    • Macros, etc: so, nothing that can’t be done with super high-end remote controls, but why not go one up and use a simplified OSD interface, something Automator-like
    • Scheduling: and not just limited to media applications — I agree that home automation is definitely a logical next step

Apple introduced its Mac mini today. It looks great. Add a breakout box w/ a nice VFD (I’d like two HDTV tuners, component in/out, TOSLINK, S/PDIF coax, and 6+ channel analog out) and you’d have a perfect base for a true digital hub. Personally, I’m hoping that sooner, rather than later, someone will release a platform w/ the building blocks for better tapping into the opportunities that convergence could provide.

Addendum: I just spotted an interesting post about the mini and automobile computing – this is a perfect illustration of the kind of potential I see w/ convergence. What’s important to note is that general computing doesn’t go away, but rather gets infused into embedded applications. What used to require embedded toil and custom hardware instead moves towards scripting or even higher level development. Tinkertoys ready for the pro-ams.

OmniOutliner 3 Auto-save

One of the most exciting things for me about the new OmniOutliner 3 is its auto-save capability, which is configurable in the preferences (it defaults to saving every 5 minutes).

Unfortunately, I just discovered that rather than backing up into a separate folder, auto-saves are actually committed directly to the outline files. This means that files that haven’t been saved at all (say the 20 untitled outlines I had open) won’t be around after your power mishap.

FYI.

New Year, New Music

Not feeling so hot today, but it’s been a while, so I thought I’d kick off the new year with some new music.

First, for some leftovers from 2004:

OK, now onto 2005.

Tsunami Video Bandwidth Notes

First of all, to help with tsunami relief, Google’s page is a good start. As is Amazon’s Red Cross Disaster Relief page ($3.8M and counting), where I donated. Apple.com also has direct links to relief/support agencies.

I threw in two machines tonight to help out Andy w/ the tsunami videos he’s been hosting. There’s apparently a ginormous demand, mostly coming from Google searches (#2 for tsunami video).

On my EV1 server, I’m averaging 85 Mbps+ on the 100 Mbps interface, and have just upped MaxClients and ServerLimit to 600 (after hitting 500). At this rate (36 GB/hr) I’ll have to take it off the rotation tomorrow. Same with my SM machine, which is averaging about 21 GB/hr. Between the machines on the round robin, I’m guessing that conservatively, we’re averaging at least 100 GB/hr.
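
For anyone checking my math, the conversion from line rate to transfer per hour is straightforward (rough numbers; the per-machine figures above are what the servers actually reported):

    public class BandwidthMath {
        public static void main(String[] args) {
            double megabitsPerSecond = 85.0; // sustained rate on the 100 Mbps interface
            double bytesPerHour = megabitsPerSecond * 1e6 / 8 * 3600;
            System.out.println(bytesPerHour / 1e9 + " GB/hr");         // ~38.3 in decimal GB
            System.out.println(bytesPerHour / (1L << 30) + " GiB/hr"); // ~35.6 in binary GB, close to the ~36 GB/hr observed
        }
    }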

What’s interesting is that this dwarfs the BitTorrent traffic of the videos being served elsewhere. From my earlier Crossfire and Internets Vets experience, I’ve found this ratio to be between about 50:1 and 100:1 when both the torrents and direct links are given equal visual standing.

This isn’t to say that CacheLogic’s traffic numbers are inaccurate (they say BT accounts for 35% of all Internet traffic). I believe it. But the numbers I see from my own experience lead me to believe that the majority of that traffic is still being consumed by a relative minority of users (centered around predictable types of content), and that most people still tend toward direct downloads when available, regardless of performance.

Blog Torrent and Prodigem look like steps in the right direction for simplifying tracking, but I believe the fundamental bottleneck still lies in user adoption. Unfortunately, I don’t think this problem can be solved in a significant fashion until torrents are made transparent either by being integrated/bundled with browsers or available as ActiveX and XPI plugins, but we’ll see. In the near future, I should be able to have a much larger dataset to work against.

[Update: Hosting these tsunami videos was interesting. For the day that I threw in, at no point was transfer below 150 Mbps (about 1 TB of transfer). The bandwidth available was probably over 0.5 Gbps and it was completely saturated. After Andy moved the videos over to Archive.org, the demand took them offline. When you have that much demand, it becomes pretty obvious how to get people to use BitTorrent – make it the only option available.]