Summary Mode

While there are public tools like Storify that do an OK job for tweets (and I personally end up using Evernote for most of my private tracking of topics), it’d be useful if there were a good tool for collecting/collating and snapshotting primary sources and making notes/comments on any particular topic (and groups of topics). Useful things would include seen-on and publish dates (à la Zotero etc), and being able to contextualize/parse interesting pieces (Evernote and Clipmarks do a pretty good job with this).

For example, take the controversy of the past few days over the whole PyCon fiasco:

* Summary: How “dongle” jokes got two people fired—and led to DDoS attacks

A series of discussions on Hacker News related to the topic:
* 2013-03-17 Inappropriate comments at pycon 2013 called out
* 2013-03-20 The PyCon Incident
* 2013-03-21 PyCon Code of Conduct changed to avoid public shaming
* 2013-03-21 SendGrid Fires Company Evangelist After Twitter Fracas
* 2013-03-21 Adria Richards, PyCon, and How We All Lost
* 2013-03-21 A Difficult Situation

Performance Comparison of Image Libraries Revisited

A few years ago, I wrote up a brief comparison of various image libraries running a series of operations (image compositing and resizing) that we use for Lensley on an OS X + Python setup.

Just recently, I started doing work with Ruby + RMagick; however, I ran into some basic operations (PNG resizing on a set of images) that were just incredibly slow.

ruby 1.9.3p385 (rvm) + RMagick 2.13.2 (ImageMagick 6.8.0-7 2013-02-19 Q16)
real	9m37.735s
user	4m11.995s
sys	3m16.644s

What’s going on here? Looking more closely, Ruby started out maxing out the CPU but this actually declined as the script ran, and memory steadily climbed, reaching 6GB by the end (which took about 30s after processing just to release). Obviously a GC issue, and sure enough, there was a thread about it. Adding a couple destroy! calls at the end seemed to fix things nicely:

real	3m49.132s
user	3m44.673s
sys	0m3.150s

Now, how did that compare with running convert via Python, I wondered?
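The per-image operation just shells out to ImageMagick’s convert; since envoy is a thin wrapper over subprocess, here’s a rough stdlib-only sketch of the call (paths and geometry are placeholders):

```python
# Roughly what each per-image call does; envoy wraps subprocess, so plain
# subprocess is shown here (paths and geometry are placeholders).
import subprocess

def convert_resize(src, dst, geometry='800x450'):
    # equivalent to: convert <src> -scale <geometry> <dst>
    return subprocess.call(['convert', src, '-scale', geometry, dst])
```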

Python 2.7.3 + envoy + convert (ImageMagick 6.8.0-7 2013-02-19 Q16)
real	4m58.882s
user	4m40.536s
sys	0m15.840s

Seems about right (one interesting thing to note is that each image’s processing was actually shorter than Activity Monitor’s refresh interval, so it never showed maxed CPU usage). Now how about running ImageMagick directly?

time mogrify -path test-seq3 -scale 800x450  test-in/*.png
real	3m17.050s
user	3m14.937s
sys	0m1.879s

OK, and since we’re just doing simple manipulation, let’s see how it does against sips.
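The sips invocation was along these lines (from memory, so treat it as a sketch; note that -z takes height before width, and --out writes to a directory):

```shell
# macOS-only; -z takes height then width
time sips -z 450 800 test-in/*.png --out test-seq-sips
```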

real	0m56.272s
user	0m51.000s
sys	0m5.150s

Well now, that’s a bit embarrassing, isn’t it? Still, one thing with all of these so far was that only a single processor was being maxed out.

I decided to try multiprocessing (this was easier for me in Python) to see how fast I could really process these images. I used multiprocessing + Queue w/ 8 processes for my cores (similar to this example).
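The shape of that script, roughly (the worker shells out to convert exactly as before; the worker count, directory names, and convert arguments are placeholders):

```python
# A sketch of the multiprocessing + Queue approach: 8 worker processes
# pulling file paths off a shared queue until they hit a None sentinel.
import glob
import multiprocessing
import os
import subprocess

NUM_WORKERS = 8  # one per core

def worker(queue):
    """Pull file paths off the queue until the None sentinel arrives."""
    while True:
        path = queue.get()
        if path is None:
            break
        out = os.path.join('test-out', os.path.basename(path))
        subprocess.call(['convert', path, '-scale', '800x450', out])

def main():
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(queue,))
             for _ in range(NUM_WORKERS)]
    for p in procs:
        p.start()
    for path in glob.glob('test-in/*.png'):
        queue.put(path)
    for _ in procs:
        queue.put(None)  # one sentinel per worker
    for p in procs:
        p.join()

if __name__ == '__main__':
    main()
```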

Python MP + envoy + convert (ImageMagick 6.8.0-7 2013-02-19 Q16)
real	0m51.472s
user	4m44.998s
sys	0m19.737s

Python MP + PIL 1.1.7
real	0m18.123s
user	2m5.540s
sys	0m2.721s

Python MP + Wand (ImageMagick 6.8.0-7 2013-02-19 Q16)
real	0m39.012s
user	4m39.162s
sys	0m4.145s

Python MP + pgmagick (GraphicsMagick 1.3.17 2012-10-13 Q8)
real	0m17.148s
user	1m47.560s
sys	0m1.593s

Python MP + envoy + sips
real	0m52.984s
user	0m58.504s
sys	0m13.715s

The biggest surprise was that sips showed virtually no gain; multiprocessing had no effect on the actual processing. I wonder if there’s some pipelining going on, or what the loss in subprocesses was… PIL and GraphicsMagick beat the pants off ImageMagick, both being over twice as fast in processor and wall time.

I would have liked to try comparing freeimage, but alas I couldn’t get the wrappers to work. smc.freeimage and FreeImagePy had problems talking to the dylib, and I was able to get mhotas’ freeimage wrapper mostly working, but it was giving me fits on resizing. Maybe next time.

Python and ImageMagick

UPDATE 2013-10-27: Wand continues to have frequent updates and now supports sequences, line/text drawing, reading EXIF, chops, and some effects among other things. You can keep track w/ the latest CHANGELOG. I can highly recommend it if it does what you need.

I’m a big Python fan and it’s my preferred language these days, but sometimes you just have to shake your head and give up. One area where Python falls down is in its ImageMagick bindings. You can read more about the history of some of the libs here.

Here’s my experiences on OS X (10.6, 10.8) and MacPorts that hopefully will save some people time:

  • PythonMagick (BROKEN) – the official bindings, and somewhat up-to-date (last updated 2012-09-19). I could get it to compile, but it threw TypeErrors when trying to run. You can compile it like so:

    ./configure --prefix=/opt/local CPPFLAGS="-I/opt/local/include -I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7" LDFLAGS=-L/opt/local/lib

  • pythonmagickwand (BROKEN) – this shows up in searches, but it hasn’t been updated in 5 years. It is, as you can imagine, broken.
  • python-magickwand (BROKEN) – last updated 2012-01-22, it includes a nice README (w/ the aforementioned history) and examples, and I wanted it to work; alas, I couldn’t get it to.
  • Wand (WORKS!) – Wand is under active development (last commit 2013-01-31), is Pythonic, has community contributions, and works! Sounds perfect, right? Alas, it doesn’t support many features currently (layers, effects, animation etc) although it’s on the roadmap.
  • pgmagick (WORKS!) – This lib was the closest to doing what I needed. It’s very active (last commit 2013-02-10) and is much more comprehensive than Wand… however, while supposedly working for ImageMagick, I could only get it working w/ GraphicsMagick, which in my case was missing features that I needed. There are decent docs but very little real code out there. Here, btw, is how I got it running (it has some boost issues; also, pip doesn’t work):

    sudo port install boost
    cd /opt/local/lib
    sudo ln -s libboost_python-mt.a libboost_python.a
    sudo ln -s libboost_python-mt.dylib libboost_python.dylib
    sudo easy_install pgmagick

After wrestling for quite a while, I threw in the towel and rewrote my code in Ruby w/ RMagick, which is well maintained and up-to-date, has comprehensive documentation, and lots of example code floating around the web.

If you’re tied to Python, using PIL or subprocess/envoy to call convert/mogrify directly is probably your best bet, but if you are doing anything substantially complex, calling out to an RMagick script will probably save you a lot of pain.

RIP Aaron Swartz

I was first introduced to Aaron (impossibly young), over a decade ago at a tech conference (OSCON?). And, while we were never close, we often floated in the same circles (tech, activism, civic and political tech) and over the years our paths crossed many times, in emails, projects, at conferences or meetups. The last time I saw him was in Boston, June 2010. We met up outside a food court in Cambridge and caught up on the projects we were starting/wrapping up and swapped some thoughts on civic and campaign tech.

More than a friend, Aaron Swartz was a fellow traveler. He was one of us. In many ways, the best of us. It was a punch in the gut when I read the headline last night. He dedicated much of his life and his many talents to fighting injustice and trying to make a difference.

And beyond the sense of loss, there’s a bitter taste that injustice and indifference have won the day.

Rest in peace, Aaron Swartz.

Raw Nerve – some of Aaron’s best writing.

F2C2012: Aaron Swartz keynote – “How we stopped SOPA”

Fall 2012 Mix

It’s been a while, hasn’t it? One of the nice things is that there’s a huge backlog of stuff to choose from, although that also makes it a bit harder to remember what you were digging a while ago.

A few of these tracks are from my This Is My Jam – unfortunately it doesn’t really have a history, so it’s less than useful for actually remembering anything…

Anyway, a few tracks. Some new artists, some old. A bit of a future-groove that gets washed away w/ the tide.

MacPorts without XCode

Post-XCode 4.3 (when XCode was moved to the Mac App Store), the Command Line Tools are now a separate download (either from within XCode or from ADC). This also means that you can install the Command Line Tools without needing all 4.4GB of XCode thanks largely to Kenneth Reitz’s work on OSX-GCC-Installer. Pretty sweet, and designed to work with Homebrew. The easiest way to install the command line tools is of course, via the terminal:

xcode-select --install

MacPorts has served me well over the past few years without much fuss, but it’s apparently not done well for people upgrading XCode (see the Problem Hotlist and sample Stack Overflow answer if you have that problem). Now this is all fine and good with XCode, but what if you want to just use the Command Line Tools and skip XCode entirely?

Well, I’m glad you asked. Mostly it involves running the following:

xcode-select --switch /usr/bin

This makes xcrun point to the correct location when running binaries; however, if MacPorts is giving you guff about not having XCode installed, this is what I did (as root):

cd /usr/bin
mv xcodebuild xcodebuild.orig
touch xcodebuild
chmod a+x xcodebuild

And for the contents of the new xcodebuild:

#!/bin/bash
if [[ $1 == '-version' ]]; then
  echo "Xcode 4.4"
  echo "Build version 4F250"
else
  # Pass through to the real xcodebuild
  /usr/bin/xcodebuild.orig "$@"
fi

MacPorts seems to use xcrun xcodebuild -version to check the XCode version, so I just get it to lie when it asks.

Now, this is quite ugly, and the proper way to do things would probably be to either actually install XCode or perhaps, finally make the switch to Homebrew, but I thought I’d post this in case it comes in handy for people…

New Windows Netbook: User Experience Report

TL;DR/SPOILER: Pretty Pathetic

Yesterday, I ended up helping a friend pick up a new PC at Fry’s. Not super high on my list of things to be doing on a Sunday, but I rolled with it. All we needed was a minimal computer to run a browser, so after looking at the laptop and nettop selection, we decided to go for a 10″ EeePC. There was a 12″, but we were told that it was out of stock, and after we decided on the 10″, we were told that it was out of stock (and discontinued) as well. These netbooks were running Windows 7 Starter, and I asked what the difference was (vs Home, Professional, or whatever), but the sales associate didn’t know. In the end, he brought out an also-discontinued but similar (single-core N455, not dual-core D525 Atom) netbook. At this point, I was jonesing to get the hell out of dodge, and after another 20 minutes of dicking around, it was sent to the front desk for checkout/pickup. Overall shopping experience grade: pathetic.

Anyway, I don’t want that to overshadow what’s coming next, so let’s just move on (had I known, I would have just told my friend to order online via Amazon or something else civilized). Now for the setup… It’s been close to a decade since I’ve unboxed and booted a retail Windows PC, so I was sort of looking forward to seeing how the experience has improved.

It doesn’t start off too bad: a nice bootup logo, some simple form fields to fill out, and then a Samsung installer that automatically runs to install some system software. We leave for lunch, and when we return, it’s… still… running. All told, it takes just under an hour before I reach the Desktop, after which the laptop (with 1GB of RAM) is almost unusably slow. Sure enough, the Task Manager shows that it has 0 memory free. Interestingly, the Bing Bar is the app using the second-most memory. After another hour+ of uninstalling the apps that had presumably just been installed in the previous hour (Norton, the Bing Bar, and the Samsung System Tools being some of the worst offenders), I downloaded Google Chrome, ran the “boot performance” tool (another half-hour), and ended up with a usable laptop and a passable web browsing experience. Overall initial boot/setup experience grade: super pathetic.

Now granted, this is a <$300 device, but I'm honestly surprised at how horrible the first-boot experience was. Much worse than I remembered, much less what I was expecting in 2011. How can a manufacturer get away with delivering this sort of experience and still be in the business of selling computers? After a decade of using Macs (and occasionally imaging Linux systems on similar-class hardware), my mind is just boggled. I wonder if people buying these things don't know any better, or if they fully understand the horror but simply must endure it (like me in this case, I suppose). It certainly occurred to me more than once during this ordeal that if I had brought my USB stick, it would have been much faster to wipe the netbook with an Ubuntu installation and be done with it.

HP Touchpad + webOS

So yeah, I went and ordered a TouchPad (a few actually, as they look like they’ll be useful as web/input devices). If you’re interested in picking one up for cheap, the epic SlickDeals thread (11K+ posts) has the latest stock info. (for general info, there’s another thread w/ some useful links, and the PreCentral TouchPad forums). It’s not for everyone, but $100 for a tablet w/ a 9.7″ XGA IPS screen, dual-core 1.2GHz Scorpion SoC (APQ8060 + Adreno 220), and 6300mAh battery that has a clean embedded Linux (and can easily chroot Ubuntu) is a steal.

Of course, if you’re not gonna be hacking on one, it’ll be a decent web browser or photoframe, and I have no doubt that the homebrew guys will keep plugging away for a while, but I’d treat it more as a disposable $100 purchase. (My thinking is the upcoming Amazon tablets will split the difference in pricing, but ultimately have much better longevity).

Now, back to the hardware for a bit. While there has been a rash of articles blaming the TouchPad’s performance on the hardware, I think that’s baloney. For those who aren’t regularly comparing ARM specs, all you need to know is that in terms of raw power, the Scorpion should hold its own – equivalent to a current-gen Cortex-A9/GPU combo like Apple’s A5/SGX54x or Nvidia’s Tegra 2 (maybe a little less memory bandwidth/IPC, but it has a faster clock). There’s an Anandtech article that does a good job summarizing.

The Anandtech article has a SunSpider comparison, which mirrors the launch benchmarks. The TouchPad is slow because the web layer is slow. Luna, webOS’s GUI, runs entirely on the web layer. QED. This mirrors my cursory prelaunch SDK testing (NDA lifted 6/30).

I mostly gave up on webOS back in the summer of 2010, pre-HP acquisition, and although I retain a fondness for the idea of webOS, the execution has always left me ambivalent, primarily because of performance. I think Dion makes a bit of an understatement when saying they should have spent more time profiling. More than any other feature or app (well, Maps), lag, OOM errors, and unresponsiveness were the primary issues that drove me away.

Although a lot of it comes down to doing (hard) low-level optimization work (or dumb easy stuff like turning off logging), I think at least some chunk is just due to running on old software. Last year at the Palm Developer Day, the excuse given about why webOS software was so out of date was due to recertification issues, but w/ webOS 3.x being a tablet-only fork, this obviously didn’t prove to be the ultimate reason.

Based on the 3.0.2 SDK Emulator, here’s a rundown of some of the stack:

Linux Kernel 2.6.26 was originally released Jul 13, 2008. In comparison, Android Honeycomb runs 2.6.36 (Oct 10, 2010). There is in fact an active project that’s done great work patching the kernel (better schedulers, governors, compcache), although I’m not sure if all the modules required to upgrade vs backport are available.

webOS reports using AppleWebKit/534.6. WebKit was tagged Safari-534.6 on Aug 27, 2010. This might not seem too bad when comparing w/ kernels, but to give some perspective, Chrome 7.0.517 was released with AppleWebKit/534.7 on Oct 21, 2010. I’m currently running Chrome 13.0.782.112, which uses AppleWebKit/535.1 (tagged on Aug 11, 2011). Safari 5.1 is using AppleWebKit/534.48.3 (tagged Jun 24, 2011). webOS has Acid compliance and other standards issues, and is lacking in many useful HTML5 features, which is somewhat ironic considering.

Probably more relevant to performance, however, is the V8 version. webOS’s node.js is compiled against V8 2.5.9.22-2 (released Nov 11, 2010). The current latest version, released last week, is 3.5.6. Especially for JS runtimes, improvements have been coming at a blistering pace. Running V8 Benchmark v6 on Chrome 13/Canary 15 (V8 3.3.10.25 and V8 3.5.6) on my desktop gave results in the 9400/9500 range. An old version of Chrome 7 (V8 2.3.11.22) scored… 5400 on the same test. (There BTW is your 2X performance.)

webOS 2.x+’s services are based on a node.js layer. That’s great. The version of node.js they are using is 0.2.3, which was released on Oct 02, 2010. The current versions are 0.4.11 (stable) and 0.5.4 (unstable). node appears to run standalone, so it can probably be upgraded (and the JS tested) without too much trouble.

The much bigger challenge for people sticking with webOS is how to deal with all the custom-compiled/embedded bits. The biggest pieces (at least memory-wise) are WebAppMgr, LunaSysMgr, and BrowserServer, but updating any of the Luna bits is completely dependent on the whims of HP.

If they don’t open source webOS, hopefully whoever’s left can push out as many of the low-level performance optimizations as possible and maintain some sort of robust build/update system.

Sadly, the most likely scenario is that in a couple months we’ll just all be flashing an Android port.

Not So Summer Songs

Hard to believe it’s the end of August already. It’s been a long while since I’ve made a proper playlist, but in the name of procrastination, here’s some stuff that’s caught my ear this summer. Honestly, there’s not much of a theme except that there’s not much in the way of actual “Summer” songs. Well, maybe a few in there…