Web Page Speed Report: random($foo) – hmm, it says that I’m not gzipping, but I am. Patrick’s analyzer is currently down; I should test with that sometime too.
See also: GetContentSize, Spam Filtering with gzip
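For what it’s worth, the gzip question is easy enough to check by hand rather than trusting an analyzer: send a request with an Accept-Encoding: gzip header and see whether Content-Encoding: gzip comes back. A minimal Python sketch (the URL is a placeholder, swap in the page you want to test):

    import urllib.request

    # Placeholder URL: substitute the page being tested.
    req = urllib.request.Request(
        "http://example.com/",
        headers={"Accept-Encoding": "gzip"},
    )
    with urllib.request.urlopen(req) as resp:
        # 'gzip' here means the server actually compressed the response.
        print("Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"))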
Tumult HyperEdit is a lightweight HTML editor with a preview pane that displays the web page live as you type.
Weekend Todo:
Ernie: …so in conclusion, this p-p-project parses strings taken from the MS-XML object and, on a user-interface level, mimics Microsoft Excel f-f-functionality using DHTML functions.
Web Dev #1: But your function uses a method that’s IE-exclusive.
Ernie: That’s because the client only used Internet Explorer. (Insert sound of crickets chirping here.)
Web Director: So, show us some of your other projects?
Ernie: I created an accessibility-focused, web-standards-supporting DHTML project that dismantles all nuclear weapons worldwide. It also cures cancer and sings your children to sleep.
Web Dev #8: I did that once. But mine also worked for Mozilla. (Insert image of a tumbleweed rolling through the conference room.)
Between work and school, it’s been hard to get much else done (I just got home after finishing an animation assignment), but I did want to do some braindumping. It’s spurred partly by some recent writing around the web, partly by the realization that I haven’t ever really written in depth about a project that’s consumed quite a lot of my free time over the past couple of years, but especially because it’s unlikely that I’ll get much time to work on it anytime soon (or maybe I need to write this down to better motivate myself to work on it instead of sleeping in the coming months).
Those who know me probably know about my longstanding fascination with knowledge management (KM), and knowledge bases (KBs) in particular. I started blogging in earnest in late ’99. Around mid-2000 or so I started giving serious thought to blogs as a knowledge capture vector (and to how to store and make sense of that info), and I’ve been playing around with ways to do so ever since. To answer Marc’s comment, my latest exploration has taken me pretty close to full circle. The basic concept is to store micro-content as atomic units within a graph structure, with fragments assembled into multiple, faceted collections (blog posts, categories, pages, etc.).
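A minimal sketch of what I mean, in Python (Fragment, Collection, and Store are hypothetical names for illustration, not anything I’ve actually built): each fragment is an atomic node in the graph, and a collection is just a named, ordered view over fragment ids, so the same fragment can show up in a blog post, a category, and a page all at once.

    from dataclasses import dataclass, field

    @dataclass
    class Fragment:
        """An atomic unit of micro-content (a node in the graph)."""
        id: str
        body: str
        links: list = field(default_factory=list)   # ids of related fragments (edges)

    @dataclass
    class Collection:
        """A faceted view assembled from fragments: a post, category, page, etc."""
        name: str
        kind: str                                    # e.g. 'post', 'category', 'page'
        members: list = field(default_factory=list)  # ordered fragment ids

    class Store:
        def __init__(self):
            self.fragments = {}    # id -> Fragment
            self.collections = {}  # name -> Collection

        def add(self, frag, *colls):
            """Store a fragment and attach it to any number of collections."""
            self.fragments[frag.id] = frag
            for c in colls:
                self.collections.setdefault(c.name, c).members.append(frag.id)

        def assemble(self, name):
            """Stitch a collection together from its member fragments."""
            coll = self.collections[name]
            return "\n\n".join(self.fragments[fid].body for fid in coll.members)

Adding one fragment to two collections then gives you two facets over the same atomic unit:

    store = Store()
    post = Collection("km-braindump", "post")
    cat = Collection("km", "category")
    store.add(Fragment("f1", "Blogs are a knowledge capture vector."), post, cat)
    print(store.assemble("km-braindump") == store.assemble("km"))   # True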
For the past year or so, I’ve been calling this a ‘blikiliner.’ Like others, I’ve been working at this from a wiki base, but for various reasons that has proven problematic. Here are some basic qualities that I believe such a system should have:
I’d include links, but I’m lazy. If I had my search engine up, you might be able to find a lot of links pertaining to the subject, but err, yeah, this is a strictly low-tech operation at the moment. (It’s no coincidence that my requirement-set mirrors my particular itches 🙂)
Anyway, none of this is particularly new or unique or insightful, although there are a lot of tough design choices to get hung up on. My main surprise is that while discussion on this seems to come up fairly often, no one has put together a system like this yet (specifically the rich-text editing + shared dataset aspect).
Hopefully I haven’t relegated myself to the talking-rather-than-doing class (although I certainly do more than my share of that already).
Related: online bookmarking, aggregation, infinite caching
Socializer – IBM’s version of WASTE, with a geo-locational (well, subnet) twist. Plug in ZeroConf for fun. (via Stewart)
gangstories – really compelling stuff (via matt)
IBM KICKS OFF GLOBAL LINUX AD PUSH (has a link to the WMV). IBM also has it in RP, QT, and MPG.