Random Thoughts on Twitter

With the Oprah thing and the rather bizarre Maureen Dowd interview (ripe for parody), I thought I might as well throw my two cents in.

Actually, this article was the one that convinced me to write something, specifically this quote:

I used to think Twitter would never catch on in the mainstream because it’s somewhat stupid. Now I realize I was exactly wrong. Twitter will catch on in the mainstream because it’s somewhat stupid. It’s blogging dumbed down for the masses, and if there’s one surefire way to build something popular, it’s to take something else that is already popular and simplify.

To clarify, I think this is fundamentally wrong and completely misses the point. (As an aside, similar things were said about blogging when it started taking off. These comments were also fundamentally wrong in the same way.)

Now, for some context: even though I was a relatively early adopter (my first tweet – I believe it was still called twttr then, with a snot-themed logo and a focus on SMS), that’s not to say my own understanding and thinking hasn’t evolved along with the service (and its audience)…

The first time Twitter really picked up on my radar was while I was in London, as it had gotten a fair amount of traction as a cheaper way to text. Along those lines, it took off again as a “group chat” style tool the next year at SXSW, as a way for friends to coordinate in a lighter-weight and less annoying way than Dodgeball. At this time, it was still focused around SMS delivery, although there were some interesting clients starting to pop up. It was also around this time (post-SXSW) that my (and others’) focus turned to looking at Twitter through the lens of ambient awareness (Clive Thompson did a great writeup last year) and what we began to refer to in conversation as “statuscasting” (a term I may have made up, but which I assume was on the tip of everyone’s tongue). Then there was a big explosion in clients, mashups, and the use of Twitter as a “command line” interface. And, of course, through all of this, Twitter continued to build up steam in the way that social tools do, as waves of adopters and their networks jumped on board. While there has always been a dialectic between semi-private conversation and broadcast/publishing that continues to make Twitter really interesting, the trend arguably has been toward the latter (especially with the “collection” of followers).

Now, with some context that hopefully hints at some level of complexity to the Twitter phenomenon, here’s where I return to directly smacking down that original quote and offering an alternative interpretation…

Twitter isn’t “retarded blogging” any more than blogging was “retarded long-form writing.” What blogging uncovered was a “web-native” sort of communication – one focused on links (both hyperlinks and permalinks), temporality (dated posts, reverse chronological order), and decentralized conversation – at first manually, since people simply read each other’s blogs (when I started, it all fit on a single list), then later with comments, and formalized through trackbacks, pingbacks, and dedicated aggregation tools. It took a while, but I believe that Twitter has revealed a communication style that is native to the “web” today. What is this web? It’s one filled with activity streams – the “social web” and the “continuous partial web” – and one that exists beyond the browser and beyond the desktop – the mobile web and the “widget web.” The ingestion characteristics of these media are focused around intermittent (but constant) bursts of attention and the ability both to scan the gestalt and to track details, and the output is about the “in-between times” of other activities. You don’t sit around for an hour writing a tweet. In fact, most people start with time that otherwise would have been spent idling (hence the large proportion of airport complaint messages).

That, I suppose, is the one aspect of the quote that is right – Twitter does have more mass appeal because it can take root by filling a vacuum rather than being an activity that requires active displacement (at least to begin with!). The point is that it’s high immediate reward with low incremental commitment. And of course, the innocuousness of that small text box is part of Twitter’s genius…

Now, if there is a better (or different) model, my suspicion is that it’s in finer scoping. Sure, geeks like to talk about interop and decentralization (and while that may come, as it did for email, it may not, as with IM), but I think it’s ultimately less interesting than figuring out how Twitter (or a similar type of service/activity) ends up bifurcating or integrating the aforementioned pull between public and private (groups? targeted/typed messages?).

I think that’s where we already see some interesting things, like how location services have splintered off, and I think that’s what Facebook is attacking – in the same way that it created a semi-private place for photo and online-discussion activity, it’s trying to do so for tweets as well.

For those who recall, this harks back to discussions on semipermeability (ironically semipermanent – here’s the archive of Joyce’s paper on that), which never really took off (again, a niche that Facebook expanded into, I think).

Well, there’s not much of a conclusion here. This is entitled random thoughts after all. Maybe two last things while I’m here for those who remember the milieu and impetus of blogging… Firstly, my friendfeed, which is currently aggregating my activity streams across over a dozen services, and second, a graph of my blog output over the past few years:


Some Notes on Distributed Key Stores

Last week I ended up building a distributed keystore for a client. That wasn’t my original intention, but after doing testing on just about every project out there, it turned out to be the best (only?) solution for our needs.

Specifically: a production environment handling at least 100M items with an accelerating growth curve, very low-latency retrievals, and the ability to handle hundreds of inserts/s w/ variable-sized data (avg 1K, but in many cases well beyond that) … on EC2 hardware. The previous system had been using S3 (since SDB is limited to 1K values) – err, the lesson there, BTW, is don’t do that.

So, these requirements are decent – something that actually requires a distributed system, but something that shouldn’t be beyond what can be handled by a few nodes. My assumption was that I’d actually just be doing some load testing and documenting installation for the keystore the client had picked out, and that would be that. This was not the case.

I’m still catching up on a number of other projects, so I don’t have a great deal of time to do a formal writeup; however, the work I’ve done may be useful for those who might actually need to implement a production keystore.

Some other recent useful starting points may be Richard Jones’ Anti-RDBMS roundup and Bob Ippolito’s Drop ACID and think about data Pycon talk.

  • MySQL – while the BDB backend is being phased out, MySQL is a good baseline. In my testing, on a single m1.large, I was able to store 20M items within one table at 400 inserts/s (with key indexes). Key retrievals were decently fast but sometimes variable. There are very large production keystores being run on MySQL setups. Friendfeed has an interesting writeup of something they’re doing, and I have it on good authority that there are others running very big key stores w/ very simple distribution schemes (simple hashing into smaller table buckets – see the sketch after this list). If you can’t beat this, you should probably take your ball and go home.
  • Project Voldemort – Voldemort has a lot of velocity, and seems to be the de facto recommendation for distributed keystores. A friend had used this recently on a similar-scale (read-only) project, and this was what I spent the majority of my time initially working with. However, some issues…
    • Single-node local testing was quite fast (1000+ inserts/s); however, once run in a distributed setup, it was much slower. After about 50M insertions, a multinode cluster was running at <150 inserts/s. This… was bad, and led me to ultimately abandon Voldemort, although there were other issues…
    • There is currently only a partially complete Python client. I added persistent connections as well as client-side routing w/ the RouteToAll strategy, but, well, see above.
    • Embedded in the previous statement is something worth mentioning – server-side routing currently doesn’t exist.
    • While I’m mentioning important things that don’t exist, there is currently no way to rebalance or migrate partitions, either online, or, as far as I could tell, even offline. This puts a damper on things, no?
    • As a Dynamo implementation, Voldemort uses a VectorClock (automatic versioning) – this is potentially a good thing for a large distributed infrastructure, but without the ability to add nodes or rebalance, a write-heavy load would lead to huge growth with no way to clean up old/unused items (this, of course, is also not implemented)
  • LightCloud – this is a simple layer on top of Tokyo Tyrant, but the use of two hash rings was a bit confusing, and the lack of production usage beyond the author’s (on a whopping 2 machines containing “millions” of items) didn’t exactly inspire confidence. Another problem was that its setup was predicated on using master-master replication, which requires update-logs to be turned on (again, storing all updates == bad for my use case). This was, of course, discovered by rooting through the source code, as the documentation (including basic setup or recommendations for # of lookup & storage nodes, etc.) is nonexistent. The actual manager itself was pretty weak, requiring setup and management on a per-machine basis. I just couldn’t really figure out how it was useful.
  • There were a number of projects – including Cassandra (which actually has some life to it now, lots of checkins recently), Dynomite, and Hypertable – that I tried and could not get compiled and/or set up. My rule of thumb is that if I’m not smart enough to get something up and running without a problem, the chances that I’ll be able to keep it running w/o problems are pretty much nil.
  • There were a number of other projects that were unsuitable due to their non-distributed nature or other issues like lack of durable storage or general skeeviness, and so were dismissed out of hand: Scalaris (no storage), memcachedb (not distributed, weird issues/skeeviness, issues compiling), and redis (quite interesting but way too alpha). Oh, and although it wasn’t in consideration at all because of previous testing with a much smaller data set, on the skeeviness factor I’ll give CouchDB a special shout-out for having a completely aspirational (read: vaporware) architectural post-it note on its homepage. Not cool, guys.
  • Also, there were one or two projects I didn’t touch because I had settled on a working approach (despite the sound of it, the timeline was super compressed – most of my testing was done in parallel with lots of EC2 test instances spun up; loading millions of items and watching for performance degradation just takes a long time no matter how you slice it). One was MongoDB, a promising document-based store, although I’d wait until the auto-sharding bits get released to see how it really works. The other was Flare, another Japanese project that sort of scares me. My eyes sort of glazed over while looking at the setup tutorial (although having a detailed doc was definitely a pleasant step up). Again, I’d finished working on my solution by then, but the release notes also gave me a chuckle:

    released 1.0.8 (very stable)

    • fixed random infinite loop and segfault under heavy load
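
Following up on the MySQL bullet above: here’s a minimal sketch, in Python, of what that “simple hashing into smaller table buckets” scheme might look like. The bucket count, table names, and schema are my own invention for illustration, not details from any of the production setups mentioned:

```python
import hashlib

NUM_BUCKETS = 16  # e.g. tables kv_00 .. kv_15, all created up front

def bucket_table(key):
    """Map a key to one of NUM_BUCKETS fixed MySQL tables."""
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return "kv_%02d" % (h % NUM_BUCKETS)

# Each bucket table would look something like (hypothetical schema):
#   CREATE TABLE kv_00 (k VARBINARY(255) PRIMARY KEY, v MEDIUMBLOB);

def put_sql(key):
    # Parameterized statement for a MySQLdb-style (%s placeholder) driver.
    return "REPLACE INTO %s (k, v) VALUES (%%s, %%s)" % bucket_table(key)

def get_sql(key):
    return "SELECT v FROM %s WHERE k = %%s" % bucket_table(key)

if __name__ == "__main__":
    print(bucket_table("user:12345"))  # deterministic, stateless routing
    print(get_sql("user:12345"))
```

The appeal is that the mapping is stateless (any client can compute it), and each bucket table stays small enough for the key index to keep behaving.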

OK, so enough with all that – what did I end up with, you might ask? Well, while going through all this half-baked crap, what I did find that impressed me (a lot) was Tokyo Cabinet and its network server, Tokyo Tyrant. Here was something fast, mature, and very well documented, with multiple mature language bindings. Performance testing showed that storage size per item was 1/4 of Voldemort’s – and half the raw item size (Tokyo Cabinet comes with built-in ZLIB deflation).

Additionally, Tokyo Tyrant came with built-in threading, and I was able to push 1600+ inserts/s (5 threads) over the network without breaking a sweat. With a large enough bucket size, it promised to average O(1) lookups and the memory footprint was tiny.
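
For flavor, here’s a rough sketch of the kind of threaded insert test just described. This assumes a pytyrant-style dict interface (PyTyrant.open(host, port)) – check your client’s actual API – and the host/port and item counts are placeholders:

```python
import threading
import time

import pytyrant  # assumed client; any Tyrant binding with a dict-style API works

HOST, PORT = "127.0.0.1", 1978   # placeholder Tyrant node
THREADS, ITEMS_PER_THREAD = 5, 10000

def worker(tid):
    # One connection per thread -- the underlying socket isn't thread-safe.
    db = pytyrant.PyTyrant.open(HOST, PORT)
    payload = "x" * 1024  # ~1K values, roughly like the real data set
    for i in range(ITEMS_PER_THREAD):
        db["key:%d:%d" % (tid, i)] = payload

start = time.time()
threads = [threading.Thread(target=worker, args=(n,)) for n in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("%.0f inserts/s" % ((THREADS * ITEMS_PER_THREAD) / (time.time() - start)))
```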

So, it turns out the easiest thing to do was just throw up a thin layer to consistently hash the keys across a set of nodes (starting out with 8 nodes w/ a bucket-size of 40M – which means O(1) access on 80% of keys at 160M items). There’s a fair amount of headroom – I/O bottlenecks can be balanced out with more dedicated EC2 instances/EBS volumes, and the eventual need to add more nodes shouldn’t be too painful (i.e. adding nodes and either backfilling the 1/n items or adding inline moves).
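
Here’s a minimal sketch of that thin consistent-hashing layer, under my own assumptions – the node addresses, virtual-point count, and md5-based ring are illustrative, not the actual implementation:

```python
import bisect
import hashlib

class HashRing(object):
    """Consistent hashing: map keys onto a fixed set of storage nodes."""

    def __init__(self, nodes, points=100):
        # Place `points` virtual points per node on the ring so keys
        # spread evenly even with a small number of physical nodes.
        self.ring = sorted(
            (self._hash("%s:%d" % (node, i)), node)
            for node in nodes
            for i in range(points)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key):
        # First ring point clockwise from the key's hash, wrapping around.
        idx = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

if __name__ == "__main__":
    nodes = ["tyrant-%d:1978" % n for n in range(8)]  # hypothetical 8-node setup
    ring = HashRing(nodes)
    print(ring.node_for("some:key"))  # -> which Tyrant node gets this get/put
```

The win over naive modulo hashing is that adding a ninth node only remaps roughly 1/n of the keys, which is what keeps the backfill/inline-move story tolerable.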

There are some issues (e.g. hanging on idle sockets), but current gets are at about 1.2–3ms across the network (ping is about 1ms), and it seems to otherwise be doing OK.

Anyway, if you made it this far, the takeaways:

  1. The distributed stores out there are currently pretty half-baked at best. Your comfort level running them in prod may vary, but for most sane people, I doubt you’d want to.
  2. If you’re dealing w/ a reasonable number of items (<50M), Tokyo Tyrant is crazy fast. If you’re looking for a known quantity, MySQL is probably an acceptable solution.
  3. Don’t believe the hype. There’s a lot of talk, but I didn’t find any public project that came close to the (implied?) promise of tossing nodes in and having it figure things out.
  4. Based on the maturity of the projects out there, you could write your own in less than a day. It’ll perform as well, and at least when it breaks, you’ll be more fond of it. Alternatively, you could go on the conference circuit and talk about how awesome your half-baked distributed keystore is.

UPDATE: I’d be remiss if I didn’t stress that you should know your requirements and do your own testing. Any numbers I toss around are very specific to the hardware and (more importantly) the data set. Furthermore, most of these projects are moving at a fast clip so this may be out of date soon.

And, when you do your testing, publish the results – there’s almost nothing out there currently so additional data points would be a big help for everyone.

Virgin America’s Crappy Online User Experience

These days I mostly prefer to fly Virgin America. Their flight experience is a huge step above most of the other domestic carriers (friendly service, decent seats, regular non-prison-inmate faucets, etc.), and touches like plugs in every seat, a good entertainment system (although there’s also a huge unfinished post about improving that), and now wifi, all at a competitive price, make it pretty much a no-brainer for me.

So, it’s always been a little surprising that an airline with such a strong focus on branding and flight experience – one that seems targeted at people like me – would have such a bad online experience.

I’m actually not going to bitch too much about the website (you know, about how it’s slow, has weird bookmark-unfriendly URLs with weird sessions, is much too dependent on Flash, with lots of weird interactions where it consistently takes me multiple tries to log in because its login form doesn’t tab properly, etc.), but rather to focus on something that happened to me today that should have been a good thing.

I had a 2PM-ish flight back home today. At 11:30AM, an email gets sent to me from “Virgin America Guest Services” telling me about an “Important Schedule Change Notification”:

Your flight has been impacted by a schedule change which may result in the departure time of your flight being earlier than previously scheduled.

That’s actually great – well, certainly better to be notified as soon as possible than not to find out at all. And besides being good customer service, I’m sure it’s good on VA’s end if they can reduce the amount of shuffled seating that kind of schedule change might cause. However, it continues:

We’d encourage you to login to the Check-In / Travel Manager section of our website at virginamerica.com to view your current itinerary. You’ll need your elevate login information or your confirmation code (see below) and your last name to access your itinerary. If you have any questions regarding the new time please contact our Reservations call center at 1.877.FLY.VIRGIN (1.877.359.8474) between the hours of 3:30am – 11:30pm PST. You may already be aware of the new departure time and will not need to take any action at this time.

Now, this is cut and pasted directly from the email. It is an HTML email, but it doesn’t include even a link to the site, not to mention a link to the flight information. This, of course, is made doubly frustrating by the fact that it is a personalized email that includes my name, address, and confirmation number. Now, I’m not a rocket scientist, but couldn’t they just save a step and include the flight information and what changed? If for some reason they couldn’t, why wouldn’t they include a direct link to that information? That’s all before you try to load the VA site on your phone. (Which works, barely, on my iPhone. Good luck with that if you don’t have 3G or WebKit.)

It seems that VA would actually save money if they could streamline this, since as it is, they probably get a lot of people calling rather than looking at the email and finding out what they need.

Since I like VA, the next step for me was replying and letting them know that it’d be great if they could include the information, a link, or something mobile-friendly. Unfortunately, once I got home, I saw that it had been sent from a no-reply email address (my reply bounced!). There’s no other way to contact VA from the email, unless you want to spend time on the call center, which isn’t a good use of anyone’s time.

Well, since I really do like VA (have I mentioned it’s incredibly easy to stand by on an earlier flight?), I decided to go to the website and contact them… and after writing out my brief issues with 4 bullet points, it turns out there’s a 1024-character limit (yes, that’s 7 tweets – and no dynamic character counter).

At this point, I probably should have given up, but I’m a sucker for sunk costs, so I went to look for an online character counter and started shaving off characters and doing some txt squeezing. In the end, they got my “feedback,” but it did get me thinking about this whole chain of events, and about how lots of these little bad UX decisions can compound to ultimately burn good will really quickly (and how difficult this sort of thing is to measure).

Now, I don’t think this had a particularly big effect on my feelings about VA getting me from point A to point B decently; however, it’s interesting to me when I compare, say, their level of quality/attention to detail for things like their safety video (the best I’ve seen) vs. their online/digital UX.

From my perspective, I also think there’s a pretty strong business case, and at least for some of these, ROI is calculable (e.g., bucket-testing call rates or missed-flight percentage if you A/B test variations of the initial email), but for most of the rest of it, it’s not. To some degree, I also wonder whether a company like VA (or almost any company) really appreciates how much of their UX – and ultimately their marketing, customer service, and brand – is dictated/deeply impacted by their online experience. They must have the numbers on what percent of their sales come through the website and what percentage of customers are subscribed to email or use the mobile web.

Anyway, enough rambling. Now I’m just putting off all the work I need to do before my next flight…

SXSW Music 2009 Wrapup

Like last year, I tried to shoot some video of the sets I went to at SXSW Music. This time, I tried to get more substantive clips. I also upgraded from a Leica D-Lux 3, which had OK video quality but very poor sound quality, to a Samsung NV24HD, which shoots 720p H.264 video and higher-bitrate audio. The sound (mics) still leaves a bit to be desired, but overall I was pretty happy with the results, especially since the camera is very pocketable (camera battery life was my main bane).

I’ve finished posting most of my videos on Vimeo. Here are some of my favs:


I wanted to catch them after hearing them on the SXSW torrent and saw they were only playing once. They got started a bit late (good for me, since I first hit the British Embassy), and didn’t disappoint.


I tried to catch Yppah last year after discovering him on a Ninja Tune compilation, but it was just a DJ set. This year, he was w/ a live band which was more along the lines of what I was expecting.


Just a tiny clip of Amanda Palmer in church. The funny story behind this is that there are two church venues right next to each other and I was totally confused on why the schedule was all off…


Unfortunately, by the time I got a better view, my battery had given up the ghost. He played a song at the end of Quincy Jones’ keynote, so I knew I had to see him play at the Elephant Room that night…


For whatever reason I never gave them a good listen, which thankfully is rectified because their set was great.


The only artist I saw twice this SXSW because I lurve Alcatraz Kid (his first album) so much. The venue was a bit out of the way, but I also had flan while I was out there, so that worked out.


Tell me when P.O.S. will be in town and I will be there.


The best set I saw this year. Great songs, amazing performance, front row. Damn skippy.


One of my few “I wanna catch that” artists – I’d missed him in the previous days, and with the venue opening super late, I was fearing the worst, but I got to see a great, if short, set, so that worked out.


I had only caught one track from the torrent, but got it in my mind to check her out and was pretty pleasantly surprised.


This isn’t the greatest clip, but Nosaj Thing’s set roxored. Will definitely need to catch him next time he’s rocking a party in SF or LA.

I didn’t catch as many day shows, as I had some work to do, but overall, I was again very happy w/ my SXSW Music experience.

This was also my 10th SXSW Interactive – which flew by way too quickly (and was, surprisingly, even larger than last year). I want to say that maybe next year is a good time to take a break, but I can’t say that I didn’t have a lot of fun catching up with old friends again.

Jim Cramer on The Daily Show

Jon Stewart is able to articulate some of the things that are so exasperating about this whole situation and that the “real” media has been remiss on. Worth watching.

For geeks wondering about whether these systemic issues might be fixable, Toby Segaran and Jesper Andersen gave an interesting talk at ETech about developing a more robust credit rating system (it picks up in the last third where they start demoing what they’ve been doing). Check out Freerisk to see what they’re up to.

What I’ve Been Up To Lately

Since the beginning of the year, I’ve been spending most of my waking hours working on a new project with an old friend. It’s still a bit of a work in progress, but we’ll be at ETech this week and at events at SXSWi and SXSWm the week after, so what better time than now for a long rambly blog post introducing the Lensley Automatic.

Our new photobooth
hello.
A couple years ago, Jaime decided to build a photobooth (and, with no prior experience, headed off to Home Depot…) and it’s been percolating along since. We’ve done events at the X-Games, the US Open, and with clients like Nike, Adidas, Diesel, Fuel.TV, Fuse.TV, MTV, etc. Towards the end of last year, after returning from a several-month-long interruption working on the Obama campaign (that worked out OK, btw :), we decided that it was time to take it to the next level.

It’s been an incredibly busy past few months, but what we’ve ended up with, I think, is something pretty unique (with a lot of potential). We have a new and improved enclosure (although, admittedly, a new version is already cooking) and, more interestingly (well, it certainly took a lot more of my time), our own custom software for the booth, visualizations, and network interaction, giving us the ability to completely customize the printed output, the booth user experience, and the digital followup. For a start, we’ll be tweeting and posting photos to flickr w/ autotagging by way of RFID (fingers crossed on that!) at ETech. Just the first of the cool things we have planned.

And, while learning Cocoa hasn’t been all roses, it has been a great deal of fun working on a project that touches on hardware, visualization, photography, events, and the social web (and soon, video and mobile) – it’s a big cross section of “things I’m interested in.” Plus, all the joys of starting a small business (that’s half facetious, but also half genuine). Sure the timing might not be ideal, but all in all, it’s been a great experience in terms of stretching out some different muscles after being a bit cooped up. And well, there’s no time like the present to do your own thing.

Oh, if you’ve seen me in person in the past couple months (not likely!) and I’ve been more scatterbrained than usual (or have been responding in a zombie-like fashion), now you know why. (Not helped by the fact that for whatever reason, I spent a good few weeks of development time on a 4pm-10am schedule.)

New To Me

Here are a few tracks that are a year or two old that recently caught my ear…

Anchorsong (it’s better when you realize that it’s sequenced live w/ an MPC and Triton) was from the SXSW torrent – my original goal was to go through the entire torrent, but that may be a bit ambitious (a week and a half left)…

Take Away Shows

Recently I noticed that La Blogotheque now has a Vimeo account. This of course, led to an afternoon catching up on Take Away Shows.

Here are some of my favs on Vimeo:


Amanda Palmer – Amsterdam (A Take Away Show) from La Blogotheque on Vimeo.


Piano Session by Why? & Son Lux @ BBMix – Back on Stages from La Blogotheque on Vimeo.


Margot and the Nuclear So & So’s – As Tall as cliffs – A Take Away Show from La Blogotheque on Vimeo.


Satine “October Dane” – Concert à Emporter from La Blogotheque on Vimeo.


Bloc Party, ‘This Modern Love’ – A Take Away Show from La Blogotheque on Vimeo.

Book Reviews

The book review of Daemon on Boing Boing reminded me that I’ve been meaning to put my 2 cents in. A friend recommended Daemon to me last month – it’s a techno-thriller with an interesting meta-story of being self-published and gaining popularity originally by word of mouth from bloggers and the like. It’s definitely a page-turner, and I quite liked the premise and how it started, but towards the end it unravels into extreme silliness (and stupidity), which was sort of annoying. (Some spoilers.) I wasn’t as bothered by the central, and ever-expanding, conceit of having an ultra-coordinated organization powered by a bunch of shell scripts (although it really strains credibility as the story goes on), but rather by how stupidly and incompetently TPTB were depicted at a pure tactical level. The last incident at the bunker could never have happened, for at least a half-dozen simple reasons. Instead of being exciting or thrilling or whatever it was meant to be, it just offended my sensibilities. Anyway…

The second novel I finished recently was Neal Stephenson’s latest, Anathem. I’m not sure when I really committed to reading it – I spent a long while near the beginning going through a page or two at a time (basically thinking about that xkcd comment), which wasn’t helped by the format – reading it on the Kindle meant it was pretty much impossible to flip to the glossary or appendices (of course, in theory, ebooks should actually be better with annotations and definitions for things like this, but no one’s taken the time to do that properly yet). On the bright side, the Kindle is probably at least a pound or two lighter than the 960pg hardcover, so I guess that wasn’t the worst trade-off.

Once it got up and running, however, the story actually starts to zip along. Not only are you rewarded for the initial slog, but that experience is actually tied in, both thematically and within the plot itself – which I found to be pretty clever. And, there’s an honest to god real ending to boot. So, kudos to Stephenson for that. I’m sure that I’ll be re-reading this at some point, and that it’ll be worth it.

Stimulus 101

I’ve been otherwise occupied this month, so I’ve only had a chance to keep an intermittent eye on the Stimulus Plan and its development. One of the things I was a bit disappointed to find is that, despite the copious amounts of back-and-forth prattle coverage and the much better ongoing discussions in the economic/biz blogs I follow, I couldn’t find a really good single page/resource describing in simple, understandable terms what’s happening to the economy, why economic stimulus is needed, and how the plan will help.

One of the first places I went was WhiteHouse.gov, and the complete failure in communicating and selling the stimulus plan there – surprising, coming from my experience on the campaign – was a big miss on the part of the Obama administration. And, while I think this past week has been much better, with the President’s recent WP op-ed and tonight’s press conference (video), I think that having a good, concise one-pager would still be an enormous benefit.

It’s one thing to talk about a full-blown crisis, but an image like this (posted Friday on Nancy Pelosi’s blog) makes things much clearer:

Job Losses in Recent Recessions

(Here’s a version with job losses from all post-WWII recessions, although I don’t know if they’re using consistent measurements)

Also, from a macro-economics or fractional-banking standpoint, I think there’s a pretty big gap in terms of public understanding of what’s going on (skip to about 3 min in to hear about the details of the bank run):

For me, the things I’d be most interested in are the economic projections (job losses? GDP shortfall?) and accessible breakdowns of the stimulus plan effects (hint: a 100pg PDF is not the ideal format), the effects of the infrastructure investments, and comparisons of tax cuts vs direct spending:

Fiscal Stimulus Bang for the Buck

Lastly, and perhaps most importantly, I think the spotlight on the economy needs to force us to talk about and address the huge structural problems that we’ve been ignoring – sustainability of growth, consumption, income inequality, etc. Here’s a video of Robert Reich discussing some of this:

While I’m disappointed that neither the MSM nor the Stimulus Plan backers have done something like this, it’s occurred to me that this is the perfect sort of project for some good designers to tackle in conjunction w/ either some of the economist bloggers that have been covering this stuff or some of the civic groups out there. (Just saying.)

(FWIW, I recently started reading Krugman‘s The Return of Depression Economics and the Crisis of 2008, which has been pretty interesting so far. For those looking for a better idea of how our money and our economy work, Chris Martenson’s Crash Course is a good (and depressing) start. The Wikipedia entry on the Economy of the United States is also a good place to start surfing.)