Why I Do Not Yammer

Yammer, the enterprise social network, has an epic fail for a brand name.  Who in their right mind wants to yammer?  Is that a pleasant word?  No, my friend, it is not.  Here are some definitions found online for the word “yammer”:

  1. to whine or complain.
  2. to make an outcry or clamor.
  3. to talk loudly and persistently.

So, this is supposed to make my work life better?  To have yet another channel of communication I must attend to — this one purpose-built for people to whine or complain loudly and persistently?

Yes, Twitter is also a silly brand name, but at least the word “twitter” brings connotations of merrily chirping birds.  Yammer sounds like a headache.

Message to branding wizards: respect the meanings of words.  Yammer is a horrible name for something that you want people to like and use.

Message to the working world: if your boss tells you, “We’re all going to Yammer now, it’s some next-gen Internet shit that will help us synergize,” or something to that effect, please tell him or her, “Sorry, but I think that’s the dumbest name for a communication product I’ve ever heard.  In the name of intelligent life on Earth, let’s not use it.”

Now, I’ll admit, at least 95% of my irritation with Yammer is because their name is asinine.  But having actually used it at my actual job, I can also say this: the marginal benefit of having a Twitter-for-the-enterprise deliver the random thoughts of my co-workers (who do, in fact, have high-quality random thoughts) to my desktop or mobile device nearly instantaneously is far outweighed by the distraction it creates.  So, really, I’ve tried it and I can do without, thank you very much.

Also, to be clear — I wish the hard working folks at Yammer the best of luck.  I hope they’re all rolling in money, post-acquisition.  And I also hope that they all get to move on to something else soon and work on a product with a better name.

QCon San Francisco 2010 Redux

So, I just got back from QCon SF 2010 last night.  All in all, a very good conference.  Rather than write up any kind of extensive summary, I’ll offer up my rapid digest of the major themes from the sessions I attended.  Without further ado, here it is in outline form:

  1. Dealing with data at large scale
    1. OLTP
      1. Those who can get away with it are using systems that have more flexible consistency models than traditional RDBMS (CAP theorem trade-offs)
        1. Most using some form of eventual consistency
        2. Many sites implementing their own Read-Your-Own-Writes consistency on top of more general storage systems
        3. These systems must deal with data growth (partitioning data across nodes)
        4. Must deal with hot spots (redistributing / caching hot data across many nodes)
        5. Must deal with multiple data centers (some are simply punting on this)
      2. Twitter and Facebook both built their own key-value stores on top of MySQL and memcached
        • Twitter’s solution seemed a little cleaner, Facebook’s a little more crusty
      3. Amazon S3: also a key-value store with its own caching, replication, and consistency models
        • This one had the most sophisticated-seeming solution for dealing with hot spots
    2. OLAP
      1. Lots of people using Hadoop to crunch offline data
        1. Good tools for workflow of jobs, dependency management, monitoring are essential
        2. Quantcast found EC2 inadequate for their throughput needs compared to an owned, highly tuned cluster, though it has improved over time
          • still good to have on hand for surge capacity
  2. Operating on the public cloud
    1. Increased demand for monitoring — and most monitoring tools are not built for cloud instances that wink in and out of existence
    2. Increased demand for fault tolerance — latency can vary more widely, and hardware failures happen outside your control
    3. Increased demand for sophisticated deployment automation
    4. The motivation is that you want to use a cloud, not build one
      1. Capacity planning is difficult when you’re in a huge growth scenario
      2. Leverage the staffing and expertise of the public cloud companies (Amazon, GigaSpaces, etc.)
      3. A data center is a large, inflexible capital commitment
    5. Traditional CDNs are still necessary and useful for low-latency, high-bandwidth media delivery
    6. PCI-compliant storage in the cloud is not a solved problem
  3. Serious interest in alternative languages, both on and off the JVM
    1. There are lots of serious choices available in this sphere (Scala, JRuby, JavaScript -> Node.js, Erlang, Clojure)
    2. Lots of enthusiasm for the JVM, less enthusiasm for Oracle’s ability or intention to be a good steward of it
  4. Though there were many very good sessions, especially in the Architectures You Always Wondered About track, in terms of sheer rock-star appeal these two presentations were the standouts that had everyone talking:
    1. LMAX – how to do over 100k concurrent transactions per second at less than 1ms latency
    2. Node.js
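
One recurring OLTP theme above — read-your-own-writes consistency layered on top of an eventually consistent store — is simpler than it sounds.  Here’s a minimal sketch of the idea: the client keeps a session-local cache of its own writes and consults it before hitting the (possibly stale) replicated store.  Class and method names here are my own invention, not from any of the systems presented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of read-your-own-writes consistency on top of an eventually
// consistent store.  The Map "store" stands in for a replicated
// key-value store; myWrites is the session-local write cache.
class RyowClient {
    private final Map<String, String> store;
    private final Map<String, String> myWrites = new HashMap<>();

    RyowClient(Map<String, String> store) {
        this.store = store;
    }

    void put(String key, String value) {
        // Remember our own write immediately.  In a real system the write
        // would propagate to replicas asynchronously; here we simulate
        // replication lag by not updating the store until replicate().
        myWrites.put(key, value);
    }

    void replicate() {
        // Simulate eventual propagation to the replicated store.
        store.putAll(myWrites);
    }

    String get(String key) {
        // Prefer our own recent writes over possibly stale replica reads.
        String own = myWrites.get(key);
        return own != null ? own : store.get(key);
    }
}
```

The payoff is that a user always sees their own updates immediately, even while other readers may briefly see stale data — exactly the trade-off several of the speakers described.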

jdb: pretty f#*king useful

I rediscovered the joys of using Java’s built-in command-line debugger, jdb, today.  I’d gotten so used to the graphical debugger in Eclipse that I’d forgotten all about jdb, but lately Eclipse has been choking when debugging projects with many Maven dependencies.  I really needed a low-tech, high-reliability solution for debugging, and going back to the old “add logging statement, recompile, run” routine was not the answer (yes, it helps in production, but in your own dev environment you deserve better).  jdb actually does the job pretty well.  It could definitely do with some improvements, like more shell-like line editing and the ability to specify alternate init scripts at runtime.  But like the can opener on a Swiss Army knife, it will get the job done.
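
For the uninitiated, a typical session looks something like this (the class name and line number here are placeholders; the commands — stop in, run, locals, step, print, cont — are all standard jdb commands):

```
$ javac -g Main.java            # compile with full debug info
$ jdb Main
Initializing jdb ...
> stop in Main.main             # set a breakpoint
> run
Breakpoint hit: "thread=main", Main.main(), line=5
main[1] locals                  # inspect local variables
main[1] step                    # single-step to the next line
main[1] print args.length       # evaluate an expression
main[1] cont                    # resume execution
```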


Last week, I wrote a quick-and-dirty GUI application to help my wife count down the years, months, days, hours and seconds until the end of her professional training.  I’m posting the source code, licensed according to the Apache License 2.0.  This is trivial stuff, and I coded in the most lazy fashion possible (no unit tests, very little thought for reusability, etc), so your mileage may vary.  To be honest, I was kind of reveling in the freedom to be unprofessional.  So much for that ethos of craft I’ve been going on about, eh?  Hey — just like the occasional candy bar, late night bender, or evening spent watching trash TV, the occasional tossed-off project is good for your soul.

However, it does actually work well.  It builds via Maven in the usual way, and produces an executable jar as its final build artifact, so in true GUI fashion you can double-click it to start.  Enjoy.
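
If you’re curious, the heart of such a countdown is just calendar arithmetic.  This is not the actual source from the post — just a sketch of the date math involved, using the modern java.time API: Period handles the calendar-aware years/months/days, and Duration handles the time-of-day remainder.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.Period;

// Sketch of countdown arithmetic: split the time remaining into
// calendar-aware years/months/days (Period), then the leftover
// hours:minutes:seconds (Duration).
class Countdown {
    static String remaining(LocalDateTime now, LocalDateTime target) {
        Period p = Period.between(now.toLocalDate(), target.toLocalDate());
        // Duration of whatever the date-level Period didn't account for.
        Duration d = Duration.between(now.plus(p), target);
        if (d.isNegative()) {
            // Borrow a day when the time-of-day remainder underflows.
            p = p.minusDays(1);
            d = d.plusDays(1);
        }
        return String.format("%dy %dm %dd %02d:%02d:%02d",
                p.getYears(), p.getMonths(), p.getDays(),
                d.toHours(), d.toMinutes() % 60, d.getSeconds() % 60);
    }
}
```

To use it, call `Countdown.remaining(LocalDateTime.now(), target)` on a timer and repaint the label — that’s essentially all a countdown GUI has to do.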

The Web Browser as a NeoVictorian Computing Triumph

I had a minor realization today about what Mark Bernstein was talking about in his blog posts about NeoVictorian Computing (which I mentioned earlier).  I had emotionally connected to what he was saying about how the software industry should return to an ethos of craft and artisanship, and how software should not try to hide its joints or its materials, but rather be constructed honestly and display its structure.  I got it, kind of, but the ideas remained abstract for me.

But I realized today that the web browser, now so ubiquitous, is in its own way very much of a kind with his ideas.  Sure, browsers now are built by large teams, but the original Mosaic browser was designed and implemented by a few talented people at NCSA.  And though browsers have gained a lot of functionality and efficiency through years of revision, they still retain the essential form of the original.  The exposed URLs in the location bar give evidence of the network protocols and the spaces between the loaded documents.  That simplicity and willingness to make the user meet the technology head-on is part of what has made the browser such a success as a technology.