We all know (well, most people on the internet know) that a Tweet has 140 characters, which you can typically store in 140 bytes (more if the text isn’t plain ASCII). Plus overhead for username, datestamp, etc.
But how many bytes does a tweet actually take up in a week’s lifetime? Everywhere, on everything?
Let’s see: Twitter.com has my tweet on their servers. Probably on a handful of hard drives at various points in their internal infrastructure. And I bet they use a content delivery network, which means it’s replicated on another handful of hard drives around the world.
Each Twitter follower gets at least one copy in their client – so in my case that’s another 100 or so hard disks that have a copy somewhere. (Yes, it’s true, I only have ~100 followers. My twitter ego is sad.)
All of my feeds and my friends’ feeds store a copy of my tweet. That’s another whole handful of feed aggregator server systems that it’s stored on, to say nothing of the number of web/RSS/Atom clients that cache a copy of the page locally when someone reads the feed.
With Twitter’s popularity, tweets get widely searched. This week, for example, #MoonFruit is giving away MacBooks by randomly selecting tweets with their hashtag. That means plenty of people are searching for that hashtag, and all those people will get copies of my tweet as well.
And we haven’t even talked about how Google and other search engines store crawl and query results across data centers full of machines – that probably adds dozens of other instances of at least bits of each tweet.
So – what’s the peak number of aggregate storage bytes that one tweet uses over a week’s lifetime?
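We can sketch a quick back-of-envelope answer. Every multiplier below is an invented guess for illustration – the post doesn’t give real numbers, and neither does Twitter:

```python
# Back-of-envelope estimate of aggregate storage for one tweet over a week.
# Every count below is a made-up assumption, not measured data.

TWEET_BYTES = 140       # 140 ASCII characters; non-ASCII text costs more in UTF-8
OVERHEAD_BYTES = 360    # guess: username, timestamp, IDs, other metadata

copies = {
    "twitter_internal_replicas": 5,   # databases, backups, internal caches
    "cdn_edge_nodes": 20,             # content delivery network replicas
    "follower_clients": 100,          # one cached copy per follower's client
    "feed_aggregators": 10,           # RSS/Atom services storing the feed
    "search_result_caches": 50,       # people searching a popular hashtag
    "search_engine_crawls": 24,       # Google et al., across data centers
}

per_copy = TWEET_BYTES + OVERHEAD_BYTES
total_copies = sum(copies.values())
total_bytes = total_copies * per_copy

print(f"{total_copies} copies x {per_copy} bytes = {total_bytes:,} bytes")
```

With these guesses, one 140-character tweet balloons to a couple hundred copies and roughly a hundred kilobytes of aggregate storage – three orders of magnitude more than the tweet itself.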
It’s interesting to think that all of that storage – something that just 20 years ago would have been quite expensive – is now used for something as mundane as telling your friends and random followers when you’re taking a coffee break. Moore’s Law certainly enables us to do some amazing things with information and communication – as well as lots and lots of inane things.