An Approximate Solution for TL;DR [~50 Year Old Text Summarization Hack Presented as a ~1.7MB Animated GIF]
Posted on November 13, 2013 1 Comment
Suffering from information overload? Too much TL;DR happening in your life? Attention span just isn’t what it used to be?
Watch this short ~30 second screencast (a ~1.7MB animated GIF) that demonstrates a 50+ year old hack for summarizing news articles and other types of online content. After all, it seemed fitting that the presentation of a text summarization algorithm would be as compressed and summarized as possible, right?
The text summarization code itself is taken from Mining the Social Web.

Click on the image above to watch a higher resolution version of this ~30 second animated GIF screencast. The preview version is ~360KB, while the higher resolution version is still only ~1.7MB. (WordPress wouldn’t render the full animated version of the GIF inline because of constraints imposed by this site’s theme.)
For those who prefer it, the video version of this “screencast” is also available.
Getting Started with Twitter’s API: From Zero to Firehose in ~2.5 Minutes
Posted on November 12, 2013 4 Comments
Mining the Social Web‘s goal is to teach you how to transform curiosity into insight, and its virtual machine features two IPython Notebooks that are designed to get you up and running with Twitter’s API as quickly as possible. The following ~2.5 minute screencast shows how to generate OAuth credentials, establish a Twitter API connection, and make API requests for all sorts of things. By the end of the video, you’ll be able to tap into Twitter’s Streaming API to create filters for @mentions, #hashtags, stock symbols, and more.

This short screencast teaches you how to access Twitter’s API. In less than ~2.5 minutes, you’ll be tapping into the Streaming API to query for screen names, hashtags, stock tickers, and more.
The Chapter 1 (Mining Twitter) notebook provides an orientation and gentle introduction to Twitter’s API while the Chapter 9 (Twitter Cookbook) notebook includes a collection of more than two dozen recipes that are designed to solve recurring problems that typically come up as part of any data analysis. First, follow along with the first notebook and learn the fundamentals. Then, copy, paste, and massage the code in the second notebook to create data processing pipelines as part of your own data science experiments and analyses.
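As a quick preview of what the notebooks walk you through, here is a minimal sketch of a Streaming API filter using the twitter package that the book relies on. It assumes OAuth credentials and an oauth_login helper like the one in the Twitter Cookbook, and the track query is just an example.

import twitter

# Assumes an authenticated connection from an oauth_login() helper
# like the one in the Twitter Cookbook (Chapter 9).
twitter_api = oauth_login()

# TwitterStream taps into Twitter's Streaming API; track filters the
# public stream for tweets matching the query.
twitter_stream = twitter.TwitterStream(auth=twitter_api.auth)
stream = twitter_stream.statuses.filter(track='#MiningTheSocialWeb')

for tweet in stream:
    print tweet['text']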
How To Harvest Millions of Twitter Profiles Without Violating the ToS (Computing Twitter Influence, Part 3)
Posted on October 22, 2013 1 Comment
In the last post in this continuing series on computing Twitter influence, we developed a wrapper function called make_twitter_request that handles the various sorts of HTTP error codes and network failures that you are likely to experience as you aspire to acquire non-trivial amounts of data from Twitter’s API. Although you are somewhat unlikely to need a wrapper function like make_twitter_request if you are just making a few ad-hoc API requests, you’re guaranteed to experience HTTP error codes when making non-trivial numbers of requests, if for no other reason than exceeding the notorious Twitter API rate limits that allot you a fixed number of requests per rate-limit interval (currently defined as 15 minutes).
Although it may have seemed like an unnecessary detour, the beauty of make_twitter_request will soon start to shine, because it allows us to write code, walk away, and rest assured that the computer is still hard at work accumulating the data we desire. Without it, you are much more likely to come back to your console only to discover a stack trace that prevented you from getting the data that you would much rather have seen. It’s no fun experiencing these types of errors when they happen halfway into harvesting many millions of followers, because there’s not always a good way to recover and pick back up from the point of failure.
Harvesting Account IDs
In terms of computing Twitter influence, we previously determined that the problem can be framed as a data mining exercise against a collection of followers for an account, so let’s think about how to start harvesting what might be potentially massive numbers of followers. The first step is enumerating the list of follower IDs for a screen name of interest, and Twitter’s GET /followers/ids API does a nice job of taking care of this for you. Given a screen name, it returns up to 5,000 follower IDs per API request, and you are allotted 15 requests per rate-limit window.
When you do the math, you’ll find that you can pull down 75,000 IDs per 15-minute window, 300,000 IDs per hour, and ultimately accrue about 7.2 million user IDs per day. The most popular Twitter users such as @LadyGaGa or @BarackObama have upwards of 40 million followers, so as an upper bound, you’d spend the better part of a week pulling down all of the data for one of those accounts.
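Here’s a back-of-the-envelope sketch of that arithmetic; the per-request and per-window limits are simply the values quoted above, so treat them as illustrative rather than authoritative.

IDS_PER_REQUEST = 5000
REQUESTS_PER_WINDOW = 15
WINDOWS_PER_DAY = 24 * 4  # 15-minute rate-limit windows

ids_per_day = IDS_PER_REQUEST * REQUESTS_PER_WINDOW * WINDOWS_PER_DAY
print 'Follower IDs per day: %i' % ids_per_day  # 7,200,000

followers = 40000000  # an upper bound, e.g. @LadyGaGa or @BarackObama
print 'Days to harvest: %.1f' % (followers / float(ids_per_day))  # ~5.6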
But do you really need to pull down all of the followers, or can you just request, say, the first N accounts? It depends on the assumptions that you can make about the sample that you’d get by requesting only the first N accounts. As it turns out, the account IDs are currently documented to be returned in the order in which the follow interaction occurred, which means that they are not necessarily in random order.
If you are planning to do some rigorous statistical analysis that is predicated upon random sampling assumptions, you might find that the lack of any randomness guarantee when fetching only the first N accounts just isn’t good enough. If you need guarantees about randomness, you’ll probably want to go ahead and pay the price of harvesting all of an account’s follower IDs so that you can randomly sample from them in the next step, which is using the user IDs to fetch account profiles. (All that said, bear in mind that you probably shouldn’t make any rigorous assumptions about the order in which follower IDs are returned either, since the API docs state that it may change at a moment’s notice.)
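Drawing that random sample is nearly a one-liner once you have the full list in hand. A minimal sketch, assuming followers_ids holds the complete list of IDs harvested as described above and a sample size chosen purely for illustration:

import random

SAMPLE_SIZE = 10000  # arbitrary, for illustration
sampled_ids = random.sample(followers_ids,
                            min(SAMPLE_SIZE, len(followers_ids)))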
Harvesting Account Profiles
Given a collection of account IDs, Twitter’s GET /users/lookup API returns up to 100 profiles per request with an allotment of 180 requests per rate-limit interval. When you do the math, that works out to be 18,000 profiles per 15-minute interval, which means that you can ultimately collect 72,000 profiles per hour or up to 1,728,000 account profiles per day.
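In code, fetching profiles amounts to slicing your ID list into batches of 100. The sketch below assumes the twitter_api connection and make_twitter_request wrapper from the previous post; get_profiles is a hypothetical name rather than the book’s get_user_profile implementation.

def get_profiles(twitter_api, user_ids):
    profiles = []
    for i in range(0, len(user_ids), 100):  # /users/lookup takes <= 100 IDs
        batch = user_ids[i:i + 100]
        response = make_twitter_request(twitter_api.users.lookup,
                                        user_id=','.join(str(uid) for uid in batch))
        profiles.extend(response or [])
    return profiles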
Let’s take a moment to think about what this means: for the vast majority of Twitter users, you’ll be able to collect all of the profile data that you need in minutes or hours. Many experiments that involve random samples require little more than 400 items in the sample, but you could easily work with 4,000 or even 40,000 items without encountering too many problems as far as wait times are concerned, so long as you aren’t analyzing ultra-popular users.
Even a popular tech leader such as @timoreilly has roughly 1.7 million followers, so it would only require a day or so to collect the totality of his followers’ profiles. The most popular Twitter users such as @LadyGaGa or @BarackObama, however, have upwards of 40 million followers, so you’d probably want to start a background process on a server or desktop machine that will have a reliable and constant Internet connection, or rely on random sampling to pull full profiles from a collection of account IDs.
Sample Code
Conceptually, pulling all of the follower IDs or profiles for an account is just a couple of tight loops around make_twitter_request as previously described. Examples 9-17 and 9-19 from Mining the Social Web introduce the get_user_profile and get_friends_followers_ids functions that take care of the heavy lifting for these tasks as part of a “Twitter Cookbook.” Sample invocations that illustrate how to use these functions follow. (Take a look at the full source code for the Twitter Cookbook for all of the details.)
# Create an API connection
twitter_api = oauth_login()

# Pull all of the friend and follower IDs for an account
friends_ids, followers_ids = get_friends_followers_ids(twitter_api,
                                                       screen_name="ptwobrussell")

# XXX: Store the ids...

# Pull all of the profiles for friends/followers
friends_profiles = get_user_profile(twitter_api, user_ids=friends_ids)
followers_profiles = get_user_profile(twitter_api, user_ids=followers_ids)

# XXX: Store the profiles...
Next Time
Did you notice that the sample invocations define variables like friends_ids or followers_profiles that could potentially contain far too much data to hold in memory and blow the heap? In the next post, we’ll wrap up the data collection process by introducing MongoDB, a document-oriented database that’s ideal for storing the kind of JSON data that’s returned by Twitter’s API, and use it to ensure that the memory requirements for our data collection process remain modest. We’ll also package up the code that’s been introduced to that point into a convenient general-purpose utility that you can easily invoke to harvest data with little more than a few keystrokes.
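As a preview of the idea, the sketch below persists harvested profiles as MongoDB documents so that downstream analysis doesn’t need to hold them all in memory; the database and collection names are illustrative, and the full treatment is the subject of the next post.

import pymongo  # pip install pymongo

client = pymongo.MongoClient('localhost', 27017)
db = client['twitter']

for profile in followers_profiles:
    db['followers'].insert(profile)  # one JSON document per profile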
Having then aspired to compute influence and acquired the necessary data, we’ll be able to analyze and summarize our findings as part of our 4-step general-purpose framework for mining social web data like a pro.
Why Is Twitter All the Rage?
Posted on October 9, 2013 4 Comments
Next week, I’ll be presenting a short webcast entitled Why Twitter Is All the Rage: A Data Miner’s Perspective that is loosely adapted from material that appears early in Mining the Social Web (2nd Ed). Given that the webcast is now less than a week away, I wanted to share the content that inspired the topic. The remainder of this post is a slightly abridged reproduction of a section that appears early in Chapter 1. If you enjoy it, you can download all of Chapter 1 as a free PDF to learn more about mining Twitter data.
Why Is Twitter All the Rage?
How would you define Twitter?
There are many ways to answer this question, but let’s consider it from an overarching angle that addresses some fundamental aspects of our shared humanity that any technology needs to account for in order to be useful and successful. After all, the purpose of technology is to enhance our human experience.
As humans, what are some things that we want that technology might help us to get?
- We want to be heard.
- We want to satisfy our curiosity.
- We want it easy.
- We want it now.
In the context of the current discussion, these are just a few observations that are generally true of humanity. We have a deeply rooted need to share our ideas and experiences, which gives us the ability to connect with other people, to be heard, and to feel a sense of worth and importance. We are curious about the world around us and how to organize and manipulate it, and we use communication to share our observations, ask questions, and engage with other people in meaningful dialogues about our quandaries.
The last two bullet points highlight our inherent intolerance to friction. Ideally, we don’t want to have to work any harder than is absolutely necessary to satisfy our curiosity or get any particular job done; we’d rather be doing “something else” or moving on to the next thing because our time on this planet is so precious and short. Along similar lines, we want things now and tend to be impatient when actual progress doesn’t happen at the speed of our own thought.
One way to describe Twitter is as a microblogging service that allows people to communicate with short, 140-character messages that roughly correspond to thoughts or ideas. In that regard, you could think of Twitter as being akin to a free, high-speed, global text-messaging service. In other words, it’s a glorified piece of valuable infrastructure that enables rapid and easy communication. However, that’s not the whole story. It doesn’t adequately address our inherent curiosity and the value proposition that emerges when you have over 500 million curious people registered, with over 100 million of them actively engaging their curiosity on a regular monthly basis.
Besides the macro-level possibilities for marketing and advertising—which are always lucrative with a user base of that size—it’s the underlying network dynamics that created the gravity for such a user base to emerge that are truly interesting, and that’s why Twitter is all the rage. While the communication bus that enables users to share short quips at the speed of thought may be a necessary condition for viral adoption and sustained engagement on the Twitter platform, it’s not a sufficient condition. The extra ingredient that makes it sufficient is that Twitter’s asymmetric following model satisfies our curiosity. It is the asymmetric following model that casts Twitter as more of an interest graph than a social network, and its APIs provide just enough of a framework for structure and self-organizing behavior to emerge from the chaos.
In other words, whereas some social websites like Facebook and LinkedIn require the mutual acceptance of a connection between users (which usually implies a real-world connection of some kind), Twitter’s relationship model allows you to keep up with the latest happenings of any other user, even though that other user may not choose to follow you back or even know that you exist. Twitter’s following model is simple but exploits a fundamental aspect of what makes us human: our curiosity. Whether it be an infatuation with celebrity gossip, an urge to keep up with a favorite sports team, a keen interest in a particular political topic, or a desire to connect with someone new, Twitter provides you with boundless opportunities to satisfy your curiosity.
Think of an interest graph as a way of modeling connections between people and their arbitrary interests. Interest graphs provide a profound number of possibilities in the data mining realm that primarily involve measuring correlations between things for the objective of making intelligent recommendations and other applications in machine learning. For example, you could use an interest graph to measure correlations and make recommendations ranging from whom to follow on Twitter to what to purchase online to whom you should date. To illustrate the notion of Twitter as an interest graph, consider that a Twitter user need not be a real person; it very well could be a person, but it could also be an inanimate object, a company, a musical group, an imaginary persona, an impersonation of someone (living or dead), or just about anything else.
For example, the @HomerJSimpson account is the official account for Homer Simpson, a popular character from The Simpsons television show. Although Homer Simpson isn’t a real person, he’s a well-known personality throughout the world, and the @HomerJSimpson Twitter persona acts as a conduit for him (or his creators, actually) to engage his fans. Likewise, although this book will probably never reach the popularity of Homer Simpson, @SocialWebMining is its official Twitter account and provides a means for a community that’s interested in its content to connect and engage on various levels. When you realize that Twitter enables you to create, connect, and explore a community of interest for an arbitrary topic of interest, the power of Twitter and the insights you can gain from mining its data become much more obvious.
There is very little governance of what a Twitter account can be aside from the badges on some accounts that identify celebrities and public figures as “verified accounts” and basic restrictions in Twitter’s Terms of Service agreement, which is required for using the service. It may seem very subtle, but it’s an important distinction from some social websites in which accounts must correspond to real, living people, businesses, or entities of a similar nature that fit into a particular taxonomy. Twitter places no particular restrictions on the persona of an account and relies on self-organizing behavior such as following relationships and folksonomies that emerge from the use of hashtags to create a certain kind of order within the system.
==
If you found this content interesting and want to learn more about how to mine Twitter and other social media data, you can download all of Chapter 1 as a free PDF.
All source code for the book is available at GitHub and screencasts are available to help get you started as part of the book’s turn-key virtual machine experience.
Writing Paranoid Code (Computing Twitter Influence, Part 2)
Posted on September 23, 2013 1 Comment
In the previous post of this series, we aspired to compute the influence of a Twitter account and explored some relevant variables to arriving at a base metric. This post continues the conversation by presenting some sample code for making “reliable” requests to Twitter’s API to facilitate the data collection process.
Given a Twitter screen name, it’s (theoretically) quite simple to get all of the account profiles that follow the screen name. Perhaps the most economical route is to use the GET /followers/ids API to request all of the follower IDs in batches of 5,000 per response, followed by the GET /users/lookup API to retrieve full account profiles for those IDs in batches of 100 per response. Thus, if an account has X followers, you’d need to anticipate making ceiling(X/5000) API calls to GET /followers/ids and ceiling(X/100) API calls to GET /users/lookup. Although most Twitter accounts may not have enough followers that the total number of requests to each API resource presents rate-limiting problems, you can rest assured that the most popular accounts will trigger rate-limiting enforcement that manifests as an HTTP error in RESTful APIs.
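The arithmetic is easy to sanity-check in an interpreter; the follower count below is hypothetical.

import math

X = 1700000  # hypothetical follower count

ids_calls = int(math.ceil(X / 5000.0))    # GET /followers/ids requests
lookup_calls = int(math.ceil(X / 100.0))  # GET /users/lookup requests

print '%i calls to /followers/ids, %i calls to /users/lookup' % \
      (ids_calls, lookup_calls)  # 340 and 17,000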
Although it seems more satisfying to have all of the data you could ever want, you should ask yourself whether you really need every follower profile for an account of interest, or whether a sufficiently large random sample will do. However, be advised that in order to truly collect a random sample of followers for an account, you must sample from the full population of all follower IDs as opposed to just taking the first N follower IDs. The reason is that Twitter’s API docs state that IDs are currently returned with “the most recent following first” but that the order may change with little to no notice. In either case, there’s no expectation or guarantee of randomness. We’ll revisit this topic in the next post, in which we begin harvesting profiles.
Write Paranoid Code
Only a few things are guaranteed in life: taxes, death, and that you will encounter inconvenient HTTP error codes when trying to acquire remote data. It’s never quite as simple as assuming that there won’t be any “unexpected” errors associated with code that makes network requests, because the very nature of making calls to a remote web server inherently introduces the possibility of failure.
Only a few things are guaranteed in life: taxes, death, and that you will encounter inconvenient HTTP error codes when trying to acquire remote data.
In order to successfully harvest non-trivial amounts of remote data, you must employ robust code that expects errors to happen as a normal occurrence as opposed to being an exceptional case that “probably won’t happen.” Write code that expects a mysterious kind of network error to crop up somewhere deep in the guts of the underlying HTTP library that you are using, be prepared for service disruptions such as Twitter’s “fail whale,” and by all means, ensure that your code accounts for rate limiting and all of the other well-documented HTTP error codes that the API documentation provides.
Finally, ensure that you don’t suffer data loss if your code fails despite your best efforts: persist the data that is returned from each request so that your code doesn’t run for an extended duration only to fail and leave you with nothing at all to show for it. With the data persisted, you can often recover by restarting from the point of failure as opposed to starting from scratch. For what it’s worth, I’ve found that consistently writing code that behaves this way is a little easier said than done, but like anything else, it gets easier with a little bit of practice.
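A minimal sketch of the persist-as-you-go idea follows; the JSON Lines file and the save_batch name are assumptions for illustration, not the book’s implementation.

import json

def save_batch(batch, filename='follower_ids.jsonl'):
    # Append each item as one JSON document per line so a crash
    # mid-harvest loses at most the in-flight request.
    with open(filename, 'a') as f:
        for item in batch:
            f.write(json.dumps(item) + '\n')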
Making Paranoid Twitter API Requests
Example 9-16 [viewable IPython Notebook link from Mining the Social Web’s GitHub repository] presents a pattern for making paranoid Twitter API requests and is reproduced below. It accounts for the HTTP errors in Twitter’s API documentation as well as a couple of other errors (such as urllib2’s infamous BadStatusLine exception) that sometimes appear, seemingly without rhyme or reason. Take a moment to study the code to see how it works.
import sys
import time
from urllib2 import URLError
from httplib import BadStatusLine
import json
import twitter

def oauth_login():
    # XXX: Go to http://twitter.com/apps/new to create an app and get values
    # for these credentials that you'll need to provide in place of these
    # empty string values that are defined as placeholders.
    # See https://dev.twitter.com/docs/auth/oauth for more information
    # on Twitter's OAuth implementation.

    CONSUMER_KEY = ''
    CONSUMER_SECRET = ''
    OAUTH_TOKEN = ''
    OAUTH_TOKEN_SECRET = ''

    auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                               CONSUMER_KEY, CONSUMER_SECRET)

    twitter_api = twitter.Twitter(auth=auth)
    return twitter_api

def make_twitter_request(twitter_api_func, max_errors=10, *args, **kw):

    # A nested helper function that handles common HTTPErrors. Return an updated
    # value for wait_period if the problem is a 500 level error. Block until the
    # rate limit is reset if it's a rate limiting issue (429 error). Returns None
    # for 401 and 404 errors, which requires special handling by the caller.
    def handle_twitter_http_error(e, wait_period=2, sleep_when_rate_limited=True):

        if wait_period > 3600:  # Seconds
            print >> sys.stderr, 'Too many retries. Quitting.'
            raise e

        # See https://dev.twitter.com/docs/error-codes-responses for common codes

        if e.e.code == 401:
            print >> sys.stderr, 'Encountered 401 Error (Not Authorized)'
            return None
        elif e.e.code == 404:
            print >> sys.stderr, 'Encountered 404 Error (Not Found)'
            return None
        elif e.e.code == 429:
            print >> sys.stderr, 'Encountered 429 Error (Rate Limit Exceeded)'
            if sleep_when_rate_limited:
                print >> sys.stderr, "Retrying in 15 minutes...ZzZ..."
                sys.stderr.flush()
                time.sleep(60*15 + 5)
                print >> sys.stderr, '...ZzZ...Awake now and trying again.'
                return 2
            else:
                raise e  # Caller must handle the rate limiting issue
        elif e.e.code in (500, 502, 503, 504):
            print >> sys.stderr, 'Encountered %i Error. Retrying in %i seconds' % \
                (e.e.code, wait_period)
            time.sleep(wait_period)
            wait_period *= 1.5
            return wait_period
        else:
            raise e

    # End of nested helper function

    wait_period = 2
    error_count = 0

    while True:
        try:
            return twitter_api_func(*args, **kw)
        except twitter.api.TwitterHTTPError, e:
            error_count = 0
            wait_period = handle_twitter_http_error(e, wait_period)
            if wait_period is None:
                return
        except URLError, e:
            error_count += 1
            print >> sys.stderr, "URLError encountered. Continuing."
            if error_count > max_errors:
                print >> sys.stderr, "Too many consecutive errors...bailing out."
                raise
        except BadStatusLine, e:
            error_count += 1
            print >> sys.stderr, "BadStatusLine encountered. Continuing."
            if error_count > max_errors:
                print >> sys.stderr, "Too many consecutive errors...bailing out."
                raise

# Sample usage
twitter_api = oauth_login()

# See https://dev.twitter.com/docs/api/1.1/get/users/lookup for
# twitter_api.users.lookup
response = make_twitter_request(twitter_api.users.lookup,
                                screen_name="SocialWebMining")

print json.dumps(response, indent=1)
In the next post, we’ll continue the conversation by using make_twitter_request to acquire account profiles so that the data science/mining can begin. Stay tuned!
===
If you missed the first post in this series (Computing Twitter Influence, Part 1: Arriving at a Base Metric), you can find it here.
Read more about the journey of authoring Mining the Social Web, 2nd Edition and how I tried to apply lean practices to make it the best possible product for you in Reflections on Authoring a Minimum Viable Book.
Arriving at a Base Influence Metric (Computing Twitter Influence, Part 1)
Posted on September 19, 2013 2 Comments
This post introduces a series that explores the problem of approximating a Twitter account’s influence. With the ubiquity of social media and its effects on everything from how we shop to how we vote at the polls, it’s critical that we be able to employ reasonably accurate and well-understood measurements for approximating influence from social media signals.
[24 Sept 2013 – Made a few light edits in preparation for a cross-post on the O’Reilly Programming Blog]
Unlike social networks such as LinkedIn and Facebook in which connections between entities are symmetric and typically correspond to a real-world connection, Twitter’s underlying data model is fundamentally predicated upon asymmetric following relationships. Another way of thinking about a following relationship is to consider that it’s little more than a subscription to a feed about some content of interest. In other words, when you follow another Twitter user, you are expressing interest in that other user and are opting in to whatever content that user would like to place in your home timeline. As such, Twitter’s underlying network structure can be interpreted as an interest graph and mined for insights about the relative popularity of one user when compared to another.
…Twitter’s underlying network structure can be interpreted as an interest graph…
There is tremendous value in being able to apply competitive metrics for identifying key influencers, and there’s no better time to get started than right now since you can’t improve something until after you’re able to measure it. Before we can put some accounts under the microscope and start measuring influence, however, we’ll need to think through the problem of arriving at a base metric.
Subtle Variables Affecting a Base Metric
The natural starting point for approximating a Twitter account’s influence is to simply consider its number of followers. After all, it’s reasonable to think that the more followers an account has accumulated, the more popular it must be in comparison to some other account. On the surface, this seems fine, but it doesn’t account for a few subtle variables that turn out to be critical once you begin to really understand the data. Consider the following subtle variables (amongst many others) that affect “number of followers” as a base metric:
- Spam bot accounts that effectively are zombies and can’t be harnessed for any utility at all
- Inactive or abandoned accounts that can’t influence or be influenced since they are not in use
- Accounts that follow so many other accounts that the likelihood of getting noticed (and thus influencing) is practically zero
- The network effects of retweets by accounts that are active and can be influenced to spread a message
Even though some non-trivial caveats exist, the good news is that we can take all of these variables into account and still arrive at a reasonable set of features from the data that could be implemented, measured, and improved as an influence metric. Let’s consider each of these issues and think about how to appropriately handle them.
Forging a Base Metric
The cases of (1) and (2) present what is effectively the same challenge with regard to computing an influence score, and although there’s not a single API that we can use to detect whether or not an account is a spam bot or inactive, we can use some simple heuristics that turn out to work remarkably well for determining if an account is effectively irrelevant. For example, if an account is following fewer than X accounts, hasn’t tweeted in Y days, or hasn’t retweeted any other account more than Z times (or some combination thereof), then it’s probably not an account of relevance in predicting influence. Reasonable initial values for parameterizing the heuristics might be some weighting of X=10, Y=30, and Z=2; however, it will take some data science experiments to arrive at optimal values.
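To make the heuristic concrete, here’s a hypothetical sketch. friends_count is a real field in Twitter’s v1.1 user profiles, but last_tweet_time and retweet_count would need to be derived from an account’s timeline, and the X/Y/Z values are just the initial guesses suggested above.

from datetime import datetime, timedelta

X, Y, Z = 10, 30, 2  # following count, days since last tweet, retweets

def is_relevant(profile, last_tweet_time, retweet_count):
    if profile['friends_count'] < X:
        return False  # follows too few accounts to look active
    if datetime.utcnow() - last_tweet_time > timedelta(days=Y):
        return False  # hasn't tweeted recently enough
    if retweet_count < Z:
        return False  # doesn't engage with other accounts
    return True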
In the case of (3), we can also take into account the total number of retweets associated with the account and even home in on whether it has ever retweeted the other account in question. For example, if a very popular account is following you, but it’s also following tens of thousands of other people (or more) and seldom (or never) retweets anyone (especially you), then you probably shouldn’t count on influencing it with any reasonable probability.
By the way, this shouldn’t surprise you; it’s just not humanly possible to do much with Twitter’s chronologically oriented view of tweets as displayed in a home timeline. However, despite the home timeline’s lack of usability for following more than trivial numbers of users, Twitter does offer a coping mechanism: you can organize users of interest into lists and monitor the lists as opposed to the home timeline. The number of times a user is “listed” is certainly an important variable worth keeping in mind during data science experiments to arrive at an influence metric. (However, be advised that spam bots are increasingly exploiting lists these days as a means of getting noticed.)
In the case of (4), it would be remiss not to consider network effects such as what happens when you get retweeted, because this can completely change the dynamics of the situation. For example, even though an account of interest might have relatively few followers of its own, all it takes is for one of those followers to be popular enough for a retweet to light the initial spark and reach a larger audience. As a case in point, consider an account that has fewer than 100 followers, but where one or more of those followers has tens of thousands of followers of their own and opts to retweet.
…even though an account of interest might have relatively few followers of its own, all it takes is for one of those followers to be popular enough for a retweet to light the initial spark and reach a larger audience…
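The arithmetic behind that spark is simple but striking; the numbers below are made up purely for illustration.

own_followers = 100
retweeter_followers = [80, 120, 45000]  # one popular account retweets

potential_reach = own_followers + sum(retweeter_followers)
print 'Potential reach after retweets: %i' % potential_reach  # 45,300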
As a final consideration, let’s just go ahead and acknowledge the serendipity of Twitter. The percentage of “active” followers who will even see any particular tweet from someone they’re not intentionally keeping up with is generally going to be a small fraction of what is theoretically possible. After all, most people have a lot more to do in life than carefully and thoughtfully monitor Twitter feeds. Furthermore, the popular users who would create the most significant network effects from a retweet must have done something to earn their “popular” status, which probably means that they’re quite busy and are unlikely to notice any given tweet on any given day.
To make matters worse, even if they do notice your tweet, they may opt to mark it as a “favorite” instead of retweeting it, which is another variable that we should consider in arriving at a base metric. Getting “favorited” is certainly a compliment, is useful data to consider for certain analytics, and serves a purpose of validation; however, its secondary effects don’t compare to a retweet’s, because favorites receive comparatively little visibility.
Next Time
In the next post, we’ll introduce some turn-key example code for making robust Twitter requests in preparation to acquire and store all of the follower profiles for one or more users of interest so that we can eventually mine the profiles and try out some variations of our follower metric. Stay tuned…