Building a Killer (Twitter) Search UI

Gnip is primarily an API company, providing its customers with API access to the social data activities they request and putting the onus of processing, analysis and display of the data on said customer.

In the case of our Search API product, many of our customers want to expose the functionality and control directly to their paying users, in a way that is similar to traditional search engines (Google, Twitter Search, etc.). This type of integration is different from our HTTP Stream-based products, which require more behind-the-scenes management by our customers. So to demonstrate to our customers and prospects the type of applications they could build into their product suites using the Search API, we built Twitter Search on Rails.

I want to explain how I developed this bootstrap project for your use and why you would want to use it. The code may be Rails and CoffeeScript, but the concepts stand on their own. First, a demo:

Twitter Search on Rails Demo

Multiple Visualizations

There are many ways to slice and dice interesting tweets. Three that our customers frequently use are trend analysis, a dashboard view of actual Tweet content, and geographic distribution.

Frequency over time

One request to Gnip’s Search Count API gives a nice history of frequency in the form of a line chart like this one.

Frequency of Black Friday tweets over time

You can see how I configured Highcharts to make the above chart in chart.coffee.
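To give a flavor of that configuration, here is a minimal sketch (not the actual chart.coffee); the container id, series label, and the pre-parsed [timestamp, count] pairs are placeholder assumptions:

```coffeescript
# A minimal sketch, not the project's chart.coffee. The container id, labels,
# and the pre-parsed [epoch-millis, count] pairs are assumptions for illustration.
renderFrequencyChart = (points) ->
  new Highcharts.Chart
    chart:
      renderTo: 'frequency-chart'   # id of the container div
      type: 'line'
    title:
      text: 'Tweet frequency over time'
    xAxis:
      type: 'datetime'
    yAxis:
      title:
        text: 'Tweets'
    series: [
      name: 'Black Friday'
      data: points                  # e.g. [[Date.UTC(2013, 10, 29), 125000], ...]
    ]
```

Highcharts takes care of the datetime axis and tooltips; the only real work is mapping the Search Count API's buckets into those pairs.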

Geographic distribution

You can extract a great geographic distribution using the Profile Geo Enrichment and map it with MapBox:

Geo distribution of Black Friday tweets

Toss in a Marker Clusterer (for grouping geographically co-located points) and you get another unique perspective on the data.
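As a rough illustration (assuming Mapbox.js and the Leaflet.markercluster plugin are loaded; the map id and the pre-extracted [lat, lng] pairs are placeholders), the clustering boils down to something like:

```coffeescript
# A rough sketch: plot Profile Geo coordinates and group nearby points into clusters.
# Assumes Mapbox.js and Leaflet.markercluster are loaded; the map id is a placeholder.
renderGeoDistribution = (coordinates) ->
  map = L.mapbox.map 'map', 'examples.map-i86nkdio'   # container id, placeholder map id
  clusters = new L.MarkerClusterGroup()

  # One marker per geo-tagged activity; the cluster group handles the grouping.
  clusters.addLayer L.marker([lat, lng]) for [lat, lng] in coordinates

  map.addLayer clusters
```

Clustering keeps the map legible when thousands of points land in the same metro area, and zooming in breaks the clusters apart.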

Tweets that look like tweets

Visualizations are nice, but content is king™. We convert Activity Streams JSON into a tweet that conforms to Twitter’s Developer Display Guidelines using a yet-to-be-announced project. That means that entities like usernames and hashtags are linked; Retweet, Reply and Favorite work through Twitter Web Intents; and tweets support RTL display, among other things.
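That project isn't public yet, so purely as an illustration, here is roughly what entity linking and Web Intents look like if you reach for Twitter's open-source twitter-text library instead; the activity field names are assumptions:

```coffeescript
# Illustration only -- a stand-in using Twitter's twitter-text library rather than
# the unannounced project mentioned above. The `body` and `tweetId` fields are
# assumed to have been pulled out of the Activity Streams JSON already.
renderTweet = (activity) ->
  html: twttr.txt.autoLink(activity.body)   # links @mentions, #hashtags, and URLs
  intents:
    reply:    "https://twitter.com/intent/tweet?in_reply_to=#{activity.tweetId}"
    retweet:  "https://twitter.com/intent/retweet?tweet_id=#{activity.tweetId}"
    favorite: "https://twitter.com/intent/favorite?tweet_id=#{activity.tweetId}"
```

Web Intents are plain URLs that hand the interaction off to Twitter, so no OAuth plumbing is needed in the search app itself.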

When it comes to search, the query results have to stand out and behave the way a user would expect them to.

Decoupled Web Components

One can break the typical search app into fairly obvious pieces: the search form, the results, data retrieval, and so on. These pieces can operate independently of one another through the use of JavaScript events. This Pub/Sub model is perfect for decoupling parts of web apps.

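The original post embeds a Gist here (7989857). In its place, here is a minimal sketch of the idea using jQuery custom events; the event names and classes are placeholders rather than the project's actual API.

```coffeescript
# Minimal Pub/Sub sketch with jQuery custom events. Event names and classes are
# placeholders -- see activityList.coffee for the real implementation.
class SearchForm
  constructor: (@el) ->
    @el.on 'submit', (e) =>
      e.preventDefault()
      # Publish: broadcast the query without knowing who will handle it.
      $(document).trigger 'search:requested', query: @el.find('input').val()

class ActivityList
  # Injecting the container (e.g. a jQuery-wrapped DocumentFragment) is what
  # makes this component easy to test in isolation.
  constructor: (@container) ->
    # Subscribe: render whenever anyone announces that results have arrived.
    $(document).on 'search:completed', (e, data) => @render data.results

  render: (results) ->
    @container.empty()
    @container.append $('<li>').text(result.body) for result in results
```

A separate data-retrieval component can subscribe to search:requested, call the Rails backend, and trigger search:completed with the parsed activities, so neither UI piece needs to know the other exists.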

Not only that, but if you also inject a DocumentFragment or some other container to render into, you can unit/functional test this part of your webapp independently! Check out the less trivial implementation in activityList.coffee.

Fast, Elegant Transitions

I think everyone would agree that they want a fast search experience. That doesn’t mean it can’t be pretty; we just have to avoid animations that are known to be slow. The HTML5Rocks article on High Performance Animations explains what is performant, and why, much better than I could. TL;DR: use CSS transitions, not JavaScript, and only animate opacity and transforms (rotate, translate, scale).

(Inline demo in the original post: hovering over a "Hover over me" element reveals "WABAM!" via a CSS transition.)

With that in mind, I took inspiration from Hakim El Hattab’s stroll.js when coming up with a simple but slick way to load search results.

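The original post embeds a Gist here (7990148). As a stand-in, here is the general shape of the trick; the class name and 30ms stagger are made up, and the real animation lives in CSS, where .entering starts each item translated down and transparent and a transition on transform/opacity brings it to rest:

```coffeescript
# Sketch of a staggered, CSS-driven entrance for search results. The `entering`
# class and the 30ms stagger are placeholders; the CSS transition (on transform
# and opacity only) does the actual animating.
appendWithTransition = (container, items) ->    # items: array of jQuery-wrapped tweets
  for $item, i in items
    $item.addClass 'entering'                   # start state: offset and transparent
    container.append $item
    $item[0].offsetHeight                       # force a style recalc so the transition fires
    do ($item, i) ->
      setTimeout (-> $item.removeClass 'entering'), i * 30   # reveal one after another
```

Because only transform and opacity change, the browser can composite the animation off the main thread, so the page stays responsive while results pour in.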

About Search API on Rails

I originally wrote an application intended to show customers what they could build on top of Gnip’s Search API. It was such a hit that we had many requests from customers to license the code for building their own applications — so we decided to open-source it to facilitate integrations with the Search API.

We chose Rails due to its popularity and, hell, we already know Rails. We chose CoffeeScript because its classes made it easier to reason about web components with an inheritance chain, and because I like its syntax. Sass is our CSS preprocessor of choice here at Gnip.

This project isn’t just some proof-of-concept; it was written deliberately and with care.

We hope you’ll find this background helpful when building your application on top of the Search API with Twitter Search on Rails! If you find an improvement, we welcome your contribution through GitHub. Enjoy!

Application Deployment at Gnip

Managing your application code on more than 500 servers is a non-trivial task. One of the tenets we’ve held onto closely as an engineering team at Gnip is “owning your code all the way to the metal”. In order to promote this sense of ownership, we try to keep a clean and simple deployment process.

To illustrate our application deployment process, let’s assume that we’re checking in a new feature to our favorite Gnip application, the flingerator. We will also assume that we have a fully provisioned server that is already running an older version of our code (I’ll save provisioning / bootstrapping servers for another blog post). The process is as follows:

1. Commit: git commit -am "er/ch: checking in my super awesome new feature"
2. Build: One of our numerous cruisecontrol.rb servers picks up the changeset from git and uses Maven to build an RPM.
3. Promote: After the build completes, run cap flingerator:promote -S environment=review.
4. Deploy: After the promote completes, run cap flingerator:roll -S environment=review.

Let me break down what is happening at each step of the process:

Commit
Every developer commits or merges their feature branch into master. Every piece of code that lands on master is code reviewed by the developer who wrote it and at least one other developer. The commit message includes the initials of the developer who wrote the feature as well as the person who code reviewed it. After the commit is made, the master branch is pushed up to GitHub.

Build
After the commit lands on master, our build server (cruisecontrol.rb) uses Maven to run automated tests, build jars, and create RPM(s). After the RPM is created, cruisecontrol.rb copies said RPM into the “build” directory on our yum repo server. Although the build now lives on the yum repo server, it is not ready for deployment just yet.

Promote
After cruisecontrol.rb has successfully transferred the RPM to the yum server’s “build” directory, the developer can promote the new code into a particular environment by running the following Capistrano command: cap flingerator:promote -S environment=review. This command uses Capistrano to ssh to the yum repo server and create a symlink from the “build” directory to the review (or staging or prod) environment directory. This makes said RPM available to install, via yum, on any server in that environment.

Deploy
Now that the RPM has been promoted, it is available via the Gnip yum repo, and it is up to the developer to run another Capistrano command to deploy the code: cap flingerator:roll -S environment=review. This command SSHes to each flingerator server and runs “yum update flingerator”, which installs the new code onto the filesystem. Once the yum update completes successfully, the application process is restarted and the new code is running.

This process relies on proven technologies to create a stable and repeatable deployment, which is extremely important in order to provide an enterprise-grade customer experience.