Fun with Redis

August 15, 2010

I’ve been using Redis in projects on and off for some time, and there are some little hacks I’ve been doing but never extracted from bigger projects. Yesterday I had to sit at home for a while doing a job that involved some idle waiting time, so it was time to hack.

First, I nailed a small RestMQ using Sinatra and Redis. Here’s the gist for the first version, which already works nicely along with RestMQ. You can use it to expose a small part of your broker to the outside world. Later, talking with a colleague at work, I changed it a bit and ended up having the whole queue list and hard/soft GET (which deletes the message or just reads it, respectively). Another gist.

Then it was time to extract the Message Queue and Load Balancing patterns from code into my branch of the Redis Cookbook. Apart from the basic algorithm for RestMQ, there is a pattern I sometimes use for load balancing and replica spreading. It uses sorted sets with scores and, although it seems naive, works pretty well along with consistent hashing.

After that I fixed some issues on RestMQ and txredisapi; the former were related to configuration and the latter to publish/subscribe.

About Pub/Sub: I ended up extracting a small PubSub server using WebSockets, Redis and Node.js. It was initially embedded in another proxy I tried for RestMQ, but it works well alone. Check the code. A little bit of code twisting and it could turn into a very flexible actor-based library for Node.js. And of course, the PubSub thingy can also use Redis as a presence server.

/me deserves pizza

More on memcached

February 17, 2009

Sharing memcached data between different applications is useful and easy, be it as a glorified IPC, a robust distributed cache, rate-limit control or any other suggested architectural approach.

There are some caveats, though:

  1. The captain-obvious one: make sure the way you store your data is readable across the different languages involved. For example, storing a pickled object from Python is trivial, but reading it back from Java or Ruby is not; persisting language-specific objects, as Rails is prone to do, may render the data almost unreadable elsewhere. Try to use simple serialization formats if possible (like YAML, JSON or XML).
  2. The other captain-obvious one: saving and invalidating data must be done by the application responsible for its integrity, for simplicity’s and safety’s sake. Remember cache 101: a cache is not a database. It’s not searchable, and its data must reflect a coherent source of data.
  3. The not-so-obvious one: if you use more than one memcached server, make sure both clients understand the hashing algorithm used to select the right server for the key you are asking for. When using the same language and client this is transparent, but there are different known ways to select the right server, such as:
  • MD5 hash of the key
  • CRC32-based hash
  • native hash (as String.hashCode() in Java)
  • pure magic hash (some clients implement non-standard memcached hashing)
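To illustrate the first caveat, here is a hypothetical round trip using a language-neutral format, in plain Ruby with the stdlib json library (the memcached set/get calls are implied; the key and payload are made up):

```ruby
require 'json'

# Instead of marshalling a native object, store a language-neutral payload.
user = { "name" => "john", "visits" => 42 }
payload = user.to_json          # this string is what you'd hand to memcached's set

# Any other language's client can now decode it after a get.
restored = JSON.parse(payload)
puts restored["visits"]         # => 42
```

A Java or Python reader only needs a JSON parser to get the same hash of values back, whereas a pickled or marshalled object would tie the cache entry to one runtime.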

The case in point is a Ruby application using the memcache-client gem and a Java application using Whalin’s client. If you use more than one server, the Ruby client uses its own algorithm, which is CRC32-based. The Java client defaults to a native-hash algorithm, but contains 3 more. Keys would never get correct hits this way.

Let’s see how it works:

Code for hashing in Ruby (straight from the memcache-client gem):

# Note: crc32_ITU_T is a method memcache-client patches into the String class
def hash_for(key)
  (key.crc32_ITU_T >> 16) & 0x7fff
end

Code for the right hashing algorithm in Java:

private static long newCompatHashingAlg( String key ) {
        CRC32 checksum = new CRC32();
        checksum.update( key.getBytes() );
        long crc = checksum.getValue();
        return (crc >> 16) & 0x7fff;
}

The algorithm is selected by this piece of code in Whalin’s memcached client library:

switch ( hashingAlg ) {
    case NATIVE_HASH:
        return (long)key.hashCode();
    case OLD_COMPAT_HASH:
        return origCompatHashingAlg( key );
    case NEW_COMPAT_HASH:
        return newCompatHashingAlg( key );
    case CONSISTENT_HASH:
        return md5HashingAlg( key );
    default:
        // use the native hash as a default
        hashingAlg = NATIVE_HASH;
        return (long)key.hashCode();
}

So, before using the Java client, we need to call setHashingAlg( SockIOPool.NEW_COMPAT_HASH ) on the right SockIOPool object.

That’s it.

Now, for a change …

Really unnecessary section!

We can test the CRC32 based algorithm like this:

Start irb and type:

irb --> require "rubygems"
    ==> true
irb --> require "memcache"
    ==> true
irb --> a = "mykey"
    ==> "mykey"
irb --> (a.crc32_ITU_T() >> 16) & 0x7fff
    ==> 17510

From this, we see that 17510 is the resulting hash for “mykey” key.

The memcache client was required just to attach the crc32_ITU_T() method to the String class, but if you don’t want to install it, just paste the following code (which is part of memcache-client) instead:

class String

  # Uses the ITU-T polynomial in the CRC32 algorithm.
  def crc32_ITU_T
    n = length
    r = 0xFFFFFFFF

    n.times do |i|
      r ^= self[i]   # on Ruby 1.8, String#[] returns the byte value
      8.times do
        if (r & 1) != 0 then
          r = (r >> 1) ^ 0xEDB88320
        else
          r >>= 1
        end
      end
    end

    r ^ 0xFFFFFFFF
  end
end
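As a sanity check: crc32_ITU_T is just the standard CRC-32 (the ITU-T V.42 polynomial, 0xEDB88320 reflected), which Ruby’s stdlib already implements in Zlib, so you can cross-check the hash without patching String at all (17510 is the value for “mykey” computed above):

```ruby
require 'zlib'

# Zlib.crc32 uses the same polynomial, initial value and final XOR
# as memcache-client's crc32_ITU_T patch.
hash = (Zlib.crc32("mykey") >> 16) & 0x7fff
puts hash   # => 17510, matching the patched String#crc32_ITU_T
```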


Let’s test it on the Java end:

import java.util.zip.CRC32;

public class TestCRC {

        public static void main(String[] args) {
                CRC32 checksum = new CRC32();
                checksum.update( "mykey".getBytes() );
                long crc = checksum.getValue();
                System.out.println( (crc >> 16) & 0x7fff );
        }
}

Compile and run as:

$ javac
$ java -cp . TestCRC

Again, 17510, as in Ruby. That’s the right value for “mykey”.

Both cases yielded 17510, which is then taken modulo the number of machines in the pool (e.g. 2); the result of that operation is the index of the right server, both in Java and Ruby. Weee.
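In other words, once the hash matches on both sides, server selection is just a modulo over the pool (hypothetical server names; 17510 is the hash for “mykey” computed above):

```ruby
# Hypothetical pool of two memcached servers.
servers = ["cache1:11211", "cache2:11211"]

hash  = 17510               # CRC32-based hash for "mykey", as computed above
index = hash % servers.size # 17510 % 2 == 0
puts servers[index]         # => cache1:11211
```

Any client, in any language, that computes the same hash and orders the pool the same way will land on the same server for “mykey”.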

I’ve been trying to finish this post since the middle of December, but after about 4 almost-complete rewrites I’ve decided to put it online. I still mean to make it better, because I didn’t want to sound cocky or give the wrong impression that it is about evaluating the best text classification algorithm out there.

Here it goes: Practical text classification with Ruby

Thanks to Renato for reviewing it beforehand.

I wrote this guide as a result of one of the cross-compiling oddities I’ve been through in the last month. Let me know if you have any suggestions about the build process I’ve been using.

icalendar gem

November 19, 2007

iCalendar (iCal) is a standard for calendar data interchange. There’s a gem called icalendar which helps to parse and generate such files, so you may use data from your Google or Exchange calendar to feed your app (or make it generate data to feed your calendar, e.g. a link to Digg or Facebook in each post of your blog to set up a TODO item).

To parse a .ics file (an iCal invite or TODO item) it’s just a matter of looping through the elements in a given calendar. An .ics file may hold more than one calendar, and each calendar may contain events and TODO items.

#!/usr/bin/env ruby
require 'rubygems'
require 'icalendar'

if ARGV.size < 1
  puts "Usage: ical_parse.rb <calendar.ics>"
  exit 1
end

cal_file =[0])

cals = Icalendar.parse(cal_file)
if cals.size == 0
  puts "Empty calendar"
  exit
end

cals.each do |c|

  puts "\nEvents\n\n"

  if == 0
    puts "Empty event list"
  else do |e|
      puts "---------------------------------------"
      puts "Seq: " + e.sequence.to_s
      puts "UID: " + e.uid.to_s
      puts "DTSTART: " + e.dtstart.to_s
      puts "summary: " + e.summary
      puts "location: " + e.location
      puts "description: " + e.description

      unless e.attendees.nil?
        puts "attendee: "
        e.attendees.each { |a| puts "\t" + a.to_s }
      end

      puts "---------------------------------------"
    end
  end

  puts "\nTODO\n\n"

  t = c.todos
  if t.size == 0
    puts "Empty TODO list"
  else
    puts "---------------------------------------"
    t.each do |oi|
      puts "Seq: " + oi.sequence.to_s
      puts "UID: " + oi.uid.to_s
      puts oi.dtstart.to_s
      puts "summary " + oi.summary
    end
    puts "---------------------------------------"
  end
end

When scraping a website for info, the most time-consuming part is locating what you need and how it’s enclosed. Most of the time, automatically generated HTML can be pretty convoluted due to templating systems. Hand-made HTML tends to be cleaner, but it’s not so common these days.

Firebug is an extension for Firefox which, among other things, can help you find the URL or XPath for certain elements, discover action names, find out how forms are handled, and so on.

Having the full XPath or the right URL for a form in a few clicks is a great productivity improvement. To show how to do it, I will download the contacts stored in my GMail account.

First and foremost, we need to know how to export contacts manually. It’s a matter of logging in, clicking the Contacts link below your folders, clicking the Export button, selecting the proper options (All contacts, Outlook CSV format) and clicking another Export button.

More than XPath harvesting, what we may need here is an automation tool that can navigate to the right URL. Better yet, with the export action URL in hand we may not need to simulate ‘clicking’ the way most automation libraries do.

Apart from Firefox loaded with Firebug, we will use Ruby and WWW::Mechanize. WWW::Mechanize uses Hpricot to handle XPath and has nice features like a cookie jar to handle all cookies, redirection following and form handling.

The first step is logging in using GMail’s form. It’s a simple HTML form, the first one on the page. Let’s find out the names of the input fields. Start Firefox, point it to the GMail login page and activate Firebug by clicking the icon in the lower right corner.

login page

Use the Inspect feature to see the HTML code for a given element; Inspect can return its full XPath or DOM name. Take some time to explore the login screen and note that the fields’ names are Email and Passwd, and that they are case-sensitive. To log in using WWW::Mechanize, the code would look like:

agent = { |obj| obj.log ='gmail.log') }
page = agent.get('')

form = page.forms.first
form.Email = 'username'
form.Passwd = 'passwd'

page = agent.submit(form)

After logging in, mechanize will take care of any redirection and cookies. We may proceed requesting for any other element.

Our goal is exporting the contact list, and clicking our way to it is not the smartest idea. We need the exact URL to get it. Let’s find it:

contact management

export contacts

Enable Firebug, select the ‘Net’ tab and click Export.

Contact export screen

Check Firebug’s console for the list of net requests. There we will find the exact URL we need:

network requests

Mouse over the items to see the URL values. In GMail it will be the one labelled export, but go on to see the other background requests it makes.
contact list download

The contact list export URL is the one labelled export in that request list.

After logging in, it’s a matter of just requesting this URL and saving the file:

page = agent.get('') # paste the export URL captured with Firebug


And that’s it.

Check the Firebug documentation and scripts to learn other ways it can save you heavy work. See the full script below.

——- gmail-scrap.rb ————

#!/usr/bin/env ruby

require 'rubygems'
require 'mechanize'
require 'logger'

agent = { |obj| obj.log ='gmail.log') }

page = agent.get('')

form = page.forms.first
form.Email = 'username'
form.Passwd = 'passwd'

page = agent.submit(form)

page = agent.get('') # paste the export URL captured with Firebug


——- gmail-scrap.rb ————

Ruby and document indexing

October 30, 2007

I did some Ferret testing and the results were pretty good. Check it out.