Simple, Quick SSH Tunneling to expose your web app

I’ve been a localtunnel user for quite some time now and I really love that it’s a free service, quick to install, and an easy way to expose your development app to the world. There are quite a few worthy alternatives now, such as Pagekite, that do pretty much the same thing.

But at times it gets annoying (especially in the middle of other work) when I’m unable to access localtunnel because it’s down, or I’ve outrun the free usage limit on Pagekite.

I generally end up using localtunnel when I have to wait for an IPN from PayPal (or a relay response) while working in my development environment. So here is a quick way to roll out a basic version of the service for your own needs.

Before we move to the “how to”, here is a quick intro to SSH tunneling.

Now, I assume that you have a staging server (or some server you have ssh access to).

The following terminal command does pretty much the same thing as the Ruby code that follows:

ssh -R 0.0.0.0:9999:localhost:3000 user@remote_host

The -R flag indicates that you’ll be forwarding your local app running on port 3000 to port 9999 on the remote host, binding to so that everyone can access it. Now your application running on localhost:3000 is accessible at remote_host:9999.

We do the same thing now using the Ruby net-ssh library. The following code snippet is customised to my defaults, but it’s simple enough to change those settings.

#! /usr/bin/env ruby
require 'rubygems'
require 'net/ssh'

# Terminal equivalent:
#   ssh -R 0.0.0.0:9999:localhost:3000 user@remote_host

puts "Enter the remote server name you wish to forward your local port to:"
remote_host = gets.chomp
puts "Enter the remote port you wish your local app to be available on - on the remote server (ex: 9999)"
remote_port = gets.chomp
puts "Enter remote server user to ssh with"
remote_user = gets.chomp
puts "Enter local port to forward"
local_port = gets.chomp

remote_port = remote_port.empty? ? 9999 : remote_port.to_i
remote_user = "sid" if remote_user.empty?
local_port  = local_port.empty? ? 3000 : local_port.to_i

Net::SSH.start(remote_host, remote_user) do |ssh|
  puts "Connecting..."
  ssh.logger.sev_threshold = Logger::Severity::DEBUG
  puts "Forwarding localhost:#{local_port} to #{remote_host}:#{remote_port}"
  # Remote forward: bind remote_port on all interfaces of the remote host
  # and relay incoming connections back to our local app.
  # (The server's sshd needs GatewayPorts enabled to bind
  ssh.forward.remote(local_port, '', remote_port, '')
  ssh.loop { true }
end

Hope this helps

Async Responses in Thin and a Simple Long Polling Server

I’ve been playing with Thin, the Ruby web server which packages the awesomeness of Rack, Mongrel and EventMachine (\m/). However, the thing that blew me away completely was James Tucker’s async Rack support that is integrated into Thin. The combination opens up a whole new world of realtime responses, something I’ve been constantly switching to NodeJS and Socket.IO for.

Async Rack

A simple Rack application would look like this:


class MyRackApp
  def call(env)
    [200, {'Content-Type' => 'text/html'}, ["Hello World"]]
  end
end

# thin start -R -D -p3000

This simple Rack app, on a request to port 3000, responds with a status code (200), the content type and a body. Nothing special. With async Rack we’d be able to send the response head back while we build the response. Once the response is built, we send the body and close the connection.

A quick glance at Patrick’s blog (apologies for not being able to find the author’s last name) should give you an excellent understanding of what I’m trying to explain.

A simple async app


AsyncResponse = [-1, {}, []].freeze

class MyAsyncApp
  def call(env)
    # a timer stands in for the work that builds the response later
    EM.add_timer(2) do
      response = [200, {"Content-Type" => "text/html"}, ["<em> This is Async </em>"]]
      env['async.callback'].call(response)
    end
    AsyncResponse
  end
end

The AsyncResponse constant returns a -1 status code, which tells the server that the response will arrive later, asynchronously. It is what the examples provided with Thin call an “Async Template”.

On a request, the code initially returns AsyncResponse while the server waits on the deferred work. Once the timer fires, it builds the response and sends it, keeping the connection alive.

An async app where we build the response and close the connection

From here on, it would really help to have the Thin code open in your text editor. We will be looking at request.rb, response.rb and connection.rb from the /thin folder.

require 'rubygems'
require 'eventmachine'

class DeferrableBody
	include EventMachine::Deferrable

	def each(&blk)
		puts "Blocks: #{blk.inspect}"
		@callback_blk = blk
	end

	def append(data)
		puts " -- appending data --"
		data.each do |data_chunk|
			puts " -- triggering callback --"
		end
	end
end

class AsyncLife
	AsyncResponse = [-1, {}, []].freeze

	def call(env)
		body =

		EM.next_tick {
			puts "Next tick before async"
			puts "#{env['async.callback']}"
			env['async.callback'].call([200, {'Content-Type' => 'text/plain'}, body])
		}

		puts "-- activated 5 second timer --"
		EM.add_timer(5) {
			puts "-- 5 second timer done --"
			body.append(['Mary had a little lamb'])

			puts "-- activated 2 second timer --"
			EM.add_timer(2) {
				puts "-- 7 second timer done --"
				body.append(['This is the house that jack built'])
				body.succeed
				puts "-- succeed called -- "
			}
		}

		AsyncResponse
	end
end


This example uses EventMachine’s Deferrable, which allows us to signal when we’re done building our response. But the tricky part I struggled to get my head around was the strange-looking each method and how it uses the append method to build our responses.

Walking through the code

When the request comes in from the browser, the application returns the AsyncResponse constant, which tells the server that the response body is going to be built over time. On a new connection, EventMachine’s post_init method (check out thin/connection.rb) creates a @request and a @response object. This triggers a post_process method which, on the first call, returns without setting the header, status or body, and prepares for the asynchronous response.

On next_tick we begin to create our header. We initialize an EM::Deferrable object which is assigned as the response body; this ensures the header is sent ahead of the body (because we don’t have anything to iterate over in the each block where the response is sent). The env['async.callback'] is a closure over the method post_process, created by method(:post_process); check out the pre_process method in thin/connection.rb.

The each method in our Deferrable class overrides the default each implementation the response object would use. Our each block is saved in an instance variable, @callback_blk, which is called whenever we call the append method. So essentially we are calling send_data on each of the data chunks we send back through append.

Once that’s done, we call succeed, which tells EventMachine to trigger the callback denoting we’re done building the body. It ensures the request/response connection is closed.

The default each implementation

 # Send the response
 @response.each do |chunk|
   trace { chunk }
   send_data chunk
 end

That’s pretty much what I gathered by going through the code on async responses with Thin and Rack. Another useful module is the thin_async bit written by the creator of Thin, @macournoyer, available here. This pretty much abstracts away the trouble of overriding the each block.

Here’s an example of a simple long polling server I built using Thin, available here.

Hope this is helpful

Tagging with Redis

It’s been a long time since my last post, and this one is about Redis. I’ve been working on a project for a bit now, and I had a story to implement where I had to associate posts with other posts identified as similar, based on a tagging system that already existed. The trouble was that the existing tagging system was closely tied to a lot of other functionality and couldn’t be easily (quickly) rewritten. The feature needed to be rolled out quickly for several reasons which I won’t get into.

The Problem

The tags associated with each post were poorly organized: one record in the Tag model would hold all the tags associated with the post as a whitespace-separated string (ouch!), e.g. # business money finance investment

So finding posts that had 2 tags in common, from a database of approximately 2000 posts with at least 4 tags each, took a significant amount of time. Here is a basic benchmark of just finding the matches on each request:

Benchmark.measure do
  similar_posts = []
  Post.tagged_posts.each do |post|
    similar_tags = ( &
    similar_posts << post if similar_tags.size >= 2
  end
end

Here is what Benchmark returns:

 => #<Benchmark::Tms:0x111fd8cb8 @cstime=0.0, @total=2.25,
@cutime=0.0, @label="", @stime=0.19, @real=4.79089307785034, @utime=2.06>

Not Great.

So the next option was to pre-populate the related posts for every post and store them in the database as similar_posts. Then all that was required was to fetch the post with its ‘similar_posts’ at runtime. This seemed like an acceptable solution considering the tags were not changed on a regular basis, but any change would require the similar_posts table to be rebuilt (which took a long time). Here is the benchmark for fetching the pre-populated data from the database:

Benchmark.measure { p.similar_posts }
=> #<Benchmark::Tms:0x104d153f8 @cstime=0.0, @total=0.0100000000000007,
@cutime=0.0, @label="", @stime=0.0, @real=0.0333230495452881, @utime=0.0100000000000007>

Nice! But this came at the cost of having to rebuild the similar_posts every time something changed with the tags or posts.


Redis is this awesome in-memory key-value, data-structure store which is really fast. Though it would be wrong to think of it as a silver bullet, it does a lot of things really well. One is the ability to store data structures and perform operations on them; there’s also Pub/Sub and a lot of other cool stuff. It even allows you to persist the information, either by snapshotting, which takes periodic snapshots of the dataset, or by an “append only file”, where it logs all the operations that take place and replays them to recreate the dataset in case of failure.

In my case I didn’t need to persist the data, just maintain a snapshot of it in memory. So, assuming some basic understanding of Redis and the redis gem, I’ll describe the approach.

We created a SET in Redis for each tag name as key, so that every tag contains the set of post_ids of all posts that have that tag. In order to identify posts having two tags in common, all that was needed was the intersection of the tag sets, and Redis provides built-in methods for these operations. That’s it!

{"communication" => ['12','219', .. , '1027']}  #sample SET

Fetching the similar posts

def find_similar_posts
  similar_posts = []
  if self.tags.present? && self.tags.first.present?
    tags =
    tags_clone = tags.clone
    tags.each do |tag|
      tags_clone.each do |tag_clone|
        similar_posts << REDIS_API.sinter(tag.to_s, tag_clone.to_s)
      end
    end
    puts "No matching posts."
  similar_posts -= []
end


 >> Benchmark.measure { p.find_similar_posts }
=> #<Benchmark::Tms:0x1100636f8 @cstime=0.0, @total=0.0100000000000007, @cutime=0.0, @label="",
@stime=0.0, @real=0.163993120193481, @utime=0.0100000000000007>

Which is pretty good, considering that we do all the computation within the request and nothing is pre-populated.

RabbitMQ and Rails. Why my chat application failed.

Over the last few weeks I’ve been playing with RabbitMQ and Rails, and have been completely blown away by how awesome it is.

I started out to build a simple chat application which would allow users to create a room, log into it and post messages to multiple users who were logged in. This is probably not the best use case for AMQP, but it helped me understand how AMQP works, and maybe I’ll get to use it somewhere more apt.


What is RabbitMQ
RabbitMQ is an open source, highly scalable messaging system which provides support for STOMP, SMTP and HTTP messaging.

It implements the Advanced Message Queuing Protocol (AMQP), which has client libraries in Ruby and several other programming languages.


The easiest way to install RabbitMQ on the Mac is with Homebrew. The MacPorts installation has many issues, and I would recommend going with Homebrew; it also takes care of all dependencies, like the correct Erlang installation.

You can test your installation by moving into the installation’s ‘bin’ directory and running rabbitmq-server.

If you end up seeing something like

 starting error log relay                                              ...done
 starting networking                                                   ...done
 starting direct_client                                                ...done
 starting notify cluster nodes 
 broker running

you’re good to go.

Creating our AMQP server

In Ruby we can run our AMQP server using the amqp gem along with EventMachine.

The AMQP server code.

require 'rubygems'
require 'amqp'
require 'mongo'
require 'em-websocket'
require 'json'


@sockets = [] do
  connection = AMQP.connect(:host => '')
  channel    =
  puts "Connected to AMQP broker. #{AMQP::VERSION}"
  mongo = MongoManager.establish_connection("trackertalk_development")

  EventMachine::WebSocket.start(:host => '', :port => 8080) do |ws|
    socket_detail = {:socket => ws}

    ws.onopen do
      @sockets << socket_detail
    end

    ws.onmessage do |message|
      status = JSON.parse(message)
      if status[:status] == 'STATUS'
        # queue/exchange setup for the room (elided in the original snippet)
      elsif status[:status] == 'MESSAGE'
        full_message = " #{status[:username]} :  #{status[:message]}"
        # publish full_message to the room's exchange (elided)
      end
    end

    ws.onclose do
      @sockets.delete ws
    end
  end
end

The code snippet above contains most of the AMQP server code. The entire thing lives inside the loop. Ignore the Mongo bit, as it doesn’t play a part in our area of interest.

@sockets = [] do
  connection = AMQP.connect(:host => '')
  channel    =

In this snippet I create a new AMQP connection to localhost. When not provided with any other parameters, the defaults used by AMQP are port 5672 with the username ‘guest’ and password ‘guest’. Since the RabbitMQ installation will never (in most cases) have to be accessed from outside the server, it’s safe to run the installation with the defaults.

The second line creates a new channel.

Once the channel is established, the EM::WebSocket block allows modern browsers to connect to our server on port 8080.

EventMachine::WebSocket.start(:host => '', :port => 8080) do |ws|
  socket_detail = {:socket => ws}

  ws.onopen do
    @sockets << socket_detail
  end

  ws.onmessage do |message|
    status = JSON.parse(message)
    if status[:status] == 'STATUS'
      # queue/exchange setup for the room (elided in the original snippet)
    elsif status[:status] == 'MESSAGE'
      full_message = " #{status[:username]} :  #{status[:message]}"
    end
  end

  ws.onclose do
    @sockets.delete ws
  end
end

This is where the core logic lies. In the onopen block I store the current socket in an array to use later.

The ‘onmessage’ block is called when a new message arrives from the browser. Since I needed to dynamically create new exchanges and queues, I used a very amateurish system. The message sent from the browser arrives with the following parameters in JSON format.

The following scheme is not a part of AMQP, but a feature borrowed from XMPP to get my application working:

{status:'status', username:'someusername', roomname:'someroomname', message:'somemessage'}

status can take one of the values [‘status’, ‘message’, ‘presence’].
username contains the ‘current_user’ name.
roomname indicates the ‘room’ the user has logged into.

‘status’ messages indicate that some kind of chore needs to be performed, like subscribing to queues, binding exchanges, etc.

‘message’ messages indicate that the payload needs to be pushed to an exchange. When a message is published to an exchange, the subscribing queues push it to the browser over WebSockets.

‘presence’ messages indicate the user’s presence.
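Putting the three kinds together, the dispatch the server performs on each incoming frame can be sketched like this (the room name below is made up; the field names follow the format above):

```ruby
require 'json'

# Route an incoming frame by its "status" field.
def route(frame)
  msg = JSON.parse(frame)
  case msg["status"]
  when "status"   then "chore: set up queues/exchanges for #{msg['roomname']}"
  when "message"  then "publish: #{msg['username']}: #{msg['message']}"
  when "presence" then "presence update from #{msg['username']}"
  end
end

result = route('{"status":"message","username":"sid","roomname":"lobby","message":"hi"}')
```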

Why the entire application was a failure

The most important aspect of using AMQP is identifying the type of exchanges needed. In this case, my decision to model rooms as exchanges and give each user an individual queue proved to be the reason for failure.

The reason this assumption fails: with each user having a unique queue bound to multiple exchanges, a simple fanout exchange pushes messages from every room the queue is bound to into the user’s logged-in chat room, which is not what we want. This way users end up receiving messages that are not relevant to them.

An ideal approach would have multiple users subscribe to a room (modelled as the queue in this case) via a topic exchange or a direct exchange.

I also believe it would be more logical to dynamically create queues and exchanges by using an EM periodic timer to check a NoSQL database or key-value store for new requests, rather than doing it over WebSockets.

The AMQP server code is available here

EventMachine: Deferring Time consuming tasks to a separate thread

A small update on my last post, which focused on building a sample Twitter feed application using EventMachine and the twitter gem. If you’ve already gone through the documentation and references, especially the introduction to EventMachine PDF, you’ll have noticed that the reactor is single threaded and the EM methods aren’t thread safe. It also describes a simple example of using EM.defer to run time-consuming tasks on a separate thread, as a background process, using a thread from the thread pool.

There isn’t a significant benefit in running our Twitter feed application’s ‘update fetch’ operation as a background process, but for purely illustrative purposes let’s make our server perform updates on a separate thread. EM.defer provides a callback option that lets us fetch the value returned by the background thread and return it to the user in the main thread.

Update your event_machine_server.rb file with the following changes and run both the client and the server.

The code is also available here

require 'rubygems'
require 'eventmachine'
require 'socket'
require 'json'
require 'hashie'
require File.expand_path("twitter_lib.rb", __FILE__ + "/..")
require File.expand_path("tweet_Fetcher.rb", __FILE__ + "/..")

class Echo < EM::Connection

  attr_reader :data, :response, :status, :fetcher

  def post_init
    @status = :inactive
    ip, port = Socket.unpack_sockaddr_in(get_peername)
    puts "Connection Established from #{ip} #{port}"
  end

  def receive_data(data)
    puts "[LOGGING: RECEIVED] #{data}"
    @data = JSON.parse(data)
    puts "[LOGGING: PARSED DATA ] #{@data} #{@data.class.to_s}"
  end

  def unbind
    puts "Connection Lost " + self.inspect
  end

  def respond
  end

  def execute_request
    if @data["op"] == "fetch"
      puts "Please wait while we fetch the data ..."
      @status = :active
      response = @fetcher.fetch
    elsif @data["op"] == "update"
      puts "Fetching update . . ."
      response = @fetcher.fetch_update
    end
    #  send_data(response.to_json)
    response
  end

  def self.activate_event_machine(this = nil) do
      puts "Starting echo server . . . ."
      EM.start_server('', 6789, Echo)
      puts "STARTED"
    end
  end

  def self.activate_periodic_timer(this = nil)
    response = nil
    operations = do
      response = this.execute_request
    end
    callback = { this.send_data(response.to_json) }
    EM.add_periodic_timer(2) do
      EM.defer(operations, callback)
    end
  end

  def update_operation
    @data["op"] = "update"
  end

  def initialize_fetcher
    @fetcher ={ :consumer_key => "xxxxxxxxxxxxxxxx",
                     :consumer_secret => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
                     :oauth_token => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
                     :oauth_token_secret => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" })
    # puts "[LOGGING FETCHER INITIALIZED] #{@fetcher.inspect}"
  end
end

=begin do
  puts "Starting echo server . . . ."
  EM.start_server('', 6789, Echo)
  puts "STARTED"
  EM.add_periodic_timer(5) do
    puts "Timer activated"
  end
end
=end

Event Machine: A sample application to learn about event machine

I have been playing around with EventMachine for a couple of days now, trying to work out a basic (quick and ugly) example to understand how it could be used for simple applications.

Though my understanding of the subject is still very superficial, here is a sample application that you could try to get yourself going. There are a lot of tutorials available for this, and I guess this would be a good starting point.

To get a quick idea of the working app try changing the value of the ‘timeline’ key in the tweet_fetcher file to :public.

Further bulletins as events warrant.

Chaining methods using ‘tap’ method

The Ruby tap method is one of those methods that makes you wonder about the thought that has gone into designing the language. It is available in Ruby 1.8.7 and above.

What does tap do
tap allows you to create a chain of methods, working on the intermediate results while returning the receiving object at the end.

Ruby Documentation

For example

[1,2,3,4].tap{|o| o << 100 } # [1,2,3,4,100]

I find this comes in really handy, especially if you’re working with named_scopes. The technique may not be the best, but I find it to be a better option than using eval directly on user input. So here is the situation.

We have the user enter certain information about the kind of users he wishes to see on the screen, e.g. Customers, Administrators, Managers.

He could also specify whether the user is active or inactive (indicating whether he has full access to his account or not).

So based on his input we generate a query string for method chaining, assuming that I have a named_scope for each situation in my user model:

result_set = ""   # built up into something like "customers.activated"

result_set.tap{|o| (o.blank?) ? o << "experts" : o << ".experts" if params[:type] == "Expert"}.
        tap{|o| (o.blank?) ? o << "customers" : o << ".customers" if params[:type] == "Customer"}.
        tap{|o| (o.blank?) ? o << "adminstrators" : o << ".adminstrators" if params[:type] == "Adminstrator"}.
        tap{|o| (o.blank?) ? o << "activated" : o << ".activated" if params[:activated] == 'true'}.
        tap{|o| (o.blank?) ? o << "deactivated" : o << ".deactivated" if params[:activated] == 'false'}

   User.instance_eval { eval(result_set) }

How cool is that. 🙂

Understanding the &: Ruby syntax

The Symbol#to_proc method was something I hesitated to use for a long time, because I found it unclear, something that went against Ruby’s clear and easy to understand syntax. Though this subject is old news, it’s something I’ve found goes against Ruby’s clarity-over-cleverness standards.

The Proc Object

A Proc object is a block of code that can be handled as an object. A proc, when defined, binds to the set of (local) variables in scope and can be used at a later point in the context of those variables.

The official definition

So let’s consider a simple example:

def test(&block)
  puts block.class.to_s # "Proc"  # executes the block of code
end

test { puts "Bazingaaa" } # Bazingaaa

So the result would be:

Proc
Bazingaaa

It’s just a block of code that you can run whenever you want.

Now, Ruby symbols have a to_proc method which lets you convert a symbol to a proc.

:someSymbol.to_proc # => a Proc object

Now if we look at in the Ruby library docs, we see that it executes a block of code on each element of the array. { |item| block }  -> an_array

 p [1,2,3,4,5].map { |x| x + 1 } # [2, 3, 4, 5, 6]

If we create a simple block and pass it to the map method, it executes the code in the block for each element.

So the &: is actually & applied to a symbol, not a special &: operator. Since the to_proc conversion is implicit, the code

p [1,2,3,4,5].map(&:to_i) 

is actually

p [1,2,3,4,5].map(&( { |x| x.to_i }))

The to_proc method as found in Rails 2.3.5:

  def to_proc { |*args| args.shift.__send__(self, *args) }
  end

Hope this helps someone.

Do while loops in Ruby

I’ve been programming in Ruby for almost a year now, and I had no clue this existed. I came across it while looking at how best to emulate a do-while loop in Ruby. Apparently there is a do-while structure in Ruby, based on my learning from Jeremy Voorhis’ blog post.

Here is how it works:

     i = 10
     begin
       puts i
       i -= 1
     end while(i > 1)
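The begin/end form checks its condition after the body, so the body always runs at least once, which a plain while loop does not guarantee:

```ruby
# begin/end while evaluates the condition *after* the body.
post_runs = 0
begin
  post_runs += 1
end while false      # body still ran once

# A plain while evaluates the condition *before* the body.
pre_runs = 0
while false
  pre_runs += 1
end                  # body never ran
```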

This has been another “holy crap how come I did not know this moment”.

Ejabberd and Xmpp on EC2. Connection Timeout

I have been working with ejabberd and Orbited for our Rails application for over 5 months, and have installed and managed our servers (most of which run on Amazon EC2) for a while now, but my last installation left me stumped. My installation would go through perfectly and I could access the ejabberd admin console on port 5280, but I was unable to connect to the server from the console.

I would create my client, and the connect call would wait forever before timing out:

Errno::ETIMEDOUT: Connection timed out - connect(2)
  from /usr/lib/ruby/gems/1.8/gems/xmpp4r-0.5/lib/xmpp4r/client.rb:70:in
  from (irb):4

I tried reinstalling and looked everywhere I could possibly have goofed up.

The solution to the problem was on EC2, not in the installation: it’s actually an EC2 security groups issue. You need to ensure that, for the instance you’re working with, the security group has port 5222 open, and port 5269 as well if you’re working with server-to-server connections.

The way to get this done using ElasticFox is:

  • Click on the instance you want the port to be open on.
  • Click on the Security Groups tab.
  • Click on Group Permissions -> Grant Permissions (the green button with a tick mark), which opens a popup box.
  • Add the port (in the port field): 5222.
  • For hosts, allow all.

Repeat the process for 5269 if you’re working with server-to-server, and you’re done.

Hope this is useful to someone.