Friday, 1 June 2012

SavonInstrument

Started working on a Rails app that needs to communicate with a SOAP-based web service in the backend. We are using Savon for talking to the web service. Every service call took a long time to process, and we wanted to measure how much time is spent in the service calls. Inspired by rails_instrument, a middleware that shows instrumentation information in HTTP headers, I thought of writing a similar one for the SOAP calls that we make. SavonInstrument was born, and this is how it looks for the sample application.
The idea is simple: intercept the calls made by the Savon client (using Savon hooks), measure the time taken, and add it to the HTTP headers (in the middleware).

The challenge in the whole process was to figure out how to intercept the calls made by the Savon client. There was no documentation around using hooks, and in fact no real hooks were provided by Savon at the time. The source code gave a slight hint that a Savon::Hook defined for :soap_request would be called while making a request.

Update: Savon provides around hooks now, so the hook for :soap_request simply captures the time before and after the request.
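
A rough sketch of how the two pieces fit together: an around hook that times each SOAP call, and a Rack middleware that adds the total to the response headers. The hook registration below follows the Savon 1.x hooks interface as I remember it, and the header name and thread-local key are only illustrative.

# Thread-local store for the timings collected during a single web request.
module SoapTimings
  def self.all
    Thread.current[:soap_timings] ||= []
  end

  def self.reset!
    Thread.current[:soap_timings] = []
  end
end

# Around hook (Savon 1.x style interface; may differ between versions):
# time the actual SOAP request.
Savon.config.hooks.define(:measure_soap_calls, :soap_request) do |callable, request|
  started  = Time.now
  response = callable.call
  SoapTimings.all << (Time.now - started)
  response
end

# Rack middleware: expose the total SOAP time for this request as a header.
class SavonInstrument
  def initialize(app)
    @app = app
  end

  def call(env)
    SoapTimings.reset!
    status, headers, body = @app.call(env)
    headers['X-Soap-Runtime'] = format('%.4f', SoapTimings.all.inject(0.0, :+))
    [status, headers, body]
  end
end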


Saturday, 14 April 2012

The curious case of actionmailer

Came across a very weird issue while working with actionmailer last week. Debugging it gave a good understanding of how the mail gem (which actually constructs the message, its body and its headers) works. The problem statement was to send multiple emails on some particular event. I went ahead and implemented this problematic piece of code:
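
The gist of it: a single mailer method that loops over the recipients and calls mail for each of them (the class, method and field names below are made up for illustration).

# A sketch of the problematic pattern: one mailer instance, mail() called
# n times (n > 1) inside the same mailer method.
class MyMailer < ActionMailer::Base
  default :from => 'noreply@example.com'

  def notifications(users)
    users.each do |user|
      @user = user
      mail(:to => user.email, :subject => 'Something happened').deliver
    end
  end
end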


If you are still wondering what the problem with this code is, go ahead and try to send a mail to yourself using the above mailer for any value of n > 1. The basic rails code for the problem statement can be found here. You will notice that the first mail is sent successfully, but the subsequent emails do not contain the full text: the body stops abruptly while constructing an HTML link. Weird. The logs told me that the headers of all the mails were the same, yet they were all delivered differently.

Started digging into the actionmailer gem for some clue. Came across this post that explained how we are able to call the instance methods of a mailer as if they were class methods. So I took a sneak peek at the initializer of ActionMailer::Base:
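
Paraphrased from the Rails 3.x source (trimmed, so not verbatim), the relevant part looks roughly like this:

# ActionMailer::Base (paraphrased): the Mail::Message is created once, when
# the mailer instance is built, before the mailer method is processed.
def initialize(method_name = nil, *args)
  super()
  @_message = Mail.new
  process(method_name, *args) if method_name
end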


and the most important mail method.
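
Again paraphrased and heavily trimmed:

# ActionMailer::Base#mail (paraphrased): it does not build a new message, it
# keeps writing into the @_message that was created in the initializer.
def mail(headers = {}, &block)
  m = @_message

  # ...merge the class-level defaults into the given headers...
  # ...assign the headers onto m...
  # ...render the templates and set the result as the body of m...

  m
end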


It is evident from the above methods that a Mail::Message object is initialized once (when the mailer is initialized) and is then used by the mail method. In the normal scenario, a mailer object is constructed, which in turn constructs the message object; the message is processed and returned by the mail method and eventually gets delivered. The next time you call the same mailer method, another mailer object is constructed, and as a result another message object is constructed. But in my_mailer the mailer is initialized once, the mail method is called more than once, and hence the message object is reused.

Reusing the same message to send multiple mails made some sense, but it still didn't explain why the emails were delivered partially. I suspected the content of my email, removed the second paragraph, and to my surprise all the emails were delivered completely. So reusing the message object was not the only problem; the content I had was also part of it.

Tried again with the full content, and the response constructed by actionmailer had all the content. It was only when the content was set as the body of the message that part of it disappeared.

Started digging into the mail gem. The mail gem is the one that forms the various components of a message: it constructs the body and sets the headers appropriate for sending the message across, along with the defaults given by the user. One such header field is 'content_transfer_encoding', which determines the encoding method used to send the email. The Mail::Body class determines its default encoding whenever it is initialized with content.
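
This is easy to check from a console (method names as in the mail gem of that era; the exact output may vary between versions):

require 'mail'

# The body picks a default transfer encoding from its content on initialization.
Mail::Body.new("plain ascii text").encoding         # => "7bit" (expected)
Mail::Body.new("text with a café in it").encoding   # => "8bit" (expected)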


This encoding method is used when the body has to be encoded to a different encoding at the time of sending.


The interesting piece of that method is the normalization part: the raw_source is decoded from its current encoding and then re-encoded into the desired/best-suited format.
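
The same idea can be demonstrated with the mail gem's encoding helpers rather than the exact Body source (the class and method names here are assumed from the gem of that era):

require 'mail'

# Normalization: decode the raw source from its current transfer encoding,
# then re-encode it with the desired/best one (here, 8bit to quoted-printable).
raw     = "A line with a café in it\r\n"
decoder = Mail::Encodings.get_encoding('8bit')
encoder = Mail::Encodings.get_encoding('quoted-printable')

puts encoder.encode(decoder.decode(raw))   # with a UTF-8 source, é comes out as =C3=A9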

The encoding method of a body can also be set externally, from the Mail::Message, if the message header has the content_transfer_encoding field set.
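
Inside Mail::Message that step looks roughly like this (paraphrased; the method names are from the mail gem of that era and may not be verbatim):

# Mail::Message (paraphrased): if the header already carries a
# content_transfer_encoding, the body is forced to that encoding.
def add_encoding_to_body
  if has_content_transfer_encoding?
    body.encoding = content_transfer_encoding
  end
end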



Just before the message is delivered, it identifies and sets the content_transfer_encoding field in the header.
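
Again paraphrased rather than verbatim, something along these lines:

# Mail::Message (paraphrased): run while preparing the message for delivery;
# negotiates the best encoding the body can be sent with (e.g. quoted-printable
# for 8bit content) and records it in the header.
def identify_and_set_transfer_encoding
  self.content_transfer_encoding = body.negotiate_best_encoding(transport_encoding).to_s if body
end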


In my case, when the first message was constructed, the default encoding identified by the body was the '8-bit' encoding method. But header[:content_transfer_encoding] was then set to 'Quoted-Printable', the best method identified for sending the message across. Since the message object is reused when the mail method is called multiple times, the same header is used for the subsequent messages as well, which sets 'Q-P' as the encoding method for the bodies of those messages.

Quick summary:

While sending the first message,
  1. The message body is initialized with the content rendered from view
  2. The encoding method is identified as '8-bit'
  3. The message header doesn't have content_transfer_encoding set
  4. The body is decoded from '8-bit' and encoded to 'Q-P' (the best encoding method)
  5. The message header is set to have 'Q-P' as content_transfer_encoding
  6. Message delivered
While sending the second message,
  1. The message body is initialized with the content rendered from view
  2. The encoding method is identified as '8-bit'
  3. The message header has 'Q-P' as content_transfer_encoding and hence sets the encoding of the body to 'Q-P'
  4. The body (which is really 8-bit content) is decoded as if it were 'Q-P'; this is where part of the message is lost (presumably the '=' characters in the HTML link get treated as Quoted-Printable escape sequences), and what remains is encoded to 'Q-P' again
  5. The message header remains 'Q-P'
  6. Message delivered

Moral of the story: Do not try to send more than one mail from a mailer method. Keep the loops outside of the mailer.
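
A sketch of the fix, reusing the made-up names from the earlier snippet: call the mailer method once per recipient, so every delivery gets a fresh mailer and a fresh Mail::Message.

class MyMailer < ActionMailer::Base
  default :from => 'noreply@example.com'

  # one mail per invocation; no loop inside the mailer
  def notification(user)
    @user = user
    mail(:to => user.email, :subject => 'Something happened')
  end
end

# the loop lives in the caller; each call builds a brand new message
users.each { |user| MyMailer.notification(user).deliver }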

Saturday, 17 March 2012

TDD - All the time

Test Driven Development (TDD) is one of the best practices that Extreme Programming (XP) suggests. As defined by Wikipedia, TDD is a software development process in which a developer writes a failing test case that defines a new improvement or new functionality, then produces the code to pass that test, and finally refactors it to acceptable standards.

Having said that, the series of questions I hear from people who are introduced to TDD is: "Is it really important to write tests in the first place? I know the functionality. Why can't I just sit down and start coding? What do I gain by writing tests first? The whole idea of TDD is completely counter-intuitive. Of course I understand the importance of tests. But I can add them later." Very valid questions. There are two simple reasons why I think TDD helps in development.

First thing - TDD is a very intuitive process. Whenever we write a piece of code, every one of us knows what the output of the code will be for a given input. We keep simulating that in our brains. Nobody starts coding without a problem statement in mind. Everybody knows what is expected out of the code at every point in time, and we validate it with sample input every time. This, exactly, is the idea behind TDD. Set the expectation for the code you write. Implement it. Validate it. Isn't TDD an intuitive process?

Second - I believe in the philosophy of 'Self Documenting Code': code that explains itself without the need for extra documents. Test cases are a very useful add-on to this philosophy. In addition to validating the code, they also capture the intent behind writing a particular piece of code, which comes in handy later in the life-cycle of a project, either while debugging or while revisiting a functionality.

Friday, 16 March 2012

Missing data during Tire indexing

I am working on a RoR project which uses Postgres for data storage and ElasticSearch (ES) for the search functionality. We were using Tire to talk to the ES server. Ran into an issue recently wherein the search results didn't return a particular document. For the purpose of this post, let's consider a model like the one defined below.

class Document < ActiveRecord::Base
  # t.integer id
  # etc., etc.
end

The particular record that I was looking for was available in the DB, but not in the ES index. So the first and foremost thing that I did was to rebuild the index. Tire provides a rake task to import the indices. So I can do

rake tire:import CLASS=Document FORCE=true

to rebuild my indices. Did that, and Tire reported that it had indexed all the records from the database into ES. But the document that I was looking for was still not returned in the search results.

Suspected ES to be the culprit and:
1) Checked the number of documents indexed and found that the number of records Tire reported to have indexed and the actual number of documents in the index did not match.
2) Grepped the ES data folder for the unique identifier of the missing document, and it was nowhere to be found.
Reading some more of the Tire documentation showed that every record in the database can be indexed individually. Enter rails console.

Document.find('The missing document').update_index

I was expecting this to fail and throw some exception that would give me a clue in this debugging process. But to my surprise it was successful, and the document was returned in the search results. Digging through the Tire documentation for some more time tells you that you can also do

Document.index.import Document.all

to index all the documents from the DB into ES. Again, I was expecting this to give me the same result as the rake task, because ideally they are doing the same thing. I was in for a surprise again: the document still came up in the search results. This is when I started suspecting the rake task and looked up the Tire source code to see what it actually does. Ignoring the code for the progress bar and other things, only three steps are of interest:

index = Tire::Index.new(klass.tire.index.name)
index.delete
index.create :mappings => klass.tire.mapping_to_hash, :settings => klass.tire.settings
index.import(klass, 'paginate', {}) do |documents|
  documents
end

So I tried doing the same thing in the rails console, just replacing 'klass' with 'Document', and the document went missing again. But this time it gave some visibility into what is happening internally. When we ask Tire to rebuild the indices, it doesn't get all the documents in one go and build the index; it fetches the documents page by page (with a max of 1000 documents per page) by calling the 'paginate' method, and the paginated results had duplicate records across pages. Without an explicit order, Postgres does not guarantee a stable row order across queries, so pages can overlap, and the records squeezed out by the duplicates never get indexed at all.
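
This is easy to see from the console with kaminari's page/per (the same calls our paginate method wraps); whether the overlap actually shows up is of course data dependent.

# Without an explicit order, consecutive "pages" are not guaranteed to be disjoint.
page1 = Document.page(1).per(1000).map(&:id)
page2 = Document.page(2).per(1000).map(&:id)

page1 & page2              # can be non-empty: the same record on both pages
(page1 + page2).uniq.size  # can be smaller than the number of rows two full pages should cover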

We were using 'kaminari' in our project to take care of the pagination, and hence we had to add this method to the models that need indexing. It looked like:

module Paginatable
  module KlassMethods
    def paginate(options)
      page(options[:page]).per(options[:per_page])
    end
  end

  def self.included(klass)
    klass.extend KlassMethods
  end
end

I just had to change it to include a default ordering, to make sure that Tire gets all the records (without any duplicates) for indexing.

def paginate(options)
  order("#{table_name}.id").page(options[:page]).per(options[:per_page])
end

Thursday, 29 December 2011

Datatype support for MongoModel

MongoModel is a ruby gem that does the data mapping for MongoDB. It supports the most commonly used data types. Recently our team had a requirement to handle OpenStruct values (persist them to MongoDB and retrieve them back). Since the MongoModel documentation doesn't provide enough information on how to extend it to support a new datatype, we gave it a shot ourselves, and to our surprise it was much easier and more straightforward than we thought.

All that MongoModel expects is a class (a converter) that can serialize and deserialize the datatype to be persisted in MongoDB. So we'll start by defining the class with 'from_mongo' and 'to_mongo' methods in it.

require 'ostruct'

module MongoModel
  module Types
    class MongoOpenStruct
      def from_mongo(value)
        OpenStruct.new(value)
      end

      def to_mongo(value)
        value.marshal_dump
      end
    end
  end
end

Alright. Now we need to tell MongoModel to use this class if it comes across an OpenStruct property in any MongoModel document. MongoModel uses an internal hash map to identify the converter for a given datatype. So all that we need to do is to add an entry (just after the converter class definition) for OpenStruct in that mapping.

Types::CONVERTER.merge! ::OpenStruct => Types::MongoOpenStruct.new

That's it! We can now start using OpenStruct properties in any MongoModel document.

class Article < MongoModel::Document
  property :some_property, OpenStruct
end
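
A quick round trip to check the converter (this assumes a running MongoDB and the usual MongoModel finder API):

article = Article.new(:some_property => OpenStruct.new(:title => 'Hello', :rating => 5))
article.save

reloaded = Article.find(article.id)
reloaded.some_property.title    # => "Hello" (from_mongo rebuilt the OpenStruct)
reloaded.some_property.rating   # => 5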

Friday, 4 November 2011

Integration testing Search - Clash of the indices


ElasticSearch is an open source (Apache 2), distributed, RESTful search engine built on top of Apache Lucene. It promises to solve all the pain points of implementing search in a web application. Tire is one of the popular ruby clients for ElasticSearch. While it is important to have integration tests for the models when using Tire in a ruby app, it is equally important to make sure that the tests don't corrupt the development indices.

One approach to overcome this is to create a dummy class in the test that includes the Tire::Model::Search and Tire::Model::Callbacks modules and use it to create indices and test search in the integration tests. While this seems simple, there is a cleaner approach that lets us use the actual models in the tests and does not corrupt the development indices either. Tire has a configuration option that lets you provide a prefix for the index name. By specifying
Tire::Model::Search.index_prefix Rails.env
in the initializers/tire.rb file, we can tell Tire to use different indices for different environments. For example, search in development mode will use the "development_model_name" index and search in the test environment will use the "test_model_name" index.
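
Spelled out, the whole configuration is just one line in the initializer (the index names below are illustrative):

# initializers/tire.rb
Tire::Model::Search.index_prefix Rails.env

# With this in place, a model whose index is "documents" ends up using
#   "development_documents" in development
#   "test_documents"        in test
# so the integration tests never touch the development index.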

Monday, 27 June 2011

Monkey see, Monkey do - Why can't we?

Empathy - the ability to understand and share the feelings of another person - is one quality that we have been taught to have since childhood. I'm sure every one of us has felt the emotions that the presence or absence of empathy brings. I was under the impression that empathy - putting oneself in the other person's shoes, being on the other side of the table, or whatever you call it - is a skill that every human being should try to develop, till I started reading "Just Listen" by Mark Goulston. It was interesting to learn that it is not a skill that a person develops; instead it is wired into the brain. In fact, it is something that we inherited from our ancestors during evolution.

Scientists who studied specific nerve cells of macaque monkeys have shown that these monkeys' brain cells reacted in the same way when the monkey performed an action and when it watched another monkey perform the same action. V. S. Ramachandran explained in his TED talk that if we could remove the skin cells that sense touch and send signals to the brain, the reactions in the brain would be the same whether we were touched or we watched someone else being touched. To the brain, it is all the same.

Every human being has this ability to empathize with another person, and we expect the world to reflect our feelings as well (at least a little). Or, in Mark's words:
Each time we mirror the world, it creates a little reciprocal hunger to be mirrored back.
But most of the time we fail to do just that, and it can create in the other person what the book calls a "mirror neuron receptor deficit". When people experience it, they feel left alone and depressed. Most of the techniques for getting through to people explained in this book are just about making people feel felt. When we mirror people's feelings and emotions, it makes them feel less alone, which brings them some relief, which in turn makes them feel more relaxed. As a result, they'll be more open to listening to us and willing to work with us. I believe this simple science can be very useful and handy to everyone who needs to collaborate with a bunch of people day in and day out.