Deferred object as callback

Posted October 10th, 2012 by guidone with 6 Comments

Callbacks are a nice way of achieving inversion of control, but they are even more powerful with deferred objects.
Basically, callbacks are a way for a called function to pass control back to its caller; the deferred/promise does just the opposite: it returns control to the called function.

Consider this scenario: a table widget, with the classic delete/modify buttons next to each row. Since the table widget is a generic component, we want to handle the delete operation on the model outside the widget, and we don’t even want to extend the generic object to do that: we want to use a callback:

$.widget('my_table_widget', {
   onDelete: function(id_of_row) {
      var that = $(this);
      // call the backend and delete the record
      $.ajax({
         type: 'DELETE',
         url: '/api/records/' + id_of_row
      })
      .done(function() {
         // ok, record deleted, now update the table
         that.widget('remove_row', id_of_row);
      });
   }
});

In the example above, the callback first calls the backend to remove the record and then removes the row from the table (that is, from the DOM). In order to do this, it needs to know something about the table widget (in this case, the method “remove_row”).
But what if we could pass the ball back to the widget? Let’s rewrite it with a deferred object:

$.widget('my_table_widget', {
   onDelete: function(id_of_row) {
      var deferred = $.Deferred();
      if (confirm('Delete?')) {
         // call the backend and delete the record
         $.ajax({
            type: 'DELETE',
            url: '/api/records/' + id_of_row
         })
         .done(function() {
            // ok, record deleted, now the widget can update the table
            deferred.resolve();
         })
         .fail(function() {
            deferred.reject();
         });
      }
      else {
         deferred.reject();
      }
      return deferred.promise();
   }
});

It’s more linear but, most importantly, the callback knows nothing about the remove_that_damn_row_from_the_dom method of the widget; it just passes control back, as if to say “I’m done, it’s your turn”.
More separation, less documentation to read, easier to implement, fewer errors.

On the widget side, the callback should be handled this way:

// somewhere inside the table widget, here is where we execute the callback
var callback_result = options.onDelete.call(widget, id);
// if the callback answers with a deferred/promise, this will
// handle it when it's resolved/rejected
if (isPromise(callback_result)) {
   callback_result
      .done(function() {
         // ok, the async operation on the other side is over,
         // remove the row from the DOM
         widget.widget('remove_row', id);
      })
      .fail(function() {
         // do nothing
      });
}
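The snippet above relies on an isPromise helper, which is not part of jQuery. A minimal sketch, assuming duck typing on the then method (which both jQuery promises and standard promises expose):

```javascript
// returns true if the value looks like a promise/deferred,
// i.e. it exposes a then() method
function isPromise(value) {
   return value != null && typeof value.then === 'function';
}
```

This also keeps plain callbacks working: if the callback returns nothing (or any non-promise value), the widget simply skips the done/fail wiring.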

Lesson learned with package.json

Posted October 2nd, 2012 by guidone with No Comments

I’ve been using NodeJS for a project of mine and I used to set up my package.json this way:

...
   "some_external_lib": ">= 1.0.0",
   "another_external_lib": ">= 1.4.0"
...

I thought: the newer a library is, the better.
And I was wrong. This way there is no control over what goes to the production/staging servers: if a new version of some_external_lib becomes available, say 1.5.0-i-am-very-alpha, that unstable code will be deployed to production.

Since the local environment is no longer aligned with production/staging (even if we run npm install locally), any bug introduced by unstable code will be very hard to spot locally (you will likely end up debugging on the production server).

Lesson learned: eradicate any “>=” from your package.json.
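For instance, pinning an exact version (or at most a patch-level range with a tilde) keeps deployments reproducible; the library names below are the placeholders from above:

```json
"dependencies": {
   "some_external_lib": "1.0.0",
   "another_external_lib": "~1.4.0"
}
```

With npm semantics, "1.0.0" installs exactly that version, while "~1.4.0" allows 1.4.x patch updates but will never pull in 1.5.0.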

Adieu for-loops

Posted October 1st, 2012 by guidone with No Comments

Using many callbacks without embracing a functional style of coding can lead to mistakes that are very difficult to spot.
Consider this example: we have an array of user objects and for each of them we need to fetch the first and last name using the Facebook Graph API.

Using a procedural style:

for (var idx = 0; idx < users.length; idx++) {
   $.api.FB('/' + users[idx].FacebookId)
      .done(function(fb) {
         users[idx].first_name = fb.first_name;
         users[idx].last_name = fb.last_name;
      });
}

This is just pseudo code; suppose the method $.api.FB() fetches information from the Graph API.
At the end of the loop, each record should have the fields first_name and last_name filled in with values from Facebook.
In theory, at least.

The problem with this piece of code is that it launches several asynchronous operations and you cannot tell which one will finish first; most likely the entire loop will finish even before the first asynchronous operation completes, so by the time the callbacks run, idx already equals users.length and every callback writes to the same non-existent element of the array.
In a cloud environment, where every request could hit a different physical server, there is no guarantee that the first call will finish before the next one.
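The shared-variable problem can be seen without any network call at all: every function created inside a for-loop closes over the same idx variable, and by the time the functions actually run, the loop has already finished.

```javascript
// collect one function per iteration instead of calling it immediately
var getters = [];
for (var idx = 0; idx < 3; idx++) {
   getters.push(function () { return idx; });
}

// now run them: every getter sees the final value of idx
var values = getters.map(function (get) { return get(); });
// values is [3, 3, 3], not [0, 1, 2]
```

The asynchronous callbacks in the loop above behave exactly like these deferred getters.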

To avoid this error, just turn the code into a functional style:

$(users).each(function(idx) {
   $.api.FB('/' + users[idx].FacebookId)
      .done(function(fb) {
         users[idx].first_name = fb.first_name;
         users[idx].last_name = fb.last_name;
      });
});

Here it works because the idx variable is local to the anonymous function inside each and, thanks to closures, it will live until the functions defined inside that scope (the callback) have completed.
Every time we create an anonymous function we get a brand new context in which we can mess around without worrying about what happens before and after: fewer side effects and fewer errors. Adieu, for-loops.
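One thing the loop still cannot tell you is when all the requests are done. With jQuery you would aggregate the individual deferreds with $.when.apply($, …); the same pattern with standard Promises, using a hypothetical fetchProfile in place of $.api.FB, looks like this:

```javascript
// hypothetical stand-in for $.api.FB: resolves with a fake profile
function fetchProfile(facebookId) {
   return Promise.resolve({
      first_name: 'First' + facebookId,
      last_name: 'Last' + facebookId
   });
}

// fire one request per user and resolve only when all of them are done
function fillNames(users) {
   var requests = users.map(function (user) {
      return fetchProfile(user.FacebookId).then(function (fb) {
         user.first_name = fb.first_name;
         user.last_name = fb.last_name;
      });
   });
   return Promise.all(requests).then(function () { return users; });
}
```

Usage: fillNames(users).then(function (users) { /* here every record is guaranteed to be filled in */ });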
