Cross-Domain data channel with HTML5 Canvas

Standard Ajax is restricted by the same-origin policy, so JSONP is the de facto way to exchange data with cross-domain sources, and it works pretty well. An alternative, though somewhat hacky, approach is to use HTML5 Canvas as a cross-domain workaround, with generated images acting as a covert channel.

The basic idea is simple: JavaScript on the client requests an image file from a third-party site, where the server encodes the data into the image; the client can use cookies and URL parameters to identify itself as needed. The client then renders the image and decodes the data from the image pixels.

Backend

The backend needs to be able to construct images with custom pixel-level data. In this example we use Node.js and the canvas module, a server-side HTML5 Canvas implementation based on the Cairo graphics library.

This function accepts any object and returns a canvas object whose pixels encode the object's JSON representation.

function encodeDataToImage( data ) {

	// Serialize the data and convert it to a binary buffer
	var s = encodeURIComponent( JSON.stringify(data) );
	var buffer = new Buffer(s, 'utf8');
	var pixelc = Math.ceil(buffer.length / 3); // 3 data bytes per RGB pixel

	// Encode data as PNG image
	var Canvas = require('canvas');
	var canvas = new Canvas(pixelc, 1)
	var ctx = canvas.getContext('2d');
	var imgdata = ctx.getImageData(0, 0, pixelc, 1);

	for (var i=0, k=0; i < pixelc * 4; i += 4 ) {
		imgdata.data[i + 3] = 0xFF; // set alpha to full opaque
		for (var j=0; j < 3 && k < buffer.length; k++, j++ ) {
			imgdata.data[i + j] = buffer[k];
		}
	}
	// set "image" data
	ctx.putImageData(imgdata, 0, 0);
	return canvas;
}
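The packing scheme can be sketched independently of any canvas API as a pure function pair (hypothetical helper names, not part of the code above): three data bytes go into the R, G and B channels of each pixel, and a zero byte marks the end of the payload.

```javascript
// Pack data bytes into an RGBA byte array: three bytes per pixel,
// alpha forced to 0xFF so the color channels are not premultiplied away
function packBytes(bytes) {
    var pixelc = Math.ceil(bytes.length / 3);
    var rgba = new Array(pixelc * 4);
    for (var i = 0, k = 0; i < pixelc * 4; i += 4) {
        rgba[i + 3] = 0xFF; // opaque alpha
        for (var j = 0; j < 3; j++, k++) {
            rgba[i + j] = k < bytes.length ? bytes[k] : 0; // pad tail with zeros
        }
    }
    return rgba;
}

// Inverse: read R, G, B of each pixel until a zero byte terminates the payload
function unpackBytes(rgba) {
    var bytes = [];
    for (var i = 0; i < rgba.length; i += 4) {
        for (var j = 0; j < 3; j++) {
            var b = rgba[i + j];
            if (!b) return bytes;
            bytes.push(b);
        }
    }
    return bytes;
}
```

Round-tripping works because encodeURIComponent produces printable ASCII, so the payload itself never contains a zero byte.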

Define the /xd request handler that builds and sends the data-encoded image to the client (example uses Express.js).

someapp.get('/xd', function(req, res ) {
    // do here something with query or cookies, like resolve uid and set
    // data.
    // Example data
    var data = { a: 1, en: 'owl', fi: 'pöllö', es: 'búho', uid: req.query.uid }

    var canvas = encodeDataToImage( data );
    var img = canvas.toBuffer();
    res.contentType('png');
    res.header('Content-Length', img.length);
    res.send( img );
});

Browser
On the browser side, load the image and decode it back into an object.

function queryXD( query, callback ) {

	var img = new Image();

	img.addEventListener('load', function() {

		// Image loaded, create a temporary canvas
		var canvas = document.createElement('canvas');
		var ctx = canvas.getContext('2d');

		// draw the image on the canvas
		canvas.width = img.width;
		canvas.height = img.height;
		ctx.drawImage( img, 0, 0 );

		// collect bytes from the image pixels (the image is one pixel high)
		var bytes = [];
		var imgdata = ctx.getImageData(0, 0, img.width, img.height);
		for (var i=0; i < img.width * 4; i++ ) {
			if ( i && (i + 1) % 4 == 0) {
				i++; // skip the alpha byte of each pixel
			}
			var b = imgdata.data[i];
			if (!b) {
				break; // zero byte terminates the payload
			}
			bytes.push( b );
		}

		// convert bytes to a string and parse the JSON
		var s = decodeURIComponent( String.fromCharCode.apply(null, bytes) );
		var data = JSON.parse(s);

		callback(false, data);
	}, false);

	// image failed to load
	img.addEventListener('error', function(err) {
		callback(err);
	}, false);

	// set src only after the listeners are attached, so a cached image
	// cannot fire 'load' before we are listening
	img.src = 'http://some.site.example.com/xd?' + query;
}

And now it's simple to do cross-domain data exchange:

queryXD('uid=2134', function(err, data) {
   alert(data.en + ' is ' + data.fi + ' in Finnish and ' + data.es + ' in Spanish');
});

Caveats
Proxies and browsers like to cache images, so add a unique dummy parameter to every request to force a fetch. Also note that browsers taint a canvas once a cross-origin image is drawn on it, and getImageData then throws a security error; the image must be served with CORS headers (Access-Control-Allow-Origin) and loaded with img.crossOrigin set for the pixel read to be allowed.
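A minimal sketch of such a cache buster (hypothetical helper name); a per-process counter combined with the timestamp guarantees a unique URL on every call:

```javascript
// Build the /xd request URL with a unique '_' dummy parameter, so
// proxies and browsers treat each request as a new resource
var xdSeq = 0;
function xdUrl(base, query) {
    return base + '?' + query + '&_=' + Date.now() + '.' + (xdSeq++);
}
```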

Cached sequential unique identifiers with Node.js and MongoDB

Acquiring a sequential id with MongoDB is simple, as it supports the $inc operator for atomic sequence increments. However, a naive implementation hits the database every single time an id is needed, which can create latency and overhead issues. A typical case is user tracking, where the application needs a globally unique id for every user across a load-balanced array of Node.js instances.

Here is a more optimized method that works when several Node.js instances run simultaneously: fetch a range of unique numbers from the database, hand them out from memory, and fetch a new range when the current one runs out. This example assumes you use https://github.com/mongodb/node-mongodb-native .
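The range allocation can be simulated without a database to see why instances never hand out the same id; here a plain variable stands in for the `$inc`-updated document (all names in this sketch are made up for illustration, and the real code below is asynchronous):

```javascript
var STEP = 10; // range size fetched per "query"

// stands in for the {_id: seqname, index: ...} document;
// MongoDB's $inc makes the real increment atomic on the server
var dbIndex = 1000;

// each simulated instance holds a private range [index, high)
function makeInstance() {
    return { index: 1000, high: 1000 };
}

function nextIdSync(inst) {
    if (inst.index >= inst.high) {
        // range exhausted: emulate findAndModify with {$inc: {index: STEP}}
        // and {new: true}, which returns the post-increment value
        dbIndex += STEP;
        inst.high = dbIndex;         // object.index
        inst.index = dbIndex - STEP; // object.index - INDEX_STEP
    }
    return inst.index++;
}
```

Two simulated instances interleaving calls always draw ids from disjoint ranges, which is exactly why the real implementation resets index to the start of the fetched range.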

Initialize the id starting point.

On instance startup, the implementation initializes the id to a starting value (if it does not exist yet) and fetches the current status from the database. In this example the starting value is 1000.

function init_id( seqname, next ) {
    idcollection = new mongodb.Collection(client, 'ids'); // intentionally global, next_id uses it
    function _findId() {
        idcollection.findOne({_id: seqname}, function(err, doc) {
            if ( err ) { console.log( 'ERROR MONGO', 'ids', err ); return next(err); }
            if( doc ) {
               return next( false, { _id: seqname, waiters: [], high: doc.index, index: doc.index } )
            }
            idcollection.insert( {_id: seqname, index: 1000}, {safe: true}, function(err, doc) {
                if ( err ) { console.log( 'ERROR MONGO', 'ids', err ); return next(err); }
                return _findId();
            });
        });
    }
    _findId();
}

The callback ‘next’ is called with an object initialized to the current range from the database.

    init_id( 'myseq', function(err, idstatus ) {
        // we now have the id status
    });
...

Sequence generation function

Next we define the function that is called to fetch the next id. The tricky part: when the code needs to fetch the next batch of unique identifiers, it must queue other callers until the fetch completes, so we don't end up fetching more than one range increment at a time.

The high and index properties were set to the current value during initialization, so the first call to next_id will always trigger a fetch.

var INDEX_STEP = 10; // range to prefetch per query

function next_id( idstatus, next ) {

    if (idstatus.high > idstatus.index) {
        // id available from memory
        return next(false, idstatus.index++);
    }

    // need to fetch, put callback in wait list
    idstatus.waiters.push( next )

    if (idstatus.infetch) {
       // already fetch in progress
       return;
    }

    // initiate fetch
    _fetch( INDEX_STEP );

    function _fetch( step ) {
        // use findandmodify to increment index and return new value
        idstatus.infetch = true;
        idcollection.findAndModify( {_id: idstatus._id}, [['_id','asc']],
                                    {$inc: {index: step}},
                                    {new: true}, _after_fetch);
    }

    function _after_fetch(err, object) {

        function _notify_waiters( err ) {
            // give id to all waiters
            while ( idstatus.waiters.length ) {
                if ( err ) {
                    (idstatus.waiters.shift())( err )
                } else {
                     if (idstatus.high <= idstatus.index) {
                        // we got more waiters during fetch and
                        // exhausted this batch, get next batch
                        return _fetch( INDEX_STEP );
                     }
                    (idstatus.waiters.shift())( false, idstatus.index++ )
                }
            }
           idstatus.infetch = false;
        }

        if (err) return _notify_waiters( err )
        if (!object) return _notify_waiters('index not found')

        idstatus.high = object.index

        // the current index must be reset to the allocated range
        // start, because there could be several parallel nodes making
        // incremental queries to the db so each node does not get
        // sequential ranges.
        idstatus.index = object.index - INDEX_STEP

        _notify_waiters();
    }
}

The code gets the next id as an argument to the callback:

    next_id( idstatus, function(err, id) {

        // 'id' is next unique id to use!
    });

Notes:

  • Identifiers are increasing but not consecutive, as multiple node instances will at some point make requests at the same time.
  • Each startup increments the current value of the sequence in the database by INDEX_STEP, provided next_id is called at least once.
  • INDEX_STEP must be large enough that range fetches stay infrequent under load; optimally the fetch should implement some kind of retry with backoff.

HTML5 Canvas Layout and Mobile Devices

A common problem with Canvas on mobile devices is getting the canvas to fill the browser window properly. This can be tricky and requires lots of tweaking and testing with different devices to get it exactly right.

Even if you get the size defined correctly, rotation is another hurdle: the layout can break after an orientation change or two.

I wrote an example of a simple layout page that should work both on desktops and on mobile devices (Android > 2.2 and iPhone/iPad). It should render as the following layout in all browsers, shown here in iPhone screenshots, and not break on resize or orientation change.

[Screenshot: portrait layout]

The layout also works after rotation.

[Screenshot: landscape layout]

The page defines a canvas (green) that occupies most of the screen, and under it a fixed-height div (yellow) containing ‘Some Text Here’. On every resize the code draws a black rectangle inset 10 pixels from each canvas border and writes the number of orientation changes and resize events for debugging purposes. The document background is blue to reveal any unwanted overflow.
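The debug drawing can be sketched as a small function taking a 2D context (the function name and counter arguments are made up for illustration):

```javascript
// Draw the debug overlay: a black rectangle inset 10 pixels from each
// canvas border, plus the event counters as text
function drawDebug(ctx, width, height, orientations, resizes) {
    ctx.clearRect(0, 0, width, height);
    ctx.strokeStyle = 'black';
    ctx.strokeRect(10, 10, width - 20, height - 20);
    ctx.fillText('orientations: ' + orientations + ' resizes: ' + resizes, 20, 30);
}
```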

How does it work?

DOM/CSS

First, the meta elements tell mobile devices how to handle the page: no scaling, and width fixed to the device width.

<meta name="viewport" content="user-scalable=no, initial-scale=1.0, maximum-scale=1.0, width=device-width">
<meta name="apple-mobile-web-app-capable" content="yes">

The document is wrapped in a single div (“container”) that contains the canvas and the fixed-height div.

<body>
   <div id="container">
     <canvas id="canvas">HTML5 Canvas not supported.</canvas>
     <div id="fix">Some Text Here</div>
   </div>
...

The container is forced to fill the browser window by a CSS rule that sets overflow to auto and width/height to 100%.

body,html
{
    height: 100%;
    margin: 0;
    padding: 0;
    color: black;
}
#container
{
    width: 100%;
    height: 100%;
    overflow: auto;
}

The canvas element sits inside the container and has no initial height or width. It is set to display: block in CSS to avoid unwanted padding or margins; the canvas default display is inline, which is something you almost never want.

#container canvas {
    vertical-align: top;
    display: block;
    overflow: auto;
}

Finally, the div (“fix”) is given a fixed height:

#fix {
    background: yellow;
    height: 20px;
}

This is not enough, though; some JS handling is required for resize and orientation changes.

Javascript

The JS listens for both resize and orientation change events and installs a timeout function that gets cancelled if the browser fires several events in rapid succession.

var resizeTimeout;
$(window).resize(function() {
    clearTimeout(resizeTimeout);
    resizeTimeout = setTimeout(resizeCanvas, 100);
});

var otimeout;
window.onorientationchange = function() {
    clearTimeout(otimeout);
    otimeout = setTimeout(orientationChange, 50);
}

The orientation change listener does nothing important; it just updates a counter for debugging purposes.

resizeCanvas is more involved. On the iPhone it first makes the container 60 pixels taller than the browser window. This makes it possible to scroll the window down and hide the iPhone Safari address bar.

if (ios) {
    // increase height to get rid of the iOS address bar
    $("#container").height($(window).height() + 60)
    setTimeout(function() { window.scrollTo(0, 1);  }, 100);
}

Then it reads the container width and height, which are the width and height of the browser window.

var width = $("#container").width();
var height = $("#container").height();

Finally it forces the canvas to the required size. The height is reduced by 20 pixels to leave room for the fixed-height div.

var cheight = height - 20; // subtract the fix div height
var cwidth = width;

// set canvas width and height
$("#canvas").attr('width', cwidth);
$("#canvas").attr('height', cheight);

There may be a better way to do this, but this approach seems pretty robust and works in all major desktop and mobile browsers.

The code is available on GitHub.

Keeping CouchDB design docs up to date with Node.js

CouchDB views are typically defined as JavaScript snippets inside special documents called design documents. I noticed that keeping these design documents up to date during development is cumbersome and error prone, so I devised a simple way to keep them updated using Node.js and the Cradle CouchDB driver.

The idea is to define the views as variables in a runnable JS script and run it with Node each time it changes.

Here is the code; copy it to e.g. cdb-views.js.

var cradle = require('cradle');

cradle.setup({ host: 'localhost',
               port: 5984,
               options: { cache:true, raw: false }});

var cclient = new cradle.Connection();

function _createdb(dbname) {
    var db = cclient.database(dbname);
    db.exists(function(err, exists) {
        if (!exists) {
            db.create()
        }
    });
    return db;
}
var DB_SOMETHING = _createdb('somedb')

function cradle_error(err, res) {
    if (err) console.log(err)
}


function update_views( db, docpath, code ) {

    function save_doc() {
        db.save(docpath, code, function(err) {
            // view has changed, so initiate cleanup to get rid of old
            // indexes
            db.viewCleanup( cradle_error );
        });

        return true;
    }

    function compare_code( str1, str2 ) {
        var p1 = str1.split('\n');
        var p2 = str2.split('\n');

        for ( var i=0; i < p1.length || i < p2.length; i++ ) {
            var l1 = p1[i];
            var l2 = p2[i];
            l1 = l1 ? l1.trim() : '';
            l2 = l2 ? l2.trim() : '';
            if ( !l1 && !l2 ) continue;
            if ( l1 != l2 ) return true;
        }
        return false;
    }

    // compare function definitions in document and in code
    function compare_def(docdef, codedef) {
        var i = 0;

        if (!docdef && codedef) {
            console.log('creating "' + docpath +'"')
            return true;
        }
        if (!codedef && docdef) {
            console.log('removing "' + docpath +'"')
            return true;
        }
        if (!codedef && !docdef) {
            return false;
        }

        for (var u in docdef) {

            i++;
            if (codedef[u] == undefined) {
                console.log('definition of "' + u + '" removed - updating "' + docpath +'"')
                return true;
            }

            if (typeof(codedef[u]) == 'function') {
                if (!codedef[u] || compare_code( docdef[u], codedef[u].toString()) ) {
                    console.log('definition of "' + u + '" changed - updating "' + docpath +'"')
                    return true;
                }
            } else for (var f in docdef[u]) {
                i++;
                if (!codedef[u][f] || compare_code( docdef[u][f], codedef[u][f].toString()) ) {
                    console.log('definition of "' + u + '.' + f + '" changed - updating "' + docpath +'"')
                    return true;
                }

            }
        }
        // check that both doc and code have same number of functions
        for (var u in codedef) {
            i--;
            if (typeof(codedef[u]) != 'function') {
                for (var f in codedef[u]) {
                    i--;
                }
            }
        }
        if (i != 0) {
            console.log('new definitions - updating "' + docpath +'"')
            return true;
        }

        return false;
    }

    db.get(docpath, function(err, doc) {

        if (!doc) {
            console.log('not found - creating "' + docpath +'"')
            return save_doc();
        }

        if (compare_def(doc.updates, code.updates) || compare_def(doc.views, code.views)) {
            return save_doc();
        }
        console.log('"' + docpath +'" up to date')
    });
}

var EXAMPLE1_DDOC = {
    language: 'javascript',
    views: {
        active: {
            map: function (doc) {
                if (doc.lastsession) {
                    emit(parseInt(doc.lastsession / 1000), 1)
                }
            },
            reduce: function(keys, counts, rereduce) {
                return sum(counts)
            }
        },
        users: function(doc) { 
            if (doc.created) {
                emit(parseInt(doc.created / 1000), 1)
            }
        }
    }    
}

var EXAMPLE2_DDOC = {
    language: 'javascript',
    views: {
        myview: function(doc) {
            if (doc.param1 && doc.param2) {
                emit([doc.param1, doc.param2], null)
            }
        }
    }
}

update_views(DB_SOMETHING, '_design/example1', EXAMPLE1_DDOC);
update_views(DB_SOMETHING, '_design/example2', EXAMPLE2_DDOC);

The code is pretty simple.

  1. First it loads the Cradle CouchDB driver and creates the needed databases if they do not already exist. In this example only a single database, ‘somedb’, is created.
  2. update_views is responsible for keeping the design docs up to date. It loads the design doc from the given DB and compares it to the definition in this file. If it has changed (or is missing), it is recreated.
  3. The example design docs (EXAMPLE1_DDOC and EXAMPLE2_DDOC) are simple design doc definitions as JavaScript objects. If you're familiar with CouchDB, they are self-explanatory.
  4. Lastly the code calls update_views to update the design documents.

Now it's possible to maintain the views in this JavaScript file, and Node will make sure the syntax is always valid.

Example output:

Views are up to date.

$ node cdb-views.js
"_design/example1" up to date
"_design/example2" up to date

The definition of view example2/myview has changed:

$ node cdb-views.js
"_design/example1" up to date
definition of "myview" changed - updating "_design/example2"

Design doc example2 cannot be found and is created:

$ node cdb-views.js
"_design/example1" up to date
not found - creating "_design/example2"

Callbacks from Threaded Node.js C++ Extension

UPDATE: this guide is a bit outdated; newer Node versions (> 0.6) support an easier way to use the pooled worker threads, so an extension doesn't need to create its own. See the links in the comments.

Writing a threaded Node.js extension requires some care. All JavaScript in Node.js executes in a single main thread, so you cannot simply call the V8 engine from your background thread; that would cause a segfault. The recommended approach is to spawn the background thread and use libev events to wake up the main thread to execute the JavaScript callbacks.

The Node.js framework has plenty of ready-made machinery for implementing extensions, but there is no simple example of how to implement this kind of extension, so here it is.

Add-on Source

Save this source as texample.cc.

#include <queue>

// node headers
#include <v8.h>
#include <node.h>
#include <ev.h>
#include <pthread.h>
#include <unistd.h>
#include <string.h>

using namespace node;
using namespace v8;

// handles required for callback messages
static pthread_t texample_thread;
static ev_async eio_texample_notifier;
Persistent<String> callback_symbol;
Persistent<Object> module_handle;

// message queue
std::queue<int> cb_msg_queue = std::queue<int>();
pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;

// The background thread
static void* TheThread(void *)
{
    int i = 0;
    while(true) {
        // fire an event every 5 seconds
        sleep(5);
        pthread_mutex_lock(&queue_mutex);
        cb_msg_queue.push(i);
        pthread_mutex_unlock(&queue_mutex);
        i++;
        // wake up the main thread to run the callback
        ev_async_send(EV_DEFAULT_UC_ &eio_texample_notifier);
    }
    return NULL;
}

// callback that runs the javascript in main thread
static void Callback(EV_P_ ev_async *watcher, int revents)
{
    HandleScope scope;

    assert(watcher == &eio_texample_notifier);
    assert(revents == EV_ASYNC);

    // locate callback from the module context if defined by script
    // texample = require('texample')
    // texample.callback = function( ... ) { ..
    Local<Value> callback_v = module_handle->Get(callback_symbol);
    if (!callback_v->IsFunction()) {
         // callback not defined, ignore
         return;
    }
    Local<Function> callback = Local<Function>::Cast(callback_v);

    // dequeue all pending messages: ev_async wakeups may coalesce,
    // so several numbers can be queued per callback invocation
    while (true) {
        pthread_mutex_lock(&queue_mutex);
        if (cb_msg_queue.empty()) {
            pthread_mutex_unlock(&queue_mutex);
            break;
        }
        int number = cb_msg_queue.front();
        cb_msg_queue.pop();
        pthread_mutex_unlock(&queue_mutex);

        TryCatch try_catch;

        // prepare arguments for the callback
        Local<Value> argv[1];
        argv[0] = Local<Value>::New(Integer::New(number));

        // call the callback and handle possible exception
        callback->Call(module_handle, 1, argv);

        if (try_catch.HasCaught()) {
            FatalException(try_catch);
        }
    }
}

// Start the background thread
Handle<Value> Start(const Arguments &args)
{
    HandleScope scope;

    // start background thread and event handler for callback
    ev_async_init(&eio_texample_notifier, Callback);
    //ev_set_priority(&eio_texample_notifier, EV_MAXPRI);
    ev_async_start(EV_DEFAULT_UC_ &eio_texample_notifier);
    ev_unref(EV_DEFAULT_UC);
    pthread_create(&texample_thread, NULL, TheThread, 0);

    return True();
}

void Initialize(Handle<Object> target)
{
    HandleScope scope;

    NODE_SET_METHOD(target, "start", Start);

    callback_symbol = NODE_PSYMBOL("callback");
    // store handle for callback context
    module_handle = Persistent<Object>::New(target);
}

extern "C" {
static void Init(Handle<Object> target)
{
    Initialize(target);
}
}

NODE_MODULE(texample, Init);

Function walkthrough

  • The Init function is called when the native module is loaded with require('texample').
  • Initialize defines the module function start, callable from JavaScript. It also stores the module handle for locating and calling the script-defined callback in the right context.
  • Start initializes the libev event notifier and starts the background thread TheThread.
  • TheThread simply loops: it sleeps, pushes incrementing integers to the queue and wakes up the main thread each time.
  • Callback is woken up by libev; it locates and calls the JavaScript function callback.

Building

Copy this into the ‘wscript’ file.

def set_options(opt):
  opt.tool_options("compiler_cxx")

def configure(conf):
  conf.check_tool("compiler_cxx")
  conf.check_tool("node_addon")

def build(bld):
  obj = bld.new_task_gen("cxx", "shlib", "node_addon")
  obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64",
                  "-D_LARGEFILE_SOURCE", "-Wall"]
  obj.target = "texample"
  obj.source = "texample.cc"

Compile the code with node-waf

$ node-waf configure
$ node-waf build

Running

Start the Node shell and load the native add-on module:

$ node
> texample = require('./build/default/texample');

Define the callback function that the module will call:

> texample.callback = function(i) {
... console.log('Bang: ' + i);
... }
>

Call start to kick off the background thread:

> texample.start();
true
>

Wait five seconds and you'll start seeing your callback triggered every five seconds.

> Bang: 0
Bang: 1
> Bang: 2

Have fun!

Node.js TLS client example

I couldn't find a good end-to-end example of Node.js's new raw TLS/SSL client API, so here is one. This snippet connects to https://encrypted.google.com and fetches the front page.

var tls = require('tls');

// callback for when secure connection established
function connected(stream) {
    if (stream) {
       // socket connected
      stream.write("GET / HTTP/1.0\r\nHost: encrypted.google.com:443\r\n\r\n");
    } else {
      console.log("Connection failed");
    }
}

// needed to keep socket variable in scope
var dummy = this;

// try to connect to the server
dummy.socket = tls.connect(443, 'encrypted.google.com', function() {
   // callback called only after successful socket connection
   dummy.connected = true;
   if (dummy.socket.authorized) {
      // authorization successful
      dummy.socket.setEncoding('utf-8');
      connected(dummy.socket);
   } else {
      // authorization failed
     console.log(dummy.socket.authorizationError);
     connected(null);
   }
});

dummy.socket.addListener('data', function(data) {
   // received data
   console.log(data);
});

dummy.socket.addListener('error', function(error) {
   if (!dummy.connected) {
     // socket was not connected, notify callback
     connected(null);
   }
   console.log("FAIL");
   console.log(error);
});

dummy.socket.addListener('close', function() {
 // do something
});

If you want to use client certificate authentication, define the options and pass them as an additional parameter to the tls.connect call.

var fs = require('fs');
var keyPem = fs.readFileSync("key-noenc.pem", 'ascii');
var certPem = fs.readFileSync("cert.pem", 'ascii');
var options = { key: keyPem, cert: certPem };

...
dummy.socket = tls.connect(443, 'some.example.com', options, function() {
....

Node.js Application Configuration Files

What is the best practice for a configuration file in your Node.js application? Writing a property-file parser or passing parameters on the command line is cumbersome.

Eval

One easy way to separate configuration from application code is the eval statement. Define your configuration as a simple JavaScript object literal, then load and evaluate it on app startup.

Example configuration file myconfig.js:

settings = {
    a: 10,
    // this is used for something
    SOME_FILE: "/tmp/something"
}

Then at the start of your application:

var fs = require('fs');
eval(fs.readFileSync('myconfig.js', 'ascii'));

Now the settings object can be used for your program settings, e.g.

var mydata = fs.readFileSync(settings.SOME_FILE);
for( i = 0 ; i < settings.a ; i++) {
   // do something
}

Require

Another alternative, as noted in the comments, is to define the configuration as a module file and require it.

//-- configuration.js
module.exports = {
  a: 10,
  SOME_FILE: '/tmp/foo'
}

Then require the file in the application code:

var settings = require('./configuration');

This prevents stray variables from creeping into the global scope, but dynamic configuration reloading becomes hackier: if you detect the file has changed and want to reload it at runtime, you must delete its entry from the require cache and re-require it. Another minor complication is that require uses its own search path (which you can override with the NODE_PATH environment variable), so it is more work to choose the configuration file location dynamically in your app (e.g. from the command line).

// to reload the file with require
var path = require('path');
var filename = path.resolve('./configuration.js');
delete require.cache[filename];
var settings = require('./configuration');

Using plain JavaScript as the configuration file has the benefit (and downside) that any JavaScript can run in the settings. For example:

settings = {
    started: new Date(),
    nonce: ~~(1E6 * Math.random()),
    a: 10,
    SOME_FILE: "/tmp/something"
}

Both of these methods are mostly a matter of taste. Eval is a bit riskier as it allows variables to leak into the global namespace, but you'll never have anything “stupid” in your configuration files anyway. Right?

JSON file

I'm not a fan of JSON as a configuration format: it is cumbersome to write, and most editors cannot show syntax errors in it. JSON also does not support comments, which can be a big problem in more complicated configuration files.
Example configuration file myconfig.json:

{
   "a":10,
   "SOME_FILE":"/tmp/something"
}

Then at the start of your application, read the JSON file and parse it into an object.

var fs = require('fs');
var settings = JSON.parse(fs.readFileSync('myconfig.json', 'ascii'));

And then use settings as usual:

var mydata = fs.readFileSync(settings.SOME_FILE);
for( i = 0 ; i < settings.a ; i++) {
   // do something
}

Merging configuration files

One way to significantly simplify configuration management is hierarchical configuration: for example, a single base configuration file plus override files for development, testing and production use.

For this we need merge function.

// merges o2 properties to o1
var merge = exports.merge = function(o1, o2) {
    for (var prop in o2) {
         var val = o2[prop];
         if (o1.hasOwnProperty(prop)) {
             if (typeof val == 'object') {
                 if (val && val.constructor != Array) { // not array
                     val = merge(o1[prop], val);
                 }
             }
         }
         o1[prop] = val; // copy and override
    }
    return o1;
}

You can use merge to combine configurations. For example, let's take these two configuration objects.

// base configuration from baseconf.js
var baseconfig = {
    a: "someval",
    env: {
        name: "base",
        code: 1
    }
}

// test config from localconf.js
var localconfig = {
    env: { 
        name: "test",
        db: 'localhost'
    },
    test: true
}

Now it’s possible to merge these easily

var settings = merge( baseconfig, localconfig );

console.log( settings.a ); // prints 'someval'
console.log( settings.env.name ); // prints 'test'
console.log( settings.env.code ); // prints '1'
console.log( settings.env.db ); // prints 'localhost'
console.log( settings.test ); // prints 'true'
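
A sketch of how the override layer could be picked by environment (the configs table and loadSettings helper are assumptions for illustration; merge is the same logic as the function above, and the base layer is deep-copied so repeated loads do not accumulate overrides):

```javascript
// merges o2 properties into o1 (same semantics as the merge above)
function merge(o1, o2) {
    for (var prop in o2) {
        var val = o2[prop];
        if (o1.hasOwnProperty(prop) && typeof val == 'object' &&
            val && val.constructor != Array) {
            val = merge(o1[prop], val);
        }
        o1[prop] = val; // copy and override
    }
    return o1;
}

// hypothetical layer table; in practice each layer would live in its own file
var configs = {
    base: { a: 'someval', env: { name: 'base', code: 1 } },
    test: { env: { name: 'test', db: 'localhost' }, test: true },
    production: { env: { name: 'prod', db: 'db.example.com' } }
};

// pick the layer by name (e.g. from process.env.NODE_ENV) and merge
// it over a fresh deep copy of the base configuration
function loadSettings(envName) {
    var base = JSON.parse(JSON.stringify(configs.base));
    return merge(base, configs[envName] || {});
}
```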