Update Gandi.net DNS on Amazon EC2 server boot up

Cloud servers often get a randomly picked public IP when they are booted up. Here is a script that can be used to update Gandi.net DNS records automatically on EC2 server startup.

Step 1. Get API key

Activate and get your Gandi.net API key with these instructions. Also, ensure that the Zone file for your domain is updatable; usually this means you've made a copy of the template Zone file.

Step 2. Configure

Download the update_domain.py Python script and fill in your domain details and the Gandi.net API key.

# CONFIGURATION
DOMAIN = "yourdomain.com" # the domain name to update
NAMES  = ["@", "www"]     # A record names to update
API_KEY= '*********'      # fill in gandi API key

The configuration above would update the IP for the records 'yourdomain.com' and 'www.yourdomain.com'.

You can also redefine the function 'resolve_ip' to adapt the script for environments other than EC2. The current implementation uses EC2's internal REST API to query the instance's public IP.
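
For reference, here is a minimal sketch of what such a resolve_ip could look like, assuming Python 2 and the standard EC2 instance metadata endpoint (the function in the actual script may differ):

import urllib2

# Sketch only: queries the EC2 instance metadata service, which is reachable
# only from inside the instance, and returns the instance's public IPv4 address.
def resolve_ip():
    url = "http://169.254.169.254/latest/meta-data/public-ipv4"
    return urllib2.urlopen(url, timeout=5).read().strip()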

Step 3. Run and test

Run the script on the EC2 server; it should resolve the instance's IP and the Zone file, and check whether the records need to be updated.

$ python update_domain.py

The script does a dry run by default and will not update records; set the DRY_RUN flag to False to update the records for real.

DRY_RUN = False          # Set false to actually modify Zone

Step 4. Run on boot up

When you're satisfied with the settings and have tested the script manually, run 'crontab -e' and add the following entry.

@reboot python /home/ubuntu/update_domain.py

Cron will now run the script on every reboot.

Now Hiring in Mobile Gaming!


I'm pleased to announce that my start-up Nonstop Games recently joined the King family and we are now a King studio in Singapore. We're expanding the studio and it's a great time to join us, as you will contribute to and shape the games in the pipeline and be part of an awesome and passionate team!

We are looking for talented game and server developers and if you are looking for an exciting challenge where you can really make a difference, then check out our positions and apply!

http://www.nonstop-games.com/jobs

Please also follow us on LinkedIn for regular updates

https://www.linkedin.com/company/nonstop-games

Running external worker process in Node.js

This is an example of how to properly spawn an external worker process in Node.js with error checking. The example runs the ImageMagick convert tool to transform a .png image file into an 8-bit .png file.

The function spawns 'convert' with the original file as input and a temporary destination path as output, and checks the result. If everything seems to be in order, it renames the temporary file over the original one.

This also prints out all output from the spawned process to stdout for diagnostics.

var fs = require('fs'),
    util = require('util'),
    spawn = require('child_process').spawn;

function package_image( path, next ) {

    // temporary file name. use the process PID as part of the name so there
    // won't be conflicts if two processes run in parallel
    var tmpfile = path + '.tmp.' + process.pid;

    var cmd = 'convert';
    // executes command 'convert path -type Palette png8:path.tmp'
    var convert = spawn(cmd, [
        path,
        '-type', 'Palette',
        'png8:' + tmpfile ]);

    // capture stdout and stderr. Note that convert does not have any output on success
    convert.stdout.setEncoding('utf8');
    convert.stdout.on('data', function (data) {
        console.log( cmd + ': stdout '+ path + ' ' + data.trim() );
    });

    convert.stderr.setEncoding('utf8');
    convert.stderr.on('data', function (data) {
        if (/^execvp\(\)/.test(data)) {
            // we get here if the 'convert' command was not found or could
            // not be executed
            console.log( cmd + ': failed to start: ' + data );
        } else {
            console.log( cmd + ': stderr '+ path + ' ' + data.trim() );
        }
    });

    // hook on process exit
    convert.on('exit', function( code ) {
        if ( code ) {
            // non-zero exit means failure; 127 means spawn error, the command could not be executed
            console.log(cmd + ': error '+ path + ' ' + code );
            return next(code);
        }
        // check if the output file exists
        fs.stat( tmpfile, function(err, info) {
            if ( err ) {
                // no file found? something went wrong
                console.log( cmd + ': output file not found ' + util.inspect(err));
                return next( err );
            }
            // check the output file size
            if ( !info.size || !info.isFile() ) {
                console.log( cmd + ': out file 0 bytes or not a file '+ tmpfile);
                fs.unlink( tmpfile ); // remove output file
                return next( true );
            }

            // rename the temporary file over the original one
            fs.rename( tmpfile, path, function(err) {
                if ( err ) {
                    fs.unlink( tmpfile );
                    console.log( cmd + ': can not rename '+ tmpfile + ' ' + util.inspect(err));
                }
                // done
                return next( err );
            });
        });
    });
}

Example of usage

package_image( '/tmp/some.png', function(err) { 
    if ( err ) { 
       console.log('Image conversion failed', err );
    }
});

CouchDB cleanup script for purging old docs

CouchDB does not have a straightforward way to clean up old data. This is one simple way to delete entries by date, but it requires that

  • Your documents have a date or timestamp property
  • There is a view in each database to fetch documents by that property

Prerequisites

  1. Node.js
  2. The jss module, i.e. 'npm install jss'.

1. Prepare views for Cleanup

Define a view in each database that needs regular cleanup. Use something like this, where the emitted key field is a timestamp in seconds.

views: {
    created: {
        map: function(doc) {
            if ( doc.created ) {
                emit(doc.created, doc._rev);
            }
        }
    }
    ....
}

2. The Cleanup script

The script queries old doc ids from the cleanup view and marks them as deleted. Documents are not deleted immediately but are removed physically on the next CouchDB compaction. CouchDB 1.2.0 supports automatic compaction, so just enable it and don't worry about it.

#!/bin/bash

DBHOST=localhost

# Get key for entries that are over 6 months old. This assumes that created view can be queried using timestamps as keys.
if uname -a | grep -i darwin > /dev/null
then
	TODAY=$(date '+%Y-%m-%d')
	MONTHSAGO=$(date -v -24w '+%Y-%m-%d')
	MONTHSAGO_E=$(date -v -24w '+%s')
else
	TODAY=$(date '+%Y-%m-%d')
	MONTHSAGO=$(date -d '24 weeks ago' '+%Y-%m-%d')
	MONTHSAGO_E=$(date -d '24 weeks ago' '+%s')
fi

PATH=$PATH:/usr/local/bin

# JSON scripting tool
JSS=$(npm bin)/jss

cleanup() {
	DATABASE=$1
	DESIGN=$2

	echo "Cleaning $DATABASE/$DESIGN"
	curl --silent -S http://$DBHOST:5984/$DATABASE/_design/$DESIGN/_view/created?endkey=$MONTHSAGO_E | \
		$JSS --bulk_docs '$.id' '{_id: $.id, _rev:$.value, _deleted:true}' | \
		curl --silent -S -X POST -d @-  -H "Content-Type:application/json" http://$DBHOST:5984/$DATABASE/_bulk_docs | \
		sed 's/\({[^}]*}\),/\1\n/g' | tr -d '[]' | \
		$JSS '$.ok != true'
}

echo "STATS CLEANUP <= $MONTHSAGO - Start" `date`

# Put databases and views here
cleanup somedb1 someview
cleanup somedb1 otherview
cleanup somedb2 alsoview

echo "STATS CLEANUP - Done" `date`

The script does the following:

  1. Get the expired docs, e.g. curl 'http://localhost:5984/mydatabase/_design/mydesign/_view/created?endkey=1337049581'
  2. Build the bulk doc delete request (jss)
  3. Issue the bulk delete request (curl POST)
  4. Sanitize the CouchDB output, i.e. add newlines and remove brackets (sed and tr)
  5. Print the failed deletions

Note that the default version of jss doesn't output proper JSON if no documents are found; use my fork to work around this problem if you don't want to see errors in the logs.

npm install https://github.com/tikonen/jss/tarball/master

EC2 EBS Backup Python script

This is a simple EC2 backup script that snapshots the listed EBS volumes daily. The script keeps a maximum number of daily, weekly and monthly snapshots per volume and checks whether the daily backup has already been done or is in progress, so it does not make duplicates for a single day.

Prerequisites

1. EC2 command line tools.
Check that you can run them from the command line:

$ ec2-describe-snapshots
SNAPSHOT	snap-070cba6c	vol-123123	completed	2012-04-19T02:06:54+0000	100%	457025778133		my.com root
SNAPSHOT	snap-170cba7c	vol-455445	completed	2012-04-19T02:07:09+0000	100%	457025778133	10	my.net root
...

2. Fabric administration and deployment scripting tool

Install with easy_install or pip

$ sudo easy_install fabric

See http://docs.fabfile.org/en/1.4.1/installation.html for more details

3. The script.
Copy the following to ec2-backup.py and replace the BACKUP_VOLS dictionary with your own volumes and their descriptions. The script is also available on GitHub.

import os, sys, time
import dateutil.parser
from datetime import date, timedelta, datetime

from fabric.api import (local, settings, abort, run, lcd, cd, put)
from fabric.contrib.console import confirm
from fabric.api import env

# for each volume, define how many daily, weekly and monthly backups
# you want to keep. For weekly backups Monday's snapshot is kept, and for
# each month the one from the 1st day.
BACKUP_VOLS = {
	'vol-abc1234': {'comment': 'my.com root', 'days': 7, 'weeks': 4, 'months': 4},
	'vol-1234565': {'comment': 'my.com database', 'days': 7, 'weeks': 4, 'months': 4},
}


today = date.today()

snapshots = {}
hastoday = {}
savedays = {}	# retained snapshot days for each volume

for (volume, conf) in BACKUP_VOLS.items():
	daylist = savedays[volume] = []
	# last n days
	for c in range(conf['days'] - 1, -1, -1):
		daylist.append(today - timedelta(days=c))
	# last n weeks (get mondays)
	monday = today - timedelta(days=today.isoweekday() - 1)
	daylist.append(monday)
	for c in range(conf['weeks'] - 1, 0, -1):
		daylist.append(monday - timedelta(days=c * 7))
	# last n months (first day of month)
	for c in range(conf['months'] - 1, -1, -1):
		# step back c months, wrapping over a year boundary if needed
		month = today.month - c
		year = today.year
		while month < 1:
			month += 12
			year -= 1
		daylist.append(datetime(year, month, 1).date())

SNAPSHOTS = local('ec2-describe-snapshots', capture=True).split('\n')

SNAPSHOTS = [tuple(l.split('\t')) for l in SNAPSHOTS if l.startswith('SNAPSHOT')]

for (_, snapshot, volume, status, datestr, progress, _, _, _) in SNAPSHOTS:
	snapshotdate = dateutil.parser.parse(datestr).date()
	if volume in BACKUP_VOLS:
		if snapshotdate == today:
			hastoday[volume] = {'status': status, 'snapshot': snapshot, 'progress': progress.replace('%', '')}
		if volume not in snapshots:
			snapshots[volume] = []
		snapshots[volume].append((snapshot, status, snapshotdate))

for snapshotlist in snapshots.values():
	snapshotlist.sort(key=lambda x: x[2], reverse=True)

for volume in BACKUP_VOLS.keys():
	if volume not in snapshots:
		snapshots[volume] = []

print "VOLUME\tSNAPSHOT\tSTATUS\tDATE\tDESC"
for (volume, snapshotlist) in snapshots.items():
	for (snapshot, status, date) in snapshotlist:
		datestr = date.strftime('%Y-%m-%d')
		print "%s\t%s\t%s\t%s\t%s" % (volume, snapshot, status, datestr, BACKUP_VOLS[volume]['comment'])


def status():
	pass


def backup(dryrun=False):
	print "\nCREATING SNAPSHOTS"
	for (volume, snapshotlist) in snapshots.items():
		if volume in hastoday:
			print '%s has %s%% %s snapshot %s for today "%s"' % (volume,
															hastoday[volume]['progress'],
															hastoday[volume]['status'],
															hastoday[volume]['snapshot'],
															BACKUP_VOLS[volume]['comment'])
		else:
			print 'creating snapshot for %s "%s"' % (volume, BACKUP_VOLS[volume]['comment'])
			snapshotlist.insert(0, ('new', 'incomplete', today))
			if not dryrun:
				local('ec2-create-snapshot %s -d "%s"' % (volume, BACKUP_VOLS[volume]['comment']))

	print "\nDELETING OLD SNAPSHOTS"
	for (volume, snapshotlist) in snapshots.items():
		for (snapshot, _, date) in snapshotlist:
			if not date in savedays[volume]:
				datestr = date.strftime('%Y-%m-%d')
				print "deleting %s %s for %s (%s)" % (snapshot, datestr, volume, BACKUP_VOLS[volume]['comment'])
				if not dryrun:
					with settings(warn_only=True):
						local('ec2-delete-snapshot %s' % snapshot)


def dryrun():
	print """

*** DRY RUN ***

"""
	backup(dryrun=True)

You can dry run the script first to see what it would do

$ fab -f ec2-backup.py dryrun

To make an actual backup

$ fab -f ec2-backup.py backup

Example output

$ fab -f ec2-backup.py backup
[localhost] local: ec2-describe-snapshots
VOLUME	SNAPSHOT	STATUS	DATE	DESC
vol-abc1234	snap-48fe4023	completed	2012-04-24	my.com database
vol-abc1234	snap-23863a48	completed	2012-04-23	my.com database
vol-abc1234	snap-838131e8	completed	2012-04-20	my.com database
vol-abc1234	snap-1b0cba70	completed	2012-04-19	my.com database
vol-abc1234	snap-0d4ffb66	completed	2012-04-17	my.com database
vol-1234565	snap-42fe4029	completed	2012-04-24	my.com root
vol-1234565	snap-25863a4e	completed	2012-04-23	my.com root
vol-1234565	snap-858131ee	completed	2012-04-20	my.com root
vol-1234565	snap-1f0cba74	completed	2012-04-19	my.com root
vol-1234565	snap-034ffb68	completed	2012-04-17	my.com root

CREATING SNAPSHOTS
creating snapshot for vol-abc1234 "my.com database"
[localhost] local: ec2-create-snapshot vol-abc1234 -d "my.com database"
SNAPSHOT	snap-8ccd74e7	vol-abc1234	pending	2012-04-25T02:18:58+0000		457025778133	50	my.com database
creating snapshot for vol-1234565 "my.com root"
[localhost] local: ec2-create-snapshot vol-1234565 -d "my.com root"
SNAPSHOT	snap-86cd74ed	vol-1234565	pending	2012-04-25T02:19:03+0000		457025778133	8	my.com root

DELETING OLD SNAPSHOTS
deleting snap-0d4ffb66 2012-04-17 for vol-abc1234 (my.com database)
[localhost] local: ec2-delete-snapshot snap-0d4ffb66
SNAPSHOT	snap-0d4ffb66
deleting snap-034ffb68 2012-04-17 for vol-1234565 (my.com root)
[localhost] local: ec2-delete-snapshot snap-034ffb68
SNAPSHOT	snap-034ffb68

Done.

If you try to run it again, it notifies you about the backups that already exist or are in progress

...

CREATING SNAPSHOTS
vol-abc1234 has 55% pending snapshot snap-8ccd74e7 for today "my.com database"
vol-1234565 has 100% completed snapshot snap-86cd74ed for today "my.com root"

...

Keeping CouchDB design docs up to date with Node.js

CouchDB views are typically defined as JavaScript snippets and are part of special documents called design documents. I noticed that keeping these design documents up to date during development is pretty cumbersome and error prone, so I devised a simple way to keep them updated using Node.js and the Cradle CouchDB driver.

The idea is to define the views as variables in a runnable JS script and run it with Node each time it changes.

Here is the code. Copy it to e.g. cdb-views.js.

var cradle = require('cradle');

cradle.setup({ host: 'localhost',
               port: 5984,
               options: { cache:true, raw: false }});

var cclient = new (cradle.Connection)

function _createdb(dbname) {
    var db = cclient.database(dbname);
    db.exists(function(err, exists) {
        if (!exists) {
            db.create()
        }
    });
    return db;
}
var DB_SOMETHING = _createdb('somedb')

function cradle_error(err, res) {
    if (err) console.log(err)
}


function update_views( db, docpath, code ) {

    function save_doc() {
        db.save(docpath, code, function(err) {
            // view has changed, so initiate cleanup to get rid of old
            // indexes
            db.viewCleanup( cradle_error );
        });

        return true;
    }

    function compare_code( str1, str2 ) {
        var p1 = str1.split('\n');
        var p2 = str2.split('\n');

        for ( var i=0; i < p1.length || i < p2.length; i++ ) {
            var l1 = p1[i];
            var l2 = p2[i];
            l1 = l1 ? l1.trim() : '';
            l2 = l2 ? l2.trim() : '';
            if ( !l1 && !l2 ) continue;
            if ( l1 != l2 ) return true;
        }
        return false;
    }

    // compare function definitions in document and in code
    function compare_def(docdef, codedef) {
        var i = 0;

        if (!docdef && codedef) {
            console.log('creating "' + docpath +'"')
            return true;
        }
        if (!codedef && docdef) {
            console.log('removing "' + docpath +'"')
            return true;
        }
        if (!codedef && !docdef) {
            return false;
        }

        for (var u in docdef) {

            i++;
            if (codedef[u] == undefined) {
                console.log('definition of "' + u + '" removed - updating "' + docpath +'"')
                return true;
            }

            if (typeof(codedef[u]) == 'function') {
                if (!codedef[u] || compare_code( docdef[u], codedef[u].toString()) ) {
                    console.log('definition of "' + u + '" changed - updating "' + docpath +'"')
                    return true;
                }
            } else for (var f in docdef[u]) {
                i++;
                if (!codedef[u][f] || compare_code( docdef[u][f], codedef[u][f].toString()) ) {
                    console.log('definition of "' + u + '.' + f + '" changed - updating "' + docpath +'"')
                    return true;
                }

            }
        }
        // check that both doc and code have same number of functions
        for (var u in codedef) {
            i--;
            if (typeof(codedef[u]) != 'function') {
                for (var f in codedef[u]) {
                    i--;
                }
            }
        }
        if (i != 0) {
            console.log('new definitions - updating "' + docpath +'"')
            return true;
        }

        return false;
    }

    db.get(docpath, function(err, doc) {

        if (!doc) {
            console.log('not found - creating "' + docpath +'"')
            return save_doc();
        }

        if (compare_def(doc.updates, code.updates) || compare_def(doc.views, code.views)) {
            return save_doc();
        }
        console.log('"' + docpath +'" up to date')
    });
}

var EXAMPLE1_DDOC = {
    language: 'javascript',
    views: {
        active: {
            map: function (doc) {
                if (doc.lastsession) {
                    emit(parseInt(doc.lastsession / 1000), 1)
                }
            },
            reduce: function(keys, counts, rereduce) {
                return sum(counts)
            }
        },
        users: function(doc) { 
            if (doc.created) {
                emit(parseInt(doc.created / 1000), 1)
            }
        }
    }    
}

var EXAMPLE2_DDOC = {
    language: 'javascript',
    views: {
        myview: function(doc) {
            if (doc.param1 && doc.param2) {
                emit([doc.param1, doc.param2], null)
            }
        }
    }
}

update_views(DB_SOMETHING, '_design/example1', EXAMPLE1_DDOC);
update_views(DB_SOMETHING, '_design/example2', EXAMPLE2_DDOC);

The code is pretty simple.

  1. First it loads the Cradle CouchDB driver and creates the needed databases if they do not already exist. In this example only a single database, 'somedb', is created.
  2. The update_views function is responsible for keeping the design docs up to date. It loads the design doc from the given DB and compares it to the definition in this file. If the doc has changed (or is missing) it is recreated.
  3. The example design docs (EXAMPLE1_DDOC and EXAMPLE2_DDOC) are simple design doc definitions as JavaScript objects. If you're familiar with CouchDB these are self-explanatory.
  4. Lastly the code just calls update_views to update the design documents.

Now it's possible to maintain the views in this JavaScript file, and Node will make sure that the syntax is always valid.

Example output:

Views are up to date.

$ node cdb-views.js
"_design/example1" up to date
"_design/example2" up to date

Definition of view example2/myview has changed

$ node cdb-views.js
"_design/example1" up to date
definition of "myview" changed - updating "_design/example2"

Design doc example2 is not found, so it is created.

$ node cdb-views.js
"_design/example1" up to date
no design doc found updating "_design/example2"

Embedding V8 Javascript Engine and Go

This is two common examples merged together: how to run V8 embedded, and how to call C modules from the Go language. I'm using Ubuntu 10.04 x64 with the standard gcc toolchain.

Step 1. Compile v8

Get the V8 source and build V8 as a shared library.
Use this command line and copy libv8.so to your project directory:

$ scons mode=release library=shared snapshot=on arch=x64
$ cp libv8.so ~/v8example

Step 2. C Wrapper for V8

Write a C++ function that accepts JavaScript source code as an argument and compiles and runs it in V8.

Header file:

#ifndef _V8WRAPPER_H
#define _V8WRAPPER_H

#ifdef __cplusplus
extern "C" {
#endif
    // compiles and executes javascript and returns the script return value as string
    char * runv8(const char *jssrc);

#ifdef __cplusplus
}
#endif

#endif // _V8WRAPPER_H

Source file; this is a slightly modified version of the example from the official V8 C++ embedder's guide.

#include <v8.h>
#include <string.h>

#include "v8wrapper.h"

using namespace v8;

char * runv8(const char *jssrc)
{
    // Create a stack-allocated handle scope.
    HandleScope handle_scope;

    // Create a new context.
    Persistent<Context> context = Context::New();

    // Enter the created context for compiling and
    // running the script.
    Context::Scope context_scope(context);

    // Create a string containing the JavaScript source code.
    Handle<String> source = String::New(jssrc);

    // Compile the source code.
    Handle<Script> script = Script::Compile(source);

    // Run the script
    Handle<Value> result = script->Run();

    // Dispose the persistent context.
    context.Dispose();

    // return result as string, must be deallocated in cgo wrapper
    String::AsciiValue ascii(result);
    return strdup(*ascii);
}

Makefile.wrapper

V8_INC=/home/user/builds/v8/include

CC=g++
CFLAGS=-c -fPIC -I$(V8_INC)
SOURCES=v8wrapper.cc
OBJECTS=$(SOURCES:.cc=.o)
TARGET=libv8wrapper.so

all: $(TARGET)

.cc.o:
	$(CC) $(CFLAGS) $< -o $@

$(TARGET): $(OBJECTS)
	ld -G -o $@ $(OBJECTS)

Compile to get the shared library

$ make -f Makefile.wrapper

You should end up with file libv8wrapper.so

Step 3. CGO Wrapper for Go

Now define a cgo wrapper source file that exposes V8 to the Go language.

Go source file for the cgo compiler. Note that the comments are functional and contain instructions for cgo. The libv8.so and the just-compiled libv8wrapper.so are assumed to be in the current working directory for linking.

package v8runner

// #cgo LDFLAGS: -L. -lv8wrapper -lv8  -lstdc++ -pthread
// #include <stdlib.h>
// #include "v8wrapper.h"
import "C"
import "unsafe"

func RunV8(script string) string {

  // convert Go string to nul terminated C-string
  cstr := C.CString(script)
  defer C.free(unsafe.Pointer(cstr))

  // run script and convert returned C-string to Go string
  rcstr := C.runv8(cstr)
  defer C.free(unsafe.Pointer(rcstr))

  return C.GoString(rcstr)  
}

The cgo Makefile. Note that you need to have GOROOT defined. The OS and architecture are defined here too.

include $(GOROOT)/src/Make.inc

GOOS=linux
GOARCH=amd64

TARG=v8runner
CGOFILES=\
    v8runner.go\

include $(GOROOT)/src/Make.pkg

Compile to Go package v8runner and install it

$ make -f Makefile.cgo
$ make -f Makefile.cgo install

Install copies the package file to $GOROOT/pkg/linux_amd64/v8runner.a, where it can be found by the Go compiler and linker.

Step 4. The Go program

Now you're finally ready to write a plain Go program that runs V8.

package main

import "v8runner"
import "fmt"

func main() {
    r := v8runner.RunV8("'Hello Go World'")
    fmt.Println(r)
}

Makefile.hello

include $(GOROOT)/src/Make.inc

TARG=hello
GOFILES=hello.go

include $(GOROOT)/src/Make.cmd

Compile

$ make -f Makefile.hello

Set LD_LIBRARY_PATH to the current directory, assuming you have libv8.so and libv8wrapper.so there.

$ export LD_LIBRARY_PATH=.

Run the program

$ ./hello
Hello Go World

To recap the steps

  1. A shared C++ library that exposes a C function to run JavaScript: libv8wrapper.so
  2. A cgo-compiled wrapper package that passes arguments between the Go and C worlds and calls the C function: v8runner
  3. A Go program that imports the package and uses it normally.

This hack has some caveats.

  • There is currently no way to link everything statically, as cgo does not support it; you need to use shared libraries.
  • I'm not aware of any easy way to call back into Go from the cgo-wrapped C++. You need wrappers over wrappers, as demonstrated by this post: http://groups.google.com/group/golang-nuts/msg/c98b4c63ba739240. Matryoshka ftw.
  • Only one thread at a time can use a V8 instance. You need to use Isolates (see the V8 source for more information) to support multiple instances, and even then only one thread at a time can use a specific instance.

Developing on Google App Engine for Production

If you're considering App Engine as the platform for your next big thing, here is a potpourri of observations that you might find worth reading. This is not a tutorial, and basic hands-on App Engine experience is assumed. Everything here is written from experience with the Python environment; for Java your mileage may vary. There is also a lot of functionality that is not covered here, because I didn't personally use it or it is otherwise well documented.

Queries and Indexes

Applications can basically use only two kinds of queries: get a data entity by key, or get data entities by range. In a ranged query the key can be a property value or a composite of property values. Anything that needs more than one filter property and a specific order will need a composite index definition.

For key-ordered queries App Engine supports self-merging the built-in indexes, but in real life this doesn't always get you very far: as the number of entities grows you may eventually hit the error "NeedIndexError: The built-in indices are not efficient enough for this query and your data. Please add a composite index for this query.". This means that some values are too sparse to filter the data efficiently, e.g. you have 30 000 entities and one of the properties you're querying is a boolean flag that is either True or False for every entity.
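
As a hypothetical illustration (the model and property names are made up), a query like the one below relies on merging the built-in indexes and can start failing with NeedIndexError once the boolean is nearly constant across tens of thousands of entities:

from google.appengine.ext import db

# Hypothetical model: 'archived' is True or False on every entity, so its
# built-in index narrows the result set very poorly.
class Item(db.Model):
    owner = db.StringProperty()
    archived = db.BooleanProperty()

# Two equality filters, no explicit order (key order): served by merging the
# built-in single-property indexes, which may be rejected as too inefficient.
items = Item.all().filter('owner =', 'alice').filter('archived =', False).fetch(20)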

App Engine uses composite keys for building indexes for queries that need a specific order. Be careful when combining more than one list property in a composite index: App Engine will build permutations of all key values, and even with modest lists you end up with hundreds of index entries.

For example, define a model with two list properties and a timestamp:

class MyModel(db.Model):
   created = db.DateTimeProperty()
   list1 = db.StringListProperty()
   list2 = db.ListProperty(int)

Define a composite index for the model:

- kind: MyModel
  properties:
  - name: list1
  - name: list2
  - name: created
    direction: desc

Put an entity:

m = MyModel()
m.list1 = ["cat", "dog", "pig", "angry", "bird"]
m.list2 = [1, 2, 3]
m.put()

This would create the following reverse and custom index entries:

  • 5 for each item in list1
  • 3 for each item in list2
  • 1 for created
  • 5 * 3 = 15 entries for permutations (cat, 1, created), (cat, 2, created), (cat, 3, created), (dog, 1, created), (dog, 2, created), …

Total 15 + 1 + 3 + 5 = 24 entries. This is not much in the example, but it grows exponentially as the number of list entries and indexes increases: 3 lists in an index, each having 10 values, would mean 10^3 = 1000 index entries.

The maximum number of index entries is 5000 per entity, and this is shared between the implicit reverse property index and explicit custom indexes. For example, if you have a listproperty that you use in a custom index, it can have at most ~2500 values, because the implicit reverse index will take 2500 and the custom index the remaining 2500, totalling 5000.

Remember to set indexed=False in the property definition if you don't need to query against a property; this saves both space and CPU.
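
For example (hypothetical model), properties that are never filtered or sorted on can be declared like this:

from google.appengine.ext import db

# Hypothetical model: only 'owner' is ever queried
class Upload(db.Model):
    owner = db.StringProperty()
    note = db.StringProperty(indexed=False)  # skips the built-in property index
    payload = db.BlobProperty()              # Blob/Text properties are never indexed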

Query latencies are pretty OK, ~100ms for a few dozen entities, and you can use the IN operator to make parallel queries. Just make sure that your 'IN' queries do not return lots of overlapping results, as that can hurt performance. Direct get-by-key latencies are very good (~20ms). Naturally latency increases linearly if your objects are very large, especially with long listproperties.
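
As a sketch (hypothetical model and values), an IN filter like this is executed as one equality sub-query per value, run in parallel, with the results merged:

from google.appengine.ext import db

# Hypothetical model for illustration
class Player(db.Model):
    country = db.StringProperty()

# Expands to three parallel '=' queries whose results are merged
players = Player.all().filter('country IN', ['fi', 'se', 'sg']).fetch(50)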

Text search is on the App Engine roadmap and under development. Meanwhile you can make simple startswith queries against a single property or a list of strings. The queries are identical in both cases.
Single property startswith query:

class User(db.Model):
   name = db.StringProperty()

users = User.all().filter('name >=', query).filter('name <', query + u'\ufffd').fetch(100)

Listproperty startswith query:

class SomeModel(db.Model):
   keywords = db.StringListProperty()

models = SomeModel.all().filter('keywords >=', query).filter('keywords <', query + u'\ufffd').fetch(100)

Note that in the latter case the sort order may not be what you wish for, as you must always sort first by the inequality filter property, in this example keywords. Just keep in mind the index growth when you add more properties to the query.
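
As a sketch of the ordering constraint, continuing the listproperty example above ('created' here is a made-up extra property and would need a matching composite index):

# The inequality filter property ('keywords') must be the first sort order;
# only after that can you order by the hypothetical 'created' property.
results = (SomeModel.all()
           .filter('keywords >=', query)
           .filter('keywords <', query + u'\ufffd')
           .order('keywords')
           .order('-created')
           .fetch(50))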

The soft memory limit is < 200MB, which is reached easily if you have large entities, so don't rely on being able to do lots of in-memory sorting. The Python memory overhead in particular is pretty big. As a rule of thumb you can manipulate ~15000 properties per call (e.g. 1000 entities each having 15 properties). Each element in a listproperty counts as a property.

You'll often see DeadlineExceededError in the logs; there is nothing you can do about these except to use the high replication datastore. Just note that it has a much higher CPU cost. Curiously, the frequency of these errors seems pretty constant and independent of the load. Maybe App Engine gives more priority to more popular apps.

Quotas

This depends a lot on your application, but at least in my experience CPU is the limiting factor for most use cases. This is mainly because you need to do most of the work when inserting new objects instead of when querying them, so even rarely used queries will cost you on every single insert. Queries need indexes, and storing entities with indexes costs API CPU time. Both your own application execution and the API (DB, etc.) execution time are counted against your quota. Be sure to measure and estimate your costs. Putting entities with very large listproperties that use custom indexes can easily cost 25-60 seconds of CPU time per entity.

In case the combined CPU time (app + API) grows large enough (> ~1000ms), App Engine will warn you in the logs that the call uses a high amount of CPU and may run over its quota. Curiously it issues the same warning even when you have billing enabled, but it won't restrict your app in that case.

Scalability is Latency

The App Engine scalability rules are complex, but what mostly matters is your average latency. If you have only slow request handlers (latency > 500ms), App Engine will limit your scalability. It's not bad to have a few slow ones, but make sure that the average is somewhere around ~250ms or less. In the worst case App Engine refuses to start new instances and queues new requests on the instances already serving, thus growing the user-perceived request latency. You can observe this from App Engine dashboard log entries showing 'pending_ms' times.

Note that CPU time is not the same thing as latency; for example, the pairs of code below have roughly the same CPU cost, but the batched versions have only about 1/3 of the latency.

Slow put

 e1.put()
 e2.put()
 e3.put()

Fast put

db.put([e1, e2, e3])

Slow get

 e1 = db.get(key1)
 e2 = db.get(key2)
 e3 = db.get(key3)

Fast get

ents = db.get([key1, key2, key3])

Parental fetch, aka relation index, aka parent reference

The App Engine DB API does not support partial fetches, which can be an issue if you have very large listproperties in your entities. It's possible to achieve something similar by using parent keys. For example, if you have a large number of elements in a listproperty, you can make a keys-only query against that property and fetch only the keys. Then take each key's parent and fetch the entity you need.

class Message(db.Model):
  text = db.StringProperty()

class MsgIndex(db.Model):
  recipients = db.ListProperty(db.Key)

msg = Message(text="Hello World")
msg.put()

idx = MsgIndex(key_name='idx', parent=msg)
idx.recipients.append(user1.key())
idx.recipients.append(user2.key())
 ...
idx.put()

To query the messages where userX is in the recipient list, first get the keys:

keys = MsgIndex.all(keys_only=True).filter('recipients', userX).fetch(100)

then query the actual message objects:

msgs = db.get([k.parent() for k in keys])

In this way you completely avoid loading the potentially large recipient list.

See Brett Slatkin’s presentation for more details.

Transactions

App Engine DB supports transactions, but it's not possible to implement global consistency, because a transaction can only operate on objects in a single entity group. For example, if you have entities A and B that have no parent keys, you cannot operate on them both in a single transaction. An entity group is all entities with the same root key; an entity without a parent key is its own group.

A word of warning: when you use transactions, all entities with the same root key (the entity group) are locked for the transaction. In general there should not be more than 1-3 updates per second for a single entity group, or you'll get lots of transaction collision retries that will eat your CPU and increase latency. Collision retries are logged as warnings in the App Engine console.
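
A small sketch of what this means in practice (the Account model is made up); both entities must share the same root key or run_in_transaction will refuse the cross-group operation:

from google.appengine.ext import db

# Hypothetical model: accounts share a parent, so they form one entity group
class Account(db.Model):
    balance = db.IntegerProperty(default=0)

root = Account(key_name='bank').put()
src_key = Account(parent=root, key_name='alice', balance=100).put()
dst_key = Account(parent=root, key_name='bob').put()

def transfer(src_key, dst_key, amount):
    src, dst = db.get([src_key, dst_key])
    src.balance -= amount
    dst.balance += amount
    db.put([src, dst])

# Works only because both keys have the same root ancestor
db.run_in_transaction(transfer, src_key, dst_key, 10)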

Pre-fetch Referenceproperties

Prefetch reference properties before accessing them in sequence.
Bad: this will trigger a separate DB query for the user property on each iteration.

class Foo(db.Model):
  user = db.ReferenceProperty(User)

foos = Foo.all().fetch(100)
for f in foos:
  print f.user.name

Good: use a prefetch_refprop helper function (a sketch of one possible implementation follows after the example).

foos = Foo.all().fetch(100)
prefetch_refprop(foos, Foo.user)
for f in foos:
  print f.user.name

This will decrease latency and API CPU time significantly.
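
The original link is gone, but here is a sketch of a commonly used implementation of such a prefetch helper (based on the well-known ReferenceProperty prefetching recipe; treat it as an approximation of the function referenced above):

from google.appengine.ext import db

def prefetch_refprop(entities, *props):
    # collect (entity, property) pairs and the referenced keys without
    # triggering the automatic one-query-per-entity dereferencing
    fields = [(entity, prop) for entity in entities for prop in props]
    ref_keys = [prop.get_value_for_datastore(entity) for entity, prop in fields]
    # fetch all referenced entities with a single batch get
    ref_entities = dict((e.key(), e) for e in db.get(list(set(ref_keys))))
    # assign the resolved entities back into the reference properties
    for (entity, prop), ref_key in zip(fields, ref_keys):
        prop.__set__(entity, ref_entities[ref_key])
    return entities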

Debugging and Profiling

The standard Python debugger does not work out of the box in the App Engine development server, but you can use the following wrapper and start dev_appserver.py from the command line to get into the debugger.

def appe_set_trace():
  import pdb, sys
  debugger = pdb.Pdb(stdin=sys.__stdin__,
                     stdout=sys.__stdout__)
  debugger.set_trace(sys._getframe().f_back)

API profiling: define appengine_config.py in your app and add the appstats handler to app.yaml.

def webapp_add_wsgi_middleware(app):
  from google.appengine.ext.appstats import recording
  app = recording.appstats_wsgi_middleware(app)
  return app

- url: /stats.*
  script: $PYTHON_LIB/google/appengine/ext/appstats/ui.py
  login: admin

CPU profiling: define a profiling wrapper that dumps the CPU times to the log.

# in your main module, where 'application' is your WSGI application
import logging
import StringIO

from google.appengine.ext.webapp import util

def real_main():
  # Run the WSGI CGI handler with that application.
  util.run_wsgi_app(application)

def profile_main():
  # This is the main function for profiling
  # We've renamed our main() above to real_main()
  import cProfile, pstats
  prof = cProfile.Profile()
  prof = prof.runctx("real_main()", globals(), locals())
  stream = StringIO.StringIO()
  stats = pstats.Stats(prof, stream=stream)
  stats.sort_stats("cumulative")  # time or cumulative
  stats.print_stats(80)  # 80 = how many to print
  # The rest is optional.
  # stats.print_callees()
  # stats.print_callers()
  logging.info('Profile data:\n%s', stream.getvalue());

if __name__ == '__main__':
  main = profile_main
  #main = real_main
  main()

Task TransientErrors

Task adds often fail with a TransientError; just retry once more and you should rarely see a failed task add.

try:
   taskqueue.add(...
except taskqueue.TransientError:
   taskqueue.add(..  # retry once more

Misc

Other things.

  • Static files are not served from the application environment; your application cannot access them programmatically.
  • The urlfetch service has a maximum timeout of 10 seconds and can do at most 10 parallel requests per instance. Requests occasionally fail with an application error that is usually caused by a server timeout. Requests are made from fairly random source IPs that are shared with all other App Engine apps. You cannot override the header.
  • Naked domains (like example.com) are not supported.
  • Memcache lifetime can be very short, mere minutes, but if your application is popular App Engine might give it more priority. Use multi get and set whenever possible (see the sketch below).
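
As a sketch of the multi get and set pattern (fetch_from_db is a made-up loader used only for illustration):

from google.appengine.api import memcache

keys = ['profile:1', 'profile:2', 'profile:3']
cached = memcache.get_multi(keys)                  # one RPC instead of three
missing = dict((k, fetch_from_db(k)) for k in keys if k not in cached)
if missing:
    memcache.set_multi(missing, time=600)          # one RPC writes them all back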