1. The software development life cycle



    by starter-life


  2. What if source code repositories were landscapes that you could fly through? What could that look like?

    Wonder no more, try it from your browser!

    Symfony: visit the codescape here

    Codescap.es takes a public Git URL and then visualises the repository content as blocks of different color and height that you can interactively fly through.

    Rails: visit the codescape here

    Each block represents a file, the color represents the file type and the height the number of lines in the file.

    Angular.js: visit the codescape here

    Clicking on a block shows you the line count and name of the file it represents.

    jQuery: visit the codescape here

    npm: visit the codescape here

    CodeScapes is open source and on GitHub, so you can install it locally and use it with your own JSON files for non-Git or private repositories.
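The exact JSON format is defined by the project itself, so check the repository for the real schema. Purely as an illustration of the kind of data a landscape needs (the field names below are made up), here is a Go sketch that walks a directory and records a name and line count per file:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// fileInfo is a hypothetical per-file record; the real JSON schema
// CodeScapes expects is documented in its GitHub repository.
type fileInfo struct {
	Name  string `json:"name"`
	Lines int    `json:"lines"`
}

// countLines counts newline characters, a rough stand-in for the
// line count that determines a block's height.
func countLines(data []byte) int {
	return bytes.Count(data, []byte{'\n'})
}

func main() {
	var files []fileInfo
	filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return nil
		}
		data, readErr := os.ReadFile(path)
		if readErr != nil {
			return nil
		}
		files = append(files, fileInfo{Name: path, Lines: countLines(data)})
		return nil
	})
	out, _ := json.MarshalIndent(files, "", "  ")
	fmt.Println(string(out))
}
```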

    Post your own repository landscapes as reactions on this blog, or mention them to me on Twitter: @g33konaut

    Have fun!


  3. Easy scaling with Docker, HAProxy and confd


    Forrest is a very small shell script that binds together etcd and Docker to allow easy scaling - for example, with the combination of etcd + confd + HAProxy + Docker you can quickly and easily spin up a new web server instance and hook it into your load balancer.

    Here’s also a video of that scenario.


    One of the most powerful advantages of the cloud computing era is the ability to quickly scale up and down as needed - particularly handy if you have viral marketing in place and just don’t know if and when it will hit your servers.

    So far, that meant spinning up a full virtual instance (e.g. an Amazon instance or a droplet on DigitalOcean), but with lightweight isolation like Docker (which is based on LXC) we can easily package and distribute applications and start and stop them as needed.

    In this post I will give an example of a very simple Go application that responds to HTTP requests with a simple string.

    This application will be served behind HAProxy as a reverse proxy (and load balancer). Forrest uses etcd and confd to register/unregister new Docker containers with HAProxy, so that traffic is automatically routed to the available containers.

    In the scenario I will describe in this post, I assume three servers:

    name      private IP   packages
    gw1       …            haproxy, confd
    docker1   …            docker, forrest
    etcd1     …            etcd

    The private IPs allow the servers to communicate with each other, without exposing this communication interface to the public internet.

    If you don’t have a private network between your servers, for whatever reason, make sure to read the documentation on the different components carefully to make sure your setup is still secure.

    Let’s start with etcd.


    etcd is, basically, a key/value store. It allows easy storage of configuration values in a central place through a simple REST API.

    For example, we could store a server address under a name like this (host and value are placeholders - adjust them to your setup):

    $ curl -XPUT http://<etcd-ip>:4001/v2/keys/app/servers/web1 -d value="<server-ip>:<port>"

    and then get the value back with

    $ curl http://<etcd-ip>:4001/v2/keys/app/servers/web1

    We will use it to store exactly that: our Docker container IPs plus their names, so we can find the containers automatically as they become available, add them to our reverse proxy, and remove them again when they go down.

    Let’s start etcd:

    nohup etcd/bin/etcd -bind-addr=<private-ip>:4001 > /var/log/etcd.log &


    confd is insanely useful, yet surprisingly simple to use. Basically, all it does is watch a config server (etcd or Consul) for changes.

    If it detects a change, it uses a template to render a new version of a config file with the new values and runs a command to reload the configuration (in our case, for instance, service haproxy reload).

    This way it allows “live” configuration changes.

    For that to work we need two files: The template and the configuration. Here is the configuration:


    [template]
    src        = "haproxy.cfg.tmpl"
    dest       = "/etc/haproxy/haproxy.cfg"
    keys       = ["/app/servers"]
    reload_cmd = "/usr/sbin/service haproxy reload"

    This defines where the file will be written (“/etc/haproxy/haproxy.cfg”), which template to use (“haproxy.cfg.tmpl”), the root key under which to find the configuration values, and the command to run once the config has been rewritten.

    A template for HAProxy could look like this:


    defaults
      log     global
      mode    http

    listen frontend
      bind *:8080
      mode http
      stats enable
      stats uri /haproxy?stats
      balance roundrobin
      option httpclose
      option forwardfor
      {{range $server := .app_servers}}
      server {{Base $server.Key}} {{$server.Value}} check
      {{end}}

    That’s a very simple HAProxy config that listens on port 8080, gathers and exposes some stats (nice for playing around with it), and does simple round-robin load balancing.

    The last three lines are interesting, as confd comes into play here…

    The {{range $server := .app_servers}} block iterates over each key within etcd under the /app/servers path and writes a config line containing server, followed by the key’s name and its value.

    So if etcd held, for example, the following values (addresses are illustrative):

    /app/servers/web1      = "10.0.0.3:49153"
    /app/servers/web2      = "10.0.0.3:49154"
    /app/servers/test_web1 = "10.0.0.4:49153"

    then we would get these lines in the config file:

    server web1 10.0.0.3:49153 check
    server web2 10.0.0.3:49154 check
    server test_web1 10.0.0.4:49153 check

    which means that HAProxy will balance requests between these three servers and perform a health check on each of them.

    This way we can now use etcd and confd to dynamically reconfigure our HAProxy by changing the values stored in etcd. Neat!

    Let’s start confd then:

    nohup confd -verbose -interval 10 -node '<etcd-ip>:4001' -confdir /etc/confd > /var/log/confd.log &


    For an in-depth introduction to Docker, check out the incredibly fantastic interactive tutorial on their site.

    For this scenario, I will assume you have a working Docker host and a Docker image for your application. I will call that image ‘demo/myapp’ for now.

    The image should only expose a single port.

    Start & Stop with Forrest

    All forrest does is launch (or stop) a container from an image and announce the newly started container, including the port it exposes, to etcd.

    Thus, it allows us to pick up the address:port combination from etcd and put it into our HAProxy configuration - which effectively means containers are automatically added to or removed from the load balancing as they come up or go down.

    This is how that would look:

    user@docker1: $ export ETCD_HOST="<etcd-ip>:4001"
    user@docker1: $ export ETCD_PREFIX="app/servers"
    user@docker1: $ export FORREST_IP="<docker1-ip>" # possibly optional
    user@docker1: $ forrest launch demo/myapp
      Launching demo/myapp on ...
      Announcing to ...
      demo/myapp running on Port 49731 with name sharp_mccarthy
    user@docker1: $

    And you should now be able to reach this application through the HAProxy on your public IP or domain name.

    If you want to stop a running container and remove it from etcd (and thus from the HAProxy) you do

    user@docker1: $ forrest stop sharp_mccarthy
      Stopping sharp_mccarthy ...
      Found container 8d3299a3427b ...
    user@docker1: $


    The future

    I’m actually using forrest to control a pretty small number of containers (I used it with ~30 containers a short while ago for fun & profit) and I particularly like that it enables rolling updates (the “one - some - many” strategy, where you try out a new version with only a few requests and then ramp it up gently once you’re confident).

    Obviously this tool isn’t made for large-scale deployments with a fleet of Docker hosts - but I believe it’s a good foundation to build something on top of.

    For instance, you could build a simple tool that automatically SSHes into a server from a cluster of Docker hosts and launches new instances when load rises - or you could write a small agent that does so… there are many possibilities.


  4. Write a simple parser in Bison

    Ever wondered how to write a simple parser? Let’s write a simple math parser then!

    You will need Bison and a C compiler installed on your system. On a Debian-like Linux system you can install them by running

    $ apt-get install bison gcc

    The skeleton of our parser file

    Now create a new file, called calc.y. This file is laid out like this:

    %{
      /* Prologue with C declarations */
    %}

    /* Bison declarations */

    %%
    /* Bison grammar definition */
    %%

    /* C code */

    Now we want to write a simple parser that can parse mathematical expressions like this:

    20 - (3 * 4) / 2
    >  14

    so let’s start with our grammar.

    Our parsing grammar

    Before we think closely about our actual grammar, we need to identify which tokens don’t need further parsing - in our case, that’s numbers.

    1. Our grammar should work on a single line as well as on a file containing multiple lines, one expression per line.
    2. A line can be empty, and so can the whole input.
    3. We will print out the result of the parsed expression for each line.
    4. An expression is either:
       a. a number,
       b. another expression in brackets, or
       c. two expressions combined by one of the basic mathematical operations (addition, subtraction, multiplication, division).

    In our Bison file it looks like this:

    input:  /* empty */ | input line;
    line:   '\n' | exp '\n'  { printf ("> %.10g\n", $1); };
    exp:    NUM              { $$ = $1;      }
          | exp '+' exp      { $$ = $1 + $3; }
          | exp '-' exp      { $$ = $1 - $3; }
          | exp '/' exp      { $$ = $1 / $3; }
          | exp '*' exp      { $$ = $1 * $3; }
          | '(' exp ')'      { $$ = $2;      };

    Each type of node in our grammar is defined by a list of statements. For example:

    input: | input line;

    means: the input is either empty, or more input followed by a line. Besides simply defining a node type by recursion over other node types, you can also give rules for the evaluation.

    For instance:

    exp '+' exp { $$ = $1 + $3; }

    means that this will evaluate to the first token (“exp”) plus the third token (the second “exp”).

    Now we also need to tell Bison that NUM is a token. While we are at it, we give the operators their usual associativity and precedence - without this, Bison reports shift/reduce conflicts, and an expression like 8 - 2 - 1 would group as 8 - (2 - 1).

    In the Bison declarations section add:

    %token NUM
    %left '+' '-'
    %left '*' '/'

    Now we will load some C headers and declare the default data type in the C declarations (the prologue):

    %{
      #include <string.h>
      #include <ctype.h>
      #include <stdio.h>
      #include <stdlib.h>
      #define YYSTYPE double
    %}

    Some parsing code

    int yylex (void) {
      int c;
      /* skip white space  */
      while ((c = getchar ()) == ' ' || c == '\t')
        ;
      /* process numbers   */
      if (c == '.' || isdigit (c)) {
        ungetc (c, stdin);
        scanf ("%lf", &yylval);
        return NUM;
      }
      /* return end-of-file  */
      if (c == EOF)
        return 0;
      /* return single chars */
      return c;
    }

    void yyerror (char *s) {
      printf ("%s\n", s);
    }

    int main (void) {
      yyparse ();
      return 0;
    }
    This code provides the yylex function that reads the input for Bison, the yyerror function that is called when something goes wrong while parsing, and a main function that just calls yyparse.

    Bison provides the yyparse function and will call our yylex function to get data for parsing.

    Let’s look at our whole Bison file:

    %{
      #define YYSTYPE double
      #include <string.h>
      #include <ctype.h>
      #include <stdio.h>
      #include <stdlib.h>
      int yylex (void);
      void yyerror (char *s);
    %}
    %token NUM
    %left '+' '-'
    %left '*' '/'
    %%
    input:  /* empty */ | input line;
    line:   '\n' | exp '\n'  { printf ("> %.10g\n", $1); };
    exp:    NUM              { $$ = $1;      }
          | exp '+' exp      { $$ = $1 + $3; }
          | exp '-' exp      { $$ = $1 - $3; }
          | exp '/' exp      { $$ = $1 / $3; }
          | exp '*' exp      { $$ = $1 * $3; }
          | '(' exp ')'      { $$ = $2;      };
    %%
    int yylex (void) {
      int c;
      /* skip white space  */
      while ((c = getchar ()) == ' ' || c == '\t')
        ;
      /* process numbers   */
      if (c == '.' || isdigit (c)) {
        ungetc (c, stdin);
        scanf ("%lf", &yylval);
        return NUM;
      }
      /* return end-of-file  */
      if (c == EOF)
        return 0;
      /* return single chars */
      return c;
    }

    void yyerror (char *s) {
      printf ("%s\n", s);
    }

    int main (void) {
      yyparse ();
      return 0;
    }

    Save this file as calc.y and run bison calc.y. This generates a new file called calc.tab.c, containing the C code for our parser. To turn it into an executable, run gcc -o calc calc.tab.c, and you should end up with a file called calc.

    You can run this file like this:

    $ echo "2*3" | ./calc
    > 6


  6. A really, really simple static site generator

    When I asked for suggestions for a simple tool to generate static HTML pages from Markdown files, the answers I got didn’t match my definition of simple - and that made me sad.

    TL;DR: All the static site generators I found are pretty heavy in terms of dependencies and ship with a lot of things I don’t need, so I created SuSi - a really simple node.js-based site generator of roughly 60 lines. Give it a go if you want something truly simple.

    A simple job

    I wanted to get a super stupidly simple job done quickly:

    Compiling a few markdown files into HTML, using a shared HTML skeleton around it.

    Static Site Generators

    That is exactly the domain of static site generators - but there are a ton of them, and even the “top list” counts a whopping 26.

    And all of them appear to come with lots of stuff I don’t need at all, such as:

    • Templating languages such as ERB, Jade or Handlebars
    • Preprocessors like Coffeescript, Sass, Less or Compass
    • Build tools like Grunt or Gulp

    which is fine if you need more complex features - but I just want to parse some Markdown into HTML and put a static template around it.

    Susi, oh Susi…

    So I got fed up with all those feature-laden tools and decided to figure out what is really essential: parsing Markdown into HTML and putting it into a template. The result is SuSi.

    I used it to revamp the content on my website, and it has only two dependencies: Node.js (because it’s a node.js tool) and Marked, to parse the Markdown files.

    The whole source code counts 61 lines. That’s pretty small and simple.
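SuSi itself is a Node.js script, but the essential trick - dropping rendered HTML into a shared skeleton - is tiny in any language. Here is that half sketched in Go (the `{{content}}` marker is made up for this sketch; the Markdown half is left to a library like Marked):

```go
package main

import (
	"fmt"
	"strings"
)

// wrap inserts already-rendered HTML into a shared skeleton by
// replacing a placeholder marker. The "{{content}}" marker is made
// up here; SuSi's actual template convention may differ.
func wrap(skeleton, contentHTML string) string {
	return strings.Replace(skeleton, "{{content}}", contentHTML, 1)
}

func main() {
	skeleton := "<html><body>{{content}}</body></html>"
	// In the real tool the content comes from a Markdown parser
	// such as Marked; a literal stands in here.
	fmt.Println(wrap(skeleton, "<h1>Hello</h1>"))
}
```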

    That’s it.

    And here’s a tiny demo:

  7. Wii homebrew game “Spaaace”, Day 5

    The game now has a single-player mode as well as a two-player mode. You get a score, and the game is over when you lose all of your 10 ships. That’s the first milestone - and already a pretty addictive game :)


  8. Now the aliens shoot back!

    Also a better spawning mechanism and somewhat better balancing. Not to forget: background music!

    The game and source code can be found at https://github.com/AVGP/wii-spaaace

  9. When your Wii homebrew game compiles & runs as expected


  10. From the idea to the play store within a day

    Thanks to @IonicFramework, @PhonegapBuild, @Cloud9IDE and @OpenDataZH, I was able to bring an idea fully to life within one day.

    Here’s what and how.

    The idea

    In November 2013, at the Open Data Hacknights in Zurich, an idea was presented: using the data on trees in public spaces to create an app for people with pollen allergies in Zurich.

    The application should be designed for use on a mobile device and should also include a prognosis report plus current pollen and weather data, giving affected people a single place to get all relevant information.

    I got a very, very basic prototype working which was just a map with the tree locations marked in different colors.

    The old prototype was neither beautiful nor useful

    Unfortunately, I didn’t really get around to moving forward with it until this weekend.

    Getting started

    Ionic Framework

    As the app is envisioned to work well on mobile, I decided to use Ionic Framework to build the application.

    Getting started with a simple boilerplate is as easy as:

    npm install -g cordova ionic  
    ionic start PROJECTNAME tabs

    At this point I included a Google map and some settings, extracted the data into a JSON file stored within the project, and voilà: a nice application was ready.

    The web version of the application is starting to look reasonably usable

    The settings allow selecting which trees are displayed on the map

    At this point I have the web version ready (see it in action here) - it even has basic offline capabilities and allows fullscreen mode if you use Android or iOS and “Add to homescreen”.


    As a big fan of the web, I value being able to work independently of the device I am on - so I often use Cloud9 as my development environment of choice. It connects with GitHub and gives you a full virtual workspace, including a static file web server and a console. For this project I created a new workspace and installed the two node modules mentioned above.

    A Cloud9 workspace has it all: terminal, syntax highlighting, preview web server

    This creates the full Ionic Cordova project - but since I won’t build the project for Android directly on Cloud9 and use Phonegap Build (see below) instead, I can delete everything except the www folder.

    When I am done, I can use git to get it on Github, ready for building with the Phonegap Build service.

    git init  
    git add .  
    git commit -m "Initial commit"  
    git remote add origin <Github Repo Git URL>  
    git push -u origin master  

    Phonegap build

    If you don’t want to go through the hassle of setting up the various SDKs for the different platforms, or you simply don’t have, for instance, any Apple hardware to run Xcode on, you can use Adobe’s build service for Phonegap applications.

    This service is free for open source projects and you can start by just importing from your Github repository.

    You can then scan a QR code to install the application on your device.

    Phonegap build lets you easily create hybrid apps for the most important platforms

    If you upload your keys, you can also create the release version of your app and push it to the stores.

    This way, I got the app into the Play Store within an hour.


    Times are changing fast - and our development habits should adapt to this.

    By using a few tools, I was able to get from a rough prototype to a fully featured app in the Google Play Store.

    I think we as developers are experiencing a wonderful era, where the whole supply chain is leaner than ever before.

    Use these powers to the benefit of many and yourself.