1. Lightweight virtual containers with virtual networking

    What and why?

    Sometimes you want to have a quick & cheap way of isolating a few processes in a lightweight, para-virtualized environment.

    When full-blown virtualization, for instance with Xen or KVM, is just too much (or not suitable, because you’re already running in a virtual environment) and process-level isolation (such as docker.io provides) isn’t enough, User Mode Linux is for you.

    Example:

    Multiple isolated build environments for Continuous Integration

    One scenario is running integration tests between multiple components on a shared CI server (e.g. Jenkins).

    Let’s elaborate this scenario a little more:

    Say we have a Jenkins instance that is used to build multiple projects, say an API and a “frontend webapp” (I’ll reference it as “Frontend” from now on). The API can be built and tested independently, but that requires the installation of a bunch of libraries, gems and other fun things.

    The Frontend is a little more complex: in addition to needing its own dependencies (gems, libs, modules, …), it also needs a running instance of the latest stable API, and a running API in turn needs a database and other components (let’s say a message queue).

    The safest bet is to have every system under test (API stand-alone, Frontend instance + API instance + database + message queue) isolated, always freshly created when the tests start and torn down when done.

    Our goal is to set up one virtual container in which we can build the API, a second one in which we will set up the database, the message queue and the API, and a third one for the Frontend, with a virtual network between the second and third container.

    With User Mode Linux you can achieve this goal pretty quickly and easily:

    Install user-mode-linux and the utilities needed:

    $ apt-get install wget user-mode-linux uml-utilities bridge-utils debootstrap realpath
    

    Set up a new machine

    You can use my create_machine script which is based on this article:

    $ wget https://gist.github.com/AVGP/5412047/raw/ee9057124fa32edbf5c427955cc0be4012015ec5/create_machine_switched.sh
    $ chmod +x create_machine_switched.sh
    $ ./create_machine_switched.sh
    

    The script will ask you for a few things:

    • The hostname of the container (we’ll use “API-build”, “API-run” and “Frontend”)
    • Whether you want SSH to be installed (“y” for our example)
    • Whether you want to configure the network (“y” for our example)
    • A couple of network settings. For this example we’re using the following:
        • IP address: 10.10.10.2 for API-build, 10.10.10.3 for API-run, 10.10.10.4 for Frontend
        • Network: 10.10.10.0
        • Broadcast: 10.10.10.255
        • Subnet mask: 255.255.255.0
        • Gateway: 10.10.10.1
    • The root password for the container

    Setting up the host-side networking

    Before starting the containers, we need to set up the network switching on the host machine.

    First of all add the following to /etc/network/interfaces:

        auto tap0
        iface tap0 inet static
                address 10.10.10.1
                netmask 255.255.255.0
                tunctl_user uml-net
    

    Then edit /etc/default/uml-utilities, uncommenting the line with UML_SWITCH_OPTIONS and, if needed, changing it to look like this:

    UML_SWITCH_OPTIONS="-tap tap0"
    

    Then stop the uml-utilities daemon, bring up tap0 and start uml-utilities again:

    $ /etc/init.d/uml-utilities stop
    $ ifup tap0
    $ /etc/init.d/uml-utilities start
    

    Afterwards, you can start the containers by changing into their directories and executing ./run; preferably do this in a screen (or tmux) session:

    Start the containers

    $ cd API-run
    $ ./run
    

    and the same for the other two machines. You should be able to log in as root on all running instances and ping the internet as well as the other instances.

    root@API-run # ifconfig eth0
        eth0   Link encap:Ethernet  HWaddr d2:45:9b:1f:ba:c6
                  inet addr:10.10.10.3  Bcast:10.10.10.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:25 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1092 (1.0 KiB)  TX bytes:2198 (2.1 KiB)
                  Interrupt:5
    root@API-run # ping -c1 10.10.10.4
    PING 10.10.10.4 (10.10.10.4) 56(84) bytes of data.
    64 bytes from 10.10.10.4: icmp_req=1 ttl=64 time=0.192 ms
    
    --- 10.10.10.4 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
    

    Set up NAT on the host

    For the containers to be able to reach the internet, you need to enable masquerading and IP forwarding using these two commands:

    $ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    $ echo '1' > /proc/sys/net/ipv4/ip_forward
    

    Now you can set up all the applications inside the containers (you may use provisioners such as Chef or Puppet) and enjoy your virtual testing environment and its virtual networking.

     

  2. Things I learnt from deploying my first larger Meteor.js app

    With Google Reader said to be discontinued, I had to look for alternatives. I had a look at many of them, but none made me really happy - plus, I was already looking for a way of building a “real” app that I could use to learn more about the woes and joys of maintaining a Meteor.js app and potentially running it at a bigger scale on my own infrastructure.

    So building my own RSS reader and offering it to everybody as a hosted service on my private server seemed like the perfect way to gain more “real life” experience with Meteor.js.

    The result is my latest side project Neee.ws, which is also on Github.

    Okay, enough context - what went well, what went badly?

    (Re-)Deployment

    First of all, redeployment involved the following steps:

    1. Run mrt bundle
    2. Copy the tar to the server
    3. Unpack it
    4. Delete the fibers package (because I bundle on a different architecture than my server’s)
    5. Restart the application

    To make this a little more convenient, I wrote a little shell script that does all of this and also keeps a backup of the previously deployed version.

    To run the app on the server I use forever, which is super-simple and works reliably for me.

    Beware of large collections!

    I totally forgot about the client-side mongo instance. With autopublish, Meteor downloads the whole database (!) to the client for caching. Even if you limit your query results, the client will still get the whole database - that is part of the autopublish magic. Hence you should remove the autopublish package, use Meteor.publish, Meteor.subscribe and Deps.autorun, and do as much filtering as possible on the server side so that only the most important documents are cached locally.
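
    A minimal sketch of how that looks (the collection, publication and field names here are made up for illustration):

    // shared code: a collection, assumed to be called FeedItems here
    FeedItems = new Meteor.Collection("feedItems");

    if (Meteor.isServer) {
      // publish only the newest documents for the logged-in user
      Meteor.publish("recentFeedItems", function () {
        return FeedItems.find({userId: this.userId},
                              {sort: {date: -1}, limit: 50});
      });
    }

    if (Meteor.isClient) {
      Meteor.subscribe("recentFeedItems");
      Deps.autorun(function () {
        // reruns reactively whenever the subscribed documents change
        console.log("cached items:", FeedItems.find().count());
      });
    }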

    The performance gain is enormous - from 28 seconds loading time down to 1.9 seconds!

    The many joys of Meteor.js

    The development was really quick & easy - thanks to the many nice things Meteor gives you for free. I will just give a short list here:

    • Google Login + OAuth token for API access
    • Reactivity in data sources
    • Helpful Meteorite packages, particularly meteor-router
    • Easy setup & bundling
     

  3. The native mobile app fallacy

    Native, web, hybrid?

    When it comes to mobile application development, there’s far too much evangelism out there.

    The web fans sing their

    “write once, deploy everywhere”

    the native party yells back

    “But it’s slow and doesn’t look native!”

    but both sides often lack an understanding of the advantages and disadvantages of each approach, as well as the pragmatism to re-decide on this matter based on the project at hand.

    Does native always mean better?
    Certainly not. Facebook is a great example of this. Their claim was that the Android app’s quirks and issues originated in its hybrid nature.
    The reality is: the new native Android app does not really work any better, and Sencha gave impressive proof that a non-native app could do better.

    So a native app is not by definition better.
    If you write inefficient code, the performance will be bad. And the development iterations may be longer, as you can’t easily test on many different platforms simultaneously without having to repeat code modifications for each platform.
    If you also run a web version of your project, the mobile web or the hybrid approach allows you to quickly iterate on both of them.

    As always you should pick the right tool for the job, which can be a native or web app. It can even be a hybrid app.

    The tricky thing is to find out when to use what.

    Quick overview of what’s what

    • Native: Requires code tailored to each supported platform but gives you direct access to the native features.
    • Mobile web: The same code runs on many (if not all) platforms. It can also be delivered to desktop browsers, but cannot be installed via app stores and cannot access all device features. As the code runs in the browser, which runs on top of the operating system, some features may be slower than their native counterparts.
    • Hybrid: Combination. Same code can run everywhere, all native features can be accessed and native components can be used to get a good performance. Hybrid apps can also be distributed in the app stores.

    If you have a big team of experienced mobile developers for the different platforms at hand, you can of course leverage that.

    But when you have a team of experienced web developers instead and need to quickly get an app out there (maybe even flanked by a web app), mobile web or hybrid apps are a great tool for achieving this.

    The thing many people miss is: it’s not only good for prototyping and quick iterations - it is a viable option for building great apps.
    With modern Javascript frameworks and libraries (such as Angular and Lungo in combination with PhoneGap) you can use web technologies to build mobile apps that can play in the same league as native apps.

    It needs to look&feel native, right?
    First of all: If you want to achieve perfectly native look&feel, you’re not going to have a good time with a hybrid approach. But do you really want that?

    Let’s look at the Foursquare app for a second:
    Foursquare app ©Foursquare

    and why not take another look at the Facebook app: Facebook on Android and iOS

    © six revisions, see their great article about native vs. mobile web

    It’s easy to spot that even the native Facebook app doesn’t really look native. And that’s no problem! The user doesn’t really care, as long as it looks great and feels right.

    This is something most people get wrong - you can have your very own look in your apps. Stop mimicking the native look and feel and start delivering something great and useful to your users.

    But to get into the stores, you need native, right?
    No, you don’t.
    You can for example use PhoneGap to package your hybrid app for many different platforms (e.g. iOS, Android, Windows Phone, BlackBerry).

    Okay, but what if I need the native features, like the camera?
    Again, PhoneGap is the answer.
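
    For instance, taking a photo with PhoneGap looks roughly like this (a sketch along the lines of the PhoneGap camera API of that era; the “photo” element is hypothetical):

    // ask PhoneGap for a photo; the result arrives as base64 image data
    navigator.camera.getPicture(onSuccess, onFail, {
      quality: 50,
      destinationType: Camera.DestinationType.DATA_URL
    });

    function onSuccess(imageData) {
      // show the photo in a hypothetical <img> element with id "photo"
      document.getElementById("photo").src = "data:image/jpeg;base64," + imageData;
    }

    function onFail(message) {
      alert("Failed because: " + message);
    }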

    But how do you build a great app?
    There are many different ways to build great hybrid apps. To achieve the best results, you should focus on the following aspects:

    • Responsiveness - the app has to respond quickly to user input
    • It has to feel right - don’t imitate the perfect native transitions. Use decent transitions that don’t give the impression of “wanna-be native”
    • Be aware that the user won’t always have a good internet connection (or any at all) - leverage local storage whenever you can and minimize the data to transfer
    • Use native components (e.g. via PhoneGap plugins) whenever needed
    • Be careful: not all platforms support every HTML5 feature yet. Provide fallbacks
    • Test your app on different devices, especially on older ones
     

  4. Why I don’t care that you use Ember

    Recently on HackerNews, Robin Ward explained why Discourse uses Ember.js.

    In the first part he describes why Javascript client-side MVC is a good thing - and boy, kudos for that, he’s right there.

    In the second part however, he explains why they chose Ember from the giant jungle of framework choices out there. And having evaluated Ember, Knockout, Backbone and Angular, I disagree.

    The very first example of Angular.js he shows is transclusion - one of the far more advanced features. What he doesn’t tell you about are the really simple examples in the documentation, including a typical project you’ll find right on the Angular homepage.

    It’s as easy as one, two, three - it took a job applicant with little JS experience only a few minutes to get started with it.

    Tell me, what’s complicated here.
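
    For reference, this is the kind of two-way binding sample the Angular homepage greets you with (markup only - my paraphrase, not a verbatim copy):

    <!-- ng-model binds the input to "name"; the paragraph updates as you type -->
    <div ng-app>
      <input type="text" ng-model="name" placeholder="Your name">
      <p>Hello {{name}}!</p>
    </div>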

    it is important to me that the Ember community didn’t spring out of a corporate sponsorship

    Fine. But this, to me, is not a valid argument. This is blah-blah. Google backs Angular.js. While Yehuda Katz is a great guy with a great track record in Open Source, some people consider Google “the internet”.

    There are only a few bigger players out there - and as Angular is open source, you can pick it up and do whatever you feel like with it, if Google abandons it (pssh, as if) or develops it in a direction you dislike. Stop bitching about rockstar people or “corporate is bad, mhkay?”. I wanna get a job done and Angular.js does this pretty well.

    Plus: when I went to StackOverflow a few months ago because I had a few problems, there was one (in numbers: “1”) thread for one of them. One. With no answers. The rest was fishing in the dark. Is that the “excellent support of the community”? For Angular I only had to look something up once - and found a lot of support activity in various channels.

    This may have changed, as Ember.js is moving forward, but Angular was already there at that point and is moving forward at a very fast pace as well.

    Another thing that is only marginally mentioned is the fact that Angular.js uses vanilla-style Javascript and the DOM in a pretty normal fashion. No fancy-pants .extend() stuff, no class-hierarchy magic. No attr() or set()/get() stuff. Just plain JS a la “$scope.blah = 123”. That’s it.
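
    To illustrate, a hypothetical controller in the plain global-function style Angular 1.x supports:

    // a plain function serves as a controller - no inheritance, no boilerplate
    function DemoCtrl($scope) {
      $scope.blah = 123;               // plain assignment, no set()/get()
      $scope.increment = function () {
        $scope.blah = $scope.blah + 1; // the view updates automatically
      };
    }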

    Also, you can comfortably consume APIs of different flavors.

    Is it proper REST? Yay, just throw in ngResource!

    Oh, it’s unRESTy? Well, just throw in ngResource with adapted methods or use $http. Quick & easy. No tears.
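
    A rough sketch of both options (the endpoint paths are made up, and $resource/$http are assumed to be injected):

    // proper REST: ngResource gives you query/get/save/delete for free
    var User = $resource("/api/users/:id");
    var users = User.query();

    // unRESTy API: just talk to it directly with $http
    $http.post("/api/doFunkyStuff", {id: 42})
      .success(function (data) { $scope.result = data; })
      .error(function () { $scope.result = null; });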

    Even if this article may sound a little harsh and rant-y, I think Ember.js is nice. But looking at it from a more practical perspective (e.g. having developers that are familiar with good old vanilla Javascript and HTML, as well as having to deal with complex systems and non-REST APIs), Angular.js comes at a lower cost when it comes to adoption and to producing most of the web applications you wanna build these days (and possibly tomorrow).

     
  5. LEAPiano - a LeapMotion demo using the Javascript API, HTML5 and CSS.

    The actual LeapMotion device sits in front of the display and allows precise & responsive control using your hands (or other things) floating over the device.

    The audio comes from data URIs; no audio files are needed.
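
    The idea looks roughly like this (a sketch: the base64 payload is truncated here, and the key-trigger logic is simplified compared to the real app):

    // one Audio element per key; the src is a data URI, not a file
    var keyC = new Audio("data:audio/wav;base64,UklGRi..."); // truncated placeholder

    // leap.js delivers a stream of frames; map fingertip positions to keys
    Leap.loop(function (frame) {
      frame.pointables.forEach(function (pointable) {
        // tipPosition is [x, y, z]; trigger when the finger dips low enough
        if (pointable.tipPosition[1] < 100) {
          keyC.currentTime = 0;
          keyC.play();
        }
      });
    });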

    The code is available on GitHub

     
     

  6. Writing web applications - all the way on the web!

    Lately, the web has become a really powerful platform.

    With things like getUserMedia, WebRTC and WebGL it can do many things you may not expect to be possible in your browser.

    This article wants to shed light on one more use case: developing and deploying a web application from scratch - using only your browser.


    Cloud9 - the IDE is your browser

    First of all, you need a place to write and test your application, basically you need some sort of development environment. Cloud9 provides this.

    It allows you to build web applications, run Ruby or node.js applications, install gems as well as node packages, and run them.

    It integrates well with Git, FTP or SSH servers. The free plan includes public workspaces, but you can have private workspaces with the paid plan.

    Cloud9 serves your application from a globally reachable address but stops processes like node or rails after a certain time (which is long enough for testing, though).

    Bonus: Cloud9 is open-source.

    For a full-power web application you may need a database… The next step describes how to set up and manage a mongo instance from your browser. If you don’t want mongo, check Appfog below.

    MongoHQ - your database as a service

    MongoHQ allows you to create and manage mongo databases via a simple web interface.

    You can also view, edit or delete documents or collections from your browser or connect using the mongo shell (which also integrates well with Cloud9).

    Appfog - cloud hosting made really easy

    Appfog is like Heroku, but it offers you a lot of different languages, frameworks and tools.

    You can start by creating an application from their web interface.

    Deployment is done via their “af” rubygem from the console (I use the console from Cloud9, which works like a charm).

    Afterwards you can create and bind services - basically instances of the many supported databases (mongo, redis, postgres, mysql, …) - to use with your application.

    In addition, you can scale your application by adding more RAM or instances to it.

    Now we’ve created our web app + database and are hosting it in the cloud - all from our browser.

    But this is not the end of the story - let’s test how well it performs!

    Blitz.io - load testing as a service

    Blitz.io allows you to test how your application reacts to a given number of users hitting your website over a given time.

    You can configure where the bombardment of users should come from (they have a bunch of regions scattered all over the world), the pattern of users hitting your application (e.g. linear growth or saw-tooth patterns) and how long the “rush” should last.

    Afterwards you get a summary of how many hits per second were successful, how many errors and timeouts happened and some more details.

     

  7. FirefoxOS: Does it have a chance?

    After reading Sebastian Anthony’s article about why Firefox OS does not have a chance in the mobile market, I’d like to give my opinion.

    First of all: the article says that even if Firefox OS gained interest and market acceptance by making web applications run like native applications on mobile devices, others like Android or iOS could just do the same.

    I don’t see any problem. In fact, that would be great.

    Firefox OS is not just an operating system, it is a philosophy.

    The web is a powerful and fast-progressing platform and has already started to take the world of software and services by storm.

    As the web advances further from web sites to web applications, it is only natural to step into the mobile market, where applications are the center of the ecosystem, while the platform below them does not matter much anymore.

    The first few have already made this step - with success!

    But all of them are just “packaging” the web into a native veneer - they build something around what is already part of the platform: the web.

    Mozilla is only being consistent in taking it one step further: in Firefox OS, the web is the platform.

    If this attempt is successful, it does not matter whether the web as a mobile platform is called “Firefox OS”, “iOS”, “Android” or whatever - it’s the web.

    The web is open. The web is everywhere. The web is built upon HTML, CSS and JavaScript - three powerful and still evolving technologies that I love. Market shares of Firefox OS? I couldn’t care less, as long as the web wins.

     

  8. Measuring developer performance is tricky.

    As a manager, you usually want to know how your employees are performing.

    You want to know if you need to give them training, if you need to correct some misbehaviour or if their productivity is simply bad.

    But how would you do that for a developer (especially when you’re not a developer yourself)?

    This is surprisingly difficult, as there is hardly a good metric for measuring performance without reading their code.

    Let’s take a look at some metrics and their problems:

    1. Lines of code

    You simply look at how many lines of code the developer created.

    This is an obviously bad way of measuring performance - if a good developer finds an elegant (and still readable and maintainable) solution in just a few lines of code while a not-so-good developer needs pages and pages of code to do the same, the metric will promote the wrong person. That obviously leads to less productivity and possibly less quality.

    2. Number of commits

    I’ve seen this one once and I can’t stop shaking my head about it.

    This metric has nothing to do with the actual work. You can rank well in this metric by simply committing every single line you changed - it’s completely unrelated to the work you actually did. A simple example: somebody corrects a few typos in a template and commits after every single corrected typo. They may have 40 commits now. Somebody else implements a big feature in the same time, adding new functionality and lots of potentially tricky code - but only commits when a block of functionality is done - let’s say 10 commits. According to this “metric”, the second person is performing worse than the one who just corrected typos without adding much value.

    3. Number of user stories / tickets / issues

    This one isn’t too bad, but still not good. You’re measuring the number of things done.

    Somebody who resolved 10 tickets is better than somebody who just resolved 3, right?

    Wrong.

    First of all: tickets may differ in complexity. A ticket that says “Move the login button from the left to the right” is extremely different from “Implement a real-time video streaming feature”. The first one may take 20 minutes, the other one may take several hours.

    Variation: Taking complexity into account

    I thought I had a solution for this one: you weight each ticket by its complexity. Somebody doing 10 user stories with 1 story point each (easy stuff) would then have the same performance as somebody who resolves 1 user story with 10 story points (complex stuff).

    Anyways, this is flawed as well.

    Implementing something the right way instead of “quick & dirty” requires additional time. To increase your “productivity” when this metric is applied, you would usually go for “quick & dirty” to raise your throughput. That is a horrible incentive, because it punishes the developers who do it right. Quick & dirty solutions pile up as technical debt and ultimately lead to unmaintainable code that is pricey to fix.

    4. Time

    Time tracking is another option for measuring how somebody performs. But it has the same problem as the “lines of code” metric: a developer who solves a problem the right way in 20 minutes is punished, while a developer who takes longer is rewarded.

    Conclusion: Don’t use a naive metric, use communication

    There simply is no way of just applying some simple ruleset when you want to evaluate the performance of your developers. So, how do you do it then?

    My suggestion is: Listen to the team, listen to people who work with the developers you want to evaluate and listen to the developers as well.

    Team

    Listening to what the team communicates is crucial here.

    Is somebody always saying something like “I will have to look at this together with X”? The person in question may need training.

    Is somebody always saying they’re working on the same thing, without giving a reason? They may need help to perform better or they may not match the project.

    Is the team complaining about somebody being unproductive? You should take action.

    Is the team happy and things move forward? Let them go on with it! Don’t get in their way.

    Others

    You should also listen to other people working with the developers - for example customer support or the product managers.

    If they are happy with how the developers handle requests and tasks, you shouldn’t be worried about their performance.

    If you hear bad things (such as deadlines not being kept etc.), you should investigate this and resolve the issues.

     

  9. Twitter API 1.1 responds with status 401, code 32

    For the impatient readers: if you run into this, try sending the POST params in the querystring instead of the body. This doesn’t only apply to the node.js module twit; it is generally valid.

    The longer version:

    I was playing around with twit lately. It’s a wonderful node module that lets you interact with either the Twitter REST API or the Streaming API in a very easy way - possibly the easiest way I’ve ever seen.

    I tried to post a new Tweet saying “Hello World!”, which didn’t work.
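
    The call itself is trivial with twit (the credentials below are placeholders, of course):

    var Twit = require('twit');

    var T = new Twit({
      consumer_key:        '...', // placeholder credentials
      consumer_secret:     '...',
      access_token:        '...',
      access_token_secret: '...'
    });

    // POST statuses/update - this is the call that failed for me
    T.post('statuses/update', { status: 'Hello World!' }, function (err, reply) {
      if (err) return console.error(err);
      console.log('tweeted:', reply.text);
    });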

    The REST API responded with “Status 401, Could not authenticate you” and I started googling.

    The first hint that came up was to use v1 of the API instead of v1.1. This resolved the issue! But as v1 is going to be deprecated soon, I didn’t want to go that way - I wanted to find the root cause.

    I ran the tests for twit and found that it actually was able to tweet. Even with v1.1.

    The only difference was the text for the new tweet: The (successful) example from the tests did not contain an exclamation mark, while my (failing) example did.

    So I removed the exclamation mark from my example - and it worked.

    I looked through the many discussions of the “Code 32” issue on the Twitter developer portal and finally found https://dev.twitter.com/discussions/11280 that answered this:

    If you, for some reason, send the parameters for the POST via the querystring - instead of the “right way” for HTTP POST where the parameters go in the HTTP body - it works!
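
    For illustration, here is roughly what that workaround looks like with the request module, which signs querystring parameters into the OAuth signature (credentials are placeholders again):

    var request = require('request');

    request.post({
      url: 'https://api.twitter.com/1.1/statuses/update.json',
      oauth: {
        consumer_key:    '...', // placeholder credentials
        consumer_secret: '...',
        token:           '...',
        token_secret:    '...'
      },
      qs: { status: 'Hello World!' } // params in the querystring, not the body
    }, function (err, res, body) {
      if (err) return console.error(err);
      console.log(res.statusCode, body);
    });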

    I changed that and it worked surprisingly well - my pull-request to twit is currently pending.

     

  10. Jenkins+Github+Dashing+TV+Raspberry Pi = Project-Cockpit

    In our office we had a spare TV and wanted to use it as an information radiator in our room.

    As our setup involves Github where pull requests are built with Jenkins to verify everything is working as expected, I wanted to put the success rate of the builds, the build health and the latest open pull request on the screen.

    The screen is hooked up to a Raspberry Pi, so I had a vast number of possibilities to get something on screen.

    After having a look at Dashing, I decided to go for it and build the project-cockpit project on top of it.

    It uses the Jenkins API to get the ratio of successful to non-successful builds as well as the latest build state. It also calls the Github API to fetch the latest pull request, so it can display the name and picture of the author and the title of the pull request on the dashboard.

    This is what it looks like in action:

    In case the latest build is broken, it plays a YouTube video of an exploding nuclear bomb.

    The next step will be to show the JIRA tickets in the different stages (open, in progress, done).