Between reality and cyberspace


When we started building UserApp, we all agreed on one thing: focus on the things we did best and outsource the rest.

This played well into our choice to go with a CDN instead of hosting our front-end ourselves. After all, our front-end is just HTML/JS, so why bother hosting it ourselves…

Quick review: Samsung 11” Chromebook

The first impression in short (TL;DR)

The Samsung 11” Chromebook is a solid, cheap and lightweight device to get work done while on the go.

How it all came to be

I work a lot while I’m on the road - be it commuting on the bus or travelling around on a train or airplane.

During these times I am usually programming (mostly using lightweight or browser-based tools), writing blog posts (like this one) or preparing a slide deck.

I had been searching for a cheap, slightly smaller and lighter device for those rather simple tasks, so when I travelled to London I went into a store to get my hands on the 11” Chromebooks from HP and Samsung. Surprisingly, the Samsung one left the better overall impression - so I bought it - and here is an overview of my experience with it, written on an airplane heading home.

Getting started

After unwrapping it and plugging in the charger, I was surprised that the battery was completely empty but claimed it would be fully charged within the next 40 minutes. And it was!

That initial plus was dampened by the setup, which requires a WiFi connection but had trouble with the sign-in page of the wireless network at London Heathrow, so setting it up took a while. The guest mode, however, let me surf the net properly right away.

The next minor quirk was the initial tour application not responding to my inputs at all - but I can live with that.

There’s an app for that!

I believe in the web as being a powerful platform, so I was looking forward to trying out the many apps from the Chrome Web Store.

I was looking for a decent Markdown editor with a live preview feature, as that’s the way I enjoy writing my blog posts the most - and I found Poe. As my IDE of choice, I picked Cloud9 and after a couple of minutes I was ready to go.

My biggest worry was that the apps might not work without an internet connection, but writing this post on the flight back proves that they do indeed work offline.

The great thing: I’m not missing any application, and if I ever do, I could write that app myself using my beloved stack of web technologies.

How working on the Chromebook feels

The keyboard

The keyboard feels good and has decent spacing, sizing and feedback; the only thing I miss is the backlight I am used to on my MacBook.

The screen

The 11” screen is the perfect size for me, providing enough screen space to work while being comfortably sized for carrying it around. The brightness and viewing angle are sufficient - even this writing session, along with my experiments, now running a bit over 2 hours, isn’t a problem.

The rest

The case seems pretty sturdy but definitely has a plastic feel to it, the camera is pretty good, and the Chromebook starts up in a couple of seconds. The battery, according to the system panel, should last for around 6 hours, maybe 5 with WiFi turned on.

Only the WiFi seemed to be a bit flaky at times.

Quick bulletin: Picking a way of developing hybrid mobile apps

Due to popular demand: a super-quick guide to picking the right tools for you.

TL;DR: You may want to give my Hybrid Strategy Picker a whirl. Be careful though: it’s far too simplistic, but it’s a good starting point.

Warning: This is by no means complete and it may not turn out to be the right fit for you. For more details and the full story see my slides from SpainJS 2013.

With this out of the way, let’s step into this!

So you want to go hybrid..

..nice! But before you start looking at the technologies, you need to answer a couple of questions and make a couple of decisions.

This post aims at speeding this up a little for you.

Remark: The following choices can be considered production-ready.

Choice 1: Only JavaScript

If you come from the JS world, have been working a lot on the backend side of things and feel more comfortable writing pure JavaScript (or CoffeeScript, for that matter) all the time, this choice is for you.

You should have a close look at Sencha, as it gives you a one-stop solution: UI, logic/architecture and packaging right out of the box.

Choice 2: HTML / CSS / JS

You’re coming from the web world and know all parts of the frontend stack? That gives you a couple of choices on its own:

Option A: Kendo + Cordova

This gives you the UI & Logical architecture from Kendo (MVVM) and deployability to a range of platforms, including iOS and Android.

There’s even a simple way of getting into it right from your browser: check out Icenium to get up and running in no time.

Option B: LAB + Cordova

This gives you the wonderful world of Angular.js combined with the UI sweetness and easy prototyping Lungo.js provides. Check out our GitHub repository or the grunt-init task for the LAB to get started as quickly as possible.
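
To give a rough feel for how the Cordova and Angular sides meet in such a setup, here is a minimal, hedged sketch - not LAB-specific code. The module name "myApp" is purely illustrative, and cordova.js plus angular.js are assumed to be loaded via script tags:

    // Wait for Cordova to signal that the native layer is ready,
    // then bootstrap AngularJS manually instead of using ng-app.
    document.addEventListener('deviceready', function () {
      angular.bootstrap(document.body, ['myApp']); // "myApp" is a hypothetical module name
    }, false);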

I hope you now have some pointers to go and play with things that are ready for prime time - use these powers with care and never forget

Martin’s rule of mobile app development

Don’t trust anybody, test it yourself

So you don’t like Firefox OS? Change it!

After getting my Geeksphone PEAK last week, I have to say that it’s a pretty neat device.

However, some stuff was not making me happy. One example was the email app on the device.

It’s not bad, but I didn’t like the way read and unread mails are displayed.

So here comes a wonderful thing about Firefox OS: if you’re not happy with something, change it.

If you want to change the system UI like I wanted to, you need to make changes in Gaia, the collection of system apps. Start by checking out Gaia from GitHub and make sure to check out the right branch for your image.

  • PEAK stable: check out the v1.0.1 branch
  • PEAK nightly: check out the latest master

Now you can run

$ PRODUCTION=1 make
$ PRODUCTION=1 make install-gaia

and get your latest Gaia version on your phone. In my case I just worked on the email app, so I sped things up with

$ PRODUCTION=1 APP=email make
$ PRODUCTION=1 APP=email make install-gaia

to only recompile and push the email app.

So, now go and make your Firefox OS phone yours!

RemoteDOM - a web standard proposal for next gen web apps

Similar to how the Shadow DOM paved the way for custom elements built with web technologies, a “Remote DOM” could allow portions of a web app to be displayed on “remote” (i.e. external) devices, such as screens, Smart TVs, etc.

This brings interesting capabilities to web apps, such as leveraging external screens for presentation, supplemental content or second screen experiences.

Polyfill demo

I created a simple proof-of-concept demo to showcase the possibilities and to kick off a possible polyfill, should my proposed W3C group be accepted. I need your support there :)

Install dependencies & requirements

To run the demo, you need to have node.js (at least version 0.8) and npm installed. Please also install Grunt.js. Run

$ npm install

to fetch all other dependencies.

As of June 2013: To run the demo on a mobile device, make sure you use the Chrome BETA and enable WebRTC.

To run the examples, start a static webserver with

$ grunt server

Then visit http://localhost:3000/display.html to get a Display-ID.

On another device, visit http://localhost:3000/index.html, enter the Display-ID in the popup and see the result.

Clicking the button will add an image to the secondary display page.

Okay, so what can I do with that?

It’s a very basic proof of concept for a RemoteDOM implementation. You basically add some DOM node into your HTML, like this:

<div data-screen="remote"></div>

and everything that goes in there (even dynamically created content) gets transferred to the “remote” display but stays hidden in the browser displaying this document. To initialize the remoteDOM polyfill, call

remoteDOM.connectDisplay(window.prompt("Display-ID?"));

where the Display-ID you enter is the one you get from a call to

remoteDOM.getId();

and finally, you can create a display with

remoteDOM.createDisplay(someDOMElement);

where someDOMElement is the DOM node that will contain the remote content it receives.

You’ll need a display document as well to make use of this; display.html serves as a very basic example of what such a “display” document could look like.
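
Putting the pieces together, here is a minimal, hedged usage sketch based on the API described above (the element ID and the image URL are made up for illustration):

    // On the "display" device (e.g. the page served as display.html):
    var target = document.getElementById('remote-target'); // hypothetical container element
    remoteDOM.createDisplay(target);
    console.log('Display-ID to enter on the other device:', remoteDOM.getId());

    // On the sending device (e.g. index.html):
    remoteDOM.connectDisplay(window.prompt('Display-ID?'));

    // Anything added to the remote container - even dynamically - shows up on the display:
    var img = document.createElement('img');
    img.src = 'kitten.jpg'; // illustrative URL
    document.querySelector('[data-screen="remote"]').appendChild(img);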

Okay, but what would a proper implementation look like in the (near) future?

Well, if this stuff becomes a standard, display devices (think Apple TVs, Smart TVs, projectors, …) could provide such a display document for you (or some other means of getting your DOM content onto them), and browsers could provide display discovery services, so that you can select available displays in your network and “beam” content there.

Support the W3 group proposal

Get a (free!) W3 account and support the proposed RemoteDOM group @ W3C

Caveats of the polyfill

Right now the polyfill has two drawbacks:

  1. It uses Peer.js and its public peer broker server to connect the two peers.
  2. It sends the full innerHTML property of the remoteDOM container on DOM mutations, because DOM nodes can’t be easily converted into JSON yet.
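
For illustration, the mechanism behind these two drawbacks could be sketched roughly like this - a simplified, hedged version, not the actual polyfill source; 'THE-DISPLAY-ID' is a placeholder:

    // Observe the remote container and push its full innerHTML over a PeerJS data connection.
    var container = document.querySelector('[data-screen="remote"]');
    var peer = new Peer();                     // public PeerJS broker (drawback 1); an API key may be needed depending on the PeerJS version
    var conn = peer.connect('THE-DISPLAY-ID'); // placeholder ID for illustration

    var observer = new MutationObserver(function () {
      if (conn.open) {
        conn.send(container.innerHTML);        // drawback 2: ships the whole subtree on every change
      }
    });
    observer.observe(container, { childList: true, subtree: true, characterData: true });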

Quick tip: HTML5 offline cache manifest tips

If you want to leverage the power of HTML5 offline caching together with cross-origin AJAX, you may see failing requests.

In addition, you should specify a timestamp and/or a version number in the manifest to give browsers an easy way to know when to update their cache.

The trick is to list your “local” (i.e. belonging to your app) assets - such as HTML, CSS, JS and image files - explicitly in the caching section of your cache manifest and to mark everything else explicitly as “always download”. In my example the file looks like this:

CACHE MANIFEST
# 2013-06-07v3

index.html
img/icon.png
img/loader.gif
css/app.css
js/app.js
js/controller.js
js/directives.js
js/services.js
components/quojs/quo.debug.js
components/lungo.brownie/lungo.js
components/moment/min/moment.min.js
components/lungo.brownie/lungo.theme.css
components/lungo/lungo.icon.css
components/lungo/lungo.css
components/lungo-angular-bridge/dist/lungo-angular-bridge.js
components/angular/angular.js

NETWORK:
*

The last section, captioned “NETWORK”, lets the browser download everything that is not listed in the caching section of the manifest.

In my Chrome web app I saw failing requests until I marked those other requests as non-cached in this way.
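
As a small, hedged addition: once you bump the version comment in the manifest, the browser re-downloads the cache in the background, and a snippet along these lines lets the page switch to the fresh cache right away:

    // Listen for a finished background update of the application cache
    // and swap to the new version immediately.
    window.applicationCache.addEventListener('updateready', function () {
      if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
        window.applicationCache.swapCache();
        window.location.reload(); // simplest way to start using the updated assets
      }
    }, false);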

User testing done right

Don’t make assumptions about the user - have her test your website or app! It’s easy, quick and cheap. Trust me.

When it comes to usability and providing the best solution for the user, I have seen lots of mind-numbing, long and - in the end - pointless discussions.

People tend to project their own behaviour onto the users of their application - and often this turns into discussions about the user as if she were an alien, a yet-to-be-discovered life form from outer space.

I did that as well, but I learnt my lesson: I’m not the user. You’re not the user either.

You know how your application is supposed to work. You know what feature can be found where - your users may think differently. And they may be different from what you think they are.

A brilliant example was one of my websites: I had put a big, prominent “Sign in with twitter” button on the homepage, but the conversion rate was poor. So I decided to try a very simple, quick and cheap kind of usability testing:

The 5 second test

For this test, you take a screenshot of a prototype, your website, your app or your design and upload it to Usability Hub (which is free when you do other people’s tests - or you can pay for answers). I asked the question “Where can you become a user of this service?” and requested 10 answers.

The picture is then shown to different participants for 5 seconds each; afterwards the question appears and they can answer. The answers were surprising and shocking: only 2 people got it right. The others answered along the lines of “I see where I can sign in after signing up, but where do I sign up?”.

This was interesting because with Twitter login there are no separate concepts of “sign up” and “log in” - you hit “sign in” the first time and get signed up, and you hit it the next time you’re on the website and get logged in.

Knowing the struggles of my users, I re-captioned the button “Sign up with twitter” and added a little “Sign in with twitter” in the top right corner, et voilà: the conversion rate rocketed.

UsabilityHub also offers other tests, like

The click test, where you ask your users a question like “Where would you click to sign up?” and see a) where they actually click and b) how long it takes them to spot it.

The nav flow test, where you upload multiple screens and ask the user to carry out a task that should take them from one screen to another - you’ll learn if your navigation concept works from this test type.

Test early & test often

This shows: testing early is possible and valuable! All it takes is a few minutes and a screenshot, a design, or even a picture of a sketch on paper. This lets you check whether your users “get it” on many different levels:

  • Does the wording make sense?
  • Do the layout structure and content hierarchy make sense?
  • Is the visual design confusing people?

For all this you don’t need a working prototype - nothing “done” or “shiny” - and you can already learn about your users. It takes minutes and is cheap (or free).

User tests

With services such as BetaPunch or YouEye it’s very easy to get real users to do a couple of tasks on your app or website and receive a recording of them saying what they think while carrying out the tasks you gave them, along with their inputs and mouse movements.

BetaPunch, for example, offers one free test to get you started.

There is also a light version of user testing, where you only get recordings of your website’s / app’s users using it, along with their mouse movements and inputs - see here for more details on that.

Lightweight virtual containers with virtual networking

What and why?

Sometimes you want to have a quick & cheap way of isolating a few processes in a lightweight, para-virtualized environment.

When full-blown virtualization, for instance with Xen or KVM, is just too much (or not suitable because you’re already running in a virtual environment) and process-level isolation (which, for example, docker.io provides) isn’t enough, User-Mode Linux is for you.

Example:

Multiple isolated build environments for Continuous Integration

One scenario is running integration tests between multiple components on a shared CI server (e.g. Jenkins).

Let’s elaborate this scenario a little more:

Say we have a Jenkins instance that is used to build multiple projects, for example an API and a “frontend webapp” (I’ll refer to it as “Frontend” from now on). The API can be built and tested independently, but that requires the installation of a bunch of libraries, gems and other fun things.

The Frontend is a little more complex: in addition to needing its own dependencies (gems, libs, modules, …), it also needs a running instance of the latest stable API, and a running API in turn needs a database and other components (let’s say a message queue).

The safest bet is to have every system under test (API stand-alone, Frontend instance + API instance + database + message queue) isolated, always freshly created when the tests start and torn down when done.

Our goal is to set up one virtual container in which we can build the API, another virtual container in which we will set up the database, the message queue and the API, and a third one for the Frontend, with a virtual network between the second and third container.

With User-Mode Linux you can achieve this goal pretty quickly and easily:

Install user-mode-linux and the utilities needed:

$ apt-get install wget user-mode-linux uml-utilities bridge-utils debootstrap realpath

Setup a new machine

You can use my create_machine script, which is based on this article:

$ wget https://gist.github.com/AVGP/5412047/raw/ee9057124fa32edbf5c427955cc0be4012015ec5/create_machine_switched.sh
$ chmod +x create_machine_switched.sh
$ ./create_machine_switched.sh

The script will ask you for a few things:

  • The hostname of the container (we’ll use "API-build", "API-run" and "Frontend")
  • If you want SSH to be installed ("y" for our example)
  • If you want to configure the network ("y" for our example)
  • A couple of network settings. For this example we’re using the following:
      • IP address: 10.10.10.2 for API-build, 10.10.10.3 for API-run, 10.10.10.4 for Frontend
      • Network: 10.10.10.0
      • Broadcast: 10.10.10.255
      • Subnet: 255.255.255.0
      • Gateway: 10.10.10.1
  • The root password for the container

Setting up the host-side networking

Before starting the container, we need to set up the host and the network switching on the host machine.

First of all add the following to /etc/network/interfaces:

    auto tap0
    iface tap0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            tunctl_user uml-net

Then edit /etc/default/uml-utilities by uncommenting the line with UML_SWITCH_OPTIONS and, if needed, changing it to look like this:

UML_SWITCH_OPTIONS="-tap tap0"

Then stop the uml-utilities daemon, bring up tap0 and start uml-utilities again:

$ /etc/init.d/uml-utilities stop
$ ifup tap0
$ /etc/init.d/uml-utilities start

Afterwards, you can start the containers by changing into their directories and executing ./run - preferably inside a screen (or tmux) session:

Start the containers

$ cd API-run
$ ./run

and the same for the other two machines. You should be able to log in as root on all running instances and ping the internet as well as the other instances.

root@API-run # ifconfig eth0
    eth0   Link encap:Ethernet  HWaddr d2:45:9b:1f:ba:c6
              inet addr:10.10.10.3  Bcast:10.10.10.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:25 errors:0 dropped:0 overruns:0 frame:0
              TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1092 (1.0 KiB)  TX bytes:2198 (2.1 KiB)
              Interrupt:5
root@API-run # ping -c1 10.10.10.4
PING 10.10.10.4 (10.10.10.4) 56(84) bytes of data.
64 bytes from 10.10.10.4: icmp_req=1 ttl=64 time=0.192 ms

--- 10.10.10.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms

Setup the NAT on the host

For the containers to be able to reach the internet, you need to enable masquerading and IP forwarding using these two commands:

$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ echo '1' > /proc/sys/net/ipv4/ip_forward

Now you can set up all the applications inside the containers (you may use provisioners such as Chef or Puppet) and enjoy your virtual testing environment and its virtual networking.
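
As a hedged illustration of how the Frontend container could wait for the API container before kicking off its integration tests, here is a small Node.js sketch - the port 3000 and the /health path are assumptions, not part of the setup above:

    // wait-for-api.js - poll the API container until it answers, then hand over to the test run.
    var http = require('http');

    function waitForApi(retriesLeft) {
      http.get('http://10.10.10.3:3000/health', function (res) {
        console.log('API answered with status ' + res.statusCode + ' - starting integration tests');
      }).on('error', function () {
        if (retriesLeft > 0) {
          setTimeout(function () { waitForApi(retriesLeft - 1); }, 2000); // retry every 2 seconds
        } else {
          console.error('API container not reachable, giving up');
          process.exit(1);
        }
      });
    }

    waitForApi(30);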

The bad, the good, the masterpieces

Building websites and web applications is a craft. And as far as crafts go, there will always be bad, mediocre and great pieces.

The question is, what makes something a masterpiece? Is it the techniques applied, the perfect mastery of them, the little details?

Coming back from Rome, a city full of masterpieces from various disciplines, I have gained a better understanding.

Though it’s hard for me to grasp all the cleverness and knowledge that contribute to each of these works, certain patterns and aspects can be found repeatedly in most, if not all, of them.

It’s usually neither a single aspect nor all of them together - it’s the consistency of the impression created in the observer.

There are a lot of different aspects that come into play here, for example:

  • Color theory
  • Use of fore- and background, as well as the space in between
  • Composition
  • Guidance of the viewers’ eyes
  • Technique

As a developer striving to learn about design, it came as a surprise to me that the techniques used, and how well the artist mastered them, are not the most important aspect of what makes something a masterpiece.

Looking at a lot of mediocre works, some very popular ones and some less popular but extraordinarily intriguing ones, I noticed that mostly three things caught my eye and captured my attention for more than the average time span:

  1. The colors and contrasts
  2. The composition of the piece
  3. The guidance of view

The amazing thing I never noticed before: this also worked when proportions were slightly off, or when there were minor defects (e.g. broken-off parts of a statue or damaged parts of the canvas) or missing details.

I did notice these deficiencies, yet I stood there admiring the overall impression the works made on me.

I had to look at a lot of these pieces - gazing at them, or downright staring at them, for quite a while - and it was only at the Vatican Museums, where most of the art is related to Christianity and there are plenty of more-or-less comparable works, that I gained the insight that it’s the three things mentioned above (in that order) that attract people.

I even watched other visitors - how they looked at the different works and how their eyes moved around.

After this invaluable “Eureka” moment, I now look differently at design and art, whether it’s a famous painting, an ancient statue or something as profane as a website or a web application, because this simple finding applies to all of them:

You don’t have to do everything 100% perfectly, as long as you put effort into how the spectator reacts to and engages with your work - and get that part pretty right.

Why developers should learn to estimate and estimate often

Developers are sometimes a grumpy set of people.

To a certain extent, I understand my fellow developers - deadlines are tight, features creep in and out from time to time and uncertainty is your daily companion.

Asking the right questions before starting to hack away is crucial but pretty hard at the same time.

It’s hard because it’s likely you’ve never done anything quite like it before, and without sitting down and thinking it through carefully, it’s hard to ask the other stakeholders the interesting questions.

So why would you do that?

First of all, estimations help everyone get their expectations straight. The tough deadline is in two weeks but the estimate says two months? Well, then there’s some sort of problem and you should talk about it.

The work is finished faster than expected? Well, then you may have a problem as well (somebody may have missed important bits and pieces, the code may be poorly written, docs may be missing, …).

What are the reasons that estimations turn out wrong?

There’s a pretty long list. My top 3:

  1. The task wasn’t thought through. Sub-tasks were missed, uncertainties weren’t identified or the requirements weren’t documented.
  2. A lack of experience. It takes some trial and error to get a feeling for the common tasks and problems.
  3. Scope creep happened between the estimation and the (delayed) delivery.

Yes, in that particular order.

How to do better estimations?

That’s not a one-shot thing. It took me a couple of attempts to arrive at acceptable and appropriate estimations for different tasks. Basically you oscillate around the right estimate - sometimes you’re over, sometimes you’re under - but you should always aim to get as close as possible. If you get your accuracy to +/- 5%, you’re good. For example: in sprints of one week I tried to be within 2 hours of my estimations. I am pretty sure I never reached that goal, but I wasn’t too bad at it (I think).