The Android lifecycle

The last time I wrote Android apps was years ago, starting with the ADP1 and Android 1.0. Back then, the big deal was the Activity lifecycle. Mind the lifecycle events! To reclaim memory, Android may kill your app or your activities at any time, they said. Being economical with resources was the big concern. There was even an onLowMemory callback you could implement to help the system manage memory.

At the very beginning, there wasn’t even AsyncTask, never mind retained Fragments.

I was always trying my damnedest to fit everything properly into the activity lifecycle, like rotating my AsyncTasks along with the activity. And it was always such a pain. I never quite understood why it had to be such a pain, and I guess subconsciously I always realized I was missing something. I was never sure how global state fit into the whole concept, and was uncomfortable using it. I only saw the activity lifecycle. As in, how can there be a background thread dangling around outside of an onCreate/onDestroy cycle?

Coming back to Android now, and seeing people use neat things like event buses, finally caused me to think it through, and I believe I had a personal epiphany.

What Android actually kills is your process. Yet the documentation claims even today that Android may destroy individual activities to save memory (“the system might destroy that activity completely if it needs to recover system memory”). Is Android “killing” your Java objects? How would it even do that? Is the Android SDK in your process doing some kind of object management when instructed by the system? The idea seems strange, and apparently it isn’t really the case.

The trick for me was to forget about the Activity lifecycle as being memory-related. I think of it now as an implementation detail of the UI framework you are using, possibly helpful in terms of resetting state, and both useful and cumbersome in terms of configuration/layout loading.

I had to stop thinking of my app as merely a bunch of Android primitives (activities, services, broadcast receivers) into which I have to fit everything. While these primitives admittedly are actual concepts within Android itself, the specific implementation, including the lifecycle stuff, is technically up to your process.

Instead, if I consider my app a holistic, proper process, simply running on a device where it may be regularly killed, then I’m no longer afraid of using global state, background threads, an event bus, or anything else outside of the framework primitives. I just have to remember: The app may be killed once it is in the background, so for important things, tell the system you are doing service-stuff right now.

Here are some resources I found helpful:

A Recipe for writing responsive REST clients on Android – How to write a modern Android app.

AsyncTask is bad and you should feel bad – On using AsyncTask outside of activities. Why not just use a thread? An AsyncTask returns on the main thread, and will not deliver its result in the middle of a configuration change. Still, a sticky event in EventBus, or a producer in Otto, might make more sense.

Android Priority Job Queue – If you have a lot of background jobs.

Other things I realized: It is time to upgrade to a 100 character right margin. And the right way to use a SQLite database.

Font Browsers


  • Concept of local font directory
  • Don’t like the UI

Windows Fonts Explorer

  • No concept of local font directory

FontHit Font Tools

  • Looks interesting, but I couldn’t install it


  • Open Source
  • One of the better UIs, but still not good.
  • No support for a local font directory


  • Supports font folders
  • Way overdesigned, no immediate preview


  • Can browse folders
  • Doubtful UI quality
  • Interesting features, like classification browse


  • Suffers from some Windows95-isms, but for once, this is a UI with some thought behind it.
  • Supports font folders.

PaaS platforms

This seems to be a busy space. I’m trying to catch up and understand how these all relate.

See also the blurring line between PaaS and IaaS, the PaaS Cheatsheet, a huge spreadsheet.

This image helped me understand the layers involved:


  • Flynn
  • Deis: Seems like an extended dokku; deploys apps from git, not so much focused on backing services.
  • Gilliam: Runs a service on the host, per-project YAML file and command line client to deploy, can run backing services.
  • maestro-ng: Will set up docker containers based on a YAML file.


  • Velociraptor: Deploys multiple apps on the same host without virtualization.
  • OpenStack

Service discovery realizations

I’m setting up some docker containers and want to use service discovery. A challenge is that most services do not explicitly support it, so there needs to be an easy way to make them do so.

This is just me, thinking it through as a newb.

  • There are two roles, registering services, and consuming services.
  • These roles are entirely separate concerns; a service might heartbeat with service discovery, but we could still do consumption in such a way that the host controller sets a static environment variable (with a value taken from service discovery) and has to restart the WordPress container when the MySQL container changes IP.
  • The controller managing dependencies and restarting dependent services isn’t acceptable though. Dependencies may be known during deployment of containers, but once they are running, we don’t want to require a central instance to keep it all going.


  • For service registration, docker containers can expose themselves either a) via an internal LAN interface or b) by mapping to a host port.
  • In both cases, it is the controller’s responsibility to know and decide on the address with which to register. In a) it is the only one knowing what the LAN is, in b) it is the one to pick the host port mappings.
  • (In (a), a container may be able to know/guess the LAN on its own, but only because conceptually, it would still be the controller setting up the container interfaces.)
  • In a docker setup specifically, the container may do the registration, but again it would be the host telling it the ip:port to register with. This can be easiest, because LAN IPs are assigned via DHCP, so the controller would only know the IP post-start. A container can just wrap its binary in something like sdutil and take advantage of docker’s daemonization.
  • (So if a container registers itself on its eth0 IP on a port chosen on its own, I am arguing this is still conceptually the controller telling the container: feel free to register on any port on this interface you can assume to be the LAN.)
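A sketch of the registration side described above. The names (`heartbeat`, the `/services/<name>` key layout, the `store` interface) are all my own invention; the store stands in for whatever discovery backend (etcd, skydock, …) is actually used.

```python
import time

def register(store, name, address, ttl=10):
    # Register under a TTL'd key; if we stop heartbeating,
    # the entry expires and consumers stop seeing us.
    store.set("/services/" + name, address, ttl=ttl)

def heartbeat(store, name, address, ttl=10, rounds=3):
    # The container (or a wrapper around its binary) re-registers
    # every ttl/2 seconds with the ip:port the controller told it
    # is externally reachable. `rounds` is bounded here only so the
    # sketch terminates; a real wrapper would loop forever.
    for _ in range(rounds):
        register(store, name, address, ttl=ttl)
        time.sleep(ttl / 2.0)
```

The key point is that `address` is an input: whoever set up the network (the controller) supplies it, the container merely keeps the entry alive.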


  • On the consuming side, if the service interacts with service discovery, we don’t have a problem.
  • Again, this must work without the controller being involved to do dependency-related restarts.
  • Instead, the service must use an external helper like sdutil to restart itself. I.e., we are pushing it to the edge.
  • A tool like synapse or the proposed CoreOS jumpers would move service discovery consumption to the host (and provide a transparently redirected port to the container). This is acceptable, because we are not per se centralizing on the host. Instead, it would be more accurate to say that services get a secondary, sibling service to do discovery consumption.
  • In fact, the jumper may be a superior solution because it does not require restarts.

Available Tools

  • skydock – uses DNS with short TTLs for consumption, and docker events for automatic registration.
  • etcd
  • serf

Looking for a 12factor app reverse proxy

Ever since I started using docker for running web stuff, I’ve been looking for an HTTP reverse proxy to handle the routing part. There are a couple of requirements I have:

  1. Dynamic configuration. Bringing up a new backend or service should auto-configure the router. Having to update config files or restart the router is not something you want to bother with.
  2. Support for SSL, in particular, multiple certificates, ideally SNI.
  3. Authentication features, to serve internal services.

I’ve been using hipache so far, but notably, hipache doesn’t support SNI. Other commonly suggested solutions include node-http-proxy, which is essentially a build-it-yourself kit, and the hipache version written in nginx Lua (with touted speed advantages), which is also just a proof of concept requiring you to build something yourself.

One might think that a proxy could be used that just proxies the encrypted connection through, letting the backend deal with SSL (e.g. sniproxy). Apart from not supporting dynamic configuration either, there are downsides to this approach: You cannot add X-Forwarded-For headers, which is not optimal. Also, I’m not a fan of the approach: I’d much rather have the proxy hold the SSL configuration.

There is a hipache fork that uses etcd.


A lot of internal services (say my qless web interface) do not support authentication – and neither should they. They still need to be protected though (One option is a VPN, of course).

Nobody seems to support authentication in combination with a dynamic data backend, though multiple people have written about the issue, like here (OAuth Apache module) or Nginx Lua OAuth.

An additional challenge is that for an internal service, you may prefer to run it on a path, /qless, as opposed to a separate (sub)domain, so ideally that would be supported.


I’m not sure yet. If I have to build something myself, Nginx with Lua might be worth the effort. You could build one in Python, for a change (the Gilliam project also has one). I always wanted to have a closer look at Go, too, and there are actually multiple reverse proxy projects in Go out there. drunken-hipster only supports one SSL cert per IP. A shoutout to the folks building http-master – they support SNI, and are planning to add auth, so that is pretty close (not to mention other useful stuff like redirects).

etcedge will copy information from etcd into Redis to be consumed by hipache.

Finally, there is Flynn’s router, based on etcd, which should be promising, since it is designed to solve this exact problem.

HAProxy does seem popular, is integrated with Amazon OpsWorks, and allows enabling/disabling instances via a socket, but not, as far as I can tell, adding new ones. Maybe AWS is simply rewriting their HAProxy config file when instances change? It has the benefit of supporting all kinds of advanced features (repeated routing to the same backend based on cookies).

Update: I am working on adding SSL routing to strowger.

Redis admin options


  • Only one db
  • No actual UI to see or edit data (in-browser CLI only)
  • Needs to write actual stats data to your redis.
  • Looks great.


  • Not the most feature rich, but has the important stuff
  • Looks great
  • Supports multiple connections (stored in a config file in the home directory); I want an easy ability for a new docker redis instance to register with the management UI.
  • Redis database keyspaces (0, 1, 2, …) need to be registered as separate connections, which sucks.
  • Because it tries to do fancy stuff like show the redis keyspace in a tree, it doesn’t handle large datasets too well.


  • Looks great, but there is the pricing (per db, not by total keyspace size)
  • All the issues with a hosted solution.


  • Also renders keyspace as tree, I suspect it has performance problems as well
  • Simple but has the basic functionality down
  • Multiple databases are configured in a PHP config file.

docker etc

  • I was confused as to how to make slugrunner start a container in detached mode (the -d and -a flags conflict). The following seems to work: cat /tmp/slug.tar.gz | docker run -i -a stdin flynn/slugrunner start web

  • The stackbrew/hipache docker image uses Redis 2.2, which means you need to be careful, because rpush only supports a single value per call (variadic rpush came in Redis 2.4). I attempted to add multiple arguments using the Python client, the data ended up garbled, and hipache basically does no logging whatsoever, so this is a pain to figure out.
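The safe pattern on an old Redis is simply to issue one rpush per value instead of one variadic call. Hipache's routing table lives in keys like frontend:&lt;domain&gt;, where the first element is an identifier and the rest are backend URLs. The helper name is my own:

```python
def rpush_each(client, key, values):
    # Redis < 2.4 mishandles multi-value RPUSH, so issue one
    # RPUSH per value instead of client.rpush(key, *values).
    for value in values:
        client.rpush(key, value)

# With a redis-py client `r`, seeding a hipache frontend would look like:
# rpush_each(r, "frontend:www.example.com",
#            ["mywebsite", "http://192.168.0.42:80"])
```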

Is your rippled server not coming up?

If the RPC API tells you {"error":"noNetwork","error_code":12,"error_message":"Network not available."} and the log is busy repeating messages like this:

2013-Dec-16 04:45:33 ResourceManager:NFO Charging for useless data ($5)
2013-Dec-16 04:45:16 Peer:NFO No new transactions until synchronized

You might not have defined validators and a validation_quorum in rippled.cfg – the template doesn’t make it clear that you need both. You could use, for example:
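The example config was lost here; the shape of the two sections looks like the following. The validator lines are placeholders – take the actual public keys from the rippled example config or another source you trust:

```ini
[validators]
# format: <validator public key>  <optional nickname>
n9K...placeholder1  V1
n9L...placeholder2  V2
n9M...placeholder3  V3

[validation_quorum]
# how many of the validators above must agree
3
```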



See also What is causing “Network not available” error in rippled RPC call and how to fix it?.