Monday, 23 February 2015

Zero-overhead Recursive ADT Coproducts

Zero-product Recursive AMD what???

Ok. Imagine this: you're building some app, and in certain parts users can type text with special tokens. (It's like in Twitter, when typing “Hello @japgolly” the “@japgolly” part gets special treatment.) You parse various types of tokens. You might have different locations to type text, and rules about which tokens are allowed in each location. You want the compiler to enforce those rules but you also want to handle tokens generically sometimes. How would you do such a thing in Scala?

Initial Attempts

Ideally you'd define an ADT (algebraic data type) for all tokens possible, then create new types for each location that form a subset of tokens allowed. If that were possible, here's what that would look like.

sealed trait Token
case class  PlainText(t: String)  extends Token
case object NewLine               extends Token
case class  Link(url: String)     extends Token
case class  Quote(q: List[Token]) extends Token

// type BlogTitle   = PlainText | Link    | Quote[BlogTitle]

// type BlogComment = PlainText | NewLine | Quote[BlogComment]

Now Scala won't let us create our BlogTitle type as shown above. It doesn't have a syntax for coproducts (which is what BlogTitle and BlogComment would be; they're also called “disjoint unions” and “sum-types”) over existing types. Seeing as we have control over the definition of the generic tokens, we can be tricky and invert the declarations like this:

sealed trait BlogTitle
sealed trait BlogComment

sealed trait Token
case class  PlainText(t: String) extends Token with BlogTitle with BlogComment
case object NewLine              extends Token                with BlogComment
case class  Link(url: String)    extends Token with BlogTitle
case class  Quote(q: List[????]) extends Token with BlogTitle with BlogComment
//                        ↑ hmmm...

...but as you can see, we hit a wall when we get to Quote, which is recursive. We want a Quote in a BlogTitle to only contain BlogTitle tokens, not just any type of token. We can continue our poor hack as follows.

abstract class Quote[A <: Token] extends Token { val q: List[A] }
case class QuoteInBlogTitle  (q: List[BlogTitle])   extends Quote[BlogTitle]
case class QuoteInBlogComment(q: List[BlogComment]) extends Quote[BlogComment]

Not pleasant. And we're not really sharing types anymore. What else could we do?

We could create separate ADTs for BlogTitle and BlogComment, that would mirror and wrap their matching generic tokens, then write converters from specific to generic. That's a lot of duplicated logic and tedium, plus we now double the allocations and memory needed. Let's try something else...


NOTE: This bit about Shapeless is an interesting detour, but it can be skipped.

We could use Shapeless! Shapeless is an ingeniously-sculpted library that facilitates abstractions that a panel of sane experts would deem impossible in Scala, one such abstraction being Coproducts. Here's what a solution looks like using Shapeless.

(Sorry I thought I had this working but just realised recursive coproducts don't work. I've commented-out Quote[A] for now. There is probably a way of doing this – Shapeless often doesn't take no from scalac for an answer – I'll update this if some kind 'netizen shares how.)

sealed trait Token
case class  PlainText(text: String) extends Token
case object NewLine                 extends Token
case class  Link(url: String)       extends Token
case class  Quote[A](q: List[A])    extends Token

type BlogTitle   = PlainText :+: Link         :+: /*Quote[BlogTitle]   :+: */ CNil
type BlogComment = PlainText :+: NewLine.type :+: /*Quote[BlogComment] :+: */ CNil

/* compiles → */ val title = Coproduct[BlogTitle](PlainText("cool"))
// error    → // val title = Coproduct[BlogTitle](NewLine)

So far so good. What would a Token ⇒ Text function look like?

object ToText extends Poly1 {
  implicit def caseText    = at[PlainText   ](_.text)
  implicit def caseNewLine = at[NewLine.type](_ => "\n")
  implicit def caseLink    = at[Link        ](_.url)
  // ...
}

// Map the Poly over the coproduct, then unify the branch result types.
val text: String = title.map(ToText).unify
Ok I'm a little unhappy because I'm very fond of pattern-matching in these situations, but the above does work effectively. One thing to be aware of with Shapeless is how it works. To achieve its awesomeness, it must build up a hierarchy of proofs, which incurs time and space costs at both compile- and run-time – its awesomeness ain't free. The val title = ... statement above creates at least 7 new classes at runtime, where I want 1. Depending on your usage and needs, that overhead might be nothing, but it might be significant. It's something to be aware of when you decide on your solution.

Zero-overhead Recursive ADT Coproducts

There's another way. I mentioned “zero-overhead” and it can be done. Here is a different solution that relies solely on standard Scala features, one such feature being path-dependent types.

You can create an abstract ADT, putting each constituent in a trait, then simply combine those traits into an object to have it reify a new, concrete, sealed ADT. Sealed! Let's see the new definition:

// Generic

sealed trait Base {
  sealed trait Token
}

sealed trait PlainTextT extends Base {
  case class PlainText(text: String) extends Token
}

sealed trait NewLineT extends Base {
  case class NewLine() extends Token
}

sealed trait LinkT extends Base {
  case class Link(url: String) extends Token
}

sealed trait QuoteT extends Base {
  case class Quote(content: List[Token]) extends Token
}

// Specific

object BlogTitle   extends PlainTextT with LinkT    with QuoteT
object BlogComment extends PlainTextT with NewLineT with QuoteT

Now let's use it:

   List[BlogTitle.Token](BlogTitle.PlainText("Hello")) // success
// List[BlogTitle.Token](BlogTitle.NewLine)   ← error: BlogTitle.NewLine doesn't exist
// List[BlogTitle.Token](BlogComment.NewLine) ← error: BlogComment tokens aren't BlogTitle tokens

// Specific
val blogTitleToText: BlogTitle.Token => String = {
  // case BlogTitle.NewLine   => ""     ← error: BlogTitle.NewLine doesn't exist
  // case BlogComment.NewLine => ""     ← error: BlogComment tokens not allowed
  case BlogTitle.PlainText(txt) => txt
  case BlogTitle.Link(url)      => url
  // Compiler warns missing BlogTitle.Quote(_) ✓
}

// General
val anyTokenToText: Base#Token => String = {
  case a: PlainTextT#PlainText => a.text
  case a: LinkT     #Link      => a.url
  case a: NewLineT  #NewLine   => "\n"
  // Compiler warns missing QuoteT#Quote ✓
}

// Recursive types
val t: BlogTitle  .Quote => List[BlogTitle  .Token] = _.content
val c: BlogComment.Quote => List[BlogComment.Token] = _.content
val g: QuoteT     #Quote => List[Base       #Token] = _.content
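To round the example off, here's a sketch of a single recursive renderer that works across every specialisation, Quote included. The "[" and "]" quote delimiters are invented purely for this demo:

```scala
// Reprise of the trait-based ADT from above.
sealed trait Base { sealed trait Token }
sealed trait PlainTextT extends Base { case class PlainText(text: String)     extends Token }
sealed trait NewLineT   extends Base { case class NewLine()                   extends Token }
sealed trait LinkT      extends Base { case class Link(url: String)           extends Token }
sealed trait QuoteT     extends Base { case class Quote(content: List[Token]) extends Token }

object BlogTitle   extends PlainTextT with LinkT    with QuoteT
object BlogComment extends PlainTextT with NewLineT with QuoteT

// One generic renderer, recursion included.
def render(t: Base#Token): String = t match {
  case a: PlainTextT#PlainText => a.text
  case a: LinkT#Link           => a.url
  case _: NewLineT#NewLine     => "\n"
  case a: QuoteT#Quote         => a.content.map(render).mkString("[", "", "]")
}

val title = List[BlogTitle.Token](
  BlogTitle.PlainText("Hello "),
  BlogTitle.Quote(List(BlogTitle.Link("http://example.com"))))

title.map(render).mkString   // "Hello [http://example.com]"
```

The same render handles BlogComment tokens too, because every specialisation's tokens conform to Base#Token.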

Look at that! That is awesome. These are some things that we get:

  • No duplicated definitions or logic.
  • Generic & specific hierarchies are sealed, meaning the compiler will let you know when you forget to cater for a case, or try to cater for a case not allowed.
  • Children of recursive types have the same specialisation.
    Eg. a BlogTitle can only quote using BlogTitle tokens.
  • Tokens can be processed generically.
  • Zero-overhead. No additional computation or new memory allocation needed to store tokens, or move them into a generic context. No implicits.
  • Nice, neat pattern-matching which makes me happy.
  • It's just plain ol' Scala traits so you're free to encode more constraints & organisation. You can consolidate traits, add type aliases, all that jazz.

There you go. Seems like a great solution to this particular scenario.

Nothing is without downsides though. Creation will likely be a little hairy; imagine writing a serialisation codec – the Generic ⇒ Binary part will be easy but Binary ⇒ Specific will be more effort. In my case I will only create this data thrice {serialisation, parsing, random data generation} but read and process it many, many times. Good tradeoff.
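To give a feel for the Binary ⇒ Specific effort: because the token classes live inside each object, a decoder must be handed the target object, and its results belong to that object's path. A hypothetical sketch with an invented wire format ("P:" for plain text, "L:" for links):

```scala
// Hypothetical decoder sketch; the "P:"/"L:" wire format is invented.
sealed trait Base { sealed trait Token }
sealed trait PlainTextT extends Base { case class PlainText(text: String) extends Token }
sealed trait LinkT      extends Base { case class Link(url: String)       extends Token }

object BlogTitle extends PlainTextT with LinkT

// The decoder is handed the target object `b`; the dependent result type
// Option[b.Token] means the decoded token belongs to that specific ADT.
def decode(b: PlainTextT with LinkT)(s: String): Option[b.Token] =
  s.splitAt(2) match {
    case ("P:", t) => Some(b.PlainText(t))
    case ("L:", u) => Some(b.Link(u))
    case _         => None
  }

decode(BlogTitle)("P:hi")   // Some(BlogTitle.PlainText("hi"))
```

A real codec would need one such case per token trait per specific object, which is the extra effort mentioned above.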

Sunday, 28 September 2014

An Example of Functional Programming

Many people, after reading my previous blog post, asked to see a practical example of FP with code. I know it's been a few months – I actually got married recently; wedding planning is very time consuming – but I've finally come up with an example. Please enjoy.

Most introductions to FP begin with pleas for immutability but I'm going to do something different. I've come up with a real-world example that's not too contrived. It's about validating user data. There will be 5 incremental requirements that you can imagine coming in sequentially, each one building on the last. We'll code to satisfy each requirement incrementally without peeking at the following requirements. We'll code with an FP mindset and use Scala to do so but the language isn't important. This isn't about Scala. It's about principles and a perspective of thought. You can write FP in Java (if you enjoy pain) and you can write OO in Haskell (I know someone who does this and baffles his friends). The language you use affects the ease of writing FP, but FP isn't bound to or defined by any one language. It's more than that. If you don't use Scala this will still be applicable and useful to you.

I know many readers will have programming experience but little FP experience, so I will try to make this as beginner-friendly as possible and avoid unexplained jargon.

Req 1. Reject invalid input.

The premise here is that we have data and we want to know if it's valid or not. For example, suppose we want to ensure a username conforms to certain rules before we accept it and store it in our app's database.

I'm championing functional programming here so let's use a function! What's the simplest thing we need to make this work? A function that takes some data and returns whether it's valid or not: A ⇒ Boolean.

Well that's certainly simple but I'm going to handwavily tell you that primitives are dangerous. They denote the format of the underlying data but not its meaning. If you refactor a function like blah(dataWasValid: Boolean, hacksEnabled: Boolean, killTheHostages: Boolean) the compiler isn't going to help you if you get the arguments wrong somewhere. Have you ever had a bug where you used the ID of one data object in place of another because they were both longs? Did you hear about the NASA mission that failed because mixed units of measurement (eg. miles and kilometers) were indistinguishable?

So let's address that by correcting the definition of our function. We want a function that takes some data and returns an indication of validity: A ⇒ Validity.
sealed trait Validity
case object Valid extends Validity
case object Invalid extends Validity

type Validator[A] = A => Validity

We'll also create a sample username validator and put it to use. First the validator:
val usernameV: Validator[String] = {
  val p = "^[a-z]+$".r.pattern
  s => if (p.matcher(s).matches) Valid else Invalid
}
Now a sample save function:
def example(u: String): Unit =
  usernameV(u) match {
    case Valid   => println("Fake-saving username.")
    case Invalid => println("Invalid username.")
  }
There's a problem here. Code like this will make FP practitioners cry and for good reason. How would we test this function? How could we ever manipulate or depend on what it does, or its outcome? The problem here is “effects” and unbridled, they are anathema to healthy, reusable code. An effect is anything that affects anything outside the function it lives in, relies on anything impure outside the function it lives in, or happens in place of the function returning a value. Examples are printing to the screen, throwing an exception, reading a file, reading a global variable.

Instead we will model effects as data. Whereas the above example would either 1) print “Fake-saving username” or 2) print “Invalid username”, we will now either 1) return an effect that, when invoked, prints “Fake-saving username”, or 2) return a reason for failure.

We'll define our own datatype called Effect, to be a function that takes no input and returns no output.
(Note: If you're using Scalaz, scalaz.effect.IO is a decent catch-all for effects.)
type Effect = () => Unit
def fakeSave: Effect = () => println("Fake save")

Next, Scala provides a type Either[A,B] which can be inhabited by either Left[A] or Right[B] and we'll use this to return either an effect or failure reason.
Putting it all together we have this:
def example(u: String): Either[String, Effect] =
  usernameV(u) match {
    case Valid   => Right(fakeSave)
    case Invalid => Left("Invalid username.")
  }
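Assembled into one runnable sketch (the same code as above, just in one place), this also shows why modelling the effect as a value matters: a test can inspect the result without anything actually printing or saving.

```scala
sealed trait Validity
case object Valid   extends Validity
case object Invalid extends Validity

type Validator[A] = A => Validity
type Effect       = () => Unit

val usernameV: Validator[String] = {
  val p = "^[a-z]+$".r.pattern
  s => if (p.matcher(s).matches) Valid else Invalid
}

def fakeSave: Effect = () => println("Fake save")

def example(u: String): Either[String, Effect] =
  usernameV(u) match {
    case Valid   => Right(fakeSave)
    case Invalid => Left("Invalid username.")
  }

// Nothing has printed or saved yet; the result is plain data.
example("japgolly").isRight   // true: we got back an effect, still unrun
example("Bad name!")          // Left("Invalid username.")
```

Only when the caller invokes the returned effect does anything actually happen, which is what makes example trivially testable.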

Req 2. Explain why input is invalid.

We need to specify a reason for failure now.
We still have two cases: valid with no error msg, invalid with an error msg. We'll simply add an error message to the Invalid case.
case class Invalid(e: String) extends Validity

Then we make it compile and return the invalidity result in our example.
 val usernameV: Validator[String] = {
   val p = "^[a-z]+$".r.pattern
-  s => if (p.matcher(s).matches) Valid else Invalid
+  s => if (p.matcher(s).matches) Valid else
+         Invalid("Username must be 1 or more lowercase letters.")
 def example(u: String): Either[String, Effect] =
   usernameV(u) match {
     case Valid      => Right(fakeSave)
-    case Invalid    => Left("Invalid username.")
+    case Invalid(e) => Left(e)

Req 3. Share reusable rules between validators.

Imagine our system has 50 data validation rules, 80% reject empty strings, 30% reject whitespace characters, 90% have maximum string lengths. We like reuse and D.R.Y. and all that; this requirement addresses that by demanding that we break rules into smaller constituents and reuse them.

We want to write small, independent units and join them into larger things. This leads us to an important and interesting topic: composability.

I want to suggest something that I know will cause many people to cringe – but hear me out – let's look to math. Remember basic arithmetic from ye olde youth?
8 = 5 + 3
8 = 5 + 1 + 2
Addition. This is great! It's building something from smaller parts. This seems like a perfect starting point for composition to me. There's a certain beauty and elegance to math, and its capability is proven; what better inspiration!

Let's look at some basic properties of addition.

Property #1:
8 = 8 + 0
8 = 0 + 8
Add 0 to any number and you get that number back unchanged.
Property #2:
8 = (1 + 3) + 4
8 = 1 + (3 + 4)
8 = 1 + 3 + 4
Parentheses don't matter. Add or remove them without changing the result.
Property #3:
I'll also mention that in primary school, you had full confidence in this:
number + number = number
It may seem silly to mention, but imagine if your primary school teacher told you that
number + number = number | null | InvalidArgumentException
+ has other properties too, like 2+6=6+2 but we don't want that for our scenario with validation. The above three provide enough benefit for what we need.

You might wonder why I'm describing these properties. Why should you care? Well as programmers you gain much by writing code with similar properties. Consider...
  • You know you don't have to remember to check for nulls, catch any exceptions, worry about our internal AddService™ being online.
  • As long as the overall order of elements is preserved, you needn't care about the order in which groups are composed. i.e. we know that a+b+c+d+e will safely yield the same result if we batch up execution of (a+b) and (c+d+e) then add their results last. And parentheses are already provided by the programming language.
  • If you're ever forced into composition by some code path, you can opt out by specifying the 0, because we know that 0+x and x+0 are the same as x. No need to overload methods or whatnot.
Simple right? Well have you ever heard the term “monoid” thrown around? (Not “monad”.) Guess what? We've just discussed all that makes a monoid what it is and you learned it as a young child.
A monoid is a binary operation (two values of a type combined into one value of the same type) that has 3 properties:
  • Identity: The 0 is what we call an identity element. 0+x = x = x+0
  • Associativity: That's the ability to add/remove parentheses without changing the result.
  • Closure: Always returns a result of the same type, no RuntimeExceptions, no nulls.
If jargon from abstract algebra intimidates you, know that it's mostly just terminology. You already know the concepts and have for years. The knowledge is very accessible and it's incredibly useful to be able to identify these kinds of properties about your code.
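Monoids are everywhere once you look. For instance, string concatenation with "" as the identity is a monoid too; the first two laws can be checked directly, and closure is enforced by the types:

```scala
val zero = ""                                       // identity element
def combine(a: String, b: String): String = a + b   // closure: String in, String out

// Identity: zero + x == x == x + zero
assert(combine(zero, "abc") == "abc")
assert(combine("abc", zero) == "abc")

// Associativity: parentheses don't matter
assert(combine(combine("a", "b"), "c") == combine("a", combine("b", "c")))
```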

Speaking of code, let's implement this new requirement as a monoid. We'll add Validator.+ for composition, ensuring it preserves the associativity property, and Validator.id for the identity (also called zero).
(Note: If using Scalaz, Algebird or similar, you can explicitly declare your code to be a monoid to get a bunch of useful monoid-related features for free.)
case class Validator[A](f: A => Validity) {
  @inline final def apply(a: A) = f(a)

  def +(v: Validator[A]) = Validator[A](a =>
    apply(a) match {
      case Valid         => v(a)
      case e@ Invalid(_) => e
    })
}

object Validator {
  def id[A] = Validator[A](_ => Valid)
}
The difficulty of building human-language sentences scales with expressiveness. For our demo it's enough to simply have validators contain error message clauses like “is empty”, “must be lowercase” and just tack the subject on later.

First we define some helper methods pred and regex, then use them to create our validators
object Validator {
  def pred[A](f: A => Boolean, err: => String) =
    Validator[A](a => if (f(a)) Valid else Invalid(err))

  def regex(r: java.util.regex.Pattern, err: => String) =
    pred[String](a => r.matcher(a).matches, err)
}

val nonEmpty  = Validator.pred[String](_.nonEmpty, "must not be empty")
val lowercase = Validator.regex("^[a-z]*$".r.pattern, "must be lowercase")

val usernameV = nonEmpty + lowercase
Then we tack our subject on to our error messages before displaying them and we're done.
def buildErrorMessage(field: String, err: String) = s"$field $err"

def example(u: String): Either[String, Effect] =
  usernameV(u) match {
    case Valid      => Right(fakeSave)
    case Invalid(e) => Left(buildErrorMessage("Username", e))
  }
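Here is requirement 3 assembled into one self-contained, runnable sketch, including a check that the identity element really is a no-op:

```scala
sealed trait Validity
case object Valid              extends Validity
case class  Invalid(e: String) extends Validity

case class Validator[A](f: A => Validity) {
  def apply(a: A) = f(a)
  def +(v: Validator[A]) = Validator[A](a =>
    apply(a) match {
      case Valid         => v(a)   // first rule passed: ask the second
      case e@ Invalid(_) => e      // first rule failed: short-circuit
    })
}

object Validator {
  def id[A] = Validator[A](_ => Valid)
  def pred[A](f: A => Boolean, err: => String) =
    Validator[A](a => if (f(a)) Valid else Invalid(err))
  def regex(r: java.util.regex.Pattern, err: => String) =
    pred[String](a => r.matcher(a).matches, err)
}

val nonEmpty  = Validator.pred[String](_.nonEmpty, "must not be empty")
val lowercase = Validator.regex("^[a-z]*$".r.pattern, "must be lowercase")
val usernameV = nonEmpty + lowercase

usernameV("abc")   // Valid
usernameV("")      // Invalid("must not be empty")
usernameV("ABC")   // Invalid("must be lowercase")

// Identity law: composing with id changes nothing.
assert((Validator.id[String] + usernameV)("ABC") == usernameV("ABC"))
```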

Req 4. Explain all the reasons for rejection.

Users are complaining that they get an error message, fix their data accordingly only to have it then rejected for a different reason. They again fix their data and it is rejected again for yet another reason. It would be better to inform the user of all the things left to fix so they can amend their data in one shot.
For example, an error message could look like “Username 1) must be less than 20 chars, 2) must contain at least one number.”

In other words there can be 1 or more reasons for invalidity now. Ok, we'll amend Invalid appropriately...
case class Invalid(e1: String, en: List[String]) extends Validity

Then we just make the compiler happy...
 case class Validator[A](f: A => Validity) {
   @inline final def apply(a: A) = f(a)
   def +(v: Validator[A]) = Validator[A](a =>
-    apply(a) match {
-      case Valid         => v(a)
-      case e@ Invalid(_) => e
-    })
+    (apply(a), v(a)) match {
+      case (Valid          , Valid          ) => Valid
+      case (Valid          , e@ Invalid(_,_)) => e
+      case (e@ Invalid(_,_), Valid          ) => e
+      case (Invalid(e1,en) , Invalid(e2,em) ) => Invalid(e1, en ::: e2 :: em)
+    })
 object Validator {
   def pred[A](f: A => Boolean, err: => String) =
-    Validator[A](a => if (f(a)) Valid else Invalid(err))
+    Validator[A](a => if (f(a)) Valid else Invalid(err, Nil))
-def buildErrorMessage(field: String, err: String) = s"$field $err"
+def buildErrorMessage(field: String, h: String, t: List[String]): String = t match {
+  case Nil => s"$field $h"
+  case _   => (h :: t).zipWithIndex.map{case (e,i) => s"${i+1}) $e"}.mkString(s"$field ", ", ", ".")
+}
 def example(u: String): Either[String, Effect] =
   usernameV(u) match {
     case Valid         => Right(fakeSave)
-    case Invalid(e)    => Left(buildErrorMessage("Username", e))
+    case Invalid(h, t) => Left(buildErrorMessage("Username", h, t))
(Note: If you're using Scalaz, NonEmptyList[A] is a better replacement for the A plus List[A] pair I've used in Invalid. The same thing can also be achieved with OneAnd[List, A]. In fact OneAnd is a good way to get compiler-enforced non-emptiness.)

Req 5. Omit mutually-exclusive or redundant error messages.

Take this error message: “Your name must 1) include a given name, 2) include a surname, 3) not be empty”. If the user forgot to enter their name you just want to say “hey you forgot to enter your name”, not bombard the user with details about potentially invalid names.

What does this mean? It means one rule is unnecessary if another rule fails. What we're really talking about here is the means by which rules are composed. Let's just add another composition method. We talked about the + operation in math already; well, math provides a multiplication operation too. Look at an expression like 6 + 14 + (7 * 8). Two types of composition, with us explicitly clarifying our intent via parentheses. That's perfectly expressive to me and it solves our new requirement with simplicity and minimal dev. As a reminder that we can borrow from math without emulating it verbatim, instead of a symbol let's give this operation a wordy name like andIfSuccessful, so that we can say nonEmpty andIfSuccessful containsNumber to indicate a validator that will only check for numbers if data isn't empty.

Just like these express different intents and yield different results
number = 4 * (2 + 10)
number = (4 * 2) + 10
So too can
rule = nonEmpty andIfSuccessful (containsNumber and isUnique)
rule = (nonEmpty andIfSuccessful containsNumber) and isUnique
Or if you don't mind custom operators
rule = nonEmpty >> (containsNumber + isUnique)
rule = (nonEmpty >> containsNumber) + isUnique

To implement this new requirement we add a single method to Validator:
def andIfSuccessful(v: Validator[A]) = Validator[A](a =>
  apply(a) match {
    case Valid           => v(a)
    case e@ Invalid(_,_) => e
  })
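To see the difference between the two kinds of composition, here's a self-contained sketch using the requirement-4 Validator; the containsNumber rule is made up for the demo:

```scala
sealed trait Validity
case object Valid                                 extends Validity
case class  Invalid(e1: String, en: List[String]) extends Validity

case class Validator[A](f: A => Validity) {
  def apply(a: A) = f(a)

  // Accumulate errors from both sides.
  def +(v: Validator[A]) = Validator[A](a =>
    (apply(a), v(a)) match {
      case (Valid,           Valid)           => Valid
      case (Valid,           e@ Invalid(_,_)) => e
      case (e@ Invalid(_,_), Valid)           => e
      case (Invalid(e1, en), Invalid(e2, em)) => Invalid(e1, en ::: e2 :: em)
    })

  // Only run the second validator if the first succeeds.
  def andIfSuccessful(v: Validator[A]) = Validator[A](a =>
    apply(a) match {
      case Valid           => v(a)
      case e@ Invalid(_,_) => e
    })
}

object Validator {
  def pred[A](f: A => Boolean, err: => String) =
    Validator[A](a => if (f(a)) Valid else Invalid(err, Nil))
}

val nonEmpty       = Validator.pred[String](_.nonEmpty, "must not be empty")
val containsNumber = Validator.pred[String](_.exists(_.isDigit), "must contain a number")

val a = (nonEmpty + containsNumber)("")               // both errors reported
val b = (nonEmpty andIfSuccessful containsNumber)("") // only the first error
```

With +, an empty input is reported as both empty and missing a number; with andIfSuccessful, the redundant second message is suppressed, which is exactly this requirement.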


And we're done.
It's not how I would've approached code years back in my OO/Java era, nor is it like any of the code I came across written by others in that job. As an experiment I started fulfilling these requirements in Java the way old me used to code and there was a loooot of wasted code between requirements. I'd get all annoyed at each new step, so much so that I didn't even bother finishing. On the contrary, I enjoyed writing the FP.

Right, what conclusions can we draw?

FP is simple. Each validation is a single function in a wrapper.
FP is flexible. Logic is reusable and can be assembled into complex expressions easily.
FP is easily maintainable & modifiable. It has less structure, less structural dependencies, and is less code, plus the compiler's got your back.
FP is easy on the author. There was next to no rewriting or throw-away of code between requirements, and each new requirement was easy to implement.

I hope this proves an effective concrete example of FP for programmers of different backgrounds. I also hope this enables you to write more reliable software and have a happier time doing it.

Go forth and function.

Monday, 9 June 2014

A Year of Functional Programming

It's been a year since I first came across the concept of functional programming. To say it's changed my life would be an understatement. This is a reflection on that journey.

Warning: I use the term FP quite loosely throughout this article.

Where I Was

I've been coding since the age of 8, so 26 years now. I started with BASIC & different types of assembly then moved on to C, C++, PHP and Perl (those were different times, maaaan ☮), Java, JavaScript, Ruby. That's the brief gist of it. Basically: a lot of time on the clock and absolutely no exposure to FP concepts or languages. I thought functional programming just meant recursive functions (ooooo big deal right?). I really came in blind.

How It Started: Scala

Last year, I wanted the speed and static checking (note: not “types”) of Java with the conciseness and flexibility of Ruby. I came across Scala, skimmed a little and was impressed. I bought a copy of Scala For The Impatient and just ate it for breakfast. I read the entire thing in 2 or 3 days, jotted down everything useful and then just started coding. It was awesome! At first I was just coding the same way I would Java with less than half the lines of code. It is a very efficient Java.

Exposure: Haskell

Lurking around the Scala community, I came across a joke. Someone said “Scala is a gateway drug to Haskell.” I found that amusing, although not for the reasons that the author intended. Haskell? Isn't that some toy university language? An experiment or something. Is it even still alive? Scala's awesome and so powerful, why would that lead to Haskell? How.... intriguing. Inexplicably it piqued my interest and really stuck with me. Later I decided to look it up and yes, it sure was alive and very active. I was shocked to discover that it compiles to machine code (binary) and is comparable in speed to C/C++. What?! It seems idiotic now but a year ago I thought it was some interpreted equation solver. I'm not alone in that ignorance, sadly; I was talking about it to some mates over lunch last year and a friend incredulously burst out laughing, “Haskell?”, as if I was trying to tell him my dishwasher had an impressive static typing system. It saddens me to realise how in the dark I was, and how many people still are. Haskell is pretty frikken awesome! True to the gateway drug prophecy, I do now look to Haskell as an improvement over Scala. But let's get back on track.

Exposure: Scalaz

I also started seeing contentious discussion of some library called Scalaz. Curious, I had a look at the code to see what people were on about, and didn't understand it at all. I'd see classes like Blah[F[_], A, B], methods with confusing names that take params like G[_], A → C, F[X → G[Y]], implementations like F.boo_(f(x))(g)(x), and I'd just think “What the hell is this? How is this useful?”. I was used to methods that did something pertaining to a goal in its domain. This Scalaz code was very alien to me and yet, very intriguing. Some obviously-smart person spent time making the alphabet soup permutations, why?

I've since discovered the answer to that question and I never could've imagined the amount of benefit it would yield. Instead of methods with domain-specific meaning, I now see functions for morphisms, in accordance with relevant laws. Simply put: changing the shape or content of types. No mention of any kind of problem-domain; applicable to all problem-domains! It's been surprising over the last year to discover just how applicable this is. This kind of abstraction is prolific; it's in your code right now, disguised, and intertwined with your business logic. Without knowledge of these kinds of abstractions (and the awareness that a type-component level of abstraction is indeed possible) you are doomed to reinvent the wheel for all time and not even realise it. By identifying and using these abstractions, your code becomes more reusable, simple, readable, flexible, and testable. When you learn to recognise it, it's breathtaking.

FP: The Basics

Now that I'd been exposed to FP I started actively learning about it. At first I learned about referential transparency, purity and side-effects. I nodded agreement but had major reservations about the feasibility of adhering to such principles, and so at first I wasn't really sold on it. Or rather, I was hesitant. I may have been guilty of mumbling sentences involving the term “real-world”. Next came immutability. Now I'm a guy who used to use final religiously in Java, const on everything in C++ back in the day, and here FP is advocating for data immutability. Not just religiously advocating, but providing real, elegant solutions to issues that you encounter using fully immutable data structures. Wow! So with immutability, composability and lenses, it had its hooks in me.

Next came advocacy for more expressive typing and (G)ADTs. That appealed in theory too and again I was hesitant about its feasibility. Once I experimentally applied it to some parts of my project, I was blown away by how well it worked. That became the gateway into thinking of code/types algebraically, which led to...

FP: Maths

I loved maths back in school and always found it easy. Reading FP literature I started coming across lots of math and at first thought “great! I'm awesome at maths!” but then, trying to make sense of some FP math stuff, I'd find myself spending hours clicking link after link, realising that I wasn't getting it and, in many cases, still couldn't even make sense of the notation. It became daunting. Even depressing. Frequently demotivating.

The good news is that everything you need is out there; you just have to be prepared to learn more than you think you need. I persisted, stopped viewing it as an annoying bridge and started treating it as a fun subject in its own right and, before long, things made sense again. It opens new doors when you learn it.

Example: I had a browser tab (about co-Yoneda lemma) open for 3 months because I couldn't make sense of it. It took months (granted not everyday) of trying then confusion then tangents to understand background and whatever it was that threw me off. Once I learned that final piece of background info, I went from understanding only the first 5% to 100%. It was a great feeling.

Feeling Intimidated

Looking back there were times when I found learning FP quite intimidating. When I'm in/around/reading conversations between experienced FP'ers, quite often I've seriously felt like a moron. I started wondering if I gave my head a good slap, would a potato fall out. It can be intimidating when you're not used to it. But really, my advice to you, Reader, is that everyone's nice and happy to help when you're confused. I have a problem asking for help but I've seen everyone else do it and be received kindly... then I swoop in and absorb all the knowledge, hehe.

It's a mindset change. I wish I'd known this earlier as it would've saved me frustration and doubt, but you kind of need to unlearn what you think you know about coding, then go back to the basics. Imagine you've driven trains for decades, and spontaneously decide you want to be a pilot. No, you can't just read a plane manual in the morning and be in Tokyo in the afternoon. No, if you grab a beer with experienced pilots you won't be able to talk about aviation at their level. It's normal, right? Be patient, learn the basics, have fun, you'll get there.

On that note, I highly recommend Functional Programming in Scala; it's a phenomenal book. It helped me wade my way from total confusion to comfortable comprehension on a large number of FP topics that I'd been struggling to learn from blogs.

Realisation: Abstractions

Recently I looked at some code I wrote 8 months ago and was shocked! I looked at one file written in “good OO-style”, lots of inheritance and code reuse, and just thought “this is just a monoid and a bunch of crap because I didn't realise this is a monoid” so I rewrote the entire thing to about a third of the code size and ended up with double the flexibility. Shortly after I saw another file and this time thought “these are all just endofunctors,” and lo and behold, rewrote it to about a third of the code size and the final product being both easier to use and more powerful.

I now see more abstractions than I used to. Amazingly, I'm also starting to see similar abstractions outside of code, like in UI design, in (software) requirements. It's brilliant! If you're not on-board but aspire to write “DRY” code, you will love this.

Realisation: Confidence. Types vs Tests

I require a degree of confidence in my code/system, that varies with the project. I do whatever must be done to achieve that. In Ruby, that often meant testing it from every angle imaginable, which cost me significant effort and negated the benefit of the language itself being concise. In Java too, I felt the need to test rigorously.

At first I was the same in Scala, but since learning more FP, I test way less and have more confidence. Why? The static type system. By combining lower-level abstractions, an expressive type system, efficient and immutable data structures, and the laws of parametricity, in most cases when something compiles, it works. Simple as that. There are hard proofs available to you; I'm not talking about fuzzy feelings here. I didn't have much respect for static types coming from Java because it's hard to get much mileage out of them (even in Java 8: switch over an enum needs a default block? Argh! Gee, maybe all interfaces should have a catch-all method too, then. That really boiled my blood the other day). Java as a static typing system is like an abusive alcoholic as a parent: they may put food on the table and clothes on your back, but that's a far cry from a good parent. (And you'll become damaged.) Scala, on the other hand, teaches you to trust again. Trust. Trust in the compiler. I've come to learn that when you trust the compiler and can express yourself appropriately, entire classes of problems go away, and with them the need for testing. It's joyous.
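A taste of the parametricity claim (my example, not anything in particular): the more polymorphic a signature, the fewer things the implementation can possibly do, so the type alone rules out whole classes of bugs.

```scala
// This function knows nothing about A: it cannot inspect, invent or
// transform elements. All it can possibly do is drop, duplicate or
// reorder what it's given.
def rev[A](as: List[A]): List[A] = as.reverse

// Whereas a concrete signature like
//   def f(xs: List[Int]): List[Int]
// promises almost nothing: it could return List(42) and still typecheck.
```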

Sadly though, eventually you get to a point where Scala starts to struggle. It gets stupid, it can't work out what you mean, what you're saying, you have to gently hold its hand and coax it with explicit type declarations or tricks with type variance or rank-n types. Once you get to that level you start to feel like you've outgrown Scala and now need a big boy's compiler which can lead to habitual grumbling and regular reading about Haskell, Idris, Agda, Coq, et al.

However, when you do need tests, you can write a single test for a bunch of functions using a single expression. How? Laws. Properties. Don't know what I mean? Pushing an item onto a stack should always increase its size by 1; popping should then reduce the size by 1, return the item pushed, and leave a stack equivalent to what you started with. Using a library like ScalaCheck, that turns into a single expression, pop(push(start, item)) == (start, item), which is essentially all you need to write; ScalaCheck will generate the test data for you.
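Sketched in plain Scala with a hand-rolled generator loop (in real ScalaCheck, Prop.forAll replaces the loop and writes the generators for you):

```scala
import scala.util.Random

// A stack as an immutable list.
def push[A](stack: List[A], item: A): List[A] = item :: stack
def pop[A](stack: List[A]): (List[A], A)      = (stack.tail, stack.head)

// The law, checked against 100 random stacks and items.
val rnd = new Random(1)
for (_ <- 1 to 100) {
  val start = List.fill(rnd.nextInt(10))(rnd.nextInt(1000))
  val item  = rnd.nextInt(1000)
  assert(pop(push(start, item)) == (start, item)) // the single expression
}
```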

Where Next?

What does the future hold for me? Well, I could never go back to a dynamically-typed language again.
I will stick with Scala as I've invested a lot in it and it's still the best language I know well. I'd like to get more hands-on experience with Haskell; I don't know its trade-offs that well but its type system seems angelic. Got my eye on Idris, too.


I used to get excited discovering new libraries. I'd always think “Great! I wonder what this will allow me to do.” Well now I feel that way about research & academic papers. They are the same thing except smarter, more lasting, more dependable, and they yield more flexibility. It's awesome and I've got decades of catching up to do! Over the next year I'll definitely spend a lot of time learning more FP and comp-sci theory. I'd also like to be able to understand most Haskell blogs I come across. They promise very intelligent constructs (which aren't specific to Haskell) but the typing usually gets a bit too dense; it'd be nice to be able to read those articles with ease.

Don't fall for the industry dogma that academia isn't applicable to the real world. What a load of horseshit that lie is. It does apply to the real world; it's here to help you achieve your goals, and it will save you significant time and effort, even with the initial learning overhead considered. Don't say you don't have time to do things in half the time. If you're always busy and your business is full of super fast-paced agile-scrum need-it-yesterday kinda people, well, I know you don't have time, but what I'm offering is this: say you just need to get something out the door and can do it quickly and messily in 1000 lines in 6 hours with 10 bugs and 10 “abilities”; if you spend a bit of your own time learning, you could perform the same task in 400 lines in 4 hours with maybe 1 bug and 20 “abilities”. You've just saved 2 hours up-front, not to mention days of savings when adding new features, fixing bugs, etc. That's applicable to you, the “real world” and the industry. I've spent years in the industry, and not just as a coder, and I wish I'd known about this stuff back then because it would've saved me so much time, effort and stress. There seems to be an odd disdain for academia throughout the industry. Reject it. It's an ignorant justification of laziness and short-sightedness. It's false. I encourage you to take the leap.

Saturday, 26 October 2013

Scala: Methods vs Functions

It's a bright, windy Saturday morning. Sipping a nice, warm coffee, I find myself musing over the performance implications of the subtleties between methods and functions in Scala. Functions are first-class citizens in Scala, represented internally as instances of Function1, Function2 and so on (one trait per arity); effectively, an interface with an apply method. Methods are methods are methods; they're directly invocable in JVM-land.

So what differences are there that could affect performance? I can think of:

  • Because functions are traits with generic type parameters, in JVM-land those parameters will be erased to Object. I presume this means boxing for your primitives like int and long (unless scalac has any tricks up its sleeve like @specialized).
  • When passing a method to a higher-order function, the method will need to be boxed into a Function. For example, given
    def method(s: String) = s.length
    a call like List("a", "bb").map(method) requires an instance of Function1, so (again, I presume) the compiler generates a synthetic instance of Function1 as a proxy to the target method.
  • The JVM can invoke methods directly. To invoke a function, it will have to first load up the Function object and then invoke its apply method. That's an extra hop.
  • Probably more.
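To make the boxing point concrete, here's a tiny sketch (my example) of eta-expansion, where the compiler wraps a method into a Function1:

```scala
// A method: directly invocable on the JVM.
def method(s: String): Int = s.length

// An expected function type triggers eta-expansion: the compiler emits
// a synthetic Function1 whose apply delegates to `method`.
val f: String => Int = method

// The same expansion happens when passing the method to a
// higher-order function like map:
val lengths = List("a", "bb", "ccc").map(method)
```

Every call through f (or through map's argument) therefore goes via an extra apply hop, which is exactly what the benchmarks below try to measure.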

So those are some of the differences between functions and methods in Scala. Let's see how they perform. There's an awesome little micro-benchmarking tool called ScalaMeter. It takes about 2 min to get started with it. I decided to test 1,000,000 reps along three axes:

  1. Direct invocation vs passing to someone else
  2. Primitive vs Object argument
  3. Primitive vs Object result

The Results

Fn improvement over Method:

              int → int    int → str    str → int    str → str
    Direct     -6.32%       -9.53%       -4.39%       -1.09%
    As Arg      0.31%        0.94%       -0.22%       -0.24%
What can we see?
  • It seems that there is a boxing cost for functions' i/o.
  • It seems that there is a slight cost when invoking functions directly.
  • It seems that there is no real cost boxing methods into functions.


If you're like one-week-ago-me (pft, he's an idiot!), you might think you're helping the compiler out by writing functions instead of methods when their only use is to be passed around. Well, it doesn't appear to be so.
Just use methods and let your mind (if you're lucky enough to have its cooperation) worry about and solve other things.

Code and raw results available here:

Thursday, 5 September 2013

Keyed Lenses

TL;DR: Lenses are cool. I've come up with keyed lenses which I find helpful. Hopefully you will too. Do you? Have I just reinvented the wheel in ignorance?

Annual blog post time. So this year I've been imploding and exploding with enthusiasm over functional programming. Already I've found some FP perspectives and strategies mind-boggle-blow-blast-ingly effective, and beautiful. My day project is in a significantly better state owing to FP. If you're not onboard I recommend reading Learn You a Haskell for Great Good! and Functional Programming in Scala.

(Btw: big thanks to NICTA, specifically Tony Morris & Mark Hibberd who held 2 free FP courses and gracefully tolerated numerous dumb-shit moments from me. I think it's a semi-annual recurring thing so keep an eye out on scala-functional for the next one if you're interested and in Australia.)

What is a lens? If you don't know what a lens is, it's basically a decoupled getter/setter that can be composed with other lenses, so that the depth and structure of data can be hidden. In traditional OO you might not see the merits, but when your data structures are all immutable, the benefit is immense. There are plenty of good resources online to learn more, such as this, this and this.
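To make that concrete, here's a minimal hand-rolled lens with composition. This is illustrative only; the Scalaz lenses used later in the post are richer:

```scala
// A lens: a decoupled getter/setter pair over an immutable structure.
case class Lens[S, A](get: S => A, set: (S, A) => S) {
  // Composition hides the depth of the data: S -> A plus A -> B gives S -> B.
  def andThen[B](next: Lens[A, B]): Lens[S, B] =
    Lens(
      s => next.get(get(s)),
      (s, b) => set(s, next.set(get(s), b))
    )
}

case class Street(name: String)
case class Address(street: Street)

val streetL = Lens[Address, Street](_.street, (a, s) => a.copy(street = s))
val nameL   = Lens[Street, String](_.name, (st, n) => st.copy(name = n))

// Get/set a street name directly on an Address, structure hidden:
val deepL = streetL andThen nameL
```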


What I'm calling a KeyedLens is a lens that points to a value in a composite value such that a key is required. A Map is an obvious example.

(NOTE: Scalaz has some basic support for this -- I am aware of it -- but I find it doesn't meet my needs and/or the way I try to use it: the call-site syntax becomes long and nasty, it doesn't compose well, and it creates new lenses every time it's used, which is inefficient.)
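Before the real code (linked at the end of the post), the core idea can be sketched in a few lines. Note this toy version is mine, not the linked source:

```scala
// A keyed lens: get and set take an extra key to address a value
// inside a composite structure.
case class KeyedLens[S, K, A](get: (S, K) => A, set: (S, K, A) => S)

// The obvious instance: a value inside a Map, addressed by its key.
def mapL[K, V]: KeyedLens[Map[K, V], K, V] =
  KeyedLens((m, k) => m(k), (m, k, v) => m.updated(k, v))
```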

Let's start with some toy data.
I'm going to use Scala and the awesome Scalaz library, and there's a link to the KeyedLens source code at the end of the post. (If you don't know Scala, just imagine it's pseudo-code. The concepts translate into almost anything.)

Let's model a band from a guitarist's point of view.

Here we're modelling the mighty band Tesseract (think Pink Floyd + Meshuggah).
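You can read the shape of the model straight off the REPL output further down; it amounts to something like this (field names are my guesses):

```scala
case class Person(name: String)
case class Guitar(numStrings: Int, tuning: String, stringGauges: List[Double])
case class Band(name: String, guitarists: Map[Person, Guitar], others: Set[Person])

val acle  = Person("Acle")
val james = Person("James")

val band = Band(
  "Tesseract",
  Map(
    acle  -> Guitar(7, "BEADGBE", List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.059)),
    james -> Guitar(6, "EADGBE", List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))
  ),
  Set(Person("Jay"), Person("Amos"), Person("Ashe"))
)
```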
There are two places where I'm going to use a keyed lens.

  1. To access the guitar of a given band member. (guitarL)
  2. To access the string gauge of a given guitar. (stringGaugeL)

Here are the lens definitions:

This gives us the following lenses:

    guitarTuningL       LensFamily[Guitar, Guitar, String, String]
    stringGaugeL        LensFamily[(Guitar,Int), Guitar, Double, Double]
    bandNameL           LensFamily[Band, Band, String, String]
    guitarL             LensFamily[(Band,Person), Band, Guitar, Guitar]
    guitaristsTuningL   LensFamily[(Band,Person), Band, String, String]
    guitaristsGaugeL    LensFamily[(Band,(Person,Int)), Band, Double, Double]
Notice the keys always get propagated to the left.
Now let's see them in action. This is what appeals to me the most.

Usage. The Fun Part.

Get with one key. What is Acle's guitar tuned to?

scala> guitaristsTuningL.get(band, acle)
res0: String = BEADGBE

Get with two keys. What is the gauge of Acle's 7th string?

scala> guitaristsGaugeL.get(band, (acle, 7))
res1: Double = 0.059

Set with one key. I want to change Acle's tuning.

scala> guitaristsTuningL.set((band, acle), "G#FA#D#FA#D#")

res2: Band = Band(Tesseract,Map(Person(Acle) -> Guitar(7,G#FA#D#FA#D#,List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.059)), Person(James) -> Guitar(6,EADGBE,List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))),Set(Person(Jay), Person(Amos), Person(Ashe)))

Set with two keys. I want to lower the gauge of Acle's 7th string.

scala> guitaristsGaugeL.set((band, (acle, 7)), 0.0666666)

res3: Band = Band(Tesseract,Map(Person(Acle) -> Guitar(7,BEADGBE,List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.0666666)), Person(James) -> Guitar(6,EADGBE,List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))),Set(Person(Jay), Person(Amos), Person(Ashe)))

[If you're new to lenses, keep in mind that all the data here is immutable. Objects are copied and reused.]

  • I really like the brevity of these lines.
  • I like that you get a single lens that requires a key be provided.
  • I don't like the (A, (K1,K2)) type of guitaristsGaugeL, which is easily changeable into (A, K1, K2), but then what happens when it's further recomposed? I'd probably need methods like compose2, compose3, compose4, etc. Will think later.
What do you think? Does anything like this already exist?

Source code for KeyedLenses.scala is here.


Thursday, 25 April 2013

Trial: Choosing Lift over Rails

Today, I'm going to start on a slightly scary journey. I'm going to start work on a new webapp but I'm not going to use Ruby on Rails which I (mostly) love (with plugins) and already know well. Instead I'm going to use the Scala-based Lift framework which I have never used and is completely alien to me. This is going to cost me a lot of time at the beginning, not just because I don't know the API, but because the mindset of the beast seems drastically different to typical web frameworks that most of us are used to.

(NOTE re RAILS: I will make many comparisons to RoR because it's the web framework I know best and cos it's well-known. Just like everyone thinks Maccas (ie. McDonalds for non-Aussies) when you think fast-food. It's tough being on top. Bad luck RoR.)

(Have you made a similar transition? I'd love to know how it went. Let me know!)


In no particular order...

  • Security. Big security focus. Immune to lots of common vulnerabilities. I think it even automatically uses random param names for POSTs, etc.
  • Speed. Compiles to Java bytecode, runs on the JVM. Parallel rendering. Scala has built-in, native XML support, so that should be faster than parsing textual templates. I read somewhere that a modest, old single processor can comfortably serve 300 req/sec while doing a modest amount of transformation. With threading, I've read that this can exceed 20x the speed of the same webapp in RoR running multi-process, presumably on something like Puma, Thin, whatnot. (No sources, sorry; it's arbitrary anyway. And we all know Ruby can be scaled; that's not the topic here.)
  • Snippets rather than MVC. MVC has never felt right to me. It's great for trivial DB-interface-like-webapp kind of stuff, but outside of that I've often run into ambiguous scenarios that don't feel comfortable... although I have used it happily for lack of a better alternative. Lift instead uses snippets: pieces of logic/functionality that you can use all over the place in as many views as you like. That seems much better in terms of reusability; as for organisation, I'm not sure yet. Snippets are also executed in parallel -- nice.
  • Easier Ajax with wiring/binding, server-push, more. In addition to what the links say, most of the Ajax plumbing seems to be automatic; you don't even need to declare URLs or actions. Also type and parameter safety -- excellent.

Those are the main reasons that come to mind. There are other niceties too, such as lazy loading. The doco flaunts designer-friendly templates as some awesome feature but I'm personally on the fence about it. I'm a one-man everything-team at the moment so it doesn't immediately appeal to my situation. I can see it potentially making CSS dev faster (because you can work off a static template with all cases hardcoded, which gets wiped by Lift) but I think any small gains will be offset by the major productivity loss of not having HAML.

Downsides. The Oh Noes.

Basically, less <good thing>.

Less adoption. Less incumbency. This results in less libraries to choose from. Less plugins. I assume more reinvent-the-wheel kind of stuff for me to write.

Less community. It especially won't be as big as RoR's. This means less examples and code online, less questions on StackOverflow, less forums and blogs, less information and help. It sounds like the mailing list is friendly enough and I'll be joining today but it's nicer to have more resources at your disposal rather than just always hitting up the same guys for help.

Less developers on the market. If this app makes me millions of dollars and I get to a point where I need to outsource or hire, then I'm going to have significantly less people that can do the job. It's hard not to compare to RoR, which is ubiquitous these days and would cause no sweat finding capable helpers.

I'm going to accept these problems because Scala is awesome, Lift is philosophically fresh and whispers of great advantages once you climb the learning curve, and there are aspects of Lift that will have a direct impact on time, money and resources. For example, I'll be able to host my project free for longer (due to improved performance). If my project gets popular, yes, I'll have it harder looking for manpower but, (and this runs contrary to my innate tendencies), I often read & hear (with supporting evidence) that it's best to focus entirely on the short term when starting up a small business or venture (notice I avoided calling it a "startup"). Worries like scalability, resourcing, support, etc. can and should be dealt with once the project premise is proven to be successful and profitable. Like the 37-Signals guys say, "A business without a path to profit isn't a business, it's a hobby", and if this gets off the ground and becomes just a hobby then I'm not going to fuss about that stuff anyway.

Wednesday, 17 April 2013

Lessons Learnt on My Second Android App - Pt.1

Hi. Recently I released my second Android app. It's a collection of game timers, bells and whistles to be used whilst playing board games, card games, party games and the like. Practically, that means you can use it to play 4-player Scrabble where each player has 2 minutes per turn and a total of 25 min for the entire game, for example. Or, for a game like Scattergories you can have an hourglass set to run for a random amount of time between 1~2 min and then scare the crap out of everyone when it goes off. That kind of thing.

If you're interested, you can peruse some screenshots here and find the app here: “Time Us!”

Moving on, I'd like to share and record for future-me, the lessons I learned and observations I made whilst creating this app.

Scale Bitmaps Efficiently

Those tiny graphic files in your resources consume a surprising amount of memory when in use. A 100KB PNG is compressed, and when loaded can consume 10MB of memory as a Bitmap. As we know, memory is quite important on mobile devices, especially older ones. If the user is on a small LDPI device, for example, their tired old phone isn't going to have much memory to spare. If you're scaling bitmaps there's a chance that you're consuming more memory than you need to, and the amount of waste grows with older devices, where it matters most.

Say you have a large graphic that you scale down to a smaller size: without some special options in place, the device will load the full graphic into memory and retain it as it was loaded, prior to scaling. I originally assumed that on a small LDPI device the graphic would scale down to about 20% (or whatever) of its size and accordingly only consume 20% of the full-size memory. Right? No. By default Android loads first, scales later, and doesn't let go of that full-sized bitmap.
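The arithmetic behind that 10MB figure is simple: a decoded bitmap costs width × height × bytes-per-pixel, regardless of the file size on disk. A quick sanity check (my numbers, not from any particular device):

```scala
// Decoded bitmap cost: pixels * bytes per pixel.
// ARGB_8888, the common Android config, is 4 bytes per pixel.
def decodedBytes(widthPx: Int, heightPx: Int, bytesPerPixel: Int = 4): Long =
  widthPx.toLong * heightPx * bytesPerPixel

// A 1600x1600 image decodes to ~10MB, even if the PNG on disk is 100KB.
val tenMegs = decodedBytes(1600, 1600)  // 10,240,000 bytes
```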

So how can you reduce memory consumption? The first recommended solution (because it's more processor-performant as well) is to prepare different copies of your graphics for each density. Ok great, but there are scenarios where that's not the best approach, so what then? It turns out there's a whole trove of info on this issue in the official Android docs; the specifics of this scaling issue are here:

You can mostly just copy-and-paste the utility code from the link above. By applying the methods therein, I was able to reduce memory usage by up to 80% or so (from memory), depending on the device. I didn't know this page existed until I needed it so I suggest you bookmark it and/or keep a mental note if you don't read the above page immediately.
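From memory, the heart of the utility code on that page is a power-of-two sample-size calculation, roughly along these lines (treat this Scala rendering as a sketch and grab the canonical version from the docs):

```scala
// Choose the largest power-of-two inSampleSize that keeps both decoded
// dimensions at or above the requested size (mirrors the Android docs'
// calculateInSampleSize logic).
def calculateInSampleSize(srcW: Int, srcH: Int, reqW: Int, reqH: Int): Int = {
  var inSampleSize = 1
  if (srcH > reqH || srcW > reqW) {
    val halfW = srcW / 2
    val halfH = srcH / 2
    while ((halfH / inSampleSize) >= reqH && (halfW / inSampleSize) >= reqW)
      inSampleSize *= 2
  }
  inSampleSize
}
```

Decoding a 2048×1536 source at a requested 200×200 then yields inSampleSize = 4, i.e. a 512×384 bitmap at one sixteenth of the full memory cost.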

Free Bitmap Memory Manually

Now that we know a little more about bitmap memory consumption, let's talk about another snag I encountered.

My app's home screen has two looks: dark & light. The light-mode background consists of 2 images. On a w360dp XHDPI phone, it consumes 5.8MB for bkgd img #1, and 4.0MB for #2, making a total of 9.8MB.

(FYI: You can easily see the memory profile of, and analyse your app by doing this from Eclipse: DDMS → Devices → Select process and click “Dump HPROF file” in the toolbar.)

What would you expect to happen to that 9.8MB when the user hits a button and the app moves on to the next activity (screen)? I expected the OS to release the memory because it's no longer in use. Not so. As long as the home activity is in the task history, the memory stays consumed and locked, which in my case means for the entirety of the app. Now, maybe technology is catching up, but the old phone I had for two years before my newer Galaxy Nexus was a HTC Magic, and that thing was constantly chugging due to memory problems. Consequently, I'm not comfortable holding onto 10MB for no reason on a mobile device.

So what can you do? Well, I could draw both images onto a single canvas, but that's not really solving the problem; it would just drop the 10MB down to 6MB, which would still be unavailable later. To solve the problem I did three things:

  1. Remove, dereference, recycle the images in onStop().
    I wrote a utility function to do just that, and then simply called it in my activity's onStop() method. It looks like this:
  2. Set the images during onStart().
    In your activity, just use View.setBackground(Drawable) or View.setBackgroundDrawable(Drawable) in your onStart() callback to restore the images that you clear in onStop().
  3. Turn off hardware acceleration.
    Devices running Android 4.2 require an additional kick in the pants to get them to let go of the memory. You'll want to turn hardware acceleration off or else you'll end up with instances of GLES20DisplayList in memory that consume exactly the same amount of space. Instructions to turn off hardware acceleration can be found here:

There's actually a whole bunch of doco on bitmaps and memory on the official Android site so for more info, take a look at


Optimise Your PNGs

If you have a moderate amount of graphics and, like me, you optimise (ie. re-generate) them for multiple densities, then the size of your APK can balloon fast. This situation led me to discover a utility called PNGOUT (Windows, ArchLinux, Other Linux + OSX). It optimises your PNGs and gets them down to a smaller size using, from memory, a special compression algorithm targeted at graphics. As with most types of ultra-compression it can be slow, but the results are worth it.

I ran it over my PNGs. It took 30min and reduced my total size from 7.4MB to 5.9MB, a 20% reduction. Nice.

Screen Widths

I learned a few things about screen width. Firstly, I had to find a database of devices and specs. One doesn't exist, but I did find two admirable attempts here and here. After some analysis, from what I can tell, most older phones have a width of 320dp and I don't think there are any phones with less than that -- AdMob's smallest banner ads are 320dp wide. For LDPI and MDPI, 320dp is a safe bet. Once you get to HDPI, it seems that around 70% of devices are 320dp, 25% are 360dp, and the rest are larger. Here's the most interesting piece of news: for XHDPI, 360dp seems to be both the minimum and the majority. There don't seem to be any 320dp XHDPI devices in existence!

What does this mean? Why do I care? It means that on some devices in some scenarios, you're going to end up with more free space than you expected, and graphics will be perceived as smaller. Instead, a larger graphic is more appropriate, and that's something to consider when generating graphics. You'll need to make a conscious decision about which dimension is more important to you for each graphic, and enlarge appropriately. If you want a graphic to be perceived as the same size on a 320dp-wide and a 360dp-wide HDPI device, then you might want to generate another copy at 360 ÷ 320 = 1.125x the original size. If you're supporting tablets then it might pay to do something similar for the most common tablet sizes.
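For the record, the arithmetic (the 1.6875 factor below is just HDPI's 1.5 multiplied by this 1.125 width factor):

```scala
// dp -> px conversion: px = dp * density scale (MDPI = 1, HDPI = 1.5, XHDPI = 2).
def px(dp: Int, densityScale: Double): Long = math.round(dp * densityScale)

// To look the same size on a 360dp-wide screen as on a 320dp-wide one,
// enlarge the source graphic by:
val widthFactor = 360.0 / 320.0  // 1.125
```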

Here's what I ended up with:

    Directory               Scale factor                       Contents
    drawable-w360dp-hdpi    1.6875                             Width-sensitive GFX only
    drawable-xhdpi          2 or 2.25 (depending on the gfx)   Everything

If you'd like to learn more have a read of these:
Supporting Different Densities
Supporting Different Screen Sizes

More Next Time

I learnt more but I'll post that next time. If you read this far I hope I've been helpful.