Monday, 5 June 2017

Dependently-Typed Functions

It's been a while since my last blog post. This time around I'm going to show how to do something in Scala that you might first think would be very straight-forward but unfortunately, at least in Scala ≤ 2.12.2, requires a bit of hoop-jumping: dependently-typed functions.

Consider the following data type:

sealed trait Field { type Value }
object Field {
  case object Name extends Field { type Value = String }
  case object Age  extends Field { type Value = Int }
}

With this data type, you have the following path-dependent types:

Name.Value == String
Age .Value == Int

val n = Name
n.Value == String

You also have the following type projections:

Name .type#Value = String
Age  .type#Value = Int
Field     #Value = Any
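To see the path-dependent types in action, here's a quick sketch (the set method is my own illustration, not part of the post's API):

```scala
sealed trait Field { type Value }
object Field {
  case object Name extends Field { type Value = String }
  case object Age  extends Field { type Value = Int }
}

// The type of `v` depends on which Field value is passed in:
def set(f: Field)(v: f.Value): String = s"$f := $v"

set(Field.Name)("bob")  // ok: Name.Value = String
set(Field.Age)(42)      // ok: Age.Value = Int
// set(Field.Age)("bob")   // won't compile: found String, required Int
```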

1. A Dependently-Typed Method

Let's say you want to create the following, rather innocent-looking method:

def emptyValue(f: Field): f.Value

Likely your first attempt will be to pattern-match:

def emptyValue(f: Field): f.Value =
  f match {
    case Field.Name => ""
    case Field.Age  => 0
  }

Unfortunately scalac doesn't support this and you instead get compilation errors:

<console>:15: error: type mismatch;
 found   : String("")
 required: f.Value
           case Field.Name => ""
<console>:16: error: type mismatch;
 found   : Int(0)
 required: f.Value
           case Field.Age  => 0

Ok, let's switch to a different representation:

type Aux[A] = Field { type Value = A }

def emptyValueHack[A](f: Aux[A]): A =
  f match {
    case Field.Name => ""
    case Field.Age  => 0
  }

def emptyValue(f: Field): f.Value =
  emptyValueHack[f.Value](f)

This does work. Great! END BLOG POST. Right? Wrong. There's another problem. What happens if we forget a case:

def emptyValueHack2[A](f: Aux[A]): A =
  f match {
    case Field.Name => ""
    // case Field.Age  => 0
  }

It compiles successfully without any warnings. We've lost exhaustivity checking, which is a runtime exception just waiting to happen. Oh noes. Now what? Consider: what are we doing when we do simple pattern-matching on each case? We're inspecting a value, choosing a corresponding function, and executing it. If each case has exactly one corresponding function then we have exhaustivity. That sounds like something we already have the means to do, without pattern-matching, in plain old Scala.
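To make the danger concrete, here's the kind of failure the missing case produces (a sketch using the definitions from above):

```scala
sealed trait Field { type Value }
object Field {
  case object Name extends Field { type Value = String }
  case object Age  extends Field { type Value = Int }
}
type Aux[A] = Field { type Value = A }

def emptyValueHack2[A](f: Aux[A]): A =
  f match {
    case Field.Name => ""
    // Field.Age forgotten — compiles without warning
  }

emptyValueHack2(Field.Name)  // fine: ""
// emptyValueHack2(Field.Age)   // compiles, then throws scala.MatchError at runtime
```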

Ok, let's do this ourselves: exactly one function per case, chosen depending on the Field.

sealed trait Field {
  type Value
  def fold(n: Field.Name.type => Field.Name.Value,
           a: Field.Age .type => Field.Age .Value): Value
}

object Field {
  case object Name extends Field {
    override type Value = String
    override def fold(n: Field.Name.type => Field.Name.Value, a: Field.Age.type => Field.Age.Value): Value =
      n(this)
  }

  case object Age extends Field {
    override type Value = Int
    override def fold(n: Field.Name.type => Field.Name.Value, a: Field.Age.type => Field.Age.Value): Value =
      a(this)
  }
}

Take a good look at this. There are a few things that might seem odd. You're probably wondering why I'm passing statically-known singletons as arguments. Why not just:

def fold(name: => Field.Name.Value,
         age : => Field.Age .Value): Value

Three reasons:

  1. Cases aren't always objects. What if they're case classes, like case class CustomField(label: String) extends Field? In that case it's important to pass the instance to the caller so that they have access to the additional/dynamic information in the label field.

  2. It embodies the proof that the appropriate argument is required for each field case. The types make it clear, and so long as you call the arg with this instead of Field.Name directly, you get an extra bit of proof of correctness. Seeing as the workarounds described in this post introduce some tedium, they also invite copy-pasting, which can lead to accidental bugs like this:

// Spot the bug...
case object Name extends Field {
  override type Value = String
  override def fold(n: Field.Name.type => Field.Name.Value, a: Field.Age.type => Field.Age.Value): Value =
    n(Field.Name)
}
case object Age extends Field {
  override type Value = Int
  override def fold(n: Field.Name.type => Field.Name.Value, a: Field.Age.type => Field.Age.Value): Value =
    a(Field.Age)
}
  3. The caller has the same problem as above; if they're pattern-matching and want a downcast instance of Field in their case functions, this gives it to them so that they don't manually reference Field.Name and end up with potential bugs.

2. Reducing Duplication

Next, let's think about what happens if we add a new case like Field.Address; we'll have to update the fold signature in our current three places and then add a fourth in Field.Address. So much repetition! If you find yourself copy-pasting a cumbersome list of method arguments, consider it a good practice to encapsulate them all in a class/record. That way you can consistently pass around and declare a single type, and changes to its fields don't cause changes to method signatures. Let's do that for our fold method:

sealed trait Field {
  type Value
  def fold(f: Field.Fold): Value
}

object Field {
  case object Name extends Field {
    override type Value = String
    override def fold(f: Field.Fold) = f.name(this)
  }

  case object Age extends Field {
    override type Value = Int
    override def fold(f: Field.Fold) = f.age(this)
  }

  final case class Fold(name: Field.Name.type => Field.Name.Value,
                        age : Field.Age .type => Field.Age .Value)
}

Look at that fold method, nice and easy. We can add cases ALL DAY LONG without repetition. High five thineself.

While we're in the vicinity, let's add a little convenience method to the Fold class to really prove to ourselves that we can accomplish our original goal. Pay attention to the signature:

final case class Fold(name: Field.Name.type => Field.Name.Value,
                      age : Field.Age .type => Field.Age .Value) {
  def apply(f: Field): f.Value = f.fold(this)
}

Now using the above, how can we rewrite our original emptyValue function?

def emptyValue(f: Field): f.Value =
  Field.Fold(
    name = _ => "",
    age  = _ => 0)(f)

Goals: accomplished.

  • Concise
  • Exhaustive
  • Pattern-match in spirit

It's a little ugly but not a huge deal. Depending on your codebase and environment you can likely avoid the need for the _ => bits without penalty, which will make it a little easier on the eyes.

3. A Dependently-Typed Function

Like most everything from OOP, methods are boring. You can't abstract over them; all you can do is call them. Functions allow you to do more because they themselves are values: you can pass them around, which facilitates awesome collection methods like .map(A => B), .filter(A => Boolean), etc., and that's just the tip of the iceberg.

In the previous step we created a method whose output type depends on its input value. How can we have the same in the form of a function? Function types are fixed, and a function type doesn't have a value to work with. Scala doesn't accept this kind of syntax:

// nope - not valid syntax
type MyFn = (f: Field) => f.Value

You might try using projections like this, but you'll lose the relationship between input and output:

type MyFn = Field => Field#Value
// which is the same as
type MyFn = Field => Any

We've already encountered the answer. Surprise! Field.Fold is what we're looking for. It's a value, you can pass it around; you can apply it via its apply method to obtain a dependently-typed result.

Let's try it out:

def blah(field: Field, getValue: Field.Fold): field.Value =
  getValue(field)

Hmmm, yes, well it does work but it's admittedly not very interesting in that shape. It can only represent f => f.Value...

4. Dependently-Typed Functions

What if we want a function of any shape instead of just f => f.Value? Maybe you want to be able to print each field to the screen: f => f.Value => Unit. Maybe you want to reduce lists of each value to a single value: f => List[f.Value] => f.Value. Or perform validation: f => f.Value => Either[Error, f.Value].

Let's find an abstraction that can represent each example above. They all have the shape: f => {something including f.Value}. Add some type aliases for the right-hand sides:

type GetValue  [Value] = Value
type Print     [Value] = Value => Unit
type ReduceList[Value] = List[Value] => Value
type Validate  [Value] = Value => Either[Error, Value]

There's our abstraction:

type F[Value] = // various

with the fold being various cases of f => F[f.Value] where F[Value] = ….

Let's update the fold to allow this (the fold method on Field gains the same F[_] parameter, so that apply can delegate to it):

final case class Fold[F[_]](name: Field.Name.type => F[Field.Name.Value],
                            age : Field.Age .type => F[Field.Age .Value]) {
  def apply(f: Field): F[f.Value] = f.fold(this)
}

Now let's try it out:

// f: Field => f.Value
val getValue = Field.Fold[GetValue](
  name = _ => "George",
  age  = _ => 99)

// f: Field => f.Value => Unit
val printValue = Field.Fold[Print](
  name = _ => n => println(s"My name is $n."),
  age  = _ => i => println(s"I am $i years old."))

Great. Dependently-typed function values. And how do we use them? Just like normal functions.

val f = Field.Age
val v = getValue(f) // v = 99
printValue(f)(v)    // prints: I am 99 years old.
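For completeness, the ReduceList alias from earlier works the same way (a sketch building on the Field and Fold defined above):

```scala
// f: Field => List[f.Value] => f.Value
val reduceList = Field.Fold[ReduceList](
  name = _ => (ns => ns.mkString(" ")),
  age  = _ => (is => is.sum))

val names: String = reduceList(Field.Name)(List("Jane", "Doe")) // "Jane Doe"
val total: Int    = reduceList(Field.Age)(List(1, 2, 3))        // 6
```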

Very good. I hope you found this interesting. Thanks for reading!

For more exhilaration, as an exercise to the reader, try to:

  • define composition of fold functions
  • create multiple folds for subsets of the same data type


Type parameters

In terms of representation, using type parameters is isomorphic (ignoring variance). The following represents the exact same information:

sealed trait Field[V]
object Field {
  case object Name extends Field[String]
  case object Age  extends Field[Int]
}

In terms of usage however, the two representations are far from the same and have a drastic impact on how you'll use them. Some things are easier, some harder.

For example, in terms of generic access, existential types make life easier: it's simply Field instead of Field[_]. This is even more evident when there's a constraint on the type: the type-parameter version becomes Field[_ <: AnyRef] whilst Field remains the same.
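A sketch of that difference (FieldP is my own name for a type-parameter version, for comparison only):

```scala
// Type-member version:
sealed trait Field { type Value }
object Field { case object Name extends Field { type Value = String } }

// Type-parameter version:
sealed trait FieldP[V]
object FieldP { case object Name extends FieldP[String] }

def describe (f: Field):     String = f.toString  // generic code just says Field
def describeP(f: FieldP[_]): String = f.toString  // generic code needs a wildcard

// With a bound, the wildcard grows while Field stays the same:
def describeRef(f: FieldP[_ <: AnyRef]): String = f.toString
```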

I've come across other scenarios but honestly can't remember right now, something something implicits... Instead take a look at this slide deck of @julienrf's for more comparison:


The final code in full of this blog post is available here:

Sunday, 20 March 2016

Testing Scala.JS on Firefox & Chrome from SBT

It's been fantastic being able to write Scala and compile it to JavaScript thanks to Scala.JS. Going further, Scala.JS lets you write accompanying unit tests, run them from SBT, and choose a target environment from {Rhino, Node.JS, Phantom.JS}. If you needed DOM in your testing, your only option was Phantom.JS which seems great at first but—because it only simulates DOM—you soon discover that there are many cases in which its behaviour diverges from normal browsers (eg. DOM types for <td> tags), or isn't supported at all (text selection, anything to do with focus, more). Oh, and it's also riddled with bugs, many significant and long-standing. So while Phantom.JS's effort is appreciated, it's no substitute for a real browser. This was where the story ended until very recently.

Recently, the Scala.JS team released scala-js-env-selenium which allows you to use Selenium in the same way you would the other JS environments. That means that Scala.JS can now interface with real browsers, namely Firefox and Chrome. Awesome!

The next step, and the purpose of this blog post, is to effectively integrate it into your SBT/Scala/Scala.JS project. Let me tell you about my ideal environment and then I'll show you how to achieve it.


In my ideal environment:
  • I write my Scala.JS unit tests the same way I already do, and I can continue to run them against Node.JS or Phantom.JS because it's fast. No changes there.
  • To run tests against Firefox or Chrome, I simply prefix my SBT command with firefox: or chrome:.
    For example, firefox:test would run tests in Firefox, chrome:testOnly a.b.c would run my a.b.c test in Chrome.
  • Testing in FF/Chrome doesn't require recompilation (All in the Land know Scalac is slow). It should use all of the same bits and bobs (especially the output JS) that normal tests use.
  • I specify that certain tests will only run in FF and/or Chrome.
    For example, tests that use focus should skip Phantom.JS where I know focus doesn't work.
  • I run testAll to run the same tests in all environments (fast-env & Firefox & Chrome). This will happen concurrently.
  • I can use FF/Chrome headlessly (i.e. without the windows popping up when the browsers are launched and running).

1. Selenium support.

Add this to your project/plugins.sbt:
libraryDependencies += "org.scala-js" %% "scalajs-env-selenium" % "0.1.1"
Then install ChromeDriver which is needed so that Selenium can interface with Chrome.

2. General SBT Config

Create a file called project/InBrowserTesting.scala with the following content. It creates SBT configurations for each browser, then delegates most of the settings to the test:* settings.

3. Project-specific SBT config

Next you need to look at your existing SBT build settings.

For each cross-JVM/JS project, add:
For each JS-only project, add:
For each JVM-only project, add:

You might be wondering why JVM projects need any configuration at all. It's so that when testAll is run from a JVM project or the root project, the JVM tests run too. Without this setting, testAll in a JVM project would do nothing at all, and testAll from the root would only run the JS tests.

4. Environment-Dependent Tests

I often forget that I'm writing JS when I write Scala.JS. As we're in JS land, to determine our environment all we have to do is check the user-agent. How you skip tests depends on the test framework you're using but all you have to do is put something like this in your test code:
  if (JsEnvUtils.isRealBrowser) {
    // test here
  } else {
    // skip
  }
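JsEnvUtils here comes from the SBT config above; its exact contents aren't shown, but here's a minimal sketch of what such a helper might look like (names and detection logic are my assumptions):

```scala
import scala.scalajs.js

object JsEnvUtils {
  // What the current JS environment reports about itself
  val userAgent: String =
    js.Dynamic.global.navigator.userAgent.asInstanceOf[String]

  // Phantom.JS identifies itself in the user-agent string
  val isPhantom: Boolean = userAgent contains "PhantomJS"

  // Assumption: Selenium-driven environments are Firefox or Chrome
  val isFirefox: Boolean = userAgent contains "Firefox"
  val isChrome : Boolean = (userAgent contains "Chrome") && !isPhantom

  val isRealBrowser: Boolean = isFirefox || isChrome
}
```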

5. Headlessness

Super simple (unless you're a Windows user). Install xvfb, "X Virtual FrameBuffer", which starts X without a graphics display.

Here are two different ways you can use it:
  1. Either start it in a separate window via
    Xvfb :1
    or in the background
    nohup Xvfb :1 &
    then launch SBT like this:
    DISPLAY=:1 sbt
  2. This tip comes from danielkza (thanks!). You can simply prepend your SBT command with xvfb-run -a to have an X server spun up on demand without the need to start it yourself. Beware though, that xvfb-run is a bit naive and susceptible to race conditions so while it'll be fine on your local machine, it may cause you problems on your CI server or similar.
You'll no longer see any Firefox and Chrome windows; all your output just appears in the SBT console as usual. Too easy.

Note: The :1 indicates the X display-number, which is a means to uniquely identify an X server on a host. The 1 is completely arbitrary—you can choose any number so long as there isn't another X server running that's already associated with it.


All goals described are thus achieved.
I hope you found this helpful.
Happy coding!

Monday, 23 February 2015

Zero-overhead Recursive ADT Coproducts

Zero-product Recursive AMD what???

Ok. Imagine this: you're building some app, and in certain parts users can type text with special tokens. (It's like in Twitter, when typing “Hello @japgolly” the “@japgolly” part gets special treatment.) You parse various types of tokens. You might have different locations to type text, and rules about which tokens are allowed in each location. You want the compiler to enforce those rules but you also want to handle tokens generically sometimes. How would you do such a thing in Scala?

Initial Attempts

Ideally you'd define an ADT (algebraic data type) for all tokens possible, then create new types for each location that form a subset of tokens allowed. If that were possible, here's what that would look like.

sealed trait Token
case class  PlainText(t: String)  extends Token
case object NewLine               extends Token
case class  Link(url: String)     extends Token
case class  Quote(q: List[Token]) extends Token

// type BlogTitle   = PlainText | Link    | Quote[BlogTitle]

// type BlogComment = PlainText | NewLine | Quote[BlogComment]

Now Scala won't let us create our BlogTitle type like shown above. It doesn't have a syntax for coproducts (which is what BlogTitle and BlogComment would be; also called “disjoint unions” and “sum-types”) over existing types. Seeing as we have control over the definition of the generic tokens, we can be tricky and invert the declarations like this:

sealed trait BlogTitle
sealed trait BlogComment

sealed trait Token
case class  PlainText(t: String) extends Token with BlogTitle with BlogComment
case object NewLine              extends Token                with BlogComment
case class  Link(url: String)    extends Token with BlogTitle
case class  Quote(q: List[????]) extends Token with BlogTitle with BlogComment
//                        ↑ hmmm...

...but as you can see, we hit a wall when we get to Quote, which is recursive. We want a Quote in a BlogTitle to only contain BlogTitle tokens, not just any type of token. We can continue our poor hack as follows.

abstract class Quote[A <: Token] extends Token { val q: List[A] }
case class QuoteInBlogTitle  (q: List[BlogTitle])   extends Quote[BlogTitle]
case class QuoteInBlogComment(q: List[BlogComment]) extends Quote[BlogComment]

Not pleasant. And we're not really sharing types anymore. What else could we do?

We could create separate ADTs for BlogTitle and BlogComment, that would mirror and wrap their matching generic tokens, then write converters from specific to generic. That's a lot of duplicated logic and tedium, plus we now double the allocations and memory needed. Let's try something else...


NOTE: This bit about Shapeless is an interesting detour, but it can be skipped.

We could use Shapeless! Shapeless is an ingeniously-sculpted library that facilitates abstractions that a panel of sane experts would deem impossible in Scala, one such abstraction being Coproducts. Here's what a solution looks like using Shapeless.

(Sorry I thought I had this working but just realised recursive coproducts don't work. I've commented-out Quote[A] for now. There is probably a way of doing this – Shapeless often doesn't take no from scalac for an answer – I'll update this if some kind 'netizen shares how.)

sealed trait Token
case class  PlainText(text: String) extends Token
case object NewLine                 extends Token
case class  Link(url: String)       extends Token
case class  Quote[A](q: List[A])    extends Token

type BlogTitle   = PlainText :+: Link         :+: /*Quote[BlogTitle]   :+: */ CNil
type BlogComment = PlainText :+: NewLine.type :+: /*Quote[BlogComment] :+: */ CNil

/* compiles → */ val title = Coproduct[BlogTitle](PlainText("cool"))
// error    → // val title = Coproduct[BlogTitle](NewLine)

So far so good. What would a Token ⇒ Text function look like?

object ToText extends Poly1 {
  implicit def caseText    = at[PlainText   ](_.text)
  implicit def caseNewLine = at[NewLine.type](_ => "\n")
  implicit def caseLink    = at[Link        ](_.url)
  // ...
}

val text: String = title.fold(ToText)

Ok, I'm a little unhappy because I'm very fond of pattern-matching in these situations, but the above does work effectively. One thing to be aware of with Shapeless is how it works. To achieve its awesomeness, it must build up a hierarchy of proofs, which incurs time and space costs at both compile- and run-time – its awesomeness ain't free. The val title = ... statement above creates at least 7 new objects at runtime, where I want 1. Depending on your usage and needs, that overhead might be nothing, but it might be significant. It's something to be aware of when you decide on your solution.

Zero-overhead Recursive ADT Coproducts

There's another way. I mentioned “zero-overhead” and it can be done. Here is a different solution that relies solely on standard Scala features, one such feature being path-dependent types.

You can create an abstract ADT, putting each constituent in a trait, then simply combine those traits into an object to have it reify a new, concrete, sealed ADT. Sealed! Let's see the new definition:

// Generic

sealed trait Base {
  sealed trait Token
}

sealed trait PlainTextT extends Base {
  case class PlainText(text: String) extends Token
}

sealed trait NewLineT extends Base {
  case class NewLine() extends Token
}

sealed trait LinkT extends Base {
  case class Link(url: String) extends Token
}

sealed trait QuoteT extends Base {
  case class Quote(content: List[Token]) extends Token
}

// Specific

object BlogTitle   extends PlainTextT with LinkT    with QuoteT
object BlogComment extends PlainTextT with NewLineT with QuoteT

Now let's use it:

   List[BlogTitle.Token](BlogTitle.PlainText("Hello")) // success
// List[BlogTitle.Token](BlogTitle.NewLine)   ← error: BlogTitle.NewLine doesn't exist
// List[BlogTitle.Token](BlogComment.NewLine) ← error: BlogComment tokens aren't BlogTitle tokens

// Specific
val blogTitleToText: BlogTitle.Token => String = {
  // case BlogTitle.NewLine   => ""     ← error: BlogTitle.NewLine doesn't exist
  // case BlogComment.NewLine => ""     ← error: BlogComment tokens not allowed
  case BlogTitle.PlainText(txt) => txt
  case BlogTitle.Link(url)      => url
  // Compiler warns missing BlogTitle.Quote(_) ✓
}

// General
val anyTokenToText: Base#Token => String = {
  case a: PlainTextT#PlainText => a.text
  case a: LinkT     #Link      => a.url
  case a: NewLineT  #NewLine   => "\n"
  // Compiler warns missing QuoteT#Quote ✓
}

// Recursive types
val t: BlogTitle  .Quote => List[BlogTitle  .Token] = _.content
val c: BlogComment.Quote => List[BlogComment.Token] = _.content
val g: QuoteT     #Quote => List[Base       #Token] = _.content

Look at that! That is awesome. These are some things that we get:

  • No duplicated definitions or logic.
  • Generic & specific hierarchies are sealed, meaning the compiler will let you know when you forget to cater for a case, or try to cater for a case not allowed.
  • Children of recursive types have the same specialisation.
    Eg. a BlogTitle can only quote using BlogTitle tokens.
  • Tokens can be processed generically.
  • Zero-overhead. No additional computation or new memory allocation needed to store tokens, or move them into a generic context. No implicits.
  • Nice, neat pattern-matching which makes me happy.
  • It's just plain ol' Scala traits so you're free to encode more constraints & organisation. You can consolidate traits, add type aliases, all that jazz.

There you go. Seems like a great solution to this particular scenario.

Nothing is without downsides though. Creation will likely be a little hairy; imagine writing a serialisation codec – the Generic ⇒ Binary part will be easy but Binary ⇒ Specific will be more effort. In my case I will only create this data thrice {serialisation, parsing, random data generation} but read and process it many, many times. Good tradeoff.
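To illustrate that asymmetry, here's a hypothetical sketch of the two directions (the tag bytes and function names are invented for illustration; Quote is omitted for brevity):

```scala
sealed trait Base { sealed trait Token }
sealed trait PlainTextT extends Base { case class PlainText(text: String) extends Token }
sealed trait NewLineT   extends Base { case class NewLine() extends Token }
sealed trait LinkT      extends Base { case class Link(url: String) extends Token }

object BlogTitle   extends PlainTextT with LinkT
object BlogComment extends PlainTextT with NewLineT

// Generic ⇒ Binary is easy: one function handles every specialisation.
def tagOf(t: Base#Token): Byte = t match {
  case _: PlainTextT#PlainText => 0
  case _: NewLineT  #NewLine   => 1
  case _: LinkT     #Link      => 2
}

// Binary ⇒ Specific is per-target: only BlogTitle can construct BlogTitle tokens.
def parseTitleToken(tag: Byte, payload: String): Option[BlogTitle.Token] =
  tag match {
    case 0 => Some(BlogTitle.PlainText(payload))
    case 2 => Some(BlogTitle.Link(payload))
    case _ => None // NewLine isn't a BlogTitle token
  }
```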

Sunday, 28 September 2014

An Example of Functional Programming

Many people, after reading my previous blog post, asked to see a practical example of FP with code. I know it's been a few months – I actually got married recently; wedding planning is very time consuming – but I've finally come up with an example. Please enjoy.

Most introductions to FP begin with pleas for immutability but I'm going to do something different. I've come up with a real-world example that's not too contrived. It's about validating user data. There will be 5 incremental requirements that you can imagine coming in sequentially, each one building on the other. We'll code to satisfy each requirement incrementally without peeking at the following requirements. We'll code with an FP mindset and use Scala to do so but the language isn't important. This isn't about Scala. It's about principals and a perspective of thought. You can write FP in Java (if you enjoy pain) and you can write OO in Haskell (I know someone who does this and baffles his friends). The language you use affects the ease of writing FP, but FP isn't bound to or defined by any one language. It's more than that. If you don't use Scala this will still be applicable and useful to you.

I know many readers will have programming experience but little FP experience so I will try to make this as beginner-friendly as possible and omit using jargon without explanation.

Req 1. Reject invalid input.

The premise here is that we have data and we want to know if it's valid or not. For example, suppose we want to ensure a username conforms to certain rules before we accept it and store it in our app's database.

I'm championing functional programming here so let's use a function! What's the simplest thing we need to make this work? A function that takes some data and returns whether it's valid or not: A ⇒ Boolean.

Well that's certainly simple but I'm going to hand-wavily tell you that primitives are dangerous. They denote the format of the underlying data but not its meaning. If you refactor a function like blah(dataWasValid: Boolean, hacksEnabled: Boolean, killTheHostages: Boolean) the compiler isn't going to help you if you get the arguments wrong somewhere. Have you ever had a bug where you used the ID of one data object in place of another because they were both longs? Did you hear about the NASA mission that failed because mixed units of measurement (metric vs imperial) were indistinguishable in the code?

So let's address that by first correcting the definition of our function. We want a function that takes some data and returns an indication of validity: A ⇒ Validity.
sealed trait Validity
case object Valid extends Validity
case object Invalid extends Validity

type Validator[A] = A => Validity

We'll also create a sample username validator and put it to use. First the validator:
val usernameV: Validator[String] = {
  val p = "^[a-z]+$".r.pattern
  s => if (p.matcher(s).matches) Valid else Invalid
}

Now a sample save function:
def example(u: String): Unit =
  usernameV(u) match {
    case Valid   => println("Fake-saving username.")
    case Invalid => println("Invalid username.")
  }

There's a problem here. Code like this will make FP practitioners cry, and for good reason. How would we test this function? How could we ever manipulate or depend on what it does, or its outcome? The problem here is “effects”; unbridled, they are anathema to healthy, reusable code. An effect is anything that affects anything outside the function it lives in, relies on anything impure outside the function it lives in, or happens in place of the function returning a value. Examples are printing to the screen, throwing an exception, reading a file, reading a global variable.

Instead we will model effects as data. Whereas the above example would either 1) print “Fake-saving username” or 2) print “Invalid username”, we will now either 1) return an effect that, when invoked, prints “Fake-saving username”, or 2) return a reason for failure.

We'll define our own datatype called Effect: a function that takes no input and returns no meaningful output.
(Note: If you're using Scalaz, scalaz.effect.IO is a decent catch-all for effects.)
type Effect = () => Unit
def fakeSave: Effect = () => println("Fake save")

Next, Scala provides a type Either[A,B] which can be inhabited by either Left[A] or Right[B] and we'll use this to return either an effect or failure reason.
Putting it all together we have this:
def example(u: String): Either[String, Effect] =
  usernameV(u) match {
    case Valid   => Right(fakeSave)
    case Invalid => Left("Invalid username.")
  }

Req 2. Explain why input is invalid.

We need to specify a reason for failure now.
We still have two cases: valid with no error msg, invalid with an error msg. We'll simply add an error message to the Invalid case.
case class Invalid(e: String) extends Validity

Then we make it compile and return the invalidity result in our example.
 val usernameV: Validator[String] = {
   val p = "^[a-z]+$".r.pattern
-  s => if (p.matcher(s).matches) Valid else Invalid
+  s => if (p.matcher(s).matches) Valid else
+         Invalid("Username must be 1 or more lowercase letters.")
 def example(u: String): Either[String, Effect] =
   usernameV(u) match {
     case Valid      => Right(fakeSave)
-    case Invalid    => Left("Invalid username.")
+    case Invalid(e) => Left(e)

Req 3. Share reusable rules between validators.

Imagine our system has 50 data validation rules, 80% reject empty strings, 30% reject whitespace characters, 90% have maximum string lengths. We like reuse and D.R.Y. and all that; this requirement addresses that by demanding that we break rules into smaller constituents and reuse them.

We want to write small, independent units and join them into larger things. This leads us to an important and interesting topic: composability.

I want to suggest something that I know will cause many people to cringe – but hear me out – let's look to math. Remember basic arithmetic from ye olde youth?
8 = 5 + 3
8 = 5 + 1 + 2
Addition. This is great! It's building something from smaller parts. This seems like a perfect starting point for composition to me. There's a certain beauty and elegance to math, and its capability is proven; what better inspiration!

Let's look at some basic properties of addition.

Property #1:
8 = 8 + 0
8 = 0 + 8
Add 0 to any number and you get that number back unchanged.
Property #2:
8 = (1 + 3) + 4
8 = 1 + (3 + 4)
8 = 1 + 3 + 4
Parentheses don't matter. Add or remove them without changing the result.
Property #3:
I'll also mention that in primary school, you had full confidence in this:
number + number = number
It may seem silly to mention, but imagine if your primary school teacher told you that
number + number = number | null | InvalidArgumentException
+ has other properties too, like 2+6=6+2, but we don't need that for our validation scenario. The above three provide enough benefit for what we need.

You might wonder why I'm describing these properties. Why should you care? Well as programmers you gain much by writing code with similar properties. Consider...
  • You know you don't have to remember to check for nulls, catch any exceptions, worry about our internal AddService™ being online.
  • As long as the overall order of elements is preserved, you needn't care about the order in which groups are composed. i.e. we know that a+b+c+d+e will safely yield the same result if we batch up execution of (a+b) and (c+d+e) then add their results last. And parenthesis support is already provided by the programming language.
  • If you're ever forced into composition by some code path, you can opt out by specifying the 0, because we know that 0+x and x+0 are the same as x. No need to overload methods or whatnot.
Simple, right? Well have you ever heard the term “monoid” thrown around? (Not “monad”.) Guess what? We've just discussed all that makes a monoid what it is, and you learned it as a young child.
A monoid is a binary operation (of shape A + A = A) that has 3 properties:
  • Identity: The 0 is what we call an identity element. 0+x = x = x+0
  • Associativity: That's the ability to add/remove parentheses without changing the result.
  • Closure: Always returns a result of the same type, no RuntimeExceptions, no nulls.
If jargon from abstract algebra intimidates you, know that it's mostly just terminology. You already know the concepts and have for years. The knowledge is very accessible and it's incredibly useful to be able to identify these kinds of properties about your code.

Speaking of code, let's implement this new requirement as a monoid. We'll add Validator.+ for composition, ensuring it preserves the associativity property, and Validator.id for the identity (also called zero).
(Note: If using Scalaz, Algebird or similar, you can explicitly declare your code to be a monoid to get a bunch of useful monoid-related features for free.)
case class Validator[A](f: A => Validity) {
  @inline final def apply(a: A) = f(a)

  def +(v: Validator[A]) = Validator[A](a =>
    apply(a) match {
      case Valid         => v(a)
      case e@ Invalid(_) => e
    })
}

object Validator {
  def id[A] = Validator[A](_ => Valid)
}
The difficulty of building human-language sentences scales with expressiveness. For our demo it's enough to simply have validators contain error message clauses like “is empty”, “must be lowercase” and just tack the subject on later.

First we define some helper methods pred and regex, then use them to create our validators:
object Validator {
  def pred[A](f: A => Boolean, err: => String) =
    Validator[A](a => if (f(a)) Valid else Invalid(err))

  def regex(r: java.util.regex.Pattern, err: => String) =
    pred[String](a => r.matcher(a).matches, err)
}

val nonEmpty  = Validator.pred[String](_.nonEmpty, "must not be empty")
val lowercase = Validator.regex("^[a-z]*$".r.pattern, "must be lowercase")

val usernameV = nonEmpty + lowercase
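To see the composition in action, here's a self-contained sketch (it inlines the Validity ADT from earlier in the post and uses a `forall(_.isLower)` predicate as a stand-in for the regex version):

```scala
// Validity ADT, as introduced earlier in the post.
sealed trait Validity
case object Valid extends Validity
case class Invalid(err: String) extends Validity

case class Validator[A](f: A => Validity) {
  def apply(a: A) = f(a)
  // Monoid composition: first failing rule wins.
  def +(v: Validator[A]) = Validator[A](a => apply(a) match {
    case Valid          => v(a)
    case e @ Invalid(_) => e
  })
}

def pred[A](f: A => Boolean, err: => String) =
  Validator[A](a => if (f(a)) Valid else Invalid(err))

val nonEmpty  = pred[String](_.nonEmpty, "must not be empty")
val lowercase = pred[String](_.forall(_.isLower), "must be lowercase")

val usernameV = nonEmpty + lowercase

assert(usernameV("")    == Invalid("must not be empty"))
assert(usernameV("Bob") == Invalid("must be lowercase"))
assert(usernameV("bob") == Valid)
```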
Then we gaffer-tape our subject on to our error messages before displaying them and we're done.
def buildErrorMessage(field: String, err: String) = s"$field $err"

def example(u: String): Either[String, Effect] =
  usernameV(u) match {
    case Valid      => Right(fakeSave)
    case Invalid(e) => Left(buildErrorMessage("Username", e))
  }

Req 4. Explain all the reasons for rejection.

Users are complaining that they get an error message, fix their data accordingly only to have it then rejected for a different reason. They again fix their data and it is rejected again for yet another reason. It would be better to inform the user of all the things left to fix so they can amend their data in one shot.
For example, an error message could look like “Username 1) must be less than 20 chars, 2) must contain at least one number.”

In other words there can be 1 or more reasons for invalidity now. Ok, we'll amend Invalid appropriately...
case class Invalid(e1: String, en: List[String]) extends Validity

Then we just make the compiler happy...
 case class Validator[A](f: A => Validity) {
   @inline final def apply(a: A) = f(a)
   def +(v: Validator[A]) = Validator[A](a =>
-    apply(a) match {
-      case Valid         => v(a)
-      case e@ Invalid(_) => e
-    })
+    (apply(a), v(a)) match {
+      case (Valid          , Valid          ) => Valid
+      case (Valid          , e@ Invalid(_,_)) => e
+      case (e@ Invalid(_,_), Valid          ) => e
+      case (Invalid(e1,en) , Invalid(e2,em) ) => Invalid(e1, en ::: e2 :: em)
+    })
 object Validator {
   def pred[A](f: A => Boolean, err: => String) =
-    Validator[A](a => if (f(a)) Valid else Invalid(err))
+    Validator[A](a => if (f(a)) Valid else Invalid(err, Nil))
-def buildErrorMessage(field: String, err: String) = s"$field $err"
+def buildErrorMessage(field: String, h: String, t: List[String]): String = t match {
+  case Nil => s"$field $h"
+  case _   => (h :: t).zipWithIndex.map{case (e,i) => s"${i+1}) $e"}.mkString(s"$field ", ", ", ".")
+}
 def example(u: String): Either[String, Effect] =
   usernameV(u) match {
     case Valid         => Right(fakeSave)
-    case Invalid(e)    => Left(buildErrorMessage("Username", e))
+    case Invalid(h, t) => Left(buildErrorMessage("Username", h, t))
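Putting the diff above together, here's a runnable sketch. The `under20` and `hasDigit` validators are hypothetical rules invented to match the example error message from the requirement:

```scala
sealed trait Validity
case object Valid extends Validity
// Invalid now carries one error plus any number of extras.
case class Invalid(e1: String, en: List[String]) extends Validity

case class Validator[A](f: A => Validity) {
  def apply(a: A) = f(a)
  // + now accumulates errors from both sides.
  def +(v: Validator[A]) = Validator[A](a => (apply(a), v(a)) match {
    case (Valid, Valid)                     => Valid
    case (Valid, e @ Invalid(_, _))         => e
    case (e @ Invalid(_, _), Valid)         => e
    case (Invalid(e1, en), Invalid(e2, em)) => Invalid(e1, en ::: e2 :: em)
  })
}

def pred[A](f: A => Boolean, err: => String) =
  Validator[A](a => if (f(a)) Valid else Invalid(err, Nil))

def buildErrorMessage(field: String, h: String, t: List[String]): String = t match {
  case Nil => s"$field $h"
  case _   => (h :: t).zipWithIndex.map { case (e, i) => s"${i + 1}) $e" }
                .mkString(s"$field ", ", ", ".")
}

val under20  = pred[String](_.length < 20, "must be less than 20 chars")
val hasDigit = pred[String](_.exists(_.isDigit), "must contain at least one number")

val Invalid(h, t) = (under20 + hasDigit)("a" * 25)
assert(buildErrorMessage("Username", h, t) ==
  "Username 1) must be less than 20 chars, 2) must contain at least one number.")
```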
(Note: If you're using Scalaz, NonEmptyList[A] is a better replacement for the A-plus-List[A] pair I've used in Invalid. The same thing can also be achieved by OneAnd[List, A]. In fact OneAnd is a good way to have compiler-enforced non-emptiness.)

Req 5. Omit mutually-exclusive or redundant error messages.

Take this error message: “Your name must 1) include a given name, 2) include a surname, 3) not be empty”. If the user forgot to enter their name, you just want to say “hey, you forgot to enter your name”, not bombard the user with details about potentially invalid names.

What does this mean? It means one rule is unnecessary if another rule fails. What we're really talking about here is the means by which rules are composed. Let's just add another composition method. We talked about the + operation in math already; well, math also provides a multiplication operation. Look at an expression like 6 + 14 + (7 * 8): two types of composition, with parentheses explicitly clarifying our intent. That's perfectly expressive to me and it solves our new requirement with simplicity and minimal dev work. As a reminder that we can borrow from math without emulating it verbatim, instead of a symbol let's give this operation a wordy name like andIfSuccessful, so that we can say nonEmpty andIfSuccessful containsNumber to indicate a validator that will only check for numbers if the data isn't empty.

Just like these express different intents and yield different results
number = 4 * (2 + 10)
number = (4 * 2) + 10
So too can
rule = nonEmpty andIfSuccessful (containsNumber and isUnique)
rule = (nonEmpty andIfSuccessful containsNumber) and isUnique
Or if you don't mind custom operators
rule = nonEmpty >> (containsNumber + isUnique)
rule = (nonEmpty >> containsNumber) + isUnique

To implement this new requirement we add a single method to Validator:
def andIfSuccessful(v: Validator[A]) = Validator[A](a =>
  apply(a) match {
    case Valid           => v(a)
    case e@ Invalid(_,_) => e
  })
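To see the two composition modes side by side, here's a sketch reusing the multi-error Validator, with hypothetical nonEmpty/containsNumber rules:

```scala
sealed trait Validity
case object Valid extends Validity
case class Invalid(e1: String, en: List[String]) extends Validity

case class Validator[A](f: A => Validity) {
  def apply(a: A) = f(a)
  // +: run both rules and accumulate every failure.
  def +(v: Validator[A]) = Validator[A](a => (apply(a), v(a)) match {
    case (Valid, r)                         => r
    case (l, Valid)                         => l
    case (Invalid(e1, en), Invalid(e2, em)) => Invalid(e1, en ::: e2 :: em)
  })
  // andIfSuccessful: only run the second rule if the first succeeds.
  def andIfSuccessful(v: Validator[A]) = Validator[A](a => apply(a) match {
    case Valid             => v(a)
    case e @ Invalid(_, _) => e
  })
}

def pred[A](f: A => Boolean, err: String) =
  Validator[A](a => if (f(a)) Valid else Invalid(err, Nil))

val nonEmpty       = pred[String](_.nonEmpty, "must not be empty")
val containsNumber = pred[String](_.exists(_.isDigit), "must contain a number")

// + reports both failures; andIfSuccessful stops at the first.
assert((nonEmpty + containsNumber)("") ==
  Invalid("must not be empty", List("must contain a number")))
assert((nonEmpty andIfSuccessful containsNumber)("") ==
  Invalid("must not be empty", Nil))
```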


And we're done.
It's not how I would've approached code years back in my OO/Java era, nor is it like any of the code I came across written by others in that job. As an experiment I started fulfilling these requirements in Java the way old me used to code and there was a loooot of wasted code between requirements. I'd get all annoyed at each new step, so much so that I didn't even bother finishing. On the contrary, I enjoyed writing the FP.

Right, what conclusions can we draw?

FP is simple. Each validation is a single function in a wrapper.
FP is flexible. Logic is reusable and can be assembled into complex expressions easily.
FP is easily maintainable & modifiable. It has less structure, fewer structural dependencies, and less code, plus the compiler's got your back.
FP is easy on the author. There was next to no rewriting or throw-away of code between requirements, and each new requirement was easy to implement.

I hope this proves an effective concrete example of FP for programmers of different backgrounds. I also hope this enables you to write more reliable software and have a happier time doing it.

Go forth and function.

Monday, 9 June 2014

A Year of Functional Programming

It's been a year since I first came across the concept of functional programming. To say it's changed my life is an understatement. This is a reflection on that journey.

Warning: I use the term FP quite loosely throughout this article.

Where I Was

I've been coding since the age of 8, so 26 years now. I started with BASIC & different types of assembly then moved on to C, C++, PHP and Perl (those were different times, maaaan ☮), Java, JavaScript, Ruby. That's the brief gist of it. Basically: a lot of time on the clock and absolutely no exposure to FP concepts or languages. I thought functional programming just meant recursive functions (ooooo big deal right?). I really came in blind.

How It Started: Scala

Last year, I wanted the speed and static checking (note: not “types”) of Java with the conciseness and flexibility of Ruby. I came across Scala, skimmed a little and was impressed. I bought a copy of Scala For The Impatient and just ate it for breakfast. I read the entire thing in 2 or 3 days, jotted down everything useful and then just started coding. It was awesome! At first I was just coding the same way I would Java with less than half the lines of code. It is a very efficient Java.

Exposure: Haskell

Lurking around the Scala community, I came across a joke. Someone said “Scala is a gateway drug to Haskell.” I found that amusing, although not for the reasons that the author intended. Haskell? Isn't that some toy university language? An experiment or something. Is it even still alive? Scala's awesome and so powerful, why would that lead to Haskell? How.... intriguing. Inexplicably it piqued my interest and really stuck with me. Later I decided to look it up and yes, it sure was alive and very active. I was shocked to discover that it compiles to machine code (binary) and is comparable in speed to C/C++. What?! It seems idiotic now but a year ago I thought it was some interpreted equation solver. I'm not alone in that ignorance, sadly; talking about it to some mates over lunch last year and a friend incredulously burst out laughing, “Haskell?” as if I was trying to tell him my dishwasher had an impressive static typing system. It saddens me to realise how in the dark I was, and how many people still are. Haskell is pretty frikken awesome! True to the gateway drug prophecy, I do now look to Haskell as an improvement to using Scala. But let's get back on track.

Exposure: Scalaz

I also started seeing contentious discussion of some library called Scalaz. Curious, I had a look at the code to see what people were on about, and didn't understand it at all. I'd see classes like Blah[F[_], A, B], methods with confusing names that take params like G[_], A → C, F[X → G[Y]], implementations like F.boo_(f(x))(g)(x), and I'd just think “What the hell is this? How is this useful?”. I was used to methods that did something pertaining to a goal in its domain. This Scalaz code was very alien to me and yet, very intriguing. Some obviously-smart person spent time making the alphabet soup permutations, why?

I've since discovered the answer to that question, and I never could've imagined the amount of benefit it would yield. Instead of methods with domain-specific meaning, I now see functions for morphisms, in accordance with relevant laws. Simply put: changing the shape or content of types. No mention of any kind of problem-domain; applicable to all problem-domains! It's been surprising over the last year to discover just how applicable this is. This kind of abstraction is prolific: it's in your code right now, disguised, and intertwined with your business logic. Without knowledge of these kinds of abstractions (and the awareness that a type-component level of abstraction is indeed possible) you are doomed to reinvent the wheel for all time and not even realise it. Identifying and using these abstractions, your code becomes more reusable, simple, readable, flexible, and testable. When you learn to recognise it, it's breathtaking.

FP: The Basics

Now that I'd been exposed to FP I started actively learning about it. At first I learned about referential transparency, purity and side-effects. I nodded agreement but had major reservations about the feasibility of adhering to such principles, and so at first I wasn't really sold on it. Or rather, I was hesitant. I may have been guilty of mumbling sentences involving the term “real-world”. Next came immutability. Now I'm a guy who used to use final religiously in Java, const on everything in C++ back in the day, and here FP is advocating for data immutability. Not just religiously advocating but providing real, elegant solutions to issues that you encounter using fully immutable data structures. Wow! So with immutability, composability, lenses it had its hooks in me.

Next came advocacy of more expressive typing and (G)ADTs. That appealed in theory too, and again I was hesitant about its feasibility. Once I experimentally applied it to some parts of my project, I was blown away by how well it worked. That became the gateway into thinking of code/types algebraically, which led to...

FP: Maths

I loved maths back in school and always found it easy. Reading FP literature I started coming across lots of math and at first thought “great! I'm awesome at maths!” but then, trying to make sense of some FP math stuff, I'd find myself spending hours clicking link after link, realising that I wasn't getting it and, in many cases, still couldn't even make sense of the notation. It became daunting. Even depressing. Frequently demotivating.

The good news is that everything you need is out there; you just have to be prepared to learn more than you think you need. I persisted, stopped viewing it as an annoying bridge and started treating it as a fun subject in its own right and, before long, things made sense again. It opens new doors when you learn it.

Example: I had a browser tab (about co-Yoneda lemma) open for 3 months because I couldn't make sense of it. It took months (granted not everyday) of trying then confusion then tangents to understand background and whatever it was that threw me off. Once I learned that final piece of background info, I went from understanding only the first 5% to 100%. It was a great feeling.

Feeling Intimidated

Looking back there were times when I found learning FP quite intimidating. When I'm in/around/reading conversations between experienced FP'ers quite often I've seriously felt like a moron. I started wondering if I gave my head a good slap, would a potato fall out. It can be intimidating when you're not used to it. But really, my advice to you, Reader, is that everyone's nice and happy to help when you're confused. I have a problem asking for help but I've seen everyone else do it and be received kindly... then I swoop in and absorb all the knowledge, hehe.

It's a mindset change. I wish I'd known this earlier as it would've saved me frustration and doubt, but you kind of need to unlearn what you think you know about coding, then go back to the basics. Imagine you've driven trains for decades, and spontaneously decide you want to be a pilot. No, you can't just read a plane manual in the morning and be in Tokyo in the afternoon. No, if you grab a beer with experienced pilots you won't be able to talk about aviation at their level. It's normal, right? Be patient, learn the basics, have fun, you'll get there.

On that note, I highly recommend Functional Programming in Scala, it's a phenomenal book. It helped me wade my way from total confusion to comfortable comprehension on a large number of FP topics with which I was struggling trying to learn from blogs.

Realisation: Abstractions

Recently I looked at some code I wrote 8 months ago and was shocked! I looked at one file written in “good OO-style”, lots of inheritance and code reuse, and just thought “this is just a monoid and a bunch of crap because I didn't realise this is a monoid” so I rewrote the entire thing to about a third of the code size and ended up with double the flexibility. Shortly after I saw another file and this time thought “these are all just endofunctors,” and lo and behold, rewrote it to about a third of the code size and the final product being both easier to use and more powerful.

I now see more abstractions than I used to. Amazingly, I'm also starting to see similar abstractions outside of code, like in UI design, in (software) requirements. It's brilliant! If you're not on-board but aspire to write “DRY” code, you will love this.

Realisation: Confidence. Types vs Tests

I require a degree of confidence in my code/system, that varies with the project. I do whatever must be done to achieve that. In Ruby, that often meant testing it from every angle imaginable, which cost me significant effort and negated the benefit of the language itself being concise. In Java too, I felt the need to test rigorously.

At first I was the same in Scala, but since learning more FP, I test way less and have more confidence. Why? The static type system. By combining lower-level abstractions, an expressive type system, efficient and immutable data structures, and the laws of parametricity, in most cases when something compiles, it works. Simple as that. There are hard proofs available to you, I'm not talking about fuzzy feelings here. I didn't have much respect for static types coming from Java because it's hard to get much mileage out of it (even in Java 8 – switch over enum needs a default block? Argh fuck off! Gee maybe maybe all interfaces should have a catch-all method too then. That really boiled my blood the other day. Sorry-), anyway: Java as a static typing system is like an abusive alcoholic as a parent. They may put food on the table and clothes your back, but that's a far cry from a good parent. (And you'll become damaged.) Scala on the other hand teaches you to trust again. Trust. Trust in the compiler. I've come to learn that when you trust the compiler and can express yourself appropriately, entire classes of problems go away, and with it the need for testing. It's joyous.

Sadly though, eventually you get to a point where Scala starts to struggle. It gets stupid, it can't work out what you mean, what you're saying, you have to gently hold its hand and coax it with explicit type declarations or tricks with type variance or rank-n types. Once you get to that level you start to feel like you've outgrown Scala and now need a big boy's compiler which can lead to habitual grumbling and regular reading about Haskell, Idris, Agda, Coq, et al.

However, when you do need tests, you can write a single test for a bunch of functions using a single expression. How? Laws. Properties. Don't know what I mean? Pushing an item onto a stack should always increase its size by 1; popping it should reduce the size by 1, return the item pushed, and leave a stack equivalent to what you started with. Using libraries like ScalaCheck, that turns into a single expression like pop(push(start, item)) == (start, item), which is essentially all you need to write; ScalaCheck will generate the test data for you.
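The stack law above, spelled out as a runnable sketch. ScalaCheck's forAll would generate the data for you; here a hand-rolled random-input loop stands in for it so the example needs no dependencies:

```scala
import scala.util.Random

// A list-backed stack. push/pop are hypothetical helpers for this sketch.
def push[A](stack: List[A], item: A): List[A] = item :: stack
def pop[A](stack: List[A]): (List[A], A)      = (stack.tail, stack.head)

// The law: for any stack s and item x, pop(push(s, x)) == (s, x).
def law(start: List[Int], item: Int): Boolean =
  pop(push(start, item)) == (start, item)

// Stand-in for ScalaCheck's generated data: 100 random cases.
val rnd = new Random(42)
val ok = (1 to 100).forall { _ =>
  val start = List.fill(rnd.nextInt(10))(rnd.nextInt())
  law(start, rnd.nextInt())
}
assert(ok)
```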

Where Next?

What does the future hold for me? Well, I could never go back to a dynamically-typed language again.
I will stick with Scala as I've invested a lot in it and it's still the best language I know well. I'd like to get more hands-on experience with Haskell; I don't know its trade-offs that well but its type system seems angelic. Got my eye on Idris, too.


I used to get excited discovering new libraries. I'd always think “Great! I wonder what this will allow me to do.” Well now I feel that way about research & academic papers. They are the same thing except smarter, more lasting, more dependable, and they yield more flexibility. It's awesome and I've got decades of catching up to do! Over the next year I'll definitely spend a lot of time learning more FP and comp-sci theory. I'd also like to be able to understand most Haskell blogs I come across. They promise very intelligent constructs (which aren't specific to Haskell) but the typing usually gets a bit too dense; it'd be nice to be able to read those articles with ease.

Don't fall for the industry dogma that academia isn't applicable to the real-world. What a load of horseshit that lie is. It does apply to the real-world, it's here to help you achieve your goals, it will save you significant time and effort, even with the initial learning overhead considered. Don't say you don't have time to do things in half the time. If you're always busy and your business is a super fast-paced agile scrum need-it-yesterday kind of place, well I know you don't have time, but what I'm offering is this: say you just need to get something out the door and can do it quickly and messily in 1000 lines in 6 hours with 10 bugs and 10 “abilities”; if you spend a bit of your own time learning, you could perform the same task in 400 lines in 4 hours with maybe 1 bug and 20 “abilities”. You've just saved 2 hours up-front, not to mention days of savings when adding new features, fixing bugs, etc. That's applicable to you, the “real-world” and the industry. I've spent years in the industry and not just as a coder, and I wish I'd known about this stuff back then because it would've saved me so much time, effort and stress. There seems to be this odd disdain for academia throughout the industry. Reject it. It's an ignorant justification of laziness and short-sightedness. It's false. I encourage you to take the leap.

Saturday, 26 October 2013

Scala: Methods vs Functions

It's a bright, windy Saturday morning. Sipping a nice, warm coffee I find myself musing over the performance implications of subtleties between methods and functions in Scala. Functions are first-class citizens in Scala and represented internally as instances of FunctionN (where N is the arity); effectively, an interface with an apply method. Methods are methods are methods; they are directly invocable in JVM-land.

So what differences are there that could affect performance? I can think of:

  • Because functions are instances of generic traits, in JVM-land their type parameters will be erased to Object. I presume this means boxing for your primitives like int and long (unless scalac has any tricks up its sleeve like @specialized)
  • When passing a method to a higher-order function, the method will need to be boxed into a Function. For example, given
    def method(s: String) = s.length
    a call like xs.map(method) requires an instance of Function1, so I (again, presume) the compiler generates a synthetic instance of Function1 as a proxy to the target method.
  • The JVM can invoke methods directly. To invoke a function, it will have to first load up the Function object and then invoke its apply method. That's an extra hop.
  • Probably more.
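A quick illustration of the distinction (the names are hypothetical):

```scala
def method(s: String): Int = s.length  // a JVM method, directly invocable

val function: String => Int = _.length // an instance of Function1[String, Int]

// Passing the method where a function is expected triggers eta-expansion:
// the compiler wraps it in a synthetic Function1.
val viaMethod   = List("a", "bc").map(method)
val viaFunction = List("a", "bc").map(function)

assert(viaMethod == List(1, 2))
assert(viaFunction == List(1, 2))
```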

So those are some of the differences between functions and methods in Scala. Let's see how they perform. There's an awesome little micro-benchmarking tool called ScalaMeter. It takes about 2 min to get started with it. I decided to test 1,000,000 reps along three axes:

  1. Direct invocation vs passing to someone else
  2. Primitive vs Object argument
  3. Primitive vs Object result

The Results

          Fn improvement over Method
Direct    int → int    -6.32%
          int → str    -9.53%
          str → int    -4.39%
          str → str    -1.09%
As Arg    int → int     0.31%
          int → str     0.94%
          str → int    -0.22%
          str → str    -0.24%
What can we see?
  • It seems that there is a boxing cost for functions' i/o.
  • It seems that there is a slight cost when invoking functions directly.
  • It seems that there is no real cost boxing methods into functions.


If you're like one-week-ago-me (pft, he's an idiot!), you might think you're helping the compiler out by writing functions instead of methods when their only use is to be passed around. Well, it doesn't appear to be so.
Just use methods and let your mind (if you're lucky enough to have its cooperation) worry about and solve other things.

Code and raw results available here:

Thursday, 5 September 2013

Keyed Lenses

TL;DR: Lenses are cool. I've come up with keyed lenses which I find helpful. Hopefully you will too. Do you? Have I just reinvented the wheel in ignorance?

Annual blog post time. So this year I've been imploding and exploding with enthusiasm over functional programming. Already I've found some FP perspectives and strategies mind-boggle-blow-blast-ingly effective, and beautiful. My day project is in a significantly better state owing to FP. If you're not onboard I recommend reading Learn You a Haskell for Great Good! and Functional Programming in Scala.

(Btw: big thanks to NICTA, specifically Tony Morris & Mark Hibberd who held 2 free FP courses and gracefully tolerated numerous dumb-shit moments from me. I think it's a semi-annual recurring thing so keep an eye out on scala-functional for the next one if you're interested and in Australia.)

What is a lens? If you don't know what a lens is, it's basically a decoupled getter/setter that can be composed with other lenses, so that the depth and structure of data can be hidden. In traditional OO you might not see the merits but when your data structures are all immutable, the benefit is immense. There are plenty of good resources online to learn more, such as this, this and this.
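A minimal sketch of the idea (not Scalaz's actual encoding): a lens pairs a getter with a copy-on-write setter, and composes so callers never see the nesting. All names here are illustrative:

```scala
case class Lens[A, B](get: A => B, set: (A, B) => A) {
  // Compose: a lens from A to B and a lens from B to C give a lens from A to C.
  def andThen[C](inner: Lens[B, C]): Lens[A, C] =
    Lens(a => inner.get(get(a)), (a, c) => set(a, inner.set(get(a), c)))
}

case class Address(city: String)
case class Person(name: String, address: Address)

val addressL    = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))
val cityL       = Lens[Address, String](_.city, (a, c) => a.copy(city = c))
val personCityL = addressL andThen cityL

val p = Person("Acle", Address("Milton Keynes"))
assert(personCityL.get(p) == "Milton Keynes")
// set returns a new Person; the original is untouched.
assert(personCityL.set(p, "London") == Person("Acle", Address("London")))
```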


What I'm calling a KeyedLens, is a lens that points to a value in a composite value such that a key is required. A Map is an obvious example.

(NOTE: Scalaz has some basic support for this -- I am aware of it -- but I find that it doesn't meet my needs and/or suit the way I try to use it. I find that the call-site syntax becomes long and nasty, it doesn't compose well, and it creates new lenses every time it's used, which is inefficient.)

Let's start with some toy data.
I'm going to use Scala and the awesome Scalaz library, and there's a link to the KeyedLens source code at the end of the post. (If you don't know Scala, just imagine it's pseudo-code. The concepts translate into almost anything.)

Let's model a band from a guitarist's point of view.

Here we're modelling the mighty band Tesseract (think Pink Floyd + Meshuggah).
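The original code listing appears to be missing here; based on the lens signatures and REPL output later in the post, the toy data was presumably something like this (a reconstruction, not the original code):

```scala
// Reconstructed data model: field names are guesses from the output below.
case class Person(name: String)
case class Guitar(strings: Int, tuning: String, gauges: List[Double])
case class Band(name: String, guitarists: Map[Person, Guitar], others: Set[Person])

val acle  = Person("Acle")
val james = Person("James")

val band = Band(
  "Tesseract",
  Map(
    acle  -> Guitar(7, "BEADGBE", List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.059)),
    james -> Guitar(6, "EADGBE",  List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))),
  Set(Person("Jay"), Person("Amos"), Person("Ashe")))

assert(band.guitarists(acle).tuning == "BEADGBE")
```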
There are two places where I'm going to use a keyed lens.

  1. To access the guitar of a given band member. (guitarL)
  2. To access the string gauge of a given guitar. (stringGaugeL)

Here are the lens definitions:

This gives us the following lenses:

guitarTuningL       LensFamily[Guitar, Guitar, String, String]
stringGaugeL        LensFamily[(Guitar,Int), Guitar, Double, Double]
bandNameL           LensFamily[Band, Band, String, String]
guitarL             LensFamily[(Band,Person), Band, Guitar, Guitar]
guitaristsTuningL   LensFamily[(Band,Person), Band, String, String]
guitaristsGaugeL    LensFamily[(Band,(Person,Int)), Band, Double, Double]
Notice the keys always get propagated to the left.
Now let's see them in action. This is what appeals to me the most.

Usage. The Fun Part.

Get with one key. What is Acle's guitar tuned to?

scala> guitaristsTuningL.get(band, acle)
res0: String = BEADGBE

Get with two keys. What is the gauge of Acle's 7th string?

scala> guitaristsGaugeL.get(band, (acle, 7))
res1: Double = 0.059

Set with one key. I want to change Acle's tuning.

scala> guitaristsTuningL.set((band, acle), "G#FA#D#FA#D#")

res2: Band = Band(Tesseract,Map(Person(Acle) -> Guitar(7,G#FA#D#FA#D#,List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.059)), Person(James) -> Guitar(6,EADGBE,List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))),Set(Person(Jay), Person(Amos), Person(Ashe)))

Set with two keys. I want to lower the gauge of Acle's 7th string.

scala> guitaristsGaugeL.set((band, (acle, 7)), 0.0666666)

res3: Band = Band(Tesseract,Map(Person(Acle) -> Guitar(7,BEADGBE,List(0.011, 0.014, 0.018, 0.028, 0.038, 0.049, 0.0666666)), Person(James) -> Guitar(6,EADGBE,List(0.01, 0.013, 0.017, 0.026, 0.036, 0.046))),Set(Person(Jay), Person(Amos), Person(Ashe)))

[If you're new to lenses, keep in mind that all the data here is immutable. Objects are copied and reused.]

  • I really like the brevity of these lines.
  • I like that you get a single lens that requires a key be provided.
  • I don't like the (A, (K1,K2)) type of guitaristsGaugeL, which is easily changeable into (A, K1, K2), but then what happens when it's further recomposed? I'd probably need methods like compose2, compose3, compose4, etc. Will think later.
What do you think? Does anything like this already exist?

Source code for KeyedLenses.scala is here.