During this winter break I decided to jump into developing an alternative Scala wrapper for the Java framework Vaadin. Many parts of the API deal with measurements like *pixels* or *cm*, so a good design for handling measurements is crucial. In this post we first develop an API on the term level and then shift parts of the validation to the type level using a *typeclass* encoding.

Let’s first consider how the API should be used.

The small language we are trying to implement consists of *measures* and their unit suffixes. Let’s inspect the design of the already existing wrapper library scaladin:

In Vaadin, units are represented as an enum, but the Java names are rather clumsy, so new enums are introduced, reusing the integer values of the Java enums. The implicit class `MeasureExtent` enables suffix use of unit extensions to create instances of `Measure`. `Measure` then represents the compound of a length value and its unit.
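The scaladin sources are not reproduced here; a minimal sketch of the design just described might look as follows (the unit names and integer ids are assumptions for illustration):

```scala
// Mirrors the Vaadin unit enum, reusing its integer values
// (the concrete ids here are assumptions).
object Units extends Enumeration {
  val px = Value(0, "px")
  val em = Value(3, "em")
  val mm = Value(4, "mm")
  val pt = Value(8, "pt")
}

// A measure is the compound of a length value and its unit.
case class Measure(value: Double, unit: Units.Value) {
  override def toString = s"$value$unit"
}

// Enables suffix syntax such as `14.pt` or `1.5.em` for any numeric type.
implicit class MeasureExtent[N](value: N)(implicit num: Numeric[N]) {
  private def n = num.toDouble(value)
  def px: Measure = Measure(n, Units.px)
  def em: Measure = Measure(n, Units.em)
  def mm: Measure = Measure(n, Units.mm)
  def pt: Measure = Measure(n, Units.pt)
}
```

With this in scope, `14.pt` yields `Measure(14.0, Units.pt)`.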

## Some Simple Math

Up to now measures can be created using suffix syntax, but doing math with them requires manual destructuring. So let’s add a simple addition method `+` on class `Measure`.

You’ll notice that the implementation of `+` includes a call to `conv`, which is not yet implemented. Since we cannot add apples and oranges, we need to convert between the units before doing the math. Following the CSS specification, conversion can be performed on absolute values. Implementing the conversion naively, we would end up with one case per pair of units, i.e. on the order of k^2 cases to handle. In our example there are k=3 absolute units (`em` is a relative unit), resulting in 9 conversions (including the trivial identities) to address.

This quadratic blowup can easily be avoided by converting through some intermediate base unit, reducing the implementation cost to 2*k conversion functions.

The following code directly results from choosing `pt` as the base unit:
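(The original listing is not available here; this is a sketch assuming the CSS conversion factors 72 pt per inch, 25.4 mm per inch, and 0.75 pt per px.)

```scala
object Units extends Enumeration {
  val px = Value(0, "px")
  val em = Value(3, "em")
  val mm = Value(4, "mm")
  val pt = Value(8, "pt")
}

case class Measure(value: Double, unit: Units.Value) {
  // Convert `that` to our own unit, then add the plain numbers.
  def +(that: Measure): Measure =
    Measure(value + conv(that.unit, unit)(that.value), unit)

  // k conversions to the base unit pt and k back, instead of one per pair;
  // em is relative and deliberately has no entry.
  private val toBase: Map[Units.Value, Double => Double] = Map(
    Units.pt -> ((v: Double) => v),
    Units.px -> ((v: Double) => v * 0.75),       // 96px per inch, 72pt per inch
    Units.mm -> ((v: Double) => v * 72.0 / 25.4) // 25.4mm per inch
  )
  private val fromBase: Map[Units.Value, Double => Double] = Map(
    Units.pt -> ((v: Double) => v),
    Units.px -> ((v: Double) => v / 0.75),
    Units.mm -> ((v: Double) => v * 25.4 / 72.0)
  )

  private def conv(from: Units.Value, to: Units.Value): Double => Double =
    if (from == to) identity // trivial case: same source and target unit
    else toBase(from) andThen fromBase(to)
}

implicit class MeasureExtent[N](value: N)(implicit num: Numeric[N]) {
  private def n = num.toDouble(value)
  def px: Measure = Measure(n, Units.px)
  def em: Measure = Measure(n, Units.em)
  def mm: Measure = Measure(n, Units.mm)
  def pt: Measure = Measure(n, Units.pt)
}
```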

The implementation of `conv` first handles the trivial case where the source is the same unit as the target, and then tries to obtain a conversion function from the source to the base and from the base to the target.
While this works pretty well, we will run into conversion errors at runtime:
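For example (a compact sketch along the lines of the previous listing; `em` deliberately has no conversion entry, so the map lookup throws):

```scala
import scala.util.Try

object Units extends Enumeration {
  val em = Value(3, "em")
  val pt = Value(8, "pt")
}

case class Measure(value: Double, unit: Units.Value) {
  // `em` is relative and therefore has no entry in either map
  private val toBase: Map[Units.Value, Double => Double] = Map(Units.pt -> ((v: Double) => v))
  private val fromBase: Map[Units.Value, Double => Double] = Map(Units.pt -> ((v: Double) => v))

  private def conv(from: Units.Value, to: Units.Value): Double => Double =
    if (from == to) identity
    else toBase(from) andThen fromBase(to) // NoSuchElementException for em

  def +(that: Measure): Measure =
    Measure(value + conv(that.unit, unit)(that.value), unit)
}

// compiles fine, but the conversion lookup blows up when evaluated
val result = Try(Measure(14, Units.pt) + Measure(1, Units.em))
```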

The second part of this post will explore how to shift parts of the implementation to the typelevel in order to fail earlier.

## From Term To TypeLevel

The basic idea is to move the selection of the *conversion function* to the typelevel. If no conversion function can be found a type error will be raised.

### Some Preparations

In order to be able to use the information at the type level, some preparations need to be made. First of all, we need to add a type parameter to `Measure`:
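A minimal sketch of this change (the constraint on `U` is left out for brevity):

```scala
object Units extends Enumeration {
  val em = Value(3, "em")
  val pt = Value(8, "pt")
}

// the unit is now also tracked as the type parameter U
case class Measure[U](value: Double, unit: U)

// the singleton type of a unit can be pinned down explicitly
val m: Measure[Units.pt.type] = Measure(14.0, Units.pt)
```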

This allows matching on the type of `unit` when it comes to comparing two different measures. Secondly, when searching for the right conversion, there is not much information about the type of the unit available. Asking for the type tag of an enum value results in `Units.Value`:

So, what can we do about it? Since values of an enum already fulfill the role of a singleton, why not turn them into objects?
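Sketched with the unit ids assumed above:

```scala
object Units extends Enumeration {
  // every unit is now an object, so each one has its own singleton type
  object px extends Val(0, "px")
  object em extends Val(3, "em")
  object mm extends Val(4, "mm")
  object pt extends Val(8, "pt")

  // objects are initialized lazily; this line forces strict evaluation
  private val strict = Seq(px, em, mm, pt)
}
```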

(The last line is necessary to force strict evaluation of the objects; otherwise the lookup `Units(0)` will fail.)

Asking for the type tag of an enum value now results in the singleton type of the corresponding object – great!

### The Real Action

Now that all preparations are done, let’s get to the real stuff. The conversion lookup will be implemented using typeclasses, just like the Scala collection library does with `CanBuildFrom`. (If you are not familiar with this encoding, I warmly recommend reading “Type Classes as Objects and Implicits” by Bruno Oliveira et al.)

The encoding follows immediately from the above term-level implementation (the names of the typeclasses are chosen to emphasize this correspondence).

The typeclass `CanConvert[From, To]` represents evidence that type `From` can be converted to type `To`. This evidence manifests in the implementation of the conversion, `impl`.
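A sketch of the typeclass, together with a hand-written example instance (the mm-to-pt factor is an assumption):

```scala
object Units extends Enumeration {
  object mm extends Val(4, "mm")
  object pt extends Val(8, "pt")
  private val strict = Seq(mm, pt)
}

// evidence that a value measured in From can be converted to To
trait CanConvert[From, To] {
  def impl: Double => Double
}

// example instance: mm can be converted to pt (72pt per inch, 25.4mm per inch)
val mmToPt = new CanConvert[Units.mm.type, Units.pt.type] {
  def impl = (v: Double) => v * 72.0 / 25.4
}
```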
Following the above implementation of `conv`, the first thing to do is to check for equivalence of `From` and `To`:
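A sketch of this first instance, living in the companion object:

```scala
trait CanConvert[From, To] { def impl: Double => Double }

object CanConvert {
  // chosen by implicit search only when From and To are the same type U
  implicit def id[U]: CanConvert[U, U] =
    new CanConvert[U, U] { def impl = (v: Double) => v }
}

object Units extends Enumeration {
  object pt extends Val(8, "pt")
  private val strict = Seq(pt)
}

// resolves to `id`; a lookup for two different unit types would not compile
val ev = implicitly[CanConvert[Units.pt.type, Units.pt.type]]
```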

`id` will be chosen by implicit lookup only if `From` and `To` are the same type `U`; it then provides the identity function as the conversion. (Since the scope of implicit lookup extends to the companion object, the companion is a good place to store all the implicit methods and values.)

This one was easy, but how can we encode “first try to find a conversion (a) to `Base`; only if this succeeds, try to find a conversion (b) from `Base` to `To`”?

If both conversions (a) and (b) are implemented as typeclasses, we can just use the `ToBase` conversion (a) as a premise to the `BaseTo` implicit. Thus, the lookup will fail if there is no conversion (a).
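A sketch of this premise encoding (`ToBase` is only declared here; the conversion factors are assumptions):

```scala
trait CanConvert[From, To] { def impl: Double => Double }

// evidence that From can be converted to the base unit pt
trait ToBase[From] { def impl: Double => Double }

object Units extends Enumeration {
  object px extends Val(0, "px")
  object mm extends Val(4, "mm")
  object pt extends Val(8, "pt")
  private val strict = Seq(px, mm, pt)
}

trait BaseTo {
  // conversion (a) to base is the premise; only if the compiler finds it
  // can it derive conversion (b) from base to the target unit
  implicit def basePt[From](implicit a: ToBase[From]): CanConvert[From, Units.pt.type] =
    new CanConvert[From, Units.pt.type] { def impl = a.impl }
  implicit def baseMm[From](implicit a: ToBase[From]): CanConvert[From, Units.mm.type] =
    new CanConvert[From, Units.mm.type] { def impl = a.impl andThen ((v: Double) => v * 25.4 / 72.0) }
}

object CanConvert extends BaseTo

// given a ToBase instance for px, px becomes convertible to pt (and mm)
implicit val pxToBase: ToBase[Units.px.type] =
  new ToBase[Units.px.type] { def impl = (v: Double) => v * 0.75 }

val pxToPt = implicitly[CanConvert[Units.px.type, Units.pt.type]]
```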

This might look a little complicated, but it just reads as “if you can provide me evidence that `From` can be converted to base, I can provide evidence that `From` can be converted to X”, where X is the corresponding `To` type, i.e. `Units.pt.type` in the first case.

For the corresponding implementation of `toBase` we thus need a second typeclass:
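A sketch with instances for the absolute units (the factors are assumptions):

```scala
trait ToBase[From] { def impl: Double => Double }

object Units extends Enumeration {
  object px extends Val(0, "px")
  object mm extends Val(4, "mm")
  object pt extends Val(8, "pt")
  private val strict = Seq(px, mm, pt)
}

object ToBase {
  // one instance per absolute unit, wrapping the plain conversion to pt;
  // there is deliberately no instance for the relative unit em
  implicit val ptToBase: ToBase[Units.pt.type] =
    new ToBase[Units.pt.type] { def impl = (v: Double) => v }
  implicit val pxToBase: ToBase[Units.px.type] =
    new ToBase[Units.px.type] { def impl = (v: Double) => v * 0.75 }
  implicit val mmToBase: ToBase[Units.mm.type] =
    new ToBase[Units.mm.type] { def impl = (v: Double) => v * 72.0 / 25.4 }
}

val ev = implicitly[ToBase[Units.px.type]]
```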

This one is easier again; it just reads as “to convert `From` to base, use the provided conversion function.”

Everything is wired together by object `CanConvert` inheriting from `BaseTo`. This way implicit lookup will prefer the identity implicit, and only if that fails will it continue the lookup in the parent trait `BaseTo`.

Using this framework to implement addition in `Measure` results in the following code:
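(The original listing is not reproduced; the following self-contained sketch, with the same assumed ids and factors as before, wires everything together.)

```scala
object Units extends Enumeration {
  object px extends Val(0, "px")
  object em extends Val(3, "em")
  object mm extends Val(4, "mm")
  object pt extends Val(8, "pt")
  private val strict = Seq(px, em, mm, pt) // force strict evaluation
}

// evidence that From can be converted to the base unit pt
trait ToBase[From] { def impl: Double => Double }
object ToBase {
  implicit val ptToBase: ToBase[Units.pt.type] =
    new ToBase[Units.pt.type] { def impl = (v: Double) => v }
  implicit val pxToBase: ToBase[Units.px.type] =
    new ToBase[Units.px.type] { def impl = (v: Double) => v * 0.75 }
  implicit val mmToBase: ToBase[Units.mm.type] =
    new ToBase[Units.mm.type] { def impl = (v: Double) => v * 72.0 / 25.4 }
  // no instance for em: conversions involving em will not compile
}

trait CanConvert[From, To] { def impl: Double => Double }

// low-priority conversions via the base unit
trait BaseTo {
  implicit def basePt[From](implicit a: ToBase[From]): CanConvert[From, Units.pt.type] =
    new CanConvert[From, Units.pt.type] { def impl = a.impl }
  implicit def basePx[From](implicit a: ToBase[From]): CanConvert[From, Units.px.type] =
    new CanConvert[From, Units.px.type] { def impl = a.impl andThen ((v: Double) => v / 0.75) }
  implicit def baseMm[From](implicit a: ToBase[From]): CanConvert[From, Units.mm.type] =
    new CanConvert[From, Units.mm.type] { def impl = a.impl andThen ((v: Double) => v * 25.4 / 72.0) }
}

object CanConvert extends BaseTo {
  // preferred over the inherited implicits whenever From == To
  implicit def id[U]: CanConvert[U, U] =
    new CanConvert[U, U] { def impl = (v: Double) => v }
}

case class Measure[U](value: Double, unit: U) {
  // addition now demands compile-time evidence that V converts to U
  def +[V](that: Measure[V])(implicit ev: CanConvert[V, U]): Measure[U] =
    Measure(value + ev.impl(that.value), unit)
}

implicit class MeasureExtent[N](value: N)(implicit num: Numeric[N]) {
  private def n = num.toDouble(value)
  def px: Measure[Units.px.type] = Measure(n, Units.px)
  def em: Measure[Units.em.type] = Measure(n, Units.em)
  def mm: Measure[Units.mm.type] = Measure(n, Units.mm)
  def pt: Measure[Units.pt.type] = Measure(n, Units.pt)
}

val sum = 14.pt + 4.px // ok: px -> pt via the base unit
// 14.pt + 1.em        // does not compile: no CanConvert[em.type, pt.type]
```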

## Discussion

The usage of the API did not change, but `14.pt + 1.em` will now result in a compile-time error. This error can even be customized using `scala.annotation.implicitNotFound` on `CanConvert`.
The best thing about this encoding is that it is external to `Measure`. This way users can add their own `CanConvert` instances to allow additional conversions. For instance, a typeclass `ConvertRelative` could be defined that searches for an implicit value of the font size and allows conversion from `em` to absolute measures.
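Such an extension might be sketched as follows (`FontSize` and its factor are my assumptions, not part of any library):

```scala
trait CanConvert[From, To] { def impl: Double => Double }

object Units extends Enumeration {
  object em extends Val(3, "em")
  object pt extends Val(8, "pt")
  private val strict = Seq(em, pt)
}

// hypothetical: the current font size in pt, to be provided by the user
case class FontSize(pt: Double)

// with a font size in implicit scope, em becomes convertible to pt
implicit def emToPt(implicit fs: FontSize): CanConvert[Units.em.type, Units.pt.type] =
  new CanConvert[Units.em.type, Units.pt.type] { def impl = (v: Double) => v * fs.pt }

implicit val fontSize: FontSize = FontSize(12.0)
val ev = implicitly[CanConvert[Units.em.type, Units.pt.type]]
```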

Perfect, isn’t it?

Sadly, the answer for this use case is “no”. The typing of the underlying framework Vaadin is too weak: methods like `ComponentPosition.getTop` can return an arbitrary measure, so not all the compile-time information necessary to make implicit lookup work is available. Of course, one can pattern match on the result of the method call to restore the type information, but this is tedious and too verbose.

I would love to hear suggestions on how this situation could be improved. In fact, this post is presented in reverse order: I started with the typed version, and only after encountering these usability issues did I implement the “runtime version”.
