Monday, 30 September 2013

More on the Subtleties of Scala and the Uniform Access Principle

I posted before about Scala and the Uniform Access Principle.  At that point I felt pretty smug that I’d noticed something clever.  Well, it turns out I hadn’t realised everything, but luckily the Atomic Scala Atom “Uniform Access and Setters” was there to bring me fully up to speed.

Once you’ve seen it, it makes perfect sense, but when I came into this chapter-ette I’d been under the impression that you could swap defs for vals or vars as you pleased.  However, it soon became clear that there was more to think about, and this thinking comes down to contracts; contracts that the language makes with us, and which it (and we) can’t break.

For this discussion, the relevant Scala contracts are:

  • vals are immutable
  • vars aren’t
  • functions (def) can return different values and so Scala can’t guarantee the result will always be the same

This means that you can’t just implement fields and methods from an abstract base type in a subtype using any old variable or function. You must make sure you don’t break the contract that was made already in the parent.  Explicitly:

  • an abstract def can be implemented as a val or a var
  • an abstract var can be implemented as a def as long as you provide a setter as well as a getter

You’ll note that abstract vals can’t be implemented with defs.  That makes sense if we think about it.  A def could return various things – Scala can’t guarantee it’ll always be the same (especially if you consider overriding), whereas vals are immutable.  Broken contract? Denied.
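
To convince myself, here’s a minimal sketch of my own (not the book’s listing; the class and member names are made up):

// An abstract def can be satisfied by a val in the subtype: a val is
// immutable, so it only strengthens the promise the def made.
abstract class Creature {
  def legs: Int                // abstract def: no immutability promised
}

class Dog extends Creature {
  val legs = 4                 // fulfils the abstract def with a val
}

// The reverse direction breaks the contract and won't compile:
// abstract class Fixed { val size: Int }
// class Wobbly extends Fixed { def size = 42 }   // error: a def can't implement a val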

An Added Subtlety

But wait a minute, we missed a fourth contract.  That second bullet mentioned setters. The contracts in play here are actually four-fold:

  • vals are immutable
  • vars aren’t
  • functions (def) can return different values and so Scala can’t guarantee the result will always be the same
  • vars require getters and setters

But we can still roll with that: if we add another little piece of Scala sugar, we can supply a setter method in the subtype:

def d3 = n
def d3_=(newVal:Int) = (n = newVal)

Here, the “def d3_=…” line adds the setter Scala needs to fulfil the contracts and we’re back in action.
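
For context, here’s my own small reconstruction of the kind of thing going on (the names Base, Impl and n are mine, not the book’s):

abstract class Base {
  var d3: Int                             // abstract var: callers expect both a getter and a setter
}

class Impl extends Base {
  private var n = 0                       // backing field, for illustration
  def d3 = n                              // getter satisfies reads of d3
  def d3_=(newVal:Int) = (n = newVal)     // setter satisfies writes like impl.d3 = 7
}

val impl = new Impl
impl.d3 = 7                               // calls d3_=
impl.d3                                   // calls d3, returns 7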

Does This Also Stand For Non-Abstract Overrides?

One final thing to consider: how uniform is the Scala implementation of the principle, really?  Pretty much universal as far as I can see, because, going beyond the scope of the Atom, what happens when the superclass and its methods and fields aren’t abstract?  It turns out it’s exactly the same as above, as long as you remember your override keywords.  Predictable. Nice.
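
Again, a minimal sketch of my own (made-up names) rather than anything from the book:

class Parent {
  def greeting = "hello"                  // concrete def this time
}

class Child extends Parent {
  override val greeting = "howdy"         // override keyword now required, but a val still works
}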

Wednesday, 25 September 2013

Scala is an Expression-Based Language Not a Statement-Based One

I listened to a great podcast from the Java Posse Roundup 2013 on the train home last night: “Functional Programming”.
During it, Dick Wall briefly described how Scala is an “expression-based” language as opposed to a “statement-based” one (because everything in it is an expression), and that this was one of the main reasons why he liked it.
In short:
  • if a language is expression-based, it means everything has a value (i.e. returns something),
  • but if it is statement-based, some things need to rely on side-effects to work (e.g. in Java, if/else doesn’t have a value; it doesn’t return anything), as the sketch below shows
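
A tiny illustration of my own (not from the podcast): in Scala the if/else is an expression whose value can be bound directly, where Java would need an assignment in each branch (a side effect) or the ?: ternary:

val temperature = 25
val label = if (temperature > 30) "hot" else "mild"   // if/else has a value, so we can assign it
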
Now, I’ve looked into the terms statement and expression before on this blog, but the full weight of the concept hadn’t then struck home.  Listening to the podcast was the final push I needed.  Consequently, I did a little more reading on the topic.  I was planning on writing a post here about what I found, but instead found Joel Spolsky had got there before me.

Monday, 23 September 2013

Simple Tuples for the Tired, Plus a Lesson in the Uniform Access Principle

I’m back on the early train again.  I’m tackling Tuples again too. (Atomic Scala, “Tuples” atom.)

I’m just past the “Brevity” and “A Question of Style” atoms too, so the more succinct syntax, coupled with something my Java-brain isn’t tuned into (plus, I would argue, the early hour), meant I did a quadruple-take and then a little investigation before tuple unpacking made sense to me.  The reason?  This idiom (for I’m guessing it’s idiomatic):

def f = (1, 3.14, "Mouse", false, "Altitude")

What do we know from this, despite the spare syntax?

  1. it’s a function (because of the “def”)
  2. it’s called “f”
  3. it takes no arguments, and does not mutate state (no parentheses just after the method name)
  4. it comprises a single expression (no curly braces)
  5. the return type is a tuple
  6. this return type is inferred by the compiler (no need to specify it in the method signature) (thanks Semeru-san)

With that in mind, what does this function do?  Well, nothing. It only serves to make and return a 5-value tuple which can be captured in a val or var; the first element of which is an Int, the second a Double, the third and fifth are Strings and the fourth is a Boolean.  In short, it’s a very simple tuple-builder.

Next step in the atom is to prove to ourselves that a tuple is just a collection of vals.  Firstly, we allocate our tuple to a val:

val (n, d, a, b, h) = f

That’s a little scary to the uninitiated (or tired) too. But it’s simply saying that I want to call the previously defined tuple-maker function, f, and then store the resulting tuple in another, explicitly defined, tuple of vals. This is called unpacking, and that then means, as well as a val for the tuple as a whole, we also have vals for each individual element and can manipulate them individually (notice the order change in the second line, and the “is” function comes from AtomicTest):

(n, d, a, b, h) is (1,3.14,"Mouse",false,"Altitude")
(a, b, n, d, h) is ("Mouse", false, 1, 3.14, "Altitude")

The final little chunk is some tuple indexing.  Again, this has an unfamiliar syntax, but makes sense once you roll it around a little:

f._1 is 1           // NOTE: NOT ZERO-INDEXED!
f._2 is 3.14
f._3 is "Mouse"
f._4 is false
f._5 is "Altitude"

This seemed a little haywire at first, but it makes perfect sense.  Again, breaking it down, we have f, the tuple-maker function from before, which returns a tuple of elements in a specific order when called.  So after the “f” we have a tuple.  We can then call methods on this (using the “.” in this example) and the methods we are calling here are the ones for indexing.

Lessons Learned

  1. Trying to learn a new type of language when still waking up is difficult
  2. There is a secret point that this atom is also making: in this simplest of simple functions, f, we’re seeing the uniform access principle in action. That is to say, as functions are first-class, I should be able to do with a function what I would do with any other object, and it should be transparent.  It was this transparency that pulled me up short.  Now, having realised this, I’m a lot more comfortable with the principle, and with another small piece of Scala syntax (see the small sketch below)
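
To make that concrete for myself (this is my own sketch, not the book’s), swapping the def for a val changes nothing for the caller:

def f = (1, 3.14, "Mouse", false, "Altitude")      // computed on every call
// val f = (1, 3.14, "Mouse", false, "Altitude")   // computed once; callers can't tell the difference

val (n, d, a, b, h) = f                            // unpacking works identically either way
f._3                                               // as does indexing: "Mouse"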

Tuesday, 17 September 2013

Slowly Constructing a Pseudo-Backus-Naur Form

“For Comprehensions” (or just “Comprehensions”) are famously (on this blog at least) where I “lost it” during Typesafe’s Fast Track to Scala course right back at the start of this, and despite tackling it again on my own via Scala in Action it’s still not properly stuck.  At the time, the “losing it” pretty much came down to having little that was familiar to grip onto.  Nothing looked as I’d expected it to (the syntax for example) and the terminology wrapped around the things that were tossed out to help me was alien too.  That first post (linked above) already makes clear I’m not coming at Scala from the traditional CompSci background, and in many respects this unholy combination of factors is precisely why I’m diving into this – I know this is an area I am both weak in, and will benefit greatly from learning.  The second post (again linked above), while a lot more positive, still flounders a little in the big-ness of it all.  This post is a stripping back, and a bit of solid progress forward too.

The “Comprehensions” Atom

So now I’m tackling them again; the Atomic Scala aversion therapy approach has got me up to this point, and I’ve just had my first trip back into the water at the scene where I had my terrible experience last time.  But this time I’m prepared.  I’ve seen the basic Scala for construct on its own, stripped down and simple.  I’ve also been steadily introduced to much basic but needed terminology – “expression” for example, and infix – and built up a solid and dependable understanding of it.  Besides this, I’ve slowly jettisoned my must-be-C-like syntax dependence with some hardcore functions-as-objects fiddling, and parentheses and semi-colons have been ripped away from me left, right and centre, but I’m still comfortable.

But even with all this in the mental kit-bag, I’m going to take this slowly.  I’m fortunate that this is also the approach Bruce and Dianne feel is appropriate.  Typically a new concept is introduced not only with the hardcore details, but also with some lighter discussion of why, or how Scala folks mostly tend to use something.  They are also very up front about when they’ve avoided looking under such-and-such a rock for now; but they do point out the rocks, so you know there’ll be a return for more at a later date.

One such example in this Atom is the discussion around the possible flavours of for-comprehension using either parentheses or curly braces (Atomic Scala, p. 177).  A cursory glance at this topic, considering the (surface) similarity of the Scala for-comprehension with the Java for loop, might suggest that starting an introduction to such significant syntax with surrounding parens rather than curly braces would be more intuitive.  It’s not.  My biggest mistake was to bring anything of the Java for with me when I started on this Scala construct.  By going in first with multi-line formatting and curly braces at the top and bottom, announcing (subtly) a block, the authors give me something to read, line-by-line.  To me, it’s a bonus that this happens to also be the way most Scala code is written.

Now that I have the metre of the construct we’re looking at, I can begin to look at the individual elements.  The first step is to re-visit the generator.  The statement beginning “the for loop you saw in For Loops was a comprehension with a single generator…” (my italics) implies to the careful reader that things will get hairier later on in the generator space - but not yet - we’re sticking with one.  I’ve already commented that I like this syntax. (And just for the record, I think the Java for syntax, and by extension the C/C++ syntax, is terrible.)  Let’s keep moving.

Next up is the addition of some filters.  These are expressed in the form of if expressions which I’ve also seen a lot previously. It’s beginning to look as if every element that is pulled out of the input by the generator (or should I say generated?) drops down through the various filters which follow, only proceeding if the expression returns true.

The last piece in this puzzle is the definitions.  While these aren’t picked out as explicitly as the other two constructs, it seems clear that they are the remaining statements, which all seem to be allocating values to vals or vars, either ones scoped within the comprehension body or outwith it.  isOdd = (n % 2 != 0) is one such definition.  The authors note the lack of a val / var declaration for this and I do too.  Scala is managing things for us here.

Putting It All Back Together

The final step is to pull it all back together to see what we have in totality – and I’m finding that this is easiest by my building a mini, mental pseudo-Backus-Naur Form. (See how nicely formatted my mind is? :D ):

for ([generator]) {
    [filter | definition] (*)
}

Please note, I find B.-N.F. as illegible as anything else, and the above chunk makes no attempt to be valid in any way. (It’s pseudo B.-N. F. remember). But it does provide me a way of representing what I’m building up on the page/screen.

When we do this it seems clear that we have an incredibly powerful tool for working our way through collections, filtering each of the elements in turn to restrict things to only the ones we really need, and using definitions as required to keep our code clean and clear.
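
To check I’ve got it, here’s a small comprehension of my own (not one of the book’s listings), with one generator, one definition and one filter, in the curly-brace layout the book favours:

for {
  n <- 1 to 10              // generator
  isOdd = (n % 2 != 0)      // definition (no val/var needed)
  if isOdd                  // filter
} println(n)                // the body runs for each element that survives the filter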

But What About yield?

Helpfully for me, this just made sense.  Having come across the same keyword in Ruby (though it’s used differently there, with closures), I already had a mental construct of what this keyword should do to a running system, and it fits this use-case just as nicely, even when we get to the multi-line-block variety.  Let’s add it to the mini-mental pseudo-B.-N.F.:

for ([generator]) {
    [filter | definition] (*)
} [expression using element from the left-hand-side of the generator
  | yield {
        [expression using element from the left-hand-side of the generator] (*)
    }]
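
And a matching sketch of my own with yield: the same shape, but now each surviving element produces a value and the whole expression hands back a collection we can capture in a val:

val odds = for {
  n <- 1 to 10
  if n % 2 != 0
} yield n * 10              // one value produced per surviving element
// odds is Vector(10, 30, 50, 70, 90)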

What Next?

I’ve been around this game long enough to know that it’s not going to stay as simple as I currently have it in my head.  But this feels nice and solid enough a piece of scaffolding to be getting on with.  I’m not going to speculate on where the additional complexity is going to come from.  I’m happy for now to roll this elegant piece of syntax around my brain pan for a while and see how it feels.

Monday, 16 September 2013

It’s Nice That…

In Scala, Arrays are a lot more similar to the other collection classes (like Vector). E.g.:

def sumIt(args:Int*) = {
    args.reduce((sum, n) => sum + n)
}

Looks pretty similar to:

def sumIt(args:Vector[Int]) = {
    args.reduce((sum, n) => sum + n)
}
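
And calling them looks just as similar too (a quick check of my own):

sumIt(1, 2, 3)              // varargs version: 6
sumIt(Vector(1, 2, 3))      // Vector version: 6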

Wednesday, 11 September 2013

Legibility Wins and Losses (Part 1 in an Occasional Series)

I keep banging on about Scala’s syntax and the resulting legibility.  I’ve also mentioned more than once that I think the style propounded in Atomic Scala is the clearest I’ve read anywhere, and is likely to be a deciding factor in the book’s longevity.

What follows is a round-up of the wins, and losses, for legibility that I’ve encountered so far.  This is all opinion, so please feel free to ignore everything in this post.

Wins

  • Scripts and classes
  • No semi-colons
  • vals and vars (as well as immutability as a general concept)
  • type inference
  • function definitions
  • The for loop (and how they play with Range literals)
  • Class Arguments
  • Named arguments
  • Default arguments
  • Case Classes (because of the terse syntax – no need for a body!, the removal of the need for “new” and the free “toString”)
  • Some of the methods in core classes (e.g. ‘to’ and ‘until’ when creating Ranges)
  • Overloading
  • String Interpolation
  • Optional Parameterized Types (square brackets just seem more apt for a DSL, but being optional means I can avoid using them unless it’s necessary)

Losses

  • :: (aka “prepend to a List”)
  • Some of the methods in the Collections classes (e.g. Vector.sorted)
  • Constructors (where did they go? they’ve just become a “run” of the body of a class. And yes, I know about the “Constructors” Atom in Atomic Scala)
  • Auxiliary Constructors
  • Function definitions within function definitions
  • """bleh""" (triple quotes; really?)

Undecided

  • Pattern matchers
  • The Scala Style Guide
  • Optional parentheses
  • Optional ‘.’ when calling methods on objects
  • Return type inference

Could I have a style guide which just banned the “Losses” way of doing things?  Yes. We do that in Java all the time.  Might I be wrong about some of these?  Yes.  Noob’s prerogative.  Would an IDE help? Yes, syntax highlighting is a boon in my opinion.  Have they chipped away at my enthusiasm?  Not really.  The more I see, the easier it becomes to read.  It’s generally getting less and less outlandish looking with every passing day.

Pattern Matching, Take 2, Part 1

Things are about to get interesting.  I’ve made it to the point in Atomic Scala where Bruce (Eckel) and Dianne (Marsh) recommend you jump in if you’re proficient in another language (not me, hopefully), or if you didn’t want to make damn certain you grokked the basics before moving on to the core stuff (definitely me).  So what’s up?  Pattern Matching, that’s what’s up.

The first time I came across the Pattern Matching syntax, like many things in Scala, I wasn’t a fan.  I’m rapidly coming to the conclusion, however, that this initial revulsion is down to the way things are normally presented, rather than the syntax itself.  My primary requirement (in order for it to enter my head and slosh around up there accruing supporting information) is that it reads.  That’s one of Dianne and Bruce’s strengths; the first encounters with new syntactical elements always serve as a way to “read it to yourself in your head”. For example, you could narrate their introductory example on p. 130 as:

“take this color [sic] object and match it to these cases: ({)
    in the case when it’s “red” => then produce result “RED”
    in the case when it’s “blue” => then produce result “BLUE”
    in the case when it’s “green” => then produce result “GREEN”
    and in the case when it’s anything else (_) => then produce result “UNKNOWN COLOR: ” + color”
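
In code, the sort of match being narrated looks something like this (my own sketch, not the book’s listing):

def describe(color: String) = color match {
  case "red"   => "RED"
  case "blue"  => "BLUE"
  case "green" => "GREEN"
  case _       => "UNKNOWN COLOR: " + color   // the catch-all case
}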

That feels right to me.  I know it can get a helluva lot more complicated, but having that basic understanding to hang my hat on helps a lot.  Helpfully, Bruce and Dianne also signpost specifically where things are going to get hairy later on.  But I don’t have to worry about that yet.  First I’ve got the exercises to cement things.