Oct 10, 2012

Massive Multiplayer Recreational Programming Games

I've finally done it: 153 (out of 153) problems solved, first rank (out of more than 9000!) on 4clojure. Well, at the moment of writing, anyway.

It started as an attempt to justify an impulse purchase of the book Programming Clojure.

Fiddling with the language and searching for examples of some decent Clojure code online, I came upon a link on a forum, tried to solve the first couple of problems just for kicks... and got sucked in.

4clojure is a website where you get a small self-contained problem to solve, write a snippet of Clojure code, press the "Submit" button, see the unit tests go green, rejoice, rinse and repeat.

It's somewhat similar to Project Euler, except that it's focused on Clojure and is more interactive.

And somehow it turned out to be a lot of fun for such a generally boring activity.

I mean, most people would not be as motivated to solve these problems if they were a homework assignment. I am certain I would not.

At least, not in a way that makes you crave to solve "just one more before going to sleep", or look forward to turning that tough one in once you get back online.

Quite an amusing psychological phenomenon.

My speculation is that one can find some parallels between this and certain mechanics of massive (or not-so-massive) multiplayer online games; the ones that make them "fun". Thus the cheap "MMORPG" pun in the title, see?

For example, let's take the "constant, tangible positive feedback on the user's activity" (oh yes, that sounds boring, for sure... but the fun kinda comes from it).

In the case of 4clojure this kind of feedback is present on several levels:

  • The problems are mostly quite short, so it generally takes a limited amount of time to solve (and "turn in") each one.
  • It runs "inside" the browser, in the sense that you type the code in, press the button and immediately see whether your code does the right thing. Seeing the bullet points turn green when the tests pass, versus staying grey (or turning red) otherwise, also has an interesting psychological effect.
  • There is a "Top users" list, where you are ranked by the number of problems solved, and you can watch your own entry slowly climb upward with every problem that has just gone all green.
  • After solving each problem you can see other users' solutions to it and compare them with your own code. I found this to be an invaluable experience: after trying and solving the problem yourself, you have a much richer context for reading and understanding someone else's code. Quite often, the solutions by different people differ a lot. Quite enlightening.
  • And probably the most important one: you constantly feel that you are learning! The language, the algorithms, the ways to write (and not to write) elegant (and not-so-elegant) code.

There is no MMORPG without cheating, and we have it here... in a sense.

See, for every problem, the ultimate goal is to input a piece of Clojure code which, when copy-pasted into the blanks inside several predefined "test" expressions, makes them all evaluate to true.

So let's say we have a problem that asks us to write a function that performs some complex computation on an input string and returns some number.

The test cases might be:

(= 42 (__ "Some string"))
(= 13 (__ "A different string"))
(= 17 (__ "Even different string"))

Whatever complex computation on those input strings the problem actually wants, you don't really need it for the tests to pass:

#({\S 42, \A 13, \E 17} (first %))

will do well enough.

Instead of actually solving the problem (whatever it is), we are just using a lookup table here.

Even more hilarious are the cases where, given test cases of the kind:

(= false (__ (some-very-complex-expression1)))
(= true  (__ (some-very-complex-expression2)))
(= true  (__ (some-very-complex-expression3)))

one comes up with something like:

(fn [_] ([true false] (int (* 2 (Math/random)))))

...and then spams the "Submit" button until the random sequence of results (in what is essentially a coin flip experiment) matches the desired sequence of true and false values.

By the way, can you tell what the probability is that we get the "right" answer within, say, 10 "Submit" button presses (uhm... sounds somehow familiar)?
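
For the impatient, a back-of-the-envelope answer (assuming each of the three tests flips its own independent coin on every submission): a single press succeeds with probability (1/2)^3 = 1/8, so the chance of at least one success within 10 presses is 1 - (7/8)^10, roughly 0.74. A quick check in Clojure:

;; probability of at least one all-green submission in 10 tries,
;; with three independent fair coin flips per submission
(- 1 (Math/pow (/ 7.0 8) 10))
;; => ~0.737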

What is amusing is that the boundaries of what can be called "cheating" here are actually fuzzy. It can all be seen as just a custom set of rules, a metagame of sorts, played, again, just for fun.

Another thing worth mentioning is that most of the content in the "game" is apparently user-generated content, something that has been a Holy Grail of MMORPGs.

You can quite easily submit your own problem, which then gets reviewed by the moderators and becomes part of the game. Cool, isn't it?

But alright, no matter how great it is, there are still things that could've been better.

My personal list of issues (with hopefully constructive comments):

  • The complexity of the tasks is graded as "trivial", "easy", "medium" or "hard" (and the problems are sorted by this complexity). The grade is often not representative of the "actual" complexity (it is assigned by the author of the problem at the point when it's initially submitted). Some kind of user voting on the complexity (and, for that matter, maybe on the problem itself) could help.

  • There are timeouts on how long your code may take to execute. While that's a good thing in general, the measured time apparently also includes e.g. macro expansion, which for some reason can take insanely long (one example is the use of the "for" macro in some of the complex problems, which I ended up avoiding altogether). Granted, this looks more like a problem on the side of Clojail (the library used to sandbox the code execution), so that's probably where it should be fixed in the first place.

  • The unit test cases are sometimes boring, unrepresentative and easily cheated. That's where voting (and commenting) could possibly help as well.

  • "Code golf" league, while cool, often makes people write incomprehensible code (as opposed to clean and elegant). I believe this is partially beacause of too simple "conciseness" metric. At the moment it's just the amount of characters minus whitespace and commas:

        ;; 4clojure's golf score: count every character that is
        ;; neither whitespace nor a comma
        (defn code-length [code]
          (count (remove #(or (Character/isWhitespace %)
                              (= % \,))
                         code)))
    

    This could possibly be improved by e.g. not counting the comments, not taking the identifier length into account, or counting forms rather than characters (and possibly the "complexity" of the forms... some nestedness factor or something); see the sketch after this list.

  • The user ranking system could be more sophisticated. At the moment it's just the number of problems solved. Ideally, it could also take the "quality" of the solutions into account: running time (even though it's quite unclear how to measure it adequately), votes of other users for the solution, "proper" code golf metrics.

  • The scope of the problems is limited with respect to Clojure's language features. Most of the problems are solved via plain sequence manipulation, without really involving things such as macros (and DSLs), multimethods, namespaces etc. I am not really sure how to tackle this one (aside from consciously submitting more problems on the poorly covered language features). Some of the limitations come from the very approach of "copy-pasting the user's code snippet into the blanks inside the test expressions". Maybe that could be extended somehow.
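
As for the form-counting idea mentioned in the code golf point above, here is a minimal sketch (form-count is a hypothetical name, and a real metric would also have to handle multiple top-level forms, reader macros and so on):

;; Count every form in the parsed tree: the reader already drops
;; comments, and identifier length stops affecting the score.
(defn form-count [code]
  (count (tree-seq coll? seq (read-string code))))

(form-count "(map inc [1 2 3])")
;; => 7 (the list, map, inc, the vector and the three numbers)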

But enough with the "complaining".

To sum up, it has been a great experience, both educational and entertaining. There might just be something very promising about this way of learning things.

I believe people need more stuff like 4clojure.