Having just been hired by a group that thinks SML is the only true language, I should probably not be writing this… but here we go.
Today I participated in another programming contest. Nothing official, just for the fun of it. And I decided to challenge myself and solve the first problem (a really easy one) in a different language. My language of choice for quick and dirty coding has always been C++. It can easily read from and write to standard I/O, has a bunch of libraries and data structures available, and you can compile and have an executable file with a simple g++ code.cpp. No weird keywords or classes, no linking of funny libraries, no console, just a plain binary file ready to be run and, most importantly, to run fast!
But there are all these fancy languages around, so I decided to go ahead and see how they would perform (or rather, how I would perform with them). The problem in question was a simple one. Given a very big number (up to 10¹⁶), we had to transform it in a funny way (reverse and rotate its digits — e.g. a 6 becomes a 9) and check whether the original and the transformed numbers are both prime [1]. The primality test is the first thing that pops out. Maybe I need to implement a fast primality-checking algorithm? That is not the case for this problem: the silly one that checks for divisors between 2 and the square root of the number works just fine (the time limit is 2s). The real problem is the size of the number. Remember that a 32-bit integer can only hold up to 2,147,483,647, a 10-digit number. So we just need a 64-bit integer and the same algorithm works.
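Just to make the check concrete, here is a minimal sketch of that naive test (I'll write it in Python, which makes an appearance below; the function name and the example call are mine, not the contest's):

```python
def is_prime(n):
    """Naive primality test: trial division by every d with 2 <= d <= sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # stop once d passes the square root of n
        if n % d == 0:     # found a divisor, so n is not prime
            return False
        d += 1
    return True

# 2,147,483,647 (the 32-bit limit mentioned above) happens to be prime.
# Python integers never overflow; in C or C++ a 17-digit input would need
# a 64-bit type such as long long.
print(is_prime(2147483647))   # True
```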
My first attempt was to implement it in the English of my programming languages (not my native language, but one in which I am quite fluent): OCaml. The code looked nice, apart from some weird casts from int to float and then back to int because, apparently, sqrt and exponentiation only work on floats (what happened to polymorphic functions? wait… it gets worse). After implementing all my recursive functions beautifully, I tested on a few cases and made sure it was working. Then I tested on the biggest possible input and it worked. Great! Submit… runtime error. 🙁 Turns out that, although my machine is 64-bit, the server is 32-bit, so the integers there were overflowing. All I needed to do was use OCaml's “long int”. I found two options: Int64 and Big_int. Since Big_int had more operations (like sqrt and exponentiation), I went for that one. The problem was that, since I was no longer using int, I was no longer allowed to use + or - or /, no no no. I had to use add_big_int, sub_big_int, div_big_int, and so on. My pretty, non-verbose functions were ugly 🙁. With that came a million type errors; I had to add conversions everywhere and link an extra library when compiling. The thing was just horrendous.
This was more than an hour into the contest, so I decided to try something different.
Integers of arbitrary precision? Let's take a chance on the German of my programming languages (I cannot quite speak it but really wish I could): Python. It took me an hour to write 50 lines of code, since I kind of had to learn everything from scratch, but I ended up with a working program. In the meantime I learned the hard way that Python and recursion do not go together (there is a recursion depth limit, and no tail-call optimization). It was working for the biggest possible input, and it is Python, so if it needed a bigger integer it would just switch to arbitrary precision at runtime. Great! Submit… memory limit exceeded. What? Reading here and there I decided that range() was the thing to blame, so I switched to a strange islice thing. Submit… time limit exceeded 🙁 Come on!
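For the record, the islice version looked roughly like this (a rough reconstruction, not my actual contest code); the point is that in Python 2, range() builds the whole list of candidate divisors in memory, while islice over count() produces them one at a time:

```python
from itertools import count, islice

def is_prime(n):
    if n < 4:                     # handle 0, 1, 2 and 3 directly
        return n >= 2
    limit = int(n ** 0.5) + 1     # one past the square root, to be safe
    # In Python 2, a plain range() over the same divisors would materialize
    # the entire list up front (around 10**8 ints for n near 10**16), which
    # is what exceeded the memory limit.  islice over count(2) yields
    # 2, 3, ..., limit lazily instead.
    return all(n % d for d in islice(count(2), limit - 1))
```

Even with the lazy iterator, though, a pure Python loop over up to 10⁸ candidate divisors is a lot of work for a 2s limit, which is presumably why the verdict simply changed from memory limit exceeded to time limit exceeded.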
By the time the contest ended I had two beautiful pieces of code in two different languages that just did not work, for technical reasons. Don't get me wrong, I am sure these languages have their place and are really good for some things. But for coding something easy quickly and efficiently, I'll stick with good old C++.
Out of curiosity I checked what languages the other people in the contest were using. Note that these are all people at least 5 years younger than me, possibly 10, still CS undergrads. And what were they using? C++!! It is quite impressive, since I am pretty sure they learn programming in Python first and then move on to C or ML. They could choose any of the fancy modern languages, but they decide to stick with C++. To show that this is not just a biased sample, check the statistics on websites like Codeforces, SPOJ, or UVa.
I learned how to program in Java (it was the latest thing at the time), and nowadays kids are learning how to program in Python (it is the latest thing now); the persistent use of C++ through all of that says something about the language. A colleague once referred to C as an honest language. I think C++ fits this description too. No fancy stuff: it is the thinnest layer between you and the assembly code, and it does the job pretty well.
[1] Problem K here.