Scheme in 33 Miniatures

This weekend I was trying to wake my old brain up by looking at the clever and mind-bending problems from my graduate school days. I have really gotten excited about Madhur Tulsiani's Mathematical Toolkit class — as a "data scientist"/"machine learning engineer" (I sometimes wonder what these designations mean — heck, I'm just grateful to have been able to work on A.I., with all its challenges and contradictions, for a couple of decades) I find myself treasuring the notes and exercises.

One gem that I found is the text Thirty-three Miniatures: Mathematical and Algorithmic Applications of Linear Algebra by Jiří Matoušek. It is full of memorable tricks and insights that leverage linear algebra. While the text is mostly about proofs, I used some of the algorithms it presents to do some Lisp programming this weekend.

I started off with the first two miniatures, both on Fibonacci numbers — the first presents a fast method for computing the nth Fibonacci number, and the second, while probably not as fast, is elegant in that it derives a closed-form expression for it.

So first, I wanted to re-awaken my Lisp programming brain. Racket is definitely one of the more innovative Lisp languages to have emerged over the last 10 years. You can think of Racket as a package that provides a collection of Lisp languages, among them standard Scheme — a pared-down Lisp dialect.

The first implementation achieves O(\log n) running time by using exponentiation by squaring. This follows from the observation that the Fibonacci recurrence can be written as a matrix power: \begin{pmatrix} F_{n+1}\\ F_n \end{pmatrix} = M^n \begin{pmatrix} 1\\ 0 \end{pmatrix} where M = \begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix}. I hadn't known this, but when n is a power of 2 we can compute M^n by repeatedly squaring M, and the case where n is odd is taken care of by an extra multiply. I'm guessing that the core exponentiation routine would use this (or a faster method) anyway, but it is still interesting to consider.
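To make the squaring idea concrete before bringing in matrices, here is a minimal scalar sketch (my own illustration, not from the book):

```racket
;; Exponentiation by squaring on plain integers: O(log n) multiplies.
;; (ipow b n) computes b^n for a non-negative integer n.
(define (ipow b n)
  (cond [(= n 0) 1]
        [(even? n) (let ([half (ipow b (/ n 2))])
                     (* half half))]
        [else (* b (ipow b (- n 1)))]))
```

The same shape carries over essentially verbatim once `*` is replaced by `matrix*`.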

So first, let's see how a quick exponentiation by squaring does against the method provided in Racket's math/matrix package. Here's my quick recursive method, based on the tail-recursive method in the Wikipedia article:

(require math/matrix)

(define one-matrix (matrix [[1 0] [0 1]]))

;; recursive exponentiation by squaring
(define (exp-by-squaring x n)
  (define (exp-by-squaring-aux y x n)
    (cond
      [(= n 0) y]
      [(= n 1) (matrix* y x)]
      [(= (modulo n 2) 0)
       (exp-by-squaring-aux y (matrix* x x) (/ n 2))]
      [else
       (exp-by-squaring-aux (matrix* y x) (matrix* x x) (/ (- n 1) 2))]))
  (exp-by-squaring-aux one-matrix x n))

And here is a quick running-time comparison:

> (time (dotimes (x 1000) (exp-by-squaring (matrix [[1 1 ] [1 0]]) 100)))
cpu time: 793 real time: 802 gc time: 175
> (time (dotimes (x 1000) (matrix-expt (matrix [[1 1 ] [1 0]]) 100)))
cpu time: 143 real time: 144 gc time: 36

Looks like I'm better off with the library — although I'm curious to know what the library implementation is.

So sticking with the matrix-expt library function:

;;;-- Miniature 1: Fibonacci Numbers, Quickly
(define one-zero-matrix (matrix [[1 0]]))

(define fib-matrix (matrix [[1 1 ] [1 0]]))

(define (fast-fib n)
  ;; Fast method of computing the nth Fibonacci number
  (let* ([m-expt (matrix-expt fib-matrix n)]
         [idx0 0]
         [idx1 1]
         [prod (matrix* one-zero-matrix m-expt)])
    (matrix-ref prod idx0 idx1)))
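A quick sanity check, assuming the definitions above are loaded: the row vector [1 0] times M^n is (F_{n+1}, F_n), so entry (0, 1) of the product is F_n.

```racket
;; Print the first few Fibonacci numbers via the matrix-power route.
(for ([n (in-range 1 8)])
  (printf "~a " (fast-fib n)))
;; expect: 1 1 2 3 5 8 13
```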

Now here's an old-school tail-recursive implementation:

;;; Old school fib
(define (os-fib n)
  (letrec ([os-fib-helper
            (lambda (n a b)
              (cond
                [(= n 0) b]
                [else (os-fib-helper (- n 1) b (+ a b))]))])
    (cond
      [(= n 0) 0]
      [(= n 1) 1]
      [else (os-fib-helper (- n 2) 1 1)])))

Now let’s look at how they compare.

> (time (dotimes (x 10000) (fast-fib 100)))
cpu time: 2554 real time: 2612 gc time: 617
> (time (dotimes (x 10000) (os-fib 100)))
cpu time: 41 real time: 41 gc time: 11

Wow. Maybe we'll have to go to Typed Racket to see any gains there!

Typed Racket claims to get a performance boost from type annotations. Here is the typed version of the fast Fibonacci:

;;; -- Miniature 1: Fibonacci Numbers, Quickly (with type hints)
(: one-zero-matrix (Matrix Integer))
(define one-zero-matrix (matrix [[1 0]]))

(: fib-matrix (Matrix Integer))
(define fib-matrix (matrix [[1 1 ] [1 0]]))

(: fast-fib (-> Integer Integer))
(define (fast-fib n)
  ;; Fast method of computing the nth Fibonacci number
  (let* ([m-expt (matrix-expt fib-matrix n)]
         [idx0 : Integer 0]
         [idx1 : Integer 1]
         [prod : (Matrix Any) (matrix* one-zero-matrix m-expt)])
    (cast (matrix-ref prod idx0 idx1) Integer)))

;; Hacky runner to allow my bootleg benchmark
(: run-fast-fib-100 (-> Integer Integer))
(define (run-fast-fib-100 n)
  (dotimes (x n)
    (fast-fib 100)))

> (time (run-fast-fib-100 10000))
cpu time: 689 real time: 728 gc time: 251
- : Integer
> (time (run-fast-fib-100 10000))
cpu time: 672 real time: 733 gc time: 244
- : Integer
> (time (run-fast-fib-100 10000))
cpu time: 704 real time: 750 gc time: 267
- : Integer
> (time (dotimes (x 10000) (os-fib 100)))
cpu time: 28 real time: 28 gc time: 0
> (time (dotimes (x 10000) (os-fib 100)))
cpu time: 31 real time: 30 gc time: 0
> (time (dotimes (x 10000) (os-fib 100)))
cpu time: 35 real time: 35 gc time: 0

About an order of magnitude better, but still an order of magnitude worse than the iterative version. Better off sticking with the simple iteration.

Now in the second miniature, Matoušek derives a closed-form expression for the nth Fibonacci number by solving a system of linear equations that follows from the recurrence: F_n = \frac{1}{\sqrt{5}} \left[ \left( \frac{1 + \sqrt{5}}{2} \right)^n - \left( \frac{1 - \sqrt{5}}{2} \right)^n \right]

;;; -- Miniature 2: Fibonacci Numbers, the Formula
(define root-five (sqrt 5.0)) 

(define inv-root-five (/ 1.0 root-five)) 

(define first-tau-root (/ (+ 1.0 root-five) 2.0)) 

(define second-tau-root (/ (- 1.0 root-five) 2.0)) 

(define (fib-root n)
  ;; Not so fast, but fun?
  (exact-round
   (* inv-root-five
      (- (expt first-tau-root n) (expt second-tau-root n)))))
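A quick sanity check (assuming the definitions above are loaded): for small n the floating-point error is tiny, so rounding the closed form recovers the exact Fibonacci number.

```racket
;; The rounded closed form matches the exact sequence for small n.
(fib-root 10) ; expect 55
```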

How does it compare against an "old school" iterative version?

> (time (dotimes (x 10000) (fib-root 100)))
cpu time: 3 real time: 4 gc time: 0
> (time (dotimes (x 10000) (os-fib 100)))
cpu time: 27 real time: 28 gc time: 0

Well, that seems to best it. At least until we get further out in the sequence, where floating-point overflow becomes an issue:

> (time (dotimes (x 10000) (fib-root 1200)))
cpu time: 4 real time: 4 gc time: 0
> (time (dotimes (x 10000) (os-fib 1200)))
cpu time: 1271 real time: 1419 gc time: 293
> (time (dotimes (x 10000) (fib-root 2000)))
. . exact-round: contract violation
 expected: rational?
 given: +inf.0
> (time (dotimes (x 10000) (os-fib 2000)))
cpu time: 2619 real time: 2893 gc time: 725
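One way to push the closed form past flonum overflow is arbitrary-precision floating point. Here is a rough sketch using Racket's math/bigfloat library; the 4096-bit precision is my own untuned guess, and I haven't timed this:

```racket
(require math/bigfloat)

;; Working precision in bits: a generous, untuned guess.
(bf-precision 4096)

(define bf-root-five (bfsqrt (bf 5)))

;; Arbitrary-precision version of the closed form for F_n.
(define (fib-root-bf n)
  (bfround
   (bf/ (bf- (bfexpt (bf/ (bf+ 1.bf bf-root-five) (bf 2)) (bf n))
             (bfexpt (bf/ (bf- 1.bf bf-root-five) (bf 2)) (bf n)))
        bf-root-five)))
```

Precision buys range at the cost of speed, which is exactly the trade-off the timings above hint at.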

So we've got constant time with the analytic solution, though we might lose the performance edge once we demand better precision. Interesting. I'll hold it there for now and contemplate. It's been a nice Scheme appetizer — I feel my inner Scheme brain re-awakening.

Adapting to the Kinesis keyboard

I ordered a Kinesis keyboard on joining Automattic, figuring that I would try to kick the new year off by taking ergonomics seriously. It feels like I have so far gone through two periods of learning, and I am still working toward matching my touch-typing rate on the Mac laptop keyboard.

The first phase was rough, feeling like basic hunt-and-peck. I'd quickly get frustrated and go back to the laptop keyboard, or the small silver Mac wireless keyboard, so I was putting in somewhere between one and two hours a day. That phase ran from late February until around the first week of March. I took a couple of weeks' break out of frustration and then picked it up again in the last week of March.

Phase II seemed remarkably better: much faster touch typing, to the point of being relaxed and having to think less about where the keys are. The challenge so far seems to be getting into the zone as far as Emacs keys are concerned. That is, the META and CONTROL keys still present a problem for me, and the Mac key mapping for the Kinesis (I'm using the Advantage) still seems a bit weird. Having the CONTROL key right next to either thumb seems like the more appropriate mapping. I still haven't quite figured out how to deal with numbers effectively. Strangely enough, although I'm creating globs of floats every day, most of those are generated by the machine.

The real issue is having faster access to the numeric operators, parens, and brackets. So maybe the next goal is to work on those Python and Scala typing drills, eh?

Clojure and Lisp

I suspect that Clojure, even though it is acknowledged to be the latest incarnation of the Lisp family of languages, is less powerful than Common Lisp. I suspect that this has a lot to do with macro processing and the fact that it is a JVM language.

Is it worth pursuing to any degree? How would one go about firming up such a claim, and to whom would it matter?