Sunday, 28 October 2012

Active Reading

Active reading, as advocated by Scott Young, entails more than simply letting the words bounce off your eyeballs. Instead, you apply a succession of filters to drill down into the chapter you are reading:
  1. First, scan through the chapter to get the gist of it. How long is it? Does it contain diagrams? Are there lots of information-dense sections?
  2. Then, read the chapter and highlight the key points.
  3. Next, re-read the highlighted sections and summarize them in your notebook. Re-wording the information forces you to consider it in more depth, and begins the process of understanding it.
  4. Finally, read the summaries and do the extra holistic learning work, e.g. make metaphors, 'visceralize', write a blog article, etc. :-)
Having watched Scott's video about the technique (part of the Learn More, Study Less course) I decided to apply it to the short chapter on active reading contained in the accompanying book. Having done so, I came up with the metaphor of an archaeologist at a dig:
  1. First she marks out with string the area of the ground that she is going to dig.
  2. Then she goes to work with a spade and, as she finds interesting artifacts, sets them to one side.
  3. Next she sorts through the artifacts and writes up the findings in her journal.
  4. Finally, back in the warmth of her office, she works through her journal and tries to marry up the findings with research or previous historical findings.
I have no idea if this is what an archaeologist does, but it does help me to internalize the active reading process by considering it from a different angle. Books contain ideas, you need to dig them out and then link them to your existing knowledge, so that you remember them.

Saturday, 9 June 2012

Information Diet

I'm currently reading The Information Diet by Clay Johnson, an interesting book arguing that we should regulate our information intake in the same way that we regulate our diet. Your health suffers if you eat junk food; Johnson argues that it also suffers if you consume junk information.

Like I said, it's an interesting read, and has already inspired me to make some changes. I haven't opened Google Reader or Twitter for a couple of days now, and feel all the better for it. Most importantly, I have changed the configuration of my Galaxy Nexus so that I don't have instant access to information sources that can distract me.

Previously, my home screen looked like this:

I had everything organized into groups, so that all the common applications I used were at most two touches away.

My new home screen looks like this:

I have removed easy access to anything that I can use as an information source so that I don't habitually open a browser or Google Reader, or Twitter etc in an idle moment. When I consume information, I want to make a conscious effort to do so.

The Launcher replacement I used for this was Nova Launcher Prime. This has some nifty features, such as being able to create folders in the application drawer, and hide applications. My application drawer now looks like this:

Everything is nice and clean, and I don't need to swipe through multiple drawers to find an application. So when I make the conscious decision to consume information, I can do so in three touches rather than two.

It may not seem like much of a change, but it has certainly made a difference to me. A few times now I have found myself mindlessly opening up my phone, only to be halted by the new layout. That extra level of indirection is enough to help me develop a new information consumption habit.

Let's see if it lasts!

Sunday, 20 November 2011


"We have nothing else cheap left to exploit. We are completely in danger from lack of culture. We were all trained up to be consumers...throw away the past, the future will take care of itself, catch the latest thing and suck it up." -- Vivienne Westwood, addressing Occupy activists
"There is no doubt in 1933 that the collapses of the older systems which we witness are probably irrevocable. Sir Auckland Geddes, the British Ambassador to the United States of America, foresaw them when he said in 1920: 'In Europe, we know that an age is dying. Here it would be easy to miss the signs of coming changes, but I have little doubt that it will come. A realization of the aimlessness of life lived to labour and to die, having achieved nothing but avoidance of starvation, and the birth of children also doomed to the weary treadmill, has seized the minds of millions.'" -- Science and Sanity, p. 49, Alfred Korzybski
What does 'human' mean? What differentiates human from animal? Alfred Korzybski suggested we classify life on Earth as follows:
  1. Plants: static; they synthesize sunlight and other chemicals in order to grow. Hence we define them as 'chemical-binders'.
  2. Animals: like plants, they synthesize chemicals to grow, but they also have the ability to move around. This movement in 'space' differentiates animals from plants, and hence we define them as 'space-binders'.
  3. Humans: like plants, they synthesize chemicals to grow, and like animals they move around in space. Humans add to this the ability to learn from previous generations - one generation can start from where the previous one left off. This ability to transmit knowledge across time leads to the definition of humans as 'time-binders'.
Korzybski argues first in Manhood of Humanity, and later in Science and Sanity, that this definition of humans as 'time-binders' describes functionally what humans actually do. One only needs to look around to verify the claim. The presents from the past rest all around us.

The time-binding capacity of humans means that I can learn from Korzybski, despite the fact that he died 60 years ago. I can learn from Einstein, despite the fact he died 60 years ago. I can learn from Aristotle, despite the fact that he died 2000 years ago. Civilization arises because of the time-binding capacity. Imagine a world where every generation starts afresh, unable to use the fruits of the previous generations' efforts. We would still be living in the trees, picking flies off each other.
"NYPD & Brookfield have taken the People's Library again. and we love you all." --!/owslibrary/status/136970287601291265
By building a People's Library, the Occupiers in NY implicitly recognized the uniquely human capacity of time-binding. Bringing together books in that way honours the time-binding capacity that defines humans. What does destroying them signify?

So far, the mass media has focused on the occupation of 'space' in cities all around the world. But this space-binding activity is incidental to the more important time-binding activities, the creation and propagation of memes.

OWS Teach-In with Douglas Rushkoff from Douglas Rushkoff on Vimeo.

Pepper spray doesn't work against ideas. You can't kettle a meme. Animalistic, space-binding tactics may work in the short term, but in the long term human nature, the time-binding capacity, will assert itself.

To live life as a human means to live life in order to learn and progress, not simply to labour and die. To live a life of culture, rather than consumerism. To exercise the uniquely human time-binding capacity.

Sunday, 14 August 2011

Literate Programming with Marginalia

Here is the source code of the project I'm working on at home, formatted by Marginalia:

I love it. Having something so beautifully formatted really motivates me to write comprehensive documentation. I'll be using it in all my Clojure projects from now on.

The setup was fairly straightforward, so I won't go through it in depth here. Once I had it working, I wanted an easy way to publish the updated documentation and make it easily readable online. The solution was GitHub Pages and git submodules.

Here's what I did:
  • Created a gh-pages branch in the repository (the branch that GitHub Pages serves).
  • Added that branch as a submodule of the main source code in the docs directory:
    git submodule add <repository-url> docs
    cd docs
    git checkout gh-pages
  • Created index.html, which redirects to uberdoc.html.
  • Went back up to the parent directory and ran 'lein marg'.
Running 'lein marg' creates a new version of uberdoc.html in the docs directory, which is actually the gh-pages branch of the overall project. Pushing the change therefore makes it available at
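Once that setup is in place, the ongoing publish cycle looks something like the following sketch (the commit messages are illustrative, and the exact remote/branch names depend on your repository):

```shell
# Sketch of the publish cycle, assuming the gh-pages submodule setup above.
lein marg                           # regenerate docs/uberdoc.html
cd docs
git add uberdoc.html
git commit -m "Regenerate Marginalia docs"
git push origin gh-pages            # GitHub Pages serves this branch
cd ..
git add docs                        # record the submodule's new commit
git commit -m "Bump docs submodule"
git push
```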

Saturday, 13 August 2011

Simple Arity Checking Using Higher Order Functions

As a learning exercise I'm writing a little command-line tool to connect to Remember the Milk using Clojure. The idea is that the tool is a mini shell (or REPL) which reads commands, executes them, and prints the result.

The commands are written as simple Clojure functions. When I type something at the command line, the command is looked up in a map and, if found, executed with the rest of the arguments passed to it.

For this to be safe I need to check that the arity of the function matches the number of arguments entered at the command line.
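A minimal sketch of that read-look-up-apply loop (echo-cmd and the commands map are illustrative, and the arity check is omitted here):

```clojure
(require '[clojure.string :as str])

;; illustrative command: echoes its arguments back
(defn echo-cmd [& args]
  (str/join " " args))

;; command name -> function, as described above
(def commands {"echo" echo-cmd})

(defn dispatch
  "Splits an input line, looks the command up in the map,
  and applies it to the remaining words."
  [line]
  (let [[cmd & args] (str/split line #"\s+")]
    (if-let [f (commands cmd)]
      (apply f args)
      (str cmd ": unknown command"))))

(dispatch "echo hello world") ;; => "hello world"
```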

For example:

When I type 'help', all the available commands are returned:
rtm> help
(echo exit help)

If I type help with an argument it still works, because help is multi-arity and has an implementation that takes an argument:
rtm> help command
Help for command

If I type help with more than one argument, the arity check kicks in and refuses to call the function:
rtm> help won't work
help: wrong number of args
Help for help

It took a bit of thinking to work out how to do this, but I eventually came up with the following solution.

;; higher order functions rock!
(defn arity-check
  "Returns a function that evaluates to true if the arity matches the count"
  [arglist]
  ;; special case - if arglist is of zero length then no need to check for & args
  (if (= 0 (count arglist))
    #(= % 0)
    (let [arg-map (apply assoc {}
                         (interleave arglist (range 0 (count arglist))))]
      ;; if & args found then number of args is >= the position of the &
      ;; otherwise it's just a simple size comparison
      (if ('& arg-map)
        #(>= % ('& arg-map))
        #(= % (count arglist))))))
;; this builds a collection of functions, one for each of the arglists,
;; each of which evaluates to true if the number of args matches the arity
;; of its arglist. it then applies each function in turn against the size
;; of the args, and determines if any of them returned true. if at least
;; one of them returned true, then we can safely do the call
(defn arity-matches-args
  "Returns true if the args match up to the function"
  [f args]
  (let [arity-check-fns (map arity-check (:arglists (meta f)))]
    ((set (map #(% (count args)) arity-check-fns)) true)))

The arity-check function takes an arglist, as returned from the metadata of a function. It is a higher-order function which returns a function that evaluates to true if the number passed in is compatible with the arity of the arglist.

e.g. if the arglist is [x] then 1 argument is expected. If it is [x y] then 2 arguments are expected. If it is [& args] then any number of arguments >= 0 is accepted. If it's [x & args] then any number of arguments >= 1.

The arity-matches-args function calls arity-check for each of the arglists in the function's metadata and stores the resulting functions in a sequence, arity-check-fns. It then calls each of those functions with the number of arguments in args, generating a set of the results, which can contain true, false, or nil. Finally it uses the set as a function to check whether it contains true. If it does, then at least one of the arglist arities matches the number of arguments that have been entered.
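The :arglists metadata that drives all of this can be inspected directly. A small sketch (the help function here is illustrative, echoing the example above):

```clojure
(defn help
  "Illustrative multi-arity command, like the help example above."
  ([] "(echo exit help)")
  ([command] (str "Help for " command)))

;; the metadata lives on the var, so pass #'help rather than help
(:arglists (meta #'help))
;; => ([] [command])

;; a naive one-off check for fixed arglists (ignores & rest-args)
(some #(= (count %) 1) (:arglists (meta #'help)))
;; => true
```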

There is probably an easier way to do this...but I'm still proud of this solution. It opened my eyes to the power of higher-order functions.

The source code is at

Saturday, 6 August 2011

Will Clojure Ever Be 'Finished'?

Clojure, as a Lisp dialect, is an extremely malleable language. The lack of syntax lends itself to creating domain-specific languages, and the built-in meta-programming facility (macros) means that anyone can add new language features. In contrast, adding new language features to Java requires a lengthy process of negotiation and compromise via the JCP.
If you give someone Fortran, he has Fortran. If you give someone Lisp, he has any language he pleases. -- Guy Steele
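That malleability is easy to demonstrate. Here is a sketch of adding a new control form in a few lines (the unless name is illustrative; Clojure's core already provides when-not for this):

```clojure
(defmacro unless
  "Evaluates body only when test is falsey -- a new control-flow
  form added without touching the compiler."
  [test & body]
  `(if ~test nil (do ~@body)))

(unless false :ran) ;; => :ran
(unless true :ran)  ;; => nil
```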
Will we hit a point where Rich and the Clojure/core team regard the core language as complete? In other words, will Clojure reach a stage where it is rich enough (no pun intended) that no further work is required, other than writing new libraries? This seems plausible to me, and perhaps even desirable.

I'd be interested to hear Rich Hickey's view on this.

Update (18th November): at the Clojure/conj last week, I had the opportunity to put this question to Rich. He broadly agreed, saying that the core language is basically done and that the remaining work is mostly around the edges.

Sunday, 31 July 2011

Lazy Sequences

I like this a lot:
;; define a function with a side effect to show when it is called
(defn square [x]
  (println (str "Processing: " x))
  (* x x))
;; define map-result as the result of calling square for every item in the list
(def map-result (map square '(1 2 3 4 5 6 7 8)))
The first time, enough of the lazy sequence is evaluated to generate the result:
practical-clojure.chpt5> (nth map-result 2)
Processing: 1
Processing: 2
Processing: 3
9
The next time, with the same args, the previous calculation is cached, so there is nothing to evaluate:
practical-clojure.chpt5> (nth map-result 2)
9
The next call needs to evaluate a couple more items of the lazy sequence:
practical-clojure.chpt5> (nth map-result 4)
Processing: 4
Processing: 5
25
And again, this is cached:
practical-clojure.chpt5> (nth map-result 4)
25
This is possible because a sequence looks like:
first --> (rest)
first --> (first --> (rest))
first --> (first --> (first --> (rest)))
Rest is only evaluated when it is required.
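That structure can be built by hand with lazy-seq. A minimal sketch (naturals-from is illustrative):

```clojure
(defn naturals-from
  "Infinite sequence n, n+1, n+2, ... The rest is a thunk that is
  only realized when something walks the sequence."
  [n]
  (lazy-seq (cons n (naturals-from (inc n)))))

(take 3 (naturals-from 1)) ;; => (1 2 3)
```

Because the recursive call sits inside lazy-seq, the sequence is infinite yet only as much of it as take demands is ever computed.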