(david-mcneil.com :blog)

2012 March 17 10:56am

In the Datomic unsession last night at Clojure/West, Stu and Rich hosted a Q&A on Datomic. I appreciated their time, and I now feel I have a better grasp of what they are doing with Datomic.

In particular I felt enlightened by Stu’s question: “which provides faster data access, a fast local spinning drive or an SSD attached via a fat network pipe?”

I hadn’t focused on this aspect of their approach. In particular I was not aware that Amazon’s DynamoDB is SSD based.

So the way I understand what Datomic does is that it allows the traditional database server to be distributed across many machines (i.e. Datomic peers). Writes still go through a single server (the transactor) for consistency, but reads can go against a cluster of machines. This is possible because Amazon's SSD-backed DynamoDB offers network access to storage at speeds that historically required local disks.

The description above makes it clearer to me what Datomic is enabling and why it is possible. It seems many people who talk about Datomic get hung up on thinking of Datomic peers as traditional clients. To me, Datomic peers are more like traditional database servers.

I wonder if my description above is correct, and whether positioning peers as servers would be an easier "sell".

2012 March 17 10:12am

I ported over the first few steps of the in-memory Datomic tutorial to Clojure: https://gist.github.com/2060731

To run this code do something like:

  • download the datomic zip

  • follow the datomic README to install the datomic jar to your local maven repo (in the mvn incantation change “datomic.jar” to match the local file name from the datomic zip)

  • set up a lein project something like this (I looked in the datomic pom.xml file to see what version of Clojure was required):

(defproject datomic-tutorial "1.0.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.4.0-beta3"]
                 [com.datomic/datomic "0.1.2753"]]
  :dev-dependencies [[swank-clojure "1.4.0-SNAPSHOT"]])

At first I tried to run this on Clojure 1.3 and nothing worked.

I assume that there are Clojure protocols that we can use instead of the Java interfaces but I haven’t tracked them down yet.

2012 February 4 1:28pm

The code from my “Building a DSL in Clojure For Controlling a Lego MindStorm / Arduino Robot” talk at Lambda Lounge on February 2, 2012 is up on github.

The talk had a few points:

  • playing with Lego MindStorm and Arduino based robots is great fun
  • they are very effective tools for teaching kids to program and to start to understand engineering
  • a DSL built in Clojure can give kids the same level of “tinkerability” in the programming language as they have with the hardware
  • creating a new type in Clojure (a List object that is associative) is a way to get a featureful DSL with very little code

The first file to check out is op_demo.clj. This file introduces the new Clojure type that the DSL is built on, a Clojure “operator”. An operator is similar to a Clojure record, except it presents itself as a List instead of as a Map.
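Since op_demo.clj is not reproduced here, below is a minimal sketch of what such a type might look like. This is not the talk's actual code; the Operator name, its fields, and the example operator are all illustrative.

(deftype Operator [op-name fields values]
  ;; Presents as a list: (op-name value1 value2 ...)
  clojure.lang.Seqable
  (seq [_] (cons op-name (seq values)))
  ;; ...but also supports map-style lookup by field name.
  clojure.lang.ILookup
  (valAt [_ k] (get (zipmap fields values) k))
  (valAt [_ k not-found] (get (zipmap fields values) k not-found)))

(def op (->Operator 'forward [:speed :duration] [5 100]))

(seq op)     ;; => (forward 5 100)
(:speed op)  ;; => 5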

With that as our base, we can now introduce the operators that will make up our language.

Let’s see what we can do with these operators:
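The embedded example appears to be missing from this page. A hypothetical program written in such a DSL (the operator names here are illustrative, not taken from the talk's code) might look like:

;; Hypothetical robot program -- the operator names are
;; illustrative, not taken from the talk's code.
(def my-program
  '(do-steps
     (forward :speed 5 :duration 2)
     (spin :degrees 90)
     (forward :speed 5 :duration 2)))

;; Because the program is just data, it can be walked, transformed,
;; and evaluated against whichever namespace implements the operators.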

This example shows a technique for creating a DSL that nestles tightly down into Clojure.

  • No parser is needed; the Clojure reader reads the program.
  • No compiler is needed; the Clojure evaluator will walk the tree of expressions.
  • We get macros for free.
  • Our programs are data that can be programmatically accessed.
  • We can implement multiple targets for the language by implementing the operators in multiple namespaces.

With respect to teaching kids to program, this approach is a natural fit to the robots. The kids can:

  • Define the language they will use. This can be made simpler for younger kids and more advanced for older kids.
  • Implement the operators for their language.
  • Learn that programs are just data to be generated and manipulated.
  • Go as deep as they care to with creating a real compiler by defining operators for control-flow, variable bindings, etc.
  • Get hooked on Lisp!

2012 January 26 2:47pm

As presented at today’s St. Louis Clojure Cljub meeting.

2011 May 7 7:17pm

I updated the defrecord2 code. It now includes the following:

  • record zipper support
  • record matchure support
  • dissoc2 preserves record type
  • universal record constructor
  • defined record? predicate

2011 April 6 7:37pm

In the course of doing Clojure development I have made extensive use of records and have extended them in many ways. So I was excited to see a proposal from the Clojure Core team for defrecord improvements. It looks like a good start; below are my thoughts and questions about the proposal.

1) Variety of forms

One question that arises is how to understand the difference between the various forms. For example these two forms:

#myns.MyRecord[1 2]


(MyRecord. 1 2)

As best I can understand it, the first form would have the validation function applied but the second would not.

Furthermore, as a literal form, the first can only include constants. So, for instance, the following use of a function call would not be valid in the first syntax:

#myns.MyRecord[1 (+ 1 1)]

If you need to write expressions to compute field values and want the validation function to be applied then you use one of the factory functions:

(myns/->MyRecord 1 (+ 1 1))

I assume that the positional forms will require all of the fields to be provided. If so, the initialization values will only be relevant to the map forms.

My understanding of the proposal is that the literal record syntax is a general syntax for Java objects. So the object created by

(java.util.Locale. "en" "US") 

could be expressed in the literal syntax as

#java.util.Locale["en" "US"]

However, it doesn’t seem to me to be generally possible to know which constructor to use for a given object, so I am not sure when this literal syntax will be used for printing (non-record) Java objects.

2) Factory function naming

I realize that the names in the proposal are just placeholders. My preference is to not use “->” in the names (it could easily be confused with the threading macro “->”) and to name the factory functions with lower-case-dashed versions of the record names instead of CamelCase versions. To my sense of aesthetics this makes the use of record factory functions look like “normal” function calls. Furthermore, I value the ability to specify the names of the factory functions at the time the record is defined.

3) Validation

One of the forms of validation that we have found particularly helpful is to validate the names of the fields passed into the map factory function: if the map contains a key that is not a record field name then an exception is thrown. It is still possible to add additional, non-field-name keys with assoc.

In addition I think it is useful to allow a validation function to be defined as part of the defrecord.
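A sketch of what such field-name validation might look like (this is illustrative; defrecord2's actual implementation may differ):

(defrecord MyRecord [a b])

(def my-record-fields #{:a :b})

;; A map factory that rejects keys that are not record fields.
;; (Sketch only; defrecord2's real implementation may differ.)
(defn map->my-record [m]
  (when-let [bad (seq (remove my-record-fields (keys m)))]
    (throw (IllegalArgumentException.
             (str "Unknown record fields: " (vec bad)))))
  (merge (MyRecord. nil nil) m))

(map->my-record {:a 1})       ;; => a MyRecord with :a 1, :b nil
(map->my-record {:a 1 :c 3})  ;; throws IllegalArgumentException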

4) Writing records

I value an option to exclude the namespace from the printed form of the records. Instead of this:

(myns/->MyRecord 1 2)

They would optionally print as:

(->MyRecord 1 2)

This is useful when printing deeply nested trees of records because it trims a potentially long namespace identifier from every object’s output.

Finally, we have found it useful to suppress nil values when printing records with the map factory form. Again this makes the output less verbose.

5) Mutators

Beyond the initial creation of records we have found it useful to provide functions to create new record objects from existing objects. For example, the syntax could be:

(def x (myns/map->MyRecord {:a 1}))
(myns/map->MyRecord x {:b 2})
;; -> (myns/->MyRecord 1 2)

By virtue of going through the factory function, the validations are applied; with assoc directly, they are not.

Related to this is the idea of a universal constructor, e.g. named “new-record”:

(def x (myns/map->MyRecord {:a 1}))
(new-record x {:b 2})
;; -> (myns/->MyRecord 1 2)

This allows a new record object to be created from a record object without knowing the type of the object. We have found this useful for writing generic code to handle record objects.

Finally we have found it useful to define a dissoc function that removes a key from a record object, but produces a record object as the result.

So instead of this default behavior from Clojure:

(class (dissoc (map->MyRecord {:a 1 :b 2}) :b))
;; -> clojure.lang.PersistentArrayMap

We would get:

(class (dissoc2 (map->MyRecord {:a 1 :b 2}) :b))
;; -> myns.MyRecord

6) record?

We have found it useful to define a record? predicate function that reports whether a given object is a record object:

(record? (map->MyRecord {:a 1 :b 2}))
;; -> true
(record? "hello")
;; -> false
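One possible implementation is a simple instance check. (Later Clojure versions, 1.6 and up, ship a built-in clojure.core/record? based on the same idea; in 1.2 a marker interface or per-record multi-method would be needed instead.)

;; Report whether x is an instance of a defrecord-generated class.
(defn record? [x]
  (instance? clojure.lang.IRecord x))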

7) walking records

We have extended defrecord to define prewalk and postwalk support and this has proven useful (despite the fact that pre/postwalk are semi-deprecated).

8) zipper support

We have extended defrecord to generate multi-method implementations for each record class to participate in a zip-record function that allows zippers to be used to navigate record trees. We have used this feature extensively in our product to manipulate record trees.

9) matchure support

We have extended defrecord to support matchure. Specifically, all records participate in a multi-method that allows them to be used with a “match-record” of our creation that delegates to matchure’s if-match. For example:

(match-record [(map->MyRecord {:a 1 :b ?b})
               (map->MyRecord {:a 1 :b 2})]
  ?b)
;; -> 2

2011 February 24 9:13pm

Once you start using Clojure protocols to capture abstractions, it is natural to want to define implementations of higher-level protocols in terms of lower-level protocols. But Clojure does not allow protocols to be extended to other protocols. At the Clojure Conj in 2010, Rich Hickey mentioned an approach to this problem in which a protocol is extended to Object as a “catch all”. Then, if a protocol function is invoked on an object that satisfies the lower-level protocol, the class of the object is dynamically extended to the higher-level protocol.

For example, consider a low-level protocol for a Dog:

(defprotocol Dog
  (bark [_]))

And a higher level protocol for an Animal:

(defprotocol Animal
  (speak [_]))

We would like to be able to write something like the following to define how a Dog can participate in the Animal protocol.

(adapt-protocol Dog Animal
                (speak [dog]
                       (bark dog)))

I have written a module, named clojure-adapt, that provides this adapt-protocol capability.

The adapt-protocol call registers the adapter functions in a global map that is keyed by the protocols Animal and Dog. The Animal protocol is extended to the base Object class with implementation functions that consult the global adapter map and dynamically extend the Animal protocol to the classes of objects that satisfy the Dog protocol.
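A rough sketch of that mechanism, using the Dog/Animal protocols from above (the names and details here are illustrative; the actual clojure-adapt implementation differs):

;; Global registry: [from-protocol to-protocol] -> method map.
(def adapters (atom {}))

(swap! adapters assoc [Dog Animal]
       {:speak (fn [dog] (bark dog))})

;; Catch-all: extend Animal to Object; on first use, dynamically
;; extend the object's class to Animal, then retry the call.
(extend-protocol Animal
  Object
  (speak [x]
    (if (satisfies? Dog x)
      (do (extend (class x) Animal (get @adapters [Dog Animal]))
          (speak x))
      (throw (IllegalArgumentException.
               (str "No Animal adapter for " (class x)))))))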

If we have an object that satisfies the Dog protocol, for example, a String:

(extend-protocol Dog String
                 (bark [s] (str "arf " s)))

Then we can use the Animal functions on the Dog:

(speak "Fido")
=> "arf Fido"

The adapters are only used if the object does not satisfy the protocol. So if a protocol is extended to a class, then the adapters are not used on that class even if objects of the class satisfy the protocol being adapted.

For example, if the Animal protocol and the Dog protocol are extended to the Date class the adapter is not used.

(extend-protocol Dog java.util.Date
                 (bark [d] (str "dog as of " (.getTime d))))

(extend-protocol Animal java.util.Date
                 (speak [d] (str "animal as of " (.getTime d))))

(bark (java.util.Date. (long 100)))
=> "dog as of 100"

(speak (java.util.Date. (long 100)))
=> "animal as of 100"

When the Animal function, speak, is called on a Date object the adapter is not used even though the Date satisfies the Dog protocol. This is because the Date class is already participating in the Animal protocol.

The adapt-protocol call is tricky to use during development because once an adapter is “installed” for a class subsequent calls to adapt-protocol do not affect the class. One work-around to this is to refine an adapter by using extend-protocol with a test class. Once the adapter is working properly then register it for use via adapt-protocol.

2010 November 3 8:36pm

Clojure records and protocols eschew inheritance. For details see the “Datatypes and protocols are opinionated” section on the Clojure datatypes page.

If you are used to Java-style type inheritance you might be surprised that there is no explicit record/protocol mechanism for defining one type as a “sub-type” of another and inheriting the super-type’s implementation. You might even think that Clojure datatypes are less powerful than Java classes… but you would be wrong.

The trick is that Clojure allows the implementation of a protocol to be specified as a map: just a simple, standard Clojure map from function names to implementations. Since implementations are expressed as maps, a perhaps surprising place to look for information on implementation “inheritance” is the set of functions that operate on maps, for example assoc and merge.

Notice in the example that a TrainedDog contains a Dog as a member. This, combined with an implementation of to-dog that returns the TrainedDog’s Dog, allows the functions defined in base-behavior to also operate on TrainedDogs.
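The embedded example appears to be missing from this page. The sketch below is a reconstruction based on the description above: base-behavior, to-dog, Dog, and TrainedDog are named in the text, while the protocol name and the details are assumed.

(defprotocol DogLike
  (to-dog [_])
  (bark [_]))

(defrecord Dog [name])

;; The implementation is just a map from fn names to fns.
(def base-behavior
  {:to-dog (fn [dog] dog)
   :bark   (fn [x] (str (:name (to-dog x)) " says arf"))})

(extend Dog DogLike base-behavior)

;; A TrainedDog contains a Dog; "inherit" the base implementation
;; by merging in an override of to-dog that returns the inner Dog.
(defrecord TrainedDog [dog tricks])

(extend TrainedDog DogLike
        (merge base-behavior {:to-dog (fn [td] (:dog td))}))

(bark (TrainedDog. (Dog. "Fido") [:sit]))
;; -> "Fido says arf"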

Clojure maps are constructed using the full power of the Clojure language. Maps can be combined in ways that emulate Java’s single inheritance model or more complex multiple inheritance models and mixins. Combine maps in any way needed to do what you need.

This is a great example of embracing standard Clojure datatypes as the means of abstraction for a problem (in this case, the problem is “how to specify type implementations”) and gaining the full power of Clojure as a consequence.

(This was presented at the October 2010 Clojure Cljub.)

2010 October 30 8:54am

At Clojure Cljub we talked about dynamic variables being like Java ThreadLocals. Which led to the question of whether they were really ThreadLocals. After a cursory look at the Clojure 1.2 source, the answer appears to be: “yes”. Each binding is not itself a ThreadLocal, but the dynamic values of a Var appear to be rooted in a ThreadLocal named “dvals”.

From src/jvm/clojure/lang/Var.java:

public final class Var extends ARef implements IFn, IRef, Settable{

  static ThreadLocal<Frame> dvals = new ThreadLocal<Frame>(){
    protected Frame initialValue(){
      return new Frame();
    }
  };

  // ...

  public static Associative getThreadBindings(){
    Frame f = dvals.get();
    IPersistentMap ret = PersistentHashMap.EMPTY;
    for(ISeq bs = f.bindings.seq(); bs != null; bs = bs.next()){
      IMapEntry e = (IMapEntry) bs.first();
      Var v = (Var) e.key();
      Box b = (Box) e.val();
      ret = ret.assoc(v, b.val);
    }
    return ret;
  }

  // ...
}

2010 October 28 1:29pm

From http://blacketernal.wordpress.com/set-up-key-mappings-with-xmodmap/


xmodmap -pke > default-modmap
xmodmap -pm >> default-modmap

Then edit the end of the default-modmap file to be formatted like this:

add shift =      Shift_L  Shift_R
add lock  =      Caps_Lock

Then the resulting file can be loaded via:

xmodmap default-modmap

This is useful for restoring the default key maps after you have messed them up. This is particularly useful if your custom xmodmap file is not idempotent.
