treap

A sorted dictionary data structure based on randomized search trees.

  1. treap
  2. Usage
  3. Documentation
    1. Treap procedures
    2. Notes on the algorithm
  4. Examples
  5. About this egg
    1. Author
    2. Version history
    3. License

Usage

(require-extension treap)

Documentation

The treap library is an implementation of an ordered dictionary data structure, based on randomized search trees (treaps) by Seidel and Aragon:

R. Seidel and C. R. Aragon. Randomized Search Trees.
Algorithmica, 16(4/5):464-497, 1996.

This code defines a treap object that implements an ordered dictionary mapping of keys to values. The object responds to a variety of query and update messages, including efficient methods for finding the minimum and maximum keys and their associated values, as well as for traversing a treap in ascending or descending order of keys. Looking up an arbitrary key or the min/max key, and deleting the min/max key, require no more key comparisons than the depth of the treap, which is O(log n), where n is the total number of keys in the treap. Arbitrary key deletions and insertions run in O(log n) amortized time.

This code is inspired by Stefan Nilsson's article "Treaps in Java" (Dr. Dobb's Journal, July 1997, p. 40-44) and by the Java implementation of treaps described in the article. Yet this Scheme code has been developed completely from scratch, using the description of the algorithm given in the article and insight gained from examining the Java source code. As a matter of fact, the treap insertion and deletion algorithms implemented in this code differ from the ones described in the article and implemented in the Java code; this Scheme implementation uses fewer assignments and comparisons (see below for details). Some insight into a generic tree interface was gleaned from wttree.scm (weight-balanced trees) by Stephen Adams.

A treap is a regular binary search tree with one extension: every node in the tree, besides a key, a value, and references to its left and right children, has an additional constant field, a priority. The value of this field is set (to a random integer) when the node is constructed, and is not changed afterwards. At any given moment, the priority of every non-leaf node never exceeds the priorities of its children. When a new node is inserted, we check that this invariant holds; if it does not, we perform a right or left rotation that swaps a parent and its child while keeping the ordering of keys intact; the changes may need to be propagated recursively up the tree. The priority invariant, and the fact that priorities are chosen at random, make a treap look like a binary search tree built by a random sequence of insertions. As the article shows, this makes a treap a balanced tree: the expected length of an average search path is roughly 1.4*log2(n) - 1, and the expected length of the longest search path is about 4.3*log2(n). See Stefan Nilsson's article for more details.
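To make the rotation step concrete, here is a minimal, self-contained Java sketch (hypothetical names, not part of this egg) of a treap node carrying a random priority, and a right rotation that repairs a priority violation while preserving the key ordering:

```java
import java.util.Random;

// A minimal treap-node sketch: every node carries a key, a value,
// two child links, and a random priority fixed at construction time.
public class TreapNode {
    int key;
    Object value;
    int prio;                 // constant, chosen at random
    TreapNode left, right;

    static final Random RNG = new Random();

    TreapNode(int key, Object value) {
        this.key = key;
        this.value = value;
        this.prio = RNG.nextInt(1 << 30);
    }

    // Right rotation: the left child becomes the new root of this branch.
    // Key ordering is preserved; the child's (smaller) priority moves up.
    static TreapNode rotateRight(TreapNode t) {
        TreapNode n = t.left;
        t.left = n.right;
        n.right = t;
        return n;
    }

    public static void main(String[] args) {
        TreapNode parent = new TreapNode(10, "a");
        TreapNode child  = new TreapNode(5, "b");
        parent.left = child;
        // Force a priority violation (parent's priority exceeds the child's),
        // then repair it with a single right rotation.
        parent.prio = 100;
        child.prio  = 1;
        TreapNode root = rotateRight(parent);
        // The child is now the root with the smallest priority, and the
        // in-order key ordering (5 before 10) is intact.
        System.out.println(root.key + " " + root.right.key);  // 5 10
    }
}
```

A left rotation is the mirror image; which one applies depends on whether the violating child is on the left or the right.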

Treap procedures

The treap object is created by the make-treap function, the only user-visible function defined in this egg:

[procedure] make-treap:: KEY-COMPARE-PROC -> SELECTOR

where KEY-COMPARE-PROC is a user-supplied function that takes two keys and returns a negative, positive, or zero number depending on how the first key compares to the second.

The returned selector procedure can take one of the following arguments:

'get returns a procedure LAMBDA KEY . DEFAULT-CLAUSE which searches the treap for an association with the given KEY, and returns the (key . value) pair of the found association. If an association with KEY cannot be located in the treap, the procedure returns the result of evaluating the DEFAULT-CLAUSE. If the default clause is omitted, an error is signalled. KEY must be comparable to the keys in the treap by the key-compare predicate specified when the treap was created.
'get-min returns a (key . value) pair for an association in the treap with the smallest key. If the treap is empty, an error is signalled.
'delete-min! removes the min key and the corresponding association from the treap. Returns a (key . value) pair of the removed association. If the treap is empty, an error is signalled.
'get-max returns a (key . value) pair for an association in the treap with the largest key. If the treap is empty, an error is signalled.
'delete-max! removes the max key and the corresponding association from the treap. Returns a (key . value) pair of the removed association. If the treap is empty, an error is signalled.
'empty? returns #t if the treap is empty.
'size returns the size (the number of associations) in the treap.
'depth returns the depth of the tree. It requires a complete traversal of the tree, so use sparingly.
'clear! removes all associations from the treap (thus making it empty).
'put! returns a procedure LAMBDA KEY VALUE which, given a KEY and a VALUE, adds the corresponding association to the treap. If an association with the same KEY already exists, its value is replaced with the VALUE (and the old (key . value) association is returned). Otherwise, the return value is #f.
'delete! returns a procedure LAMBDA KEY . DEFAULT-CLAUSE which searches the treap for an association with the given KEY, deletes it, and returns a (key . value) pair of the found and deleted association. If an association with the KEY cannot be located in the treap, the procedure returns the result of evaluating DEFAULT-CLAUSE. If the default clause is omitted, an error is signalled.
'for-each-ascending returns a procedure LAMBDA PROC that applies the given procedure PROC to each (key . value) association of the treap, in ascending order of keys, from the association with the smallest key to the one with the largest. The treap must not be empty.
'for-each-descending returns a procedure LAMBDA PROC that applies the given procedure PROC to each (key . value) association of the treap, in descending order of keys. The treap must not be empty.
'debugprint prints out all the nodes in the treap, for debugging purposes.

Notes on the algorithm

As the DDJ paper shows, insertion of a node into a treap is a simple recursive algorithm (Example 1 of the paper). This algorithm is implemented in the accompanying Java source code as:

   private Tree insert(Tree node, Tree tree) {
      if (tree == null) return node;
      int comp = node.key.compareTo(tree.key);
      if (comp < 0) {
         tree.left = insert(node, tree.left);
         if (tree.prio > tree.left.prio)
            tree = tree.rotateRight();
      } else if (comp > 0) {
         tree.right = insert(node, tree.right);
         if (tree.prio > tree.right.prio)
            tree = tree.rotateLeft();
      } else {
         keyFound = true;
         prevValue = tree.value;
         tree.value = node.value;
      }
      return tree;
   }

This algorithm, however, is not as efficient as it could be. Suppose we try to insert a node that turns out to already exist in the tree, at a depth D. The algorithm above would descend to this node in the winding phase of the recursion, replace the node's value, and, in the unwinding phase, would perform D assignments of the kind

    tree.left = insert(node, tree.left);

and D comparisons of nodes' priorities. None of these priority checks and assignments are actually necessary: as we haven't added any new node, the tree structure hasn't changed.

Therefore, the present Scheme code implements a different insertion algorithm, which avoids doing unnecessary operations. The idea is simple: if we insert a new node into some particular branch of the treap and verify that this branch conforms to the treap priority invariant, we are certain the invariant holds throughout the entire treap, and no further checks (up the tree to the root) are necessary. In more detail:

  1. Starting from the root, we recursively descend until we find a node with the given key, or a place where a new node can be inserted.
  2. We insert the node and check whether its priority is less than that of its parent. If so, we left- or right-rotate the tree to make the old parent a child of the new node, and the new node the new root of this particular branch. We return this new parent as an indication that further checks up the tree are necessary. If the new node conforms to the parent's priority, we insert it and return #f.
  3. On the way up, we check the priorities again and rotate the tree to restore the priority invariant at the current level if needed.
  4. If no changes are made at the current level, we return the flag #f, meaning that no further changes or checks are necessary at the higher levels.
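The steps above can be sketched in Java by threading a "done" flag through the unwinding phase, so that priority checks stop as soon as one level satisfies the invariant. This is an illustrative sketch with hypothetical names, not the egg's actual Scheme code (which also avoids the redundant child-link assignments kept here for brevity):

```java
import java.util.Random;

// Sketch of the modified insertion: once one level of the unwinding
// satisfies the priority invariant, no further priority checks are made.
public class TreapInsert {
    static class Node {
        int key, value, prio;
        Node left, right;
        Node(int key, int value, int prio) {
            this.key = key; this.value = value; this.prio = prio;
        }
    }

    static final Random RNG = new Random(42);
    static boolean done;   // set when no more priority checks are needed

    static Node rotateRight(Node t) { Node n = t.left;  t.left = n.right; n.right = t; return n; }
    static Node rotateLeft (Node t) { Node n = t.right; t.right = n.left; n.left  = t; return n; }

    static Node insert(Node n, Node t) {
        if (t == null) { done = false; return n; }     // new leaf: start checking
        int comp = Integer.compare(n.key, t.key);
        if (comp == 0) {                               // key exists: replace value;
            t.value = n.value; done = true;            // no structural change at all
        } else if (comp < 0) {
            t.left = insert(n, t.left);
            if (!done) {
                if (t.prio > t.left.prio) t = rotateRight(t); // violation: keep checking
                else done = true;                             // invariant holds: stop
            }
        } else {
            t.right = insert(n, t.right);
            if (!done) {
                if (t.prio > t.right.prio) t = rotateLeft(t);
                else done = true;
            }
        }
        return t;
    }

    // Sanity check: in-order keys must be ordered, and no parent's priority
    // may exceed a child's.
    static boolean valid(Node t) {
        if (t == null) return true;
        if (t.left  != null && (t.prio > t.left.prio  || t.left.key  >= t.key)) return false;
        if (t.right != null && (t.prio > t.right.prio || t.right.key <= t.key)) return false;
        return valid(t.left) && valid(t.right);
    }

    public static void main(String[] args) {
        Node root = null;
        for (int k = 0; k < 100; k++) {
            done = false;   // reset before each top-level insertion
            root = insert(new Node(k * 37 % 100, k, RNG.nextInt(1000000)), root);
        }
        System.out.println(valid(root) ? "ok" : "broken");
    }
}
```

The flag is what distinguishes this sketch from the DDJ version: once a level reports "done", every level above it skips its priority comparison.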

Thus, if a new node was originally inserted at level D in the tree (level 0 being the root) but then migrated up by L levels (because of its priority), the original insertion algorithm would perform (D-1) assignments and (D-1) priority checks, plus L rotations (at a cost of 2 assignments each). Our algorithm performs only (L+1) priority checks and max(2(L-1)+2, 1) assignments.

Note that if priorities are really (uniformly) random, L is uniformly distributed over [0, D], so the average cost of our algorithm is

       D/2 + 1 checks and D assignments

compared to

       D-1 checks and 2D-1 assignments

for the original algorithm described in the DDJ paper.

A similar gripe applies to the Java implementation of the node deletion algorithm:

   private Tree delete(Ordered key, Tree t) {
      if (t == null) return null;
      int comp = key.compareTo(t.key);
      if (comp < 0)
         t.left = delete(key, t.left);
      else if (comp > 0)
         t.right = delete(key, t.right);
      else {
         keyFound = true;
         prevValue = t.value;
         t = t.deleteRoot();
      }
      return t;
   }

The algorithm as implemented looks fully recursive. Furthermore, deletion of a node at level D in the treap involves at least D assignments, most of them unnecessary. Indeed, if the node being deleted is a leaf, only one change to the tree is needed to detach it. Deleting a node obviously requires that the left- or right-child field of the node's parent be modified (cleared). This change, however, does NOT need to be propagated up: deleting a node violates neither the ordering nor the priority invariant of the treap; all changes are confined to the branch rooted at the parent of the deleted node.

This Scheme code implements the node deletion algorithm in an optimal way, performing only those assignments that are absolutely necessary, and replacing full recursion with tail recursion (which is simply iteration). Our implementation is also simpler and clearer, making use of a helper procedure join! that joins two treap branches while keeping both treap invariants intact. The deletion algorithm can then be expressed as replacing a node with the join of its two children; compare this explanation with the one given in the DDJ paper!
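The join-based formulation can be sketched as follows (hypothetical Java names; the egg's actual join! is a destructive Scheme procedure): join merges two branches whose key ranges do not overlap, keeping the node with the smaller priority on top, and deletion replaces the found node with the join of its two subtrees:

```java
// Sketch of join-based deletion: the deleted node is replaced by a join
// of its two subtrees. join assumes every key in a is smaller than every
// key in b, and keeps the node with the smaller priority on top, so both
// the ordering and the priority invariants survive.
public class TreapDelete {
    static class Node {
        int key, prio;
        Node left, right;
        Node(int key, int prio) { this.key = key; this.prio = prio; }
    }

    static Node join(Node a, Node b) {
        if (a == null) return b;
        if (b == null) return a;
        if (a.prio <= b.prio) { a.right = join(a.right, b); return a; }
        else                  { b.left  = join(a, b.left);  return b; }
    }

    static Node delete(int key, Node t) {
        if (t == null) return null;                    // key not found: no change
        int comp = Integer.compare(key, t.key);
        if      (comp < 0) t.left  = delete(key, t.left);
        else if (comp > 0) t.right = delete(key, t.right);
        else               t = join(t.left, t.right);  // replace node by the join
        return t;
    }

    static String inorder(Node t) {
        return t == null ? "" : inorder(t.left) + t.key + " " + inorder(t.right);
    }

    public static void main(String[] args) {
        // Build a small valid treap by hand: keys in search-tree order,
        // priorities in heap order (smallest at the root).
        Node root = new Node(4, 1);
        root.left = new Node(2, 5);
        root.right = new Node(6, 3);
        root.left.left = new Node(1, 9);
        root.left.right = new Node(3, 7);
        root = delete(2, root);
        System.out.println(inorder(root).trim());      // 1 3 4 6
    }
}
```

Note that if the deleted node is a leaf, join(null, null) returns null immediately, so only the parent's child link changes, as the text above argues.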

Examples

;; "--> Sorting of a set of numbers via a treap" 

(define (++ x) (fx+ 1 x))
(define (-- x) (fx- x 1))

(let
  ((min-key -1) (max-key 10)
   (treap (make-treap (lambda (x y) (- x y))))
   ;; a hard-wired association between a key and a value
   (compute-assoc (lambda (key) (cons key (++ key)))))

  ;; loading a sequence [min-key .. max-key] in ascending order
  (do ((i min-key (++ i))) ((> i max-key))
    ((treap 'put!) i (cdr (compute-assoc i))))
  (treap 'debugprint)
  (print "treap's depth is " (treap 'depth) "\n")

 ; (assert (equal? ((treap 'get) (++ min-key))
 ;		  (compute-assoc (++ min-key))))
  (print ((treap 'get) (++ min-key)))

  ; (assert (equal? ((treap 'get) (++ min-key) #f)
  ;		  (compute-assoc (++ min-key))))
  (print ((treap 'get) (++ min-key) 'notfound))

  ;; checking traversal in the ascending order
  (let ((expected-key min-key))
    ((treap 'for-each-ascending)
      (lambda (association)
        (print (equal? association (compute-assoc expected-key)))
        (set! expected-key (++ expected-key)))))
    ;(assert (= expected-key (++ max-key))))

  ;; clearing the treap and reloading the same seq in descending order
  (treap 'clear!)
  (do ((i max-key (-- i))) ((< i min-key))
     ((treap 'put!) i (cdr (compute-assoc i))))
  (treap 'debugprint)
  (print "treap's depth is " (treap 'depth) "\n")

  ;(assert (equal? (treap 'get-max) ((treap 'get) max-key)))
  ;(assert (equal? (treap 'delete-max!) (compute-assoc max-key)))
  ;(assert (not ((treap 'get) max-key #f)))
  ;(assert (equal? (treap 'get-max) ((treap 'get) (-- max-key))))
  ;(assert (= (treap 'size) (+ -2 (++ (- max-key min-key)))))

  ;; checking traversal in the descending order
  (let ((expected-key max-key))
    ((treap 'for-each-descending)
      (lambda (association)
        (print (equal? association (compute-assoc expected-key)))
        (set! expected-key (-- expected-key))))))
    ;(assert (= expected-key (-- min-key))))

About this egg

Author

Oleg Kiselyov; packaged for Chicken Scheme by Ivan Raikov

Version history

1.7
Ensure test script returns proper exit status
1.6
Documentation converted to wiki format
1.5
Ported to Chicken 4
1.4
Build script updated for better cross-platform compatibility
1.3
License upgrade to GPL v3
1.2
Added support for chicken-setup -test
1.1
Added definitions for the dec macros
1.0
Initial release

License

Copyright Oleg Kiselyov. Chicken Scheme support copyright 2007-2011
by Ivan Raikov.

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

A full copy of the GPL license can be found at
<http://www.gnu.org/licenses/>.