Bindings for PostgreSQL's C API.
Original author: Johannes Grødem
Please do not mail to Johannes directly as he no longer develops this egg.
Current maintainer: Peter Bex
- Felix L. Winkelmann
- Alex Shinn
- Ed Watkeys
- Taylor Campbell
This extension provides an interface to the PostgreSQL relational database.
Connection functions

<procedure>(connect CONNECTION-SPEC [TYPE-PARSERS [TYPE-UNPARSERS]])</procedure>
Opens a connection to the database given in CONNECTION-SPEC, which should be either a PostgreSQL connection string or an alist with entries consisting of a symbol and a value (which is internally converted to such a string). The symbols should be connection keywords recognized by PostgreSQL's connection function. See the PostgreSQL documentation for these. At the time of writing, they are host, hostaddr, port, dbname, user, password, connect_timeout, options, sslmode, service.
TYPE-PARSERS is an optional alist that maps PostgreSQL type names to parser procedures, TYPE-UNPARSERS is an optional alist that maps predicates to unparser procedures. They default to (default-type-parsers) and (default-type-unparsers), respectively (see below).
The return value is a connection-object.
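For example, a connection string and the equivalent alist are both accepted. This sketch assumes a local database named test that the current user may access:

```scheme
;; Two equivalent ways to open a connection; the database name
;; "test" is just an assumption for illustration.
(define conn-a (connect "dbname=test"))       ; connection string
(define conn-b (connect '((dbname . test))))  ; alist form
(disconnect conn-a)
(disconnect conn-b)
```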
Also note that while these bindings use the non-blocking interface to connect to PostgreSQL, if you specify a hostname (using the host keyword), the procedure might not be able to yield to other threads, because the resolver will block.

<procedure>(disconnect CONNECTION)</procedure>
Disconnects from the given CONNECTION.

<procedure>(reset-connection CONNECTION)</procedure>
Resets (that is, reopens) the connection with the same connection-spec as was given when opening the original CONNECTION.

<procedure>(connection? OBJECT)</procedure>
Returns true if OBJECT is a PostgreSQL connection-object.
Query procedures

<procedure>(query CONN QUERY . PARAMS)</procedure>
Execute QUERY, which is a string containing one SQL statement. CONN indicates the connection on which to execute the query, and PARAMS is an arbitrary number of optional arguments indicating positional parameters ($1, $2 etc).
This returns a result object (see below).
(let ((conn (connect '((dbname . test))))) (row-map identity (query conn "SELECT $1::text, 2::int2" "hello")))
<procedure>(query* CONN QUERY [PARAMS] [format: FORMAT] [raw: RAW])</procedure>

A less convenient but slightly more powerful version of the query procedure. PARAMS must now be a list (instead of rest-arguments). FORMAT is a symbol specifying how to return the result set: either binary or text (the default). RAW is a boolean which determines whether the PARAMS should be treated "raw" or first passed through the unparsers associated with CONN. If they are treated "raw", they must all be strings, blobs or sql-null objects.
See type conversion for more info on unparsers.
(let ((conn (connect '((dbname . test))))) (row-map* (lambda (a b) (list (blob->string a) (blob->u8vector b))) (query* conn "SELECT $1::text, 2::int2" '("hello") format: 'binary)))
(("hello" #u8(0 2)))
<procedure>(multi-query CONN QUERIES)</procedure>

A simplified query procedure. It accepts no positional parameters and offers no control over how the result is returned (defaulting to text). In return, the QUERIES string may contain multiple SQL statements. This procedure returns a list of result objects, one for every statement in the string.
(let ((conn (connect '((dbname . test))))) (map row-values (multi-query conn "SELECT 'hello', 'world'; SELECT 1")))
(("hello" "world") (1))
Usually you will use only these procedures to process result sets, but you can fall back to (or even mix with) the low-level API if you need to do custom things.
- <procedure>(row-fold KONS KNIL RESULT)</procedure>
- <procedure>(row-fold* KONS KNIL RESULT)</procedure>
This is the fundamental fold operator for result sets. It calls (kons row seed) for every row, where row is the list of values in the current row and seed is the accumulated result from previous calls (initially knil). It returns the final accumulated result.
The starred version works the same, except it calls (kons col1 col2 ... seed) instead, so the procedure must know how many columns you have in the result set.
(let ((conn (connect '((dbname . test))))) (row-fold (lambda (row sum) (+ (car row) sum)) 0 (query conn "SELECT 1 UNION SELECT 2")))
(let ((conn (connect '((dbname . test))))) (row-fold* (lambda (value str) (string-append str value)) "" (query conn "SELECT 'hello, ' UNION SELECT 'world'")))
- <procedure>(column-fold KONS KNIL RESULT)</procedure>
- <procedure>(column-fold* KONS KNIL RESULT)</procedure>
As row-fold/row-fold*, except this iterates sideways through the columns instead of lengthways through the rows, calling KONS with a list of all the values in the current column, going through the columns from left to right.
The starred version is much less useful here since you often don't know the number of returned columns, but it is provided for consistency.
(let ((conn (connect '((dbname . test))))) (column-fold (lambda (col sum) (+ (car col) sum)) 0 (query conn "SELECT 1, 100 UNION SELECT 2, 200")))
- <procedure>(row-for-each PROC RESULT)</procedure>
- <procedure>(row-for-each* PROC RESULT)</procedure>
A simple for-each, calling (PROC row) on each row in turn, purely for the purpose of its side effects. The starred version calls (PROC col1 col2 ...) instead.
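For instance, printing every row (a sketch; the database name test is an assumption):

```scheme
;; Print each row of a two-row result set as a list.
(let ((conn (connect '((dbname . test)))))
  (row-for-each
   (lambda (row) (write row) (newline))
   (query conn "SELECT 1, 'one' UNION SELECT 2, 'two'")))
;; prints each row as a list, e.g. (1 "one")
```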
- <procedure>(column-for-each PROC RESULT)</procedure>
- <procedure>(column-for-each* PROC RESULT)</procedure>
Column variants of row-for-each/row-for-each*.
- <procedure>(row-map PROC RESULT)</procedure>
- <procedure>(row-map* PROC RESULT)</procedure>
Maps rows to a list by applying PROC to every row; PROC's return value for a row occupies the position in the result list corresponding to that row's position in the result set. This procedure is not guaranteed to walk the result set in any particular order, so do not rely on the order in which PROC is called.
(let ((conn (connect '((dbname . test))))) (row-map* + (query conn "SELECT 1, 100 UNION SELECT 2, 200")))
- <procedure>(column-map PROC RESULT)</procedure>
- <procedure>(column-map* PROC RESULT)</procedure>
Column variants of row-map/row-map*.
(let ((conn (connect '((dbname . test))))) (column-map* + (query conn "SELECT 1, 100 UNION SELECT 2, 200")))
Low-level API

<procedure>(result? OBJ)</procedure>
Returns #t when OBJ is a result object, #f otherwise.

<procedure>(clear-result! RES)</procedure>
Directly cleans up all memory used by the result object. This is normally deferred until garbage collection, but it is made available for when you want more control over when results are released.

<procedure>(value-at RES [COLUMN [ROW]] [raw: RAW])</procedure>
Returns the value at the specified COLUMN and ROW. It is parsed by an appropriate parser unless RAW is specified and #t. If RAW is true, the value is either a string, blob or an sql-null object. Otherwise, it depends on the parsers.
If ROW or COLUMN are not specified, they default to zero. This makes for more convenient syntax if you're just reading out a result of a query which always has one row or even one value.
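For example (a sketch, assuming a database named test):

```scheme
;; value-at defaults to column 0, row 0; raw: #t skips the parsers.
(let* ((conn (connect '((dbname . test))))
       (res (query conn "SELECT 42, 'foo'")))
  (list (value-at res)                ; column 0, row 0 => 42
        (value-at res 1)              ; column 1        => "foo"
        (value-at res 0 0 raw: #t)))  ; unparsed        => "42"
```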
See type conversion for more info on parsers.

<procedure>(row-values RES [ROW] [raw: RAW])</procedure>
Returns a list of all the columns' values at the given ROW number. If ROW is omitted, it defaults to zero. If RAW is true, the values are either strings, blobs or sql-null objects. Otherwise, it depends on the parsers.

<procedure>(column-values RES [COLUMN] [raw: RAW])</procedure>
Returns a list of all the rows' values at the given COLUMN number. If COLUMN is omitted, it defaults to zero. If RAW is true, the values are either strings, blobs or sql-null objects. Otherwise, it depends on the parsers.

<procedure>(row-alist RES [ROW] [raw: RAW])</procedure>
Returns an alist of the values at the given ROW number. The keys of the alist are the corresponding column names, as symbols.
If ROW is omitted, it defaults to zero. If RAW is true, the values are either strings, blobs or sql-null objects. Otherwise, it depends on the parsers.

<procedure>(affected-rows RES)</procedure>
For INSERT or UPDATE statements, this returns the number of rows affected by the statement that RES is a result for. Otherwise it is zero.

<procedure>(inserted-oid RES)</procedure>
For INSERT statements resulting in a single record being inserted, this returns the OID (a number) assigned to the newly inserted row. It returns #f for non-INSERT statements, multi-row INSERTs, and INSERTs into tables without OIDs.

<procedure>(row-count RES)</procedure>
Returns the number of rows in the result set.

<procedure>(column-count RES)</procedure>
Returns the number of columns in the result set.

<procedure>(column-index RES COLUMN)</procedure>
Returns the index of COLUMN in the result set. COLUMN should be a symbol indicating the column name.

<procedure>(column-name RES INDEX)</procedure>
Returns the name of the column (a symbol) at the position in the result set specified by INDEX. This is its aliased name in the result set.

<procedure>(column-names RES)</procedure>
Returns a list of all the column names (symbols) in the result set. The position in the list reflects the position of the column in the result set.

<procedure>(column-format RES INDEX)</procedure>
Returns the format of the column at INDEX, as a symbol: either text or binary. This determines whether the value returned by value-at will be a string or a blob.

<procedure>(column-type RES INDEX)</procedure>
Returns the OID (an integer) of the type of the column at INDEX.

<procedure>(column-type-modifier RES INDEX)</procedure>
Returns a type-specific modifier (a number), or #f if the type has no modifier.

<procedure>(table-oid RES INDEX)</procedure>
Returns the OID (a number) of the table from which the result column at INDEX originated, or #f if the column is not a simple reference to a table column.

<procedure>(table-column-index RES INDEX)</procedure>
Returns the column number (within its table) of the column making up the query result column at the position specified by INDEX.
Note: This procedure returns indexes starting at zero, as one would expect. However, the underlying C function PQftablecol is one-based. This might trip up experienced Postgres hackers.
To embed arbitrary values in query strings, you must escape them first, to protect yourself from SQL injection bugs. This is not required if you use positional arguments (the PARAMS arguments in the query procedures).

<procedure>(escape-string CONNECTION STRING)</procedure>
Quotes special characters in STRING which are otherwise interpreted by the SQL parser, obeying the CONNECTION's encoding settings.

<procedure>(escape-bytea CONNECTION STRING)</procedure>
Quotes special characters in STRING which are otherwise interpreted by the SQL parser. This differs from escape-string in that some bytes are doubly encoded so they can be used for bytea columns.
This is required because of a technicality; PostgreSQL first parses the string value as a string, and then casts this string to bytea, interpreting another layer of escape codes.
For example, E'a\\000bcd' is first converted to 'a\000bcd' by the text parser, and then interpreted by the bytea parser as an "a" followed by a NUL byte, followed by "bcd". In Scheme, the value returned by (escape-bytea conn "a\x00bcd") is "a\\\\000bcd". Yes, that's a lot of backslashes :)

<procedure>(unescape-bytea STRING)</procedure>
This unescapes a bytea result from the server. It is not the inverse of escape-bytea, because string values returned by the server are not escaped for the text parser. (I.e., step one in the encoding process described under escape-bytea is skipped.)
Represents the numeric value of the invalid Oid. Rarely useful, except perhaps when doing low-level operations in the system catalog.
A condition of kind (exn postgresql) is signaled when an error occurs. The postgresql component of this condition contains several properties. Unless otherwise noted, any of these properties may be missing, in which case its value is #f.
- One of the symbols error, fatal, panic, warning, notice, debug, info, log. Always present.
- A string representing a PostgreSQL error class (the first two characters of the error code).
- A string representing a PostgreSQL error code. See the PostgreSQL documentation for a description of error codes and error classes.
- A secondary (to the usual exn message property) message with extra detail about the problem.
- A string with a suggestion about what to do about the problem.
- An integer indicating an error cursor position as an index into the original statement string. The first character has index 1, and positions are measured in characters, not bytes.
- An indication of the context in which the error occurred. Presently this includes a call stack traceback of active PL functions. The trace is one entry per line, most recent first.
- The file name of the PostgreSQL source-code location where the error was reported.
- A string containing the line number of the PostgreSQL source-code location where the error was reported.
- The name of the source-code function reporting the error.
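Such a condition can be caught with CHICKEN's usual condition machinery. A minimal sketch, assuming a database named test and assuming the error-code property is stored under the name error-code (the actual property names are those listed above):

```scheme
;; Sketch: catch a (exn postgresql) condition and read one property.
;; The property name 'error-code is an assumption for illustration.
(let ((conn (connect '((dbname . test)))))
  (condition-case
      (query conn "SELECT * FROM no_such_table")
    (exn (exn postgresql)
      (get-condition-property exn 'postgresql 'error-code))))
```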
Type information is read from the database the first time you connect to it. Result set values are either text or binary (or sql null). If they are text, they are converted to Scheme objects by type parsers, as described below. If they are binary, they will be returned as unprocessed blobs (which you can then convert to u8vectors or strings).
Parsers

<parameter>(default-type-parsers [ALIST])</parameter>
PostgreSQL result values are always just strings, but it is possible to map these to real Scheme objects. With this parameter, you can map your own custom PostgreSQL datatypes to Scheme datatypes.
The alist is a mapping of Postgres type names (strings) to procedures accepting a string and returning a Scheme object of the desired type.
The parsers can also be set per connection with the TYPE-PARSERS argument of the connect procedure.
(parameterize ((default-type-parsers `(("text" . ,string->symbol)))) (let ((conn (connect '((dbname . test))))) (row-map identity (query conn "SELECT 'hello'::text"))))
The default parsers look like this:
`(("text" . ,identity) ("bytea" . ,bytea-parser) ("char" . ,char-parser) ("bpchar" . ,identity) ("bool" . ,bool-parser) ("int8" . ,numeric-parser) ("int4" . ,numeric-parser) ("int2" . ,numeric-parser) ("float4" . ,numeric-parser) ("float8" . ,numeric-parser) ("numeric" . ,numeric-parser) ("oid" . ,numeric-parser))
These parsers are described below. For any value for which no parser is found, the value is returned verbatim (which is always a string, or a blob in the case of binary data).

<procedure>(update-type-parsers! CONN [TYPE-PARSERS])</procedure>
As described above, type information is extracted from the system catalog whenever you initiate a new connection. However, there is a bootstrapping problem when you are defining custom data types. You must first connect before you can define your custom data types. But the type parsers do not have the information for this new type yet, so you must update them.
To do this, you can call update-type-parsers!. This procedure updates all the type parsers originally associated with connection CONN. By providing the optional TYPE-PARSERS, you can override the existing type parsers for this connection with new ones; otherwise the old ones are just refreshed.

<procedure>(bool-parser STR)</procedure>
Returns #t if the string equals "t", #f otherwise.

<procedure>(bytea-parser STR)</procedure>
Returns a u8vector containing the bytes in STR, after unescaping it using unescape-bytea.

<procedure>(char-parser STR)</procedure>
Returns the first character in STR.

<procedure>(numeric-parser STR)</procedure>
Returns STR converted to a number using decimal representation. If STR could not be converted to a number, raises an error.
Unparsers

<parameter>(default-type-unparsers [ALIST])</parameter>
Just as PostgreSQL types are converted to Scheme types in result sets, Scheme types need to be converted to PostgreSQL types when providing positional parameters to queries. For this, the library uses type unparsers. Just like type parsers, you can override them either per-connection using the TYPE-UNPARSERS parameter to the connect procedure, or globally by changing a parameter.
This alist is a mapping of predicates to unparsers. Predicates are procedures which accept a Scheme object and return a true value if the object is of the type for which the unparser is intended. Unparsers are procedures which accept a Scheme object and return either a string, a blob or an sql-null object to be used in the query.
It is not necessary to reload type unparsers after defining a new data type in the database.
Order matters: the type unparser alist is traversed from left to right, trying predicates in order and invoking the unparser linked to the first predicate that does not return #f. If none of the predicates match, the object must be a string, a blob or an sql-null object; if it is not, the query procedure will raise an error.
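As a sketch, one could add support for passing symbols as positional parameters by consing a new predicate/unparser pair onto the defaults (assuming a database named test):

```scheme
;; Sketch: prepend a symbol unparser.  Prepending matters, because
;; the alist is searched from left to right.
(parameterize ((default-type-unparsers
                 (cons (cons symbol? symbol->string)
                       (default-type-unparsers))))
  (let ((conn (connect '((dbname . test)))))
    (value-at (query conn "SELECT $1::text" 'hello))))
;; => "hello"
```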
The default unparsers look like this:
`((,string? . ,identity) (,u8vector? . ,u8vector->blob/shared) (,char? . ,string) (,boolean? . ,bool-unparser) (,number? . ,number->string))

<procedure>(bool-unparser B)</procedure>
Returns "TRUE" for true values and "FALSE" for #f.
- 3.1 - Small (but backwards incompatible) interface improvement: make row and column arguments optional.
- 3.0 - Port to Chicken 4. Major code overhaul, backwards incompatible.
- 2.0.14 - Add PQescapeStringConn functionality to pg:escape-string. Fixed weird compilation issue with SRFI-69 by removing it from USES list.
- 2.0.12 - added pg:named-tuples parameter [Contributed by Graham Fawcett]
- 2.0.11 - added syntax-case requirements to .meta file [Thanks to Michele Simionato]
- 2.0.10 - adapted to new easyffi usage [Thanks to rreal]
- 2.0.9 - Yet more improvements to error reporting
- 2.0.8 - More detailed error information, export pg:sql-null-object
- 2.0.7 - Added missing error-function [Thanks to Reed Sheridan]
- 2.0.6 - Removed dependency on format [Thanks to Reed Sheridan]
- 2.0.5 - Some bugfixes and pq:escape-string by Reed Sheridan; adapted to SRFI-69 hash-tables
- 2.0.4 - Changed usage of hygienic macros in setup script
- 2.0.3 - Bugfixes.
- 2.0.0 - Interface improvements. (Backward-incompatible.)
- 1.2.1 - Non-blocking queries.
- 1.2.0 - Optimizations, minor fixes and cleanups.
Copyright (C) 2008-2009 Peter Bex Copyright (C) 2004 Johannes Grødem <email@example.com> Redistribution and use in source and binary forms, with or without modification, is permitted. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.