From: Robert Soulliere Date: Wed, 15 Sep 2010 20:16:53 +0000 (-0400) Subject: Do some clean up of development files. Adjust converting script to X-Git-Url: https://old-git.evergreen-ils.org/?a=commitdiff_plain;h=69e2c5d197a7441df95c17126b91de4f9cd224a0;p=Evergreen-DocBook.git Do some clean up of development files. Adjust converting script to place web files in 1.6 folder. --- diff --git a/1.6/development/OpenSRF_intro.xml b/1.6/development/OpenSRF_intro.xml index 34598db..0048152 100644 --- a/1.6/development/OpenSRF_intro.xml +++ b/1.6/development/OpenSRF_intro.xml @@ -140,25 +140,27 @@ For example, the following code implements an OpenSRF service. The service includes one method named opensrf.simple-text.reverse() that accepts one string as input and returns the reversed version of that string: - #!/usr/bin/perl + +#!/usr/bin/perl - package OpenSRF::Application::Demo::SimpleText; +package OpenSRF::Application::Demo::SimpleText; - use strict; +use strict; - use OpenSRF::Application; - use parent qw/OpenSRF::Application/; +use OpenSRF::Application; +use parent qw/OpenSRF::Application/; - sub text_reverse { - my ($self , $conn, $text) = @_; - my $reversed_text = scalar reverse($text); - return $reversed_text; - } +sub text_reverse { + my ($self , $conn, $text) = @_; + my $reversed_text = scalar reverse($text); + return $reversed_text; +} - __PACKAGE__->register_method( - method => 'text_reverse', - api_name => 'opensrf.simple-text.reverse' - ); +__PACKAGE__->register_method( + method => 'text_reverse', + api_name => 'opensrf.simple-text.reverse' +); + Ten lines of code, and we have a complete OpenSRF service that exposes a single method and could be deployed quickly on a cluster of servers to meet your application’s ravenous demand for reversed strings! If you’re unfamiliar with @@ -193,27 +195,29 @@ Begin by defining the service itself in opensrf.xml. 
To register the opensrf.simple-text service, add the following section to the <apps> element (corresponding to the XPath /opensrf/default/apps/): - <apps> - <opensrf.simple-text> - <keepalive>3</keepalive> - <stateless>1</stateless> - <language>perl</language> - <implementation>OpenSRF::Application::Demo::SimpleText</implementation> - <max_requests>100</max_requests> - <unix_config> - <max_requests>1000</max_requests> - <unix_log>opensrf.simple-text_unix.log</unix_log> - <unix_sock>opensrf.simple-text_unix.sock</unix_sock> - <unix_pid>opensrf.simple-text_unix.pid</unix_pid> - <min_children>5</min_children> - <max_children>15</max_children> - <min_spare_children>2</min_spare_children> - <max_spare_children>5</max_spare_children> - </unix_config> - </opensrf.simple-text> + +<apps> + <opensrf.simple-text> + <keepalive>3</keepalive> + <stateless>1</stateless> + <language>perl</language> + <implementation>OpenSRF::Application::Demo::SimpleText</implementation> + <max_requests>100</max_requests> + <unix_config> + <max_requests>1000</max_requests> + <unix_log>opensrf.simple-text_unix.log</unix_log> + <unix_sock>opensrf.simple-text_unix.sock</unix_sock> + <unix_pid>opensrf.simple-text_unix.pid</unix_pid> + <min_children>5</min_children> + <max_children>15</max_children> + <min_spare_children>2</min_spare_children> + <max_spare_children>5</max_spare_children> + </unix_config> + </opensrf.simple-text> - <!-- other OpenSRF services registered here... --> - </apps> + <!-- other OpenSRF services registered here... --> +</apps> + @@ -313,16 +317,18 @@ edit the opensrf_core.xml configuration file to add the service to the list of publicly accessible services: Making a service publicly accessible in <literal>opensrf_core.xml</literal> - <router> - <!-- This is the public router. 
On this router, we only register applications - which should be accessible to everyone on the opensrf network --> - <name>router</name> - <domain>public.localhost</domain> - <services> - <service>opensrf.math</service> - <service>opensrf.simple-text</service> - </services> - </router> + +<router> + <!-- This is the public router. On this router, we only register applications + which should be accessible to everyone on the opensrf network --> + <name>router</name> + <domain>public.localhost</domain> + <services> + <service>opensrf.math</service> + <service>opensrf.simple-text</service> + </services> +</router> + @@ -365,15 +371,17 @@ The following example calls the opensrf.simple-text.reverse method of the opensrf.simple-text OpenSRF service, passing the string "foobar" as the only method argument: - $ srfsh - srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" + +$ srfsh +srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" - Received Data: "raboof" +Received Data: "raboof" - =------------------------------------ - Request Completed Successfully - Request Time in seconds: 0.016718 - =------------------------------------ +=------------------------------------ +Request Completed Successfully +Request Time in seconds: 0.016718 +=------------------------------------ + Getting documentation for OpenSRF methods from the srfsh client @@ -393,37 +401,39 @@ accurate information. The quality varies across the set of OpenSRF and Evergreen APIs, although some effort is being put towards improving the state of the internal documentation. 
- srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" - --> opensrf.simple-text + +srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" +--> opensrf.simple-text - Received Data: { - "__c":"opensrf.simple-text", - "__p":{ - "api_level":1, - "stream":0, - "object_hint":"OpenSRF_Application_Demo_SimpleText", - "remote":0, - "package":"OpenSRF::Application::Demo::SimpleText", - "api_name":"opensrf.simple-text.reverse", - "server_class":"opensrf.simple-text", - "signature":{ - "params":[ - { - "desc":"The string to reverse", - "name":"text", - "type":"string" - } - ], - "desc":"Returns the input string in reverse order\n", - "return":{ - "desc":"Returns the input string in reverse order", - "type":"string" - } - }, - "method":"text_reverse", - "argc":1 - } - } +Received Data: { + "__c":"opensrf.simple-text", + "__p":{ + "api_level":1, + "stream":0, + "object_hint":"OpenSRF_Application_Demo_SimpleText", + "remote":0, + "package":"OpenSRF::Application::Demo::SimpleText", + "api_name":"opensrf.simple-text.reverse", + "server_class":"opensrf.simple-text", + "signature":{ + "params":[ + { + "desc":"The string to reverse", + "name":"text", + "type":"string" + } + ], + "desc":"Returns the input string in reverse order\n", + "return":{ + "desc":"Returns the input string in reverse order", + "type":"string" + } + }, + "method":"text_reverse", + "argc":1 + } +} + @@ -481,43 +491,45 @@ Calling OpenSRF methods from Perl applications To call an OpenSRF method from Perl, you must connect to the OpenSRF service, issue the request to the method, and then retrieve the results. 
- #!/usr/bin/perl
- use strict;
- use OpenSRF::AppSession;
- use OpenSRF::System;
+
+#!/usr/bin/perl
+use strict;
+use OpenSRF::AppSession;
+use OpenSRF::System;
- OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');
+OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');
- my $session = OpenSRF::AppSession->create("opensrf.simple-text");
+my $session = OpenSRF::AppSession->create("opensrf.simple-text");
- print "substring: Accepts a string and a number as input, returns a string\n";
- my $result = $session->request("opensrf.simple-text.substring", "foobar", 3);
- my $request = $result->gather();
- print "Substring: $request\n\n";
+print "substring: Accepts a string and a number as input, returns a string\n";
+my $result = $session->request("opensrf.simple-text.substring", "foobar", 3);
+my $request = $result->gather();
+print "Substring: $request\n\n";
- print "split: Accepts two strings as input, returns an array of strings\n";
- $request = $session->request("opensrf.simple-text.split", "This is a test", " ");
- my $output = "Split: [";
- my $element;
- while ($element = $request->recv()) {
- $output .= $element->content . ", ";
- }
- $output =~ s/, $/]/;
- print $output .
"\n\n"; - print "statistics: Accepts an array of strings as input, returns a hash\n"; - my @many_strings = [ - "First I think I'll have breakfast", - "Then I think that lunch would be nice", - "And then seventy desserts to finish off the day" - ]; +print "statistics: Accepts an array of strings as input, returns a hash\n"; +my @many_strings = [ + "First I think I'll have breakfast", + "Then I think that lunch would be nice", + "And then seventy desserts to finish off the day" +]; - $result = $session->request("opensrf.simple-text.statistics", @many_strings); - $request = $result->gather(); - print "Length: " . $result->{'length'} . "\n"; - print "Word count: " . $result->{'word_count'} . "\n"; +$result = $session->request("opensrf.simple-text.statistics", @many_strings); +$request = $result->gather(); +print "Length: " . $result->{'length'} . "\n"; +print "Word count: " . $result->{'word_count'} . "\n"; - $session->disconnect(); +$session->disconnect(); + @@ -607,20 +619,22 @@ returns a list, we accept two arguments of type string: the string to be split, and the delimiter that should be used to split the string. 
Basic text splitting method - sub text_split { - my $self = shift; - my $conn = shift; - my $text = shift; - my $delimiter = shift || ' '; + +sub text_split { + my $self = shift; + my $conn = shift; + my $text = shift; + my $delimiter = shift || ' '; - my @split_text = split $delimiter, $text; - return \@split_text; - } + my @split_text = split $delimiter, $text; + return \@split_text; +} - __PACKAGE__->register_method( - method => 'text_split', - api_name => 'opensrf.simple-text.split' - ); +__PACKAGE__->register_method( + method => 'text_split', + api_name => 'opensrf.simple-text.split' +); + We simply return a reference to the list, and OpenSRF does the rest of the work for us to convert the data into the language-independent format that is then @@ -650,27 +664,29 @@ configuration file and dynamically register OpenSRF methods for creating, reading, updating, and deleting all of the defined classes. Example fieldmapper class definition for "Open User Summary" - <class id="mous" controller="open-ils.cstore open-ils.pcrud" - oils_obj:fieldmapper="money::open_user_summary" - oils_persist:tablename="money.open_usr_summary" - reporter:label="Open User Summary"> - <fields oils_persist:primary="usr" oils_persist:sequence=""> - <field name="balance_owed" reporter:datatype="money" /> - <field name="total_owed" reporter:datatype="money" /> - <field name="total_paid" reporter:datatype="money" /> - <field name="usr" reporter:datatype="link"/> - </fields> - <links> - <link field="usr" reltype="has_a" key="id" map="" class="au"/> - </links> - <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> - <actions> - <retrieve permission="VIEW_USER"> - <context link="usr" field="home_ou"/> - </retrieve> - </actions> - </permacrud> - </class> + +<class id="mous" controller="open-ils.cstore open-ils.pcrud" + oils_obj:fieldmapper="money::open_user_summary" + oils_persist:tablename="money.open_usr_summary" + reporter:label="Open User Summary"> + <fields 
oils_persist:primary="usr" oils_persist:sequence=""> + <field name="balance_owed" reporter:datatype="money" /> + <field name="total_owed" reporter:datatype="money" /> + <field name="total_paid" reporter:datatype="money" /> + <field name="usr" reporter:datatype="link"/> + </fields> + <links> + <link field="usr" reltype="has_a" key="id" map="" class="au"/> + </links> + <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> + <actions> + <retrieve permission="VIEW_USER"> + <context link="usr" field="home_ou"/> + </retrieve> + </actions> + </permacrud> +</class> + @@ -887,24 +903,26 @@ In the following example, the text splitting method has been reimplemented to support streaming; very few changes are required: Text splitting method - streaming mode - sub text_split { - my $self = shift; - my $conn = shift; - my $text = shift; - my $delimiter = shift || ' '; + +sub text_split { + my $self = shift; + my $conn = shift; + my $text = shift; + my $delimiter = shift || ' '; - my @split_text = split $delimiter, $text; - foreach my $string (@split_text) { - $conn->respond($string); - } - return undef; - } + my @split_text = split $delimiter, $text; + foreach my $string (@split_text) { + $conn->respond($string); + } + return undef; +} - __PACKAGE__->register_method( - method => 'text_split', - api_name => 'opensrf.simple-text.split', - stream => 1 - ); +__PACKAGE__->register_method( + method => 'text_split', + api_name => 'opensrf.simple-text.split', + stream => 1 +); + @@ -936,43 +954,49 @@ You can include many calls to the OpenSRF logger; only those that are higher than your configured logging level will actually hit the log. 
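The configured logging level referred to here is set in the service's bootstrap configuration. As a sketch, assuming the stock OpenSRF configuration layout, the relevant elements in opensrf_core.xml look like the following; the values shown are illustrative (3 corresponds to INFO, 4 to DEBUG, and 5 to INTERNAL):

```xml
<!-- fragment of /openils/conf/opensrf_core.xml (element names from stock
     OpenSRF configurations; values are illustrative, not required) -->
<logfile>/openils/var/log/osrfsys.log</logfile>
<loglevel>3</loglevel>
```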
The following example exercises all of the available logging levels in OpenSRF: - use OpenSRF::Utils::Logger; - my $logger = OpenSRF::Utils::Logger; - # some code in some function - { - $logger->error("Hmm, something bad DEFINITELY happened!"); - $logger->warn("Hmm, something bad might have happened."); - $logger->info("Something happened."); - $logger->debug("Something happened; here are some more details."); - $logger->internal("Something happened; here are all the gory details.") - } + +use OpenSRF::Utils::Logger; +my $logger = OpenSRF::Utils::Logger; +# some code in some function +{ + $logger->error("Hmm, something bad DEFINITELY happened!"); + $logger->warn("Hmm, something bad might have happened."); + $logger->info("Something happened."); + $logger->debug("Something happened; here are some more details."); + $logger->internal("Something happened; here are all the gory details.") +} + If you call the mythical OpenSRF method containing the preceding OpenSRF logger statements on a system running at the default logging level of INFO, you will only see the INFO, WARN, and ERR messages, as follows: Results of logging calls at the default level of INFO - [2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened! - [2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] Hmm, something bad might have happened. - [2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] Something happened. + +[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened! +[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] Hmm, something bad might have happened. +[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] Something happened. 
+ If you then increase the logging level to INTERNAL (5), the logs will contain much more information, as follows:
Results of logging calls at the logging level of INTERNAL
- [2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
- [2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] Hmm, something bad might have happened.
- [2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] Something happened.
- [2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:] Something happened; here are some more details.
- [2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:] Something happened; here are all the gory details.
- [2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:] Resolver did not find a cache hit
- [2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:] Stored opensrf.simple-text.test_cache.masaa => "here" in memcached server
- [2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:] Coderef for [OpenSRF::Application::Demo::SimpleText::test_cache]...
- [2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:] A top level Request object is responding de nada
- [2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:] Method duration for [opensrf.simple-text.test_cache]: 10.005
- [2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0)
- [2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0
- [2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send
- [2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending RESULT to opensrf@private.localhost/...
- [2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending STATUS to opensrf@private.localhost/...
- ...
+ +[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened! +[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] Hmm, something bad might have happened. +[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] Something happened. +[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:] Something happened; here are some more details. +[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:] Something happened; here are all the gory details. +[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:] Resolver did not find a cache hit +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:] Stored opensrf.simple-text.test_cache.masaa => "here" in memcached server +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:] Coderef for [OpenSRF::Application::Demo::SimpleText::test_cache]... +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:] A top level Request object is responding de nada +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:] Method duration for [opensrf.simple-text.test_cache]: 10.005 +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0) +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0 +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending RESULT to opensrf@private.localhost/... +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending STATUS to opensrf@private.localhost/... +... 
+ To see everything that is happening in OpenSRF, try leaving your logging level set to INTERNAL for a few minutes - just ensure that you have a lot of free disk @@ -990,23 +1014,25 @@ by sleeping for 10 seconds the first time it receives a given cache key and cannot retrieve a corresponding value from the cache: Simple caching OpenSRF service - use OpenSRF::Utils::Cache; - sub test_cache { - my $self = shift; - my $conn = shift; - my $test_key = shift; - my $cache = OpenSRF::Utils::Cache->new('global'); - my $cache_key = "opensrf.simple-text.test_cache.$test_key"; - my $result = $cache->get_cache($cache_key) || undef; - if ($result) { - $logger->info("Resolver found a cache hit"); - return $result; - } - sleep 10; - my $cache_timeout = 300; - $cache->put_cache($cache_key, "here", $cache_timeout); - return "There was no cache hit."; - } + +use OpenSRF::Utils::Cache; +sub test_cache { + my $self = shift; + my $conn = shift; + my $test_key = shift; + my $cache = OpenSRF::Utils::Cache->new('global'); + my $cache_key = "opensrf.simple-text.test_cache.$test_key"; + my $result = $cache->get_cache($cache_key) || undef; + if ($result) { + $logger->info("Resolver found a cache hit"); + return $result; + } + sleep 10; + my $cache_timeout = 300; + $cache->put_cache($cache_key, "here", $cache_timeout); + return "There was no cache hit."; +} + @@ -1221,15 +1247,17 @@ OpenSRF requests and returns OpenSRF results as HTTP results to the initiating HTTP client. 
Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator - # curl request broken up over multiple lines for legibility - curl -H "X-OpenSRF-service: opensrf.simple-text" - --data 'osrf-msg=[ \ - {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA", - "type":"REQUEST","payload": {"__c":"osrfMethod","__p": - {"method":"opensrf.simple-text.reverse","params":["foobar"]} - }} - }]' - http://localhost/osrf-http-translator + +# curl request broken up over multiple lines for legibility +curl -H "X-OpenSRF-service: opensrf.simple-text" + --data 'osrf-msg=[ \ + {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA", + "type":"REQUEST","payload": {"__c":"osrfMethod","__p": + {"method":"opensrf.simple-text.reverse","params":["foobar"]} + }} + }]' +http://localhost/osrf-http-translator + @@ -1297,21 +1325,23 @@ Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator - # HTTP response broken up over multiple lines for legibility - [{"__c":"osrfMessage","__p": - {"threadTrace":0, "payload": - {"__c":"osrfResult","__p": - {"status":"OK","content":"raboof","statusCode":200} - },"type":"RESULT","locale":"en-CA" - } - }, - {"__c":"osrfMessage","__p": - {"threadTrace":0,"payload": - {"__c":"osrfConnectStatus","__p": - {"status":"Request Complete","statusCode":205} - },"type":"STATUS","locale":"en-CA" - } - }] + +# HTTP response broken up over multiple lines for legibility +[{"__c":"osrfMessage","__p": + {"threadTrace":0, "payload": + {"__c":"osrfResult","__p": + {"status":"OK","content":"raboof","statusCode":200} + },"type":"RESULT","locale":"en-CA" + } +}, +{"__c":"osrfMessage","__p": + {"threadTrace":0,"payload": + {"__c":"osrfConnectStatus","__p": + {"status":"Request Complete","statusCode":205} + },"type":"STATUS","locale":"en-CA" + } +}] + @@ -1433,48 +1463,52 @@ the advantages of locale support and tracing the request from the requester through the listener and responder (drone). 
A request for opensrf.simple-text.reverse("foobar"): - <message from='router@private.localhost/opensrf.simple-text' - to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275' - router_from='opensrf@private.localhost/_karmic_126678.3719_6288' - router_to='' router_class='' router_command='' osrf_xid='' - > - <thread>1266781414.366573.12667814146288</thread> - <body> - [ - {"__c":"osrfMessage","__p": - {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload": - {"__c":"osrfMethod","__p": - {"method":"opensrf.simple-text.reverse","params":["foobar"]} - } - } - } - ] - </body> - </message> + +<message from='router@private.localhost/opensrf.simple-text' + to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275' + router_from='opensrf@private.localhost/_karmic_126678.3719_6288' + router_to='' router_class='' router_command='' osrf_xid='' +> + <thread>1266781414.366573.12667814146288</thread> + <body> +[ + {"__c":"osrfMessage","__p": + {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload": + {"__c":"osrfMethod","__p": + {"method":"opensrf.simple-text.reverse","params":["foobar"]} + } + } + } +] + </body> +</message> + A response from opensrf.simple-text.reverse("foobar") - <message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285' - to='opensrf@private.localhost/_karmic_126678.3719_6288' - router_command='' router_class='' osrf_xid='' - > - <thread>1266781414.366573.12667814146288</thread> - <body> - [ - {"__c":"osrfMessage","__p": - {"threadTrace":"1","payload": - {"__c":"osrfResult","__p": - {"status":"OK","content":"raboof","statusCode":200} - } ,"type":"RESULT","locale":"en-US"} - }, - {"__c":"osrfMessage","__p": - {"threadTrace":"1","payload": - {"__c":"osrfConnectStatus","__p": - {"status":"Request Complete","statusCode":205} - },"type":"STATUS","locale":"en-US"} - } - ] - </body> - </message> + +<message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285' + 
to='opensrf@private.localhost/_karmic_126678.3719_6288' + router_command='' router_class='' osrf_xid='' +> + <thread>1266781414.366573.12667814146288</thread> + <body> +[ + {"__c":"osrfMessage","__p": + {"threadTrace":"1","payload": + {"__c":"osrfResult","__p": + {"status":"OK","content":"raboof","statusCode":200} + } ,"type":"RESULT","locale":"en-US"} + }, + {"__c":"osrfMessage","__p": + {"threadTrace":"1","payload": + {"__c":"osrfConnectStatus","__p": + {"status":"Request Complete","statusCode":205} + },"type":"STATUS","locale":"en-US"} + } +] + </body> +</message> + The content of the <body> element of the OpenSRF request and result should look familiar; they match the structure of the OpenSRF over HTTP examples that we previously dissected. @@ -1510,7 +1544,9 @@ - The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return. + The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to + subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a + single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return. @@ -1525,7 +1561,8 @@ - The params member of the signature hash is an array of hashes in which each array element describes the corresponding method argument in order. + The params member of the signature hash is an array of hashes in which each array element describes the corresponding method + argument in order. 
@@ -1557,7 +1594,8 @@ - The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number, array, or hash. + The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number, + array, or hash. @@ -1575,61 +1613,61 @@ - The open-ils.actor service supports common tasks for working with user + The open-ils.actor service supports common tasks for working with user accounts and libraries. - The open-ils.auth service supports authentication of Evergreen users. + The open-ils.auth service supports authentication of Evergreen users. - The open-ils.booking service supports the management of reservations + The open-ils.booking service supports the management of reservations for bookable items. - The open-ils.cat service supports common cataloging tasks, such as + The open-ils.cat service supports common cataloging tasks, such as creating, modifying, and merging bibliographic and authority records. - The open-ils.circ service supports circulation tasks such as checking + The open-ils.circ service supports circulation tasks such as checking out items and calculating due dates. - The open-ils.collections service supports tasks that assist collections + The open-ils.collections service supports tasks that assist collections agencies in contacting users with outstanding fines above a certain threshold. - The open-ils.cstore private service supports unrestricted access to + The open-ils.cstore private service supports unrestricted access to Evergreen fieldmapper objects. - The open-ils.ingest private service supports tasks for importing + The open-ils.ingest private service supports tasks for importing data such as bibliographic and authority records. - The open-ils.pcrud service supports permission-based access to Evergreen + The open-ils.pcrud service supports permission-based access to Evergreen fieldmapper objects. 
- The open-ils.penalty service supports the calculation of
+ The open-ils.penalty service supports the calculation of
penalties for users, such as being blocked from further borrowing, for conditions such as having too many items checked out or too many unpaid fines.
@@ -1637,25 +1675,25 @@
- The open-ils.reporter service supports the creation and scheduling of
+ The open-ils.reporter service supports the creation and scheduling of
reports.
- The open-ils.reporter-store private service supports access to Evergreen
+ The open-ils.reporter-store private service supports access to Evergreen
fieldmapper objects for the reporting service.
- The open-ils.search service supports searching across bibliographic
+ The open-ils.search service supports searching across bibliographic
records, authority records, serial records, Z39.50 sources, and ZIP codes.
- The open-ils.storage private service supports a deprecated method of
+ The open-ils.storage private service supports a deprecated method of
providing access to Evergreen fieldmapper objects. Implemented in Perl, this service has largely been replaced by the much faster C-based open-ils.cstore service.
@@ -1663,30 +1701,30 @@
- The open-ils.supercat service supports transforms of MARC records into
+ The open-ils.supercat service supports transforms of MARC records into
other formats, such as MODS, as well as providing Atom and RSS feeds and SRU access.
- The open-ils.trigger private service supports event-based triggers for
+ The open-ils.trigger private service supports event-based triggers for
actions such as overdue and holds-available notification emails.
- The open-ils.vandelay service supports the import and export of batches of
+ The open-ils.vandelay service supports the import and export of batches of
bibliographic and authority records.
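Each of these services can be explored with the same srfsh introspection technique shown earlier for opensrf.simple-text. For example, a session of the following form would retrieve the signature of the authentication seed method (the method name is from the open-ils.auth API; output omitted here):

```
srfsh# introspect open-ils.auth "open-ils.auth.authenticate.init"
```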
- Of some interest is that the open-ils.reporter-store and open-ils.cstore
+ Of some interest is that the open-ils.reporter-store and open-ils.cstore
services have identical implementations. Surfacing them as separate services enables a deployer of Evergreen to ensure that the reporting service does not
- interfere with the performance-critical open-ils.cstore service. One can also
+ interfere with the performance-critical open-ils.cstore service. One can also
direct the reporting service to a read-only database replica to, again, avoid
- interference with open-ils.cstore, which must write to the master database.
+ interference with open-ils.cstore, which must write to the master database.
There are only a few significant services that are not built on OpenSRF in Evergreen 1.6.0, such as the SIP and Z39.50 servers. These services implement different protocols and build on existing daemon architectures (Simple2ZOOM
diff --git a/1.6/development/customize_opac.xml b/1.6/development/customize_opac.xml
index 453b0c5..3daf236 100644
--- a/1.6/development/customize_opac.xml
+++ b/1.6/development/customize_opac.xml
@@ -1,272 +1,280 @@
-
- Customizing the OPAC
-
- While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to
- customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required. Many of these instructions assume a default installation of
- Evergreen using the default file locations.
-
- Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/ as files here could be overwritten when you upgrade your copy
- of Evergreen.
-
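The backup advice above can be scripted. A minimal sketch follows; the scratch directory created by mktemp stands in for /openils/var/web/opac so the example is self-contained, and all paths and file contents are illustrative:

```shell
# Create a stand-in for /openils/var/web/opac so this sketch runs anywhere;
# on a real system, point OPAC_DIR at the actual web root instead.
OPAC_DIR=$(mktemp -d)
mkdir -p "$OPAC_DIR/theme/default/css"
echo 'body { color: #000; }' > "$OPAC_DIR/theme/default/css/colors.css"

# Copy the whole skin to a dated backup location outside the web root,
# so an Evergreen upgrade cannot overwrite it.
BACKUP_DIR="${OPAC_DIR}-backup-$(date +%Y%m%d)"
cp -r "$OPAC_DIR" "$BACKUP_DIR"
echo "Backed up to $BACKUP_DIR"
```

Restoring an accidentally overwritten file is then a matter of copying it back from the dated directory.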
- Change the Color Scheme
- To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can
- change the four base colors of the scheme as well as the colors of specific elements.
-
- You can also create alternate themes for your users.
-
- Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/
- to a new folder /openils/var/web/opac/theme/[your new theme]/.
-
- Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.
-
- Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml.
- <link type='text/css'
- rel="alternate stylesheet"
- title='&opac.style.yourtheme;'
- href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
- name='Default' csstype='color'/>
-
- Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/[your locale]/opac.dtd.
- <!ENTITY opac.style.yourtheme "YourTheme">
-
-
- customizing Opac Text and Labels - To change text and links used throughout the OPAC, edit the following files: - - /openils/var/web/opac/locale/[your locale]/lang.dtd - /openils/var/web/opac/locale/[your locale]/opac.dtd - - - A better way to customize OPAC text is to create custom dtd files for your lang and opac customizations and then add a include - statement above the default dtd files. - <!DOCTYPE html PUBLIC - "-//W3C//DTD XHTML 1.0 Transitional//EN" - "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [ - <!--#include virtual="/opac/locale/${locale}/custom_opac.dtd"--> - <!--#include virtual="/opac/locale/${locale}/opac.dtd"--> - ]> - position is important here. The first/top included dtd files will take precedence over the subsequent dtd includes. - - While it is possible to add text to the xml files itself, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the xml/html code. - For example, the footer.xml file has this code to generate a copyright statement: - <div id='copyright_text'> - <span>&footer.copyright;</span> - The included opac.dtd file in the en-US locale direcotry has this setting for &footer.copyright text: - <!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others"> -
-
- Logo Images - To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized. - - Large main logo:/openils/var/web/opac/images/main_logo.jpg - Small logo:/openils/var/web/opac/images/small_logo.jpg - -
-
- Added Content - By default Evergreen includes customizable Added Content features to enhance the OPAC experience for your user. These features include Amazon book covers - and Google books searching. These features can be turned off or custimized. - - Book Covers - The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of - /opneils/conf/opensrf.xml. Here are the key elements of this configuration: - <module>OpenILS::WWW::AddedContent::Amazon</module> - This calls the Amazon perl module. If you wish to link to a different book cover service other than Amazon, you must create a new perl module and refer to it here. - You will also need to change other settings accordingly. There are some available book cover perl modules available in - trunk - <base_url>http://images.amazon.com/images/P/</base_url> - Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching - capabilities are added. - <timeout>1</timeout> - Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure. - <retry_timeout>600</retry_timeout> - After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again. - <max_errors>15</max_errors> - Maximum number of consecutive lookup errors a given process can live before added content lookups are disabled for everyone. - <userid>MY_USER_ID</userid> - If a userid is required to access the added content. - - - Google Books Link - The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. - This will link to Google Books content including table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not - display a link. 
This feature can be turned off by changing the googleBooksLink variable setting to falsein the file - /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated. - -
-
- Customizing the Results Page - The results page is extremely customizable and allows some built in features to be activated with some simple edits or more advanced customizations can be done by more - experienced web developers. - There are several critical files to edit if you wish to customize the results page: - - /openils/var/web/opac/skin/default/js/result_common.js - This file controls the javascript for the top level elements on the results - page and should only be edited by experienced web developers except for the google books link setting mentioned perviously. - /openils/var/web/opac/skin/default/js/rresult.js - Has some good controls of results page settings at the top of this file but - requires web development skills for editing this file. - /openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on results page. - -
-
- Customizing the Details Page - There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential - of Evergreen when displaying the details of items. - Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdedail.js. - You will notice the section at the top of this file called Per-skin configuration settings. Changing setting in this section can control several features includuing - limiting results to local only or showing copy location or displaying serial holdings. Form this section you can also enable refworks and set the Refworks host URL. - Some copy level details settings can be turned on and off from /openils/var/web/opac/skin/default/js/copy_details.js including displaying certain fields - such as due date in the OPAC. - An important file is the /openils/var/web/opac/skin/default/xml/rdetail/rdetail_summary.xml file. This file allows you to control which field to display in - the details summary of the record. The new BibTemnplate feature makes this file even more powerful by allowing you to display any marc fields - with a variey of formatting options. - The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information. -
-
- BibTemplate - BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – - in many different formats from MARCXML to MODS to custom XML applications. - Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as move the display closer to the - client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization. - BibTemplate supports the foloowing Evergreen metadata formats: - - MARCXML - datatype='marcxml-full' (default) - MODS 3.3: datatype='mods33' - Dublin Core: datatype='rdf_dc' - FGDC: datatype='fgdc' - - - HTML API - BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a - set of attributes that are added to existing OPAC markup, and fall into two classes: - - The slot marker – Elements that denote the location of bibliographic data to insert. - The slot formatter – Elements that specify how the named data should be formatted for display. - - - - Slot Marker - A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container - for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an - attribute called query containing a CSS3 selector. This selector is applied to the XML returned by the unAPI service in order to gather the specific XML - Nodes that should be considered for formatting. 
- The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type - attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information - and unAPI links. - Example of a slot marker: - <p type='opac/slot-data' query='datafield[tag=245]'></p> - Most useful attribute match operators include: - - datafield[tag=245] - exact match - datafield[tag^=65] - match start of value - - Selectors always narrow, so select broadly and iterate through the NodeList - - - Slot Formatter - A slot formatter is any invisible HTML element which has a type attribute with the value of opac/slot-format. (NOTE: before 1.6.0.4, only <script> - elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. The text contents - of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector specified on the slot marker. This function is passed - one argument, called item, which an XML Node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is - concatenated into a single string and used to replace the contents of the slot marker. - The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes - specified in the CSS3 selector. 
- Example of a slot formatter: - - <td class='rdetail_item' id='rdetail_online' type='opac/slot-data' query='volumes volume uris uri' join=", "> - <script type='opac/slot-format'><![CDATA[ - var link = '<a href="' + item.getAttribute('href') + '">' + item.getAttribute('label') + '</a>'; - if (item.getAttribute('use_restriction')) - link += ' (Use restriction: ' + item.getAttribute('use_restriction') + ')'; - return link; - ]]></script> - </td> - - - - JavaScript API - In order for BibTemplate to find the slot markers and invoke the slot formatters JavaScript renderer must be instantiated and called. This must be done - for each record that is to contribute to a pages display. The API for this is simple and straight-forward: - The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes - specified in the CSS3 selector. - Example of a slot formatter: - - dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded - - // Create a renderer supplying the record id and the short name of the org unit, if known, and call the render() method - new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render(); - - - The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties: - - record – The bibliographic record ID. - org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page. - root – The root element within the web page that BibTemplate should search for slot markers - - - - BibTemplate Examples - This is all that we had to add to display the contents of an arbitrary MARC field: - - <tr> - <td>Bibliography note</td> - <td type='opac/slot-data' query='datafield[tag=504]'></td> - </tr> - - If multiple fields match, they are displayed on consecutive lines within the same left-hand cell. 
- To display a specific MARC subfield, add that subfield to the query attribute. - For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note) - - <tr> - <td>Awards note</td> - <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td> - </tr> - - Hide empty rows by default, and display them only if they have content: - - <tr class='hide_me' id='tag504'> - <td>Bibliographic note</td> - <td type='opac/slot-data' query='datafield[tag=504]'> - <script type='opac/slot-format'><![CDATA[ - dojo.query('#tag504').removeClass('hide_me'); - return '<span>' + dojox.data.dom.textContent(item) + - '</span><br/>'; - ]]></script> - </td></tr> - - - <![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal character data - - avoids hilarity of entity substitution - <script type='opac/slot-format'>...</script>, contained within an 'opac/slot-data' element, receives a variable named item - containing the results of the query (a NodeList) - - Suppressing a subfield: - - <tr class='hide_me' id='tag700'> - <td>Additional authors</td> - <td type='opac/slot-data' query='datafield[tag=700]'> - <script type='opac/slot-format'><![CDATA[ - dojo.query('#tag700').removeClass('hide_me'); - var text = ''; - var list = dojo.query('subfield:not([code=4])', item); - for (var i =0; i < list.length; i++) { - text += dojox.data.dom.textContent(list[i]) + ' '; - } - return '<span>' + text + '</span><br/>'; - ]]></script> - </td></tr> - - -
-
- 
+
+
+
+ Customizing the OPAC
+ 
+ While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to
+ customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required. Many of these
+ instructions assume an installation of Evergreen using the default file locations.
+ 
+ Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/ as files here could be
+ overwritten when you upgrade your copy of Evergreen.
+ 
+ Change the Color Scheme
+ To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can
+ change the four base colors of the color scheme as well as the colors of specific elements.
+ 
+ You can also create alternate themes for your users.
+ 
+ 
+ Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/
+ to a new folder /openils/var/web/opac/theme/[your new theme]/.
+ 
+ 
+ Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.
+ 
+ 
+ Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml.
+ 
+<link type='text/css'
+rel="alternate stylesheet"
+title='&opac.style.yourtheme;'
+href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
+name='Default' csstype='color'/>
+ 
+ 
+ 
+ Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/
+ [your locale]/opac.dtd.
+ <!ENTITY opac.style.yourtheme "YourTheme">
+ 
+ 
+
+ Customizing OPAC Text and Labels
+ To change text and links used throughout the OPAC, edit the following files:
+ 
+ /openils/var/web/opac/locale/[your locale]/lang.dtd
+ /openils/var/web/opac/locale/[your locale]/opac.dtd
+ 
+ 
+ A better way to customize OPAC text is to create custom dtd files for your lang and opac customizations and then add an include
+ statement above the default dtd files.
+ 
+ <!DOCTYPE html PUBLIC
+     "-//W3C//DTD XHTML 1.0 Transitional//EN"
+     "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
+     <!--#include virtual="/opac/locale/${locale}/custom_opac.dtd"-->
+     <!--#include virtual="/opac/locale/${locale}/opac.dtd"-->
+ ]>
+ 
+ Position is important here: the first included dtd files take precedence over subsequent dtd includes.
+ 
+ While it is possible to add text to the XML files themselves, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the xml/html code.
+ For example, the footer.xml file has this code to generate a copyright statement:
+ 
+<div id='copyright_text'>
+<span>&footer.copyright;</span>
+ 
+ The included opac.dtd file in the en-US locale directory has this setting for the &footer.copyright; text:
+ <!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others">
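+ Since the first included dtd file takes precedence, an entity defined in custom_opac.dtd overrides the stock definition. A hypothetical custom_opac.dtd entry (the entity name footer.copyright and the include path come from the text above; the replacement text is illustrative only):

```xml
<!-- Hypothetical /openils/var/web/opac/locale/en-US/custom_opac.dtd.
     Because this file is included before the stock opac.dtd, this
     declaration of footer.copyright wins over the default one. -->
<!ENTITY footer.copyright "Copyright © 2010 My Library">
```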
+
+ Logo Images
+ To change the default logos to your own, replace the following files with appropriately sized images of your own.
+ 
+ Large main logo: /openils/var/web/opac/images/main_logo.jpg
+ Small logo: /openils/var/web/opac/images/small_logo.jpg
+ 
+
+ Added Content
+ By default Evergreen includes customizable Added Content features to enhance the OPAC experience for your users. These features include Amazon book covers
+ and Google Books searching. These features can be turned off or customized.
+ 
+ Book Covers
+ The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of
+ /openils/conf/opensrf.xml. Here are the key elements of this configuration:
+ <module>OpenILS::WWW::AddedContent::Amazon</module>
+ This calls the Amazon Perl module. If you wish to link to a book cover service other than Amazon, you must create a new Perl module and refer to it here.
+ You will also need to change other settings accordingly. There are some book cover Perl modules available in
+ trunk
+ <base_url>http://images.amazon.com/images/P/</base_url>
+ Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching
+ capabilities are added.
+ <timeout>1</timeout>
+ Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure.
+ <retry_timeout>600</retry_timeout>
+ After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again.
+ <max_errors>15</max_errors>
+ Maximum number of consecutive lookup errors a given process can encounter before added content lookups are disabled for everyone.
+ <userid>MY_USER_ID</userid>
+ If a userid is required to access the added content.
+ 
+ 
+ Google Books Link
+ The results page will display a Browse in Google Books Search link for items that have corresponding entries
+ in Google Books.
+ This links to Google Books content including the table of contents and, where available, the complete version of the work. Items not in Google Books will not
+ display a link. 
This feature can be turned off by changing the googleBooksLink variable setting to false in the file + /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated. + +
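+ The toggle described above is an ordinary JavaScript variable, so disabling the link is a one-line change. A hypothetical excerpt (only the variable name googleBooksLink comes from the text above; the rest of result_common.js is not shown):

```javascript
// Hypothetical excerpt from /openils/var/web/opac/skin/default/js/result_common.js.
// Setting this variable to false suppresses the "Browse in Google Books Search"
// link on the results page; the feature is enabled by default.
var googleBooksLink = false;
```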
+
+ Customizing the Results Page
+ The results page is extremely customizable: some built-in features can be activated with simple edits, while more advanced customizations can be made by
+ experienced web developers.
+ There are several critical files to edit if you wish to customize the results page:
+ 
+ /openils/var/web/opac/skin/default/js/result_common.js - This file controls the JavaScript for the top-level elements on the results
+ page and should only be edited by experienced web developers, except for the Google Books link setting mentioned previously.
+ /openils/var/web/opac/skin/default/js/rresult.js - Has some useful controls for results page settings at the top of the file, but
+ editing it requires web development skills.
+ /openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on the results page.
+ 
+
+ Customizing the Details Page
+ There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential
+ of Evergreen when displaying the details of items.
+ Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js.
+ You will notice the section at the top of this file called Per-skin configuration settings. Changing settings in this section can control several features, including
+ limiting results to local items only, showing copy location, or displaying serial holdings. From this section you can also enable RefWorks and set the RefWorks host URL.
+ Some copy-level details settings can be turned on and off from /openils/var/web/opac/skin/default/js/copy_details.js, including displaying certain fields
+ such as due date in the OPAC.
+ An important file is the /openils/var/web/opac/skin/default/xml/rdetail/rdetail_summary.xml file. This file allows you to control which fields to display in
+ the details summary of the record. The new BibTemplate feature makes this file even more powerful by allowing you to display any MARC fields
+ with a variety of formatting options.
+ The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information.
+
+ BibTemplateBibTemplate
+ BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records,
+ metarecords, monograph holdings information, Located URIs, and more to come – in many different formats from MARCXML to MODS to custom XML applications.
+ unAPI MARCXMLMODS
+ Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as move the display closer to the
+ client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization.
+ BibTemplate supports the following Evergreen metadata formats:
+ 
+ MARCXML - datatype='marcxml-full' (default)
+ MODS 3.3: datatype='mods33'
+ Dublin Core: datatype='rdf_dc'Dublin Core
+ FGDC: datatype='fgdc'FGDC
+ 
+ 
+ HTML API
+ BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a
+ set of attributes that are added to existing OPAC markup, and fall into two classes:Dojo toolkit
+ 
+ The slot marker – Elements that denote the location of bibliographic data to insert.
+ The slot formatter – Elements that specify how the named data should be formatted for display.
+ 
+ 
+ 
+ Slot Markerslot marker
+ A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container
+ for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an
+ attribute called query containing a CSS3 selector. 
This selector is applied to the XML returned by the unAPI service in order to gather the specific XML
+ Nodes that should be considered for formatting.CSS3
+ The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type
+ attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information
+ and unAPI links.
+ Example of a slot marker:
+ <p type='opac/slot-data' query='datafield[tag=245]'></p>
+ Most useful attribute match operators include:
+ 
+ datafield[tag=245] - exact match
+ datafield[tag^=65] - match start of value
+ 
+ Selectors always narrow, so select broadly and iterate through the NodeList.
+ 
+ 
+ Slot Formatterslot formatter
+ A slot formatter is any invisible HTML element which has a type attribute with the value of opac/slot-format. (NOTE: before 1.6.0.4, only <script>
+ elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. The text contents
+ of this element are wrapped in a JavaScriptJavaScript function and run for each node returned by the query CSS3 selector
+ specified on the slot marker. This function is passed
+ one argument, called item, which is an XML Node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is
+ concatenated into a single string and used to replace the contents of the slot marker.
+ The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes
+ specified in the CSS3 selector. 
+ Example of a slot formatter:
+ 
+ <td class='rdetail_item' id='rdetail_online' type='opac/slot-data' query='volumes volume uris uri' join=", ">
+     <script type='opac/slot-format'><![CDATA[
+         var link = '<a href="' + item.getAttribute('href') + '">' + item.getAttribute('label') + '</a>';
+         if (item.getAttribute('use_restriction'))
+             link += ' (Use restriction: ' + item.getAttribute('use_restriction') + ')';
+         return link;
+     ]]></script>
+ </td>
+ 
+ 
+ 
+ JavaScript APIJavaScript
+ In order for BibTemplate to find the slot markers and invoke the slot formatters, the JavaScript renderer must be instantiated and called. This must be done
+ for each record that is to contribute to a page's display. The API for this is simple and straightforward:
+ 
+ dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded
+ 
+ // Create a renderer supplying the record id and the short name of the org unit, if known, and call the render() method
+ new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render();
+ 
+ The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties:
+ 
+ record – The bibliographic record ID.
+ org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.
+ root – The root element within the web page that BibTemplate should search for slot markers.
+ 
+ 
+ 
+ BibTemplate Examples
+ This is all that we had to add to display the contents of an arbitrary MARC field:
+ 
+<tr>
+    <td>Bibliography note</td>
+    <td type='opac/slot-data' query='datafield[tag=504]'></td>
+</tr>
+ 
+ If multiple fields match, they are displayed on consecutive lines within the same left-hand cell. 
+ To display a specific MARC subfield, add that subfield to the query attribute.MARC + For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note) + +<tr> + <td>Awards note</td> + <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td> +</tr> + + Hide empty rows by default, and display them only if they have content: + + <tr class='hide_me' id='tag504'> + <td>Bibliographic note</td> + <td type='opac/slot-data' query='datafield[tag=504]'> + <script type='opac/slot-format'><![CDATA[ + dojo.query('#tag504').removeClass('hide_me'); + return '<span>' + dojox.data.dom.textContent(item) + + '</span><br/>'; + ]]></script> + </td></tr> + + + <![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal character data - + avoids hilarity of entity substitution + <script type='opac/slot-format'>...</script>, contained within an opac/slot-data element, receives a variable named item + containing the results of the query (a NodeList) + + Suppressing a subfield: + +<tr class='hide_me' id='tag700'> + <td>Additional authors</td> + <td type='opac/slot-data' query='datafield[tag=700]'> + <script type='opac/slot-format'><![CDATA[ + dojo.query('#tag700').removeClass('hide_me'); + var text = ''; + var list = dojo.query('subfield:not([code=4])', item); + for (var i =0; i < list.length; i++) { + text += dojox.data.dom.textContent(list[i]) + ' '; + } + return '<span>' + text + '</span><br/>'; + ]]></script> + </td></tr> + + +
+
+ diff --git a/1.6/development/datamodelsandaccess.xml b/1.6/development/datamodelsandaccess.xml index da7c40b..a887ab3 100644 --- a/1.6/development/datamodelsandaccess.xml +++ b/1.6/development/datamodelsandaccess.xml @@ -9,25 +9,25 @@
Exploring the Database Schema - The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQL - adheres closely to ANSI SQL standards, the use of schemas, SQL functions - implemented in both plpgsql and plperl, and PostgreSQL’s native full-text + The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQLdatabasesPostgreSQL + adheres closely to ANSI SQL standards, the use of schemas, SQL functionsANSI + implemented in both plpgsql and plperl, and PostgreSQL’s native full-text search would make it… challenging… to port to other database platforms. A few common PostgreSQL interfaces for poking around the schema and manipulating data are: - psql (the command line client) + psql (the command line client)databasesPostgreSQLpsql - pgadminIII (a GUI client). + pgadminIII (a GUI client).databasesPostgreSQLpgadminIII - Or you can read through the source files in Open-ILS/src/sql/Pg. + Or you can read through the source files in Open-ILS/src/sql/Pg. Let’s take a quick tour through the schemas, pointing out some highlights and some key interdependencies: @@ -81,27 +81,29 @@ IDL basic example (config.language_map) - <class id="clm" controller="open-ils.cstore open-ils.pcrud" - oils_obj:fieldmapper="config::language_map" - oils_persist:tablename="config.language_map" - reporter:label="Language Map" oils_persist:field_safe="true"> - <fields oils_persist:primary="code" oils_persist:sequence=""> - <field reporter:label="Language Code" name="code" - reporter:selector="value" reporter:datatype="text"/> - <field reporter:label="Language" name="value" - reporter:datatype="text" oils_persist:i18n="true"/> - </fields> - <links/> - <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> - <actions> - <create global_required="true" permission="CREATE_MARC_CODE"> - <retrieve global_required="true" - permission="CREATE_MARC_CODE UPDATE_MARC_CODE DELETE_MARC_CODE"> - <update global_required="true" permission="UPDATE_MARC_CODE"> - <delete 
global_required="true" permission="DELETE_MARC_CODE"> - </actions> - </permacrud> - </class> + +<class id="clm" controller="open-ils.cstore open-ils.pcrud" + oils_obj:fieldmapper="config::language_map" + oils_persist:tablename="config.language_map" + reporter:label="Language Map" oils_persist:field_safe="true"> + <fields oils_persist:primary="code" oils_persist:sequence=""> + <field reporter:label="Language Code" name="code" + reporter:selector="value" reporter:datatype="text"/> + <field reporter:label="Language" name="value" + reporter:datatype="text" oils_persist:i18n="true"/> + </fields> + <links/> + <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> + <actions> + <create global_required="true" permission="CREATE_MARC_CODE"> + <retrieve global_required="true" + permission="CREATE_MARC_CODE UPDATE_MARC_CODE DELETE_MARC_CODE"> + <update global_required="true" permission="UPDATE_MARC_CODE"> + <delete global_required="true" permission="DELETE_MARC_CODE"> + </actions> + </permacrud> +</class> + @@ -143,15 +145,19 @@ access to the classes for which oils_persist:field_safe is true. For example, - srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \ - {"query":{"code":{"=":"eng"}}} + + +srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \ +{"query":{"code":{"=":"eng"}}} - Received Data: [ - { - "value":"English", - "code":"eng" - } - ] +Received Data: [ + { + "value":"English", + "code":"eng" + } +] + + @@ -213,7 +219,8 @@ The permacrud element defines the permissions (if any) required - to create, retrieve, update, and delete data for this + to create, retrieve, update, + and delete data for this class. open-ils.permacrud must be defined as a controller for the class for the permissions to be applied. @@ -316,30 +323,32 @@ can contain fields that link to fields in other classes. 
The <links> element defines which fields link to fields in other classes, and the nature of the relationship: - <class id="aws" controller="open-ils.cstore" - oils_obj:fieldmapper="actor::workstation" - oils_persist:tablename="actor.workstation" - reporter:label="Workstation"> - <fields oils_persist:primary="id" - oils_persist:sequence="actor.workstation_id_seq"> - <field reporter:label="Workstation ID" name="id" - reporter:datatype="id"/> - <field reporter:label="Workstation Name" name="name" - reporter:datatype="text"/> - <field reporter:label="Owning Library" name="owning_lib" - reporter:datatype="org_unit"/> - <field reporter:label="Circulations" name="circulations" - oils_persist:virtual="true" reporter:datatype="link"/> - </fields> - <links> - <link field="owning_lib" reltype="has_a" key="id" - map="" class="aou"/> - <link field="circulations" reltype="has_many" key="workstation" - map="" class="circ"/> - <link field="circulation_checkins" reltype="has_many" - key="checkin_workstation" map="" class="circ"/> - </links> - </class> + +<class id="aws" controller="open-ils.cstore" + oils_obj:fieldmapper="actor::workstation" + oils_persist:tablename="actor.workstation" + reporter:label="Workstation"> + <fields oils_persist:primary="id" + oils_persist:sequence="actor.workstation_id_seq"> + <field reporter:label="Workstation ID" name="id" + reporter:datatype="id"/> + <field reporter:label="Workstation Name" name="name" + reporter:datatype="text"/> + <field reporter:label="Owning Library" name="owning_lib" + reporter:datatype="org_unit"/> + <field reporter:label="Circulations" name="circulations" + oils_persist:virtual="true" reporter:datatype="link"/> + </fields> + <links> + <link field="owning_lib" reltype="has_a" key="id" + map="" class="aou"/> + <link field="circulations" reltype="has_many" key="workstation" + map="" class="circ"/> + <link field="circulation_checkins" reltype="has_many" + key="checkin_workstation" map="" class="circ"/> + </links> +</class> + @@ 
-538,23 +547,25 @@ conditions. For example, to generate a list of barcodes that are held in a copy location that allows holds and is visible in the OPAC: - srfsh# request open-ils.cstore open-ils.cstore.json_query - {"select": {"acp":["barcode"], "acpl":["name"]}, - "from": {"acp":"acpl"}, - "where": [ - {"+acpl": "holdable"}, - {"+acpl": "opac_visible"} - ]} + +srfsh# request open-ils.cstore open-ils.cstore.json_query + {"select": {"acp":["barcode"], "acpl":["name"]}, + "from": {"acp":"acpl"}, + "where": [ + {"+acpl": "holdable"}, + {"+acpl": "opac_visible"} + ]} - Received Data: { - "barcode":"BARCODE1", - "name":"Stacks" - } +Received Data: { + "barcode":"BARCODE1", + "name":"Stacks" +} - Received Data: { - "barcode":"BARCODE2", - "name":"Stacks" - } +Received Data: { + "barcode":"BARCODE2", + "name":"Stacks" +} + @@ -618,13 +629,15 @@ open-ils.cstore.direct.\*.retrieve methods allow you to specify a JSON structure defining the fields you wish to flesh in the returned object. Fleshing fields in objects returned by <literal>open-ils.cstore</literal> - srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ - { - "flesh": 1, - "flesh_fields": { - "acp": ["location"] - } - } + +srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ + { + "flesh": 1, + "flesh_fields": { + "acp": ["location"] + } + } + @@ -645,14 +658,16 @@ let’s also flesh the call number attached to the copy, and then flesh the bibliographic record attached to the call number. Fleshing fields in fields of objects returned by <literal>open-ils.cstore</literal> - request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ - { - "flesh": 2, - "flesh_fields": { - "acp": ["location", "call_number"], - "acn": ["record"] - } - } + +request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ + { + "flesh": 2, + "flesh_fields": { + "acp": ["location", "call_number"], + "acn": ["record"] + } + } +
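The flesh specification passed to open-ils.cstore.direct.*.retrieve is plain JSON, so a scripted client can assemble it as an ordinary data structure before handing it to whatever OpenSRF client library it uses. A minimal Python sketch — the build_flesh_params helper is hypothetical, not part of Evergreen; the srfsh sessions above are the authoritative form:

```python
import json

# Hypothetical helper (not Evergreen code): build the parameter list for a
# depth-2 fleshed copy retrieval. The first element is the copy ID, the
# second is the flesh specification shown in the srfsh example above.
def build_flesh_params(copy_id):
    flesh_spec = {
        "flesh": 2,
        "flesh_fields": {
            "acp": ["location", "call_number"],  # fields fleshed on the copy
            "acn": ["record"],                   # fields fleshed on the call number
        },
    }
    return [copy_id, flesh_spec]

# Serialize the parameters as they would travel over the OpenSRF gateway.
params = build_flesh_params(1)
print(json.dumps(params))
```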
@@ -696,14 +711,14 @@
- Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions
	 of the IDL required by the HTTP translator and gateway.
- We also need to adjust our JavaScript client to use the nifty new
	 objects that open-ils.resolver.resolve_holdings now returns.
- The best approach is to use the support in Evergreen’s Dojo extensions
	 to generate the JavaScript classes directly from the IDL XML file.
	 Accessing classes defined in the IDL via Fieldmapper
@@ -716,7 +731,7 @@
- fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to + fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to generate a list of class properties. diff --git a/1.6/development/introduction_to_sql.xml b/1.6/development/introduction_to_sql.xml index 5362212..fad37b5 100644 --- a/1.6/development/introduction_to_sql.xml +++ b/1.6/development/introduction_to_sql.xml @@ -36,14 +36,16 @@ worry if the INSERT statement is completely unfamiliar, we’ll talk more about the syntax of the insert statement later. <literal>actor.usr_note</literal> database table - evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value) - VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!'); + +evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value) + VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!'); - evergreen=# select id, usr, creator, pub, title, value from actor.usr_note; - id | usr | creator | pub | title | value - ----+-----+---------+-----+------------------+------------------------- - 1 | 1 | 1 | t | Who is this guy? | He's the administrator! - (1 rows) +evergreen=# select id, usr, creator, pub, title, value from actor.usr_note; + id | usr | creator | pub | title | value +----+-----+---------+-----+------------------+------------------------- + 1 | 1 | 1 | t | Who is this guy? | He's the administrator! +(1 rows) + PostgreSQL supports table inheritance, which lets you define tables that inherit the column definitions of a given parent table. A search of the data in @@ -100,14 +102,16 @@ the public schema. As a result, you might not find the object that you’re looking for if you don’t use the appropriate schema. 
Example: Creating a table without a specific schema
-	 evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT);
-	 CREATE TABLE
-	 evergreen=# \d foobar
-	 Table "public.foobar"
-	 Column | Type | Modifiers
-	 --------+------+-----------
-	 foo | text |
-	 bar | text |
+
+evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT);
+CREATE TABLE
+evergreen=# \d foobar
+ Table "public.foobar"
+ Column | Type | Modifiers
+--------+------+-----------
+ foo | text |
+ bar | text |
+
	 Example: Trying to access an unqualified table outside of the public schema
	 evergreen=# SELECT * FROM usr_note;
@@ -449,20 +453,22 @@
	 The actor.org_address table is a simple table in the Evergreen schema that
	 we can use as a concrete example of many of the properties of databases
	 that we have discussed so far.
-	 CREATE TABLE actor.org_address (
-	 id SERIAL PRIMARY KEY,
-	 valid BOOL NOT NULL DEFAULT TRUE,
-	 address_type TEXT NOT NULL DEFAULT 'MAILING',
-	 org_unit INT NOT NULL REFERENCES actor.org_unit (id)
-	 DEFERRABLE INITIALLY DEFERRED,
-	 street1 TEXT NOT NULL,
-	 street2 TEXT,
-	 city TEXT NOT NULL,
-	 county TEXT,
-	 state TEXT NOT NULL,
-	 country TEXT NOT NULL,
-	 post_code TEXT NOT NULL
-	 );
+
+CREATE TABLE actor.org_address (
+ id SERIAL PRIMARY KEY,
+ valid BOOL NOT NULL DEFAULT TRUE,
+ address_type TEXT NOT NULL DEFAULT 'MAILING',
+ org_unit INT NOT NULL REFERENCES actor.org_unit (id)
+ DEFERRABLE INITIALLY DEFERRED,
+ street1 TEXT NOT NULL,
+ street2 TEXT,
+ city TEXT NOT NULL,
+ county TEXT,
+ state TEXT NOT NULL,
+ country TEXT NOT NULL,
+ post_code TEXT NOT NULL
+);
+
@@ -520,31 +526,33 @@
	 To display the definition of a database object such as a table, issue the
	 command \d _object-name_. For example, to display the definition of the
	 actor.usr_note table:
-	 $ psql evergreen
-	 psql (8.4.1)
-	 Type "help" for help.
- evergreen=# \d actor.usr_note - Table "actor.usr_note" - Column | Type | Modifiers - -------------+--------------------------+------------------------------------------------------------- - id | bigint | not null default nextval('actor.usr_note_id_seq'::regclass) - usr | bigint | not null - creator | bigint | not null - create_date | timestamp with time zone | default now() - pub | boolean | not null default false - title | text | not null - value | text | not null - Indexes: - "usr_note_pkey" PRIMARY KEY, btree (id) - "actor_usr_note_creator_idx" btree (creator) - "actor_usr_note_usr_idx" btree (usr) - Foreign-key constraints: - "usr_note_creator_fkey" FOREIGN KEY (creator) REFERENCES actor.usr(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED - "usr_note_usr_fkey" FOREIGN KEY (usr) REFERENCES actor.usr(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED +evergreen=# \d actor.usr_note + Table "actor.usr_note" + Column | Type | Modifiers +-------------+--------------------------+------------------------------------------------------------- + id | bigint | not null default nextval('actor.usr_note_id_seq'::regclass) + usr | bigint | not null + creator | bigint | not null + create_date | timestamp with time zone | default now() + pub | boolean | not null default false + title | text | not null + value | text | not null +Indexes: + "usr_note_pkey" PRIMARY KEY, btree (id) + "actor_usr_note_creator_idx" btree (creator) + "actor_usr_note_usr_idx" btree (usr) +Foreign-key constraints: + "usr_note_creator_fkey" FOREIGN KEY (creator) REFERENCES actor.usr(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED + "usr_note_usr_fkey" FOREIGN KEY (usr) REFERENCES actor.usr(id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED - evergreen=# \q - $ +evergreen=# \q +$ + @@ -610,10 +618,12 @@ For example, to sort the rows returned from your actor.usr_address query by city, with county and then zip code as the tie breakers, issue the following query: - SELECT city, county, state - FROM 
actor.usr_address - ORDER BY city, county, post_code - ; + +SELECT city, county, state + FROM actor.usr_address + ORDER BY city, county, post_code +; + Filtering results with the WHERE clause @@ -626,47 +636,55 @@ For example, to restrict the results returned from your actor.usr_address query to only those rows containing a state value of Connecticut, issue the following query: - SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - ORDER BY city, county, post_code - ; + +SELECT city, county, state + FROM actor.usr_address + WHERE state = 'Connecticut' + ORDER BY city, county, post_code +; + You can include more conditions in the WHERE clause with the OR and AND operators. For example, to further restrict the results returned from your actor.usr_address query to only those rows where the state column contains a value of Connecticut and the city column contains a value of Hartford, issue the following query: - SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' - ORDER BY city, county, post_code - ; + +SELECT city, county, state + FROM actor.usr_address + WHERE state = 'Connecticut' + AND city = 'Hartford' + ORDER BY city, county, post_code +; + To return rows where the state is Connecticut and the city is Hartford or New Haven, you must use parentheses to explicitly group the city value conditions together, or else the database will evaluate the OR city = 'New Haven' clause entirely on its own and match all rows where the city column is New Haven, even though the state might not be Connecticut. 
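The precedence trap just described is easy to demonstrate outside Evergreen. Here is a self-contained sketch using SQLite from Python — the table and rows are invented for illustration, but AND binds more tightly than OR in PostgreSQL in exactly the same way:

```python
import sqlite3

# AND binds more tightly than OR, so without parentheses the last row
# (New Haven in the wrong state) leaks into the result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usr_address (city TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO usr_address VALUES (?, ?)",
    [
        ("Hartford", "Connecticut"),
        ("New Haven", "Connecticut"),
        ("New Haven", "Indiana"),  # should be excluded: wrong state
    ],
)

ungrouped = conn.execute(
    "SELECT city, state FROM usr_address "
    "WHERE state = 'Connecticut' AND city = 'Hartford' OR city = 'New Haven'"
).fetchall()

grouped = conn.execute(
    "SELECT city, state FROM usr_address "
    "WHERE state = 'Connecticut' AND (city = 'Hartford' OR city = 'New Haven')"
).fetchall()

print(len(ungrouped))  # 3 -- the Indiana row leaks in
print(len(grouped))    # 2 -- only Connecticut rows remain
```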
Trouble with OR - SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND city = 'Hartford' OR city = 'New Haven' - ORDER BY city, county, post_code - ; + +SELECT city, county, state + FROM actor.usr_address + WHERE state = 'Connecticut' + AND city = 'Hartford' OR city = 'New Haven' + ORDER BY city, county, post_code +; - -- Can return unwanted rows because the OR is not grouped! +-- Can return unwanted rows because the OR is not grouped! + Grouped OR’ed conditions - SELECT city, county, state - FROM actor.usr_address - WHERE state = 'Connecticut' - AND (city = 'Hartford' OR city = 'New Haven') - ORDER BY city, county, post_code - ; + +SELECT city, county, state + FROM actor.usr_address + WHERE state = 'Connecticut' + AND (city = 'Hartford' OR city = 'New Haven') + ORDER BY city, county, post_code +; - -- The parentheses ensure that the OR is applied to the cities, and the - -- state in either case must be 'Connecticut' +-- The parentheses ensure that the OR is applied to the cities, and the +-- state in either case must be 'Connecticut' + Comparison operators @@ -727,40 +745,46 @@ To return rows from a table where a given column is not NULL, use the IS NOT NULL comparison operator. Retrieving rows where a column is not <literal>NULL</literal> - SELECT id, first_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NOT NULL - ; + +SELECT id, first_given_name, family_name + FROM actor.usr + WHERE second_given_name IS NOT NULL +; + Similarly, to return rows from a table where a given column is NULL, use the IS NULL comparison operator. 
Retrieving rows where a column is <literal>NULL</literal> - SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL - ; + +SELECT id, first_given_name, second_given_name, family_name + FROM actor.usr + WHERE second_given_name IS NULL +; - id | first_given_name | second_given_name | family_name - ----+------------------+-------------------+---------------- - 1 | Administrator | | System Account - (1 row) + id | first_given_name | second_given_name | family_name +----+------------------+-------------------+---------------- + 1 | Administrator | | System Account +(1 row) + Notice that the NULL value in the output is displayed as empty space, indistinguishable from an empty string; this is the default display method in psql. You can change the behaviour of psql using the pset command: Changing the way <literal>NULL</literal> values are displayed in <literal>psql</literal> - evergreen=# \pset null '(null)' - Null display is '(null)'. + +evergreen=# \pset null '(null)' +Null display is '(null)'. - SELECT id, first_given_name, second_given_name, family_name - FROM actor.usr - WHERE second_given_name IS NULL - ; +SELECT id, first_given_name, second_given_name, family_name + FROM actor.usr + WHERE second_given_name IS NULL +; - id | first_given_name | second_given_name | family_name - ----+------------------+-------------------+---------------- - 1 | Administrator | (null) | System Account - (1 row) + id | first_given_name | second_given_name | family_name +----+------------------+-------------------+---------------- + 1 | Administrator | (null) | System Account +(1 row) + Database queries within programming languages such as Perl and C have special methods of checking for NULL values in returned results. 
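The point about client languages can be illustrated with Python's DB-API, where SQL NULL surfaces as the None sentinel, distinct from an empty string. A self-contained sketch using SQLite — the table and rows are invented for illustration; Evergreen itself runs on PostgreSQL, where client libraries behave the same way (Perl's DBI uses undef):

```python
import sqlite3

# SQL NULL comes back to Python as None, which is distinct from the empty
# string that psql displays for NULL by default.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usr (first_given_name TEXT, second_given_name TEXT)")
conn.execute("INSERT INTO usr VALUES ('Administrator', NULL)")
conn.execute("INSERT INTO usr VALUES ('Jane', '')")  # empty string, not NULL

rows = conn.execute(
    "SELECT first_given_name, second_given_name FROM usr ORDER BY rowid"
).fetchall()

for first, second in rows:
    if second is None:      # SQL NULL
        print(first, "-> NULL")
    elif second == "":      # a real (empty) value
        print(first, "-> empty string")
```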
@@ -776,26 +800,29 @@ For example, to change the last name of a user in the actor.usr table to L’estat, issue the following SQL: Escaping <literal>'</literal> in TEXT values - UPDATE actor.usr - SET family_name = 'L''estat' - WHERE profile IN ( - SELECT id - FROM permission.grp_tree - WHERE name = 'Vampire' - ) - ; - - When you retrieve the row from the database, the value is displayed with just - a single ' character: - SELECT id, family_name - FROM actor.usr - WHERE family_name = 'L''estat' - ; + +UPDATE actor.usr + SET family_name = 'L''estat' + WHERE profile IN ( + SELECT id + FROM permission.grp_tree + WHERE name = 'Vampire' + ) + ; + + When you retrieve the row from the database, the value is displayed with just + a single ' character: + +SELECT id, family_name + FROM actor.usr + WHERE family_name = 'L''estat' +; - id | family_name - ----+------------- - 1 | L'estat - (1 row) + id | family_name +----+------------- + 1 | L'estat +(1 row) + Grouping and eliminating results with the GROUP BY and HAVING clauses @@ -804,101 +831,109 @@ results for a range of values in a single query, rather than requiring you to issue one query per target value. Returning unique results of a single column with <literal>GROUP BY</literal> - SELECT grp - FROM permission.grp_perm_map - GROUP BY grp - ORDER BY grp; + +SELECT grp + FROM permission.grp_perm_map + GROUP BY grp + ORDER BY grp; - grp - -----+ - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 10 - (8 rows) + grp +-----+ + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 10 +(8 rows) + While GROUP BY can be useful for a single column, it is more often used to return the distinct results across multiple columns. 
For example, the following query shows us which groups have permissions at each depth in the library hierarchy: Returning unique results of multiple columns with <literal>GROUP BY</literal> - SELECT grp, depth - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; + +SELECT grp, depth + FROM permission.grp_perm_map + GROUP BY grp, depth + ORDER BY depth, grp; - grp | depth - -----+------- - 1 | 0 - 2 | 0 - 3 | 0 - 4 | 0 - 5 | 0 - 10 | 0 - 3 | 1 - 4 | 1 - 5 | 1 - 6 | 1 - 7 | 1 - 10 | 1 - 3 | 2 - 4 | 2 - 10 | 2 - (15 rows) + grp | depth +-----+------- + 1 | 0 + 2 | 0 + 3 | 0 + 4 | 0 + 5 | 0 + 10 | 0 + 3 | 1 + 4 | 1 + 5 | 1 + 6 | 1 + 7 | 1 + 10 | 1 + 3 | 2 + 4 | 2 + 10 | 2 +(15 rows) + Extending this further, you can use the COUNT() aggregate function to also return the number of times each unique combination of grp and depth appears in the table. Yes, this is a sneak peek at the use of aggregate functions! Keeners. Counting unique column combinations with <literal>GROUP BY</literal> - SELECT grp, depth, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp, depth - ORDER BY depth, grp; + +SELECT grp, depth, COUNT(grp) + FROM permission.grp_perm_map + GROUP BY grp, depth + ORDER BY depth, grp; - grp | depth | count - -----+-------+------- - 1 | 0 | 6 - 2 | 0 | 2 - 3 | 0 | 45 - 4 | 0 | 3 - 5 | 0 | 5 - 10 | 0 | 1 - 3 | 1 | 3 - 4 | 1 | 4 - 5 | 1 | 1 - 6 | 1 | 9 - 7 | 1 | 5 - 10 | 1 | 10 - 3 | 2 | 24 - 4 | 2 | 8 - 10 | 2 | 7 - (15 rows) + grp | depth | count +-----+-------+------- + 1 | 0 | 6 + 2 | 0 | 2 + 3 | 0 | 45 + 4 | 0 | 3 + 5 | 0 | 5 + 10 | 0 | 1 + 3 | 1 | 3 + 4 | 1 | 4 + 5 | 1 | 1 + 6 | 1 | 9 + 7 | 1 | 5 + 10 | 1 | 10 + 3 | 2 | 24 + 4 | 2 | 8 + 10 | 2 | 7 +(15 rows) + You can use the WHERE clause to restrict the returned results before grouping is applied to the results. The following query restricts the results to those rows that have a depth of 0. 
Using the <literal>WHERE</literal> clause with <literal>GROUP BY</literal> - SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - WHERE depth = 0 - GROUP BY grp - ORDER BY 2 DESC - ; + +SELECT grp, COUNT(grp) + FROM permission.grp_perm_map + WHERE depth = 0 + GROUP BY grp + ORDER BY 2 DESC +; - grp | count - -----+------- - 3 | 45 - 1 | 6 - 5 | 5 - 4 | 3 - 2 | 2 - 10 | 1 - (6 rows) + grp | count +-----+------- + 3 | 45 + 1 | 6 + 5 | 5 + 4 | 3 + 2 | 2 + 10 | 1 +(6 rows) + To restrict results after grouping has been applied to the rows, use the HAVING clause; this is typically used to restrict results based on @@ -906,21 +941,23 @@ the following query restricts the returned rows to those that have more than 5 occurrences of the same value for grp in the table. <literal>GROUP BY</literal> restricted by a <literal>HAVING</literal> clause - SELECT grp, COUNT(grp) - FROM permission.grp_perm_map - GROUP BY grp - HAVING COUNT(grp) > 5 - ; + +SELECT grp, COUNT(grp) + FROM permission.grp_perm_map + GROUP BY grp + HAVING COUNT(grp) > 5 +; - grp | count - -----+------- - 6 | 9 - 4 | 15 - 5 | 6 - 1 | 6 - 3 | 72 - 10 | 18 - (6 rows) + grp | count +-----+------- + 6 | 9 + 4 | 15 + 5 | 6 + 1 | 6 + 3 | 72 + 10 | 18 +(6 rows) + @@ -934,29 +971,31 @@ them, then applying the DISTINCT keyword might be a sign that you are papering over a real problem. 
Returning unique results of multiple columns with <literal>DISTINCT</literal>
-	 SELECT DISTINCT grp, depth
-	 FROM permission.grp_perm_map
-	 ORDER BY depth, grp
-	 ;
+
+SELECT DISTINCT grp, depth
+ FROM permission.grp_perm_map
+ ORDER BY depth, grp
+;
-	 grp | depth
-	 -----+-------
-	 1 | 0
-	 2 | 0
-	 3 | 0
-	 4 | 0
-	 5 | 0
-	 10 | 0
-	 3 | 1
-	 4 | 1
-	 5 | 1
-	 6 | 1
-	 7 | 1
-	 10 | 1
-	 3 | 2
-	 4 | 2
-	 10 | 2
-	 (15 rows)
+ grp | depth
+-----+-------
+ 1 | 0
+ 2 | 0
+ 3 | 0
+ 4 | 0
+ 5 | 0
+ 10 | 0
+ 3 | 1
+ 4 | 1
+ 5 | 1
+ 6 | 1
+ 7 | 1
+ 10 | 1
+ 3 | 2
+ 4 | 2
+ 10 | 2
+(15 rows)
+
@@ -966,21 +1005,23 @@
	 example, in the following query we list the five most frequently used
	 circulation modifiers:
	 Using the <literal>LIMIT</literal> clause to restrict results
-	 SELECT circ_modifier, COUNT(circ_modifier)
-	 FROM asset.copy
-	 GROUP BY circ_modifier
-	 ORDER BY 2 DESC
-	 LIMIT 5
-	 ;
+
+SELECT circ_modifier, COUNT(circ_modifier)
+ FROM asset.copy
+ GROUP BY circ_modifier
+ ORDER BY 2 DESC
+ LIMIT 5
+;
-	 circ_modifier | count
-	 ---------------+--------
-	 CIRC | 741995
-	 BOOK | 636199
-	 SER | 265906
-	 DOC | 191598
-	 LAW MONO | 126627
-	 (5 rows)
+ circ_modifier | count
+---------------+--------
+ CIRC | 741995
+ BOOK | 636199
+ SER | 265906
+ DOC | 191598
+ LAW MONO | 126627
+(5 rows)
+
	 When you use the LIMIT clause to restrict the total number of rows returned
	 by your query, you can also use the OFFSET clause to determine which subset
@@ -989,22 +1030,24 @@
	 In the following example, we use the OFFSET clause to get results 6 through
	 10 from the same query that we previously executed.
Using the <literal>OFFSET</literal> clause to return a specific subset of rows - SELECT circ_modifier, COUNT(circ_modifier) - FROM asset.copy - GROUP BY circ_modifier - ORDER BY 2 DESC - LIMIT 5 - OFFSET 5 - ; + +SELECT circ_modifier, COUNT(circ_modifier) + FROM asset.copy + GROUP BY circ_modifier + ORDER BY 2 DESC + LIMIT 5 + OFFSET 5 +; - circ_modifier | count - ---------------+-------- - LAW SERIAL | 102758 - DOCUMENTS | 86215 - BOOK_WEB | 63786 - MFORM SER | 39917 - REF | 34380 - (5 rows) + circ_modifier | count +---------------+-------- + LAW SERIAL | 102758 + DOCUMENTS | 86215 + BOOK_WEB | 63786 + MFORM SER | 39917 + REF | 34380 +(5 rows) + @@ -1025,31 +1068,33 @@ used to convert text values to upper-case, modifies the results in the following set of queries: Using the UPPER() scalar function to convert text values to upper-case - -- First, without the UPPER() function for comparison - SELECT shortname, name - FROM actor.org_unit - WHERE id < 4 - ; + +-- First, without the UPPER() function for comparison +SELECT shortname, name + FROM actor.org_unit + WHERE id < 4 +; - shortname | name - -----------+----------------------- - CONS | Example Consortium - SYS1 | Example System 1 - SYS2 | Example System 2 - (3 rows) + shortname | name +-----------+----------------------- + CONS | Example Consortium + SYS1 | Example System 1 + SYS2 | Example System 2 +(3 rows) - -- Now apply the UPPER() function to the name column - SELECT shortname, UPPER(name) - FROM actor.org_unit - WHERE id < 4 - ; +-- Now apply the UPPER() function to the name column +SELECT shortname, UPPER(name) + FROM actor.org_unit + WHERE id < 4 +; - shortname | upper - -----------+-------------------- - CONS | EXAMPLE CONSORTIUM - SYS1 | EXAMPLE SYSTEM 1 - SYS2 | EXAMPLE SYSTEM 2 - (3 rows) + shortname | upper +-----------+-------------------- + CONS | EXAMPLE CONSORTIUM + SYS1 | EXAMPLE SYSTEM 1 + SYS2 | EXAMPLE SYSTEM 2 +(3 rows) + There are so many scalar functions in PostgreSQL that we cannot cover 
them all here, but we can list some of the most commonly used functions:
@@ -1133,15 +1178,17 @@
	 returned by the main SELECT statement to only those locations that have an
	 opac_visible value of TRUE:
	 Sub-select example
-	 SELECT call_number
-	 FROM asset.copy
-	 WHERE deleted IS FALSE
-	 AND location IN (
-	 SELECT id
-	 FROM asset.copy_location
-	 WHERE opac_visible IS TRUE
-	 )
-	 ;
+
+SELECT call_number
+ FROM asset.copy
+ WHERE deleted IS FALSE
+ AND location IN (
+ SELECT id
+ FROM asset.copy_location
+ WHERE opac_visible IS TRUE
+ )
+;
+
	 Sub-selects can be an approachable way of breaking down a problem that
	 requires matching values between different tables, and often result in
@@ -1182,37 +1229,39 @@
	 we have to fully qualify the column names in our queries with the schema and
	 table names.
	 A simple inner join
-	 SELECT *
-	 FROM actor.usr
-	 INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id
-	 WHERE actor.org_unit.shortname = 'CONS'
-	 ;
+
+SELECT *
+ FROM actor.usr
+ INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id
+ WHERE actor.org_unit.shortname = 'CONS'
+;
-	 -[ RECORD 1 ]------------------+---------------------------------
-	 id | 1
-	 card | 1
-	 profile | 1
-	 usrname | admin
-	 email |
-	 ...
-	 mailing_address |
-	 billing_address |
-	 home_ou | 1
-	 ...
-	 claims_never_checked_out_count | 0
-	 id | 1
-	 parent_ou |
-	 ou_type | 1
-	 ill_address | 1
-	 holds_address | 1
-	 mailing_address | 1
-	 billing_address | 1
-	 shortname | CONS
-	 name | Example Consortium
-	 email |
-	 phone |
-	 opac_visible | t
-	 fiscal_calendar | 1
+-[ RECORD 1 ]------------------+---------------------------------
+id | 1
+card | 1
+profile | 1
+usrname | admin
+email |
+...
+mailing_address |
+billing_address |
+home_ou | 1
+...
+claims_never_checked_out_count | 0 +id | 1 +parent_ou | +ou_type | 1 +ill_address | 1 +holds_address | 1 +mailing_address | 1 +billing_address | 1 +shortname | CONS +name | Example Consortium +email | +phone | +opac_visible | t +fiscal_calendar | 1 + Of course, you do not have to return every column from the joined tables; you can (and should) continue to specify only the columns that you want to @@ -1222,21 +1271,23 @@ joining the actor.org_unit table to give us access to the user’s home library: Borrower Count by Profile (Adult, Child, etc)/Library - SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name) - FROM actor.usr - INNER JOIN permission.grp_tree - ON actor.usr.profile = permission.grp_tree.id - INNER JOIN actor.org_unit - ON actor.org_unit.id = actor.usr.home_ou - WHERE actor.usr.deleted IS FALSE - GROUP BY permission.grp_tree.name, actor.org_unit.name - ORDER BY actor.org_unit.name, permission.grp_tree.name - ; + +SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name) + FROM actor.usr + INNER JOIN permission.grp_tree + ON actor.usr.profile = permission.grp_tree.id + INNER JOIN actor.org_unit + ON actor.org_unit.id = actor.usr.home_ou + WHERE actor.usr.deleted IS FALSE + GROUP BY permission.grp_tree.name, actor.org_unit.name + ORDER BY actor.org_unit.name, permission.grp_tree.name +; - name | name | count - -------+--------------------+------- - Users | Example Consortium | 1 - (1 row) + name | name | count +-------+--------------------+------- + Users | Example Consortium | 1 +(1 row) + @@ -1257,21 +1308,23 @@ example, we can write the previous INNER JOIN statement example using aliases instead of fully-qualified identifiers: Borrower Count by Profile (using aliases) - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE 
au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name - ; + +SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; - Profile | Library | Count - ---------+--------------------+------- - Users | Example Consortium | 1 - (1 row) + Profile | Library | Count +---------+--------------------+------- + Users | Example Consortium | 1 +(1 row) + A nice side effect of declaring an alias for your columns is that the alias is used as the column header in the results table. The previous version of @@ -1307,65 +1360,73 @@ Base tables for the OUTER JOIN examples - SELECT * FROM aaa; + +SELECT * FROM aaa; - id | stuff - ----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - (5 rows) + id | stuff +----+------- + 1 | one + 2 | two + 3 | three + 4 | four + 5 | five +(5 rows) - SELECT * FROM bbb; +SELECT * FROM bbb; - id | stuff | foo - ----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix - (4 rows) + id | stuff | foo +----+-------+---------- + 1 | one | oneone + 2 | two | twotwo + 5 | five | fivefive + 6 | six | sixsix +(4 rows) + Example of a LEFT OUTER JOIN - SELECT * FROM aaa - LEFT OUTER JOIN bbb ON aaa.id = bbb.id - ; - id | stuff | id | stuff | foo - ----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive - (5 rows) + +SELECT * FROM aaa + LEFT OUTER JOIN bbb ON aaa.id = bbb.id +; + id | stuff | id | stuff | foo +----+-------+----+-------+---------- + 1 | one | 1 | one | oneone + 2 | two | 2 | two | twotwo + 3 | three | | | + 4 | four | | | + 5 | five | 5 | five | fivefive +(5 rows) + Example of a RIGHT OUTER JOIN - SELECT * FROM aaa 
- RIGHT OUTER JOIN bbb ON aaa.id = bbb.id - ; - id | stuff | id | stuff | foo - ----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix - (4 rows) + +SELECT * FROM aaa + RIGHT OUTER JOIN bbb ON aaa.id = bbb.id +; + id | stuff | id | stuff | foo +----+-------+----+-------+---------- + 1 | one | 1 | one | oneone + 2 | two | 2 | two | twotwo + 5 | five | 5 | five | fivefive + | | 6 | six | sixsix +(4 rows) + Example of a FULL OUTER JOIN - SELECT * FROM aaa - FULL OUTER JOIN bbb ON aaa.id = bbb.id - ; - id | stuff | id | stuff | foo - ----+-------+----+-------+---------- - 1 | one | 1 | one | oneone - 2 | two | 2 | two | twotwo - 3 | three | | | - 4 | four | | | - 5 | five | 5 | five | fivefive - | | 6 | six | sixsix - (6 rows) + +SELECT * FROM aaa + FULL OUTER JOIN bbb ON aaa.id = bbb.id +; + id | stuff | id | stuff | foo +----+-------+----+-------+---------- + 1 | one | 1 | one | oneone + 2 | two | 2 | two | twotwo + 3 | three | | | + 4 | four | | | + 5 | five | 5 | five | fivefive + | | 6 | six | sixsix +(6 rows) + @@ -1382,26 +1443,28 @@ of columns with compatible data types: the union, intersection, and difference operators. Base tables for the set operation examples - SELECT * FROM aaa; + +SELECT * FROM aaa; - id | stuff - ----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - (5 rows) + id | stuff + ----+------- + 1 | one + 2 | two + 3 | three + 4 | four + 5 | five + (5 rows) - SELECT * FROM bbb; +SELECT * FROM bbb; - id | stuff | foo - ----+-------+---------- - 1 | one | oneone - 2 | two | twotwo - 5 | five | fivefive - 6 | six | sixsix - (4 rows) + id | stuff | foo + ----+-------+---------- + 1 | one | oneone + 2 | two | twotwo + 5 | five | fivefive + 6 | six | sixsix +(4 rows) + Union @@ -1410,29 +1473,31 @@ does not return any duplicate rows. To return duplicate rows, use the UNION ALL operator. 
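The difference between UNION and UNION ALL can be checked with a small standalone sketch — SQLite from Python here, with tables modelled loosely on the aaa/bbb examples; PostgreSQL behaves the same way:

```python
import sqlite3

# UNION removes duplicate rows from the combined result set;
# UNION ALL keeps every row from both sides.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aaa (id INTEGER, stuff TEXT)")
conn.execute("CREATE TABLE bbb (id INTEGER, stuff TEXT)")
conn.executemany("INSERT INTO aaa VALUES (?, ?)",
                 [(1, "one"), (2, "two"), (3, "three")])
conn.executemany("INSERT INTO bbb VALUES (?, ?)",
                 [(1, "one"), (6, "six")])  # (1, 'one') appears in both tables

union = conn.execute(
    "SELECT id, stuff FROM aaa UNION SELECT id, stuff FROM bbb ORDER BY 1"
).fetchall()
union_all = conn.execute(
    "SELECT id, stuff FROM aaa UNION ALL SELECT id, stuff FROM bbb ORDER BY 1"
).fetchall()

print(len(union))      # 4 -- the shared (1, 'one') row appears once
print(len(union_all))  # 5 -- duplicates are kept
```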
Example of a UNION set operation - -- The parentheses are not required, but are intended to help - -- illustrate the sets participating in the set operation - ( - SELECT id, stuff - FROM aaa - ) - UNION - ( - SELECT id, stuff - FROM bbb - ) - ORDER BY 1 - ; + +-- The parentheses are not required, but are intended to help +-- illustrate the sets participating in the set operation +( + SELECT id, stuff + FROM aaa +) +UNION +( + SELECT id, stuff + FROM bbb +) +ORDER BY 1 +; - id | stuff - ----+------- - 1 | one - 2 | two - 3 | three - 4 | four - 5 | five - 6 | six - (6 rows) + id | stuff +----+------- + 1 | one + 2 | two + 3 | three + 4 | four + 5 | five + 6 | six +(6 rows) + @@ -1441,24 +1506,26 @@ both the left-hand and right-hand tables. To return duplicate rows, use the INTERSECT ALL operator. Example of an INTERSECT set operation - ( - SELECT id, stuff - FROM aaa - ) - INTERSECT - ( - SELECT id, stuff - FROM bbb - ) - ORDER BY 1 - ; + +( + SELECT id, stuff + FROM aaa +) +INTERSECT +( + SELECT id, stuff + FROM bbb +) +ORDER BY 1 +; - id | stuff - ----+------- - 1 | one - 2 | two - 5 | five - (3 rows) + id | stuff +----+------- + 1 | one + 2 | two + 5 | five +(3 rows) + @@ -1467,42 +1534,44 @@ exist in the right-hand table. You are effectively subtracting the common rows from the left-hand table. 
Example of an EXCEPT set operation - ( - SELECT id, stuff - FROM aaa - ) - EXCEPT - ( - SELECT id, stuff - FROM bbb - ) - ORDER BY 1 - ; + +( + SELECT id, stuff + FROM aaa +) +EXCEPT +( + SELECT id, stuff + FROM bbb +) +ORDER BY 1 +; - id | stuff - ----+------- - 3 | three - 4 | four - (2 rows) + id | stuff +----+------- + 3 | three + 4 | four +(2 rows) - -- Order matters: switch the left-hand and right-hand tables - -- and you get a different result - ( - SELECT id, stuff - FROM bbb - ) - EXCEPT - ( - SELECT id, stuff - FROM aaa - ) - ORDER BY 1 - ; +-- Order matters: switch the left-hand and right-hand tables +-- and you get a different result +( + SELECT id, stuff + FROM bbb +) +EXCEPT +( + SELECT id, stuff + FROM aaa +) +ORDER BY 1 +; - id | stuff - ----+------- - 6 | six - (1 row) + id | stuff +----+------- + 6 | six +(1 row) + @@ -1513,17 +1582,19 @@ and a SELECT statement on which the view is built. The following example creates a view based on our borrower profile count: Creating a view - CREATE VIEW actor.borrower_profile_count AS - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name - ; + +CREATE VIEW actor.borrower_profile_count AS + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + When you subsequently select results from the view, you can apply additional WHERE clauses to filter the results, or ORDER BY clauses to change the @@ -1533,27 +1604,29 @@ Then we issue a SELECT statement with a WHERE clause to further filter the results. 
Selecting results from a view - SELECT * FROM actor.borrower_profile_count; + +SELECT * FROM actor.borrower_profile_count; - Profile | Library | Count - ----------------------------+----------------------------+------- - Faculty | University Library | 208 - Graduate | University Library | 16 - Patrons | University Library | 62 - ... + Profile | Library | Count +----------------------------+----------------------------+------- + Faculty | University Library | 208 + Graduate | University Library | 16 + Patrons | University Library | 62 +... - -- You can still filter your results with WHERE clauses - SELECT * - FROM actor.borrower_profile_count - WHERE "Profile" = 'Faculty'; +-- You can still filter your results with WHERE clauses +SELECT * + FROM actor.borrower_profile_count + WHERE "Profile" = 'Faculty'; - Profile | Library | Count - ---------+----------------------------+------- - Faculty | University Library | 208 - Faculty | College Library | 64 - Faculty | College Library 2 | 102 - Faculty | University Library 2 | 776 - (4 rows) + Profile | Library | Count +---------+----------------------------+------- + Faculty | University Library | 208 + Faculty | College Library | 64 + Faculty | College Library 2 | 102 + Faculty | University Library 2 | 776 +(4 rows) + @@ -1599,17 +1672,19 @@ row in the biblio.record_entry table with every row in the metabib.full_rec view: Query plan for a terrible query - EXPLAIN SELECT * - FROM biblio.record_entry - FULL OUTER JOIN metabib.full_rec ON 1=1 - ; + +EXPLAIN SELECT * + FROM biblio.record_entry + FULL OUTER JOIN metabib.full_rec ON 1=1 +; - QUERY PLAN - -------------------------------------------------------------------------------// - Merge Full Join (cost=0.00..4959156437783.60 rows=132415734100864 width=1379) - -> Seq Scan on record_entry (cost=0.00..400634.16 rows=2013416 width=1292) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - (3 rows) + QUERY PLAN 
+-------------------------------------------------------------------------------// + Merge Full Join (cost=0.00..4959156437783.60 rows=132415734100864 width=1379) + -> Seq Scan on record_entry (cost=0.00..400634.16 rows=2013416 width=1292) + -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) +(3 rows) + This query plan shows that the query would return 132415734100864 rows, and it plans to accomplish what you asked for by sequentially scanning (Seq Scan) @@ -1618,43 +1693,47 @@ the left-hand table with every row in the right-hand table and take the saner approach of using an INNER JOIN where the join condition is on the record ID. Query plan for a less terrible query - EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; - QUERY PLAN - ----------------------------------------------------------------------------------------// - Hash Join (cost=750229.86..5829273.98 rows=65766704 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=400634.16..400634.16 rows=2013416 width=1292) - -> Seq Scan on record_entry bre (cost=0.00..400634.16 rows=2013416 width=1292) - (5 rows) + +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; + QUERY PLAN +----------------------------------------------------------------------------------------// + Hash Join (cost=750229.86..5829273.98 rows=65766704 width=1379) + Hash Cond: (real_full_rec.record = bre.id) + -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) + -> Hash (cost=400634.16..400634.16 rows=2013416 width=1292) + -> Seq Scan on record_entry bre (cost=0.00..400634.16 rows=2013416 width=1292) +(5 rows) + This time, we will return 65766704 rows - still way too many rows. We forgot to include a WHERE clause to limit the results to something meaningful. 
In the following example, we will limit the results to deleted records that were modified in the last month. Query plan for a realistic query - EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) - ; + +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id + WHERE bre.deleted IS TRUE + AND DATE_TRUNC('MONTH', bre.edit_date) > + DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) +; - QUERY PLAN - ----------------------------------------------------------------------------------------// - Hash Join (cost=5058.86..2306218.81 rows=201669 width=1379) - Hash Cond: (real_full_rec.record = bre.id) - -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) - -> Hash (cost=4981.69..4981.69 rows=6174 width=1292) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) - > date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) - (7 rows) + QUERY PLAN +----------------------------------------------------------------------------------------// + Hash Join (cost=5058.86..2306218.81 rows=201669 width=1379) + Hash Cond: (real_full_rec.record = bre.id) + -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) + -> Hash (cost=4981.69..4981.69 rows=6174 width=1292) + -> Index Scan using biblio_record_entry_deleted on record_entry bre + (cost=0.00..4981.69 rows=6174 width=1292) + Index Cond: (deleted = true) + Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) + > date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) +(7 rows) + We can see that the number of rows returned is now only 201669; that’s something we can work with. 
Also, the overall cost of the query is 2306218, @@ -1667,30 +1746,32 @@ flattened MARC subfields is a fairly common operation, we could create a new index and see if that speeds up our query plan. Query plan with optimized access via a new index - -- This index will take a long time to create on a large database - -- of bibliographic records - CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); + +-- This index will take a long time to create on a large database +-- of bibliographic records +CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); - EXPLAIN SELECT * - FROM biblio.record_entry bre - INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id - WHERE bre.deleted IS TRUE - AND DATE_TRUNC('MONTH', bre.edit_date) > - DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) - ; +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id + WHERE bre.deleted IS TRUE + AND DATE_TRUNC('MONTH', bre.edit_date) > + DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) +; - QUERY PLAN - ----------------------------------------------------------------------------------------// - Nested Loop (cost=0.00..1558330.46 rows=201669 width=1379) - -> Index Scan using biblio_record_entry_deleted on record_entry bre - (cost=0.00..4981.69 rows=6174 width=1292) - Index Cond: (deleted = true) - Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) > - date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) - -> Index Scan using bib_record_idx on real_full_rec - (cost=0.00..240.89 rows=850 width=87) - Index Cond: (real_full_rec.record = bre.id) - (6 rows) + QUERY PLAN +----------------------------------------------------------------------------------------// + Nested Loop (cost=0.00..1558330.46 rows=201669 width=1379) + -> Index Scan using biblio_record_entry_deleted on record_entry bre + (cost=0.00..4981.69 rows=6174 width=1292) + Index Cond: (deleted = true) + Filter: ((deleted IS TRUE) AND 
(date_trunc('MONTH'::text, edit_date) >
+ date_trunc('MONTH'::text, (now() - '1 mon'::interval))))
+ -> Index Scan using bib_record_idx on real_full_rec
+ (cost=0.00..240.89 rows=850 width=87)
+ Index Cond: (real_full_rec.record = bre.id)
+(6 rows)
+

We can see that the resulting number of rows is still the same (201669), but
the execution estimate has dropped to 1558330 because the query planner can
@@ -1723,19 +1804,21 @@
Of course, as with the rest of SQL, you can replace individual column values
with one or more sub-selects:
Inserting rows using sub-selects instead of integers
- INSERT INTO permission.usr_grp_map (usr, grp)
- VALUES (
- (SELECT id FROM actor.usr
- WHERE family_name = 'Scott' AND first_given_name = 'Daniel'),
- (SELECT id FROM permission.grp_tree
- WHERE name = 'Local System Administrator')
- ), (
- (SELECT id FROM actor.usr
- WHERE family_name = 'Scott' AND first_given_name = 'Daniel'),
- (SELECT id FROM permission.grp_tree
- WHERE name = 'Circulator')
- )
- ;
+
+INSERT INTO permission.usr_grp_map (usr, grp)
+ VALUES (
+ (SELECT id FROM actor.usr
+ WHERE family_name = 'Scott' AND first_given_name = 'Daniel'),
+ (SELECT id FROM permission.grp_tree
+ WHERE name = 'Local System Administrator')
+ ), (
+ (SELECT id FROM actor.usr
+ WHERE family_name = 'Scott' AND first_given_name = 'Daniel'),
+ (SELECT id FROM permission.grp_tree
+ WHERE name = 'Circulator')
+ )
+;
+
@@ -1751,13 +1834,15 @@
the grp column value based on the id column values returned from
permission.grp_tree:
Inserting rows via a <literal>SELECT</literal> statement
- INSERT INTO permission.usr_grp_map (usr, grp)
- SELECT 1, id
- FROM permission.grp_tree
- WHERE id > 2
- ;
+
+INSERT INTO permission.usr_grp_map (usr, grp)
+ SELECT 1, id
+ FROM permission.grp_tree
+ WHERE id > 2
+;
- INSERT 0 6
+INSERT 0 6
+
@@ -1769,23 +1854,24 @@
permission.grp_perm_map table where the permission maps to
UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators:
Deleting rows
from a table - DELETE FROM permission.grp_perm_map - WHERE grp IN ( - SELECT id - FROM permission.grp_tree - WHERE name != 'Local System Administrator' - ) AND perm = ( - SELECT id - FROM permission.perm_list - WHERE code = 'UPDATE_ORG_UNIT_CLOSING' - ) - ; + +DELETE FROM permission.grp_perm_map + WHERE grp IN ( + SELECT id + FROM permission.grp_tree + WHERE name != 'Local System Administrator' + ) AND perm = ( + SELECT id + FROM permission.perm_list + WHERE code = 'UPDATE_ORG_UNIT_CLOSING' + ) +; + There are two main reasons that a DELETE statement may not actually delete rows from a table, even when the rows meet the conditional clause. - If the row contains a value that is the target of a relational constraint, for example, if another table has a foreign key pointing at your target @@ -1831,102 +1917,114 @@ Monthly circulation stats by collection code / library Monthly Circulation Stats by Collection Code/Library - SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" - FROM asset.copy ac - INNER JOIN asset.copy_location acl ON ac.location = acl.id - INNER JOIN action.circulation acirc ON acirc.target_copy = ac.id - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name, 1 - ; + +SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" + FROM asset.copy ac + INNER JOIN asset.copy_location acl ON ac.location = acl.id + INNER JOIN action.circulation acirc ON acirc.target_copy = ac.id + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP 
BY aou.name, acl.name + ORDER BY aou.name, acl.name, 1 +; + Monthly circulation stats by borrower stat / library Monthly Circulation Stats by Borrower Stat/Library - SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN actor.stat_cat_entry_usr_map asceum ON asceum.target_usr = acirc.usr - INNER JOIN actor.stat_cat astat ON asceum.stat_cat = astat.id - WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND astat.name = 'Preferred language' - AND acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP BY aou.name, asceum.stat_cat_entry - ORDER BY aou.name, asceum.stat_cat_entry, 1 - ; + +SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" + FROM action.circulation acirc + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + INNER JOIN actor.stat_cat_entry_usr_map asceum ON asceum.target_usr = acirc.usr + INNER JOIN actor.stat_cat astat ON asceum.stat_cat = astat.id + WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND astat.name = 'Preferred language' + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP BY aou.name, asceum.stat_cat_entry + ORDER BY aou.name, asceum.stat_cat_entry, 1 +; + Monthly intralibrary loan stats by library Monthly Intralibrary Loan Stats by Library - SELECT aou.name AS "Library", COUNT(acirc.id) - FROM action.circulation acirc - INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id - INNER JOIN asset.copy ac ON acirc.target_copy = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - WHERE acirc.circ_lib != acn.owning_lib - AND DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') - AND 
acirc.desk_renewal IS FALSE - AND acirc.opac_renewal IS FALSE - AND acirc.phone_renewal IS FALSE - GROUP by aou.name - ORDER BY aou.name, 2 - ; + +SELECT aou.name AS "Library", COUNT(acirc.id) + FROM action.circulation acirc + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + INNER JOIN asset.copy ac ON acirc.target_copy = ac.id + INNER JOIN asset.call_number acn ON ac.call_number = acn.id + WHERE acirc.circ_lib != acn.owning_lib + AND DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP by aou.name + ORDER BY aou.name, 2 +; + Monthly borrowers added by profile (adult, child, etc) / library Monthly Borrowers Added by Profile (Adult, Child, etc)/Library - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - AND DATE_TRUNC('MONTH', au.create_date) = DATE_TRUNC('MONTH', NOW() - '3 months'::interval) - GROUP BY pgt.name, aou.name - ORDER BY aou.name, pgt.name - ; + +SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + AND DATE_TRUNC('MONTH', au.create_date) = DATE_TRUNC('MONTH', NOW() - '3 months'::interval) + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + Borrower count by profile (adult, child, etc) / library Borrower Count by Profile (Adult, Child, etc)/Library - SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" - FROM actor.usr au - INNER JOIN permission.grp_tree pgt - ON au.profile = pgt.id - INNER JOIN actor.org_unit aou - ON aou.id = au.home_ou - WHERE au.deleted IS FALSE - GROUP 
BY pgt.name, aou.name - ORDER BY aou.name, pgt.name - ; + +SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + Monthly items added by collection / library - We define a "collection" as a shelving location in Evergreen. + We define a collection as a shelving location in Evergreen. Monthly Items Added by Collection/Library - SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) - FROM actor.org_unit aou - INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id - INNER JOIN asset.copy ac ON ac.call_number = acn.id - INNER JOIN asset.copy_location acl ON ac.location = acl.id - WHERE ac.deleted IS FALSE - AND acn.deleted IS FALSE - AND DATE_TRUNC('MONTH', ac.create_date) = DATE_TRUNC('MONTH', NOW() - '1 month'::interval) - GROUP BY aou.name, acl.name - ORDER BY aou.name, acl.name - ; + +SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) + FROM actor.org_unit aou + INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id + INNER JOIN asset.copy ac ON ac.call_number = acn.id + INNER JOIN asset.copy_location acl ON ac.location = acl.id + WHERE ac.deleted IS FALSE + AND acn.deleted IS FALSE + AND DATE_TRUNC('MONTH', ac.create_date) = DATE_TRUNC('MONTH', NOW() - '1 month'::interval) + GROUP BY aou.name, acl.name + ORDER BY aou.name, acl.name +; + @@ -1937,48 +2035,50 @@ bibliographic record ID and display the title / author information for those records that have more than a given threshold of holds. 
Hold Purchase Alert by Library - -- Title holds - SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) - FROM - ( - ( - SELECT target, request_lib - FROM action.hold_request - WHERE hold_type = 'T' - AND fulfillment_time IS NULL - AND cancel_time IS NULL - ) - UNION ALL - -- Volume holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.call_number acn ON ahr.target = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'V' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - UNION ALL - -- Copy holds - ( - SELECT bre.id, request_lib - FROM action.hold_request ahr - INNER JOIN asset.copy ac ON ahr.target = ac.id - INNER JOIN asset.call_number acn ON ac.call_number = acn.id - INNER JOIN biblio.record_entry bre ON acn.record = bre.id - WHERE ahr.hold_type = 'C' - AND ahr.fulfillment_time IS NULL - AND ahr.cancel_time IS NULL - ) - ) AS all_holds(bib_id, request_lib) - INNER JOIN reporter.materialized_simple_record rmsr - INNER JOIN actor.org_unit aou ON aou.id = all_holds.request_lib - ON rmsr.id = all_holds.bib_id - GROUP BY all_holds.bib_id, aou.name, rmsr.id, rmsr.title, rmsr.author - HAVING COUNT(all_holds.bib_id) > 2 - ORDER BY aou.name - ; + +-- Title holds +SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) + FROM + ( + ( + SELECT target, request_lib + FROM action.hold_request + WHERE hold_type = 'T' + AND fulfillment_time IS NULL + AND cancel_time IS NULL + ) + UNION ALL + -- Volume holds + ( + SELECT bre.id, request_lib + FROM action.hold_request ahr + INNER JOIN asset.call_number acn ON ahr.target = acn.id + INNER JOIN biblio.record_entry bre ON acn.record = bre.id + WHERE ahr.hold_type = 'V' + AND ahr.fulfillment_time IS NULL + AND ahr.cancel_time IS NULL + ) + UNION ALL + -- Copy holds + ( + SELECT bre.id, request_lib + FROM action.hold_request ahr + INNER JOIN asset.copy ac ON ahr.target = ac.id + INNER 
JOIN asset.call_number acn ON ac.call_number = acn.id + INNER JOIN biblio.record_entry bre ON acn.record = bre.id + WHERE ahr.hold_type = 'C' + AND ahr.fulfillment_time IS NULL + AND ahr.cancel_time IS NULL + ) + ) AS all_holds(bib_id, request_lib) + INNER JOIN reporter.materialized_simple_record rmsr + INNER JOIN actor.org_unit aou ON aou.id = all_holds.request_lib + ON rmsr.id = all_holds.bib_id + GROUP BY all_holds.bib_id, aou.name, rmsr.id, rmsr.title, rmsr.author + HAVING COUNT(all_holds.bib_id) > 2 + ORDER BY aou.name +; + @@ -1991,30 +2091,32 @@ Then we issue an UPDATE statement to set the home library for patrons with a physical address with a city that matches the city names in our staging table. Update borrower records with a different home library - CREATE SCHEMA staging; - CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT, - FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname)); - INSERT INTO staging.city_home_ou_map (city, ou_shortname) - VALUES ('Southbury', 'BR1'), ('Middlebury', 'BR2'), ('Hartford', 'BR3'); - BEGIN; + +CREATE SCHEMA staging; +CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT, + FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname)); +INSERT INTO staging.city_home_ou_map (city, ou_shortname) + VALUES ('Southbury', 'BR1'), ('Middlebury', 'BR2'), ('Hartford', 'BR3'); +BEGIN; - UPDATE actor.usr au SET home_ou = COALESCE( - ( - SELECT aou.id - FROM actor.org_unit aou - INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname - INNER JOIN actor.usr_address aua ON aua.city = schom.city - WHERE au.id = aua.usr - GROUP BY aou.id - ), home_ou) - WHERE ( - SELECT aou.id - FROM actor.org_unit aou - INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname - INNER JOIN actor.usr_address aua ON aua.city = schom.city - WHERE au.id = aua.usr - GROUP BY aou.id - ) IS NOT NULL; +UPDATE actor.usr au SET home_ou = COALESCE( + ( + SELECT aou.id + FROM 
actor.org_unit aou + INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname + INNER JOIN actor.usr_address aua ON aua.city = schom.city + WHERE au.id = aua.usr + GROUP BY aou.id + ), home_ou) +WHERE ( + SELECT aou.id + FROM actor.org_unit aou + INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname + INNER JOIN actor.usr_address aua ON aua.city = schom.city + WHERE au.id = aua.usr + GROUP BY aou.id +) IS NOT NULL; + diff --git a/1.6/development/json.xml b/1.6/development/json.xml index 036200d..b445fd9 100644 --- a/1.6/development/json.xml +++ b/1.6/development/json.xml @@ -55,23 +55,23 @@ In this minimal example we select from only one table. Later we will see how to join multiple tables. Since we don't supply a WHERE clause, json_query constructs a default WHERE clause for us, including all the available columns. The resulting SQL looks like this: - - SELECT - "aou".billing_address AS "billing_address", - "aou".holds_address AS "holds_address", - "aou".id AS "id", - "aou".ill_address AS "ill_address", - "aou".mailing_address AS "mailing_address", - "aou".name AS "name", - "aou".ou_type AS "ou_type", - "aou".parent_ou AS "parent_ou", - "aou".shortname AS "shortname", - "aou".email AS "email", - "aou".phone AS "phone", - "aou".opac_visible AS "opac_visible" - FROM - actor.org_unit AS "aou" ; - + +SELECT + "aou".billing_address AS "billing_address", + "aou".holds_address AS "holds_address", + "aou".id AS "id", + "aou".ill_address AS "ill_address", + "aou".mailing_address AS "mailing_address", + "aou".name AS "name", + "aou".ou_type AS "ou_type", + "aou".parent_ou AS "parent_ou", + "aou".shortname AS "shortname", + "aou".email AS "email", + "aou".phone AS "phone", + "aou".opac_visible AS "opac_visible" +FROM + actor.org_unit AS "aou" ; + Default SELECT Clauses @@ -111,45 +111,45 @@ The SELECT Clause The following variation also produces a default SELECT clause: - - { - "from":"aou", - "select": { - "aou":"*" - } - } - 
+ +{ + "from":"aou", + "select": { + "aou":"*" + } +} + ...and so does this one: - - { - "select": { - "aou":null - }, - "from":"aou" - } - + +{ + "select": { + "aou":null + }, + "from":"aou" +} + While this syntax may not be terribly useful, it does illustrate the minimal structure of a SELECT clause in a JSON query: an entry in the outermost JSON object, with a key of select. The value associated with this key is another JSON object, whose keys are class names. (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) Usually you don't want the default SELECT clause. Here's how to select only some of the columns: - - { - "from":"aou", - "select": { - "aou":[ "id", "name" ] - } - } - + +{ + "from":"aou", + "select": { + "aou":[ "id", "name" ] + } +} + The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, and a separate column list for each entry. The previous example results in the following SQL: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" ; - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" ; + Fancier SELECT Clauses @@ -177,44 +177,44 @@ This example assigns a different column alias: - - { - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "alias":"org_name" } - ] - } - } + +{ + "from":"aou", + "select": { + "aou": [ + "id", + { "column":"name", "alias":"org_name" } + ] + } +} - SELECT - "aou".id AS "id", - "aou".name AS "org_name" - FROM - actor.org_unit AS "aou" ; - +SELECT + "aou".id AS "id", + "aou".name AS "org_name" +FROM + actor.org_unit AS "aou" ; + In this case, changing the column alias doesn't accomplish much. 
But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could use different aliases to distinguish them. The following example uses a function to raise a column to upper case: - - { - "from":"aou", - "select": { - "aou": [ - "id", - { "column":"name", "transform":"upper" } - ] - } - } - - SELECT - "aou".id AS "id", - upper("aou".name ) AS "name" - FROM - actor.org_unit AS "aou" ; - - Here we take a substring of the name, using the "params" element to pass parameters: + +{ + "from":"aou", + "select": { + "aou": [ + "id", + { "column":"name", "transform":"upper" } + ] + } +} + +SELECT + "aou".id AS "id", + upper("aou".name ) AS "name" +FROM + actor.org_unit AS "aou" ; + + Here we take a substring of the name, using the params element to pass parameters: { "from":"aou", @@ -235,29 +235,31 @@ FROM actor.org_unit AS "aou" ; - The parameters specified with params are inserted after the applicable column (name in this case), which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. + The parameters specified with params are inserted after the applicable column (name in this case), + which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily + coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. 
Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: - - { - "from":"aou", - "select": { - "aou": [ - "id", { - "column":"name", - "transform":"frobozz", - "result_field":"zamzam" - } - ] - } - } - - SELECT - "aou".id AS "id", - (frobozz("aou".name ))."zamzam" AS "name" - FROM - actor.org_unit AS "aou" ; - - The "frobozz" function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in + +{ + "from":"aou", + "select": { + "aou": [ + "id", { + "column":"name", + "transform":"frobozz", + "result_field":"zamzam" + } + ] + } +} + +SELECT + "aou".id AS "id", + (frobozz("aou".name ))."zamzam" AS "name" +FROM + actor.org_unit AS "aou" ; + + The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in the database. @@ -270,47 +272,47 @@ to them (and it has to be the first parameter). You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself. You can't select a subquery. In raw SQL you can do something like the following: - - SELECT - id, - name, - ( - SELECT name - FROM actor.org_unit_type AS aout - WHERE aout.id = aou.ou_type - ) AS type_name - FROM - actor.org_unit AS aou; - + +SELECT + id, + name, + ( + SELECT name + FROM actor.org_unit_type AS aout + WHERE aout.id = aou.ou_type + ) AS type_name +FROM + actor.org_unit AS aou; + This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so easy to solve. 
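In a JSON query, the lookup above would instead be phrased as a join. The following is only a sketch of what that might look like: the JOIN syntax is discussed later in this chapter, and it assumes that the IDL defines a link between the aou and aout classes so that json_query can infer the join condition:

```json
{
    "select": {
        "aou":[ "id", "name" ],
        "aout":[ { "column":"name", "alias":"type_name" } ]
    },
    "from": { "aou":"aout" }
}
```

If the assumed link is defined, json_query should build an INNER JOIN between actor.org_unit and actor.org_unit_type, returning the same type_name column as the subquery version.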
The WHERE Clause Most queries need a WHERE clause, as in this simple example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":"3" - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou":"3" + } +} + Like the SELECT clause, the WHERE clause gets its own entry in the top-level object of a JSON query. The key is where, and the associated value is either an object (as shown here) or an array (to be discussed a bit later). Each entry in the object is a separate condition. In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on the right. Here's the resulting SQL: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - "aou".parent_ou = 3; - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".parent_ou = 3; + Like the SELECT clause, the generated WHERE clause qualifies each column name with the alias of the relevant table. If you want to compare a column to NULL, put null (without quotation marks) to the right of the colon instead of a literal value. The resulting SQL will include IS NULL instead of an equals sign. @@ -318,37 +320,37 @@ Other Kinds of Comparisons Here's the same query (which generates the same SQL) without the special shortcut: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "=":3 } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou":{ "=":3 } + } +} + We still have an entry whose key is the column name, but this time the associated value is another JSON object. It must contain exactly one entry, with the comparison operator on the left of the colon, and the value to be compared on the right. The same syntax works for other kinds of comparison operators. 
For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ ">":3 } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou":{ ">":3 } + } +} + ...turns into: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - "aou".parent_ou > 3 ; - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".parent_ou > 3 ; + The condition '=:null' turns into IS NULL. Any other operator used with null turns into IS NOT NULL. You can use most of the comparison operators recognized by PostgreSQL: @@ -366,56 +368,53 @@ any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception. As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax. Here's a contrived and rather silly example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou":{ "<2+":3 } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou":{ "<2+":3 } + } +} + ...which results in the following SQL: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - "aou".parent_ou <2+ 3; - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".parent_ou <2+ 3; + It's hard to come up with a realistic case where this hack would be useful, but it could happen. 
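Because json_query passes through almost any operator string that lacks white space and semicolons, a client that assembles conditions from outside input may want its own whitelist. A hypothetical guard (the function and the set of accepted operators are our own choices, not part of json_query):

```python
# Hypothetical client-side whitelist; json_query itself is more permissive.
ALLOWED_OPS = {"=", "<>", "!=", "<", "<=", ">", ">=", "like", "ilike", "similar to"}

def make_condition(column, op, value):
    # Refuse anything outside the whitelist, such as the "<2+" hack above.
    if op not in ALLOWED_OPS:
        raise ValueError("unsupported operator: %r" % op)
    return {column: {op: value}}

cond = make_condition("parent_ou", ">", 3)
```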
-
-
Comparing One Column to Another
Here's how to put another column on the right-hand side of a comparison:
-
-
-    {
-        "from":"aou",
-        "select": { "aou":[ "id", "name" ] },
-        "where": {
-            "id": { ">": { "+aou":"parent_ou" } }
-        }
-    }
-
+
+{
+    "from":"aou",
+    "select": { "aou":[ "id", "name" ] },
+    "where": {
+        "id": { ">": { "+aou":"parent_ou" } }
+    }
+}
+
This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single
entry, whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column. Here's the resulting SQL:
-
-    SELECT
-        "aou".id AS "id",
-        "aou".name AS "name"
-    FROM
-        actor.org_unit AS "aou"
-    WHERE
-        (
-            "aou".id > ( "aou".parent_ou )
-        );
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+WHERE
+(
+    "aou".id > ( "aou".parent_ou )
+);
+
The table alias must correspond to the appropriate table. Since json_query doesn't validate the choice of alias, it won't detect an invalid alias until it
tries to execute the query. In this simple example there's only one table to choose from. The choice of alias is more important in a subquery or join.
The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of
Testing Boolean Columns
In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false:
-
-    SELECT
-        id
-    FROM
-        actor.org_unit
-    WHERE
-        opac_visible = true;
-
+
+SELECT
+    id
+FROM
+    actor.org_unit
+WHERE
+    opac_visible = true;
+
In a JSON query this approach doesn't work. If you try it, the "= true" test will turn into IS NULL. Don't do that.
Instead, use a leading plus sign, as described in the preceding section, to treat the boolean column as a stand-alone condition: - - { - "from":"aou", - "select": { "aou":[ "id" ] }, - "where": { - "+aou":"opac_visible" - } - } - - + +{ + "from":"aou", + "select": { "aou":[ "id" ] }, + "where": { + "+aou":"opac_visible" + } +} + Result: - - SELECT - "aou".id AS "id" - FROM - actor.org_unit AS "aou" - WHERE - "aou".opac_visible ; - + +SELECT + "aou".id AS "id" +FROM + actor.org_unit AS "aou" +WHERE + "aou".opac_visible ; + If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. We will discuss the "-not" operator later, but here's a preview: - - { - "from":"aou", - "select": { "aou":[ "id" ] }, - "where": { - "-not": { - "+aou":"opac_visible" - } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id" ] }, + "where": { + "-not": { + "+aou":"opac_visible" + } + } +} - SELECT - "aou".id AS "id" - FROM - actor.org_unit AS "aou" - WHERE - NOT ( "aou".opac_visible ); - +SELECT + "aou".id AS "id" +FROM + actor.org_unit AS "aou" +WHERE + NOT ( "aou".opac_visible ); + You can also compare a boolean column directly to a more complex condition: - - { - "from":"aou", - "select": { "aou":[ "id" ] }, - "where": { - "opac_visible": { - "=": { "parent_ou":{ ">":3 } } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id" ] }, + "where": { + "opac_visible": { + "=": { "parent_ou":{ ">":3 } } } - + } +} + Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works: - - SELECT - "aou".id AS "id" - FROM - actor.org_unit AS "aou" - WHERE - ( - "aou".opac_visible = ( "aou".parent_ou > 3 ) - ); - + +SELECT + "aou".id AS "id" +FROM + actor.org_unit AS "aou" +WHERE + ( + "aou".opac_visible = ( "aou".parent_ou > 3 ) + ); + In this case we compare the boolean column to a single simple condition. 
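The truth test and its negation are easy to wrap in client code. A small sketch, with hypothetical helper names (only the resulting dictionaries are json_query syntax):

```python
def truth_test(cls, column):
    # Stand-alone boolean test: a leading plus sign joined to the class name.
    return {"+%s" % cls: column}

def negate(condition):
    # Wrap any condition in the "-not" operator.
    return {"-not": condition}

visible = truth_test("aou", "opac_visible")   # WHERE "aou".opac_visible
hidden = negate(visible)                      # WHERE NOT ( "aou".opac_visible )
```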
However, you can include additional complications -- multiple conditions, IN lists, BETWEEN clauses, and other features as described below.
Multiple Conditions
If you need multiple conditions, just add them to the "where" object, separated by commas:
-
-    {
-        "from":"aou",
-        "select": { "aou":[ "id", "name" ] },
-        "where": {
-            "parent_ou":{ ">":3 },
-            "id":{ "<>":7 }
-        }
-    }
-
+
+{
+    "from":"aou",
+    "select": { "aou":[ "id", "name" ] },
+    "where": {
+        "parent_ou":{ ">":3 },
+        "id":{ "<>":7 }
+    }
+}
+
The generated SQL connects the conditions with AND:
-
-    SELECT
-        "aou".id AS "id",
-        "aou".name AS "name"
-    FROM
-        actor.org_unit AS "aou"
-    WHERE
-        "aou".parent_ou > 3
-        AND "aou".id <> 7;
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+WHERE
+    "aou".parent_ou > 3
+    AND "aou".id <> 7;
+
Later we will see how to use OR instead of AND.
Using Arrays
Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this:
-
-    SELECT
-        id,
-        name
-    FROM
-        actor.org_unit
-    WHERE
-        parent_ou > 3
-        AND parent_ou <> 7;
-
+
+SELECT
+    id,
+    name
+FROM
+    actor.org_unit
+WHERE
+    parent_ou > 3
+    AND parent_ou <> 7;
+
You might try a WHERE clause like this:
-
-    "where": {
-        "parent_ou":{ ">":3 },
-        "parent_ou":{ "<>":7 }
-    }
-
+
+"where": {
+        "parent_ou":{ ">":3 },
+        "parent_ou":{ "<>":7 }
+    }
+
Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key. After slapping yourself in the forehead, you try something a little smarter:
-
-    "where": {
-        "parent_ou": {
-            ">":3,
-            "<>":7
-        }
-    }
-
+
+"where": {
+    "parent_ou": {
+        ">":3,
+        "<>":7
+    }
+}
+
Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no.
Here's what works:
-
-    {
-        "from":"aou",
-        "select": { "aou":[ "id", "name" ] },
-        "where": [
-            { "parent_ou":{ ">":3 } },
-            { "parent_ou":{ "<>":7 } }
-        ]
-    }
-
+
+{
+    "from":"aou",
+    "select": { "aou":[ "id", "name" ] },
+    "where": [
+        { "parent_ou":{ ">":3 } },
+        { "parent_ou":{ "<>":7 } }
+    ]
+}
+
We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array. The resulting SQL looks like this:
-
-    SELECT
-        "aou".id AS "id",
-        "aou".name AS "name"
-    FROM
-        actor.org_unit AS "aou"
-    WHERE
-        ( "aou".parent_ou > 3 )
-    AND
-        ( "aou".parent_ou <> 7 );
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+WHERE
+    ( "aou".parent_ou > 3 )
+AND
+    ( "aou".parent_ou <> 7 );
+
That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do.
If you're in the mood, you can use arrays to add as many parentheses as you like, even if there is only one condition inside:
-
-    {
-        "from":"aou",
-        "select": { "aou":[ "id", "name" ] },
-        "where":
-        [[[[[[
-            {
-                "parent_ou":{ ">":3 }
-            }
-        ]]]]]]
-    }
-
+
+{
+    "from":"aou",
+    "select": { "aou":[ "id", "name" ] },
+    "where":
+    [[[[[[
+        {
+            "parent_ou":{ ">":3 }
+        }
+    ]]]]]]
+}
+
...yields:
-
-    SELECT
-        "aou".id AS "id",
-        "aou".name AS "name"
-    FROM
-        actor.org_unit AS "aou"
-    WHERE
-        ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+WHERE
+    ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );
+
How to OR
By default, json_query combines conditions with AND. When you need OR, here's how to do it:
-
-    {
-        "from":"aou",
-        "select": { "aou":[ "id", "name" ] },
-        "where": {
-            "-or": {
-                "id":2,
-                "parent_ou":3
-            }
-        }
-    }
-
+
+{
+    "from":"aou",
+    "select": { "aou":[ "id", "name" ] },
+    "where": {
+        "-or": {
+            "id":2,
+            "parent_ou":3
-
-    We use "-or" as the key, with the conditions to be ORed in an associated object.
The leading minus sign is there to make sure that the operator isn't confused with a + } +} + + We use -or as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that the operator isn't confused with a column name. Later we'll see some other operators with leading minus signs. In a couple of spots we even use plus signs. Here are the results from the above example: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - ( - "aou".id = 2 - OR "aou".parent_ou = 3 - ); - - The conditions paired with "-or" are linked by OR and enclosed in parentheses. + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + ( + "aou".id = 2 + OR "aou".parent_ou = 3 + ); + + The conditions paired with -or are linked by OR and enclosed in parentheses. Here's how to do the same thing using an array, except that it produces an extra layer of parentheses: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-or": [ - { "id":2 }, - { "parent_ou":3 } - ] - } - } - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - ( - ( "aou".id = 2 ) - OR ( "aou".parent_ou = 3 ) - ); - - It's possible, though not very useful, to have only a single condition subject to the "-or" operator. In that case, the condition appears by itself, since there's nothing + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "-or": [ + { "id":2 }, + { "parent_ou":3 } + ] + } +} +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + ( + ( "aou".id = 2 ) + OR ( "aou".parent_ou = 3 ) + ); + + It's possible, though not very useful, to have only a single condition subject to the -or operator. In that case, the condition appears by itself, since there's nothing to OR it to. This trick is another way to add an extraneous layer of parentheses. 
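When conditions are assembled programmatically, the array form is usually the safer target for OR, since it tolerates repeated column names. A sketch with a hypothetical helper:

```python
def any_of(*conditions):
    # OR the given conditions together; each one stays in its own object,
    # so the same column may appear more than once.
    return {"-or": list(conditions)}

where = any_of({"id": 2}, {"parent_ou": 3})
```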
Another way to AND - You can also use the -and operator. It works just like "-or", except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually + You can also use the -and operator. It works just like -or, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually need a separate operator for it, but it's available. In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with arrays either. In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns). @@ -670,30 +668,30 @@ Negation with NOT The -not operator negates a condition or set of conditions. For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-not": { - "id":{ ">":2 }, - "parent_ou":3 - } - } + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "-not": { + "id":{ ">":2 }, + "parent_ou":3 } - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - NOT - ( - "aou".id > 2 - AND "aou".parent_ou = 3 - ); - + } +} + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + NOT + ( + "aou".id > 2 + AND "aou".parent_ou = 3 + ); + In this example we merely negate a combination of two comparisons. However the condition to be negated may be as complicated as it needs to be. Anything that can be subject to where can be subject to -not. In most cases you can achieve the same result by other means. However the -not operator is the only way to represent NOT BETWEEN @@ -703,128 +701,128 @@ EXISTS with Subqueries Two other operators carry a leading minus sign: -exists and its negation -not-exists. These operators apply to subqueries, which have the same format as a full query. 
For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-exists": { - "from":"asv", - "select":{ "asv":[ "id" ] }, - "where": { - "owner":7 - } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "-exists": { + "from":"asv", + "select":{ "asv":[ "id" ] }, + "where": { + "owner":7 + } } - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - EXISTS - ( - SELECT "asv".id AS "id" - FROM action.survey AS "asv" - WHERE "asv".owner = 7 - ); - + } +} + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE +EXISTS + ( + SELECT "asv".id AS "id" + FROM action.survey AS "asv" + WHERE "asv".owner = 7 + ); + This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether if it isn't satisfied. More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "-exists": { - "from":"asv", - "select":{ "asv":[ "id" ] }, - "where": { - "owner":{ "=":{ "+aou":"id" }} - } - } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "-exists": { + "from":"asv", + "select":{ "asv":[ "id" ] }, + "where": { + "owner":{ "=":{ "+aou":"id" }} + } + } + } +} + Note the use of +aou to qualify the id column in the inner WHERE clause. 
- - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - EXISTS - ( - SELECT "asv".id AS "id" - FROM action.survey AS "asv" - WHERE ("asv".owner = ( "aou".id )) - ); - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + EXISTS + ( + SELECT "asv".id AS "id" + FROM action.survey AS "asv" + WHERE ("asv".owner = ( "aou".id )) + ); + This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later). BETWEEN Clauses Here's how to express a BETWEEN clause: - - { - "from":"aou", - "select": { "aou":[ "id" ] }, - "where": { - "parent_ou": { "between":[ 3, 7 ] } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id" ] }, + "where": { + "parent_ou": { "between":[ 3, 7 ] } + } +} + The value associated with the column name is an object with a single entry, whose key is "between". The corresponding value is an array with exactly two values, defining the range to be tested. The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches anything. Consequently json_query doesn't allow them. The resulting SQL is just what you would expect: - - SELECT - "aou".id AS "id" - FROM - actor.org_unit AS "aou" - WHERE - parent_ou BETWEEN '3' AND '7'; - + +SELECT + "aou".id AS "id" +FROM + actor.org_unit AS "aou" +WHERE + parent_ou BETWEEN '3' AND '7'; + IN and NOT IN Lists There are two ways to code an IN list. One way is simply to include the list of values in an array: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou": [ 3, 5, 7 ] - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou": [ 3, 5, 7 ] + } +} + As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed. 
Here's the resulting SQL, which again is just what you would expect: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - "aou".parent_ou IN (3, 5, 7); - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".parent_ou IN (3, 5, 7); + The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "parent_ou": { "in": [ 3, 5, 7 ] } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "parent_ou": { "in": [ 3, 5, 7 ] } + } +} + This version results in the same SQL as the first one. For a NOT IN list, you can use the latter format, using the not in operator instead of in. Alternatively, you can use either format together with the -not operator. @@ -833,65 +831,65 @@ IN and NOT IN Clauses with Subqueries For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous subsection). The "in" or "not in" operator is paired, not with an array of values, but with an object representing the subquery. 
For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - "in": { - "from":"asv", - "select":{ "asv":[ "owner" ] }, - "where":{ "name":"Voter Registration" } - } - } - } - } - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "id": { + "in": { + "from":"asv", + "select":{ "asv":[ "owner" ] }, + "where":{ "name":"Voter Registration" } + } + } + } +} + The results: - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".id IN + ( SELECT - "aou".id AS "id", - "aou".name AS "name" + "asv".owner AS "owner" FROM - actor.org_unit AS "aou" + action.survey AS "asv" WHERE - "aou".id IN - ( - SELECT - "asv".owner AS "owner" - FROM - action.survey AS "asv" - WHERE - "asv".name = 'Voter Registration' - ); - + "asv".name = 'Voter Registration' + ); + In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column. For a NOT IN clause with a subquery, use the not in operator instead of in. Comparing to a Function Here's how to compare a column to a function call: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id":{ ">":[ "sqrt", 16 ] } - } - } - - A comparison operator (">" in this case) is paired with an array. The first entry in the array must be a string giving the name of the function. The remaining parameters, + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "id":{ ">":[ "sqrt", 16 ] } + } +} + + A comparison operator (> in this case) is paired with an array. The first entry in the array must be a string giving the name of the function. The remaining parameters, if any, are the parameters. They may be strings, numbers, or nulls. 
The resulting SQL for this example: - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - "aou".id > sqrt( '16' ); - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + "aou".id > sqrt( '16' ); + All parameters are passed as quoted strings -- even if, as in this case, they are really numbers. This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example). @@ -900,55 +898,55 @@ In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function. In the WHERE clause, you can use similar syntax to transform the value of a column before comparing it to something else. For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "name": { - "=": { - "transform":"upper", - "value":"CARTER BRANCH" - } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "name": { + "=": { + "transform":"upper", + "value":"CARTER BRANCH" + } } - + } +} + The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side of the comparison. 
- - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - upper("aou".name ) = 'CARTER BRANCH' ; - + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + upper("aou".name ) = 'CARTER BRANCH' ; + As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as params: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "name": { - "=": { - "transform":"substr", - "params":[ 1, 6 ], - "value":"CARTER" - } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "name": { + "=": { + "transform":"substr", + "params":[ 1, 6 ], + "value":"CARTER" + } } - - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - substr("aou".name,'1','6' ) = 'CARTER' ; - + } +} + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + substr("aou".name,'1','6' ) = 'CARTER' ; + The first parameter is always the column name, qualified by the class name, followed by any additional parameters (which are always enclosed in quotes even if they are numeric). As in the SELECT clause: if the function returns multiple columns, you can specify the one you want by using a "result_field" entry (not shown here). @@ -958,29 +956,29 @@ Putting Function Calls on Both Sides - If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the "value" entry carries an + If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the value entry carries an array instead of a literal value. 
For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - ">": { - "transform":"factorial", - "value":[ "sqrt", 1000 ] - } - } - } - } - SELECT - "aou".id AS "id", - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - WHERE - factorial("aou".id ) > sqrt( '1000' ) ; - + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "id": { + ">": { + "transform":"factorial", + "value":[ "sqrt", 1000 ] + } + } + } +} +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +WHERE + factorial("aou".id ) > sqrt( '1000' ) ; + The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats for defining function calls: @@ -996,35 +994,35 @@ Comparing a Function to a Condition - So far we have seen two kinds of data for the "value" tag. A string or number translates to a literal value, and an array translates to a function call. + So far we have seen two kinds of data for the value tag. A string or number translates to a literal value, and an array translates to a function call. The third possibility is a JSON object, which translates to a condition. For example: - - { - "from":"aou", - "select": { "aou":[ "id", "name" ] }, - "where": { - "id": { - "=": { - "value":{ "parent_ou":{ ">":3 } }, - "transform":"is_prime" - } - } - } + +{ + "from":"aou", + "select": { "aou":[ "id", "name" ] }, + "where": { + "id": { + "=": { + "value":{ "parent_ou":{ ">":3 } }, + "transform":"is_prime" + } } - + } +} + The function tagged as transform must return boolean, or else json_query will generate invalid SQL. The function used here, is_prime, is fictitious. 
-
-    SELECT
-        "aou".id AS "id",
-        "aou".name AS "name"
-    FROM
-        actor.org_unit AS "aou"
-    WHERE
-        (
-            is_prime("aou".id ) = ( "aou".parent_ou > 3 )
-        );
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+WHERE
+(
+    is_prime("aou".id ) = ( "aou".parent_ou > 3 )
+);
+
If we left out the transform entry, json_query would compare the column on the left (which would have to be boolean) to the condition on the right. The results
are similar to those for a simpler format described earlier (see the subsection Testing Boolean Columns).
In the example above we compared the boolean to a simple condition. However the expression on the right may include multiple conditions, IN lists, subqueries,
@@ -1048,67 +1046,67 @@
the class name of the relevant table.
When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated. SQL provides two ways to define a join. One way is to
list both tables in the FROM clause, and put the join conditions in the WHERE clause:
-
-    SELECT
-        aou.id,
-        aout.name
-    FROM
-        actor.org_unit aou,
-        actor.org_unit_type aout
-    WHERE
-        aout.id = aou.ou_type;
-
+
+SELECT
+    aou.id,
+    aout.name
+FROM
+    actor.org_unit aou,
+    actor.org_unit_type aout
+WHERE
+    aout.id = aou.ou_type;
+
The other way is to use an explicit JOIN clause:
-
-    SELECT
-        aou.id,
-        aout.name
-    FROM
-        actor.org_unit aou
-        JOIN actor.org_unit_type aout
-            ON ( aout.id = aou.ou_type );
-
+
+SELECT
+    aou.id,
+    aout.name
+FROM
+    actor.org_unit aou
+    JOIN actor.org_unit_type aout
+        ON ( aout.id = aou.ou_type );
+
JSON queries use only the second of these methods. The following example expresses the same query in JSON:
-
-    {
-        "select": { "aou":[ "id" ], "aout":[ "name" ] },
-        "from": {
-            "aou":"aout"
-        }
-    }
-
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aou":"aout"
+    }
+}
+
First, let's review the SELECT clause.
Since it selects rows from two different tables, the data for select includes two entries, one for each table. As for the FROM clause, it's no longer just a string. It's a JSON object, with exactly one entry. The key of this entry is the class name of the core table, i.e. the table named immediately after the FROM keyword. The data associated with this key contains the rest of the information about the join. In this simple example, that information consists entirely of a string containing the class name of the other table. So where is the join condition? It's in the IDL. Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly: - - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) ; - + +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit AS "aou" + INNER JOIN actor.org_unit_type AS "aout" + ON ( "aout".id = "aou".ou_type ) ; + In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around: - - { - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - } - } - - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) ; - + +{ + "select": { "aou":[ "id" ], "aout":[ "name" ] }, + "from": { + "aout":"aou" + } +} + +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".ou_type = "aout".id ) ; + Specifying The Join Columns Explicitly @@ -1117,19 +1115,19 @@ Json_query can't guess which one you want if you don't tell it. (Actually it will try to guess. 
It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.) Here's how to define exactly which columns you want for the join: - - { - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aou": { - "aoa": { - "fkey":"holds_address", - "field":"id" - } - } - } + +{ + "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, + "from": { + "aou": { + "aoa": { + "fkey":"holds_address", + "field":"id" + } } - + } +} + Before, the table we were joining was represented merely by its class name. Now it's represented by an entry in a JSON object. The key of that entry is the class name, and the associated data is another layer of JSON object containing the attributes of the join. Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns: @@ -1145,37 +1143,37 @@ When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the core table. Here is the result of the preceding JSON: - - SELECT - "aou".id AS "id", - "aoa".street1 AS "street1" - FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_address AS "aoa" - ON ( "aoa".id = "aou".holds_address ) ; - + +SELECT + "aou".id AS "id", + "aoa".street1 AS "street1" +FROM + actor.org_unit AS "aou" + INNER JOIN actor.org_address AS "aoa" + ON ( "aoa".id = "aou".holds_address ) ; + In this example the child table is on the left and the parent table is on the right. 
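The fkey/field attributes are ordinary dictionary entries, so a helper can assemble either direction of the join. A hypothetical sketch (omitted attributes are simply left out, letting json_query consult the IDL):

```python
def join_spec(core, joined, fkey=None, field=None):
    # Build a FROM clause joining one class to the core class.  Attributes
    # left as None are omitted from the JSON, so json_query falls back on
    # whatever linkage it can find in the IDL.
    attrs = {}
    if fkey is not None:
        attrs["fkey"] = fkey
    if field is not None:
        attrs["field"] = field
    return {core: {joined: attrs}}

# The fully explicit form of the join shown above.
frm = join_spec("aou", "aoa", fkey="holds_address", field="id")
```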
We can swap the tables if we swap the join columns as well: - - { - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "fkey":"id", - "field":"holds_address" - } - } - } + +{ + "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, + "from": { + "aoa": { + "aou": { + "fkey":"id", + "field":"holds_address" + } } - - SELECT - "aou".id AS "id", - "aoa".street1 AS "street1" - FROM - actor.org_address AS "aoa" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".holds_address = "aoa".id ) ; - + } +} + +SELECT + "aou".id AS "id", + "aoa".street1 AS "street1" +FROM + actor.org_address AS "aoa" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".holds_address = "aoa".id ) ; + When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense. The burden is on you to avoid absurdities. @@ -1183,18 +1181,18 @@ Specifying Only One Join Column We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end. Consider the following variation on the previous example: - - { - "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "field":"holds_address" - } - } - } + +{ + "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, + "from": { + "aoa": { + "aou": { + "field":"holds_address" + } } - + } +} + ..which results in exactly the same SQL as before. Here we specified the join column from the child table, the column that is a foreign key pointing to another table. As long as that linkage is defined in the IDL, json_query can look it up and figure out what the corresponding column is in the parent table. @@ -1206,36 +1204,36 @@ Joining to Multiple Tables So far we have joined only two tables at a time. What if we need to join one table to two different tables? 
Here's an example: - - { - "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, - "from": { - "aou": { - "aout":{}, - "aoa": { - "fkey":"holds_address" - } - } - } - } - + +{ + "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, + "from": { + "aou": { + "aout":{}, + "aoa": { + "fkey":"holds_address" + } + } + } +} + The first join, to actor.org_unit_type, is simple. We could have specified join columns, but we don't have to, because json_query will construct that join on the basis of what it finds in the IDL. Having no join attributes to specify, we leave that object empty. For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join column from the parent table, but we don't have to, so we didn't. Here is the resulting SQL: - - SELECT - "aou".id AS "id", - "aout".depth AS "depth", - "aoa".street1 AS "street1" - FROM - actor.org_unit AS "aou" - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) - INNER JOIN actor.org_address AS "aoa" - ON ( "aoa".id = "aou".holds_address ) ; - + +SELECT + "aou".id AS "id", + "aout".depth AS "depth", + "aoa".street1 AS "street1" +FROM + actor.org_unit AS "aou" + INNER JOIN actor.org_unit_type AS "aout" + ON ( "aout".id = "aou".ou_type ) + INNER JOIN actor.org_address AS "aoa" + ON ( "aoa".id = "aou".holds_address ) ; + Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next level has one entry for every table that's joined to the core table. @@ -1243,203 +1241,202 @@ Nested Joins Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead? 
Yes, we can: - - { - "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, - "from": { - "aoa": { - "aou": { - "field":"holds_address", - "join": { - "aout":{ "fkey":"ou_type" } - } - } - } - } + +{ + "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, + "from": { + "aoa": { + "aou": { + "field":"holds_address", + "join": { + "aout":{ "fkey":"ou_type" } + } + } } - - The "join" attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. + } +} + + The join attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. Here are the results: - - SELECT - "aou".id AS "id", - "aout".depth AS "depth", - "aoa".street1 AS "street1" - FROM - actor.org_address AS "aoa" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".holds_address = "aoa".id ) - INNER JOIN actor.org_unit_type AS "aout" - ON ( "aout".id = "aou".ou_type ) ; - + +SELECT + "aou".id AS "id", + "aout".depth AS "depth", + "aoa".street1 AS "street1" +FROM + actor.org_address AS "aoa" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".holds_address = "aoa".id ) + INNER JOIN actor.org_unit_type AS "aout" + ON ( "aout".id = "aou".ou_type ) ; + Outer Joins By default, json_query constructs an inner join. 
If you need an outer join, you can add the join type as an attribute of the join:
-
- {
- "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
- "from": {
- "aoa": {
- "aou": {
- "field":"mailing_address",
- "type":"left"
- }
- }
- }
+
+{
+    "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
+    "from": {
+        "aoa": {
+            "aou": {
+                "field":"mailing_address",
+                "type":"left"
+            }
        }
+    }
+}
+
 Here is the resulting SQL for this example:
-
- SELECT
- "aou".id AS "id",
- "aoa".street1 AS "street1"
- FROM
- actor.org_address AS "aoa"
- LEFT JOIN actor.org_unit AS "aou"
- ON ( "aou".mailing_address = "aoa".id ) ;
-
+
+SELECT
+    "aou".id AS "id",
+    "aoa".street1 AS "street1"
+FROM
+    actor.org_address AS "aoa"
+        LEFT JOIN actor.org_unit AS "aou"
+            ON ( "aou".mailing_address = "aoa".id ) ;
+
 Referring to Joined Tables in the WHERE Clause
 In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name. If a column
 belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name to use for
 an alias. For example:
-
- {
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout":"aou"
- },
- "where": {
- "+aou":{ "parent_ou":2 }
- }
- }
-
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout":"aou"
+    },
+    "where": {
+        "+aou":{ "parent_ou":2 }
+    }
+}
+
 Note the peculiar operator +aou -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to
 the condition that follows.
The result: - - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) - WHERE - ( "aou".parent_ou = 2 ); - + +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".ou_type = "aout".id ) +WHERE + ( "aou".parent_ou = 2 ); + The plus-class operator may apply to multiple conditions: - - { - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - }, - "where": { - "+aou":{ - "parent_ou":2, - "id":{ "<":42 } - } - } - } - - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) - WHERE - ( - "aou".parent_ou = 2 - AND "aou".id < 42 - ); - + +{ + "select": { "aou":[ "id" ], "aout":[ "name" ] }, + "from": { + "aout":"aou" + }, + "where": { + "+aou":{ + "parent_ou":2, + "id":{ "<":42 } + } + } +} + +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".ou_type = "aout".id ) +WHERE + ( + "aou".parent_ou = 2 + AND "aou".id < 42 + ); + For these artificial examples, it would have been simpler to swap the tables, so that actor.org_unit is the core table. Then you wouldn't need to go through any special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables wouldn't solve the problem. 
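Since a JSON query is just a data structure, client code usually builds it as a native dictionary and serializes it only when handing it off. The sketch below is a hypothetical illustration (the helper name is invented, not part of any Evergreen API) of generating the plus-class operator for a WHERE clause like the ones above:

```python
import json

def with_class(class_name, conditions):
    # Wrap WHERE conditions in the "+class" operator so json_query
    # applies them to the named joined class instead of the core class.
    return {"+" + class_name: conditions}

# The example above: select from aout joined to aou, constraining
# aou.parent_ou in the WHERE clause.
query = {
    "select": {"aou": ["id"], "aout": ["name"]},
    "from": {"aout": "aou"},
    "where": with_class("aou", {"parent_ou": 2}),
}

payload = json.dumps(query)  # serialized form, ready to hand to json_query
```

Building the structure in code rather than by string concatenation keeps the nesting of the plus-class operator correct no matter how many conditions are added.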
You can also use a plus-class operator to compare columns from two different tables: - - { - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout":"aou" - }, - "where": { - "depth": { ">": { "+aou":"parent_ou" } } - } - } - + +{ + "select": { "aou":[ "id" ], "aout":[ "name" ] }, + "from": { + "aout":"aou" + }, + "where": { + "depth": { ">": { "+aou":"parent_ou" } } + } +} + - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id ) - WHERE - ( - "aout".depth > ( "aou".parent_ou ) - ); - +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".ou_type = "aout".id ) +WHERE + ( + "aout".depth > ( "aou".parent_ou ) + ); + Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax. Join Filters While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause: - - - { - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout": { - "aou": { - "filter": { - "parent_ou":2 - } - } - } - } - } + +{ + "select": { "aou":[ "id" ], "aout":[ "name" ] }, + "from": { + "aout": { + "aou": { + "filter": { + "parent_ou":2 + } + } + } + } +} - SELECT - "aou".id AS "id", "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON ( "aou".ou_type = "aout".id - AND "aou".parent_ou = 2 ) ; - +SELECT + "aou".id AS "id", "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON ( "aou".ou_type = "aout".id + AND "aou".parent_ou = 2 ) ; + By default, json_query uses AND to combine the filter condition with the original join condition. 
If you need OR, you can use the filter_op attribute to say so:
-
- {
- "select": { "aou":[ "id" ], "aout":[ "name" ] },
- "from": {
- "aout": {
- "aou": {
- "filter": {
- "parent_ou":2
- },
- "filter_op":"or"
- }
- }
- }
- }
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout": {
+            "aou": {
+                "filter": {
+                    "parent_ou":2
+                },
+                "filter_op":"or"
+            }
+        }
+    }
+}

- SELECT
- "aou".id AS "id",
- "aout".name AS "name"
- FROM
- actor.org_unit_type AS "aout"
- INNER JOIN actor.org_unit AS "aou"
- ON ( "aou".ou_type = "aout".id
- OR "aou".parent_ou = 2 ) ;
-
+
+SELECT
+    "aou".id AS "id",
+    "aout".name AS "name"
+FROM
+    actor.org_unit_type AS "aout"
+        INNER JOIN actor.org_unit AS "aou"
+            ON ( "aou".ou_type = "aout".id
+                OR "aou".parent_ou = 2 ) ;
+
 If the data tagged by filter_op is anything but or (in upper, lower, or mixed case), json_query uses AND instead of OR. The condition tagged by
 filter may be much more complicated. In fact it accepts all the same syntax as the WHERE clause.
 Remember, though, that it all gets combined with the original join condition with an AND, or with an OR if you so specify. If
@@ -1449,31 +1446,31 @@
 Joining to a Subquery
 In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. A JSON query has no way to do that directly. The IDL, however,
 can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause.
For example: - - { - "select":{ "iatc":[ "id", "dest", "copy_status" ] }, - "from": "iatc" - } - + +{ + "select":{ "iatc":[ "id", "dest", "copy_status" ] }, + "from": "iatc" +} + There's nothing special-looking about this JSON, but json_query expands it as follows: - - SELECT - "iatc".id AS "id", - "iatc".dest AS "dest", - "iatc".copy_status AS "copy_status" - FROM - ( - SELECT t.* - FROM - action.transit_copy t - JOIN actor.org_unit AS s - ON (t.source = s.id) - JOIN actor.org_unit AS d - ON (t.dest = d.id) - WHERE - s.parent_ou <> d.parent_ou - ) AS "iatc" ; - + +SELECT + "iatc".id AS "id", + "iatc".dest AS "dest", + "iatc".copy_status AS "copy_status" +FROM + ( + SELECT t.* + FROM + action.transit_copy t + JOIN actor.org_unit AS s + ON (t.source = s.id) + JOIN actor.org_unit AS d + ON (t.dest = d.id) + WHERE + s.parent_ou <> d.parent_ou + ) AS "iatc" ; + The iatc class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be impossible through a JSON query, because it joins the same table in two different ways (see the next subsection). @@ -1498,53 +1495,53 @@ condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join conditions with join filters. 
For example, here's how to get a Cartesian product: - - { - "select": { "aou":[ "id" ], "aout":[ "name" ] }, - "from": { - "aout": { - "aou": { - "filter": { - "ou_type":{ "<>": { "+aout":"id" } } - }, - "filter_op":"or" - } - } - } - } - + +{ + "select": { "aou":[ "id" ], "aout":[ "name" ] }, + "from": { + "aout": { + "aou": { + "filter": { + "ou_type":{ "<>": { "+aout":"id" } } + }, + "filter_op":"or" + } + } + } +} + - SELECT - "aou".id AS "id", - "aout".name AS "name" - FROM - actor.org_unit_type AS "aout" - INNER JOIN actor.org_unit AS "aou" - ON - ( - "aou".ou_type = "aout".id - OR ("aou".ou_type <> ( "aout".id )) - ) ; - +SELECT + "aou".id AS "id", + "aout".name AS "name" +FROM + actor.org_unit_type AS "aout" + INNER JOIN actor.org_unit AS "aou" + ON + ( + "aou".ou_type = "aout".id + OR ("aou".ou_type <> ( "aout".id )) + ) ; + Yes, it's ugly, but at least you're not likely to do it by accident. Selecting from Functions In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table. A JSON query can also select from a function: - - { - "from": [ "actor.org_unit_ancestors", 5 ] - } - + +{ + "from": [ "actor.org_unit_ancestors", 5 ] +} + The data associated with from is an array instead of a string or an object. The first element in the array specifies the name of the function. Subsequent elements, if any, supply the parameters of the function; they must be literal values or nulls. Here is the resulting query: - - SELECT * - FROM - actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ; - + +SELECT * +FROM + actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ; + In a JSON query this format is very limited, largely because the IDL knows nothing about the available functions. You can't join the function to a table or to another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. 
The generated query will always select every column, via a wild card asterisk, from every row. @@ -1552,34 +1549,34 @@ The ORDER BY Clause In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array: - - { - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { "class":"aou", "field":"name" } - ] - } - + +{ + "select":{ "aou":[ "name" ] }, + "from": "aou", + "order_by": [ + { "class":"aou", "field":"name" } + ] +} + Now the object: - - { - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": { - "aou":{ "name":{} } - } - } - + +{ + "select":{ "aou":[ "name" ] }, + "from": "aou", + "order_by": { + "aou":{ "name":{} } + } +} + The results are identical from either version: - - SELECT - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - ORDER BY - "aou".name; - + +SELECT + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +ORDER BY + "aou".name; + The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object format can't do. @@ -1596,51 +1593,51 @@ If you want to sort by multiple fields, just include a separate object for each field. 
If you want to sort a field in descending order, add a direction tag: - - { - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { - "class":"aou", - "field":"name", - "transform":"upper" - } - ] - } - + +{ + "select":{ "aou":[ "name" ] }, + "from": "aou", + "order_by": [ + { + "class":"aou", + "field":"name", + "transform":"upper" + } + ] +} + - SELECT - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - ORDER BY - upper("aou".name ); - +SELECT + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +ORDER BY + upper("aou".name ); + If you need additional parameters for the function, you can use the params tag to pass them: - - { - "select":{ "aou":[ "name" ] }, - "from": "aou", - "order_by": [ - { - "class":"aou", - "field":"name", - "transform":"substr", - "params":[ 1, 8 ] - } - ] - } - + +{ + "select":{ "aou":[ "name" ] }, + "from": "aou", + "order_by": [ + { + "class":"aou", + "field":"name", + "transform":"substr", + "params":[ 1, 8 ] + } + ] +} + The additional parameters appear as elements in an array. They may be numbers, strings, or nulls. - - SELECT - "aou".name AS "name" - FROM - actor.org_unit AS "aou" - ORDER BY - substr("aou".name,'1','8' ); - + +SELECT + "aou".name AS "name" +FROM + actor.org_unit AS "aou" +ORDER BY + substr("aou".name,'1','8' ); + As we have seen elsewhere, all literal values are passed as quoted strings, even if they are numbers. If the function returns multiple columns, you can use the result_field tag to indicate which one you want (not shown). @@ -1649,16 +1646,16 @@ ORDER BY as an Object When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for each class can be either an array or another layer of object. 
Here's an example with one of each:
-
- {
- "select":{ "aout":"id", "aou":[ "name" ] },
- "from": { "aou":"aout" },
- "order_by": {
- "aout":[ "id" ],
- "aou":{ "name":{ "direction":"desc" } }
- }
- }
-
+
+{
+    "select":{ "aout":"id", "aou":[ "name" ] },
+    "from": { "aou":"aout" },
+    "order_by": {
+        "aout":[ "id" ],
+        "aou":{ "name":{ "direction":"desc" } }
+    }
+}
+
 For the aout class, the associated array is simply a list of field names (in this case, just one). Naturally, each field must reside in the class with
 which it is associated.
 However, a list of field names provides no way to specify the direction of sorting, or a transforming function. You can add those details only if the class
@@ -1666,36 +1663,36 @@
 In this example, we use the direction tag to specify that the name field be sorted in descending order. This tag works the same way here as described
 earlier. If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending.
 Here is the resulting SQL:
-
- SELECT
- "aou".name AS "name"
- FROM
- actor.org_unit AS "aou"
- INNER JOIN actor.org_unit_type AS "aout"
- ON ( "aout".id = "aou".ou_type )
- ORDER BY
- "aout".id,
- "aou".name DESC;
-
-
- {
- "select":{ "aou":[ "name", "id" ] },
- "from": "aou",
- "order_by": {
- "aou":{
- "name":{ "transform":"substr", "params":[ 1, 8 ] }
- }
- }
- }
+
+SELECT
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+        INNER JOIN actor.org_unit_type AS "aout"
+            ON ( "aout".id = "aou".ou_type )
+ORDER BY
+    "aout".id,
+    "aou".name DESC;
+
+
+{
+    "select":{ "aou":[ "name", "id" ] },
+    "from": "aou",
+    "order_by": {
+        "aou":{
+            "name":{ "transform":"substr", "params":[ 1, 8 ] }
+        }
+    }
+}

- SELECT
- "aou".name AS "name",
- "aou".id AS "id"
- FROM
- actor.org_unit AS "aou"
- ORDER BY
- substr("aou".name,'1','8' );
-
+
+SELECT
+    "aou".name AS "name",
+    "aou".id AS "id"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    substr("aou".name,'1','8' );
+
there are situations where it can be useful, provided that the column is passed to a transforming function. For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first. For example, you want diBona to sort before Dibona. Here's a way to do that, coding the ORDER BY clause as an array: - - { - "select":{ "au":[ "family_name", "id" ] }, - "from": "au", - "order_by": [ - { "class":"au", "field":"family_name", "transform":"upper" }, - { "class":"au", "field":"family_name" } - ] - } - SELECT - "au".family_name AS "family_name", - "au".id AS "id" - FROM - actor.usr AS "au" - ORDER BY - upper("au".family_name ), - "au".family_name; - + +{ + "select":{ "au":[ "family_name", "id" ] }, + "from": "au", + "order_by": [ + { "class":"au", "field":"family_name", "transform":"upper" }, + { "class":"au", "field":"family_name" } + ] +} +SELECT + "au".family_name AS "family_name", + "au".id AS "id" +FROM + actor.usr AS "au" +ORDER BY + upper("au".family_name ), + "au".family_name; + Such a sort is not possible where the ORDER BY clause is coded as an object. @@ -1732,28 +1729,28 @@ A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However, the way it works is a bit backwards from what you might expect, so pay attention. Here's an example: - - { - "select": { - "aou": [ - { "column":"parent_ou" }, - { "column":"name", "transform":"max", "aggregate":true } - ] - }, - "from": "aou" - } - + +{ + "select": { + "aou": [ + { "column":"parent_ou" }, + { "column":"name", "transform":"max", "aggregate":true } + ] + }, + "from": "aou" +} + The transform tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the aggregate tag. 
Here's the resulting SQL: - - SELECT - "aou".parent_ou AS "parent_ou", - max("aou".name ) AS "name" - FROM - actor.org_unit AS "aou" - GROUP BY - 1; - + +SELECT + "aou".parent_ou AS "parent_ou", + max("aou".name ) AS "name" +FROM + actor.org_unit AS "aou" +GROUP BY + 1; + The GROUP BY clause references fields from the SELECT clause by numerical reference, instead of by repeating them. Notice that the field it references, parent_ou, is the one that doesn't carry the aggregate tag in the JSON. Let's state that more generally. The GROUP BY clause includes only the fields that do not carry the aggregate tag (or that carry it with a value of false). @@ -1766,28 +1763,28 @@ JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as applying DISTINCT to the entire SELECT clause. For example: - - { - "select": { - "aou": [ - "parent_ou", - "ou_type" - ] - }, - "from":"aou", - "distinct":"true" - } - + +{ + "select": { + "aou": [ + "parent_ou", + "ou_type" + ] + }, + "from":"aou", + "distinct":"true" +} + Note the distinct entry at the top level of the query object, with a value of true. - - SELECT - "aou".parent_ou AS "parent_ou", - "aou".ou_type AS "ou_type" - FROM - actor.org_unit AS "aou" - GROUP BY - 1, 2; - + +SELECT + "aou".parent_ou AS "parent_ou", + "aou".ou_type AS "ou_type" +FROM + actor.org_unit AS "aou" +GROUP BY + 1, 2; + The generated GROUP BY clause references every column in the SELECT clause by number. @@ -1795,70 +1792,70 @@ For a HAVING clause, add a having entry at the top level of the query object. For the associated data, you can use all the same syntax that you can use for a WHERE clause. 
Here's a simple example:
-
- {
- "select": {
- "aou": [
- "parent_ou", {
- "column":"id",
- "transform":"count",
- "alias":"id_count",
- "aggregate":"true"
- }
- ]
- },
- "from":"aou",
- "having": {
- "id": {
- ">" : {
- "transform":"count",
- "value":6
- }
- }
- }
- }
-
+
+{
+    "select": {
+        "aou": [
+            "parent_ou", {
+                "column":"id",
+                "transform":"count",
+                "alias":"id_count",
+                "aggregate":"true"
+            }
+        ]
+    },
+    "from":"aou",
+    "having": {
+        "id": {
+            ">" : {
+                "transform":"count",
+                "value":6
+            }
+        }
+    }
+}
+
 We use the aggregate tag in the SELECT clause to give us a GROUP BY to go with the HAVING. Results:
-
- SELECT
- "aou".parent_ou AS "parent_ou",
- count("aou".id ) AS "id_count"
- FROM
- actor.org_unit AS "aou"
- GROUP BY
- 1
- HAVING
- count("aou".id ) > 6 ;
-
+
+SELECT
+    "aou".parent_ou AS "parent_ou",
+    count("aou".id ) AS "id_count"
+FROM
+    actor.org_unit AS "aou"
+GROUP BY
+    1
+HAVING
+    count("aou".id ) > 6 ;
+
 In raw SQL we could have referred to count( 1 ). But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column
 that cannot be null.
 The LIMIT and OFFSET Clauses
 To add a LIMIT or OFFSET clause, add an entry to the top level of a query object. For example:
-
- {
- "select": {
- "aou": [ "id", "name" ]
- },
- "from":"aou",
- "order_by": { "aou":[ "id" ] },
- "offset": 7,
- "limit": 42
- }
-
+
+{
+    "select": {
+        "aou": [ "id", "name" ]
+    },
+    "from":"aou",
+    "order_by": { "aou":[ "id" ] },
+    "offset": 7,
+    "limit": 42
+}
+
 The data associated with offset and limit may be either a number or a string, but if it's a string, it should have a number inside.
Result:
-
- SELECT
- "aou".id AS "id",
- "aou".name AS "name"
- FROM
- actor.org_unit AS "aou"
- ORDER BY
- "aou".id
- LIMIT 42
- OFFSET 7;
-
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    "aou".id
+LIMIT 42
+OFFSET 7;
+
diff --git a/1.6/development/supercat.xml b/1.6/development/supercat.xml
index 18ed504..b3edff0 100644
--- a/1.6/development/supercat.xml
+++ b/1.6/development/supercat.xml
@@ -13,255 +13,255 @@
 Return a list of ISBNs for related records
 Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm:
- http://<hostname>/opac/extras/osibn/<ISBN>
- For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns:
-
- <idlist metarecord="302670">
- <isbn record="250060">0790783525</isbn>
- <isbn record="20717">0736691316</isbn>
- <isbn record="250045">0790783517</isbn>
- <isbn record="199060">9500421151</isbn>
- <isbn record="250061">0790783495</isbn>
- <isbn record="154477">0807286028</isbn>
- <isbn record="227297">1594130027</isbn>
- <isbn record="26682">0786222743</isbn>
- <isbn record="17179">0807282316</isbn>
- <isbn record="34885">0807282316</isbn>
- <isbn record="118019">8478885196</isbn>
- <isbn record="1231">0738301477</isbn>
- </idlist>
-
+ http://<hostname>/opac/extras/oisbn/<ISBN>
+ For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns:
+
+<idlist metarecord="302670">
+<isbn record="250060">0790783525</isbn>
+<isbn record="20717">0736691316</isbn>
+<isbn record="250045">0790783517</isbn>
+<isbn record="199060">9500421151</isbn>
+<isbn record="250061">0790783495</isbn>
+<isbn record="154477">0807286028</isbn>
+<isbn record="227297">1594130027</isbn>
+<isbn record="26682">0786222743</isbn>
+<isbn record="17179">0807282316</isbn>
+<isbn record="34885">0807282316</isbn>
+<isbn record="118019">8478885196</isbn>
+<isbn record="1231">0738301477</isbn>
+</idlist>
+
 Return records
 SuperCat can return records and metarecords
in many different formats (see - http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID> - For example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns: - - <mods:modsCollection version="3.0"> - <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd"> - <titleInfo> - <title>More Brer Rabbit stories /</title> - </titleInfo> - <typeOfResource>text</typeOfResource> - <originInfo> - <place> - <code authority="marc">xx</c0de> - </place> - <publisher>Award Publications</publisher> - <dateIssued>c1982, 1983</dateIssued> - <dateIssued encoding="marc" point="start">1983</dateIssued> - <dateIssued encoding="marc" point="end">1982</dateIssued> - <issuance>monographic</issuance> - </originInfo> - <language authority="iso639-2b">eng</language> - <physicalDescription> - <form authority="marcform">print</form> - <extent>unp. : col. ill.</extent> - </physicalDescription> - <note type="statement of responsibility">ill. 
by Rene Cloke.</note>
- <subject authority="lcsh">
- <topic>Animals</topic>
- <topic>Fiction</topic>
- </subject>
- <subject authority="lcsh">
- <topic>Fables</topic>
- </subject>
- <recordInfo>
- <recordContentSource>(BRO)</recordContentSource>
- <recordCreationDate encoding="marc">930903</recordCreationDate>
- <recordChangeDate encoding="iso8601">19990703024637.0</recordChangeDate>
- <recordIdentifier>PIN60000007 </recordIdentifier>
- </recordInfo>
- </mods:mods>
- </mods:modsCollection>
-
+ http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID>
+ For example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns:
+
+<mods:modsCollection version="3.0">
+  <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd">
+    <titleInfo>
+      <title>More Brer Rabbit stories /</title>
+    </titleInfo>
+    <typeOfResource>text</typeOfResource>
+    <originInfo>
+      <place>
+        <code authority="marc">xx</code>
+      </place>
+      <publisher>Award Publications</publisher>
+      <dateIssued>c1982, 1983</dateIssued>
+      <dateIssued encoding="marc" point="start">1983</dateIssued>
+      <dateIssued encoding="marc" point="end">1982</dateIssued>
+      <issuance>monographic</issuance>
+    </originInfo>
+    <language authority="iso639-2b">eng</language>
+    <physicalDescription>
+      <form authority="marcform">print</form>
+      <extent>unp. : col. ill.</extent>
+    </physicalDescription>
+    <note type="statement of responsibility">ill.
by Rene Cloke.</note> + <subject authority="lcsh"> + <topic>Animals</topic> + <topic>Fiction</topic> + </subject> + <subject authority="lcsh"> + <topic>Fables</topic> + </subject> + <recordInfo> + <recordContentSource>(BRO)</recordContentSource> + <recordCreationDate encoding="marc">930903</recordCreationDate> + <recordChangeDate encoding="iso8601">19990703024637.0</recordChangeDate> + <recordIdentifier>PIN60000007 </recordIdentifier> + </recordInfo> + </mods:mods> +</mods:modsCollection> + Return a feed of recently edited or created records SuperCat can return feeds of recently edited or created authority and bibliographic records: - http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date> + http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date> The limit records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent limit records will be returned. If you do not supply a limit, then up to 10 records will be returned. Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2. 
- For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01 + For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01 Browse records SuperCat can browse records in HTML and XML formats: - http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number> - For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns: - - <hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'> - <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH"> - <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/> - <record xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/ - standards/marcxml/schema/MARC21slim.xsd" - id="tag:open-ils.org,2008:biblio-record_entry/21669/FRRLS-FA"> - <leader>09319pam a2200961 a 4500</leader> - <controlfield tag="001"/> - <controlfield tag="005">20000302124754.0</controlfield> - <controlfield tag="008">990817s2000 nyu 000 1 eng </controlfield> - <datafield tag="010" ind1=" " ind2=" "> - <subfield code="a"> 99045936</subfield> - </datafield> - .. - </record> - <record> - .. 
- </record> - </hold:volume> - </hold:volumes> - + http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number> + For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns: + +<hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'> + <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH"> + <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/> + <record xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/ + standards/marcxml/schema/MARC21slim.xsd" + id="tag:open-ils.org,2008:biblio-record_entry/21669/FRRLS-FA"> + <leader>09319pam a2200961 a 4500</leader> + <controlfield tag="001"/> + <controlfield tag="005">20000302124754.0</controlfield> + <controlfield tag="008">990817s2000 nyu 000 1 eng </controlfield> + <datafield tag="010" ind1=" " ind2=" "> + <subfield code="a"> 99045936</subfield> + </datafield> + .. + </record> + <record> + .. 
+ </record> + </hold:volume> +</hold:volumes> + Supported formats SuperCat maintains a list of supported formats for records and metarecords: - http://<hostname>/opac/extras/supercat/formats/<record-type> - For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns: - - <formats> + http://<hostname>/opac/extras/supercat/formats/<record-type> + For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns: + +<formats> - <format> - <name>opac</name> - <type>text/html</type> - </format> + <format> + <name>opac</name> + <type>text/html</type> + </format> - <format> - <name>htmlholdings</name> - <type>text/html</type> - </format> - - <format> - <name>html</name> - <type>text/html</type> - </format> - - <format> - <name>htmlholdings-full</name> - <type>text/html</type> - </format> - - <format> - <name>html-full</name> - <type>text/html</type> - </format> - - <format> - <name>marcxml</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/MARC21/slim</namespace_uri> - <docs>http://www.loc.gov/marcxml/</docs> - - <schema_location> - http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd - </schema_location> - </format> - - <format> - <name>marcxml-full</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/MARC21/slim</namespace_uri> - <docs>http://www.loc.gov/marcxml/</docs> - - <schema_location> - http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd - </schema_location> - </format> - - <format> - <name>rss2</name> - <type>application/xml</type> - </format> - - <format> - <name>rss2-full</name> - <type>application/xml</type> - </format> - - <format> - <name>rdf_dc</name> - <type>application/xml</type> - <namespace_uri>http://purl.org/dc/elements/1.1/</namespace_uri> - <schema_location>http://purl.org/dc/elements/1.1/</schema_location> - </format> - - <format> - <name>oai_dc</name> - <type>application/xml</type> - <namespace_uri>http://www.openarchives.org/OAI/2.0/oai_dc/</namespace_uri> 
- <schema_location>http://www.openarchives.org/OAI/2.0/oai_dc.xsd</schema_location> - </format> - - <format> - <name>srw_dc</name> - <type>application/xml</type> - <namespace_uri>info:srw/schema/1/dc-schema</namespace_uri> - - <schema_location> - http://www.loc.gov/z3950/agency/zing/srw/dc-schema.xsd - </schema_location> - </format> - - <format> - <name>mods32</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri> - <docs>http://www.loc.gov/mods/</docs> - <schema_location>http://www.loc.gov/standards/mods/v3/mods-3-2.xsd</schema_location> - </format> - - <format> - <name>mods3</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri> - <docs>http://www.loc.gov/mods/</docs> - <schema_location>http://www.loc.gov/standards/mods/v3/mods-3-1.xsd</schema_location> - </format> - - <format> - <name>mods3-full</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri> - <docs>http://www.loc.gov/mods/</docs> - <schema_location>http://www.loc.gov/standards/mods/v3/mods-3-1.xsd</schema_location> - </format> - - <format> - <name>mods</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/mods/</namespace_uri> - <docs>http://www.loc.gov/mods/</docs> - <schema_location>http://www.loc.gov/standards/mods/mods.xsd</schema_location> - </format> - - <format> - <name>mods-full</name> - <type>application/xml</type> - <namespace_uri>http://www.loc.gov/mods/</namespace_uri> - <docs>http://www.loc.gov/mods/</docs> - <schema_location>http://www.loc.gov/standards/mods/mods.xsd</schema_location> - </format> - - <format> - <name>atom</name> - <type>application/xml</type> - <namespace_uri>http://www.w3.org/2005/Atom</namespace_uri> - <docs>http://www.ietf.org/rfc/rfc4287.txt</docs> - </format> - - <format> - <name>atom-full</name> - <type>application/xml</type> - <namespace_uri>http://www.w3.org/2005/Atom</namespace_uri> - 
<docs>http://www.ietf.org/rfc/rfc4287.txt</docs> - </format> - </formats> - + <format> + <name>htmlholdings</name> + <type>text/html</type> + </format> + + <format> + <name>html</name> + <type>text/html</type> + </format> + + <format> + <name>htmlholdings-full</name> + <type>text/html</type> + </format> + + <format> + <name>html-full</name> + <type>text/html</type> + </format> + + <format> + <name>marcxml</name> + <type>application/xml</type> + <namespace_uri>http://www.loc.gov/MARC21/slim</namespace_uri> + <docs>http://www.loc.gov/marcxml/</docs> + + <schema_location> + http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd + </schema_location> + </format> + + <format> + <name>marcxml-full</name> + <type>application/xml</type> + <namespace_uri>http://www.loc.gov/MARC21/slim</namespace_uri> + <docs>http://www.loc.gov/marcxml/</docs> + + <schema_location> + http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd + </schema_location> + </format> + + <format> + <name>rss2</name> + <type>application/xml</type> + </format> + + <format> + <name>rss2-full</name> + <type>application/xml</type> + </format> + + <format> + <name>rdf_dc</name> + <type>application/xml</type> + <namespace_uri>http://purl.org/dc/elements/1.1/</namespace_uri> + <schema_location>http://purl.org/dc/elements/1.1/</schema_location> + </format> + + <format> + <name>oai_dc</name> + <type>application/xml</type> + <namespace_uri>http://www.openarchives.org/OAI/2.0/oai_dc/</namespace_uri> + <schema_location>http://www.openarchives.org/OAI/2.0/oai_dc.xsd</schema_location> + </format> + + <format> + <name>srw_dc</name> + <type>application/xml</type> + <namespace_uri>info:srw/schema/1/dc-schema</namespace_uri> + + <schema_location> + http://www.loc.gov/z3950/agency/zing/srw/dc-schema.xsd + </schema_location> + </format> + + <format> + <name>mods32</name> + <type>application/xml</type> + <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri> + <docs>http://www.loc.gov/mods/</docs> + 
<schema_location>http://www.loc.gov/standards/mods/v3/mods-3-2.xsd</schema_location>
+  </format>
+
+  <format>
+    <name>mods3</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri>
+    <docs>http://www.loc.gov/mods/</docs>
+    <schema_location>http://www.loc.gov/standards/mods/v3/mods-3-1.xsd</schema_location>
+  </format>
+
+  <format>
+    <name>mods3-full</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.loc.gov/mods/v3</namespace_uri>
+    <docs>http://www.loc.gov/mods/</docs>
+    <schema_location>http://www.loc.gov/standards/mods/v3/mods-3-1.xsd</schema_location>
+  </format>
+
+  <format>
+    <name>mods</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.loc.gov/mods/</namespace_uri>
+    <docs>http://www.loc.gov/mods/</docs>
+    <schema_location>http://www.loc.gov/standards/mods/mods.xsd</schema_location>
+  </format>
+
+  <format>
+    <name>mods-full</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.loc.gov/mods/</namespace_uri>
+    <docs>http://www.loc.gov/mods/</docs>
+    <schema_location>http://www.loc.gov/standards/mods/mods.xsd</schema_location>
+  </format>
+
+  <format>
+    <name>atom</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.w3.org/2005/Atom</namespace_uri>
+    <docs>http://www.ietf.org/rfc/rfc4287.txt</docs>
+  </format>
+
+  <format>
+    <name>atom-full</name>
+    <type>application/xml</type>
+    <namespace_uri>http://www.w3.org/2005/Atom</namespace_uri>
+    <docs>http://www.ietf.org/rfc/rfc4287.txt</docs>
+  </format>
+</formats>
+
 Adding new SuperCat formats
-    SuperCat web services are based on the OpenSRF application, open-ils.supercat.
+    SuperCat web services are based on the OpenSRF service, open-ils.supercat.
Developers are able to add new formats by adding the XSL stylesheet for the format
to the directory /openils/var/web/opac/extras/xsl/, and by adding the feed references to the Perl modules
openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. An Evergreen restart is
diff --git a/tools/docbookConvert.sh b/tools/docbookConvert.sh
index 0553ff0..711c20f 100755
--- a/tools/docbookConvert.sh
+++ b/tools/docbookConvert.sh
@@ -1,7 +1,7 @@
#!/bin/bash

#generate draft html
- xsltproc --xinclude --stringparam base.dir /openils/var/web/evergreen_documentation/draft/html/ ~/Evergreen-DocBook/stylesheets/evergreen_docbook_files/evergreen_xhtml.xsl ~/Evergreen-DocBook/1.6/root.xml
+ xsltproc --xinclude --stringparam base.dir /openils/var/web/evergreen_documentation/1.6/draft/html/ ~/Evergreen-DocBook/stylesheets/evergreen_docbook_files/evergreen_xhtml.xsl ~/Evergreen-DocBook/1.6/root.xml

#Generate PDF via FO

# must run fop from same directory as root.xml
cd ~/Evergreen-DocBook/1.6/
- ~/doctools/fop/fop -fo pdf/temp.fo -pdf /openils/var/web/evergreen_documentation/draft/pdf/Evergreen_Documentation.pdf
+ ~/doctools/fop/fop -fo pdf/temp.fo -pdf /openils/var/web/evergreen_documentation/1.6/draft/pdf/Evergreen_Documentation.pdf

# remove temporary .fo file
rm pdf/temp.fo
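The `<formats>` listing in the SuperCat section above is plain XML, so a client can discover the supported formats programmatically before requesting records. A minimal sketch, not part of the commit: the embedded fragment is abridged from the response shown above, and any real client would fetch the document from `http://<hostname>/opac/extras/supercat/formats/record` instead of using a literal string.

```python
import xml.etree.ElementTree as ET

# Abridged fragment of the /opac/extras/supercat/formats/record response
# shown above; a real client would fetch this over HTTP.
FORMATS_XML = """
<formats>
  <format>
    <name>marcxml</name>
    <type>application/xml</type>
    <namespace_uri>http://www.loc.gov/MARC21/slim</namespace_uri>
  </format>
  <format>
    <name>atom</name>
    <type>application/xml</type>
  </format>
</formats>
"""

root = ET.fromstring(FORMATS_XML)
# Map each format name to its MIME type.
formats = {f.findtext("name"): f.findtext("type") for f in root.iter("format")}
print(formats)  # {'marcxml': 'application/xml', 'atom': 'application/xml'}
```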