From: Robert Soulliere Date: Tue, 22 Feb 2011 17:11:46 +0000 (-0500) Subject: Add files to 2.0. X-Git-Url: https://old-git.evergreen-ils.org/?a=commitdiff_plain;h=3355fdd8bd6c329f55118ebe5b5063ced102309a;p=Evergreen-DocBook.git Add files to 2.0. --- diff --git a/2.0/admin/AdminMisc.xml b/2.0/admin/AdminMisc.xml new file mode 100644 index 0000000..ec6144e --- /dev/null +++ b/2.0/admin/AdminMisc.xml @@ -0,0 +1,320 @@
Server Operations and Maintenance
This chapter deals with basic server operations such as starting and stopping Evergreen, as well as security, backing up and troubleshooting Evergreen.
Starting, Stopping and Restarting
Occasionally, you may need to restart Evergreen. It is imperative that you understand the basic commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of the server using the osrf_ctl.sh script located in the /openils/bin directory.
The osrf_ctl.sh command must be run as the opensrf user.
To view help on osrf_ctl.sh and get all of its options, run:
osrf_ctl.sh -h
To start Evergreen, run:
osrf_ctl.sh -l -a start_all
The -l flag is used to indicate that Evergreen is configured to use localhost as the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a option is required and indicates the action of the command; in this case the action is start_all.
If you receive the error message osrf_ctl.sh: command not found, then your environment variable PATH does not include the /openils/bin directory. You can set it using the following command:
export PATH=$PATH:/openils/bin
If you receive the error message Can't locate OpenSRF/System.pm in @INC … BEGIN failed--compilation aborted, then your environment variable PERL5LIB does not include the /openils/lib/perl5 directory. You can set it using the following command:
export PERL5LIB=$PERL5LIB:/openils/lib/perl5
It is also possible to start a specific service. For example:
osrf_ctl.sh -l -a start_router
will only start the router service.
If you decide to start each service individually, you need to start them in a specific order for Evergreen to start correctly. Run the commands in this exact order:
osrf_ctl.sh -l -a start_router
osrf_ctl.sh -l -a start_perl
osrf_ctl.sh -l -a start_c
After starting or restarting Evergreen, it is also necessary to restart the Apache web server for the OPAC to work correctly.
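After applying the export commands above, you can confirm that both directories are actually on the search paths before starting Evergreen. The snippet below is a sketch that assumes the default /openils install prefix; adjust the paths if your site installed Evergreen elsewhere.

```shell
# Append the Evergreen directories to the search paths, then verify them.
# /openils is the default install prefix (an assumption -- adjust as needed).
export PATH=$PATH:/openils/bin
export PERL5LIB=$PERL5LIB:/openils/lib/perl5
case ":$PATH:" in
    *:/openils/bin:*) PATH_OK=yes ;;
    *)                PATH_OK=no  ;;
esac
case ":$PERL5LIB:" in
    *:/openils/lib/perl5:*) PERL5LIB_OK=yes ;;
    *)                      PERL5LIB_OK=no  ;;
esac
echo "PATH ok: $PATH_OK, PERL5LIB ok: $PERL5LIB_OK"
```

To make the fix permanent, add the two export lines to the opensrf user's ~/.bashrc so they survive logout.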
+ To stop Evergreen, run: + osrf_ctl.sh -l -a stop_all + As with starting, you can choose to stop services individually. + To restart Evergreen, run: + osrf_ctl.sh -l -a restart_all +
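Since Apache must be restarted after Evergreen for the OPAC to work correctly, the two steps are often wrapped together. The sketch below only prints the commands in order (pipe its output to sh on a real server); the apache2 init script path is an assumption based on Debian/Ubuntu, and other distributions may use httpd instead.

```shell
# Sketch of a full Evergreen restart. The function prints the commands rather
# than running them, so it is safe to run anywhere; on a real server, review
# the output and then pipe it to sh.
evergreen_restart_cmds() {
    echo "osrf_ctl.sh -l -a restart_all"
    # Apache must be restarted after Evergreen for the OPAC to work correctly.
    # /etc/init.d/apache2 is the Debian/Ubuntu path (an assumption).
    echo "/etc/init.d/apache2 restart"
}
evergreen_restart_cmds
```

Remember that osrf_ctl.sh must run as the opensrf user, while the Apache restart typically requires root.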
+
Backing Up
Backing up your system files and data is a critical task for server and database administrators. Having a strategy for backing up and recovery could be the difference between a minor annoyance for users and a complete catastrophe.
Backing up the <application>Evergreen</application> Database
Most of the critical data for an Evergreen system – patrons, bibliographic records, holdings, transactions, bills – is stored in the PostgreSQL database. You can therefore use normal PostgreSQL backup procedures to back up this data. For example, the simplest method of backing up the Evergreen database is to use the pg_dump command to create a live backup of the database without having to interrupt any Evergreen services. Here is an example pg_dump command which will dump a local Evergreen database into the file evergreen_db.backup:
pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen
To restore the backed up database into a new database, create a new database using the template0 database template and the UTF8 encoding, and run the psql command, specifying the new database as your target:
createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen
psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen
This method of backup is only suitable for small Evergreen instances. Larger sites should consider implementing continuous archiving (also known as log shipping) to provide more granular backups with lower system overhead. More information on backing up PostgreSQL databases can be found in the official PostgreSQL documentation.
Backing up Evergreen Files
When you deploy Evergreen, you will probably customize many aspects of your system including the system configuration files, Apache configuration files, OPAC and Staff Client.
In order to protect your investment of time, you should carefully consider the best approach to backing up files.
There are a number of ways of tackling this problem. You could create a script that regularly creates a time-stamped tarball of all of these files and copies it to a remote server - but that would build up over time to hundreds of files. You could use rsync to ensure that the files of interest are regularly updated on a remote server - but then you would lose track of the changes to the files, should you make a change that introduces a problem down the road.
Perhaps one of the best options is to use a version control system like Bazaar, git or Subversion to regularly push updates of the files you care about to a repository on a remote server. This gives you the advantage of quickly being able to run through the history of the changes you made, with a commenting system that reminds you why each change was made, combined with remote storage of the pertinent files in case of disaster on site. In addition, your team can create local copies of the repository and test their own changes in isolation from the production system. Using a version control system also helps to recover system customizations after an upgrade.
Full System Backup
A full system backup archives every file on the file system. Some basic methods require you to shut down most system processes; other methods can use mirrored RAID setups or SAN storage to take snapshot backups of your full system while the system continues to run. The subject of how to implement full system backups is beyond the scope of this documentation.
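The pg_dump approach described earlier in this section is easy to automate with a date-stamped file name. The sketch below only prints the command it would run, so it is safe to try anywhere; BACKUP_DIR and the connection details are assumptions to be matched to your own site.

```shell
# Sketch of a nightly database backup command with a date-stamped file name.
# BACKUP_DIR, the database name and the credentials are assumptions.
BACKUP_DIR=${BACKUP_DIR:-/var/backups/evergreen}
backup_cmd() {
    stamp=$(date +%Y%m%d)    # e.g. 20110222
    echo "pg_dump -U evergreen -h localhost -f $BACKUP_DIR/evergreen_db.$stamp.backup evergreen"
}
backup_cmd
```

A cron job could evaluate this command each night; remember to prune old dumps so the backup directory does not grow without bound, and to copy the dumps off the server.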
+
Security
As with any ILS or other resource accessible from the World Wide Web, careful consideration needs to be given to the security of your Evergreen servers and database. While it is impossible to cover all aspects of security, it is important to take several precautions when setting up a production Evergreen site.
Change the Evergreen admin password and keep it secure. The default admin password is known by anyone who has installed Evergreen. It is not a secret and needs to be changed by the Administrator. It should also only be shared by those who need the highest level of access to your system.
Create strong passwords using a combination of numerical and alphabetical characters for all of the administrative passwords, including the postgres and opensrf users.
Open ports in the firewall with caution - it is only necessary to open ports 80 and 443 for TCP connections to the Evergreen server from the OPAC and the staff client. It is critical for administrators to understand the concepts of network security and take precautions to minimize vulnerabilities.
Use permissions and permission groups wisely - it is important to understand the purpose of the permissions and to only give users the level of access that they require.
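The firewall advice above can be sketched with ufw, Ubuntu's firewall frontend (an assumption; use iptables or your distribution's own tool if ufw is not available). The function below only prints the commands so you can review them before running anything.

```shell
# Sketch: allow only the OPAC / staff-client ports through the firewall.
# The commands are printed, not executed -- review before piping to sh as root.
firewall_cmds() {
    echo "ufw default deny incoming"
    echo "ufw allow 80/tcp"      # OPAC and staff client over HTTP
    echo "ufw allow 443/tcp"     # HTTPS
    echo "ufw enable"
}
firewall_cmds
```

Before enabling a default-deny policy on a remote server, also allow your SSH port, or you can lock yourself out of the machine.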
+
Managing Log Files
Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF and Evergreen logs. This section will provide a couple of log management techniques and tools.
Using the <systemitem class="service">logrotate</systemitem> Utility to Manage Log Size
Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control. On Debian and Ubuntu, for example, the logrotate utility controls when old log files are compressed and a new log file is started. logrotate runs once a day and checks all log files that it knows about to see if a threshold of time or size has been reached, and rotates the log files if a threshold condition has been met.
To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are > 50MB in size, create a new file /etc/logrotate.d/evergreen with the following contents:

compress
/openils/var/log/*.log {
# keep the last 4 archived log files along with the current log file
# log log.1.gz log.2.gz log.3.gz log.4.gz
# and delete the oldest log file (what would have been log.5.gz)
rotate 5
# if the log file is > 50MB in size, rotate it immediately
size 50M
# for those logs that don't grow fast, rotate them weekly anyway
weekly
}

Changing Logging Level for <application>Evergreen</application>
Changing the level of logging in your configuration files can help narrow down errors.
Running at a high logging level is unwise in a production environment, since it will produce vastly larger log files and thus reduce server performance.
Change logging levels by editing the configuration file /openils/conf/opensrf_core.xml; you will want to search for lines containing <loglevel>. The default setting for loglevel is 3, which will log errors, warnings and information.
The next level is 4, which is for debugging and provides additional information helpful for the debugging process.
Thus, lines with:
<loglevel>3</loglevel>
should be changed to:
<loglevel>4</loglevel>
to allow debugging-level logging.
Other logging levels include 0 for no logging, 1 for logging errors and 2 for logging warnings and errors.
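The loglevel change above can be applied in one pass with sed. The sketch below runs against a temporary file so it is safe to try as-is; on a real server, point FILE at /openils/conf/opensrf_core.xml, and keep a backup copy first, because sed -i edits the file in place.

```shell
# Sketch: raise every logging level from 3 to 4 in one pass with sed.
# Demonstrated on a temporary file; use /openils/conf/opensrf_core.xml
# (after backing it up) on a real server.
FILE=$(mktemp)
printf '%s\n' '<loglevel>3</loglevel>' > "$FILE"
sed -i 's|<loglevel>3</loglevel>|<loglevel>4</loglevel>|g' "$FILE"
RESULT=$(cat "$FILE")
echo "$RESULT"
rm -f "$FILE"
```

The services read their configuration at startup, so restart Evergreen after changing the level for it to take effect.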
+
Installing PostgreSQL from Source
Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. Before you continue, examine the software dependencies listed in to ensure that your Linux distribution supports the required version of PostgreSQL.
Install the application stow on your system if it is not already installed. Issue the following command as the root user:

apt-get install stow

Download, compile, and install the latest release for PostgreSQL 8.2 (which was version 8.2.17 at the time of this writing). As the root user, follow these steps:

wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2
tar xjf postgresql-8.2.17.tar.bz2
cd postgresql-8.2.17
./configure --with-perl --enable-integer-datetimes --with-openssl --prefix=/usr/local/stow/pgsql
make
make install
cd contrib
make
make install
cd xml2
make
make install
cd /usr/local/stow
stow pgsql

Create the new user postgres to run the PostgreSQL processes. As the root user, execute this command:
adduser postgres
Initialize the database directory and start up PostgreSQL. As the root user, follow these steps:

mkdir -p /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
initdb -D /usr/local/pgsql/data -E UNICODE --locale=C
pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start

If an error occurs during the final step above, review the path of the home directory for the postgres user. It may be /var/lib/postgresql instead of /home/postgres.
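The final `stow pgsql` step works by symlinking the package tree kept under /usr/local/stow/pgsql into /usr/local, so the binaries appear on the normal paths yet can be removed cleanly later. The toy sketch below simulates what stow automates, using one hand-made symlink in a temporary directory.

```shell
# Toy illustration of the stow model: a package installed under a private
# prefix, exposed via a relative symlink (this is what `stow pgsql` automates
# for the whole tree).
TOP=$(mktemp -d)
mkdir -p "$TOP/stow/pgsql/bin" "$TOP/bin"
echo 'fake-psql' > "$TOP/stow/pgsql/bin/psql"
ln -s ../stow/pgsql/bin/psql "$TOP/bin/psql"   # stow creates links like this
CONTENT=$(cat "$TOP/bin/psql")                 # read through the symlink
echo "$CONTENT"
rm -rf "$TOP"
```

On a real install, `which psql` should then resolve to the /usr/local link rather than a distribution package.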
+
Configuring PostgreSQL
The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values and some suggested updates for several useful parameters:

Suggested configuration values

Parameter
Default
Suggested

default_statistics_target
10
100

work_mem
4MB
128MB

shared_buffers
8MB
512MB

effective_cache_size
128MB
4GB
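Expressed as postgresql.conf settings, the suggested values above look like the following (the file's location depends on how PostgreSQL was installed - the data directory for a source install, or under /etc/postgresql/ for distribution packages):

```
default_statistics_target = 100
work_mem = 128MB
shared_buffers = 512MB
effective_cache_size = 4GB
```

Note that changing shared_buffers requires a server restart, while the other parameters take effect on a configuration reload. Treat these as starting points rather than tuned values; appropriate sizes depend on the server's available RAM.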
+
+
diff --git a/2.0/admin/actiontriggers.xml b/2.0/admin/actiontriggers.xml new file mode 100644 index 0000000..7e7941c --- /dev/null +++ b/2.0/admin/actiontriggers.xml @@ -0,0 +1,322 @@ + + + + Action Triggers + action triggers + + Action Triggers were introduced to Evergreen in 1.6. They allow administrators the ability to set up actions for specific events. They are useful for notification events such as + hold notifications. + + + To access the Action Triggers module, select + + Admin + Local Administration + Notifications / Action triggers + + + You must have Local Administrator permissions to access the Action Triggers module. + You will notice four tabs on this page: Event Definitions, Hooks, + Reactors and Validators. + +
+ Event Definitions + action triggersevent definitions + Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include: + + Action Trigger Event Definitions + + + + + + Field + Description + + + + + Owning library + The shortname of the library for which the action / trigger / hook is defined. + + + Name + The name of the trigger event, that links to a trigger event environment containing a set of fields + that will be returned to the Validators / Reactors for processing. + + + Hooks + The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper + class in the core_type column off of which the rest of the field definitions hang. + + + Enabled + Sets the given trigger as enabled or disabled. This must be set to enabled for the Action trigger to run. + + + Processing Delay + Defines how long after a given trigger / hook event has occurred before the associated action (Reactor) + will be taken. + + + Processing Delay Field + Defines the field associated with the event on which the processing delay is calculated. For example, the processing delay + context field on the hold.capture hook (which has a core_type of ahr) is capture_time. + + + Processing Group Context Field + Used to batch actions based on its associated group. + + + Validators + The subroutines receive the trigger environment as an argument (see the linked Name for + the environment definition) and returns either 1 if the validator is true or 0 + if the validator returns false. + + + Reactors + Links the action trigger to the Reactor. + + + Max Event Validity Delay + Define the threshold for how far back the action_trigger_runner.pl script should reach to generate + a batch of events. + + + +
Creating Action Triggers
From the top menu, select Admin, Local Administration, Notifications / Action triggers.
Click on the New button.
Select an Owning Library.
Create a unique Name for your new action trigger.
Select the Hook.
Check the Enabled check box.
Set the Processing Delay in the appropriate format, e.g. 7 days to run 7 days from the trigger event or 00:01:00 to run 1 hour after the Processing Delay Context Field.
Set the Processing Delay Context Field and Processing Group Context Field.
Select the Validator, Reactor, Failure Cleanup and Success Cleanup.
Enter text in the Template text box if required. These are for email messages. Here is a sample template for sending 90 day overdue notices:

Once you are satisfied with your new event trigger, click the Save button located at the bottom of the form.
A quick and easy way to create new action triggers is to clone an existing action trigger.
Cloning Existing Action Triggers
Check the check box next to the action trigger you wish to clone.
Click the Clone Selected button on the top left of the page.
An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.
Click Save.
Editing Action Triggers
Double-click on the action trigger you wish to edit.
The editing window will open. When you are finished editing, click Save at the bottom of the form.
Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use the action trigger in the future.
+ + Deleting Action Triggers + + Check the check box next to the action trigger you wish to delete + + + Click the Delete Selected on the top left of the page. + + +
+
+ Hooks + action triggershooks + Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions hang. + + Hooks + + + + + + Field + Description + + + + + Hook Key + A unique name given to the hook. + + + Core Type + Used to link the action trigger to the IDL class in fm_IDL.xml + + + Description + Text to describe the purpose of the hook. + + + Passive + Indicates whether or not an event is created by direct user action or is circumstantial. + + + +
You may also create, edit and delete Hooks, but the Core Type must refer to an IDL class in the fm_IDL.xml file.
+
+ Reactors + action triggersreactors + Reactors link the trigger definition to the action to be carried out. + + Action Trigger Reactors + + + + + + Field + Description + + + + + Module Name + The name of the Module to run if the action trigger is validated. It must be defined as a subroutine in + /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module + in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm. + + + Description + Description of the Action to be carried out. + + + +
You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.
+
Validators
Validators set the validation test to be performed to determine whether the action trigger is executed.

Action Trigger Validators

Field
Description

Module Name
The name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Validator.pm to validate the action trigger.

Description
Description of the validation test to run.
You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Validator.pm Perl module.
+
Processing Action Triggers
To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl. This should be set up as a cron job to run periodically.
You have several options when running the script:
--run-pending: Run the pending events.
--process-hooks: Create hook events.
--osrf-config: OpenSRF core config file. Defaults to: /openils/conf/opensrf_core.xml.
--custom-filters: File containing a JSON Object which describes any hooks that should use a user-defined filter to find their target objects. Defaults to: /openils/conf/action_trigger_filters.json.
--max-sleep: When in process-hooks mode, wait up to [seconds] for the lock file to go away. Defaults to 3600 (1 hour).
--hooks: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined in the --custom-filters option.
--debug-stdout: Print server responses to stdout (as JSON) for debugging.
--lock-file: Sets the lock file for the process.
--help: Show help information.
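As an illustration, a site might drive the script from an /etc/cron.d file with entries like the following. The schedule shown is only an example, and the long option names should be confirmed against the usage output of the script shipped with your version (run it with --help):

```
# /etc/cron.d/evergreen-action-triggers (illustrative schedule)
# process pending events every 15 minutes
*/15 * * * *  opensrf  /openils/bin/action_trigger_runner.pl --run-pending
# generate events for passive hooks once a night
15 4 * * *    opensrf  /openils/bin/action_trigger_runner.pl --process-hooks
```

The /etc/cron.d format includes a user field; the jobs above run as the opensrf user, which matches how the rest of the Evergreen services run.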
+
+ diff --git a/2.0/admin/admin-booking.xml b/2.0/admin/admin-booking.xml new file mode 100644 index 0000000..1a5ae44 --- /dev/null +++ b/2.0/admin/admin-booking.xml @@ -0,0 +1,310 @@ + + + + Booking Module Administration + + Adapted with permission from original material by the Evergreen + Community + + + + The Evergreen booking module is included in Evergreen 1.6.1.x and above.The following + documentation will include information about making cataloged items bookable; making + non-bibliographic items bookable; and setting permissions in the booking module for + staff. + + +
Make a Cataloged Item Bookable in Advance
If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly.
If you know in advance of the request that an item will need to be booked, you can make the item bookable.
In the staff client, select Search, Search the Catalog.
Begin a title search to find an item.
Click the title of the item that you want to book.
The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record, Holdings Maintenance in the top right corner of the screen.
The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view.
Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable.
The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary, and proceed with booking.
In Evergreen 1.6.1, there is no way to make an item “unbookable” after it has been made bookable and has been reserved. The Delete Selected button on this screen deletes the resource from the screen, but the item will be able to be booked after it has been returned.
+
+ Make a Cataloged Item Bookable On the Fly + If a patron wants to book an item immediately that does not have bookable status, you + can book the item on the fly if you have the appropriate permissions. + + + + Follow steps one through five in . + + + Select the item that you want to make bookable. Right click to open the menu, and + click Book Item Now. + + + A Reservations screen will appear in a new tab, and you can + make the reservation. + + + + +
+
+ Create a Bookable Status for Non-Bibliographic Items + booking reservationnon-bibliographic items + Staff with the required permissions can create a bookable status for non-bibliographic + items. For example, staff can book conference rooms or laptops. You will be able to create + types of resources, specify the names of individual resources within each type, and set + attributes to describe those resources. You can then bring the values together through the + Resource Attribute Map. + + + First, create the type of resource that you want to make bookable. Select + Admin + Server Administration + Booking + Resource Types + . + + + A list of resource types will appear. You may also see titles of cataloged items + on this screen if they were added using the Make Item Bookable + or Book Now links. You should not attempt to add cataloged items + on this screen; it is best to use the aforementioned links to make those items + bookable. In this screen, you will create a type of resource. + + + In the right corner, click New Resource Type. + + + A box will appear in which you will create a type of resource. In this box, you + can set fines, determine “elbow room” periods between reservations on this type of + resource, and indicate if this type of resource can be transferred to another + library. Click Save when you have entered the needed + information. + + + After you click Save, the box will disappear. Refresh the + screen to see the item that you have added. + + + Next, set the attributes for the type of resource that you have created. Select + Server Administration + Booking + Resource Attributes + . + + + + Click New Resource Attribute. + + + + A box will appear in which you can add the attributes of the resource. Attributes + are descriptive information that is provided to the staff member when the booking + request is made. For example, an attribute of the projector may be a cart that allows + for its transportation. 
Other attributes might be number of seats available in a + room, or MAC or PC attributes for a laptop. Click Save when + the necessary information has been entered. + + + + The box will disappear. Refresh the screen to see the added attribute. + + + + Next, add the values for the resource attributes. A value can be a number, yes/no, + or any other meaningful information. Select + Server Administration + Booking + Resource Attribute Values + . + + + + Select New Resource Attribute Value. + + + + A pop up box will appear. Select the Resource Attribute from + the drop down box. Add the value. You can add multiple values for this field. Click + Save when the required information has been added. + + + + If you refresh the screen, the attribute value may not appear, but it has been + saved. + + + + Next, identify the specific objects that are associated with this resource type. + Click + Admin + Server Administration + Booking + Resources + . + + + + Click New Resource. + + + + A pop-up box will appear. Add information for the resource and click + Save. Repeat this process for each resource. + + + + Refresh the screen, and the resource(s) that you added will appear. + + + + Finally, use Resource Attribute Maps to bring together the + resource and its attributes. Select + Admin + Server Administration + Booking + Resource Attribute Maps + . + + + + Select New Resource Attribute Map + + + + Select the resource that you want to match with its attributes, then click + Save. Repeat for all applicable resources. + + + + You have now created bookable, non-bibliographic resource(s) with + attributes. + + + +
+
+ Setting Booking Permissions + booking reservationsetting booking permissions + Administrators can set permissions so that staff members can view reservations, make + reservations, and make bibliographic or non-bibliographic items bookable. + + If a staff member attempts to book an item for which they do not have the appropriate + permissions, they will receive an error message. + + To set permissions, select + Admin + Server Administration + Permissions + . + + Staff members should be assigned the following permissions to do common tasks in the + booking module. These permissions could be assigned to front line staff members, such as + circulation staff. Permissions with an asterisk (*) are + already included in the Staff permission group. All other + booking permissions must be applied individually. + + + + View Reservations: VIEW_TRANSACTION* + + + Use the pull list: + RETRIEVE_RESERVATION_PULL_LIST + + + Capture reservations: CAPTURE_RESERVATION + + + Assist patrons with pickup and return: + VIEW_USER* + + + Create/update/delete reservations: + ADMIN_BOOKING_RESERVATION + + + + The following permissions allow users to do more advanced tasks, such as making items + bookable, booking items on the fly, and creating non-bibliographic resources for + booking. + + + + Create/update/delete booking resource type: + ADMIN_BOOKING_RESOURCE_TYPE + + + Create/update/delete booking resource attributes: + ADMIN_BOOKING_RESOURCE_ATTR + + + Create/update/delete booking resource attribute + values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE + + + Create/update/delete booking resource: + ADMIN_BOOKING_RESOURCE + + + Create/update/delete booking resource attribute + maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP + + + + In addition to having the permissions listed above, staff members will need a valid + working location in their profiles. This should be done when registering new staff members. + + + +
+
diff --git a/2.0/admin/serversideinstallation_2.0.xml b/2.0/admin/serversideinstallation_2.0.xml new file mode 100644 index 0000000..89a6f5f --- /dev/null +++ b/2.0/admin/serversideinstallation_2.0.xml @@ -0,0 +1,1745 @@
Server-side Installation of Evergreen Software
This section describes installation of the Evergreen server-side software and its associated components. Installation, configuration, testing and verification of the software is straightforward if you follow some simple directions.
Installing, configuring and testing the Evergreen server-side software is straightforward with the current stable software release. The current version of the Evergreen server-side software runs as a native application on any of several well-known Linux distributions (e.g., Ubuntu and Debian). It does not currently run as a native application on the Microsoft Windows operating system (e.g., Windows XP, Windows XP Professional, Windows 7), but the software can still be installed and run on Windows via a virtualized Linux-guest operating system (using, for example, "VirtualBox" or "VMware" to emulate a Linux environment). It can also be installed to run on other Linux systems via virtualized environments. More information on virtualized environments can be found in .
Installation of the Evergreen Staff Client software is reviewed in .
The Evergreen server-side software has dependencies on particular versions of certain major software sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here:

Evergreen Software Dependencies

Evergreen
OpenSRF
PostgreSQL

2.0
1.6.2
8.4
+
+ Installing Server-Side Software + This section describes the installation of the major components of Evergreen server-side software. + As far as possible, you should perform the following steps in the exact order given since the + success of many steps relies on the successful completion of earlier steps. You should make backup + copies of files and environments when you are instructed to do so. In the event of installation problems + those copies can allow you to back out of a step gracefully and resume the installation from a known + state. See for further information. + Of course, after you successfully complete and test the entire Evergreen installation you should + take a final snapshot backup of your system(s). This can be the first in the series of regularly + scheduled system backups that you should probably also begin. +
Installing OpenSRF 1.6.2 On <systemitem class="osname">Ubuntu</systemitem> or <systemitem class="osname">Debian</systemitem>
This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian systems. Evergreen software is integrated with and depends on the OpenSRF software system.
Follow the steps outlined here and run the specified tests to ensure that OpenSRF is properly installed and configured. Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested.
The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.6.2 has been tested on Debian Lenny (5.0), Debian Squeeze (6.0), Ubuntu Lucid Lynx (10.04), CentOS 5 and Red Hat Enterprise Linux 5.
In the following instructions, you are asked to perform certain steps as either the root user, the opensrf user, or the postgres user.
Debian -- To become the root user, issue the command su - and enter the password of the root user.
Ubuntu -- To become the root user, issue the command sudo su - and enter your own password.
To switch from the root user to a different user, issue the command su - USERNAME. For example, to switch from the root user to the opensrf user, issue the command su - opensrf. Once you have become a non-root user, to become the root user again, simply issue the command exit.
Add New <systemitem class="username">opensrf</systemitem> User
As the root user, add the opensrf user to the system.
+ In the following example, the default shell for the
+ opensrf user is automatically set
+ to /bin/bash to inherit a reasonable environment:
+ 
+ 
+ # as the root user:
+ useradd -m -s /bin/bash opensrf
+ passwd opensrf
+ 
+ 
+ 
+ Download and Unpack Latest OpenSRF Version
+ 
+ OpenSRF
+ download
+ 
+ The latest version of OpenSRF can be found here:
+ .
+ As the opensrf user, change to
+ the directory /home/opensrf, then download
+ and extract OpenSRF. The new subdirectory
+ /home/opensrf/OpenSRF-1.6.2 will be created:
+ 
+ 
+ # as the opensrf user:
+ cd /home/opensrf
+ wget http://evergreen-ils.org/downloads/OpenSRF-1.6.2.tar.gz
+ tar zxf OpenSRF-1.6.2.tar.gz
+ 
+ 
+ 
+ Install Prerequisites to Build OpenSRF
+ In this section you will install and configure a set of prerequisites that will be
+ used to build OpenSRF. In a later step you will actually build the OpenSRF software
+ using the make utility.
+ As the root user, enter the commands shown
+ below to build the prerequisites from the software distribution that you just downloaded
+ and unpacked. Remember to replace [DISTRIBUTION] in the following
+ example with the keyword corresponding to the name of one of the
+ Linux distributions listed below.
+ For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would
+ enter this command: make -f src/extras/Makefile.install ubuntu-lucid.
+ 
+ 
+ # as the root user:
+ cd /home/opensrf/OpenSRF-1.6.2
+ make -f src/extras/Makefile.install [DISTRIBUTION]
+ 
+ 
+ 
+ for Debian Squeeze (6.0)
+ LinuxDebian
+ 
+ 
+ for Ubuntu Lucid Lynx
+ (10.04)
+ 
+ 
+ for CentOS 5
+ 
+ 
+ for Red Hat Enterprise Linux 5
+ 
+ 
+ This will install a number of packages on the system that are required by OpenSRF,
+ including some Perl modules from CPAN. You can say No to the initial
+ CPAN configuration prompt to allow it to automatically configure itself to download and
+ install Perl modules from CPAN.
The CPAN installer will ask you a number of times whether + it should install prerequisite modules - say Yes. + + + Build OpenSRF + In this section you will configure, build and install the OpenSRF + components that support other Evergreen services. + + + Configure OpenSRF + + OpenSRF + configure + + As the opensrf + user, return to the new OpenSRF build directory and use the + configure utility to prepare for the next + step of compiling and linking the software. If you wish to + include support for Python and Java, add the configuration + options and + , respectively: + + + # as the opensrf user: + cd /home/opensrf/OpenSRF-1.6.2 + ./configure --prefix=/openils --sysconfdir=/openils/conf + make + + This step will take several minutes to complete. + + + Compile, Link and Install OpenSRF + As the root + user, return to the new OpenSRF build directory and use the + make utility to compile, link and install + OpenSRF: + + + # as the root user: + cd /home/opensrf/OpenSRF-1.6.2 + make install + + This step will take several minutes to complete. + + + Update the System Dynamic Library Path + You must update the system dynamic library path to force + your system to recognize the newly installed libraries. As the + root user, do this by + creating the new file + /etc/ld.so.conf.d/osrf.conf containing a + new library path, then run the command + ldconfig to automatically read the file and + modify the system dynamic library path: + + + # as the root user: + echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf + ldconfig + + + + Define Public and Private OpenSRF Domains + For security purposes, OpenSRF uses Jabber domains to separate services + into public and private realms. On a single-server system the easiest way to + define public and private OpenSRF domains is to define separate host names by + adding entries to the file /etc/hosts. 
+ In the following steps we will use the example domains + public.localhost for the public + domain and private.localhost + for the private domain. In an upcoming step, you will configure two special + ejabberd users + to handle communications for these two domains. + As the root user, edit the file + /etc/hosts and add the following example domains: + + Jabber + + + + # as the root user: + 127.0.1.2 public.localhost public + 127.0.1.3 private.localhost private + + + + Change File Ownerships + Finally, as the root + user, change the ownership of all files installed in the + directory /openils to the + user opensrf: + + + # as the root user: + chown -R opensrf:opensrf /openils + + + + + + Stop the <systemitem class="service">ejabberd</systemitem> Service + + ejabberd + + Before continuing with configuration of ejabberd + you must stop that service. As the root user, + execute the following command to stop the service: + + + # as the root user: + /etc/init.d/ejabberd stop + + If ejabberd reports that it + is already stopped, there may have been a problem when it started back + in the installation step. If there are any remaining daemon processes such as + beam or + epmd + you may need to perform the following commands to kill them: + + + # as the root user: + epmd -kill + killall beam; killall beam.smp + rm /var/lib/ejabberd/* + echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd + + + + Edit the <systemitem class="service">ejabberd</systemitem> configuration + You must make several configuration changes for the + ejabberd service before + it is started again. + As the root user, edit the file + /etc/ejabberd/ejabberd.cfg and make the following changes: + + + Change the line: + {hosts, ["localhost"]}. + to instead read: + {hosts, ["localhost", "private.localhost", "public.localhost"]}. 
+ + + + Change the line: + {max_user_sessions, 10} + to instead read: + {max_user_sessions, 10000} + + If the line looks something like this: + {access, max_user_sessions, [{10, all}]} + then change it to instead read: + {access, max_user_sessions, [{10000, all}]} + + + Change all three occurrences of: + max_stanza_size + to instead read: + 2000000 + + + Change both occurrences of: + maxrate + to instead read: + 500000 + + + Comment out the line: + {mod_offline, []} + by placing two % comment signs in front + so it instead reads: + %%{mod_offline, []} + + + + + Restart the <systemitem class="service">ejabberd</systemitem> service + As the root user, restart the + ejabberd service to test the + configuration changes and to register your users: + + + # as the root user: + /etc/init.d/ejabberd start + + + + Register <systemitem class="username">router</systemitem> and + <systemitem class="username">opensrf</systemitem> as + <systemitem class="service">ejabberd</systemitem> users + The two ejabberd users + router and + opensrf must be registered + and configured to manage OpenSRF router service and communications + for the two domains public.localhost and + private.localhost that you added to the file + /etc/hosts in a previous step + (see ). 
+ The users include: + + + the router user, + to whom all requests to connect to an OpenSRF service will be + routed; + + + the opensrf user, + which clients use to connect to OpenSRF services (you may name + the user anything you like, but we use + opensrf in these examples) + + + As the root user, execute the + ejabberdctl utility as shown below to register and create passwords + for the users router and + opensrf on each domain (remember to replace + NEWPASSWORD with the appropriate password): + + + # as the root user: + # Note: the syntax for registering a user with ejabberdctl is: + # ejabberdctl register USER DOMAIN PASSWORD + ejabberdctl register router private.localhost NEWPASSWORD + ejabberdctl register router public.localhost NEWPASSWORD + ejabberdctl register opensrf private.localhost NEWPASSWORD + ejabberdctl register opensrf public.localhost NEWPASSWORD + + Note that the users router and + opensrf and their respective passwords + will be used again in when + we modify the OpenSRF configuration file /openils/conf/opensrf_core.xml . + + + Create OpenSRF configuration files + As the opensrf user, + execute the following commands to create the new configuration files + /openils/conf/opensrf_core.xml and + /openils/conf/opensrf.xml from the example templates: + + + # as the opensrf user: + cd /openils/conf + cp opensrf.xml.example opensrf.xml + cp opensrf_core.xml.example opensrf_core.xml + + + + Update usernames and passwords in the OpenSRF configuration file + As the opensrf user, edit the + OpenSRF configuration file /openils/conf/opensrf_core.xml + and update the usernames and passwords to match the values shown in the + following table. The left-hand side of + shows common XPath syntax to indicate the approximate position within the XML + file that needs changes. 
The right-hand side of the table shows the replacement + values: + + + Sample XPath syntax for editing "opensrf_core.xml" + + + + + + XPath location + Value + + + + + /config/opensrf/username + + opensrf + + + + /config/opensrf/passwd + private.localhost + password for + opensrf user + + + + /config/gateway/username + + opensrf + + + + /config/gateway/passwd + public.localhost + password for + opensrf user + + + + /config/routers/router/transport/username, + first entry where server == public.localhost + + router + + + + /config/routers/router/transport/password, + first entry where server == public.localhost + public.localhost + password for + router user + + + + /config/routers/router/transport/username, + second entry where server == private.localhost + + router + + + + /config/routers/router/transport/password, + second entry where server == private.localhost + private.localhost + password for + router user + + + + +
+ You may also need to modify the file to specify the domains from which + OpenSRF will accept connections, + and to which it will make connections. + If you are installing OpenSRF on a single server + and using the private.localhost and + public.localhost domains, + these will already be set to the correct values. Otherwise, search and replace + to match values for your own systems. +
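If you would rather script the password changes than edit opensrf_core.xml by hand, a blunt text substitution can work. This is a sketch under stated assumptions, not part of the official procedure: it presumes the file still contains a known placeholder string inside each <passwd> element, it is not XML-aware, and it assumes GNU sed for the -i.BAK option, so review the result afterwards.

```shell
# Hypothetical helper: replace PLACEHOLDER inside every <passwd> element of
# FILE with VALUE, keeping a .BAK copy of the original. Not XML-aware, and
# GNU sed is assumed for -i.BAK.
# Usage: set_passwd FILE PLACEHOLDER VALUE
set_passwd() {
    sed -i.BAK "s|<passwd>$2</passwd>|<passwd>$3</passwd>|g" "$1"
}

# e.g. set_passwd /openils/conf/opensrf_core.xml password MyNewJabberPassword
```

Because the gateway and router entries use different passwords, run it once per placeholder value, or fall back to hand-editing when the placeholders are not distinct.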
+ + Set location of the persistent database + As the opensrf user, edit the + file /openils/conf/opensrf.xml, then find and modify the + element dbfile (near the end of the file) to set the + location of the persistent database. Change the default line: + /openils/var/persist.db + to instead read: + /tmp/persist.db + Following is a sample modification of that portion of the file: + + + + /tmp/persist.db + + +]]> + + + Create configuration files for users needing <command>srfsh</command> + In this section you will set up a special configuration file for each user + who will need to run the srfsh (pronounced surf + shell) utility. + + srfsh + + The software installation will automatically create the utility + srfsh (surf shell), a command line diagnostic tool for + testing and interacting with OpenSRF. It will be used + in a future step to complete and test the Evergreen installation. See + for further information. + As the root user, copy the + sample configuration file /openils/conf/srfsh.xml.example + to the home directory of each user who will use srfsh. + For instance, do the following for the + opensrf user: + + + # as the root user: + cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml + + Edit each user's file ~/.srfsh.xml and make the + following changes: + + + Modify domain to be the router hostname + (following our domain examples, + private.localhost will give + srfsh access to all OpenSRF services, while + public.localhost + will only allow access to those OpenSRF services that are + publicly exposed). 
+ + + Modify username and + password to match the + opensrf Jabber user for the chosen + domain + + + Modify logfile to be the full path for + a log file to which the user has write access + + + Modify loglevel as needed for testing + + + Change the owner of the file to match the owner of the home directory + + + Following is a sample of the file: + + + + +router +private.localhost +opensrf +SOMEPASSWORD +5222 +/tmp/srfsh.log + +4 + +]]> + + + Modify the environmental variable <envar>PATH</envar> for the + <systemitem class="username">opensrf</systemitem> user + As the opensrf user, modify the + environmental variable PATH by adding a new file path to the + opensrf user's shell configuration + file ~/.bashrc: + + + # as the opensrf user: + echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc + + + + Start OpenSRF + As the root user, start the + ejabberd and + memcached services: + + + # as the root user: + /etc/init.d/ejabberd start + /etc/init.d/memcached start + + As the opensrf user, + start OpenSRF as follows: + + + # as the opensrf user: + osrf_ctl.sh -l -a start_all + + The flag forces Evergreen to use + localhost (your current system) + as the hostname. The flag starts the other + OpenSRF router , + Perl , and + C services. + + + You can also start Evergreen without the + flag, but the osrf_ctl.sh + utility must know the fully qualified domain name for the system + on which it will execute. That hostname was probably specified + in the configuration file opensrf.xml which + you configured in a previous step. + + + If you receive an error message similar to + osrf_ctl.sh: command not found, then your + environment variable PATH does not include the + directory /openils/bin. 
+ As the opensrf user, + edit the configuration file ~/.bashrc and + add the following line: + export PATH=$PATH:/openils/bin + + + + + Test connections to OpenSRF + Once you have installed and started OpenSRF, as the + root user, test your connection to + OpenSRF using the srfsh + utility and trying to call the add method on the OpenSRF + math service: + + + # as the root user: + /openils/bin/srfsh + + srfsh# request opensrf.math add 2 2 + + Received Data: 4 + ------------------------------------ + Request Completed Successfully + Request Time in seconds: 0.007519 + ------------------------------------ + + For other srfsh commands, type in + help at the prompt. + + + Stop OpenSRF + After OpenSRF has started, you can stop it at any time by using the + osrf_ctl.sh again. As the + opensrf + user, stop OpenSRF as follows: + + + # as the opensrf user: + osrf_ctl.sh -l -a stop_all + + +
+
+
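Before starting on the Evergreen installation, one last read-only check that the OpenSRF processes are actually running can save time. This is only a convenience sketch; process names vary between releases, so treat an empty result as a prompt to re-run the srfsh test above rather than as proof of failure.

```shell
# Read-only check: list any OpenSRF-related processes (the router plus the
# Perl and C listeners). Prints a notice instead of failing when none are up.
ps -ef | grep -i '[o]pensrf' || echo 'no OpenSRF processes found'
```

The `[o]` in the pattern keeps grep from matching its own command line.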
+ Installing Evergreen 2.0 On <systemitem class="osname">Ubuntu</systemitem> or + <systemitem class="osname">Debian</systemitem> + + Linux + Debian + + + Linux + Ubuntu + + This section outlines the installation process for the latest stable version of + Evergreen. + In this section you will download, unpack, install, configure and test the Evergreen + system, including the Evergreen server and the PostgreSQL database system. You will make several + configuration changes and adjustments to the software, including updates to configure the system + for your own locale, and some updates needed to work around a few known issues. + + The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) + architectures. There may be differences between the Desktop and Server editions of + Ubuntu. These instructions assume the Server + edition. + In the following instructions, you are asked to perform certain steps as + either the root user, the + opensrf user, or the + postgres user. + + + Debian -- To become the + root user, issue the command + su - and enter the password of the + root user. + + + Ubuntu -- To become the + root user, issue the command + sudo su - and enter the password of the + root user. + + + To switch from the root user to a + different user, issue the command su - USERNAME. For example, to + switch from the root user to the + opensrf user, issue the command + su - opensrf. Once you have become a non-root user, to become the + root user again, simply issue the command + exit. + + + + Install OpenSRF + Evergreen software is integrated with and depends on the Open Service + Request Framework (OpenSRF) software system. For further information on + installing, configuring and testing OpenSRF, see + . + Follow the steps outlined in that section and run the specified tests to + ensure that OpenSRF is properly installed and configured. 
Do
+ not continue with
+ any further Evergreen installation steps until you have verified that OpenSRF
+ has been successfully installed and tested.
+ 
+ 
+ Download and Unpack Latest Evergreen Version
+ The latest version of Evergreen can be found here:
+ .
+ As the opensrf user, change to
+ the directory /home/opensrf, then download
+ and extract Evergreen. The new subdirectory
+ /home/opensrf/Evergreen-ILS-2.0.1 will be created:
+ 
+ 
+ # as the opensrf user:
+ cd /home/opensrf
+ wget http://evergreen-ils.org/downloads/Evergreen-ILS-2.0.1.tar.gz
+ tar zxf Evergreen-ILS-2.0.1.tar.gz
+ 
+ 
+ 
+ Install Prerequisites to Build Evergreen
+ In this section you will install and configure a set of prerequisites that will be
+ used later in and
+ to build the Evergreen software
+ using the make utility.
+ As the root user, enter the commands shown
+ below to build the prerequisites from the software distribution that you just downloaded
+ and unpacked. Remember to replace [DISTRIBUTION] in the following
+ example with the keyword corresponding to the name of one of the
+ Linux distributions listed in the following
+ distribution keywords table.
+ For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would
+ enter this command: make -f Open-ILS/src/extras/Makefile.install
+ ubuntu-lucid.
+ 
+ 
+ # as the root user:
+ cd /home/opensrf/Evergreen-ILS-2.0.1
+ make -f Open-ILS/src/extras/Makefile.install [DISTRIBUTION]
+ 
+ 
+ 
+ 
+ for Debian Squeeze (6.0)
+ 
+ 
+ 
+ for Ubuntu Lucid Lynx
+ (10.04)
+ 
+ 
+ 
+ 
+ (OPTIONAL) Install the PostgreSQL Server
+ 
+ databases
+ PostgreSQL
+ 
+ Since the PostgreSQL server is usually a standalone server in multi-server
+ production systems, the prerequisite installer Makefile in the previous section
+ (see )
+ does not automatically install PostgreSQL. You must install the PostgreSQL server
+ yourself, either on the same system as Evergreen itself or on another system.
+ If your PostgreSQL server is on a different system, just skip this step.
+ If your PostgreSQL server will be on the same system as your Evergreen
+ software, you can install the required PostgreSQL server packages as described
+ in , or you can visit the official
+ web site http://www.postgresql.org
+ for more information.
+ 
+ PostgreSQL version 8.4 is the minimum supported version to work
+ with Evergreen 2.0. If you have an older version of PostgreSQL,
+ you should upgrade before installing Evergreen. To find your current version
+ of PostgreSQL, as the postgres
+ user execute the command psql, then type
+ SELECT version(); to get detailed information
+ about your version of PostgreSQL.
+ 
+ 
+ 
+ Install Perl Modules on PostgreSQL Server
+ If PostgreSQL is running on the same system as your Evergreen software,
+ then the Perl modules will automatically be available. Just skip this step.
+ Otherwise, continue if your PostgreSQL server is running on another system.
+ You will need to install several Perl modules on the other system. As the
+ root user, install the following Perl
+ modules:
+ As the root user, ensure the gcc compiler is installed:
+ 
+aptitude install gcc libxml-libxml-perl libxml-libxslt-perl
+ 
+ then install the Perl modules:
+ 
+perl -MCPAN -e shell
+cpan> install Business::ISBN
+cpan> install JSON::XS
+cpan> install Library::CallNumber::LC
+cpan> install MARC::Record
+cpan> install MARC::File::XML
+cpan> install UUID::Tiny
+ 
+ For more information on installing Perl modules, visit the official
+ CPAN site.
+ 
+ Perl
+ CPAN
+ 
+ 
+ 
+ Update the System Dynamic Library Path
+ You must update the system dynamic library path to force your system to recognize
+ the newly installed libraries.
As the root user,
+ do this by creating the new file /etc/ld.so.conf.d/osrf.conf
+ containing a new library path, then run the command ldconfig to
+ automatically read the file and modify the system dynamic library path:
+ 
+ 
+ # as the root user:
+ echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf
+ echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf
+ ldconfig
+ 
+ 
+ 
+ Restart the PostgreSQL Server
+ If PostgreSQL is running on the same system as the rest of Evergreen, as
+ the root user you must restart
+ PostgreSQL to re-read the new library paths just configured. If PostgreSQL is
+ running on another system, you may skip this step.
+ As the root user,
+ execute the following command (remember to replace
+ PGSQL_VERSION with your installed PostgreSQL version,
+ for example 8.4):
+ 
+ 
+ # as the root user:
+ /etc/init.d/postgresql-PGSQL_VERSION restart
+ 
+ 
+ 
+ Configure Evergreen
+ In this step you will use the configure and
+ make utilities to configure Evergreen so it can be compiled
+ and linked later in .
+ As the opensrf user, return to
+ the Evergreen build directory and execute these commands:
+ 
+ 
+ # as the opensrf user:
+ cd /home/opensrf/Evergreen-ILS-2.0.1
+ ./configure --prefix=/openils --sysconfdir=/openils/conf
+ make
+ 
+ 
+ 
+ Compile, Link and Install Evergreen
+ In this step you will actually compile, link and install Evergreen and the
+ default Evergreen Staff Client.
+ As the root user, return to the
+ Evergreen build directory and use the make utility as shown below:
+ 
+ 
+ # as the root user:
+ cd /home/opensrf/Evergreen-ILS-2.0.1
+ make STAFF_CLIENT_BUILD_ID=rel_2_0_1 install
+ 
+ The Staff Client will also be automatically built, but you must remember
+ to set the variable STAFF_CLIENT_BUILD_ID to match the version of the
+ Staff Client you will use to connect to the Evergreen server.
+ The above commands will create a new subdirectory
+ /openils/var/web/xul/rel_2_0_1
+ containing the Staff Client.
+ To complete the Staff Client installation, as the + root user execute the following commands to + create a symbolic link named server in the head of the Staff Client + directory /openils/var/web/xul that points to the + subdirectory /server of the new Staff Client + build: + + + # as the root user: + cd /openils/var/web/xul + ln -sf rel_2_0_1/server server + + + + Copy the OpenSRF Configuration Files + In this step you will replace some OpenSRF configuration files that you set up in + when you installed and + tested OpenSRF. + You must copy several example OpenSRF configuration files into place after first + creating backup copies for troubleshooting purposes, then change all the file ownerships + to opensrf. + As the root user, execute the following + commands: + + + # as the root user: + cd /openils/conf + cp opensrf.xml opensrf.xml.BAK + cp opensrf_core.xml opensrf_core.xml.BAK + cp opensrf.xml.example opensrf.xml + cp opensrf_core.xml.example opensrf_core.xml + cp oils_web.xml.example oils_web.xml + chown -R opensrf:opensrf /openils/ + + + + Create and Configure PostgreSQL Database + + databases + PostgreSQL + + In this step you will create the Evergreen database. In the commands + below, remember to adjust the path of the contrib + repository to match your PostgreSQL server + layout. For example, if you built PostgreSQL from source the path would be + /usr/local/share/contrib , and if you + installed the PostgreSQL 8.4 server packages on Ubuntu, + the path would be + /usr/share/postgresql/8.4/contrib/ . 
+ + + + Create and configure the database + + As the postgres + user on the PostgreSQL system create the PostgreSQL database, + then set some internal paths: + + + # as the postgres user: + createdb evergreen -E UTF8 -T template0 + createlang plperl evergreen + createlang plperlu evergreen + createlang plpgsql evergreen + + Continue as the postgres user + and execute the SQL scripts as shown below (remember to adjust the paths as needed, + where PGSQL_VERSION is your installed PostgreSQL + version, for example 8.4). + + + # as the postgres user: + psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen + psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tsearch2.sql evergreen + psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/pgxml.sql evergreen + + + + Create <systemitem class="username">evergreen</systemitem> PostgreSQL user + As the postgres + user on the PostgreSQL system, create a new PostgreSQL user + named evergreen and + assign a password (remember to replace NEWPASSWORD + with an appropriate new password): + + + # as the postgres user: + createuser -P -s evergreen + + Enter password for new role: NEWPASSWORD + Enter it again: NEWPASSWORD + + + + Create database schema + In this step you will create the database schema and configure your + system with the corresponding database authentication details for the + evergreen database user that you just created in + . + As the root user, enter + the following commands and replace HOSTNAME, PORT, + PASSWORD and DATABASENAME with appropriate + values: + + + # as the root user: + cd /home/opensrf/Evergreen-ILS-2.0.1 + perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \ + --service all --create-schema --create-offline \ + --hostname HOSTNAME --port PORT \ + --user evergreen --password PASSWORD --database DATABASENAME + + On most systems, HOSTNAME will be + localhost and + PORT will be 5432. 
+ Of course, values for PASSWORD and + DATABASENAME must match the values you used in + . + As the command executes, you may see warnings similar to: + ERROR: schema SOMENAME does not exist (in fact, + you may see one warning per schema) but they can be safely ignored. + If you are entering the above command on a single line, do not + include the \ (backslash) characters. If you are using + the bash shell, these should only be used at the end of + a line at a bash prompt to indicate that the command is + continued on the next line. + + + + + Configure the Apache web server + + web server + Apache + + In this step you will configure the Apache web server to support Evergreen + software. + First, you must enable some built-in Apache modules and install some + additional Apache configuration files. Then you will create a new Security + Certificate. Finally, you must make several changes to the Apache configuration + file. + + + Enable the required Apache Modules + As the root + user, enable some modules in the Apache server, then copy the + new configuration files to the Apache server directories: + + Apache modules + + + + # as the root user: + a2enmod ssl # enable mod_ssl + a2enmod rewrite # enable mod_rewrite + a2enmod expires # enable mod_expires + + As the commands execute, you may see warnings similar to: + Module SOMEMODULE already enabled but you can + safely ignore them. + + + Copy Apache configuration files + You must copy the Apache configuration files from the + Evergreen installation directory to the Apache directory. 
As the + root user, perform the + following commands: + + + # as the root user: + cd /home/opensrf/Evergreen-ILS-2.0.1 + cp Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/ + cp Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/ + cp Open-ILS/examples/apache/startup.pl /etc/apache2/ + + + + Create a Security Certificate + In this step you will create a new Security Certificate (SSL Key) + for the Apache server using the openssl command. For a + public production server you must configure or purchase a signed SSL + certificate, but for now you can just use a self-signed certificate and + accept the warnings in the Staff Client and browser during testing and + development. As the root user, + perform the following commands: + + + # as the root user: + mkdir /etc/apache2/ssl + cd /etc/apache2/ssl + openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key + + You will be prompted for several items of information; enter + the appropriate information for each item. The new files + server.crt and server.key will + be created in the directory + /etc/apache2/ssl . + This step generates a self-signed SSL certificate. You must install + a proper SSL certificate for a public production system to avoid warning + messages when users login to their account through the OPAC or when staff + login through the Staff Client. For further information on + installing a proper SSL certificate, see + . + + + Update Apache configuration file + You must make several changes to the new Apache + configuration file + /etc/apache2/sites-available/eg.conf . + As the root user, + edit the file and make the following changes: + + + In the section + <Directory "/openils/var/cgi-bin"> + replace the line: + Allow from 10.0.0.0/8 + with the line: + Allow from all + This change allows access to your configuration + CGI scripts from any workstation on any network. 
This is + only a temporary change to expedite testing and should be + removed after you have finished and successfully tested + the Evergreen installation. See + + for further details on removing this change after the + Evergreen installation is complete. + + + + Comment out the line: + Listen 443 + since it conflicts with the same declaration in + the configuration file: + /etc/apache2/ports.conf. + + + The following updates are needed to allow the logs + to function properly, but it may break other Apache + applications on your server: + + Edit the Apache configuration file and change the lines: + + + export APACHE_RUN_USER=www-data + export APACHE_RUN_GROUP=www-data + + to instead read: + + + export APACHE_RUN_USER=opensrf + export APACHE_RUN_GROUP=opensrf + + + + As the + root user, + edit the Apache configuration file + /etc/apache2/apache2.conf and + modify the value for KeepAliveTimeout + and MaxKeepAliveRequests to match + the following: + + + KeepAliveTimeout 1 + MaxKeepAliveRequests 100 + + + + Further configuration changes to Apache may be + necessary for busy systems. These changes increase the + number of Apache server processes that are started to + support additional browser connections. + As the + root user, + edit the Apache configuration file + /etc/apache2/apache2.conf, locate + and modify the section related to prefork + configuration to suit the load on your + system: + + StartServers 20 + MinSpareServers 5 + MaxSpareServers 15 + MaxClients 150 + MaxRequestsPerChild 10000 + +]]> + + + + + Enable the Evergreen web site + Finally, you must enable the Evergreen web site. 
As the + root user, execute the + following Apache configuration commands to disable the default + It Works web page and enable the Evergreen + web site, and then restart the Apache server: + + + # as the root user: + # disable/enable web sites + a2dissite default + a2ensite eg.conf + # restart the server + /etc/init.d/apache2 reload + + + + + + Update the OpenSRF Configuration File + As the opensrf user, edit the + OpenSRF configuration file /openils/conf/opensrf_core.xml + to update the Jabber usernames and passwords, and to specify the domain from + which we will accept and to which we will make connections. + If you are installing Evergreen on a single server and using the + private.localhost / + public.localhost domains, + these will already be set to the correct values. Otherwise, search and replace + to match your customized values. + The left-hand side of + shows common XPath syntax to indicate the approximate position within the XML + file that needs changes. The right-hand side of the table shows the replacement + values: + + + Sample XPath syntax for editing "opensrf_core.xml" + + + + + + XPath location + Value + + + + + /config/opensrf/username + + opensrf + + + + /config/opensrf/passwd + private.localhost + password for + opensrf user + + + + /config/gateway/username + + opensrf + + + + /config/gateway/passwd + public.localhost + password for + opensrf user + + + + /config/routers/router/transport/username, + first entry where server == public.localhost + + router + + + + /config/routers/router/transport/password, + first entry where server == public.localhost + public.localhost + password for + router user + + + + /config/routers/router/transport/username, + second entry where server == private.localhost + + router + + + + /config/routers/router/transport/password, + second entry where server == private.localhost + private.localhost + password for + router user + + + + +
+
+ + (OPTIONAL) Create Configuration Files for Users Needing <command>srfsh</command> + When OpenSRF was installed in , the + software installation automatically created a utility named srfsh (surf + shell). This is a command line diagnostic tool for testing and interacting with + OpenSRF. It will be used in a future step to complete and test the Evergreen installation. + Earlier in you also created a configuration + file ~/.srfsh.xml for each user that might need to use the utility. + See for further information. + + + Modify the OpenSRF Environment + In this step you will make some minor modifications to the OpenSRF environment: + + + As the opensrf user, + modify the shell configuration file ~/.bashrc for + user opensrf by adding a Perl + environmental variable, then execute the shell configuration file to load + the new variables into your current environment. + In a multi-server environment, you must add any + modifications to ~/.bashrc to the top of the file + before the line [ -z "$PS1" ] && + return . This will allow headless (scripted) logins to load the + correct environment. + + + # as the opensrf user: + echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc + . ~/.bashrc + + + + + + (OPTIONAL) Enable and Disable Language Localizations + You can load translations such as Armenian (hy-AM), Canadian French + (fr-CA), and others into the database to complete the translations available in + the OPAC and Staff Client. For further information, see + . + +
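The rule from the Modify the OpenSRF Environment step above — add the export before the [ -z "$PS1" ] && return guard so scripted logins still load it — can be automated. A sketch assuming GNU sed, shown against a throwaway file rather than the real ~/.bashrc:

```shell
# Insert the PERL5LIB export immediately before the interactive-shell guard,
# so non-interactive (scripted) logins also pick it up.
rc=$(mktemp)
printf '%s\n' '[ -z "$PS1" ] && return' 'alias ll="ls -l"' > "$rc"
sed -i '/\[ -z "\$PS1" \] && return/i export PERL5LIB=/openils/lib/perl5:$PERL5LIB' "$rc"
head -n 1 "$rc"   # prints the newly inserted export line
```

Point rc at the real ~/.bashrc only after testing, and keep a backup copy of the file first.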
+
+
+ Starting Evergreen + In this section you will learn how to start the Evergreen services. + For completeness, instructions for stopping Evergreen can be found later in + . + + + As the root + user, start the ejabberd and + memcached services as follows: + + + # as the root user: + /etc/init.d/ejabberd start + /etc/init.d/memcached start + + + + As the opensrf user, + start Evergreen as follows: + + + # as the opensrf user: + osrf_ctl.sh -l -a start_all + + The flag forces Evergreen to use + localhost (your current system) + as the hostname. The flag starts the other + OpenSRF router , + Perl , and + C services. + + + You can also start Evergreen without the + flag, but the osrf_ctl.sh + utility must know the fully qualified domain name for the system + on which it will execute. That hostname was probably specified + in the configuration file opensrf.xml which + you configured in a previous step. + + + If you receive an error message similar to + osrf_ctl.sh: command not found, then your + environment variable PATH does not include the + directory /openils/bin. + As the opensrf user, + edit the configuration file ~/.bashrc and + add the following line: + export PATH=$PATH:/openils/bin + + + If you receive an error message similar to Can't + locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation + aborted, then your environment variable + PERL5LIB does not include the + directory /openils/lib/perl5. + As the opensrf user, + edit the configuration file ~/.bashrc and + add the following line: + export PERL5LIB=$PERL5LIB:/openils/lib/perl5 + + + + + In this step you will generate the Web files needed by the Staff Client + and catalog, and update the proximity of locations in the Organizational Unit + tree (which allows Holds to work properly). + You must do this the first time you start Evergreen and after making any + changes to the library hierarchy. 
+ As the opensrf user, execute the + following command and review the results: + + + # as the opensrf user: + cd /openils/bin + ./autogen.sh -c /openils/conf/opensrf_core.xml -u + + Updating Evergreen organization tree and IDL using '/openils/conf/opensrf_core.xml' + Updating fieldmapper + Updating web_fieldmapper + Updating OrgTree + removing OrgTree from the cache for locale hy-AM... + removing OrgTree from the cache for locale cs-CZ... + removing OrgTree from the cache for locale en-CA... + removing OrgTree from the cache for locale en-US... + removing OrgTree from the cache for locale fr-CA... + removing OrgTree from the cache for locale ru-RU... + Updating OrgTree HTML + Updating locales selection HTML + Updating Search Groups + Refreshing proximity of org units + Successfully updated the organization proximity + Done + + + + As the root user, restart the + Apache Web server: + + + # as the root user: + /etc/init.d/apache2 restart + + If the Apache Web server was running when you started the OpenSRF + services, you might not be able to successfully log into the OPAC or Staff + Client until the Apache Web server has been restarted. + + +
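If you ever need to start the services individually instead of using start_all, the required order described earlier (router first, then Perl, then C) can be wrapped in a small helper that stops at the first failure. In this sketch, echo stands in for osrf_ctl.sh so the sequence can be exercised without a live Evergreen server:

```shell
# Start the OpenSRF services in the required order; abort on the first failure.
start_in_order() {
  cmd=$1; shift
  for action in start_router start_perl start_c; do
    "$cmd" "$@" -l -a "$action" || return 1
  done
}
# Dry run: print the commands instead of executing osrf_ctl.sh.
start_in_order echo osrf_ctl.sh
```

On a real server you would call start_in_order osrf_ctl.sh (as the opensrf user) so that a failed router startup prevents the Perl and C services from starting against a missing router.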
+
+ Testing Your Evergreen Installation + This section describes several simple tests you can perform to verify that the Evergreen + server-side software has been installed and configured properly and is running as + expected. + + Testing Connections to Evergreen + Once you have installed and started Evergreen, test your connection to Evergreen. Start the + srfsh application and try logging onto the Evergreen server using the default + administrator username and password. Following is sample output generated by executing + srfsh after a successful Evergreen installation. For help with + srfsh commands, type help at the prompt. + As the opensrf user, + execute the following commands to test your Evergreen connection: + + + # as the opensrf user: + /openils/bin/srfsh + + srfsh% login admin open-ils + Received Data: "250bf1518c7527a03249858687714376" + ------------------------------------ + Request Completed Successfully + Request Time in seconds: 0.045286 + ------------------------------------ + Received Data: { + "ilsevent":0, + "textcode":"SUCCESS", + "desc":" ", + "pid":21616, + "stacktrace":"oils_auth.c:304", + "payload":{ + "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a", + "authtime":420 + } + } + ------------------------------------ + Request Completed Successfully + Request Time in seconds: 1.336568 + ------------------------------------ + + If this does not work, try the following: + + + As the opensrf user, run the + settings-tester.pl utility to review your Evergreen + installation for any system configuration problems: + + + # as the opensrf user: + cd /home/opensrf + ./Evergreen-ILS-2.0.1/Open-ILS/src/support-scripts/settings-tester.pl + + If the output of settings-tester.pl does not help you + find the problem, please do not make any significant changes to your + configuration. + + + Follow the steps in the troubleshooting guide in + . 
If you have followed the entire set of installation steps listed here closely, you are probably extremely close to a working system. Gather your configuration files and log files and contact the Evergreen Development Mailing List for assistance before making any drastic changes to your system configuration.

Testing the Staff Client on Linux
In this section you will confirm that a basic login on the Staff Client works properly.
Run the Evergreen Staff Client on a Linux system by using the application XULRunner (installed automatically and by default with Firefox version 3.0 and later on Ubuntu and Debian distributions).
As the root user, start the Staff Client as shown:
# as the root user:
xulrunner /home/opensrf/Evergreen-ILS-2.0.1/Open-ILS/xul/staff_client/build/application.ini
A login screen for the Staff Client similar to this should appear:
Logging into the Staff Client
First, add the name of your Evergreen server to the Hostname field in the Server section. You will probably want to use 127.0.0.1. After adding the server name, click Re-Test Server. You should now see the messages 200:OK in the Status and Version fields.
Because this is the initial run of the Staff Client, you will see a warning in the upper-right corner saying: Not yet configured for the specified server. To continue, you must assign a workstation name. Refer to for further details.
Try to log into the Staff Client with the username admin and the password open-ils. If the login is successful, you will see the following screen:
Logging into the Staff Client
Otherwise, you may need to click 'Add SSL Exception' in the main window. You should see a popup window titled Add Security Exception:
Adding an SSL Exception in the Staff Client
Click 'Get Certificate', then click 'Confirm Security Exception', then click 'Re-Test Server' in the main window, and try to log in again.
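When automating the srfsh smoke test shown earlier in this section, the session token can be pulled out of the captured output with standard text tools. A sketch against the sample JSON line from the srfsh transcript above:

```shell
# Extract the authtoken field from captured srfsh login output.
sample='"authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a",'
token=$(printf '%s\n' "$sample" | grep -o '"authtoken":"[^"]*"' | cut -d'"' -f4)
echo "$token"   # prints: e5f9827cc0f93b503a1cc66bee6bdd1a
```

A monitoring script can treat an empty $token as a failed login and alert accordingly.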
+ + + Testing the Apache Web Server + In this section you will test the Apache configuration file(s), then restart the + Apache web server. + As the root user, execute the following + commands. Note the use of restart to force the new Evergreen + modules to be reloaded even if the Apache server is already running. Any problems found + with your configuration files should be displayed: + + + # as the root user: + apache2ctl configtest && /etc/init.d/apache2 restart + + + + Stopping Evergreen + In you learned how to start the + Evergreen services. For completeness, following are instructions for stopping the + Evergreen services. + As the opensrf user, stop all Evergreen + services by using the following command: + + + # as the opensrf user + # stop the server; use "-l" to force hostname to be "localhost" + osrf_ctl.sh -l -a stop_all + + You can also stop Evergreen services without the + flag, but the osrf_ctl.sh utility must know the + fully qualified domain name for the system on which it will execute. That hostname may + have been specified in the configuration file opensrf.xml, which + you configured in a previous step. + +
+
+ Post-Installation Chores + There are several additional steps you may need to complete after Evergreen has been + successfully installed and tested. Some steps may not be needed (e.g., setting up support for + Reports). +
+ Remove temporary Apache configuration changes + You modified the Apache configuration file + /etc/apache2/sites-available/eg.conf in an earlier step as a + temporary measure to expedite testing (see + for further information). + Those changes must now be reversed in order to deny unwanted access to your + CGI scripts from users on other public networks. + + + This temporary network update was done to expedite + testing. You must correct + this for a public production system. + + + As the root user, edit the configuration + file again and comment out the line Allow from all and uncomment the + line Allow from 10.0.0.0/8, then change it to match your network + address scheme. +
+
+ Configure a permanent SSL key + You used the command openssl in an earlier step to + temporarily create a new SSL key for the Apache server (see + for further + information). This self-signed security certificate was adequate during + testing and development, but will continue to generate warnings in the Staff + Client and browser. For a public production server you should configure or + purchase a signed SSL certificate. + There are several open source software solutions that provide schemes to + generate and maintain public key security certificates for your library + system. Some popular projects are listed below; please review them for + background information on why you need such a system and how you can provide + it: + + + http://www.openca.org/projects/openca/ + + + http://sourceforge.net/projects/ejbca/ + + + http://pki.fedoraproject.org + + + + + The temporary SSL key was only created to expedite + testing. You should install a proper SSL certificate for a public + production system. + + +
+
(OPTIONAL) IP-Redirection
By default, Evergreen is configured so that searching the OPAC always starts in the top-level (regional) library rather than in a second-level (branch) library. Instead, you can use "IP-Redirection" to change the default OPAC search location to use the IP address range assigned to the second-level library where the search originates. You must configure these IP ranges by creating the configuration file /openils/conf/lib_ips.txt and modifying the Apache startup script /etc/apache2/startup.pl.
First, copy the sample file /home/opensrf/Evergreen-ILS-1.6.1.2/Open-ILS/examples/lib_ips.txt.example to /openils/conf/lib_ips.txt. The example file contains the single line: "MY-LIB 127.0.0.1 127.0.0.254". You must modify the file to use the IP address ranges for your library system. Add a new line for the IP address range of each branch library, replacing the MY-LIB values with the corresponding values for that branch found in the table actor.org_unit.
Finally, modify the Apache startup script /etc/apache2/startup.pl by uncommenting two lines as shown, then restart the Apache server:
# - e.g. LIB123 10.0.0.1 10.0.0.254
use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml);
OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');
]]>
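Since a malformed lib_ips.txt line can silently misroute OPAC searches, it helps to sanity-check the file format after editing. A sketch (awk assumed) that only verifies each line has the three expected fields — library code, range start IP, range end IP — using inline sample lines in place of the real file:

```shell
# Count lib_ips.txt-style lines that do NOT have exactly three fields.
# A result of 0 means the format looks sane.
printf '%s\n' 'MY-LIB 127.0.0.1 127.0.0.254' 'BR1 10.0.0.1 10.0.0.254' |
  awk 'NF != 3 { bad++ } END { print bad + 0 }'
```

In practice you would feed the check with awk 'NF != 3 ...' /openils/conf/lib_ips.txt before restarting Apache.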
+
+ (OPTIONAL) Set Up Support For Reports + Evergreen reports are extremely powerful but require some simple configuration. + See for information on starting and + stopping the Reporter daemon processes. +
+
+
+
diff --git a/2.0/admin/sip.xml b/2.0/admin/sip.xml
new file mode 100644
index 0000000..7f670c9
--- /dev/null
+++ b/2.0/admin/sip.xml
@@ -0,0 +1,595 @@
SIP Server
SIP, the Standard Interchange Protocol, was developed by the 3M Corporation as a common protocol for data transfer between an ILS (referred to in SIP as an ACS, or Automated Circulation System) and a third-party device. The protocol was originally developed for use with 3M SelfCheck systems (often abbreviated SC, not to be confused with Staff Client), but has since spread to other companies and devices. It is now common to find SIP in use in several other vendors' SelfCheck systems, as well as in other, non-SelfCheck devices. Some examples include:
Patron authentication (computer access, subscription databases)
Automated Material Handling (AMH) - the automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria
+ + Installing the <systemitem class="protocol">SIP</systemitem> Server + + This is a rough intro to installing the SIP server for Evergreen. + + Getting the code + Current SIP code lives at github: + cd /opt + git clone git://github.com/atz/SIPServer.git SIPServer + Or use the old style: + $ cd /opt + $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login + When prompted for the CVS password, just hit Enter (sudo password may be req'd) + $ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer + + + + Configuring the Server + + + Type the following commands from the command prompt:configuration filesoils_sip.xml + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + + + Edit oils_sip.xmlconfiguration filesoils_sip.xml. + Change the commented out <server-params> section to this: + +<server-params +min_servers='1' +min_spare_servers='0' +max_servers='25' +/> + + + + max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but + bear in mind that too many connections can + exhaust memory. On a 4G RAM/4 CPU server (that is also running evergreen), it is not recommended to exceed 100 + SIP client connections. + + + + + Adding <systemitem class="protocol">SIP</systemitem> Users + + + Type the following commands from the command prompt:configuration filesoils_sip.xml + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + + + in the <accounts> section, add SIP client login information. Make sure that all + <logins> use the same institution attribute, and make + sure the institution is listed in <institutions>. All attributes in the <login> section will be + used by the SIP client. + + + + In Evergreen, create a new profile group called SIP. + This group should be a sub-group of Users + (not Staff or Patrons). 
+ Set Editing Permission as group_application.user.sip_client and give the group the following permissions: + + COPY_CHECKIN + COPY_CHECKOUT + RENEW_CIRC + VIEW_CIRCULATIONS + VIEW_COPY_CHECKOUT_HISTORY + VIEW_PERMIT_CHECKOUT + VIEW_USER + VIEW_USER_FINES_SUMMARY + VIEW_USER_TRANSACTIONS + + OR use SQL like: + + +INSERT INTO permission.grp_tree (id,name,parent,description,application_perm) +VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client'); + +INSERT INTO permission.grp_perm_map (grp,perm,depth) +VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),(8,82,0); + + + + Verify: + + +SELECT * +FROM permission.grp_perm_map JOIN permission.perm_list ON +permission.grp_perm_map.perm=permission.perm_list.id +WHERE grp=8; + + + + Keep in mind that the id (8) may not necessarily be available on your system. + + + For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor) + that has the same username + and password and put that user into the SIP group. + The expiration date will affect the SIP users' connection so you might want to make a note of + this somewhere. + + + + + Running the server + To start the SIP server type the following commands from the command prompt: + $ sudo su opensrf + $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip + + + Logging-SIPSIP + + <systemitem class="service">Syslog</systemitem> + It is useful to log SIP requests to a separate file especially during initial setup by modifying your + syslog config file.syslog + + + Edit syslog.conf. + $ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf + + + Add this: + local6.* -/var/log/SIP_evergreen.log + + + Syslog expects the logfile to exist so create the file. + $ sudo touch /var/log/SIP_evergreen.log + + + Restart sysklogd. 
$ sudo /etc/init.d/sysklogd restart

Syslog-NG
Edit the logging configuration:
sudo vi /etc/syslog-ng/syslog-ng.conf
Add:
# SIP2 for Evergreen
filter f_eg_sip { level(warn, err, crit) and facility(local6); };
destination eg_sip { file("/var/log/SIP_evergreen.log"); };
log { source(s_all); filter(f_eg_sip); destination(eg_sip); };
Syslog-ng expects the logfile to exist, so create the file:
$ sudo touch /var/log/SIP_evergreen.log
Restart syslog-ng:
$ sudo /etc/init.d/syslog-ng restart

Testing Your <systemitem class="protocol">SIP</systemitem> Connection
In the top-level CVS checkout of the SIPServer code, change to the test directory:
$ cd SIPServer/t
Edit SIPtest.pm and change the $instid, $server, $username, and $password variables. This is enough to test connectivity. To run all tests, you will need to change all the variables in the Configuration section.
$ PERL5LIB=../ perl 00sc_status.t
This should produce something like:
1..4
ok 1 - Invalid username
ok 2 - Invalid username
ok 3 - login
ok 4 - SC status
Don't be dismayed at Invalid username; that is just one of the many tests that are run.

More Testing
Once you have opened up either the SIP or SIP2 port to be accessible from outside, you can do some testing via telnet. You can try this with localhost if you wish, but the point is to prove that SIP2 works from a non-localhost machine. Replace the $instid, $server, $barcode, $username, and $password variables below as necessary.
We are using port 6001 here, which is associated with SIP2 in our configuration:
$ telnet $server 6001
Connected to $server.
Escape character is '^]'.
9300CN**$username**|CO**$password**|CP**$instid**
You should get back:
941
Now paste in the following line (with the variables replaced); you do not need to press Enter:
2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password**
You will get back the patron information for $barcode (something similar to what's below):
24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY
|BHUSD|BV0.00|AFOK|AO**$instid**|
The response declares a valid patron (BLY) with a valid password (CQY) and shows the user's $name.
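A script consuming this telnet exchange would key off the BL (valid patron) and CQ (valid password) fields of the 24 response. A minimal sketch against a captured response line (the barcode, name, and institution values are placeholders):

```shell
# Classify a SIP2 "24 Patron Status" response by its BL/CQ fields.
resp='24              Y       00120100113    170738AEFirstName MiddleName LastName|AA12345|BLY|CQY|BHUSD|BV0.00|AFOK|AOMY-LIB|'
case "$resp" in
  *'|CQY|'*) echo 'patron and password OK' ;;
  *'|BLY|'*) echo 'patron OK, password not verified' ;;
  *)         echo 'invalid patron' ;;
esac
```

This mirrors the "check for |CQY|" rule discussed under 23/24 Patron Status later in this chapter: CQY implies BLY, since a password cannot be verified for a nonexistent barcode.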
+
<systemitem class="protocol">SIP</systemitem> Communication
SIP generally communicates over a TCP connection (either raw sockets or over telnet), but can also communicate via serial connections and other methods. In Evergreen, the most common deployment is a raw socket connection on port 6001.
SIP communication consists of strings of messages. Each request and response begins with a 2-digit command: requests usually carry an odd number, and the matching response is usually that number plus one. The request/response command pair is often referred to as a Message Pair (for example, a 23 command requests patron status, a 24 response reports patron status, and 23/24 is the patron status message pair). The table in the next section lists the message pairs with a description of each.
For clarification: the request travels from the device (selfcheck or otherwise) to the ILS/ACS, and the response travels back from the ILS/ACS to the device.
Within each request and response, a number of fields are used, either fixed-width or delimited with a | (pipe symbol) and preceded by a 2-character field identifier. The fields vary between message pairs.

Pair
Name
Supported?
Details

01
Block Patron
Yes
01_Block_Patron - ACS responds with 24 Patron Status Response

09/10
Checkin
Yes (with extensions)
09/10_Checkin

11/12
Checkout
Yes (no renewals)
11/12_Checkout

15/16
Hold
No
15/16_Hold

17/18
Item Information
Yes (no extensions)
17/18_Item_Information

19/20
Item Status Update
No
19/20_Item_Status_Update - Returns Patron Enable response, but doesn't make any changes in EG

23/24
Patron Status
Yes
23/24_Patron_Status - 63/64 Patron Information preferred

25/26
Patron Enable
No
25/26_Patron_Enable - Used during system testing and validation

29/30
Renew
NO (maybe?)
+ 29/30_Renew + + + 35/36 + End Session + Yes + 35/36_End_Session + + + 37/38 + Fee Paid + No + 37/38_Fee_Paid + + + 63/64 + Patron Information + Yes (no extensions) + 63/64_Patron_Information + + + 65/66 + Renew All + No + 65/66_Renew_All + + + 93/94 + Login + Yes + 93/94_Login - Must be first command to Evergreen ACS (via socket) or SIP will terminate + + + 97/96 + Resend last message + Yes + 97/96_Resend + + + 99/98 + SC/ACS Status + Yes + 99/98_SC_and_ACS_Status + + + + + + 01 Block Patron + A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as attempts + to disable multiple items during a single item checkout, multiple failed pin entries, etc).SelfCheck + In Evergreen, this command does the following: + + User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL + Blocked Card Message field). + Card is marked inactive. + + The request looks like: + 01<card retained><date>[fields AO, AL, AA, AC] + Card Retained: A single character field of Y or N - tells the ACS whether the SC has + retained the card (ex: left in the machine) or not. + Date: An 18 character field for the date/time when the block occurred. + Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, Z (3 blanks and a Z) represents UTC(GMT/Zulu) + Fields: See Fields for more details. + The response is a 24 Patron Status Response with the following: + + Charge privileges denied + Renewal privileges denied + Recall privileges denied (hard-coded in every 24 or 64 response) + hold privileges denied + Screen Message 1 (AF): blocked + Patron + + + + + 09/10 Checkin + The request looks like: + 09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI] + No Block (Offline): A single character field of Y or N - Offline transactions are not currently + supported so send N. + xact date: an 18 character field for the date/time when the checkin occurred. 
Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - + 4 blanks when local time, Z (3 blanks and a Z) represents UTC(GMT/Zulu) + Fields: See Fields for more details. + The response is a 10 Checkin Response with the following: + 10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG] + Example (with a remote hold): + 09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01| + +101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996 +|CTBR3|CY373827|DANicholas Richard Woodard|CV02| + + Here you can see a hold alert for patron CY 373827, named DA Nicholas Richard Woodard, + to be picked up at CT BR3. Since the transaction is happening + at AO BR1, the alert type CV is 02 for hold at remote library. + The possible values for CV are: + + 00: unknown + 01: local hold + 02: remote hold + 03: ILL transfer (not used by EG) + 04: transfer + 99: other + + + the logic for Evergreen to determine the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. + The default is non-magnetic.magnetic media + The same is true for media_type (default 001). Evergreen does not populate the collection_code because it does not really have any, but it will provide the + call_number where available. + Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of + the bib ID. + Don't be confused by the different branches that can show up in the same response line. + + AO is where the transaction took place, + AQ is the permanent location, and + CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer). + + + + + 11/12 Checkout + + + 15/16 Hold + Not yet supported. + + + 17/18 Item Information + The request looks like: + 17<xact_date>[fields: AO,AB,AC] + The request is very terse. AC is optional. + The following response structure is for SIP2. 
(Version 1 of the protocol had only 6 total fields.) + +18<circulation_status><security_marker><fee_type><xact_date> +[fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS] + + Example: + 1720060110 215612AOBR1|ABno_such_barcode| + 1801010120100609 162510ABno_such_barcode|AJ| + 1720060110 215612AOBR1|AB1565921879| +1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 +|CTBR3|CSQA76.73.P33V76 1996| + + The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10 for in transit between libraries. + The known values of circulation_status are enumerated in the spec. + EXTENSIONS: The CT field for destination location and CS call number are used by + Automated Material Handling systems.Automated Material Handling (AMH) + + + 19/20 Item Status Update + + + 23/24 Patron Status + Example: + 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| + 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| + 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| + + The BL field (SIP2, optional) is valid patron, so the + N value means + bad_barcode doesn't match a patron, the Y value means 999999 does. + The CQ field (SIP2, optional) is valid password, so the N + value means bad_password doesn't match 999999's password, the Y means userpassword + does. + + So if you were building the most basic SIP2 authentication client, you would check for + |CQY| in the response to know the user's barcode and password + are correct (|CQY| implies |BLY|, since you cannot check the password unless the barcode exists). 
However, in practice, + depending on the application, there are other factors to consider in authentication, like whether the user is blocked from checkout, owes excessive fines, reported their + card lost, etc. These limitations are reflected in the 14-character patron status string immediately following the 24 code. + See the field definitions in your copy of the spec. + + + 25/26 Patron Enable + Not yet supported. + + + 29/30 Renew + Evergreen ACS status message indicates renew is supported. + + + 35/36 End Session + 3520100505 115901AOBR1|AA999999| + 36Y20100507 161213AOCONS|AA999999|AFThank you!| + The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or + important in this context, and for evergreen it is hardcoded Y. + + + 37/38 Fee Paid + Not implemented. + + + 63/64 Patron Information + Attempting to retrieve patron info with a bad barcode: + 6300020060329 201700 AOBR1|AAbad_barcode| + 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| + Attempting to retrieve patron info with a good barcode (but bad patron password): + 6300020060329 201700 AOBR1|AA999999|ADbadpwd| + +64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00 +|BD2 Meadowvale Dr. St Thomas, ON Canada + +90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons +|PIUnfiltered|AFOK|AOBR1| + + See 23/24 Patron Status for info on BL and CQ fields. + + + 65/66 Renew All + Not yet supported. + + + 93/94 Login + Example: + 9300CNsip_01|CObad_value|CPBR1| + [Connection closed by foreign host.] + ... + 9300CNsip_01|COsip_01|CPBR1| + 941 + 941 means successful terminal login. 940 or getting dropped means failure. 
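The login example above is easy to compose programmatically. A sketch building the 93 Login request, with the same CN/CO/CP field order as the example and both algorithm digits set to 0 (plain-text credentials); the terminating carriage return a real client would send is left off for readability:

```shell
# Build a SIP2 "93 Login" request: the two "0" digits are the UID and PWD
# algorithms, CN = login user id, CO = login password, CP = location code.
sip_login() {
  printf '9300CN%s|CO%s|CP%s|' "$1" "$2" "$3"
}
sip_login sip_01 sip_01 BR1; echo
# prints: 9300CNsip_01|COsip_01|CPBR1|
```

Remember that Evergreen expects this to be the first command on the socket; anything else causes the connection to be dropped, which is the "[Connection closed by foreign host.]" behavior shown above.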
+ + + 97/96 Resend + + + 99/98 SC and ACS Status + 99<status code><max print width><protocol version> + All 3 fields are required: + + status code - 1 character: + + 0: SC is OK + 1: SC is out of paper + 2: SC shutting down + + max print width - 3 characters - the integer number of characters the client can print + protocol version - 4 characters - x.xx + + +98<on-line status><checkin ok><checkout ok><ACS renewal policy> +<status update ok><offline ok><timeout period> + + +<retries allowed><date/time sync><protocol version><institution id> +<library name><supported messages><terminal + + location><screen message><print line> + Example: + 9910302.00 + 98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN| + The Supported Messages field BX appears only in SIP2, and specifies whether 16 different + SIP commands are supported by the ACS or not. + + + Fields + All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple parsing. Variable-length fields are by + definition delimited, though there will not necessarily be an initial delimiter between the last fixed-length field and the first variable-length one. It would be + unnecessary, since you should know the exact position where that field begins already. + +
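The delimiting rules above suggest a simple parser for the variable-length portion of a message. A POSIX-shell sketch that assumes the fixed-length prefix has already been stripped (its width varies by message type, so it must be handled per command):

```shell
# Split the variable-length SIP fields on "|" and print "ID=value" pairs,
# where ID is the 2-character field identifier.
parse_fields() {
  printf '%s\n' "$1" | tr '|' '\n' | while read -r field; do
    [ -n "$field" ] || continue
    id=$(printf '%s' "$field" | cut -c1-2)
    val=$(printf '%s' "$field" | cut -c3-)
    printf '%s=%s\n' "$id" "$val"
  done
}
parse_fields 'AOBR1|AB1565921879|ACsip_01'
```

Running the example prints one line per field (AO=BR1, AB=1565921879, AC=sip_01), which is a convenient shape for feeding into grep or awk when debugging a device conversation.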
+
diff --git a/2.0/admin/troubleshooting.xml b/2.0/admin/troubleshooting.xml
new file mode 100644
index 0000000..8ab2fe1
--- /dev/null
+++ b/2.0/admin/troubleshooting.xml
@@ -0,0 +1,305 @@
Troubleshooting System Errors
If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause of, and the solution to, most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.
Systematic Evergreen Restart to Isolate Errors
Stop Apache:
/etc/init.d/apache2 stop
or
apache2ctl stop
Stop OpenSRF:
osrf_ctl.sh -l -a stop_all
You should get output similar to this:
Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...
Or, if services have already been stopped, output may look like this:
OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running
Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so check that none are still running with the command:
ps -aef | grep OpenSRF
Manually kill any OpenSRF processes that remain.
If you were unable to stop OpenSRF with the above methods, you can also try this command:
rm -R /openils/var/run/*.pid
This removes the temporary OpenSRF process files from the run directory, which may have been left over from a previous system boot cycle.
+ + + Restart Ejabberd and + Memcached with the following commands:Ejabberd + sudo /etc/init.d/ejabberd restart + sudo /etc/init.d/memcached restartmemcached + + + Start the OpenSRF router and check for errors + OpenSRFservicesrouter + /openils/bin/osrf_ctl.sh -l -a start_router + If the router started correctly, output will be: + Starting OpenSRF Router + If the router does not start correctly, you should check the router log files + for error information. + Evergreen 1.6 uses two routers, a public one and a private one, with two different + logfiles: + /openils/var/log/private.router.loglogsrouter + /openils/var/log/public.router.log + A quick way to find error information in the logs is with the grep command. + grep ERR /openils/var/log/*router.log + As a final sanity check, look for router processes using the process status + command: + ps -aef | grep Router + + + Start the OpenSRF Perl services and check for errors + OpenSRFservicesperl + /openils/bin/osrf_ctl.sh -l -a start_perl + You should see output similar to the following: + 
+Starting OpenSRF Perl
+* starting all services for ...
+* starting service pid=7484 opensrf.settings
+* starting service pid=7493 open-ils.cat
+* starting service pid=7495 open-ils.supercat
+* starting service pid=7497 open-ils.search
+* starting service pid=7499 open-ils.circ
+* starting service pid=7501 open-ils.actor
+* starting service pid=7502 open-ils.storage
+...
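The grep checks used throughout this procedure can be wrapped in a small helper. This is a sketch; check_logs is our name, not an Evergreen utility, and it defaults to the standard /openils/var/log location:

```shell
# Sketch: report the number of ERR lines in each Evergreen log file.
# Defaults to /openils/var/log; pass another directory when testing.
check_logs() {
  dir="${1:-/openils/var/log}"
  # grep -c prints "filename:count" for every file, including zero counts
  grep -c ERR "$dir"/*.log 2>/dev/null
}
```

Running `check_logs` after each restart step gives a quick per-file summary instead of a raw dump of every matching line.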
+ + If the Perl services do not start correctly or you receive errors, search for errors + in the following log files: + + /openils/var/log/router.log + /openils/var/log/osrfsys.loglogsosrfsys.log + + At this point you can use the grep command to find errors in + any of the Evergreen log files: + grep ERR /openils/var/log/*.log + As a final sanity check, look for OpenSRF processes: + ps -aef | grep -i opensrf + + + Start the OpenSRF C services and check for errors: + OpenSRFservicesC + /openils/bin/osrf_ctl.sh -l -a start_c + The output should be: + Starting OpenSRF C (host=localhost) + If the C service does not start, check for errors by grepping + the log files: + grep ERR /openils/var/log/*.log + Check for OpenSRF processes: + ps -aef | grep -i opensrf + + + Smoke test with autogen.shautogen + The autogen tool will take some dynamic information from the database and generate + static JavaScript JavaScript files for use by the OPAC and staff client. It is also able to refresh + the proximity mapproximity map between libraries for the purpose of efficiently routing hold + requests. + As the opensrf user, you invoke autogen with the command: + /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u + If autogen completes successfully, the output will be: + 
+Updating fieldmapper
+Updating web_fieldmapper
+Updating OrgTree
+removing OrgTree from the cache...
+
Updating OrgTree HTML
+Updating locales selection HTML
+Updating Search Groups
+Refreshing proximity of org units
+Successfully updated the organization proximity
+Done
+
 + If autogen does not complete its task and you receive errors, use + grep to find errors in the log files: + grep ERR /openils/var/log/*.log + + + Connect to Evergreen using the srfsh command-line OpenSRF clientsrfsh (command-line OpenSRF client) + /openils/bin/srfsh + + In order for you to connect using srfsh, you will need to + have set up the .srfsh.xml configuration file in your home directory as + described in the installation chapter. + + You will then see the srfsh prompt: + srfsh# + At the srfsh prompt, enter this command: + login admin open-ils + You should see the request verification: + 
+Received Data: "6f63ff5542da1fead4431c6c280efc75"
+------------------------------------
+Request Completed Successfully
+Request Time in seconds: 0.018414
+------------------------------------
+
+Received Data: {
+"ilsevent":0,
+"textcode":"SUCCESS",
+"desc":" ",
+"pid":7793,
+"stacktrace":"oils_auth.c:312",
+"payload":{
+"authtoken":"28804ebf99508496e2a4d2593aaa930e",
+ "authtime":420.000000
+}
+}
+
+------------------------------------
+Request Completed Successfully
+Request Time in seconds: 0.552430
+------------------------------------
+Login Session: 28804. Session timeout: 420.000
+srfsh#
 + If you encounter errors or if you are unable to connect, you should consult the + srfsh.log file. The location of this file is configured in your logssrfsh.log + .srfsh.xml configuration file and is + /openils/var/log/srfsh.log by default. + Pressing + + Ctrl + D + + or entering exit will terminate srfsh. + + + Start Apache and check for errors: + web serverApachestarting + /etc/init.d/apache2 start + or + apache2ctl start + You should see output: + 
+* Starting web server apache2
+...done.
+ + The Apache OpenSRF modules write to + /openils/var/log/gateway.loglogsgateway.log. + However, you should check all of the log files for errors: + grep ERR /openils/var/log/*.log + Another place to check for errors is the Apache error logs, + generally located in the /var/log/apache2 logsApache + Apachelogs + directory. + If you encounter errors with Apache, a common source of potential problems is the + Evergreen site configuration files /etc/apache2/eg_vhost.conf and + /etc/apache2/sites-available/eg.conf. + configuration filesApacheeg.conf + configuration filesApacheeg_vhost.conf + + + Testing with <command>settings-tester.pl</command> + As the opensrf user, run the script settings-tester.pl to see if it finds any + system configuration problems. + 
+cd /home/opensrf/Evergreen-ILS-1.6.0.0
+perl Open-ILS/src/support-scripts/settings-tester.pl
+
 + Here is example output from running settings-tester.pl: + 
+LWP::UserAgent version 5.810
+XML::LibXML version 1.70
+XML::LibXML::XPathContext version 1.70
+XML::LibXSLT version 1.70
+Net::Server::PreFork version 0.97
+Cache::Memcached version 1.24
+Class::DBI version 0.96
+Class::DBI::AbstractSearch version 0.07
+Template version 2.19
+DBD::Pg version 2.8.2
+Net::Z3950::ZOOM version 1.24
+MARC::Record version 2.0.0
+MARC::Charset version 1.1
+MARC::File::XML version 0.92
+Text::Aspell version 0.04
+CGI version 3.29
+DateTime::TimeZone version 0.7701
+DateTime version 0.42
+DateTime::Format::ISO8601 version 0.06
+DateTime::Format::Mail version 0.3001
+Unix::Syslog version 1.1
+GD::Graph3d version 0.63
+JavaScript::SpiderMonkey version 0.19
+Log::Log4perl version 1.16
+Email::Send version 2.192
+Text::CSV version 1.06
+Text::CSV_XS version 0.52
+Spreadsheet::WriteExcel::Big version 2.20
+Tie::IxHash version 1.21
+Parse::RecDescent version 1.95.1
+SRU version 0.99
+JSON::XS version 2.27
+
+
+Checking Jabber connection for user opensrf, domain private.localhost
+* Jabber successfully connected
+
+Checking Jabber
connection for user opensrf, domain public.localhost +* Jabber successfully connected + +Checking Jabber connection for user router, domain public.localhost +* Jabber successfully connected + +Checking Jabber connection for user router, domain private.localhost +* Jabber successfully connected + +Checking database connections +* /opensrf/default/reporter/setup :: Successfully connected to database... + * Database has the expected server encoding UTF8. +* /opensrf/default/apps/open-ils.storage/app_settings/databases :: Successfully... +* /opensrf/default/apps/open-ils.cstore/app_settings :: Successfully... + * Database has the expected server encoding UTF8. +* /opensrf/default/apps/open-ils.pcrud/app_settings :: Successfully ... + * Database has the expected server encoding UTF8. +* /opensrf/default/apps/open-ils.reporter-store/app_settings :: Successfully... + * Database has the expected server encoding UTF8. + +Checking database drivers to ensure <driver> matches <language> +* OK: Pg language is undefined for reporter base configuration +* OK: Pg language is undefined for reporter base configuration +* OK: Pg language is perl in /opensrf/default/apps/open-ils.storage/language +* OK: pgsql language is C in /opensrf/default/apps/open-ils.cstore/language +* OK: pgsql language is C in /opensrf/default/apps/open-ils.pcrud/language +* OK: pgsql language is C in /opensrf/default/apps/open-ils.reporter-store/language + +Checking libdbi and libdbi-drivers + * OK - found locally installed libdbi.so and libdbdpgsql.so in shared library path + +Checking hostname + * OK: found hostname 'localhost' in <hosts> section of opensrf.xml +$ + + If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration. + Follow the steps in the troubleshooting guide in . 
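If settings-tester.pl itself will not run (for example, because a prerequisite Perl module is missing), you can probe for individual modules from the shell. This is a sketch; check_perl_module is a hypothetical helper of ours, and the module names in the loop come from the version list above:

```shell
# Sketch: report whether a Perl module can be loaded by the system perl.
check_perl_module() {
  if perl -M"$1" -e 1 2>/dev/null; then
    echo "$1 ok"
  else
    echo "$1 MISSING"
  fi
}

# Spot-check a few of the modules settings-tester.pl verifies:
for m in XML::LibXML Cache::Memcached DateTime; do
  check_perl_module "$m"
done
```

A MISSING result usually means the module should be installed from CPAN or your distribution's packages before re-running settings-tester.pl.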
+ + + Try to log in from the staff clientstaff clienttesting + + + Testing the Catalog + OPACtesting + By default, the OPAC will live at the URL http://my.domain.com/opac/. + Navigate to this URL and the front page of the OPAC should load. There is a basic text entry field with some extra search options. If you have any + problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible JavaScript errors. We + highly recommend testing with the Firefox browser because of its helpful JavaScript debugging tools. + Assuming that the OPAC is functioning and there is data in your database, you can now perform other simple functional tests + (e.g., searching the catalog). + + + diff --git a/2.0/admin/z3950.xml b/2.0/admin/z3950.xml new file mode 100644 index 0000000..5bf2d4a --- /dev/null +++ b/2.0/admin/z3950.xml @@ -0,0 +1,205 @@ + + + + <systemitem class="protocol">SRU</systemitem> and <systemitem class="protocol">Z39.50</systemitem> Server + + Evergreen is extremely scalable and can serve the needs of a wide range of libraries. The specific requirements and configuration of your system should be determined based on the + specific needs of your organization or consortium.
+ + Testing SRU with yaz-clientyaz + + yaz-client is installed as a part of Index Data's YAZ software. Recent versions include support for querying SRU servers. + Evergreen ships an SRU configuration SRU + that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command. + In the following example, we connect to the Evergreen test server dev.gapines.org; substitute your own + Evergreen server hostname: + Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest version of yaz from + http://www.indexdata.com/yaz. + $ yaz-client http://dev.gapines.org/opac/extras/sru + Z> sru GET 1.1 + Z> find hemingway + + If your database has records that match that term, you will get the corresponding MARCXML records + in your response from yaz-client.MARCXML + Here's what the SRU request looks like as sent to the Evergreen web server: + GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 + You can see what the response looks like by visiting the same URL in your web browser: + + http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 + CQL queries + Evergreen supports some CQL CQL index-sets for advanced queries such as a subset of + Dublin Core (DC) elements. Those DC elements that are + supported map to Evergreen default indexes as follows:Dublin Core + + + + + + + DC element + Evergreen index + + + + + title + title + + + creator + author + + + contributor + author + + + publisher + keyword + + + subject + subject + + + identifier + keyword + + + type + none + + + format + none + + + language + lang + + + + + Here are a few examples of CQL queries against some of these indexes: + + dc.title all complete dinosaur + dc.subject all britain france + dc.title exact The Empire Strikes Back + dc.author=king and dc.title=zone +
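The request URL shown above follows a fixed pattern, so it can be generated with a small helper. This is an illustration of ours, not part of Evergreen; build_sru_url is a hypothetical name, and a real query containing spaces or CQL operators would also need URL-encoding:

```shell
# Sketch: build an SRU searchRetrieve URL for an Evergreen server.
# maximumRecords defaults to 0, matching the example request above.
build_sru_url() {
  host="$1"; query="$2"; max="${3:-0}"
  printf 'http://%s/opac/extras/sru?version=1.1&operation=searchRetrieve&query=%s&maximumRecords=%s\n' \
    "$host" "$query" "$max"
}

build_sru_url dev.gapines.org hemingway
```

The result can be passed straight to a tool like curl or wget to inspect the raw XML response.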
+
+ + Setting up <systemitem class="protocol">Z39.50</systemitem> server supportZ39.50 + + + You must have Evergreen's SRU server running before you can enable Z39.50 server support. + + This support uses a Z39.50-to-SRU translator service supplied + by the Net::Z3950::Simple2ZOOMSimple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. + You can run the Z39.50 server on a different machine; it just needs to be able to connect to the + Evergreen SRU server. + + Setting up the <systemitem class="protocol">Z39.50</systemitem> server + + Install a recent version of yaz (the Makefile.install should have installed a suitable version). + + Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM) + + Create a Simple2ZOOM configuration file. Something like the following is a good start, and is + based on the Simple2ZOOM + documentation example. We'll name the file dgo.conf for our example: + 
<client>
 <database name="gapines">
 <zurl>http://dev.gapines.org/opac/extras/sru</zurl>
 <option name="sru">get</option>
 <charset>marc-8</charset>
 <search>
 <querytype>cql</querytype>
 <map use="4"><index>eg.title</index></map>
 <map use="7"><index>eg.keyword</index></map>
 <map use="8"><index>eg.keyword</index></map>
 <map use="21"><index>eg.subject</index></map>
 <map use="1003"><index>eg.author</index></map>
 <map use="1018"><index>eg.publisher</index></map>
 <map use="1035"><index>eg.keyword</index></map>
 <map use="1016"><index>eg.keyword</index></map>
 </search>
 </database>
</client>
 + + You can have multiple <database> sections in a single file, each pointing to a different scope of your consortium. The name attribute on + the <database> element is used in your Z39.50 connection string to name the database. The + <zurl> element must point to + http://hostname/opac/extras/sru.
As of Evergreen 1.6, you can append an optional organization unit shortname for search + scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl + could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and + to expose its holdings. + + + Create a yaz GFS (generic frontend server) configuration file that tells yaz how to + convert Evergreen's XML records to MARC. We'll name the file xml2marc-yaz.cfg + to match the -f option used in the next step: + 
 <yazgfs>
 <server id="server1">
 <retrievalinfo>
 <retrieval syntax="xml"/>
 <retrieval syntax="marc21">
 <backend syntax="xml">
 <marc inputformat="xml" outputformat="marc" inputcharset="utf-8" outputcharset="marc-8"/>
 </backend>
 </retrieval>
 </retrievalinfo>
 </server>
</yazgfs>
 + + + Run simple2zoom as a daemon, specifying the configuration files and one or more listener addresses that + the Z39.50 server will be accessible on. + If you do not specify a port, it will automatically run on port 9999. In the following example, we tell it to listen both to localhost on port 2210, and on dev.gapines.org on port 210: + simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210 + + + To test the Z39.50 server, we can use yaz-client again: + 
+yaz-client
+Z> open localhost:2210/gapines
+Connecting...OK.
+Sent initrequest.
+Connection accepted by v3 target.
+ID : 81/81
+Name : Simple2ZOOM Universal Gateway/GFS/YAZ
+Version: 1.03/1.128/3.0.34
+Options: search present delSet triggerResourceCtrl scan sort namedResultSets
+Elapsed: 0.010718
+Z> format marcxml
+Z> find dc.title=zone and dc.author=king
+Sent searchRequest.
+Received SearchResponse.
+Search was a success.
+Number of hits: 0, setno 4 +records returned: 0 +Elapsed: 0.611432 +Z> find dead zone +Sent searchRequest. +Received SearchResponse. +Search was a success. +Number of hits: 4, setno 5 +records returned: 0 +Elapsed: 1.555461 +Z> show 1 +Sent presentRequest (1+1). +Records: 1 +[]Record type: XML +<record xmlns:... (rest of record deliberately truncated) + +
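The <zurl> scoping rules described earlier (base URL, optional organization unit shortname, optional /holdings suffix) can be sanity-checked with a pattern match before you deploy a configuration. This is a sketch; valid_zurl is a hypothetical helper of ours:

```shell
# Sketch: check a <zurl> value against the expected shape:
#   http://host/opac/extras/sru[/ORG_SHORTNAME][/holdings]
valid_zurl() {
  printf '%s\n' "$1" |
    grep -Eq '^http://[^/]+/opac/extras/sru(/[^/]+)?(/holdings)?$'
}

valid_zurl http://dev.gapines.org/opac/extras/sru/BR1/holdings && echo valid
```

A check like this catches typos such as a missing /opac/extras/sru path segment before they show up as opaque Z39.50 connection failures.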
+
+ + diff --git a/2.0/appendices/glossary.xml b/2.0/appendices/glossary.xml index 1230f84..59c3a16 100644 --- a/2.0/appendices/glossary.xml +++ b/2.0/appendices/glossary.xml @@ -7,59 +7,64 @@ A - + Apache Apache - Open-source web server software used to serve both static content and - dynamic web pages in a secure and reliable way. More information is available - at http://apache.org. + Open-source web server software used to serve both static + content and dynamic web pages in a secure and reliable way. More + information is available at + http://apache.org. B - + Bookbags bookbags Bookbags are lists of items that can be used for any number of - purposes. For example, to keep track of what books you have read, books you - would like to read, to maintain a class reading list, to maintain a reading - list for a book club, to keep a list of books you would like for your - birthday. There are an unlimited number of uses. + purposes. For example, to keep track of what books you have read, + books you would like to read, to maintain a class reading list, to + maintain a reading list for a book club, to keep a list of books you + would like for your birthday. There are an unlimited number of + uses. C - + CentOS Linux CentOS - A popular open-source operating system based on Red Hat Enterprises - Linux (a.k.a. "RHEL") and often used - for in web servers. More information is available at http://www.centos.org. + A popular open-source operating system based on Red Hat + Enterprise Linux + (also known as "RHEL") and often used in web servers. More + information is available at + http://www.centos.org. - + Closure Compiler A suite of open-source tools used to build web applications with - Javascript; originally developed by Google. - More information is available at + Javascript; originally developed by Google. + It is used to create special builds of the Evergreen Staff Client. + More information is available at + http://code.google.com/closure/compiler/.
- + CPAN Perl @@ -67,14 +72,15 @@ An open-source archive of software modules written in - Perl. More information is available at + Perl. More information is available at http://www.cpan.org. + D - + Debian Linux @@ -82,10 +88,11 @@ One of the most popular open-source operating system using the - Linux kernel that provides over 25000 - useful precompiled software packages. Also known as - Debian GNU/Linux. More information is - available at http://www.debian.org. + Linux kernel that provides + over 25000 useful precompiled software packages. Also known as + Debian GNU/Linux. More + information is available at + http://www.debian.org. @@ -94,30 +101,34 @@ domain name - A unique set of case-insensitive, alphanumeric strings separated by - periods that are used to name organizations, web sites and addresses on the - Internet (e.g.: www.esilibrary.com). Domain names can be reserved - via third-party registration services, and can be associated with a unique IP - address or suite of IP addresses. + A unique set of case-insensitive, alphanumeric strings + separated by periods that are used to name organizations, web sites + and addresses on the Internet (e.g.: + www.esilibrary.com). Domain names can be reserved via + third-party registration services, and can be associated with a + unique IP address or suite of IP addresses. E - + ejabberd ejabberd - An open-source Jabber/XMPP instant messaging server that runs under popular - operating systems (e.g., Mac OSX, + An open-source Jabber/XMPP instant messaging server that is + used for client-server message passing within Evergreen. It runs + under popular operating systems (e.g., + Mac OSX, GNU/Linux, and - Microsoft Windows). One popular use is - to provide XMPP messaging services for a - Jabber domain across an extendable cluster of - cheap, easily-replaced machine nodes. More information is available at + Microsoft Windows). 
One + popular use is to provide XMPP messaging + services for a Jabber domain across an + extendable cluster of cheap, easily-replaced machine nodes. More + information is available at http://www.ejabberd.im. @@ -130,16 +141,17 @@ G - + Gentoo Linux Gentoo - A popular open-source operating system built on the - Linux kernel. More information is available - at http://www.gentoo.org. + A popular open-source operating system built on the + Linux kernel. More + information is available at + http://www.gentoo.org. @@ -149,22 +161,23 @@ I - + IP Address IP Address - (Internet Protocol address) A numerical label consisting of four numbers - separated by periods (e.g., "192.168.1.15") assigned to individual members of - networked computing systems. It uniquely identifies each system on the network - and allows controlled communication between such systems. The numerical label - scheme must adhere to a strictly defined naming convention that is currently - defined and overseen by the Internet Corporation for Assigned Names and - Numbers ("ICANN"). + (Internet Protocol address) A numerical label consisting of + four numbers separated by periods (e.g., "192.168.1.15") assigned to + individual members of networked computing systems. It uniquely + identifies each system on the network and allows controlled + communication between such systems. The numerical label scheme must + adhere to a strictly defined naming convention that is currently + defined and overseen by the Internet Corporation for Assigned Names + and Numbers ("ICANN"). - + Item/copy Buckets copy buckets @@ -175,24 +188,26 @@ copy buckets - Virtual containers to use in batch processing of item or - copy records. They can be used to perform various cataloging/holdings - maintenance tasks in batch. + Virtual containers to use in batch processing + of item or copy records. They can be used to perform various + cataloging/holdings maintenance tasks in batch. 
J - + Jabber jabber XMPP - Now known as XMPP (eXtensible Messaging and Presence Protocol), it was - originally named "Jabber". + The communications protocol used for client-server message + passing within Evergreen. Now known as XMPP (eXtensible Messaging and + Presence Protocol), it was originally named "Jabber". + @@ -206,17 +221,18 @@ M - + MARC MARC - The MARC formats are standards for the representation and communication - of bibliographic and related information in machine-readable form. + The MARC formats are standards for the representation and + communication of bibliographic and related information in + machine-readable form. - + MARCXML MARCXML @@ -225,32 +241,34 @@ Framework for working with MARC data in a XML environment. - + McCoy - An open-source application that allows add-on authors to provide secure - updates to their users. More information is available at - + An open-source application that allows add-on authors to + provide secure updates to their users. It is used to create special + builds of the Evergreen Staff Client. More information is available + at http://developer.mozilla.org/en/McCoy. - + memcached memcached - A general-purpose distributed memory caching system, usually with a - client~server architecture spread over multiple computing systems. It reduces - the number of times a data source (e.g., a database) must be directly accessed - by temporarily caching data in memory, therefore dramatically speeding up - database-driven web applications. + A general-purpose distributed memory caching system, usually + with a client-server architecture spread over multiple computing + systems. It reduces the number of times a data source (e.g., a + database) must be directly accessed by temporarily caching data in + memory, therefore dramatically speeding up database-driven web + applications. N - + Network address network address @@ -261,76 +279,92 @@ - + nsis - An open-source software tool used to create Windows installers. 
- More information is available at - http://nsis.sourceforge.net. + An open-source software tool used to create Windows + installers. It is used to create special builds of the Evergreen + Staff Client. More information is available at + + http://nsis.sourceforge.net. O - + OPAC OPAC - The "Online Public Access Catalog"; an online database of a library's - holdings; used to find resources in their collections; possibly searchable by - keyword, title, author, subject or call number. + The "Online Public Access Catalog"; an online database of a + library's holdings; used to find resources in their collections; + possibly searchable by keyword, title, author, subject or call + number. - + OpenSRF OpenSRF - The "Open Scalable Request Framework" (pronounced 'open surf') is a - stateful, decentralized service architecture that allows developers to create - applications for Evergreen with a minimum of knowledge of its - structure. + The "Open Scalable Request Framework" (pronounced 'open surf') + is a stateful, decentralized service architecture that allows + developers to create applications for Evergreen with a minimum of + knowledge of its structure. P - + + Perl + + Perl + + + The high-level scripting language in which most of the business logic of Evergreen is written. + + + + PKI PKI - Public Key Infrastructure (PKI) describes the schemes needed to generate - and maintain digital SSL Certificates. + Public Key Infrastructure (PKI) describes the schemes needed + to generate and maintain digital SSL Certificates. - + PostgreSQL databases PostgreSQL - A popular open-source object-relational database management system that - underpins Evergreen software. + A popular open-source object-relational database management + system that underpins Evergreen software. - - Putty + + PuTTY SSH Putty - A popular open-source telnet/ssh client for the Windows and Unix - platforms. 
More information is available at + A popular open-source telnet/ssh client for the Windows and + Unix platforms. As used in Evergreen, a handy utility used to create + an SSH Tunnel for connecting Staff Clients to Evergreen servers over + insecure networks. More information is available at http://www.chiark.greenend.org.uk/~sgtatham/putty/. + @@ -340,159 +374,187 @@ R - + Resource Hacker - An open-source utility used to view, modify, rename, add, delete and - extract resources in 32bit Windows executables. - More information is available at - Resource Hacker + An open-source utility used to view, modify, rename, add, + delete and extract resources in 32bit Windows executables. It is + used to create special builds of the Evergreen Staff Client. More + information is available at + + Resource Hacker - + RHEL Linux RHEL - Also known as "Red Hat Enterprises - Linux". An official - Linux distribution that is targeted - at the commercial market. It is the basis of other popular - Linux distributions, e.g., - CentOS. More information is available - at http://www.redhat.com. + Also known as "Red Hat Enterprises + Linux". An official + Linux distribution that is + targeted at the commercial market. It is the basis of other popular + Linux distributions, e.g., + CentOS. More information is + available at + http://www.redhat.com. S - + SIP SIP - SIP, standing for Standard Interchange Protocol, was developed by the 3M - Corporation to be a common protocol for data transfer between ILS' and third party - devices. + SIP (Standard Interchange Protocol) is a communications + protocol used within Evergreen for transferring data to and from + other third party devices, such as RFID and barcode scanners that + handle patron and library material information. Version 2.0 (also + known as "SIP2") is the current standard. It was originally + developed by the 3M Corporation. - + srfsh srfsh - A command language interpreter (shell) that executes commands read from the - standard input. 
It is used to test the Open Service Request Framework (OpenSRF). + A command language interpreter (shell) that executes commands + read from the standard input. It is used to test the Open + Scalable Request Framework (OpenSRF). - + SRU SRU - SRU is a standard XML-focused search protocol for Internet search - queries, utilizing CQL (Contextual Query Language), a standard syntax for - representing queries. + SRU (Search & Retrieve URL Service) is a search protocol + used in web search and retrieval. It expresses queries in Contextual + Query Language (CQL) and transmits them as a URL, returning XML data + as if it were a web page. + - + SRW + + SRW + + + SRW (Search & Retrieve Web Service), also known as "SRU + via HTTP SOAP", is a search protocol used in web search and + retrieval. It uses a SOAP interface and expresses both the query and + result as XML data streams. + + + + SSH SSH - An encrypted network protocol using public-key cryptography that allows - secure communications between systems on an insecure network. Typically used - to access shell accounts but also supports tunneling, forwarding TCP ports and - X11 connections, and transferring files. + An encrypted network protocol using public-key cryptography + that allows secure communications between systems on an insecure + network. Typically used to access shell accounts but also supports + tunneling, forwarding TCP ports and X11 connections, and + transferring files. - + SSH proxy SSH proxy - As used in Evergreen, a method of allowing one or more Staff Clients to - communicate with one or more Evergreen servers over an insecure network by - sending data through a secure SSH tunnel. It also buffers and caches all data - travelling to and from Staff Clients to speed up access to resources on - Evergreen servers. + As used in Evergreen, a method of allowing one or more Staff + Clients to communicate with one or more Evergreen servers over an + insecure network by sending data through a secure SSH tunnel.
It + also buffers and caches all data travelling to and from Staff + Clients to speed up access to resources on Evergreen servers. - + SSH tunnel SSH tunneling - An encrypted data channel existing over an SSH network connection. Used - to securely transfer unencrypted data streams over insecure networks. + An encrypted data channel existing over an SSH network + connection. Used to securely transfer unencrypted data streams over + insecure networks. - + SSL Certificate SSL - As used in Evergreen, it is a method of ensuring that Staff Clients are - able to connect to legitimate Evergreen servers. - In general, it is a special electronic document used to guarantee - authenticity of a digital message. Also known as a "public key", or "identity" - or "digital" certificate. It combines an identity (of a person or an - organization) and a unique public key to form a so-called digital signature, - and is used to verify that the public key does, in fact, belong with that - particular identity. + As used in Evergreen, it is a method of ensuring that Staff + Clients are able to connect to legitimate Evergreen servers. + In general, it is a special electronic document used to + guarantee authenticity of a digital message. Also known as a "public + key", or "identity" or "digital" certificate. It combines an + identity (of a person or an organization) and a unique public key to + form a so-called digital signature, and is used to verify that the + public key does, in fact, belong with that particular + identity. + T - + tunneling tunneling SSH tunneling - As used in Evergreen, it is a method of allowing Staff Clients to securely - connect to legitimate Evergreen servers. - In general, it is a method of encapsulating data provided in one network - protocol (the "delivery"protocol), within data in a different network protocol - (the "tunneling" protocol). Used to provide a secure path and secure - communications through an insecure or incompatible network. 
Can be used to - bypass firewalls by communicating via a protocol the firewall normally blocks, - but "wrapped" inside a protocol that the firewall does not block. + As used in Evergreen, it is a method of allowing Staff Clients + to securely connect to legitimate Evergreen servers. + In general, it is a method of encapsulating data provided in + one network protocol (the "delivery" protocol), within data in a + different network protocol (the "tunneling" protocol). Used to + provide a secure path and secure communications through an insecure + or incompatible network. Can be used to bypass firewalls by + communicating via a protocol the firewall normally blocks, but + "wrapped" inside a protocol that the firewall does not block. U - + Ubuntu Linux Ubuntu - A popular open-source operating system using the - Linux kernel that was originally based on the - Debian GNU/Linux operating system. - More information is available at + A popular open-source operating system using the + Linux kernel that was + originally based on the + Debian GNU/Linux + operating system. More information is available at http://www.ubuntu.com. V - + Virtualization virtualization - A method of executing software in a special environment that is partitioned or - separated from the real underlying hardware and software resources. In typical usage, it - allows a host operating system to encapsulate or emulate another - operating system environment in such a way that the emulated environment is completely - unaware of the hosting environment. For instance, it allows a - Linux application to execute within a + A method of executing software in a special environment that + is partitioned or separated from the real underlying hardware and + software resources. In typical usage, it allows a + host operating system to encapsulate or emulate + a guest operating system environment in such a + way that the emulated environment is completely unaware of the + hosting environment.
As used in Evergreen, it enables a copy of the + Linux operating system + running Evergreen software to execute within a Windows environment. + + + - + VirtualBox virtualization software VirtualBox - A popular commercial package of virtualization software that emulates the x86 - microprocessor architecture. It can be installed on - Linux, Mac OS X, - Windows or - Solaris "host" operating systems and allows - other "guest" (typically including Linux and - Windows) operating systems to be loaded and executed. + A popular commercial package of virtualization software that + emulates the x86 microprocessor architecture. It can be installed on + Linux, + Mac OS X, + Windows or + Solaris "host" operating + systems and allows other "guest" (typically including + Linux and + Windows) operating systems + to be loaded and executed. - + Virtual PC virtualization software Virtual PC - A popular commercial package of virtualization software that emulates the x86 - microprocessor architecture. It is installed on a Windows "host" operating system and - allows other "guest" (typically including Linux - and Windows) operating systems to be loaded and - executed. + A popular commercial package of virtualization software that + emulates the x86 microprocessor architecture. It is installed on a + Windows "host" operating system and allows other "guest" (typically + including Linux and + Windows) operating systems + to be loaded and executed. - - Volume Buckets - - volume buckets - - - Virtual containers to use in batch processing of multiple - volumes. They can be used to perform various cataloging/holdings maintenance - tasks in batch. - - - + VMware virtualization software VMware - A popular commercial package of virtualization software that emulates the x86 - microprocessor architecture. It can be installed on - Linux, + A popular commercial package of virtualization software that + emulates the x86 microprocessor architecture. 
It can be installed on + Linux, Mac OS X, - Windows or - Solaris "host" operating systems and - allows other "guest" (typically including - Linux and - Windows) operating systems to be - loaded and executed. + Windows or + Solaris "host" operating systems + and allows other "guest" (typically including + Linux and + Windows) operating systems + to be loaded and executed. + + Volume Buckets + + volume buckets + + + Virtual containers to use in batch processing + of multiple volumes. They can be used to perform various + cataloging/holdings maintenance tasks in batch. + + W - + Wine Linux @@ -589,84 +661,88 @@ A popular open-source application that allows - Linux and + Linux and Unix - systems to run Windows executables. - More information is available at + systems to run Windows + executables. More information is available at http://www.winehq.org/. X - + XML XML - The eXtensible Markup Language, a subset of SGML; a set of rules for - encoding information in a way that is both human- and machine-readable. It is - primarily used to define documents but can also be used to define arbitrary - data structures. It was originally defined by the World Wide Web Consortium - (W3C). + The eXtensible Markup Language, a subset of SGML; a set of + rules for encoding information in a way that is both human- and + machine-readable. It is primarily used to define documents but can + also be used to define arbitrary data structures. It was originally + defined by the World Wide Web Consortium (W3C). - + XMPP XMPP jabber - An open-standard communications protocol, based on XML, used in - message-oriented middleware. It supports the concept of a consistent - domain of message types that flow between software - applications, possibly on different operating systems and architectures. - More information is available at - http://xmpp.org. + The open-standard communications protocol (based on XML) used + for client-server message passing within Evergreen. 
It supports the + concept of a consistent domain of message types + that flow between software applications, possibly on different + operating systems and architectures. More information is available + at http://xmpp.org. + - + xpath xpath - The XML Path Language, a query language based on a tree representation - of an XML document. It is used to programmatically select nodes from an XML - document and to do minor computation involving strings, numbers and Boolean - values. It allows you to identify parts of the XML document tree, to navigate - around the tree, and to uniquely select nodes. The currently version is "XPath - 2.0". It was originally defined by the World Wide Web Consortium (W3C). + The XML Path Language, a query language based on a tree + representation of an XML document. It is used to programmatically + select nodes from an XML document and to do minor computation + involving strings, numbers and Boolean values. It allows you to + identify parts of the XML document tree, to navigate around the + tree, and to uniquely select nodes. The currently version is "XPath + 2.0". It was originally defined by the World Wide Web Consortium + (W3C). - + XUL xUL - The XML User Interface Language, a specialized interface language that allows - building cross-platform applications that drive Mozilla - -based browsers such as Firefox. More information is - available at + The XML User Interface Language, a specialized interface + language that allows building cross-platform applications that drive + Mozilla-based browsers such as + Firefox. More information is available at https://developer.mozilla.org/en/XUL. - + xulrunner XULRunner - A specialized run-time application environment that provides support for - installing, upgrading and uninstalling XUL - applications. It operates with Mozilla-based - applications such as the Firefox browser. 
- More information is available at + A specialized run-time application environment that provides + support for installing, upgrading and uninstalling + XUL applications. It operates with + Mozilla-based applications such as the + Firefox browser. More information is + available at https://developer.mozilla.org/en/XULRunner. @@ -675,38 +751,45 @@ Y - + YAZ yaz - A programmers’ toolkit supporting the development of Z39.50/SRW/SRU - clients and servers. + A programmers’ toolkit supporting the development of + Z39.50 / SRW / SRU clients and servers. + + + - + yaz yaz-client - Z39.50/SRU client for connecting to YAZ servers. - More information is available at - http://www.indexdata.com/yaz + A Z39.50/SRU client for connecting to YAZ servers. + More information is available at + + http://www.indexdata.com/yaz/doc/yaz-client.html + Z - + Z39.50 Z39.50 - A client–server protocol for searching and retrieving information from - remote computer databases. + An international standard client–server protocol for + communication between computer systems, primarily library and + information related systems. 
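The XPath entry above can be illustrated with a short sketch using Python's standard library (xml.etree.ElementTree supports a limited subset of XPath 1.0). The record structure below is invented for illustration only; Evergreen itself applies XPath to its own XML documents and configuration files such as opensrf_core.xml.

```python
# A minimal, hypothetical sketch of XPath node selection.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<record>
  <title>Open Source Library Systems</title>
  <holdings>
    <copy barcode="30001" status="Available"/>
    <copy barcode="30002" status="Checked out"/>
  </holdings>
</record>
""")

# Select a single node by path.
title = doc.find("./title").text

# Select nodes anywhere in the tree, filtered by an attribute predicate.
available = [c.get("barcode")
             for c in doc.findall(".//copy[@status='Available']")]

print(title)      # Open Source Library Systems
print(available)  # ['30001']
```

Full XPath 2.0 engines add functions, computation on strings and numbers, and richer axes, but the path-plus-predicate pattern shown here is the core idea.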
Binary files added under 2.0/media/ (new files, mode 100755):
serversideinstallation-staffclient-running-1.png through -4.png;
serversideinstallation-testing-1.png and -2.png;
serversideinstallation-virtual-vbox-install-1.png through -5.png;
serversideinstallation-virtual-vm-install-1.png through -21.png.

diff --git a/2.0/pdf/temp.fo b/2.0/pdf/temp.fo
New file (42,041 lines of XSL-FO output generated from the DocBook source with the DocBook XSL stylesheets and Apache FOP), beginning:

Evergreen 2.0 Documentation, Draft Version
Documentation Interest Group
Copyright © 2011 Evergreen Community
This document was updated 2011-02-21.
Report any errors in this documentation using Launchpad.

Table of Contents

Part I. Introduction
  Chapter 1. About Evergreen
  Chapter 2. 2.0 Feature List
Part II. Public Access Catalog
Part III. Core Staff Tasks
  Chapter 3. Alternate Serial Control
  Chapter 4. Using the Booking Module: Creating a Booking Reservation; Cancelling a Reservation; Creating a Pull List; Capturing Items for Reservations; Picking Up Reservations; Returning Reservations
Part IV. Administration
  Chapter 5. Server-side Installation of Evergreen Software: Installing Server-Side Software; Installing OpenSRF 1.6.2 on Ubuntu or Debian; Installing Evergreen 2.0 on Ubuntu or Debian; Starting Evergreen; Testing Your Evergreen Installation; Post-Installation Chores (Remove temporary Apache configuration changes; Configure a permanent SSL key; (OPTIONAL) IP-Redirection; (OPTIONAL) Set Up Support For Reports)
  Chapter 6. Upgrading Evergreen to 2.0: Backing Up Data; Upgrading OpenSRF to 1.6.2; Upgrade Evergreen from 1.6.1 to 2.0; Restart Evergreen and Test; Upgrading PostgreSQL from 8.2 to 8.4 (if required)
  Chapter 7. Migrating Data: Migrating Bibliographic Records; Migrating Bibliographic Records Using the ESI Migration Tools; Adding Copies to Bibliographic Records; Migrating Patron Data; Restoring your Evergreen Database to an Empty State; Exporting Bibliographic Records into MARC files; Importing Authority Records
  Chapter 8. Server Operations and Maintenance: Starting, Stopping and Restarting; Backing Up; Security; Managing Log Files; Installing PostgreSQL from Source; Configuring PostgreSQL
  Chapter 9. SIP Server: Installing the SIP Server; SIP Communication
  Chapter 10. SRU and Z39.50 Server: Testing SRU with yaz-client; Setting up Z39.50 server support
  Chapter 11. Troubleshooting System Errors
  Chapter 12. Action Triggers: Event Definitions; Hooks; Reactors; Validators; Processing Action Triggers
  Chapter 13. Booking Module Administration: Make a Cataloged Item Bookable in Advance; Make a Cataloged Item Bookable On the Fly; Create a Bookable Status for Non-Bibliographic Items; Setting Booking Permissions
Part V. Reports
Part VI. Third Party System Integration
Part VII. Development
  Chapter 14. Evergreen File Structure and Configuration Files: Evergreen Directory Structure; Evergreen Configuration Files
  Chapter 15. Customizing the Staff Client: Changing Colors and Images; Changing Labels and Messages; Changing the Search Skin
  Chapter 16. Customizing the OPAC: Change the Color Scheme; Customizing OPAC Text and Labels; Logo Images; Added Content; Customizing the Results Page; Customizing the Details Page; BibTemplate; Customizing the Slimpac; Integrating an Evergreen Search Form on a Web Page
  Chapter 17. OpenSRF: Introducing OpenSRF; Writing an OpenSRF Service; OpenSRF Communication Flows; Evergreen-specific OpenSRF services
  Chapter 18. Evergreen Data Models and Access: Exploring the Database Schema; Database access methods; Evergreen Interface Definition Language (IDL); open-ils.cstore data access interfaces; open-ils.pcrud data access interfaces; Transaction and savepoint control; Adding an IDL entry for Resolver
  Chapter 19. Introduction to SQL for Evergreen Administrators: Introduction to SQL Databases; Basic SQL queries; Advanced SQL queries; Understanding query performance with EXPLAIN; Inserting, updating, and deleting data; Query requests
  Chapter 20. JSON Queries
  Chapter 21. SuperCat: Using SuperCat; Adding new SuperCat Formats; Customizing SuperCat Formats
Part VIII. Appendices
  Chapter 22. Database Schema: Schemas acq, action, action_trigger, actor, asset, auditor, authority, biblio, booking, config, container, extend_reporter, metabib, money, offline, permission, public, query, reporter, search, serial, staging, stats, vandelay
  Appendix A. About this Documentation: About the Documentation Interest Group (DIG); How to Participate
  Appendix B. Getting More Information
Glossary
Index

List of Tables
5.1. Evergreen Software Dependencies
5.2. Sample XPath syntax for editing "opensrf_core.xml"
5.3. Sample XPath syntax for editing "opensrf_core.xml"
8.1. Suggested configuration values
12.1. Action Trigger Event Definitions
12.2. Hooks
12.3. Action Trigger Reactors
12.4. Action Trigger Validators
14.1. Evergreen Directory Structure
14.2. Key Evergreen Configuration Files
14.3. Useful Evergreen Scripts
19.1. Examples: database object names
19.2. Evergreen schema names
19.3. PostgreSQL data types used by Evergreen
19.4. Example: Some potential natural primary keys for a table of people
19.5. Example: Evergreen's copy / call number / bibliographic record relationships
A.1. Evergreen DIG Participants
A.2. Past DIG Participants

Part I. Introduction

The book you're holding in your hands or viewing on a screen is The Book of Evergreen, the official guide to the 2.x version of the Evergreen open source library automation software. This guide was produced by the Evergreen Documentation Interest Group (DIG), consisting of numerous volunteers from many different organizations. The DIG has drawn together, edited, and supplemented pre-existing documentation contributed by libraries and consortia running Evergreen that were kind enough to release their documentation into the creative commons. For a full list of authors and contributing organizations, see Appendix A, About this Documentation. Just like the software it describes, this guide is a work in progress, continually revised to meet the needs of its users, so if you find errors or omissions, please let us know by contacting the DIG facilitators at docs@evergreen-ils.org.

This guide to Evergreen is intended to meet the needs of front-line library staff, catalogers, library administrators, system administrators, and software developers.
It is organized into Parts, Chapters, and Sections addressing key aspects of the software, beginning with the topics of broadest interest to the largest groups of users and progressing to some of the more specialized and technical topics of interest to smaller numbers of users. Copies of this guide can be accessed in PDF and HTML formats from the Documentation section of http://evergreen-ils.org/ and are included in DocBook XML format along with the Evergreen source code, available for download from the same Web site.

Chapter 1. About Evergreen

Evergreen is open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials.

The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services.

The community's development requirements state that Evergreen must be:
•Stable, even under extreme load.
•Robust, and capable of handling a high volume of transactions and simultaneous users.
•Flexible, to accommodate the varied needs of libraries.
•Secure, to protect our patrons' privacy and data.
•User-friendly, to facilitate patron and staff use of the system.

Evergreen, which first launched in 2006, now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide.

Chapter 2. 2.0 Feature List

Circulation

Patron Registration Enhancements

•Zip code information can be added to a local table which will pre-populate the City/State fields during patron registration.
•Added the ability to delete patrons by anonymizing the patron's personally identifiable data and purging the related data from other tables, without destroying information important to the integrity of the database as a whole (does not delete the actor.usr row).
•Supports the ability to merge patrons when it is determined that more than one account exists for a single patron. There is an interface for side-by-side comparison of the records, and the ability to delete addresses on merged accounts and to delete and deactivate cards. Patrons with a status of in collections are not eligible for merging.
•Added quick links for staff to copy and paste patron address information. Information will paste in a standard mailing format.
•Patrons with an address alert (invalid/bad address) will be displayed at the top of a duplicates list.
•Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. The backend support for this is done.
•The system recognizes certain categories of patrons, like Card Canceled or Deceased, and will not place holds for these categories.
•The patron record screen obscures certain information which can be considered sensitive.
•Evergreen has the ability to automatically enter date, user, and location in messages and notes.
Item Checkout Enhancements

•During check-out, the patron's fines list appears first if there is a balance. If there is an alert, the alert page will show first, then the fines screen.
•Evergreen has the ability to track hourly checkout stats. Self-check now operates by workstation, and it's possible to gather statistics for checkouts between staff workstations and self-check workstations. (There is a workstation registration wizard built into the self-check UI.)
•Audible cue support at self check-out stations, for successful and unsuccessful check-out, has been added. This is customizable at the database level.
•Evergreen has fast-add capability. During check-out, if an item is found not to be cataloged, you can pre-cat the item quickly; other fields such as library, ISBN and circ modifier have been added to this pre-cat.
•The system supports sets or kits of items and has the ability to display the number of items and a list of descriptions.
•Evergreen allows patrons to renew a title as long as they have not exceeded the allowed number of renewals and there are more available items than there are unfrozen holds. This is an administration setting.

Self Check Module Enhancements

•In self check and the staff client, if a staff member checks out an item to a patron that is already checked out to that patron, the item will simply renew. This does have configurable age-based parameters to prevent a double scan at checkout resulting in a renewal.
•Self check receipts include the same information for renewals as for checkouts, and include notes on items that failed to renew.
•In the self-check UI, patrons can view holds and patron position vs. the number of circulating copies.
•The self check-out station displays holds ready for pickup, then removes each hold as the item is checked out.
•Evergreen supports the ability to pay fines with a credit card at self check-out stations. This requires the library to have a merchant account with a credit card processor like PayPal. The currently supported processors include Authorize.net and PayPal.

Item Check-in Enhancements

•Evergreen supports a set number of claims returned allowed; beyond that, additional claims returned require supervisor authorization. This is based off the claims returned counter. This only blocks another claim returned, and circulation can still occur. Also, there is a new permission to allow changing the claims returned count for a patron; in order to use this feature, staff needs to have the appropriate permission.
•There's a new calendar widget in the backdating function in the item check-in module. The system has the ability to select items already checked in and retroactively backdate those items, using a button with a calendar selector. Any fines resulting from the original check-in are removed. When a check-in is backdated, the item record retains both the actual date of check-in and the backdate used. This information will display in the copy details interface.
•When marking an item damaged, several library settings are checked to determine whether the patron should be charged the copy price and/or a processing fee. Staff is prompted with this amount, which can be applied, modified or canceled.

Holds Enhancements

•Evergreen allows hold slips to be customized to include any field from the patron record and/or item record, in any position and orientation on the slip. Font, font size, and font weight are customizable. In addition, the hold slip may include a branch symbol (GIF or JPG format).
•Evergreen supports behind-the-desk indicator printing on the hold slip for patrons who have this flag in their patron record. (This would be for libraries with public hold shelves.)
• In Evergreen, between the time a hold is checked in and the time it is placed on the hold shelf, there is a configurable delay before the status changes to On Hold Shelf.
• Evergreen ensures that manually edited copies (either deleted or changed to a non-holdable status) have their holds retargeted.
• The system supports a Clear Hold Shelf process. First, it removes holds from items that have expired on the hold shelf and generates a report (the clear hold shelf report) listing items to be cleared from the hold shelf. Staff can then print the list and physically pull the items off the hold shelf. Next, staff scan the items in Evergreen to either reset the items to the correct shelving location, capture the next hold, or put the items in transit to the correct owning location.
• Staff can extend pickup deadlines for holds.
• In the patron view in the staff client, you can select multiple holds and, under Actions for Selected Holds, change the pickup location. Evergreen can change the pickup location for all of a patron's holds in a single process. Additionally, Evergreen can modify all holds attached to a bibliographic record, including changing the hold expiration date. This functionality is covered by the current bib holds list interface.
• Evergreen allows patrons with a specific permission to place holds on items they already have checked out; all other patrons cannot. The user is warned that the item is already checked out to them and, if they have the permission, the system lets them override.
• The system supports placing holds on titles with on-order status. For additional information, see the Acquisitions notes later in this document.
• Evergreen can designate specific org units that will not trigger a hold upon check-in.
• Evergreen adds logic to hold targeting to skip branches that are closed at the time of hold placement plus x time (x being a set interval). This prevents holds from being targeted at branches that will be closed Saturday and Sunday (for example), making it impossible for patrons to receive their holds. This presumes another copy is available at another branch.
• There are now more options for hold settings, including library weighting and looping. If looping is set, the holds targeter skips any libraries it targeted in a previous loop until it has tried all libraries, at which point the process starts over. If max loops are used in hold management, then at the end of the last loop, if there are no copies that could potentially fill a hold, the hold may be canceled. If there are checked-out copies, the hold stays in the queue; otherwise, the hold is canceled and a cancellation notice is sent to the patron.
• The Holds Pull List can be secondarily sorted by physical shelving location within the library.
• Staff-placed and patron-placed holds can be distinguished through a column in the holds interface.
• Hold cancellations can be displayed, along with information about the cancellation (e.g., cause, cancellation type, date, item, patron).
• The system can be configured to disallow holds on items that are on the shelf at the location from which the patron is searching.
• The system supports patron-specific hold notes that can display in the OPAC and print in the hold notice, but do not necessarily print on hold slips.
• Staff can move someone to the top of the holds queue.
This was developed for cases where a patron picked up a hold but the item was damaged; since the patron had picked up the hold, it was considered filled.
• Patrons can change the pickup location before the hold is ready for pickup. The item is then put in transit, and a new hold slip is printed with a special symbol indicating that the pickup location has changed. If the location is changed while the item is in transit, then at the next check-in the item is put in transit to the new location and a new hold slip is printed.
• The system supports a separate hold note field for staff use that can print on the hold slip.
• Patrons can view recently canceled holds and easily re-place them.

Staff Client Interface Enhancements

• Evergreen color-codes the staff view of a patron with a bad or invalid address, and alerts the patron to the bad address in the My Account view of the OPAC. The system automatically blocks/unblocks a patron when an address is marked invalid/valid.
• The staff client can automatically minimize after a settable period of inactivity to protect patron privacy. This is controlled through an org unit setting.
• Summaries of bills, checkouts, and holds are visible from all of the patron screens.
• The historical summary of paid fines is sortable by column and displays sub-totals for each column; it can also be limited to voided or non-voided payments. Fines history detail includes more information, such as the location and time/date where an item was returned.
• The display of copy information, including number of copies, copy status, and number of holds, is more streamlined in both the staff client and the patron OPAC.
• Item records can be edited from any item record access point.
• From holdings maintenance or Item Status by barcode, you can retrieve more item details.
For example: total circulations for the current and previous year, last status change, last checkout date and workstation, last check-in time and workstation, and more.
• The item record includes a separate date field for the last change to the item.
• The item record displays total check-outs and renewals for year-to-date, the previous year, and lifetime.
• Better audio signal handling.
• An org unit setting can disable all staff client circulation popups unless an unhandled exception occurs. Exception handling has been automated as much as possible, based on settings, to reduce the number of popups requiring staff attention at the circulation desk. Alerts are communicated visually (e.g., a screen color change) or audibly.
• The system supports two views of patron information: horizontal and vertical.
• From the patron screen, under Holds, clicking Place Hold brings up an embedded catalog. Placing a hold from the embedded catalog automatically generates a hold for the account of the patron you are viewing.
• A new messages (notes) UI appears in the Info tab of the patron screen.
• A new interface shows the most recent activity on the workstation (checkout, check-in, renewal, and patron registration activity, with links to the relevant patron from each item). This is helpful to a supervisor backtracking an issue to assist a staff member.
• The system now captures and displays check-in and workstation history.
• Pre-defined messages, populated in a drop-down menu, can be applied to patron accounts. This includes the ability to configure a message to act as a penalty (if desired), recording of the date and staff member who applied the message, and a flag to mark the item as resolved; items marked as resolved do not display as alerts.
• Under grocery billings, the billing type can be pre-populated with a list of common fine events (such as types and costs).
• Evergreen can retrieve users by numeric ID (separate from the barcode) in the staff client. This functionality is optional and set to false by default.
• Backend support for other types of receipts (such as holds and fines).

OPAC and My Account Enhancements

• There is backend support for a method allowing patrons to link their records in a way that grants privileges. This could be used in future implementations of social networking features.
• Patron passwords are now more flexible in length and content (shorter and numeric-only passwords are allowed). Libraries can set minimum and maximum password lengths under Password Format in the Library Settings Editor.
• Patrons can select a username, which can then be used to access the OPAC and self check-out stations.
• My Account can allow patrons to update some information, including street address, e-mail address, and preferred pick-up library for holds. Address changes are marked as pending in the patron's file until a staff member verifies the new address and completes the change.
• From the My Account interface, patrons can see the estimated wait time for a hold. Evergreen calculates the estimate from the circulation modifiers on the set of potential copies available to fill the holds on that title. The hold wait estimate is configurable at the consortial level; each Evergreen implementation should take into account its average circulation time, hold wait time, and other factors, such as transit time, that might influence the estimate.
• Patrons can title their bookbags (reading lists) and place holds from them.
• Backend support has been developed to allow patrons to waive certain kinds of notices.
• Multiple notices of the same type to the same patron can be combined into one event, as long as the system is configured to batch notices on an approximately daily basis.
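The notice-batching idea above can be sketched as a simple grouping step. This is a minimal illustration only, not Evergreen's actual Action/Trigger data model; the tuple shape and names are assumptions made for the example.

```python
from collections import defaultdict

def batch_notices(pending):
    """Combine pending notices of the same type for the same patron
    into a single event, as a daily batch run might do.

    `pending` is a list of (patron_id, notice_type, detail) tuples;
    these names are illustrative, not Evergreen's real schema.
    """
    events = defaultdict(list)
    for patron_id, notice_type, detail in pending:
        # Key on the (patron, notice type) pair so each patron gets
        # one combined event per notice type.
        events[(patron_id, notice_type)].append(detail)
    return dict(events)
```

Run daily, a step like this yields one event (and thus one message) per patron and notice type, rather than one message per item.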
Billing, Collections and Fine/Fee Enhancements

• Fines now consistently link to item record details.
• The fine record includes a comments field, editable by staff. Staff can annotate payments, add notes to a specific billing, and sort on payment type. When adding a note, the current text shows as the default in a pop-up window, so it can be appended to or overwritten.
• Staff and users can now only pay using the latest user data, which prevents accidental or duplicate payments against the same transaction or against stale data.
• The maximum fine can be based on item type (e.g., generic = 0.50) and capped so as not to exceed the cost of the item. This works as an inheritable org unit setting, circ.max_fine.cap_at_price, which changes the max fine amount to the item price if the price is not null and is less than the rule-based max fine amount.
• A report can be run of accounts with overall negative balances, including the balance owed and last billing activity time, optionally filtered by home org unit. There is an option for issuing refunds for selected accounts on the resulting list. The report also captures patrons with any refundable transaction.
• Evergreen provides three distinct and independent types of blocks: system, manual and collections. Manual and collections blocks are set manually by staff.
• A new penalty type, PATRON_IN_COLLECTIONS, has been added. It is set when the collections agency puts the patron into collections; staff can define the blocks and clear threshold for each group. The collections block is removed immediately once charges are paid down to zero (this applies both to e-commerce and at the circulation desk).
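The cap-at-price rule described above is easy to express in pseudocode terms. The sketch below is an illustration of the stated behavior of circ.max_fine.cap_at_price, not Evergreen's actual implementation; the function name is invented for the example.

```python
def effective_max_fine(rule_max_fine, price):
    """Cap the rule-based maximum fine at the item's price.

    Per the setting's description: use the price only when it is
    not null and is less than the rule-based max fine amount.
    """
    if price is not None and price < rule_max_fine:
        return price
    return rule_max_fine
```

So a $4.99 paperback with a $10.00 rule-based max fine would cap at $4.99, while an item with no recorded price keeps the rule-based maximum.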
Action/Triggered Event and Notice Enhancements

• Action Triggers (AT) support many new notices, such as: items about to expire off the hold shelf; items on hold that are about to reach the maximum hold time (if one is set); and courtesy notices prior to the due date. AT also logs all notices sent to patrons, and staff can view all notices, or cancel all pending notices, in the staff client.
• Unsent notices can be canceled before they are sent, and pending notices can be searched by item barcode.
• Administrators can choose to implement a collections warning prior to sending patrons to collections. When a patron's account balance meets a certain threshold, they are sent a bill notice. This is driven by the total amount owed, not by individual bills. The patron is sent to collections after a configurable number of days since the bill notice was sent. The billing notice is handled with a new PATRON_EXCEEDS_COLLECTIONS_WARNING penalty. Files can be sent via SCP and FTP.

Acquisitions

• From the general acquisitions search page, users can search on many fields in the acquisitions/serials workflow, for example attributes of invoices, purchase orders, selection lists, and bib records.
• General catalog searching now supports explicit truncation/wildcard searches.
• Acquisitions line item searches support NOT searches.
• Money can be transferred from one fund to another (or to none).
• All transactions (except batch EDI delivery to vendors) post in real time, including purchase orders, invoices, fund balances, vendor balances, and vendor statistics and history.
The EDI delivery delay is configurable in the system-level admin interface.
• Users now have access to all active funds, spanning multiple years, in the various ordering and invoicing interfaces.
• There is support for a year-end fiscal turnover process that closes out funds and transfers encumbered amounts into a new fiscal year, including the ability to selectively roll certain funds over while not rolling over others.
• Evergreen handles validation of ordering, receiving, and invoicing processes, using validated data, to satisfy auditor requirements. In the staff client, a menu option allows staff to locate the purchase order that resulted in the purchase of a given copy.
• Selection lists are collections of bibliographic records (short or full) that temporarily store titles being considered for purchase. Selection lists can be shared for collaborative input.
• Library staff can create distribution formulas for ease of receiving, processing and distributing materials. Branch, shelving location, and fund are kept separate from the distribution formula, so staff can enter the distribution sets and use a formula for any shelving location they choose. Staff can also add multiple distribution formulas together and override distribution formulas. Applying a distribution formula is an all-or-none redistribution of copies from one branch to another. Staff can add or delete individual copies, because the distribution pattern may not account for the exact total of copies; if the total number of copies has not been allocated, the user receives a warning. The use count for each distribution formula appears in the distribution formula drop-down.
• The system supports batch ISBN/UPC search, located on the general acquisitions search page, where you can search by a single ISBN or UPC or upload a batch of ISBNs.
The search method looks at MARC tag 024, where UPC codes are stored. For line item searching, the system uses open-ils.acq.lineitem.search.ident. Catalog records are included in the batch ISBN/UPC search, and staff can now search catalog records in the acquisitions search.
• Backend support has been integrated to give patrons the ability to submit purchase requests through the OPAC. The UI for this has not yet been integrated into the OPAC.
• The system supports claiming. Specifically, there is:
  • a place to store the default claim interval for each vendor
  • a way to show the selected claim date during the order
  • a way to override the claim date during the order
  • a way to list items/orders that have reached the claim date
A list of items that meet claims requirements can be generated, but claims must be initiated by librarians.
• From the UI, staff can access line item and purchase order history. Entries in the history table are ordered from most recent to oldest.
• The purchase order printout is customizable, including the ability to break a single order into separate purchase orders. Staff can also print groups of POs from a search as a single printout, which can be used to generate physical POs for vendors who do not support EDI. Staff can add notes, and an indicator in the PO interface shows the existence and number of attached notes.
• Staff can see all of the line items (with prices, copy counts, etc.) for a set of POs, with summary information listed along the top of the page: total price, total number of line items, and total number of copies.
Additionally, staff can search by vendor for all activated-but-not-yet-sent POs (i.e., "show me what we are about to order") and view the results.
• Prepaid orders can be flagged so that invoicing is handled correctly.
• Orders can be built from templates (distribution formulas), by shelving location or owning library.
• Orders can be gathered together and sent all at once, instead of manually and individually, via a rolling FTP function that runs every 15 minutes (or another set interval), with detailed log information and control of frequency and action. Additionally, status report records are automatically retrieved from the vendor and inserted into the order records.
• Staff can apply and view notes and cancel causes on purchase orders, as well as cancel causes on line items. In the UI, there is a staff client menu entry for cancel cause.
• There is an interface in the acquisitions system for viewing what was sent to vendors via EDI, accessible via the PO search interface (for the general case), which gives finer detail on EDI orders and the ability to reset failed outbound EDI deliveries.
• Pending final UI work in the OPAC, the system can allow patrons to place volume-level and issue-level holds.
• Routing worksheets can be created and printed for manual receiving processes.
• Nothing on a selection list is holdable (either by patrons or by most staff, apart from acquisitions staff). When an on-order title is canceled and the line item is canceled, the corresponding bib record and on-order copies are deleted, so the copies are no longer holdable. The line item carries a cancel cause showing why the order was canceled. Selection list records are never visible in the OPAC, and catalog records with no visible copies (within the search scope) do not show up in the public OPAC.
This also applies to on-order records.
• Deleted bibs, call numbers, copies, and patrons are retained for reporting purposes; "deleted" items are more accurately described as "inactive." Only patrons can be purged (by staff), and while patrons can now be completely purged, this is not recommended, as you lose historical data.
• The system supports shared and floating items by collection. Item records can be added to or removed from the collection group and can be updated in batch via buckets in the copy edit interface.
• Acquisitions permissions control which workgroups have view/edit access to line item and catalog records, while PO/PL and copy-level ownership and permission depths affect viewing in other, more location-specific interfaces.
• A patron holds queue can be transferred from one bibliographic record to another, singly or in batch, while preserving the original hold order.
• A reporting view allows staff to identify bibs (showing ISBNs) for which the last item was removed, based on the date of removal. Report templates can be built from this view for external processes.
• The system supports line item alerts, line item receive alerts, and line item detail alerts for EDI messaging.
• Some types of items can be excluded from patron hold limits.
• There is support for new, locally defined cancel reasons for EDI, as well as for Evergreen interpretation of EDI-defined cancellation standards.
• Batches of orders can be sent to vendors, including orders for multiple accounts. The process of breaking outbound EDI messages into controlled and timed batch sizes is automated but settable to a specific, preferred time interval.
• Orders can be sent via FTP directly to vendors, with acknowledgements and status reports received in return. More specifically, the system supports push and pull of files via FTP, SFTP and SSH.
• The system supports MARC file import with purchase order data.
• The OPAC accepts enhanced content from ChiliFresh, Content Café and NoveList (note that these are subscription services).
• You can set up vendor profiles and flag those that are active; inactive profiles can be kept for historical purposes.
• Vendor records can be flagged for vendors who require pre-payment of purchase orders, with a number of visual cues in the UI. During PO creation, the pre-payment flag in the form is shown and pre-populates its value from the chosen provider. During PO activation, if prepayment is required, a confirmation dialog appears before the activate request is sent, and the PO summary indicates when a PO requires pre-payment.
• The system supports sequential barcode generation for ease of receiving and processing of new items and for easily changing large groups of barcodes. There is a choice to use auto-generated barcodes in interfaces where they would normally be used (such as receiving). Some parameters of the barcode symbology may need to be entered in the admin interface to calculate the barcodes correctly.
• Libraries can be manually selected to receive items when partial orders are received or when items come in multiple deliveries. Orders with multiple copies have an owning library per copy, so staff can pick which copies to mark as received.
• The system is compatible with Zebra Z4M thermal transfer printers.
• Spine labels can be created, formatted and printed.
• In the acquisitions UI, there is a batch fund updater: given a set of line items, it updates the fund for all attached copies in batch.
• The system has a configurable drop-down of alerts for line items that staff can control.
• Order records can be updated at the receiving stage; partial orders can be received and orders unreceived; and the order record is updated automatically when the balance of a partial order is received.
• Item records can be transferred from one bibliographic record to another.
• The system supports a worksheet for each title received, including title, call number, number of copies received, distribution, and processing notes.
• A "dummy" or placeholder barcode in a temporary, brief or on-order record can be replaced by simply scanning the "real" barcode.
• MARC bibliographic and authority records can be imported and exported via Vandelay. An option has been added to use the internal bib ID as the TCN for all records while retaining the OCLC number in the record. Authority import now matches bib import in overlay/merge functionality.
• The system is fully compatible with OCLC Connexion for editing and transferring bibliographic and authority records (Z39.50).
• A "short bib" record can be created pending creation of the full MARC record. Short bibs can be created from a line item search.
• A utility facilitates searching for full bibliographic records and creating temporary "short" bibliographic records if no full records are found.
• Electronic receiving and invoicing are supported as follows: receiving electronic packing slips and invoices by purchase order or invoice number; editing the number of copies, amount due, freight and service charges, and tax; deleting line items; recalculating total amounts; and authorizing payment within the ILS.
• Both regular and generic (blanket) invoicing are supported; the latter refers to invoices without a purchase order number, e.g., direct charges to a fund.
• The invoice interface supports simultaneous access.
• Invoices support a number of fields, including: date, invoice number, invoice type, shipping vendor name, billing vendor, purchase order number, title, author, number of copies ordered, number of copies paid or received, number of copies available for payment, number of copies being paid for, amount, notes, invoice subtotal, freight charge, service charge, tax, invoice total, and the vendor the order was placed with.
• The system prevents overpayment in the invoice view page by linking invoices to POs and line items.
• Staff can print a list of invoices paid before or after a specified date. When searching for invoices in the unified search interface, a button now prints a voucher for whichever invoices are checked.
• Invoices can be searched by number or vendor name, with links to vendors; vendor records include links to invoice history.
• Staff can retrieve a PO or line item and access all the related invoicing data.
• A closed invoice can be reopened (for example, when an invoice was paid from the wrong fund and staff want to go back and change the fund) via a Reopen button, which requires permissions.
• The system can pay a partial invoice for partial receipt of a shipment, and then generate claims for the items that were not received.
Also, extra copies can be invoiced when a vendor sends more copies than staff ordered and staff decide to keep them.
• Issues can be automatically moved to a configured shelving location upon receipt of the newer issue. This can be done on a per-item basis and is based on the owning library of the copies.
• When using full serials control, the default behavior for serials issue sorting and display in the holdings display is reverse chronological order.
• Staff can label serials issuances with easily identifiable text such as "YYYYMONTH" or "V.12 NO.1".
• In serials receiving, staff can choose which issues to receive and which locations to distribute them to.
• Staff can add regular, supplemental, and index issues in the serials interface.
• The system supports a purchase alert query (also called a holds ratio report or holds alert report) that compares holds to items and flags titles needing more copies. An option exists to include in-print/out-of-print status from the bibliographic record. Query results can be added directly to selection lists, singly or in batch, and order records can be created directly from query results. This is handled by an interface for uploading a CSV file to generate a page of bib records from which line items can be created to go into selection lists and/or POs.

Report errors in this documentation using Launchpad.

Part II. Public Access Catalog

This part of the documentation explains how to use the Evergreen public OPAC. It covers the basic catalog and more advanced search topics, and describes the “My Account” tools users have to find information and manage their personal library accounts through the OPAC. This section can be used by staff and patrons, but is most useful to staff as a generic reference when developing custom guides and tutorials for their users.
Report errors in this documentation using Launchpad.

Part III. Core Staff Tasks

This part of the documentation covers a broad range of the common tasks carried out by your library, including tasks performed by circulation staff and catalogers, among others. Some of these procedures should only be performed by Local System Administrators, but most of these sections will give all staff a better understanding of the Evergreen system and its features.

Chapter 3. Alternate Serial Control

Report errors in this documentation using Launchpad.

Abstract

This tutorial describes a basic workflow in which the user will register a subscription to a serial publication, express the distribution of copies of that publication to branches, define the format of captions to be affixed to each issue, specify details of the publication pattern, instruct the system to predict future issues, and finally receive copies of an issue. This tutorial is not intended to be exhaustive documentation of Evergreen's serials features, as those features are continually evolving as of this writing, but it should provide a basis on which user exploration of serials features can take place. Hopefully, that exploration will generate feedback that leads to the continuing improvement of serials in Evergreen.

Creating a Copy Template

To create a serial subscription in the Alternate Serial Control interfaces, you will first need a copy template. For many use cases, you can create one copy template and re-use it every time, but if you don't yet have one ready, follow these steps.
Find the copy template editor under the Admin menu of the staff client.

Once that page has loaded, click New Template.

You don't actually need to fill out all of these fields. If you don't want serial copies to get barcodes and show up individually in your catalog, you only need to set the first two fields: "owning library" and "name." Note that "owning library" in this case refers to the library that owns the copy template itself. It has nothing to do with which libraries receive copies or which library manages the subscription. We'll get to that later.

If you do want your copies to have barcodes (and perhaps to circulate) and to appear individually in your catalog, you will need to fill in at least the fields shown in the above image.

To the Catalog

Initiate a catalog search in the staff client to find the bibliographic record to which you'd like to attach a subscription. If you don't already have the record in your system, you can import it via any of your preferred methods (MARC import, Z39.50 search, etc.) and then look it up in the catalog.

From the record detail page, click "Actions for this Record," and then click "Alternate Serial Control." You'll note that we've used a magazine called Flying for our example.

The Subscription

Here you'll be presented with an interface that would show any existing subscriptions against the bibliographic record you've chosen, if there were any. More importantly for our purposes, it provides a "New Subscription" button. Click that.

The only required fields here are owning library and start date. You can also specify an end date if you have one. Expected date offset is the difference between the nominal publishing date of any given issue and the date that you generally expect to receive your copy.
If a publication is dated with the first of each month, but you generally receive it five days before that, you might enter “-5 days” in that field.

Once you have created this basic subscription, you'll see that it has an ID number, displayed as a link. Click that link to manage the subscription in greater detail.

Now you're looking at the Subscription Details page, which has four tabs. The first tab, labeled Summary, shows information you've already seen. Proceed to the next tab, Distributions, to start telling Evergreen where you want copies of your serial to go.

Distributions

Using the New Distribution button, create a distribution for each branch for which you expect to receive copies. Each distribution needs at least a label, a “holding library”, and a “receive unit template.” “Receive unit template” is where you select the copy template created at the beginning of this tutorial. The label can be anything; it only appears at receive time and is not publicly visible. "Holding library" refers to the library that will get copies.

The last two fields relate to binding multiple copies into larger shelving units, but they are currently ignored by the Alternate Serial Control, which does not support such binding, so you should leave these fields blank.

After saving your distribution (and creating any others for other libraries that will receive items), click each link in the Label column to set up the streams for each distribution.

Streams

“Streams” are perhaps the most confusing concept in the Alternate Serial Control interfaces, but to explain them concisely: each stream represents one *recurring* copy of a serial.
So if you have a library called Example Branch 1 (BR1 for short), and you want BR1 to get four copies of every issue, then you should create one distribution for BR1 and four streams for that distribution.

You can create streams one at a time by clicking New Stream. In this case you have the opportunity to give each stream a routing label. The routing label only shows up at receive time and on routing lists, and is not visible in the catalog or anywhere publicly viewable. The routing label is entirely optional.

If you don't care about routing labels, or need to create more than just a couple of streams, use the Create Many Streams button to create several at once.

If you wish to set up routing lists, use the checkboxes on the left side of the grid interface to select one stream at a time, and click Routing List for Selected Stream. If you don't care about routing lists, you may skip to the Captions and Patterns heading of this document.

Setting Up Routing Lists

As of this writing, routing list features have been written, but have not yet been slated for inclusion in a 2.0 series Evergreen release.

A routing list is made up of users, who will presumably get their hands on a copy of a serial before that copy hits its regular shelving location. Those users can be either readers, meaning actual Evergreen users with a barcode, or departments, which can really be anything and are represented by a free-form text field. There is also a note field available in either case.

Enter any combination of readers and departments you need using the supplied fields and the "Add" button. Readers are specified by their barcodes in the appropriately labeled field.

You can re-arrange users in the list by dragging and dropping each numbered row. When you've got the list you want, click Save Changes.
You can remove unwanted users by clicking the [X] link by that user's list order number.

Captions and Patterns

After you've set up all the streams you need on all of your distributions, it's time to move on to the next tab in the Subscription Details interface: the Captions and Patterns tab.

Caption and pattern objects define the same material that would be described in an 853, 854, or 855 MARC tag. Here you define how your issues will be captioned and how often you get them.

Click the "Add Caption and Pattern" button to get a blank row to work with, and set the leftmost dropdown to Basic, Supplement, or Index, depending on what you want to define the pattern for. For common periodicals, Basic is often all that's needed.

Next, unless you know how to type a JSON representation of your 85X tags by hand, click the Wizard button.

The Caption and Pattern Wizard is where you'll enter information, according to standards specified by the Library of Congress, about how this serial works. The first page of the wizard is for specifying enumeration captions (commonly involving particles labeled v. and no.).

You can have up to six levels of enumeration captions and two alternate levels. Each level except the first and first alternate comes with attendant questions about how many units of this level belong to the higher level. This is all directly based on subfields $a through $h and $u and $v of the MFHD standard.

The wizard has several pages, and after you fill out each page the way you want, click Next in the upper right corner. You can go back if you've forgotten something by using the Prev button.

The wizard's second page is concerned with calendar changes (i.e., at what point on the calendar the highest level enumeration unit changes).
You can add as many of these as you like, and each one can be one of a) the start of a given month, b) the start of a given season, or c) a specific date. This all corresponds to subfield $x of the MFHD standard.

The wizard's third page is for defining chronology captions. Make sure that each chronology caption you specify is smaller than the last. Only mark the Display in Holding Field checkbox if you want the literal words “year” and “month” and so on to appear next to values like “2010” and “Nov.”

The fourth page of the wizard deals with indicator codes and subfield $w from the MFHD standard. I recommend setting the first two dropdowns as shown in the above image, unless you are a serials librarian who knows your stuff and you have a good reason to do otherwise. Set your frequency ($w) to the appropriate value for your publication. For truly irregular frequencies, you may wish to select "use number of issues per year," in which case you enter a raw number of issues per year.

After you have finished the wizard and clicked "Compile Pattern Code," make sure the Active checkbox is marked for the caption and pattern object you have just created, and click Save Changes.

On to Issuances

We're finally close to the point where we define an initial issuance and let Evergreen predict a run of issuances, and attendant items, from there.

Proceed to the Issuances tab of the Subscription Detail interface, and click New Issuance.

What we're doing here is hand-entering one example issuance, and you should use the *last issuance you have before you want prediction to take over*. So if you want to predict all the issues of a monthly periodical beginning with November 2010, enter the information for your October 2010 issue.
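For the curious: the pattern code that the wizard compiles (and the holding codes discussed next) are stored as JSON mirroring MFHD 85X subfield data. Purely as an illustrative sketch — not output copied from Evergreen, and your own compiled code will reflect your wizard answers — a basic monthly pattern might look roughly like a flat array of subfield codes and values:

```json
["8", "1", "a", "v.", "b", "no.", "u", "12", "v", "r",
 "i", "(year)", "j", "(month)", "w", "m"]
```

Here "a" and "b" carry enumeration captions, "u" and "v" describe how units nest and restart, "i" and "j" carry chronology captions, and "w" is the frequency ("m" for monthly), just as in the subfields of an 853 tag. Unless you are comfortable writing this by hand, stick with the wizard.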
In the holding code section of the New Issuance dialog, click the Wizard button to get fields tailor-made for the caption and pattern you're using, and fill in the information that's appropriate for the example issuance you're using. Click Compile when you have all those fields filled in.

Once everything is filled in on your example issue, click Save. You have now given the system everything it needs to predict a run of issues (and the attendant copies that will go to your branches).

Click the Generate Predictions button. You'll get a mini-dialog asking you how many issues to predict. If your subscription has an end date, you can choose to predict until the subscription's end date. If your subscription doesn't have an end date, you can indicate how many issues you want the system to predict (so enter 12 if you want a year's worth of issues of a monthly publication).

After you click Generate, the system should take a moment to predict your run of issuances and copies, and then you should see the grid of issuances populated below.

You can now delete the example issuance that we created for the system to base its prediction on. Mark its checkbox on the left side of the grid and click Delete Selected.

Your subscription is now completely set up. Let's receive some copies.

Batch Receiving

The Subscription Details interface has a Batch Item Receive button that will take you to the Batch Receiving interface for this subscription.

Generally, you won't need to edit anything pertaining to the subscription itself when receiving items, so you can also get to Batch Receiving through the Actions for this Record menu when viewing a record in the catalog (right next to Alternate Serial Control from earlier in this tutorial).

The Batch Receiving interface will present you with a selection of as-yet unreceived issuances.
The earliest expected issuance with any as-yet unreceived copies will always be at the top of the list, so generally you will click Next here.

“Simple” mode for Batch Receiving gives you few options: this is how you receive items that won't have barcodes and won't appear individually in the catalog. Each item can have an optional note (stored internally; not displayed anywhere as of this writing, but not ultimately intended as a publicly viewable note), and you can unmark any rows in the table for items that you have not received.

More discussion of how to indicate that you haven't received all the items you were expecting follows a few paragraphs later in this tutorial.

If you do want to barcode your items, check the Create Units for Received Items checkbox in the extreme lower right of the interface. Units are copy-equivalent objects that will hold a barcode and can appear in the catalog (and even be targeted for holds). Marking this checkbox will give you many more fields on each row of the receiving table.

If you have a printed stack of barcodes available, you can scan each one into the barcode field of each row of the table. You can also let the system generate your barcodes automatically, if you so desire. To accomplish this, mark the auto-generate checkbox and enter your first barcode into the first row of the table. Then press the Tab key. The rest of the barcode fields will automatically populate with the next barcodes in sequence, including check digits.

As for the other fields in the table besides barcode, you can set them to whatever values you need. Note that anything with a barcode must also have a call number, so you'll have to put something there. Drop-downs for call numbers will be populated with any existing call numbers associated with the bibliographic record for your serial.
You can choose from these call numbers, or, if perhaps you're using a call-number-per-issue policy, you can create a new call number to apply to the table of items every time you receive a batch.

To spare you the pain of setting potentially dozens of fields in the receiving table individually, you can enter a value into the very top row and click Apply at the far right to apply that same value to its entire column throughout the table.

Now, as for the question of what to do when you didn't receive all the items you were supposed to get: you can choose which rows in the table you want to represent the unreceived items. So if you only received six out of the expected eight, and you're trying to distribute items evenly between two branches, you might unmark two checkboxes as shown in the image below.

Not only does unmarking a checkbox turn the row grey and prevent that item from being received when you later click Receive Selected Items, but the system also remembers which items you have not yet received, so that you can receive them later if they arrive separately. The system's tracking of unreceived items will also facilitate a claiming interface, when that is designed and implemented.

When you've filled in all the item rows, look in the lower left of the interface for the Receive Selected Items button and click it.

You'll see that the items that were marked for receipt are now cleared from this interface, as they have been received.

Since we left all Routing List checkboxes marked, if any of the items we just received actually have a routing list, we now have another tab open with a routing list ready to print.

If you set up a routing list as described earlier in this tutorial, yours will look like this. Multiple routing lists will automatically print on separate pages.
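As an aside on the auto-generated barcodes mentioned above: trailing check digits on library barcodes are commonly computed with a mod-10 (Luhn-style) scheme. The sketch below is purely conceptual — the barcode body is hypothetical, and the assumption that a Luhn scheme applies is illustrative, not a description of Evergreen's internal code:

```shell
#!/bin/bash
# Compute a Luhn (mod-10) check digit for a numeric barcode body.
luhn_digit() {
  local body=$1 sum=0 i d w=2
  for (( i=${#body}-1; i>=0; i-- )); do
    d=${body:i:1}
    d=$(( d * w ))
    (( d > 9 )) && d=$(( d - 9 ))   # digits over 9 collapse: 12 -> 3, etc.
    sum=$(( sum + d ))
    w=$(( 3 - w ))                  # alternate weights 2,1,2,1,... from the right
  done
  echo $(( (10 - sum % 10) % 10 ))
}

first=3000100001                    # hypothetical first barcode body (no check digit)
for n in 0 1 2; do
  body=$(( first + n ))
  echo "${body}$(luhn_digit "$body")"
done
```

Running this prints 30001000014, 30001000022 and 30001000030 — each sequential body with its computed check digit appended, which is conceptually what the auto-generate feature does for the rest of the rows in the receiving table.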
If you received some items with a barcode (and if the copy template and shelving location you used are OPAC visible), you can now see the items you received in the catalog.

Chapter 4. Using the Booking Module

Report any errors in this documentation using Launchpad.

Abstract: The following chapter will help staff create reservations for cataloged and non-bibliographic items; create pull lists for reserved items; capture resources; and pick up and return reservations.

Creating a Booking Reservation

Only staff members can create reservations. To initiate a reservation, staff can:
• search the catalog,
• enter a patron record,
• or use the booking module.

Search the catalog to create a reservation

1. In the staff client, select Search → Search the Catalog.
2. Search for the item to be booked.
3. Click Submit Search.
4. A list of results will appear. Select the title of the item to be reserved.
5. After clicking the title, the record summary appears. Beneath the record summary, the copy summary will appear. In the Actions column, select Copy Details.
6. The Copy Details will appear in a new row. In the barcode column, click the book now link.
7. A screen showing the title and barcodes of available copies will appear.
8. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the screen.
9. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved.
If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the item has already been reserved at the time for which you are trying to reserve it, then you will receive an error message.
10. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list.
An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.
11. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
12. The screen will refresh, and the reservation will appear below the user’s name.

Enter a patron’s record to create a reservation

1. Enter the barcode or patron information, and click Search to retrieve the patron’s record.
2. The match(es) should appear in the right pane. Click the desired patron’s name. In the left panel, a summary of the patron’s information will appear. Click the Retrieve Patron button in the right corner to access more options in the patron’s record.
3. Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.
4. The Copy Details will appear in a new row. In the barcode column, click the book now link.
5. A screen showing the title and barcodes of available copies will appear.
6.
Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the screen.
7. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the item has already been reserved at the time for which you are trying to reserve it, then you will receive an error message.
8. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list.
An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.
9. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
10. The screen will refresh, and the reservation will appear below the user’s name.

Use the booking module to create a reservation

1. Select Booking → Create or Edit Reservations.
2. Enter the barcode of the item and click Next.
3. A screen showing the name of the available resource will appear.
4. Enter the user’s barcode in the Reserve to patron barcode box.
If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear.
5. To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the resource has already been reserved at the time for which you want to reserve the item, then the item will disappear.
6. Finally, select the resource that you want to reserve. If multiple items or rooms exist, choose the resource that you want to reserve, and click Reserve Selected. If you do not select a resource and you click Reserve Selected, you will receive an error message. If you do not have a preference, you may click Reserve Any, and one of the resources will be pulled from the list.
7. After you have made the reservation, a message will confirm that the action succeeded. Click OK.
8. The screen will refresh, and the reservation will appear below the user’s name.

Cancelling a Reservation

Staff members can cancel a patron’s reservation through the Create or Cancel Reservations tab available in a patron’s record. Staff members can also cancel a reservation immediately after it has been made.

Enter the patron’s record to cancel a reservation

1. Search for and retrieve a patron’s record.
2. Select Other → Booking → Create or Cancel Reservations.
3. The existing reservations will appear at the bottom of the screen.
4. To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.
5.
A pop-up window will confirm that you cancelled the reservation. Click OK.
6. The screen will refresh, and the cancelled reservation will disappear.

Cancel a reservation immediately after it has been made

1. Create the reservation.
2. Follow steps four through six in the section Enter the patron’s record to cancel a reservation to cancel the reservation.
3. The existing reservations will appear at the bottom of the screen.

Creating a Pull List

Staff members can create a pull list to retrieve items from the stacks.
1. To create a pull list, select Booking → Pull List.
2. To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.
3. You can decide how many days in advance you would like to select reserved items. Enter the number of days in the box adjacent to Generate list for this many days hence. For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.
4. Click Fetch to retrieve the pull list.
5. The pull list will appear. Click Print to print the pull list.

Capturing Items for Reservations

Staff members can capture items for reservations.
1. In the staff client, select Booking → Capture Resources.
2.
Enter the barcode of the items to be captured. Click Capture.
3. A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this information as a receipt and add it to the item if desired.

Picking Up Reservations

Staff members can help users pick up their reservations.
1. In the staff client, select Booking → Pick Up Reservations.
2. Enter the user’s barcode. Click Go.
3. The titles available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.
4. The screen will refresh to show that the patron has picked up the reservation.

Returning Reservations

Staff members can help users return their reservations.
1. In the staff client, select Booking → Return Reservations.
2. You can return the item by patron or item barcode. Choose Resource or Patron, enter the barcode, and click Go.
3. A pop-up box will tell you that the item was returned. Click OK.
4. The screen will refresh to show the reservations that remain out and the resources that have been returned.

Report any errors in this documentation using Launchpad.

Part IV. Administration

This part of the documentation is intended for Evergreen administrators and requires root access to your Evergreen server(s) and administrator access to the Evergreen staff client. It deals with maintaining servers, installation, upgrading, and configuring both system-wide and local library settings.

Some sections require an understanding of Linux system administration, while others require an understanding of your system's hierarchy of locations and users.
Many procedures explained in the following chapters are accomplished with Linux commands run from the terminal, without a graphical user interface (GUI). In order to accomplish some of the tasks, prerequisite knowledge or experience will be required, and you may need to consult system administration documentation for your specific Linux distribution if you have limited Linux experience. A vast amount of free resources for various experience levels can be found on the web. You might also consider consulting PostgreSQL and Apache documentation for a greater understanding of the software stack on which Evergreen is built.

Chapter 5. Server-side Installation of Evergreen Software

Report any errors in this documentation using Launchpad.

Abstract: This section describes installation of the Evergreen server-side software and its associated components. Installation, configuration, testing and verification of the software is straightforward if you follow some simple directions.

Installing, configuring and testing the Evergreen server-side software is straightforward with the current stable software release.

The current version of the Evergreen server-side software runs as a native application on any of several well-known Linux distributions (e.g., Ubuntu and Debian). It does not currently run as a native application on the Microsoft Windows operating system (e.g., Windows XP, Windows XP Professional, Windows 7), but the software can still be installed and run on Windows via a so-called virtualized Linux-guest operating system (using, for example, "VirtualBox" or "VMware" to emulate a Linux environment).
It can also be installed to run on other Linux systems via virtualized environments (using, for example, "VirtualBox" or "VMware"). More information on virtualized environments can be found in ???.

Installation of the Evergreen Staff Client software is reviewed in ???.

The Evergreen server-side software has dependencies on particular versions of certain major software sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here:

Table 5.1. Evergreen Software Dependencies

  Evergreen | OpenSRF | PostgreSQL
  2.0       | 1.6.2   | 8.4

Installing Server-Side Software

This section describes the installation of the major components of Evergreen server-side software. As far as possible, you should perform the following steps in the exact order given, since the success of many steps relies on the successful completion of earlier steps. You should make backup copies of files and environments when you are instructed to do so. In the event of installation problems, those copies can allow you to back out of a step gracefully and resume the installation from a known state. See the section called “Backing Up” for further information.

Of course, after you successfully complete and test the entire Evergreen installation, you should take a final snapshot backup of your system(s). This can be the first in the series of regularly scheduled system backups that you should probably also begin.

Installing OpenSRF 1.6.2 On Ubuntu or Debian

This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian systems. Evergreen software is integrated with and depends on the OpenSRF software system.
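Before beginning, it can save time to confirm that the versions your distribution will give you satisfy the dependency table above. A minimal sketch of a version-comparison helper (it assumes GNU sort, which provides the -V natural-version sort; the version strings shown are hypothetical examples of what you might extract from `psql --version` and friends):

```shell
#!/bin/sh
# ver_ge VERSION MINIMUM -> succeeds when VERSION >= MINIMUM.
# Relies on GNU sort -V placing the smaller version string first.
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

ver_ge 8.4.22 8.4  && echo "PostgreSQL: OK"      # hypothetical installed version
ver_ge 1.6.2 1.6.2 && echo "OpenSRF: OK"
ver_ge 8.3.9 8.4   || echo "PostgreSQL: too old" # an 8.3 install would fail the check
```

A mismatch here (particularly a PostgreSQL older than 8.4) is worth resolving before you start, rather than after a failed install.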
Follow the steps outlined here and run the specified tests to ensure that OpenSRF is properly installed and configured. Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested.

The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.6.2 has been tested on Debian Lenny (5.0), Debian Squeeze (6.0), Ubuntu Lucid Lynx (10.04), CentOS 5, and Red Hat Enterprise Linux 5.

In the following instructions, you are asked to perform certain steps as either the root user, the opensrf user, or the postgres user.
• Debian -- To become the root user, issue the command su - and enter the password of the root user.
• Ubuntu -- To become the root user, issue the command sudo su - and enter the password of the root user.

To switch from the root user to a different user, issue the command su - USERNAME. For example, to switch from the root user to the opensrf user, issue the command su - opensrf. Once you have become a non-root user, to become the root user again, simply issue the command exit.

1. Add New opensrf User

As the root user, add the opensrf user to the system. In the following example, the default shell for the opensrf user is automatically set to /bin/bash to inherit a reasonable environment:

# as the root user:
useradd -m -s /bin/bash opensrf
passwd opensrf

2. Download and Unpack Latest OpenSRF Version

The latest version of OpenSRF can be found here: http://evergreen-ils.org/downloads/OpenSRF-1.6.2.tar.gz

As the opensrf user, change to the directory /home/opensrf, then download and extract OpenSRF. The new subdirectory /home/opensrf/OpenSRF-1.6.2 will be created:

# as the opensrf user:
cd /home/opensrf
wget http://evergreen-ils.org/downloads/OpenSRF-1.6.2.tar.gz
tar zxf OpenSRF-1.6.2.tar.gz

3.
Install Prerequisites to Build OpenSRF

In this section you will install and configure a set of prerequisites that will be used to build OpenSRF. In a following step you will actually build the OpenSRF software using the make utility.

As the root user, enter the commands shown below to build the prerequisites from the software distribution that you just downloaded and unpacked. Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to one of the listed Linux distributions. For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f src/extras/Makefile.install ubuntu-lucid

# as the root user:
cd /home/opensrf/OpenSRF-1.6.2
make -f src/extras/Makefile.install [DISTRIBUTION]

• debian-squeeze for Debian Squeeze (6.0)
• ubuntu-lucid for Ubuntu Lucid Lynx (10.04)
• centos for CentOS 5
• rhel for Red Hat Enterprise Linux 5

This will install a number of packages on the system that are required by OpenSRF, including some Perl modules from CPAN. You can say No to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - say Yes.

4. Build OpenSRF

In this section you will configure, build and install the OpenSRF components that support other Evergreen services.

a. Configure OpenSRF

As the opensrf user, return to the new OpenSRF build directory and use the configure utility to prepare for the next step of compiling and linking the software.
If you wish to include support for Python and Java, add the configuration options --enable-python and --enable-java, respectively:

# as the opensrf user:
cd /home/opensrf/OpenSRF-1.6.2
./configure --prefix=/openils --sysconfdir=/openils/conf
make

This step will take several minutes to complete.

b. Compile, Link and Install OpenSRF

As the root user, return to the new OpenSRF build directory and use the make utility to compile, link and install OpenSRF:

# as the root user:
cd /home/opensrf/OpenSRF-1.6.2
make install

This step will take several minutes to complete.

c. Update the System Dynamic Library Path

You must update the system dynamic library path to force your system to recognize the newly installed libraries. As the root user, do this by creating the new file /etc/ld.so.conf.d/osrf.conf containing the new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path:

# as the root user:
echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf
ldconfig

d. Define Public and Private OpenSRF Domains

For security purposes, OpenSRF uses Jabber domains to separate services into public and private realms. On a single-server system the easiest way to define public and private OpenSRF domains is to define separate host names by adding entries to the file /etc/hosts. In the following steps we will use the example domains public.localhost for the public domain and private.localhost for the private domain. In an upcoming step, you will configure two special ejabberd users to handle communications for these two domains.

As the root user, edit the file /etc/hosts and add the following example domains:

# as the root user:
127.0.1.2 public.localhost public
127.0.1.3 private.localhost private

e.
Change File Ownerships
Finally, as the root
user, change the ownership of all files installed in the
directory /openils to the
user opensrf:


# as the root user:
chown -R opensrf:opensrf /openils


5.

Stop the ejabberd Service

Before continuing with configuration of ejabberd
you must stop that service. As the root user,
execute the following command to stop the service:


# as the root user:
/etc/init.d/ejabberd stop

If ejabberd reports that it
is already stopped, there may have been a problem when it started back
in the installation step. If there are any remaining daemon processes such as
beam or
epmd
you may need to perform the following commands to kill them:


# as the root user:
epmd -kill
killall beam; killall beam.smp
rm /var/lib/ejabberd/*
echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd

6.

Edit the ejabberd configuration
You must make several configuration changes for the
ejabberd service before
it is started again.
As the root user, edit the file
/etc/ejabberd/ejabberd.cfg and make the following changes:

a.

Change the line:
{hosts, ["localhost"]}.
to instead read:
{hosts, ["localhost", "private.localhost", "public.localhost"]}.


b.

Change the line:
{max_user_sessions, 10}
to instead read:
{max_user_sessions, 10000}

If the line looks something like this:
{access, max_user_sessions, [{10, all}]}
then change it to instead read:
{access, max_user_sessions, [{10000, all}]}

c.

Change the value in all three occurrences of
max_stanza_size
to instead read:
2000000

d.

Change the value in both occurrences of
maxrate
to instead read:
500000

e.

Comment out the line:
{mod_offline, []}
by placing two % comment signs in front
so it instead reads:
%%{mod_offline, []}


7.
Restart the ejabberd service
As the root user, restart the
ejabberd service to test the
configuration changes and to register your users:


# as the root user:
/etc/init.d/ejabberd start

8.

Register router and
opensrf as
ejabberd users
The two ejabberd users
router and
opensrf must be registered
and configured to manage OpenSRF router service and communications
for the two domains public.localhost and
private.localhost that you added to the file
/etc/hosts in a previous step
(see Step 4.d).
The users include:
•
the router user,
to whom all requests to connect to an OpenSRF service will be
routed;
•
the opensrf user,
which clients use to connect to OpenSRF services (you may name
the user anything you like, but we use
opensrf in these examples)

As the root user, execute the
ejabberdctl utility as shown below to register and create passwords
for the users router and
opensrf on each domain (remember to replace
NEWPASSWORD with the appropriate password):


# as the root user:
# Note: the syntax for registering a user with ejabberdctl is:
# ejabberdctl register USER DOMAIN PASSWORD
ejabberdctl register router private.localhost NEWPASSWORD
ejabberdctl register router public.localhost NEWPASSWORD
ejabberdctl register opensrf private.localhost NEWPASSWORD
ejabberdctl register opensrf public.localhost NEWPASSWORD

Note that the users router and
opensrf and their respective passwords
will be used again in Step 10 when
we modify the OpenSRF configuration file /openils/conf/opensrf_core.xml .
9.

Create OpenSRF configuration files
As the opensrf user,
execute the following commands to create the new configuration files
/openils/conf/opensrf_core.xml and
/openils/conf/opensrf.xml from the example templates:


# as the opensrf user:
cd /openils/conf
cp opensrf.xml.example opensrf.xml
cp opensrf_core.xml.example opensrf_core.xml

10.
Update usernames and passwords in the OpenSRF configuration file
As the opensrf user, edit the
OpenSRF configuration file /openils/conf/opensrf_core.xml
and update the usernames and passwords to match the values shown in the
following table. The left-hand side of Table 5.2, “Sample XPath syntax for editing "opensrf_core.xml"”
shows common XPath syntax to indicate the approximate position within the XML
file that needs changes. The right-hand side of the table shows the replacement
values:

Table 5.2. Sample XPath syntax for editing "opensrf_core.xml"

XPath location                                               Value
/config/opensrf/username                                     opensrf
/config/opensrf/passwd                                       private.localhost password for opensrf user
/config/gateway/username                                     opensrf
/config/gateway/passwd                                       public.localhost password for opensrf user
/config/routers/router/transport/username,
  first entry where server == public.localhost               router
/config/routers/router/transport/password,
  first entry where server == public.localhost               public.localhost password for router user
/config/routers/router/transport/username,
  second entry where server == private.localhost             router
/config/routers/router/transport/password,
  second entry where server == private.localhost             private.localhost password for router user

You may also need to modify the file to specify the domains from which
OpenSRF will accept connections,
and to which it will make connections.
If you are installing OpenSRF on a single server
and using the private.localhost and
public.localhost domains,
these will already be set to the correct values. Otherwise, search and replace
to match values for your own systems.
11.

Set location of the persistent database
As the opensrf user, edit the
file /openils/conf/opensrf.xml, then find and modify the
element dbfile (near the end of the file) to set the
location of the persistent database.
Change the default line:
/openils/var/persist.db
to instead read:
/tmp/persist.db
Following is a sample modification of that portion of the file:

<!-- Example of an app-specific setting override -->
<opensrf.persist>
  <app_settings>
    <dbfile>/tmp/persist.db</dbfile>
  </app_settings>
</opensrf.persist>

12.

Create configuration files for users needing srfsh
In this section you will set up a special configuration file for each user
who will need to run the srfsh (pronounced surf
shell) utility.

The software installation will automatically create the utility
srfsh (surf shell), a command line diagnostic tool for
testing and interacting with OpenSRF. It will be used
in a future step to complete and test the Evergreen installation. See
the section called “Testing Your Evergreen Installation” for further information.
As the root user, copy the
sample configuration file /openils/conf/srfsh.xml.example
to the home directory of each user who will use srfsh.
For instance, do the following for the
opensrf user:


# as the root user:
cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml

Edit each user's file ~/.srfsh.xml and make the
following changes:
•
Modify domain to be the router hostname
(following our domain examples,
private.localhost will give
srfsh access to all OpenSRF services, while
public.localhost
will only allow access to those OpenSRF services that are
publicly exposed).
•
Modify username and
password to match the
opensrf Jabber user for the chosen
domain.
•
Modify logfile to be the full path for
a log file to which the user has write access.
•
Modify loglevel as needed for testing.
•
Change the owner of the file to match the owner of the home directory.

Following is a sample of the file:

<?xml version="1.0"?>
<!-- This file follows the standard bootstrap config file layout -->
<!-- found in opensrf_core.xml -->
<srfsh>
<router_name>router</router_name>
<domain>private.localhost</domain>
<username>opensrf</username>
<passwd>SOMEPASSWORD</passwd>
<port>5222</port>
<logfile>/tmp/srfsh.log</logfile>
<!-- 0 None, 1 Error, 2 Warning, 3 Info, 4 debug, 5 Internal (Nasty) -->
<loglevel>4</loglevel>
</srfsh>

13.

Modify the environment variable PATH for the
opensrf user
As the opensrf user, modify the
environment variable PATH by adding a new file path to the
opensrf user's shell configuration
file ~/.bashrc:


# as the opensrf user:
echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc

14.

Start OpenSRF
As the root user, start the
ejabberd and
memcached services:


# as the root user:
/etc/init.d/ejabberd start
/etc/init.d/memcached start

As the opensrf user,
start OpenSRF as follows:


# as the opensrf user:
osrf_ctl.sh -l -a start_all

The flag -l forces Evergreen to use
localhost (your current system)
as the hostname. The flag -a start_all starts the
OpenSRF router,
Perl, and
C services.
•
You can also start Evergreen without the
-l flag, but the osrf_ctl.sh
utility must know the fully qualified domain name for the system
on which it will execute. That hostname was probably specified
in the configuration file opensrf.xml which
you configured in a previous step.
•
If you receive an error message similar to
osrf_ctl.sh: command not found, then your
environment variable PATH does not include the
directory /openils/bin.
As the opensrf user,
edit the configuration file ~/.bashrc and
add the following line:
export PATH=$PATH:/openils/bin

15.

Test connections to OpenSRF
Once you have installed and started OpenSRF, as the
root user, test your connection to
OpenSRF using the srfsh
utility and trying to call the add method on the OpenSRF
math service:


# as the root user:
/openils/bin/srfsh

srfsh# request opensrf.math add 2 2

Received Data: 4
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.007519
------------------------------------

For other srfsh commands, type in
help at the prompt.
16.

Stop OpenSRF
After OpenSRF has started, you can stop it at any time by using
osrf_ctl.sh again. As the
opensrf
user, stop OpenSRF as follows:


# as the opensrf user:
osrf_ctl.sh -l -a stop_all


Installing Evergreen 2.0 On Ubuntu or Debian

This section outlines the installation process for the latest stable version of
Evergreen.
In this section you will download, unpack, install, configure and test the Evergreen
system, including the Evergreen server and the PostgreSQL database system. You will make several
configuration changes and adjustments to the software, including updates to configure the system
for your own locale, and some updates needed to work around a few known issues.

The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit)
architectures. There may be differences between the Desktop and Server editions of
Ubuntu. These instructions assume the Server
edition.
In the following instructions, you are asked to perform certain steps as
either the root user, the
opensrf user, or the
postgres user.
•
Debian -- To become the
root user, issue the command
su - and enter the password of the
root user.
•
Ubuntu -- To become the
root user, issue the command
sudo su - and enter your own password.

To switch from the root user to a
different user, issue the command su - USERNAME. For example, to
switch from the root user to the
opensrf user, issue the command
su - opensrf. Once you have become a non-root user, to become the
root user again, simply issue the command
exit.

1.

Install OpenSRF
Evergreen software is integrated with and depends on the Open Service
Request Framework (OpenSRF) software system. For further information on
installing, configuring and testing OpenSRF, see
the section called “Installing OpenSRF 1.6.2 On Ubuntu or
Debian”.
Follow the steps outlined in that section and run the specified tests to
ensure that OpenSRF is properly installed and configured. Do
not continue with
any further Evergreen installation steps until you have verified that OpenSRF
has been successfully installed and tested.
2.

Download and Unpack Latest Evergreen Version
The latest version of Evergreen can be found here:
http://evergreen-ils.org/downloads/Evergreen-ILS-2.0.1.tar.gz .
As the opensrf user, change to
the directory /home/opensrf then download
and extract Evergreen. The new subdirectory
/home/opensrf/Evergreen-ILS-2.0.1 will be created:


# as the opensrf user:
cd /home/opensrf
wget http://evergreen-ils.org/downloads/Evergreen-ILS-2.0.1.tar.gz
tar zxf Evergreen-ILS-2.0.1.tar.gz

3.

Install Prerequisites to Build Evergreen
In this section you will install and configure a set of prerequisites that will be
used later in Step 8 and
Step 9 to build the Evergreen software
using the make utility.
As the root user, enter the commands shown
below to build the prerequisites from the software distribution that you just downloaded
and unpacked.
Remember to replace [DISTRIBUTION] in the following
example with the keyword corresponding to the name of one of the
Linux distributions listed in the following
distribution keywords table ??? .
For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would
enter this command: make -f Open-ILS/src/extras/Makefile.install
ubuntu-lucid.


# as the root user:
cd /home/opensrf/Evergreen-ILS-2.0.1
make -f Open-ILS/src/extras/Makefile.install [DISTRIBUTION]

•
debian-squeeze for Debian Squeeze (6.0)
•
ubuntu-lucid for Ubuntu Lucid Lynx (10.04)

4.

(OPTIONAL) Install the PostgreSQL Server

Since the PostgreSQL server is usually a standalone server in multi-server
production systems, the prerequisite installer Makefile in the previous section
(see Step 3)
does not automatically install PostgreSQL. You must install the PostgreSQL server
yourself, either on the same system as Evergreen itself or on another system.
If your PostgreSQL server is on a different system, just skip this step.
If your PostgreSQL server will be on the same system as your Evergreen
software, you can install the required PostgreSQL server packages as described
in the section called “Installing PostgreSQL from Source”, or you can visit the official
web site http://www.postgresql.org
for more information.

PostgreSQL version 8.4 is the minimum supported version to work
with Evergreen 2.0. If you have an older version of PostgreSQL,
you should upgrade before installing Evergreen. To find your current version
of PostgreSQL, as the postgres
user execute the command psql, then type
SELECT version(); to get detailed information
about your version of PostgreSQL.

5.

Install Perl Modules on PostgreSQL Server
If PostgreSQL is running on the same system as your Evergreen software,
then the Perl modules will automatically be available. Just skip this step.
Otherwise, continue if your PostgreSQL server is running on another system.
You will need to install several Perl modules on the other system.
As the root user, first ensure the gcc compiler is installed:

aptitude install gcc libxml-libxml-perl libxml-libxslt-perl

then install the Perl modules:

perl -MCPAN -e shell
cpan> install Business::ISBN
cpan> install JSON::XS
cpan> install Library::CallNumber::LC
cpan> install MARC::Record
cpan> install MARC::File::XML
cpan> install UUID::Tiny

For more information on installing Perl Modules visit the official
CPAN site.

6.

Update the System Dynamic Library Path
You must update the system dynamic library path to force your system to recognize
the newly installed libraries. As the root user,
do this by creating the new file /etc/ld.so.conf.d/osrf.conf
containing a new library path, then run the command ldconfig to
automatically read the file and modify the system dynamic library path:


# as the root user:
echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf
echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf
ldconfig

7.

Restart the PostgreSQL Server
If PostgreSQL is running on the same system as the rest of Evergreen, as
the root user you must restart
PostgreSQL to re-read the new library paths just configured. If PostgreSQL is
running on another system, you may skip this step.
As the root user,
execute the following command (remember to replace
PGSQL_VERSION with your installed PostgreSQL version,
for example 8.4):


# as the root user:
/etc/init.d/postgresql-PGSQL_VERSION restart

8.

Configure Evergreen
In this step you will use the configure and
make utilities to configure Evergreen so it can be compiled
and linked later in Step 9.
As the opensrf user, return to
the Evergreen build directory and execute these commands:


# as the opensrf user:
cd /home/opensrf/Evergreen-ILS-2.0.1
./configure --prefix=/openils --sysconfdir=/openils/conf
make

9.

Compile, Link and Install Evergreen
In this step you will actually compile, link and install Evergreen and the
default Evergreen Staff Client.
As the root user, return to the
Evergreen build directory and use the make utility as shown below:


# as the root user:
cd /home/opensrf/Evergreen-ILS-2.0.1
make STAFF_CLIENT_BUILD_ID=rel_2_0_1 install

The Staff Client will also be automatically built, but you must remember
to set the variable STAFF_CLIENT_BUILD_ID to match the version of the
Staff Client you will use to connect to the Evergreen server.
The above commands will create a new subdirectory
/openils/var/web/xul/rel_2_0_1
containing the Staff Client.
To complete the Staff Client installation, as the
root user execute the following commands to
create a symbolic link named server in the head of the Staff Client
directory /openils/var/web/xul that points to the
subdirectory /server of the new Staff Client
build:


# as the root user:
cd /openils/var/web/xul
ln -sf rel_2_0_1/server server

10.

Copy the OpenSRF Configuration Files
In this step you will replace some OpenSRF configuration files that you set up in
Step 9 when you installed and
tested OpenSRF.
You must copy several example OpenSRF configuration files into place after first
creating backup copies for troubleshooting purposes, then change all the file ownerships
to opensrf.
As the root user, execute the following
commands:


# as the root user:
cd /openils/conf
cp opensrf.xml opensrf.xml.BAK
cp opensrf_core.xml opensrf_core.xml.BAK
cp opensrf.xml.example opensrf.xml
cp opensrf_core.xml.example opensrf_core.xml
cp oils_web.xml.example oils_web.xml
chown -R opensrf:opensrf /openils/

11.

Create and Configure PostgreSQL Database

In this step you will create the Evergreen database. In the commands
below, remember to adjust the path of the contrib
repository to match your PostgreSQL server
layout. For example, if you built PostgreSQL from source the path would be
/usr/local/share/contrib , and if you
installed the PostgreSQL 8.4 server packages on Ubuntu,
the path would be
/usr/share/postgresql/8.4/contrib/ .

a.

Create and configure the database

As the postgres
user on the PostgreSQL system create the PostgreSQL database,
then set some internal paths:


# as the postgres user:
createdb evergreen -E UTF8 -T template0
createlang plperl evergreen
createlang plperlu evergreen
createlang plpgsql evergreen

Continue as the postgres user
and execute the SQL scripts as shown below (remember to adjust the paths as needed,
where PGSQL_VERSION is your installed PostgreSQL
version, for example 8.4).


# as the postgres user:
psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen
psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tsearch2.sql evergreen
psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/pgxml.sql evergreen


b.

Create evergreen PostgreSQL user
As the postgres
user on the PostgreSQL system, create a new PostgreSQL user
named evergreen and
assign a password (remember to replace NEWPASSWORD
with an appropriate new password):


# as the postgres user:
createuser -P -s evergreen

Enter password for new role: NEWPASSWORD
Enter it again: NEWPASSWORD


c.
Create database schema
In this step you will create the database schema and configure your
system with the corresponding database authentication details for the
evergreen database user that you just created in
Step 11.b.
As the root user, enter
the following commands and replace HOSTNAME, PORT,
PASSWORD and DATABASENAME with appropriate
values:


# as the root user:
cd /home/opensrf/Evergreen-ILS-2.0.1
perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \
  --service all --create-schema --create-offline \
  --hostname HOSTNAME --port PORT \
  --user evergreen --password PASSWORD --database DATABASENAME

On most systems, HOSTNAME will be
localhost and
PORT will be 5432.
Of course, values for PASSWORD and
DATABASENAME must match the values you used in
Step 11.b.
As the command executes, you may see warnings similar to:
ERROR: schema SOMENAME does not exist (in fact,
you may see one warning per schema) but they can be safely ignored.
If you are entering the above command on a single line, do not
include the \ (backslash) characters. If you are using
the bash shell, these should only be used at the end of
a line at a bash prompt to indicate that the command is
continued on the next line.


12.

Configure the Apache web server

In this step you will configure the Apache web server to support Evergreen
software.
First, you must enable some built-in Apache modules and install some
additional Apache configuration files. Then you will create a new Security
Certificate. Finally, you must make several changes to the Apache configuration
file.

a.
Enable the required Apache Modules
As the root
user, enable some modules in the Apache server, then copy the
new configuration files to the Apache server directories:


# as the root user:
a2enmod ssl        # enable mod_ssl
a2enmod rewrite    # enable mod_rewrite
a2enmod expires    # enable mod_expires

As the commands execute, you may see warnings similar to:
Module SOMEMODULE already enabled but you can
safely ignore them.

b.

Copy Apache configuration files
You must copy the Apache configuration files from the
Evergreen installation directory to the Apache directory. As the
root user, perform the
following commands:


# as the root user:
cd /home/opensrf/Evergreen-ILS-2.0.1
cp Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/
cp Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/
cp Open-ILS/examples/apache/startup.pl /etc/apache2/


c.

Create a Security Certificate
In this step you will create a new Security Certificate (SSL Key)
for the Apache server using the openssl command. For a
public production server you must configure or purchase a signed SSL
certificate, but for now you can just use a self-signed certificate and
accept the warnings in the Staff Client and browser during testing and
development. As the root user,
perform the following commands:


# as the root user:
mkdir /etc/apache2/ssl
cd /etc/apache2/ssl
openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key

You will be prompted for several items of information; enter
the appropriate information for each item. The new files
server.crt and server.key will
be created in the directory
/etc/apache2/ssl .
This step generates a self-signed SSL certificate. You must install
a proper SSL certificate for a public production system to avoid warning
messages when users login to their account through the OPAC or when staff
login through the Staff Client.
For further information on
installing a proper SSL certificate, see
the section called “Configure a permanent SSL key”.

d.

Update Apache configuration file
You must make several changes to the new Apache
configuration file
/etc/apache2/sites-available/eg.conf .
As the root user,
edit the file and make the following changes:
•
In the section
<Directory "/openils/var/cgi-bin">
replace the line:
Allow from 10.0.0.0/8
with the line:
Allow from all
This change allows access to your configuration
CGI scripts from any workstation on any network. This is
only a temporary change to expedite testing and should be
removed after you have finished and successfully tested
the Evergreen installation. See
the section called “Post-Installation Chores”
for further details on removing this change after the
Evergreen installation is complete.

•
Comment out the line:
Listen 443
since it conflicts with the same declaration in
the configuration file:
/etc/apache2/ports.conf.
•
The following updates are needed to allow the logs
to function properly, but they may break other Apache
applications on your server:

Edit the Apache environment file (on Debian and Ubuntu
these lines are in /etc/apache2/envvars) and change the lines:


export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data

to instead read:


export APACHE_RUN_USER=opensrf
export APACHE_RUN_GROUP=opensrf

•
As the
root user,
edit the Apache configuration file
/etc/apache2/apache2.conf and
modify the value for KeepAliveTimeout
and MaxKeepAliveRequests to match
the following:


KeepAliveTimeout 1
MaxKeepAliveRequests 100

•
Further configuration changes to Apache may be
necessary for busy systems. These changes increase the
number of Apache server processes that are started to
support additional browser connections.
As the
root user,
edit the Apache configuration file
/etc/apache2/apache2.conf, locate
and modify the section related to prefork
configuration to suit the load on your
system:

<IfModule mpm_prefork_module>
   StartServers         20
   MinSpareServers       5
   MaxSpareServers      15
   MaxClients          150
   MaxRequestsPerChild 10000
</IfModule>


e.

Enable the Evergreen web site
Finally, you must enable the Evergreen web site. As the
root user, execute the
following Apache configuration commands to disable the default
It Works web page and enable the Evergreen
web site, and then restart the Apache server:


# as the root user:
# disable/enable web sites
a2dissite default
a2ensite eg.conf
# restart the server
/etc/init.d/apache2 reload


13.

Update the OpenSRF Configuration File
As the opensrf user, edit the
OpenSRF configuration file /openils/conf/opensrf_core.xml
to update the Jabber usernames and passwords, and to specify the domain from
which we will accept and to which we will make connections.
If you are installing Evergreen on a single server and using the
private.localhost /
public.localhost domains,
these will already be set to the correct values. Otherwise, search and replace
to match your customized values.
The left-hand side of Table 5.3, “Sample XPath syntax for editing "opensrf_core.xml"”
shows common XPath syntax to indicate the approximate position within the XML
file that needs changes. The right-hand side of the table shows the replacement
values:
Table 5.3.
Sample XPath syntax for editing "opensrf_core.xml"

XPath location                                               Value
/config/opensrf/username                                     opensrf
/config/opensrf/passwd                                       private.localhost password for opensrf user
/config/gateway/username                                     opensrf
/config/gateway/passwd                                       public.localhost password for opensrf user
/config/routers/router/transport/username,
  first entry where server == public.localhost               router
/config/routers/router/transport/password,
  first entry where server == public.localhost               public.localhost password for router user
/config/routers/router/transport/username,
  second entry where server == private.localhost             router
/config/routers/router/transport/password,
  second entry where server == private.localhost             private.localhost password for router user

14.

(OPTIONAL) Create Configuration Files for Users Needing srfsh
When OpenSRF was installed in the section called “Installing OpenSRF 1.6.2 On Ubuntu or
Debian”, the
software installation automatically created a utility named srfsh (surf
shell). This is a command line diagnostic tool for testing and interacting with
OpenSRF. It will be used in a future step to complete and test the Evergreen installation.
Earlier in Step 12 you also created a configuration
file ~/.srfsh.xml for each user that might need to use the utility.
See the section called “Testing Your Evergreen Installation” for further information.
15.

Modify the OpenSRF Environment
In this step you will make some minor modifications to the OpenSRF environment:
•
As the opensrf user,
modify the shell configuration file ~/.bashrc for
user opensrf by adding a Perl
environment variable, then execute the shell configuration file to load
the new variables into your current environment.
In a multi-server environment, you must add any
modifications to ~/.bashrc to the top of the file
before the line [ -z "$PS1" ] &&
return . This will allow headless (scripted) logins to load the
correct environment.
# as the opensrf user:
echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc
. ~/.bashrc


16.

(OPTIONAL) Enable and Disable Language Localizations
You can load translations such as Armenian (hy-AM), Canadian French
(fr-CA), and others into the database to complete the translations available in
the OPAC and Staff Client. For further information, see
???.


Starting Evergreen

In this section you will learn how to start the Evergreen services.
For completeness, instructions for stopping Evergreen can be found later in
the section called “Stopping Evergreen”.
1.

As the root
user, start the ejabberd and
memcached services as follows:


# as the root user:
/etc/init.d/ejabberd start
/etc/init.d/memcached start

2.

As the opensrf user,
start Evergreen as follows:


# as the opensrf user:
osrf_ctl.sh -l -a start_all

The flag -l forces Evergreen to use
localhost (your current system)
as the hostname. The flag -a start_all starts the
OpenSRF router,
Perl, and
C services.
•
You can also start Evergreen without the
-l flag, but the osrf_ctl.sh
utility must know the fully qualified domain name for the system
on which it will execute. That hostname was probably specified
in the configuration file opensrf.xml which
you configured in a previous step.
•
If you receive an error message similar to
osrf_ctl.sh: command not found, then your
environment variable PATH does not include the
directory /openils/bin.
As the opensrf user,
edit the configuration file ~/.bashrc and
add the following line:
export PATH=$PATH:/openils/bin
•
If you receive an error message similar to Can't
locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation
aborted, then your environment variable
PERL5LIB does not include the
directory /openils/lib/perl5.
As the opensrf user,
edit the configuration file ~/.bashrc and
add the following line:
export PERL5LIB=$PERL5LIB:/openils/lib/perl5

3.

In this step you will generate the Web files needed by the Staff Client
and catalog, and update the proximity of locations in the Organizational Unit
tree (which allows Holds to work properly).
You must do this the first time you start Evergreen and after making any
changes to the library hierarchy.
As the opensrf user, execute the
following command and review the results:


# as the opensrf user:
cd /openils/bin
./autogen.sh -c /openils/conf/opensrf_core.xml -u

Updating Evergreen organization tree and IDL using '/openils/conf/opensrf_core.xml'
Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
removing OrgTree from the cache for locale hy-AM...
removing OrgTree from the cache for locale cs-CZ...
removing OrgTree from the cache for locale en-CA...
removing OrgTree from the cache for locale en-US...
removing OrgTree from the cache for locale fr-CA...
removing OrgTree from the cache for locale ru-RU...
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done

4.

As the root user, restart the
Apache Web server:


# as the root user:
/etc/init.d/apache2 restart

If the Apache Web server was running when you started the OpenSRF
services, you might not be able to successfully log into the OPAC or Staff
Client until the Apache Web server has been restarted.


Testing Your Evergreen Installation

This section describes several simple tests you can perform to verify that the Evergreen
server-side software has been installed and configured properly and is running as
expected.
Testing Connections to Evergreen

Once you have installed and started Evergreen, test your connection to Evergreen. Start the srfsh application and try logging onto the Evergreen server using the default administrator username and password. Following is sample output generated by executing srfsh after a successful Evergreen installation. For help with srfsh commands, type help at the prompt.
As the opensrf user, execute the following commands to test your Evergreen connection:

# as the opensrf user:
/openils/bin/srfsh

srfsh% login admin open-ils
Received Data: "250bf1518c7527a03249858687714376"
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.045286
------------------------------------
Received Data: {
  "ilsevent":0,
  "textcode":"SUCCESS",
  "desc":" ",
  "pid":21616,
  "stacktrace":"oils_auth.c:304",
  "payload":{
    "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a",
    "authtime":420
  }
}
------------------------------------
Request Completed Successfully
Request Time in seconds: 1.336568
------------------------------------

If this does not work, try the following:
• As the opensrf user, run the settings-tester.pl utility to review your Evergreen installation for any system configuration problems:

# as the opensrf user:
cd /home/opensrf
./Evergreen-ILS-2.0.1/Open-ILS/src/support-scripts/settings-tester.pl

If the output of settings-tester.pl does not help you find the problem, please do not make any significant changes to your configuration.
• Follow the steps in the troubleshooting guide in Chapter 11, Troubleshooting System Errors.
• If you have followed the entire set of installation steps listed here closely, you are probably extremely close to a working system.
Gather your configuration files and log files and contact the Evergreen Development mailing list for assistance before making any drastic changes to your system configuration.

Testing the Staff Client on Linux

In this section you will confirm that a basic login on the Staff Client works properly.
Run the Evergreen Staff Client on a Linux system by using the application XULRunner (installed automatically and by default with Firefox version 3.0 and later on Ubuntu and Debian distributions).
As the root user, start the Staff Client as shown:

# as the root user:
xulrunner /home/opensrf/Evergreen-ILS-2.0.1/Open-ILS/xul/staff_client/build/application.ini

A login screen for the Staff Client similar to this should appear:

First, add the name of your Evergreen server to the field Hostname in the Server section. You will probably want to use 127.0.0.1. After adding the server name, click Re-Test Server. You should now see the messages 200:OK in the fields Status and Version.
Because this is the initial run of the Staff Client, you will see a warning in the upper-right saying: Not yet configured for the specified server. To continue, you must assign a workstation name. Refer to ??? for further details.
Try to log into the Staff Client with the username admin and the password open-ils. If the login is successful, you will see the following screen:

Otherwise, you may need to click 'Add SSL Exception' in the main window. You should see a popup window titled Add Security Exception:

Click 'Get Certificate', then click 'Confirm Security Exception', then click 'Re-Test Server' in the main window and try to log in again.

Testing the Apache Web Server

In this section you will test the Apache configuration file(s), then restart the Apache web server.
As the root user, execute the following commands.
Note the use of restart to force the new Evergreen modules to be reloaded even if the Apache server is already running. Any problems found with your configuration files should be displayed:

# as the root user:
apache2ctl configtest && /etc/init.d/apache2 restart

Stopping Evergreen

In the section called “Starting Evergreen” you learned how to start the Evergreen services. For completeness, following are instructions for stopping the Evergreen services.
As the opensrf user, stop all Evergreen services by using the following command:

# as the opensrf user
# stop the server; use "-l" to force hostname to be "localhost"
osrf_ctl.sh -l -a stop_all

You can also stop Evergreen services without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname may have been specified in the configuration file opensrf.xml, which you configured in a previous step.

Post-Installation Chores

There are several additional steps you may need to complete after Evergreen has been successfully installed and tested. Some steps may not be needed (e.g., setting up support for Reports).

Remove temporary Apache configuration changes

You modified the Apache configuration file /etc/apache2/sites-available/eg.conf in an earlier step as a temporary measure to expedite testing (see Step 12.d for further information). Those changes must now be reversed in order to deny unwanted access to your CGI scripts from users on other public networks.

This temporary network update was done to expedite testing. You must correct this for a public production system.

As the root user, edit the configuration file again and comment out the line Allow from all and uncomment the line Allow from 10.0.0.0/8, then change it to match your network address scheme.
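The edit above can be sketched with sed against a scratch copy. The directive lines and the 192.168.0.0/16 range below are illustrative only; check the actual contents of your eg.conf (and substitute your own network range) before editing the real file.

```shell
# Sketch of the eg.conf change described above, applied to a scratch copy.
# Directive wording and the replacement range are assumptions, not the
# literal contents of your installation's eg.conf.
conf=/tmp/eg.conf.scratch
cat > "$conf" <<'EOF'
Allow from all
# Allow from 10.0.0.0/8
EOF

# comment out the permissive rule, re-enable the restricted rule with a local range
sed -i -e 's|^Allow from all|# Allow from all|' \
       -e 's|^# Allow from 10\.0\.0\.0/8|Allow from 192.168.0.0/16|' "$conf"
cat "$conf"
```

Once you are satisfied with the result on the scratch copy, apply the same change by hand (or with the same sed expressions) to /etc/apache2/sites-available/eg.conf.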
Configure a permanent SSL key

You used the command openssl in an earlier step to temporarily create a new SSL key for the Apache server (see Step 12.c for further information). This self-signed security certificate was adequate during testing and development, but will continue to generate warnings in the Staff Client and browser. For a public production server you should configure or purchase a signed SSL certificate.
There are several open source software solutions that provide schemes to generate and maintain public key security certificates for your library system. Some popular projects are listed below; please review them for background information on why you need such a system and how you can provide it:
• http://www.openca.org/projects/openca/
• http://sourceforge.net/projects/ejbca/
• http://pki.fedoraproject.org

The temporary SSL key was only created to expedite testing. You should install a proper SSL certificate for a public production system.

(OPTIONAL) IP-Redirection

By default, Evergreen is configured so searching the OPAC always starts in the top-level (regional) library rather than in a second-level (branch) library. Instead, you can use "IP-Redirection" to change the default OPAC search location to use the IP address range assigned to the second-level library where the search originates. You must configure these IP ranges by creating the configuration file /openils/conf/lib_ips.txt and modifying the Apache startup script /etc/apache2/startup.pl.
First, copy the sample file /home/opensrf/Evergreen-ILS-1.6.1.2/Open-ILS/examples/lib_ips.txt.example to /openils/conf/lib_ips.txt. The example file contains the single line: "MY-LIB 127.0.0.1 127.0.0.254". You must modify the file to use the IP address ranges for your library system. Add new lines to represent the IP address range for each branch library.
Replace the values for MY-LIB with the values for each branch library found in the table actor.org_unit.
Finally, modify the Apache startup script /etc/apache2/startup.pl by uncommenting two lines as shown, then restarting the Apache server:

# - Uncomment the following 2 lines to make use of the IP redirection code
# - The IP file should contain a map with the following format:
# - actor.org_unit.shortname <start_ip> <end_ip>
# - e.g. LIB123 10.0.0.1 10.0.0.254
use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml);
OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');

(OPTIONAL) Set Up Support For Reports

Evergreen reports are extremely powerful but require some simple configuration. See ??? for information on starting and stopping the Reporter daemon processes.

Chapter 6. Upgrading Evergreen to 2.0

Report errors in this documentation using Launchpad.

Abstract

This chapter will explain the step-by-step process of upgrading Evergreen to 2.0, including steps to upgrade OpenSRF. Before upgrading, it is important to carefully plan an upgrade strategy to minimize system downtime and service interruptions. All of the steps in this chapter are to be completed from the command line.

Evergreen 2.0 has several software requirements:
• PostgreSQL: Version 8.4 is the minimum supported version of PostgreSQL.
• Linux: Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid Lynx (10.04). If you are running an older version of these distributions, you may want to upgrade before installing Evergreen 2.0. For instructions on upgrading these distributions, visit the Debian or Ubuntu websites.
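Before starting the upgrade, you can check the PostgreSQL requirement above with a small shell helper. The version-comparison function is our own sketch, not an Evergreen utility:

```shell
# Sketch: compare a "major.minor" PostgreSQL version string against the
# 8.4 minimum required by Evergreen 2.0.
pg_version_ok() {
    awk -v have="$1" -v need="8.4" 'BEGIN {
        split(have, h, "."); split(need, n, ".")
        # succeed (exit 0) when have >= need
        exit !(h[1] > n[1] || (h[1] == n[1] && h[2] >= n[2]))
    }'
}

# On a real server you might feed it the output of:
#   psql --version | awk '{ print $NF }' | cut -d. -f1,2
if pg_version_ok "8.4"; then
    echo "PostgreSQL version is new enough for Evergreen 2.0"
fi
```

If the check fails, upgrade PostgreSQL first (see the "Upgrading PostgreSQL from 8.2 to 8.4" section later in this chapter).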
In the following instructions, you are asked to perform certain steps as either the root or opensrf user.
• Debian: To become the root user, issue the su command and enter the password of the root user.
• Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user.
To switch from the root user to a different user, issue the su - [user] command; for example, su - opensrf. Once you have become a non-root user, to become the root user again simply issue the exit command.
In the following instructions, /path/to/OpenSRF/ represents the path to the OpenSRF source directory.

Backing Up Data

1. As root, stop the Apache web server.
2. As the opensrf user, stop all Evergreen and OpenSRF services:
   osrf_ctl.sh -l -a stop_all
3. Back up the /openils directory.
4. Back up the evergreen database.

Upgrading OpenSRF to 1.6.2

1. As the opensrf user, download and extract the source files for OpenSRF 1.6.2:

wget http://open-ils.org/downloads/OpenSRF-1.6.2.tar.gz
tar xzf OpenSRF-1.6.2.tar.gz

A new directory OpenSRF-1.6.2 is created. For the latest edition of OpenSRF, check the Evergreen download page at http://www.open-ils.org/downloads.php.

2. As the root user, install the software prerequisites using the automatic prerequisite installer:

aptitude install make
cd /home/opensrf/OpenSRF-1.6.2

Replace [distribution] below with the following value for your distribution:
• debian-squeeze for Debian Squeeze (6.0)
• ubuntu-lucid for Ubuntu Lucid Lynx (10.04)
• centos for CentOS 5
• rhel for Red Hat Enterprise Linux 5

cd /path/to/OpenSRF
make -f src/extras/Makefile.install [distribution]

This will install a number of packages required by OpenSRF on your system, including some Perl modules from CPAN.
You can type no to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - type yes.

3. As the opensrf user, configure and compile OpenSRF:
You can include the --enable-python and --enable-java configure options if you want to include support for Python and Java, respectively.

cd /home/opensrf/OpenSRF-1.6.2
./configure --prefix=/openils --sysconfdir=/openils/conf
make

4. As the root user, return to your OpenSRF build directory and install OpenSRF:

cd /home/opensrf/OpenSRF-1.6.2
make install

5. As the root user, change the ownership of the installed files to the opensrf user:
chown -R opensrf:opensrf /openils

6. Restart and Test OpenSRF

osrf_ctl.sh -l -a start_all
/openils/bin/srfsh
srfsh# request opensrf.math add 2 2

You should see output such as:

Received Data: 4

------------------------------------
Request Completed Successfully
Request Time in seconds: 0.007519
------------------------------------

srfsh#

If the test completed successfully, move on to the next section. Otherwise, refer to the troubleshooting chapter of this documentation.

Upgrade Evergreen from 1.6.1 to 2.0

PostgreSQL 8.4 is the minimum supported version of PostgreSQL. Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid (10.04). If you are running an older version of these distributions, you may want to upgrade before installing Evergreen 2.0. For instructions on upgrading these distributions, visit the Debian or Ubuntu websites.

Copying these Apache configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them.
For example, if you purchased an SSL certificate, you will need to edit eg.conf to point to the appropriate SSL certificate files.

1. As the opensrf user, download and extract Evergreen 2.0:

wget http://www.open-ils.org/downloads/Evergreen-ILS-2.0.1.tar.gz
tar xzf Evergreen-ILS-2.0.1.tar.gz

For the latest edition of Evergreen 2.0, check the Evergreen download page at http://www.open-ils.org/downloads.php and adjust the upgrading instructions accordingly.

2. As the root user, install the prerequisites:
cd /home/opensrf/Evergreen-ILS-2.0.1
In the next command, replace [distribution] with one of these values for your distribution of Debian or Ubuntu:
• debian-squeeze for Debian Squeeze (6.0)
• ubuntu-lucid for Ubuntu Lucid Lynx (10.04)

make -f Open-ILS/src/extras/Makefile.install [distribution]

3. As the opensrf user, configure and compile Evergreen:
cd /home/opensrf/Evergreen-ILS-2.0.1
./configure --prefix=/openils --sysconfdir=/openils/conf
make

4. As the root user, install Evergreen:
make STAFF_CLIENT_BUILD_ID=rel_2_0_1 install

5. Change to the Evergreen installation directory:
cd /home/opensrf/Evergreen-ILS-2.0.1

6. As the root user, change all files to be owned by the opensrf user and group:
chown -R opensrf:opensrf /openils

7. As the opensrf user, update the server symlink in /openils/var/web/xul/:

cd /openils/var/web/xul/
rm server
ln -s rel_2_0_1/server

8. Update the evergreen database:
It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong.

psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1-2.0-upgrade-db.sql evergreen
psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.0-2.0.1-upgrade-db.sql evergreen

9. Run the reingest-1.6-2.0.pl script to generate an SQL script, then use the SQL file to reingest bib records into your evergreen database.
This is required to make the new facet sidebar in OPAC search results work and to upgrade the keyword indexes to use the revised NACO normalization routine.
If you are running a large Evergreen installation, it is recommended that you examine the script first. Reingesting a large number of bibliographic records may take several hours.

perl Open-ILS/src/sql/Pg/reingest-1.6-2.0.pl
psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/reingest-1.6-2.0.sql evergreen

10. As the opensrf user, copy /openils/conf/oils_web.xml.example to /openils/conf/oils_web.xml. (If upgrading from 1.6.1.x, oils_web.xml should already exist.)

cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml

11. Update opensrf_core.xml and opensrf.xml by copying the new example files (/openils/conf/opensrf_core.xml.example and /openils/conf/opensrf.xml.example).

cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml
cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml

Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them.

12. Update opensrf.xml with the database connection info:

perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config --service all --create-offline --user evergreen \
--password evergreen --hostname localhost --port 5432 --database evergreen

13. Update /etc/apache2/startup.pl by copying the example from Open-ILS/examples/apache/startup.pl.

14. Update /etc/apache2/eg_vhost.conf by copying the example from Open-ILS/examples/apache/eg_vhost.conf.

15. Update /etc/apache2/sites-available/eg.conf by copying the example from Open-ILS/examples/apache/eg.conf.

Restart Evergreen and Test

1. As the opensrf user, start all Evergreen and OpenSRF services:
   osrf_ctl.sh -l -a start_all

2.
As the opensrf user, run autogen to refresh the static organizational data files:

cd /openils/bin
./autogen.sh -c /openils/conf/opensrf_core.xml -u

3. Start srfsh and try logging in using your Evergreen username and password:

/openils/bin/srfsh
srfsh% login username password

4. Start the Apache web server.

If you encounter errors, refer to the troubleshooting section of this documentation for tips on finding solutions and seeking further assistance from the Evergreen community.

Upgrading PostgreSQL from 8.2 to 8.4 (if required)

Evergreen 1.6.1 supports PostgreSQL version 8.4, and it is recommended that you upgrade PostgreSQL when you upgrade Evergreen to 1.6. The order of the following steps is very important.

1. As opensrf, stop the evergreen and opensrf services:
   osrf_ctl.sh -l -a stop_all

2. Back up the Evergreen database data.

3. Upgrade to PostgreSQL 8.4 by removing the old version and installing PostgreSQL 8.4.

4. Create an empty Evergreen database in PostgreSQL 8.4 by issuing the following commands as the postgres user:

createdb -E UNICODE evergreen
createlang plperl evergreen
createlang plperlu evergreen
createlang plpgsql evergreen
psql -f /usr/share/postgresql/8.4/contrib/tablefunc.sql evergreen
psql -f /usr/share/postgresql/8.4/contrib/tsearch2.sql evergreen
psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen

5. As the postgres user on the PostgreSQL server, create a PostgreSQL user named evergreen for the database cluster:
   createuser -P -s evergreen
   Enter the password for the new PostgreSQL superuser (evergreen).

6. Restore the data from the backup created in step 2.

7. To point tsearch2 to the proper function names in 8.4, run the SQL script /home/opensrf/Evergreen-ILS*/Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql using the psql command.
cd /home/opensrf/Evergreen-ILS*
psql -f Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen

8. Restart the Evergreen and OpenSRF services.

9. For additional information regarding upgrading PostgreSQL, see the following PostgreSQL documentation:
http://www.postgresql.org/docs/8.4/static/install-upgrading.html
http://www.postgresql.org/docs/8.4/interactive/textsearch-migration.html
http://www.postgresql.org/docs/current/static/tsearch2.html#AEN102824

Chapter 7. Migrating Data

Report errors in this documentation using Launchpad.

Abstract

Migrating data into Evergreen can be one of the most daunting tasks for an administrator. This chapter will explain some procedures to help to migrate bibliographic records, copies and patrons into the Evergreen system. This chapter requires advanced ILS administration experience, knowledge of Evergreen data structures, as well as knowledge of how to export data from your current system or access to data export files from your current system.

Migrating Bibliographic Records

One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may be different depending on the system from which you are migrating and the content of the MARC records exported from the existing system. The procedures in this section deal with the process once the data from the existing system is exported into MARC records. It does not cover exporting data from your existing non-Evergreen system.
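As a quick sanity check on an export file before converting it, you can count the records in a binary MARC21 file by counting the record terminator byte (0x1D) that ends every MARC21 record. This helper is a sketch, not one of the Evergreen import tools, and /tmp/export.mrc below stands in for your real export file:

```shell
# Count records in a binary MARC21 export by counting 0x1D record terminators.
count_marc_records() {
    tr -cd '\035' < "$1" | wc -c
}

# three fake "records", each ended by the 0x1D terminator, for demonstration
printf 'rec1\035rec2\035rec3\035' > /tmp/export.mrc
count_marc_records /tmp/export.mrc
```

Compare the count against the number of records your old system claims to have exported before spending time on conversion.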
Several tools for importing bibliographic records into Evergreen can be found in the Evergreen installation folder (/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/) and are also available from the Evergreen repository (http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import).

Converting MARC records to Evergreen BRE JSON format

If you are starting with MARC records from your existing system or another source, use the marc2bre.pl script to create the JSON representation of a bibliographic record entry (hence bre) in Evergreen. marc2bre.pl can perform the following functions:
• Converts MARC-8 encoded records to UTF-8 encoding
• Converts MARC21 to MARCXML21
• Selects the unique record number field (common choices are '035' or '001'; check your records, as you might be surprised how a supposedly unique field actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)
• Extracts certain pertinent fields for indexing and display purposes (along with the complete MARCXML21 record)
• Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint - run the following SQL to determine what this number should be to avoid conflicts:

psql -U postgres evergreen
# SELECT MAX(id)+1 FROM biblio.record_entry;

• If you are processing multiple sets of MARC records with marc2bre.pl before loading the records into the database, you will need to keep track of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of MARC records with 10000 records each into a clean database, you would use --startid 1, --startid 10001, and --startid 20001 parameters for each respective file.
• Ignores “trash” fields that you do not want to retain in Evergreen
• If you use marc2bre.pl to convert your MARC records from the MARC-8 encoding to the UTF-8 encoding, it relies on the MARC::Charset Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a utility like marc4j or marcdumper to convert the records to MARC21XML and UTF-8 before running them through marc2bre.pl with the --marctype=XML flag to tell marc2bre.pl that the records are already in MARC21XML format with the UTF-8 encoding. If you take this approach, due to a current limitation of MARC::File::XML you have to do a horrible thing and ensure that there are no namespace prefixes in front of the element names. marc2bre.pl cannot parse the following example:

<?xml version="1.0" encoding="UTF-8" ?>
<marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <marc:record>
    <marc:leader>00677nam a2200193 a 4500</marc:leader>
    <marc:controlfield tag="001">H01-0000844</marc:controlfield>
    <marc:controlfield tag="007">t </marc:controlfield>
    <marc:controlfield tag="008">060420s1950 xx 000 u fre d</marc:controlfield>
    <marc:datafield tag="040" ind1=" " ind2=" ">
      <marc:subfield code="a">CaOHCU</marc:subfield>
      <marc:subfield code="b">fre</marc:subfield>
    </marc:datafield>
...
But marc2bre.pl can parse the same example with the namespace prefixes removed:

<?xml version="1.0" encoding="UTF-8" ?>
<collection xmlns:marc="http://www.loc.gov/MARC21/slim"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.loc.gov/MARC/slim
http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <record>
    <leader>00677nam a2200193 a 4500</leader>
    <controlfield tag="001">H01-0000844</controlfield>
    <controlfield tag="007">t </controlfield>
    <controlfield tag="008">060420s1950 xx 000 u fre d</controlfield>
    <datafield tag="040" ind1=" " ind2=" ">
      <subfield code="a">CaOHCU</subfield>
      <subfield code="b">fre</subfield>
    </datafield>
...

Converting Records for Import into PostgreSQL

Once you have your records in Open-ILS JSON ingest format, you then need to use pg_loader.pl to convert these records into a set of SQL statements that you can use to load the records into PostgreSQL. The --order and --autoprimary command line options (bre, mrd, mfr, etc.) map to class IDs defined in /openils/conf/fm_IDL.xml.

Adding Metarecords to the Database

Once you have loaded the records into PostgreSQL, you can create metarecord entries in the metabib.metarecord table by running the following SQL:

psql evergreen
# \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql

Metarecords are required to place holds on items, among other actions.

Migrating Bibliographic Records Using the ESI Migration Tools

The following procedure explains how to migrate bibliographic records from MARC records into Evergreen. This is a general guide and will need to be adjusted for your specific environment. It does not cover exporting records from specific proprietary ILS systems.
For assistance with exporting records from your current system, please refer to the manuals for your system or try asking for help from the Evergreen community.

1. Download the Evergreen migration utilities from the git repository.
Use the command git clone git://git.esilibrary.com/git/migration-tools.git to clone the migration tools.
Install the migration tools:

cd migration-tools/Equinox-Migration
perl Makefile.PL
make
make test
make install

2. Add environmental variables for the migration and import tools. These paths must point to:
• the import Perl scripts bundled with Evergreen
• the folder where you extracted the migration tools
• the location of the Equinox-Migration Perl modules
• the location of the Evergreen Perl modules (e.g. perl5)

export PATH=[path to Evergreen]/Open-ILS/src/extras/import: \
/[path to migration-tools]/migration-tools:$PATH:.
export PERL5LIB=/openils/lib/perl5: \
/[path to migration-tools]/Equinox-Migration/lib

3. Dump MARC records into MARCXML using yaz-marcdump:

echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml
yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> imported_marc_records.xml

4. Test the validity of the XML file using xmllint:

xmllint --noout imported_marc_records.xml 2> marc.xml.err

5. Clean up the MARC XML file using the marc_cleanup utility:

marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001

The --renumber-from option is required if you have bibliographic records already in your system. Use it to set the starting id number higher than the last id in the biblio.record_entry table. The marc_cleanup command will generate a file called clean.marc.xml.

6.
Create a fingerprinter file using the fingerprinter utility:

fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml

fingerprinter is used for deduplication of the incumbent records. The -o option specifies the output file and the -x option is used to specify the error output file.

7. Create a fingerprinter file for existing Evergreen bibliographic records using the fingerprinter utility if you have existing bibliographic records previously imported into your system:

fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc \
--tag=901 --subfield=c

fingerprinter is used for deduplication of the incumbent records.

8. Create a merged fingerprint file, removing duplicate records:

cat production.fp incumbent.fp | sort -r > dedupe.fp
match_fingerprints [-t start id] -o records.merge dedupe.fp

9. Create a new import XML file using the extract_loadset utility:

extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge

10. Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load:

psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" \
| perl -npe 's/^\s+//;' > used_tcns
marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f final.xml \
--used_tcn_file=used_tcns > evergreen_bre_import_file.bre

The --startid option needs to match the start id used in earlier steps and must be higher than the largest id value in the biblio.record_entry table. The --idfield option should match the MARC datafield used to store your record ids.

11. Ingest the bibliographic records into the Evergreen database.
parallel_pg_loader.pl \
-or bre \
-or mrd \
-or mfr \
-or mtfe \
-or mafe \
-or msfe \
-or mkfe \
-or msefe \
-a mrd \
-a mfr \
-a mtfe \
-a mafe \
-a msfe \
-a mkfe \
-a msefe evergreen_bre_import_file.bre > bibrecords.sql

12. Load the records using psql and the SQL scripts generated in the previous step:

psql -U evergreen -h localhost -d evergreen -f bibrecords.sql
psql -U evergreen < ~/Ever*/Open-ILS/src/extras/import/quick_metarecord_map.sql > log.create_metabib

13. Extract holdings from the MARC records for importing copies into Evergreen using the extract_holdings utility:

extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map

This command would extract holdings based on the 999 datafield in the MARC records. The copy id is generated from subfield i in the 999 datafield. You may need to adjust these options based on the field used for holdings information in your MARC records.
The map option holdings.map refers to a file to be used for mapping subfields to the holdings data you would like extracted. Here is an example based on mapping holdings data to the 999 datafield:

callnum 999 a
barcode 999 i
location 999 l
owning_lib 999 m
circ_modifier 999 t

Running the extract_holdings script should produce an SQL script HOLDINGS.pg similar to:

BEGIN;

egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier,
40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
...
Edit the HOLDINGS.pg SQL script like so:

BEGIN;

TRUNCATE TABLE staging_items;

INSERT INTO staging_items (egid, hseq, l_callnum, l_barcode, l_location,
l_owning_lib, l_circ_modifier) FROM stdin;
40 0 HD3616.K853 U54 1997 30731100751928 STACKS FENNELL BOOK
41 1 HV6548.C3 S984 1998 30731100826613 STACKS FENNELL BOOK
41 2 HV6548.C3 S984 1998 30731100804958 STACKS BRANTFORD BOOK
COMMIT;

This file can be used for importing holdings into Evergreen. The egid column is critical: it is used to link the volume and copy to the bibliographic record. Refer to the following section for the steps to import your holdings into Evergreen.

Adding Copies to Bibliographic Records

Before bibliographic records can be found in an OPAC search, copies will need to be created. It is very important to understand how the various tables relate to each other in regards to holdings maintenance.
The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example. The SQL queries may need to be adjusted for the specific data in your holdings.

1. Create a staging_items staging table to hold the holdings data:

CREATE TABLE staging_items (
  l_callnum text, -- call number label
  hseq int,
  egid int, -- biblio.record_entry_id
  createdate date,
  l_location text,
  l_barcode text,
  l_circ_modifier text,
  l_owning_lib text -- actor.org_unit.shortname
);

2. Import the items using the HOLDINGS.pg SQL script created using the extract_holdings utility:

psql -U evergreen -f HOLDINGS.pg evergreen

The file HOLDINGS.pg and/or the COPY query may need to be adjusted for your particular circumstances.

3. Generate shelving locations from your staging table:

INSERT INTO asset.copy_location (name, owning_lib)
SELECT DISTINCT l.l_location, ou.id
FROM staging_items l
  JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);

4.
+
+
+ Generate circulation modifiers from your staging table.
+
+INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media)
+ SELECT DISTINCT l_circ_modifier AS code,
+ l_circ_modifier AS name,
+ LOWER(l_circ_modifier) AS description,
+ '001' AS sip2_media_type,
+ FALSE AS magnetic_media
+ FROM staging_items
+ WHERE l_circ_modifier NOT IN (SELECT code FROM config.circ_modifier);
+
+ 5.
+
+ Generate call numbers from your staging table:
+
+INSERT INTO asset.call_number (creator,editor,record,label,owning_lib)
+SELECT DISTINCT 1, 1, egid, l.l_callnum, ou.id
+FROM staging_items l
+JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname);
+
+ 6.
+
+ Generate copies from your staging table:
+
+INSERT INTO asset.copy (
+circ_lib, creator, editor, create_date, barcode,
+STATUS, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number)
+
+SELECT DISTINCT ou.id AS circ_lib,
+ 1 AS creator,
+ 1 AS editor,
+ l.createdate AS create_date,
+ l.l_barcode AS barcode,
+ 0 AS STATUS,
+ cl.id AS location,
+ 2 AS loan_duration,
+ 2 AS fine_level,
+ l.l_circ_modifier AS circ_modifier,
+ FALSE AS deposit,
+ CASE
+ WHEN l.l_circ_modifier = 'REFERENCE' THEN TRUE
+ ELSE FALSE
+ END AS ref,
+ cn.id AS call_number
+ FROM staging_items l
+ JOIN actor.org_unit ou
+ ON (l.l_owning_lib = ou.shortname)
+ JOIN asset.copy_location cl
+ ON (ou.id = cl.owning_lib AND l.l_location = cl.name)
+ JOIN metabib.real_full_rec m
+ ON (m.value = l.egid)
+ JOIN asset.call_number cn
+ ON (ou.id = cn.owning_lib
+ AND m.record = cn.record
+ AND l.l_callnum = cn.label);
+
+ You should now have copies in your Evergreen database and should be able to search and find the bibliographic records with attached copies.
+
+
+ Migrating Patron Data
+
+
+
+ This section will explain the task of migrating your patron data from comma delimited files into Evergreen.
It does not deal with the process of exporting from the non-Evergreen
+ system since this process may vary depending on where you are extracting your patron records. Patron data could come from an ILS, or from a student database in the case of
+ academic records.
+
+ When importing records into Evergreen you will need to populate 3 tables in your Evergreen database:
+ •actor.usr - The main table for user data
+ •actor.card - Stores the barcode for users; users can have more than 1 card but only 1 can be active at a given time
+ •actor.usr_address - Used for storing address information; a user can have more than one address
+ Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy
+ for data to include
+ in your import. It is important to understand the data types and constraints on each field.
+ 1.
+
+ Export the patron data from your existing ILS or from another source into a comma delimited file. The comma delimited file used for importing
+ the records should use Unicode (UTF8) character encoding.
+ 2.
+
+ Create a staging table. A staging table will allow you to tweak the data before importing.
+ Here is an example SQL statement:
+
+
+CREATE TABLE students (
+ student_id int, barcode text, last_name text, first_name text, program_number text,
+ program_name text, email text, address_type text, street1 text, street2 text,
+ city text, province text, country text, postal_code text, phone text, profile int,
+ ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
+ net_access_level int DEFAULT 2, password text
+);
+
+ Note the DEFAULT variables. These allow you to set defaults for your library or to populate required fields if your data allows
+ NULL values where fields are required in Evergreen.
+ 3.
+
+ Some fields may require formatting to fit Evergreen field formats.
Here is an example of SQL to adjust phone numbers in the staging
+ table to fit the Evergreen field:
+
+UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
+substring(phone from 10), '(', ''), ')', ''), ' ', '-');
+
+ Data “massaging” may be required to fit formats used in Evergreen.
+ 4.
+
+ Insert records from the staging table into the actor.usr Evergreen table:
+
+ INSERT INTO actor.usr (
+ profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
+ family_name, day_phone, home_ou, claims_returned_count, net_access_level)
+ SELECT profile, students.usrname, email, student_id, ident_type, student_id,
+ first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
+ FROM students;
+
+ 5.
+
+ Insert records into actor.card from actor.usr:
+
+INSERT INTO actor.card (usr, barcode)
+ SELECT actor.usr.id, students.barcode
+ FROM students
+ INNER JOIN actor.usr
+ ON students.usrname = actor.usr.usrname;
+
+ This assumes a one-to-one card-to-patron relationship. If your patron data import has multiple cards assigned to one patron, more complex import scripts may be required which look for inactive or active flags.
+ 6.
+
+ Update the actor.usr.card field with actor.card.id to associate the active card with the user:
+
+UPDATE actor.usr
+ SET card = actor.card.id
+ FROM actor.card
+ WHERE actor.card.usr = actor.usr.id;
+
+ 7.
+
+ Insert records into actor.usr_address to add address information for users:
+
+INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
+ SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
+ students.country, students.postal_code
+ FROM students
+ INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
+
+ 8.
+
+ Update the actor.usr address fields with the address id from the address table.
+
+UPDATE actor.usr
+ SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
+ FROM actor.usr_address
+ WHERE actor.usr.id = actor.usr_address.usr;
+
+ This assumes one address per patron. More complex scenarios may require more sophisticated SQL.
+
+ Creating an SQL Script for Importing Patrons
+
+ The procedure for importing patrons can be automated with the help of an SQL script. Follow these steps to create an import script:
+
+ 1.
+
+ Create a new file and name it import.sql
+
+ 2.
+
+ Edit the file to look similar to this:
+
+BEGIN;
+
+-- Create staging table.
+CREATE TABLE students (
+ student_id int, barcode text, last_name text, first_name text, program_number text,
+ program_name text, email text, address_type text, street1 text, street2 text,
+ city text, province text, country text, postal_code text, phone text, profile int,
+ ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
+ net_access_level int DEFAULT 2, password text
+);
+
+
+--Insert records from the staging table into the actor.usr table.
+INSERT INTO actor.usr (
+ profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name,
+ day_phone, home_ou, claims_returned_count, net_access_level)
+ SELECT profile, students.usrname, email, student_id, ident_type, student_id, first_name,
+ last_name, phone, home_ou, claims_returned_count, net_access_level FROM students;
+
+--Insert records from the staging table into the actor.card table.
+INSERT INTO actor.card (usr, barcode)
+ SELECT actor.usr.id, students.barcode
+ FROM students
+ INNER JOIN actor.usr
+ ON students.usrname = actor.usr.usrname;
+
+--Update actor.usr.card field with actor.card.id to associate active card with the user:
+UPDATE actor.usr
+ SET card = actor.card.id
+ FROM actor.card
+ WHERE actor.card.usr = actor.usr.id;
+
+--Insert records into actor.usr_address from the staging table.
+INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
+ SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
+ students.country, students.postal_code
+ FROM students
+ INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
+
+
+--Update actor.usr mailing address with id from the actor.usr_address table.
+UPDATE actor.usr
+ SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
+ FROM actor.usr_address
+ WHERE actor.usr.id = actor.usr_address.usr;
+
+COMMIT;
+
+ Placing the SQL statements between BEGIN; and COMMIT; creates a transaction block so that if any SQL statements fail, the
+ entire process is canceled and the database is rolled back to its original state. Lines beginning with -- are comments to let you know what
+ each SQL statement is doing; they are not processed.
+
+
+ Batch Updating Patron Data
+
+
+ For academic libraries, doing batch updates to add new patrons to the Evergreen database is a critical task. The above procedures and
+ import script can be easily adapted to create an update script for importing new patrons from external databases. If the data import file contains only new patrons, then
+ the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information and perhaps delete
+ inactive patrons may also be desired depending on the requirements of the institution.
+ After the scripts to import and update patrons have been created, another important task for library staff is to develop an import strategy and schedule
+ which suit the needs of the library. This could be determined by registration dates of your institution in the case of academic libraries.
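To make the "remove existing patrons before importing" idea above concrete, here is a minimal sketch in Python (file layout, column names and the existing-username set are hypothetical; in practice the comparison could equally be done in SQL against actor.usr):

```python
import csv
import io

# Hypothetical sketch: given a full patron extract (CSV) and the set of
# usernames already loaded into Evergreen, keep only the genuinely new patrons.
def new_patrons(full_extract_csv, existing_usernames):
    reader = csv.DictReader(io.StringIO(full_extract_csv))
    return [row for row in reader if row["usrname"] not in existing_usernames]

extract = "usrname,barcode\nsmithj,3000001\ndoej,3000002\n"
already_loaded = {"smithj"}  # e.g. from: SELECT usrname FROM actor.usr
print([r["usrname"] for r in new_patrons(extract, already_loaded)])  # ['doej']
```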
It is important to balance
+ the convenience of automated patron loads and the cost of processing these loads against staff adding patrons manually.
+
+
+ Restoring your Evergreen Database to an Empty State
+
+ If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the
+ following:
+ 1.
+
+
+cd ILS/Open-ILS/src/sql/Pg/
+
+ 2.
+
+ Rebuild the database schema:
+
+./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version]
+
+ This will remove all of your data from the database and restore the default values.
+
+
+ Exporting Bibliographic Records into MARC files
+
+
+ The following procedure explains how to export Evergreen bibliographic records into MARC files using the marc_export support script. All steps should be performed by the
+ opensrf user from your Evergreen server.
+
+ Processing time for exporting records will depend on several factors such as the number of records you are exporting. It is recommended that you divide the
+ export id files (records.txt) into a manageable number of records if you are exporting a large number of records.
+ 1.
+
+ Create a text file list of the Bibliographic record ids you would like to export from Evergreen. One way to do this is using SQL:
+
+SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
+ JOIN asset.call_number AS acn ON acn.record = bre.id
+ WHERE bre.deleted='false' and acn.owning_lib=101 \g /home/opensrf/records.txt;
+
+ This query will create a file called records.txt containing a column of distinct ids of items owned by the organizational unit with the
+ id 101.
+ 2.
+
+ Navigate to the support-scripts folder
+
+cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/
+
+ 3.
+
+ Run marc_export, using the id file you created in step 1 to define which files to export.
+
+cat /home/opensrf/records.txt | ./marc_export -i -c /openils/conf/opensrf_core.xml \
+-x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml
+
+
+ The example above exports the records into MARCXML format.
+
+ For help or for more options when running marc_export, run marc_export with the -h option:
+
+./marc_export -h
+
+
+
+
+
+ Importing Authority Records
+
+
+ The following procedure explains how to import authority records into Evergreen. All steps should be performed by the
+ opensrf user from your Evergreen server.
+ Importing Authority Records from Command Line
+
+ The major advantages of the command line approach are its speed and its convenience for system administrators who can perform bulk loads of authority records in a
+ controlled environment.
+ 1.
+
+ Run marc2are.pl against the authority records, specifying the user name, password, and MARC type (USMARC or XML). Use
+ STDOUT redirection
+ to either pipe the output directly into the next command or into an output file for inspection. For example, to process a set of authority records
+ named auth_small.xml using the default user name and password and directing the output into a file named auth.are:
+
+cd Open-ILS/src/extras/import/
+perl marc2are.pl --user admin --pass open-ils auth_small.xml > auth.are
+
+
+ 2.
+
+ Run pg_loader.pl to generate the SQL necessary for importing the authority records into your system. To save time for very large batches of records, you could
+ simply pipe the output of marc2are.pl directly into pg_loader.pl.
+
+cd Open-ILS/src/extras/import/
+ perl pg_loader.pl --auto are --order are auth.are > auth_load.sql
+
+
+ 3.
+
+ Load the authority records from the SQL file that you generated in the last step into your Evergreen database using the psql tool.
Assuming the default user
+ name, host name, and database name for an Evergreen instance, that command looks like:
+
+psql -U evergreen -h localhost -d evergreen -f auth_load.sql
+
+
+
+
+ Importing authority records using the MARC Batch Import/Export interface from the Staff Client
+
+ Good for loading batches of up to 5,000 records (roughly) at a time, the major advantages to importing authority records using the MARC Batch Import/Export interface are
+ that it does not require command-line or direct database access – good for both security in that it minimizes the number of people who need this access and for spreading the
+ effort around to others in the library – and it does most of the work (for example, figuring out whether the batch of records is in XML or USMARC format) for you.
+ To import a set of MARC authority records from the MARC Batch Import/Export interface:
+ 1.
+
+ From the Evergreen staff client, select Cataloging → MARC Batch Import/Export.
+ The Evergreen MARC File Upload screen opens, with Import Records as the highlighted tab.
+ 2.
+
+ From the Bibliographic records drop-down menu, select Authority records.
+
+ 3.
+
+ Enter a name for the queue (batch import job) in the Create a new upload queue field.
+
+ 4.
+
+ Select the Auto-Import Non-Colliding Records checkbox.
+
+ 5.
+
+ Click the Browse… button to select the file of MARC authorities to import.
+
+ 6.
+
+ Click the Upload button to begin importing the records. The screen displays Uploading…
+ Processing… to show that the records are being transferred to the server, then displays a progress bar to show the actual import
+ progress. When the staff client displays the progress bar, you can disconnect your staff client safely. Very large batches of records might time out at this
+ stage.
+
+ 7.
+
+ Once the import is finished, the staff client displays the results of the import process.
You can manually display the import progress by selecting
+ the Inspect Queue tab of the MARC Batch Import/Export interface and selecting the queue name. By default, the staff client does not
+ display records that were imported successfully; it only shows records that conflicted with existing entries in the database. The screen shows the overall
+ status of the import process in the top right-hand corner, with the Total and Imported number of records for the
+ queue.
+
+
+
+
+
+ Chapter 8. Server Operations and Maintenance
+ Report errors in this documentation using Launchpad.
+ Abstract
+ This chapter deals with basic server operations such as starting and stopping Evergreen as well as
+ security, backing up and troubleshooting Evergreen.
+
+ Starting, Stopping and Restarting
+
+ Occasionally, you may need to restart Evergreen. It is imperative that you understand the basic
+ commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of
+ the server using the osrf_ctl.sh script located in the
+ openils/bin directory.
+ The osrf_ctl.sh command must be run as the opensrf user.
+ To view help on osrf_ctl.sh and get all of its options, run:
+ osrf_ctl.sh -h
+ To start Evergreen, run:
+ osrf_ctl.sh -l -a start_all
+ The -l flag is used to indicate that Evergreen is configured to use localhost as
+ the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a
+ option is required and indicates the action of the command. In this case
+ start_all.
+
+
+ If you receive the error message: osrf_ctl.sh: command not found, then your environment variable
+ PATH does not include the
+ /openils/bin directory.
You can set it using the following command:
+ export PATH=$PATH:/openils/bin
+ If you receive the error message Can't locate OpenSRF/System.pm in @INC … BEGIN
+ failed–compilation aborted, then your environment variable PERL5LIB does not
+ include the /openils/lib/perl5 directory. You can set it
+ using the following command:
+ export PERL5LIB=$PERL5LIB:/openils/lib/perl5
+
+ It is also possible to start a specific service. For example:
+ osrf_ctl.sh -l -a start_router
+ will only start the router service.
+
+ If you decide to start each service individually, you need to start them in a specific order
+ for Evergreen to start correctly. Run the commands in this exact order:
+ osrf_ctl.sh -l -a start_router
+ osrf_ctl.sh -l -a start_perl
+ osrf_ctl.sh -l -a start_c
+
+ After starting or restarting Evergreen, it is also necessary to restart the Apache web server
+ for the OPAC to work correctly.
+ To stop Evergreen, run:
+ osrf_ctl.sh -l -a stop_all
+ As with starting, you can choose to stop services individually.
+ To restart Evergreen, run:
+ osrf_ctl.sh -l -a restart_all
+
+ Backing Up
+
+
+
+ Backing up your system files and data is a critical task for server and database administrators.
+ Having a strategy for backup and recovery could be the difference between a minor annoyance for users and
+ a complete catastrophe.
+ Backing up the Evergreen Database
+
+ Most of the critical data for an Evergreen system – patrons, bibliographic records, holdings,
+ transactions, bills – is stored in the PostgreSQL database. You can therefore use normal
+ PostgreSQL backup procedures to back up this data. For example, the simplest method of backing up the Evergreen
+ database is to use the pg_dump command to create a live backup of the database without having to
+ interrupt any Evergreen services.
Here is an example pg_dump command which will dump a local Evergreen database into the file evergreen_db.backup:
+ pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen
+ To restore the backed up database into a new database, create a new database using the
+ template0 database template and the UTF8 encoding, and run the psql command, specifying the new
+ database as your target:
+ createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen
+ psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen
+
+ This method of backup is only suitable for small Evergreen instances. Larger sites
+ should consider implementing continuous archiving (also known as “log shipping”) to provide
+ more granular backups with lower system overhead. More information on backing up PostgreSQL
+ databases can be found in the official PostgreSQL documentation.
+
+
+ Backing up Evergreen Files
+
+
+ When you deploy Evergreen, you will probably customize many aspects of your system including
+ the system configuration files, Apache configuration files, OPAC and Staff Client. In order to
+ protect your investment of time, you should carefully consider the best approach to backing up
+ files.
+ There are a number of ways of tackling this problem. You could create a script that regularly
+ creates a time-stamped tarball of all of these files and copies it to a remote server - but that
+ would build up over time to hundreds of files. You could use rsync
+ to ensure that the files of
+ interest are regularly updated on a remote server - but then you would lose track of the changes to
+ the files, should you make a change that introduces a problem down the road.
+ Perhaps one of the best options is to use a version control system like
+ Bazaar,
+ git
+ or Subversion to regularly push updates of the files you care about to a repository on a
+ remote server.
This gives you the advantage of quickly being able to run through the history of the
+ changes you made, with a commenting system that reminds you why each change was made, combined with
+ remote storage of the pertinent files in case of disaster on site. In addition, your team can create
+ local copies of the repository and test their own changes in isolation from the production
+ system. Using a version control system also helps to recover system customizations after an
+ upgrade.
+
+ Full System Backup
+
+ A full system backup archives every file on the file system. Some basic methods require you
+ to shut down most system processes; other methods can use mirrored RAID setups or
+ SAN storage to
+ take “snapshot” backups of your full system while the system continues to run. The subject of how
+ to implement full system backups is beyond the scope of this documentation.
+
+
+ Security
+
+
+ As with any ILS or resource accessible from the world wide web, careful consideration needs to be
+ given to the security of your Evergreen servers and database. While it is impossible to cover all aspects
+ of security, it is important to take several precautions when setting up a production Evergreen site.
+ 1.
+ Change the Evergreen admin password and keep it secure. The
+ default admin password is known by anyone who has installed Evergreen. It is not a secret
+ and needs to be changed by the Administrator. It should also only be shared by those who
+ need the highest level of access to your system.
+ 2.
+ Create strong passwords using a combination of numerical and alphabetical characters
+ for all of the administrative passwords, including the postgres and
+ opensrf users.
+ 3.
+ Open ports in the firewall with caution - it is only necessary to open ports
+ 80 and 443
+ for TCP connections to the Evergreen server from the OPAC and the staff client.
It is critical for administrators to
+ understand the concepts of network security and take precautions to minimize vulnerabilities.
+
+ 4.
+ Use permissions and permission groups wisely - it is important to understand the
+ purpose of the permissions and to only give users the level of access that they require.
+
+
+
+ Managing Log Files
+
+
+ Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF
+ and Evergreen logs. This section will provide a couple of log management techniques and tools.
+ Using the logrotate Utility to Manage Log Size
+
+
+ Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control.
+ On Debian and Ubuntu, for example,
+ the logrotate utility controls when old log files are compressed and a new log file is started.
+ logrotate runs once a day and checks all log files that it knows about to see if a
+ threshold of time or size has been reached, and rotates the log files if a threshold condition has been met.
+ To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are > 50MB in size,
+ create a new file /etc/logrotate.d/evergreen with the following contents:
+
+compress
+/openils/var/log/*.log {
+    # keep the last 4 archived log files along with the current log file
+    # log log.1.gz log.2.gz log.3.gz log.4.gz
+    # and delete the oldest log file (what would have been log.5.gz)
+    rotate 5
+    # if the log file is > 50MB in size, rotate it immediately
+    size 50M
+    # for those logs that don't grow fast, rotate them weekly anyway
+    weekly
+}
+
+
+ Changing Logging Level for Evergreen
+
+
+ Change the log levels in your config files. Changing the level of logging will help
+ narrow down errors.
+
+ Running at a high logging level is not wise in a production environment, since it
+ will produce vastly larger log files and thus reduce server performance.
+
+ Change logging levels by editing the configuration file
+ /openils/conf/opensrf_core.xml;
+ you will want to search for lines containing <loglevel>.
+ The default setting for loglevel is 3, which will log errors,
+ warnings and information.
+ The next level is 4, which is for debugging and provides additional information
+ helpful for the debugging process.
+ Thus, lines with:
+ <loglevel>3</loglevel>
+ should be changed to:
+ <loglevel>4</loglevel>
+ to allow debugging-level logging.
+ Other logging levels include 0 for no logging,
+ 1 for logging errors and 2 for logging warnings
+ and errors.
+
+
+ Installing PostgreSQL from Source
+
+
+ Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL
+ version 8.2 as an installable package. Before you continue, examine the software dependencies listed in Table 5.1, “Evergreen Software Dependencies”
+ to ensure that your Linux distribution supports the required version of PostgreSQL.
+
+
+ 1.
+
+ Install the application stow on your system if it is not already installed. Issue the following command as
+ the root user:
+
+apt-get install stow
+
+ 2.
+
+ Download, compile, and install the latest release for PostgreSQL 8.2 (which was version 8.2.17 at the time of this writing).
+ As the root user, follow these steps:
+
+
+
+wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2
+tar xjf postgresql-8.2.17.tar.bz2
+cd postgresql-8.2.17
+./configure --with-perl --enable-integer-datetimes --with-openssl --prefix=/usr/local/stow/pgsql
+make
+make install
+cd contrib
+make
+make install
+cd xml2
+make
+make install
+cd /usr/local/stow
+stow pgsql
+
+
+
+ 3.
+
+ Create the new user postgres to run the PostgreSQL processes.
+ As the root user, execute this command:
+ adduser postgres
+ 4.
+
+ Initialize the database directory and start up PostgreSQL. As the root user, follow these steps:
+
+
+
+mkdir -p /usr/local/pgsql/data
+chown postgres /usr/local/pgsql/data
+su - postgres
+initdb -D /usr/local/pgsql/data -E UNICODE --locale=C
+pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start
+
+
+
+ If an error occurs during the final step above, review the path of the home directory for the
+ postgres user. It may be /var/lib/postgresql instead of /home/postgres.
+
+
+
+ Configuring PostgreSQL
+
+
+ The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values
+ and some suggested updates for several useful parameters:
+ Table 8.1. Suggested configuration values
+
+ Parameter                   Default   Suggested
+ default_statistics_target   10        100
+ work_mem                    4Mb       128Mb
+ shared_buffers              8Mb       512Mb
+ effective_cache_size        128Mb     4Gb
+
+
+ Chapter 9. SIP Server
+ Report errors in this documentation using Launchpad.
+
+ SIP, standing for Standard Interchange Protocol, was developed by the
+ 3M corporation to be a common protocol for data transfer between ILSs
+ (referred to in SIP as an ACS, or Automated Circulation System)
+ and a
+ third party device.
Originally, the protocol was developed for
+ use with 3M SelfCheck (often abbreviated SC, not to be confused with Staff Client) systems, but has since expanded to other companies and devices. It is now common to find
+ SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices.
+ Some examples include:
+ •Patron Authentication (computer access, subscription databases)
+ •Automated Material Handling (AMH) - The automated sorting of items, often to bins or
+ book carts, based on shelving location or other programmable criteria
+
+ Installing the SIP Server
+
+ This is a rough intro to installing the SIP server for Evergreen.
+ Getting the code
+
+ Current SIP code lives at GitHub:
+ cd /opt
+ git clone git://github.com/atz/SIPServer.git SIPServer
+ Or use the old style:
+ $ cd /opt
+ $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login
+ When prompted for the CVS password, just hit Enter (sudo password may be req'd)
+ $ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer
+
+
+ Configuring the Server
+
+ 1.
+
+ Type the following commands from the command prompt:
+ $ sudo su opensrf
+ $ cd /openils/conf
+ $ cp oils_sip.xml.example oils_sip.xml
+ 2.
+
+ Edit oils_sip.xml.
+ Change the commented out <server-params> section to this:
+
+<server-params
+min_servers='1'
+min_spare_servers='0'
+max_servers='25'
+/>
+
+ 3.
+
+ max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but
+ bear in mind that too many connections can
+ exhaust memory. On a 4G RAM/4 CPU server (that is also running Evergreen), it is not recommended to exceed 100
+ SIP client connections.
+
+
+ Adding SIP Users
+
+ 1.
+
+ Type the following commands from the command prompt:
+ $ sudo su opensrf
+ $ cd /openils/conf
+ $ cp oils_sip.xml.example oils_sip.xml
+ 2.
+
+
+ In the <accounts> section, add SIP client login information. Make sure that all
+ <logins> use the same institution attribute, and make
+ sure the institution is listed in <institutions>. All attributes in the <login> section will be
+ used by the SIP client.
+
+ 3.
+
+ In Evergreen, create a new profile group called SIP.
+ This group should be a sub-group of Users
+ (not Staff or Patrons).
+ Set Editing Permission as group_application.user.sip_client and give the group the following permissions:
+
+ COPY_CHECKIN
+ COPY_CHECKOUT
+ RENEW_CIRC
+ VIEW_CIRCULATIONS
+ VIEW_COPY_CHECKOUT_HISTORY
+ VIEW_PERMIT_CHECKOUT
+ VIEW_USER
+ VIEW_USER_FINES_SUMMARY
+ VIEW_USER_TRANSACTIONS
+
+ OR use SQL like:
+
+
+INSERT INTO permission.grp_tree (id,name,parent,description,application_perm)
+VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client');
+
+INSERT INTO permission.grp_perm_map (grp,perm,depth)
+VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),(8,82,0);
+
+
+
+ Verify:
+
+
+SELECT *
+FROM permission.grp_perm_map JOIN permission.perm_list ON
+permission.grp_perm_map.perm=permission.perm_list.id
+WHERE grp=8;
+
+
+
+ Keep in mind that the id (8) may not necessarily be available on your system.
+ 4.
+
+ For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor)
+ that has the same username
+ and password and put that user into the SIP group.
+ The expiration date will affect the SIP users' connection so you might want to make a note of
+ this somewhere.
+
+
+ Running the server
+
+ To start the SIP server, type the following commands from the command prompt:
+ $ sudo su opensrf
+ $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip
+
+ Logging-SIP
+
+ Syslog
+
+ It is useful to log SIP requests to a separate file, especially during initial setup, by modifying your
+ syslog config file.
+ 1.
Edit syslog.conf:
$ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf
2.
Add this:
local6.* -/var/log/SIP_evergreen.log
3.
Syslog expects the logfile to exist, so create the file:
$ sudo touch /var/log/SIP_evergreen.log
4.
Restart sysklogd:
$ sudo /etc/init.d/sysklogd restart

Syslog-NG

1.
Edit the logging config:
sudo vi /etc/syslog-ng/syslog-ng.conf
2.
Add:

# SIP2 for Evergreen
filter f_eg_sip { level(warn, err, crit) and facility(local6); };
destination eg_sip { file("/var/log/SIP_evergreen.log"); };
log { source(s_all); filter(f_eg_sip); destination(eg_sip); };

3.
Syslog-ng expects the logfile to exist, so create the file:
$ sudo touch /var/log/SIP_evergreen.log
4.
Restart syslog-ng:
$ sudo /etc/init.d/syslog-ng restart

Testing Your SIP Connection

•
In the top-level CVS checkout of the SIPServer code:
$ cd SIPServer/t
•
Edit SIPtest.pm, changing the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you'll need to change all the variables in the Configuration section.
$ PERL5LIB=../ perl 00sc_status.t
This should produce something like:

1..4
ok 1 - Invalid username
ok 2 - Invalid username
ok 3 - login
ok 4 - SC status

•
Don't be dismayed at Invalid Username. That's just one of the many tests that are run.

More Testing

1.
Once you have opened up either the SIP or SIP2 ports to be accessible from outside, you can do some testing via telnet. You can try this with localhost if you wish, but we want to prove that SIP2 works from non-localhost. Replace the $instid, $server, $barcode, $username, and $password variables below as necessary. We are using port 6001 here, which is associated with SIP2 as per our configuration.

$ telnet $server 6001
Connected to $server.
Escape character is '^]'.
9300CN**$username**|CO**$password**|CP**$instid**

You should get back:
941
2.
Now just copy in the following line (with variables replaced); you don't need to hit enter, just paste!
2300120080623    172148AO**$instid**|AA**$barcode**|AC$password|AD**$password**
You will get back the patron information for $barcode (something similar to what's below).
24  Y           00120100113    170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY
|BHUSD|BV0.00|AFOK|AO**$instid**|

The response declares it is a valid patron (BLY) with a valid password (CQY) and shows the user's name.

SIP Communication

SIP generally communicates over a TCP connection (either raw sockets or over telnet), but can also communicate via serial connections and other methods. In Evergreen, the most common deployment is a raw socket connection on port 6001.
SIP communication consists of strings of messages. Each message request and response begins with a 2-digit "command": requests are usually odd numbers, and responses are usually the request number incremented by 1 to make an even number. The combination of a request command and its response is often referred to as a Message Pair (for example, a 23 command is a request for patron status, a 24 response is a patron status, and the message pair 23/24 is the patron status message pair). The table in the next section shows the message pairs and a description of each.
For clarification, the "Request" is from the device (selfcheck or otherwise) to the ILS/ACS; the response is simply the reply to that request.
Within each request and response, a number of fields (either fixed-width, or separated with a | [pipe symbol] and preceded by a 2-character field identifier) are used. The fields vary between message pairs.
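The variable-length portion of a message can be pulled apart mechanically. The following is an illustrative Python sketch, not part of the Evergreen SIPServer code (which is written in Perl); it only handles the pipe-delimited fields, since the fixed-width prefix differs per message type.

```python
# Illustrative sketch: split the pipe-delimited portion of a SIP2 message
# into {field_id: value} pairs. Each variable-length field starts with a
# 2-character identifier (AA, AE, BL, ...) followed by its value.
# Fixed-length fields precede the first variable field and must be parsed
# per message type, so they are out of scope here.

def parse_variable_fields(message: str) -> dict:
    """Return {field_id: value} for the pipe-delimited portion of a message."""
    fields = {}
    for chunk in message.split("|"):
        if len(chunk) >= 2:
            # keep the first occurrence if a field id repeats
            fields.setdefault(chunk[:2], chunk[2:])
    return fields

# Field values taken from the 24 response examples in this chapter.
sample = "AA999999|AEDoug Fiander|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|"
parsed = parse_variable_fields(sample)
print(parsed["AE"])  # Doug Fiander
print(parsed["BL"])  # Y
```

In a real client you would first strip the message's fixed-width header (command code, status flags, date) before handing the remainder to a parser like this.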
Pair   Name                 Supported?              Details
01     Block Patron         Yes                     ACS responds with 24 Patron Status Response
09/10  Checkin              Yes (with extensions)
11/12  Checkout             Yes (no renewals)
15/16  Hold                 No
17/18  Item Information     Yes (no extensions)
19/20  Item Status Update   No                      Returns Patron Enable response, but doesn't make any changes in EG
23/24  Patron Status        Yes                     63/64 "Patron Information" preferred
25/26  Patron Enable        No                      Used during system testing and validation
29/30  Renew                No (maybe?)
35/36  End Session          Yes
37/38  Fee Paid             No
63/64  Patron Information   Yes (no extensions)
65/66  Renew All            No
93/94  Login                Yes                     Must be first command to Evergreen ACS (via socket) or SIP will terminate
97/96  Resend last message  Yes
99/98  SC/ACS Status        Yes

01 Block Patron

A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as attempts to disable multiple items during a single item checkout, multiple failed PIN entries, etc.).
In Evergreen, this command does the following:
•User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL Blocked Card Message field).
•Card is marked inactive.
The request looks like:
01<card retained><date>[fields AO, AL, AA, AC]
Card Retained: a single character field of Y or N - tells the ACS whether the SC has retained the card (e.g. left in the machine) or not.
Date: an 18-character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone - 4 blanks when local time; "   Z" (3 blanks and a Z) represents UTC (GMT/Zulu)).
Fields: See Fields for more details.
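The request layout above, including the 18-character timestamp, can be sketched in code. This is an illustrative Python example, not the Perl SIPServer implementation; the AL message text is a made-up value for demonstration.

```python
from datetime import datetime

# Illustrative sketch: assemble a 01 Block Patron request as described
# above. The 18-character timestamp is YYYYMMDDZZZZHHMMSS, where the
# 4-character zone is blank for local time.

def sip_timestamp(dt: datetime) -> str:
    return dt.strftime("%Y%m%d") + "    " + dt.strftime("%H%M%S")

def block_patron(card_retained: str, inst: str, blocked_msg: str,
                 patron_barcode: str, terminal_pwd: str) -> str:
    ts = sip_timestamp(datetime(2010, 5, 7, 16, 59, 37))
    return (f"01{card_retained}{ts}"
            f"AO{inst}|AL{blocked_msg}|AA{patron_barcode}|AC{terminal_pwd}|")

msg = block_patron("N", "BR1", "Tampering detected", "999999", "sip_01")
print(msg)
# 01N20100507    165937AOBR1|ALTampering detected|AA999999|ACsip_01|
```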
The response is a 24 "Patron Status Response" with the following:
•Charge privileges denied
•Renewal privileges denied
•Recall privileges denied (hard-coded in every 24 or 64 response)
•Hold privileges denied
•Screen Message 1 (AF): blocked
•Patron

09/10 Checkin

The request looks like:
09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI]
No Block (Offline): a single character field of Y or N - offline transactions are not currently supported, so send N.
xact date: an 18-character field for the date/time when the checkin occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone - 4 blanks when local time; "   Z" (3 blanks and a Z) represents UTC (GMT/Zulu)).
Fields: See Fields for more details.
The response is a 10 "Checkin Response" with the following:
10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG]
Example (with a remote hold):
09N20100507    16593720100507    165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|

101YNY20100623    165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996
|CTBR3|CY373827|DANicholas Richard Woodard|CV02|

Here you can see a hold alert for patron CY 373827, named DA Nicholas Richard Woodard, to be picked up at CT "BR3". Since the transaction is happening at AO "BR1", the alert type CV is 02 for hold at a remote library. The possible values for CV are:
•00: unknown
•01: local hold
•02: remote hold
•03: ILL transfer (not used by EG)
•04: transfer
•99: other

The logic Evergreen uses to determine whether the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. The default is non-magnetic. The same is true for media_type (default 001). Evergreen does not populate collection_code because it does not really have any, but it will provide the call_number where available.
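The CV values enumerated above lend themselves to a small lookup table. This is an illustrative Python sketch for a hypothetical client, using only the alert-type codes listed in this section.

```python
# Illustrative sketch: decode the CV (alert type) field from a 10 Checkin
# Response, using the values enumerated above.

CV_ALERT_TYPES = {
    "00": "unknown",
    "01": "local hold",
    "02": "remote hold",
    "03": "ILL transfer (not used by EG)",
    "04": "transfer",
    "99": "other",
}

def alert_type(response: str) -> str:
    """Find the CV field in a pipe-delimited 10 response and describe it."""
    for chunk in response.split("|"):
        if chunk.startswith("CV"):
            return CV_ALERT_TYPES.get(chunk[2:4], "unrecognized")
    return "no alert field"

# Response text taken from the remote-hold example above.
resp = ("101YNY20100623    165731AOBR1|AB1565921879|AQBR1|CTBR3"
        "|CY373827|DANicholas Richard Woodard|CV02|")
print(alert_type(resp))  # remote hold
```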
Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID.
Don't be confused by the different branches that can show up in the same response line:
•AO is where the transaction took place,
•AQ is the "permanent location", and
•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer).

11/12 Checkout

15/16 Hold

Not yet supported.

17/18 Item Information

The request looks like:
17<xact_date>[fields: AO,AB,AC]
The request is very terse. AC is optional.
The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.)

18<circulation_status><security_marker><fee_type><xact_date>
[fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS]

Example:
1720060110    215612AOBR1|ABno_such_barcode|
1801010120100609    162510ABno_such_barcode|AJ|
1720060110    215612AOBR1|AB1565921879|
1810020120100623    171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1
|CTBR3|CSQA76.73.P33V76 1996|

The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10, for in transit between libraries. The known values of circulation_status are enumerated in the spec.
EXTENSIONS: The CT field for destination location and the CS call number field are used by Automated Material Handling systems.
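The fixed-width header of the 18 response can be sliced by position. This is an illustrative Python sketch; the field widths (2 characters each for circulation_status, security_marker, and fee_type, then an 18-character date) are inferred from the examples above rather than quoted from the spec, so check your copy of the spec before relying on them.

```python
# Illustrative sketch: pull the fixed-width header out of an 18 Item
# Information Response. Widths inferred from the examples in this section:
# "18" + circulation_status(2) + security_marker(2) + fee_type(2) + date(18).

def parse_18_header(resp: str) -> dict:
    assert resp[:2] == "18", "not an Item Information Response"
    return {
        "circulation_status": resp[2:4],
        "security_marker": resp[4:6],
        "fee_type": resp[6:8],
        "xact_date": resp[8:26],
    }

# Response text taken from the example above.
resp = ("1810020120100623    171415AB1565921879"
        "|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1|CTBR3|")
hdr = parse_18_header(resp)
print(hdr["circulation_status"])  # 10 (in transit between libraries)
```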
19/20 Item Status Update

23/24 Patron Status

Example:
2300120060101    084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password|
24YYYY          00120100507    013934AE|AAbad_barcode|BLN|AOUWOLS|
2300120060101    084235AOCONS|AA999999|ACsip_01|ADbad_password|
24  Y           00120100507    022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|
2300120060101    084235AOCONS|AA999999|ACsip_01|ADuserpassword|
24  Y           00120100507    022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|
1. The BL field (SIP2, optional) is "valid patron", so the N value means bad_barcode doesn't match a patron, and the Y value means 999999 does.
2. The CQ field (SIP2, optional) is "valid password", so the N value means bad_password doesn't match 999999's password, and the Y means userpassword does.
So if you were building the most basic SIP2 authentication client, you would check for |CQY| in the response to know the user's barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in authentication, like whether the user is blocked from checkout, owes excessive fines, has reported their card lost, etc. These limitations are reflected in the 14-character patron status string immediately following the 24 code. See the field definitions in your copy of the spec.

25/26 Patron Enable

Not yet supported.

29/30 Renew

The Evergreen ACS status message indicates renew is supported.

35/36 End Session

3520100505    115901AOBR1|AA999999|
36Y20100507    161213AOCONS|AA999999|AFThank you!|
The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or important in this context, and for Evergreen it is hardcoded Y.

37/38 Fee Paid

Not implemented.
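The "most basic SIP2 authentication client" check described under 23/24 Patron Status can be sketched as follows. This is an illustrative Python example, not the SIPServer code; as the text above notes, a production client should also inspect the 14-character patron status string.

```python
# Illustrative sketch of the basic SIP2 authentication check described
# above: the patron's barcode and password are correct when the 24
# response carries CQY (valid password), which implies BLY (valid patron).

def patron_authenticated(response_24: str) -> bool:
    return "|CQY|" in response_24

# 24 responses taken from the examples above (good vs. bad password).
ok = ("24  Y           00120100507    022803AEDoug Fiander|AA999999"
      "|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|")
bad = ("24  Y           00120100507    022318AEDoug Fiander|AA999999"
       "|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|")
print(patron_authenticated(ok))   # True
print(patron_authenticated(bad))  # False
```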
63/64 Patron Information

Attempting to retrieve patron info with a bad barcode:
6300020060329    201700          AOBR1|AAbad_barcode|
64YYYY          00020100623    141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1|
Attempting to retrieve patron info with a good barcode (but a bad patron password):
6300020060329    201700          AOBR1|AA999999|ADbadpwd|

64  Y           00020100623    141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00
|BD2 Meadowvale Dr. St Thomas, ON Canada

90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons
|PIUnfiltered|AFOK|AOBR1|

See 23/24 Patron Status for info on the BL and CQ fields.

65/66 Renew All

Not yet supported.

93/94 Login

Example:
9300CNsip_01|CObad_value|CPBR1|
[Connection closed by foreign host.]
...
9300CNsip_01|COsip_01|CPBR1|
941
941 means successful terminal login. 940 or getting dropped means failure.

97/96 Resend

99/98 SC and ACS Status

99<status code><max print width><protocol version>
All 3 fields are required:
•status code - 1 character: 0 (SC is OK), 1 (SC is out of paper), 2 (SC shutting down)
•max print width - 3 characters - the integer number of characters the client can print
•protocol version - 4 characters - x.xx

98<on-line status><checkin ok><checkout ok><ACS renewal policy>
<status update ok><offline ok><timeout period>
<retries allowed><date/time sync><protocol version><institution id>
<library name><supported messages><terminal location><screen message><print line>

Example:
9910302.00
98YYYYNN60000320100510    1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|
The Supported Messages field BX appears only in SIP2, and specifies whether each of 16 different SIP commands is supported by the ACS or not.

Fields

All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple parsing.
Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should already know the exact position where that field begins.

Chapter 10. SRU and Z39.50 Server
Report errors in this documentation using Launchpad.

Evergreen is extremely scalable and can serve the needs of a large range of libraries. The specific requirements and configuration of your system should be determined based on the specific needs of your organization or consortium.

Testing SRU with yaz-client

yaz-client is installed as part of Index Data's YAZ software. Recent versions include support for querying SRU servers. Evergreen ships an SRU configuration that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command.
Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest edition of yaz from http://www.indexdata.com/yaz.
In the following example, we connect to the Evergreen test server dev.gapines.org; substitute this hostname with your own Evergreen server hostname:
$ yaz-client http://dev.gapines.org/opac/extras/sru
Z> sru GET 1.1
Z> find hemingway

If your database has records that match that term, you will get the corresponding MARCXML records in your response from yaz-client.
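The same search can be issued as a plain HTTP searchRetrieve request. The following is an illustrative Python sketch of building the query URL and reading the hit count; the XML snippet is an abbreviated, hypothetical response (not captured from a live server), and a real client would fetch the URL with an HTTP library instead.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Illustrative sketch: build an SRU searchRetrieve URL for Evergreen's
# /opac/extras/sru endpoint and extract numberOfRecords from a response.

def sru_url(base: str, query: str, maximum_records: int = 0) -> str:
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": query,
        "maximumRecords": maximum_records,
    }
    return base + "?" + urlencode(params)

print(sru_url("http://dev.gapines.org/opac/extras/sru", "hemingway"))

# Abbreviated, hypothetical SRU 1.1 response for demonstration only.
SRU_NS = "http://www.loc.gov/zing/srw/"
sample_response = f"""<searchRetrieveResponse xmlns="{SRU_NS}">
  <version>1.1</version>
  <numberOfRecords>4</numberOfRecords>
</searchRetrieveResponse>"""

root = ET.fromstring(sample_response)
count = int(root.findtext(f"{{{SRU_NS}}}numberOfRecords"))
print(count)  # 4
```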
Here's what the SRU request looks like as sent to the Evergreen web server:
GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0
You can see what the response looks like by hitting the same URL in your Web browser:
http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0

CQL queries
Evergreen supports some CQL index-sets for advanced queries, such as a subset of Dublin Core (DC) elements. The supported DC elements map to Evergreen default indexes as follows:

DC element    Evergreen index
title         title
creator       author
contributor   author
publisher     keyword
subject       subject
identifier    keyword
type          none
format        none
language      lang

Here are a few examples of SRU searches against some of these indexes:
•dc.title all "complete dinosaur"
•dc.subject all "britain france"
•dc.title exact "The Empire Strikes Back"
•dc.author=king and dc.title=zone

Setting up Z39.50 server support

You must have Evergreen's SRU server running before you can enable Z39.50 server support.

This support uses a Z39.50-to-SRU translator service supplied by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. You could run the Z39.50 server on a different machine; it just needs to be able to connect to the Evergreen SRU server.

Setting up the Z39.50 server
1.
Install a recent version of yaz (the Makefile.install should have installed a suitable version).
2.
Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM).
3.
Create a Simple2ZOOM configuration file. Something like the following is a good start, and is based on the Simple2ZOOM documentation example.
We'll name the file dgo.conf for our example:

<client>
  <database name="gapines">
    <zurl>http://dev.gapines.org/opac/extras/sru</zurl>
    <option name="sru">get</option>
    <charset>marc-8</charset>
    <search>
      <querytype>cql</querytype>
      <map use="4"><index>eg.title</index></map>
      <map use="7"><index>eg.keyword</index></map>
      <map use="8"><index>eg.keyword</index></map>
      <map use="21"><index>eg.subject</index></map>
      <map use="1003"><index>eg.author</index></map>
      <map use="1018"><index>eg.publisher</index></map>
      <map use="1035"><index>eg.keyword</index></map>
      <map use="1016"><index>eg.keyword</index></map>
    </search>
  </database>
</client>

You can have multiple <database> sections in a single file, each pointing to a different scope of your consortium. The name attribute on the <database> element is used in your Z39.50 connection string to name the database. The <zurl> element must point to http://hostname/opac/extras/sru. As of Evergreen 1.6, you can append an optional organization unit shortname for search scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings.
4.
Create a YAZ configuration file that tells the Z39.50 server how to convert the MARCXML records returned by the SRU server into MARC21. The simple2zoom command in the next step refers to this file as xml2marc-yaz.cfg:

<yazgfs>
  <server id="server1">
    <retrievalinfo>
      <retrieval syntax="xml"/>
      <retrieval syntax="marc21">
        <backend syntax="xml">
          <marc inputformat="xml" outputformat="marc" inputcharset="utf-8" outputcharset="marc-8"/>
        </backend>
      </retrieval>
    </retrievalinfo>
  </server>
</yazgfs>

5.
Run simple2zoom as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. If you do not specify a port, it will automatically run on port 9999. In the following example, we tell it to listen both to localhost on port 2210 and on dev.gapines.org on port 210:
simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210

To test the Z39.50 server, we can use yaz-client again:

yaz-client
Z> open localhost:2210/gapines
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID : 81/81
Name : Simple2ZOOM Universal Gateway/GFS/YAZ
Version: 1.03/1.128/3.0.34
Options: search present delSet triggerResourceCtrl scan sort namedResultSets
Elapsed: 0.010718
Z> format marcxml
Z> find "dc.title=zone and dc.author=king"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 0, setno 4
records returned: 0
Elapsed: 0.611432
Z> find "dead zone"
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 4, setno 5
records returned: 0
Elapsed: 1.555461
Z> show 1
Sent presentRequest (1+1).
Records: 1
[]Record type: XML
<record xmlns:... (rest of record deliberately truncated)

Chapter 11. Troubleshooting System Errors
Report errors in this documentation using Launchpad.

If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause of, and solution to, most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.

Systematic Evergreen Restart to Isolate Errors
1.
Stop Apache:
/etc/init.d/apache2 stop
or
apache2ctl stop
2.
Stop OpenSRF:
osrf_ctl.sh -l -a stop_all
You should get output similar to this:

Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...

Or, if services have already been stopped, output may look like this:
OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running
Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so we should check to make sure that none are still running with the command:
ps -aef | grep OpenSRF
You should manually kill any remaining OpenSRF processes.
If you were unable to stop OpenSRF with the above methods, you could also try this command:
rm -R /openils/var/run/*.pid
This will remove the temporary OpenSRF process files from the run directory, which may have been left over from a previous system boot cycle.
3.
Restart Ejabberd and Memcached with the following commands:
sudo /etc/init.d/ejabberd restart
sudo /etc/init.d/memcached restart
4.
Start the OpenSRF router and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_router
If the router started correctly, the output will be:
Starting OpenSRF Router
If the router does not start correctly, you should check the router error log files for error information. Evergreen 1.6 uses two routers, a public one and a private one, with two different logfiles:
/openils/var/log/private.router.log
/openils/var/log/public.router.log
A quick way to find error information in the logs is with the grep command:
grep ERR /openils/var/log/*router.log
As a final sanity check, look for router processes using the process status command:
ps -aef | grep Router
5.
Start the OpenSRF Perl services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_perl
You should see output similar to the following:

Starting OpenSRF Perl
* starting all services for ...
* starting service pid=7484 opensrf.settings
* starting service pid=7493 open-ils.cat
* starting service pid=7495 open-ils.supercat
* starting service pid=7497 open-ils.search
* starting service pid=7499 open-ils.circ
* starting service pid=7501 open-ils.actor
* starting service pid=7502 open-ils.storage
...

If the Perl services do not start correctly or you receive errors, search for errors in the following log files:
•/openils/var/log/router.log
•/openils/var/log/osrfsys.log
At this point you can use the grep command to find errors in any of the Evergreen log files:
grep ERR /openils/var/log/*.log
As a final sanity check, look for OpenSRF processes:
ps -aef | grep -i opensrf
6.
Start the OpenSRF C services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_c
The output should be:
Starting OpenSRF C (host=localhost)
If the C services do not start, check for errors by grepping the log files:
grep ERR /openils/var/log/*.log
Check for OpenSRF processes:
ps -aef | grep -i opensrf
7.
Smoke test with autogen.sh.
The autogen tool will take some dynamic information from the database and generate static JavaScript files for use by the OPAC and staff client. It is also able to refresh the proximity map between libraries for the purpose of efficiently routing hold requests.
As the opensrf user, invoke autogen with the command:
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
If autogen completes successfully, the output will be:

Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
removing OrgTree from the cache...
Updating OrgTree HTML
Updating locales selection HTML
Updating Search Groups
Refreshing proximity of org units
Successfully updated the organization proximity
Done

If autogen does not complete its task and you receive errors, use grep to find errors in the log files:
grep ERR /openils/var/log/*.log
8.
Connect to Evergreen using the srfsh command-line OpenSRF client:
/openils/bin/srfsh

In order to connect using srfsh, you will need to have set up the .srfsh.xml configuration file in your home directory, as described in the installation chapter.

You will then see the srfsh prompt:
srfsh#
At the srfsh prompt, enter this command:
login admin open-ils
You should then see the request verification:

Received Data: "6f63ff5542da1fead4431c6c280efc75"
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.018414
------------------------------------

Received Data: {
"ilsevent":0,
"textcode":"SUCCESS",
"desc":" ",
"pid":7793,
"stacktrace":"oils_auth.c:312",
"payload":{
"authtoken":"28804ebf99508496e2a4d2593aaa930e",
 "authtime":420.000000
}
}

------------------------------------
Request Completed Successfully
Request Time in seconds: 0.552430
------------------------------------
Login Session: 28804. Session timeout: 420.000
srfsh#
If you encounter errors or are unable to connect, you should consult the srfsh.log file. The location of this file is configured in your .srfsh.xml configuration file and is /openils/var/log/srfsh.log by default.
Pressing Ctrl+D or entering "exit" will terminate srfsh.
9.
Start Apache and check for errors:
/etc/init.d/apache2 start
or
apache2ctl start
You should see output:

* Starting web server apache2
...done.
The Apache OpenSRF modules write to the /openils/var/log/gateway.log file. However, you should check all of the log files for errors:
grep ERR /openils/var/log/*.log
Another place to check for errors is the Apache error logs, generally located in the /var/log/apache2 directory.
If you encounter errors with Apache, common sources of potential problems are the Evergreen site configuration files /etc/apache2/eg_vhost.conf and /etc/apache2/sites-available/eg.conf.
10.
Testing with settings-tester.pl.
As the opensrf user, run the script settings-tester.pl to see if it finds any system configuration problems:

cd /home/opensrf/Evergreen-ILS-1.6.0.0
perl Open-ILS/src/support-scripts/settings-tester.pl

Here is example output from running settings-tester.pl:

LWP::UserAgent version 5.810
XML::LibXML version 1.70
XML::LibXML::XPathContext version 1.70
XML::LibXSLT version 1.70
Net::Server::PreFork version 0.97
Cache::Memcached version 1.24
Class::DBI version 0.96
Class::DBI::AbstractSearch version 0.07
Template version 2.19
DBD::Pg version 2.8.2
Net::Z3950::ZOOM version 1.24
MARC::Record version 2.0.0
MARC::Charset version 1.1
MARC::File::XML version 0.92
Text::Aspell version 0.04
CGI version 3.29
DateTime::TimeZone version 0.7701
DateTime version 0.42
DateTime::Format::ISO8601 version 0.06
DateTime::Format::Mail version 0.3001
Unix::Syslog version 1.1
GD::Graph3d version 0.63
JavaScript::SpiderMonkey version 0.19
Log::Log4perl version 1.16
Email::Send version 2.192
Text::CSV version 1.06
Text::CSV_XS version 0.52
Spreadsheet::WriteExcel::Big version 2.20
Tie::IxHash version 1.21
Parse::RecDescent version 1.95.1
SRU version 0.99
JSON::XS version 2.27


Checking Jabber connection for user opensrf, domain private.localhost
* Jabber successfully connected

Checking Jabber connection for user opensrf, domain public.localhost
* Jabber successfully connected

Checking Jabber connection for user
router, domain public.localhost
* Jabber successfully connected

Checking Jabber connection for user router, domain private.localhost
* Jabber successfully connected

Checking database connections
* /opensrf/default/reporter/setup :: Successfully connected to database...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.storage/app_settings/databases :: Successfully...
* /opensrf/default/apps/open-ils.cstore/app_settings :: Successfully...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.pcrud/app_settings :: Successfully ...
 * Database has the expected server encoding UTF8.
* /opensrf/default/apps/open-ils.reporter-store/app_settings :: Successfully...
 * Database has the expected server encoding UTF8.

Checking database drivers to ensure <driver> matches <language>
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is undefined for reporter base configuration
* OK: Pg language is perl in /opensrf/default/apps/open-ils.storage/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.cstore/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.pcrud/language
* OK: pgsql language is C in /opensrf/default/apps/open-ils.reporter-store/language

Checking libdbi and libdbi-drivers
 * OK - found locally installed libdbi.so and libdbdpgsql.so in shared library path

Checking hostname
 * OK: found hostname 'localhost' in <hosts> section of opensrf.xml
$

If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration; continue working through the remaining steps in this chapter to isolate the cause.
11.
Try to log in from the staff client.
12.
Testing the Catalog.
By default, the OPAC will live at the URL http://my.domain.com/opac/.
Navigate to this URL and the front page of the OPAC should load.
There is a basic text entry field with some extra search options. If you have any problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible JavaScript errors. We highly recommend testing with the Firefox browser because of its helpful JavaScript debugging tools.
Assuming that the OPAC is functioning and there is data in your database, you can now perform other simple functional tests (e.g., searching the catalog).

Chapter 12. Action Triggers
Report errors in this documentation using Launchpad.

Action Triggers were introduced to Evergreen in 1.6. They allow administrators to set up actions for specific events. They are useful for notification events such as hold notifications.

To access the Action Triggers module, select Admin → Local Administration → Notifications / Action triggers.

You must have Local Administrator permissions to access the Action Triggers module.
You will notice four tabs on this page: Event Definitions, Hooks, Reactors and Validators.

Event Definitions

Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include:
Table 12.1. Action Trigger Event Definitions
Owning library: The shortname of the library for which the action / trigger / hook is defined.
Name: The name of the trigger event, which links to a trigger event environment containing a set of fields that will be returned to the Validators / Reactors for processing.
Hooks: The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions "hang".
Enabled: Sets the given trigger as enabled or disabled. This must be set to enabled for the action trigger to run.
Processing Delay: Defines how long after a given trigger / hook event has occurred before the associated action ("Reactor") will be taken.
Processing Delay Context Field: Defines the field associated with the event on which the processing delay is calculated. For example, the processing delay context field on the hold.capture hook (which has a core_type of ahr) is capture_time.
Processing Group Context Field: Used to batch actions based on its associated group.
Validators: The subroutines receive the trigger environment as an argument (see the linked Name for the environment definition) and return 1 if the validator is true or 0 if the validator returns false.
Reactors: Links the action trigger to the Reactor.
Max Event Validity Delay: Defines the threshold for how far back the action_trigger_runner.pl script should reach to generate a batch of events.

Creating Action Triggers
1.
From the top menu, select Admin → Local Administration → Notifications / Action triggers.
2.
Click on the New button.
3.
Select an Owning Library.
4.
Create a unique Name for your new action trigger.
5.
Select the Hook.
6.
Check the Enabled check box.
7.
Set the Processing Delay in the appropriate format, e.g. 7 days to run 7 days from the trigger event, or 00:01:00 to run 1 hour after the Processing Delay Context Field.
8.
Set the Processing Delay Context Field and Processing Group Context Field.
9.
Select the Validator, Reactor, Failure Cleanup and Success Cleanup.
10.
Enter text in the Template text box if required. These are for email messages.
Here is an sample + template for sending 90 day overdue notices: + +[%- USE date -%] +[%- user = target.0.usr -%] +To: robert.soulliere@mohawkcollege.ca +From: robert.soulliere@mohawkcollege.ca +Subject: Overdue Notification + +Dear [% user.family_name %], [% user.first_given_name %] +Our records indicate the following items are overdue. + +[%- USE date -%] +[%- user = target.0.usr -%] +To: [%- params.recipient_email || user.email %] +From: [%- params.sender_email || default_sender %] +Subject: Overdue Items Marked Lost + +Dear [% user.family_name %], [% user.first_given_name %] +The following items are 90 days overdue and have been marked LOST. +[%- params.recipient_email || user.email %][%- params.sender_email || default_sender %] +[% FOR circ IN target %] + Title: [% circ.target_copy.call_number.record.simple_record.title %] + Barcode: [% circ.target_copy.barcode %] + Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %] + Item Cost: [% helpers.get_copy_price(circ.target_copy) %] + Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %] + Library: [% circ.circ_lib.name %] +[% END %] + + +[% FOR circ IN target %] + Title: [% circ.target_copy.call_number.record.simple_record.title %] + Barcode: [% circ.target_copy.barcode %] + Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %] + Item Cost: [% helpers.get_copy_price(circ.target_copy) %] + Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %] + Library: [% circ.circ_lib.name %] +[% END %] + + + 13. + Once you are satisfied with your new event trigger , click the Save button located at the bottom of the + form + A quick and easy way to create new action triggers is to clone an existing action trigger. + Cloning Existing Action Triggers1. + + Check the check box next to the action trigger you wish to clone + 2. + + Click the Clone Selected on the top left of the page. + 3. + + An editing window with open. 
Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.
4. Click Save.

Editing Action Triggers

1. Double-click on the action trigger you wish to edit.
2. The editing window will open. When you are finished editing, click Save.

Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use the action trigger in the future.

Deleting Action Triggers

1. Check the check box next to the action trigger you wish to delete.
2. Click the Delete Selected button on the top left of the page.

Hooks

Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions “hang”.

Table 12.2. Hooks

Hook Key - A unique name given to the hook.
Core Type - Used to link the action trigger to the IDL class in fm_IDL.xml.
Description - Text to describe the purpose of the hook.
Passive - Indicates whether an event is created by direct user action or is circumstantial.

You may also create, edit and delete Hooks, but the Core Type must refer to an IDL class in the fm_IDL.xml file.

Reactors

Reactors link the trigger definition to the action to be carried out.

Table 12.3. Action Trigger Reactors

Module Name - The name of the module to run if the action trigger is validated. It must be defined as a subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm.
Description - Description of the action to be carried out.

You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.

Validators

Validators set the validation test to be performed to determine whether the action trigger is executed.
Table 12.4. Action Trigger Validators

Module Name - The name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm to validate the action trigger.
Description - Description of the validation test to run.

You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Reactor.pm Perl module.

Processing Action Triggers

To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl --process-hooks --run-pending. This should be set up as a cron job to run periodically.

You have several options when running the script:

• --run-pending: Run the pending events.
• --process-hooks: Create hook events.
• --osrf-config=[config_file]: OpenSRF core config file. Defaults to /openils/conf/opensrf_core.xml.
• --custom-filters=[filter_file]: File containing a JSON object which describes any hooks that should use a user-defined filter to find their target objects. Defaults to /openils/conf/action_trigger_filters.json.
• --max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. Defaults to 3600 (1 hour).
• --hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined in the --custom-filters option.
• --debug-stdout: Print server responses to stdout (as JSON) for debugging.
• --lock-file=[file_name]: Sets the lock file for the process.
• --help: Show help information.

Chapter 13. Booking Module Administration

Report errors in this documentation using Launchpad.

Adapted with permission from original material by the Evergreen Community.

Abstract

The Evergreen booking module is included in Evergreen 1.6.1.x and above. The following documentation will include information about making cataloged items bookable; making non-bibliographic items bookable; and setting permissions in the booking module for staff.

Make a Cataloged Item Bookable in Advance

If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly.

If you know in advance of the request that an item will need to be booked, you can make the item bookable.

1. In the staff client, select Search → Search the Catalog.
2. Begin a title search to find an item.
3. Click the title of the item that you want to book.
4. The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record → Holdings Maintenance in the top right corner of the screen.
5. The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view.
6. Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable.
7. The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary, and proceed with booking.

In Evergreen 1.6.1, there is no way to make an item “unbookable” after it has been made bookable and has been reserved.
The Delete Selected button on this screen deletes the resource from the screen, but the item will be able to be booked after it has been returned.

Make a Cataloged Item Bookable On the Fly

If a patron wants to immediately book an item that does not have bookable status, you can book the item on the fly if you have the appropriate permissions.

1. Follow steps one through five in the section called “Make a Cataloged Item Bookable in Advance”.
2. Select the item that you want to make bookable. Right click to open the menu, and click Book Item Now.
3. A Reservations screen will appear in a new tab, and you can make the reservation.

Create a Bookable Status for Non-Bibliographic Items

Staff with the required permissions can create a bookable status for non-bibliographic items. For example, staff can book conference rooms or laptops. You will be able to create types of resources, specify the names of individual resources within each type, and set attributes to describe those resources. You can then bring the values together through the Resource Attribute Map.

1. First, create the type of resource that you want to make bookable. Select Admin → Server Administration → Booking → Resource Types.
2. A list of resource types will appear. You may also see titles of cataloged items on this screen if they were added using the Make Item Bookable or Book Now links. You should not attempt to add cataloged items on this screen; it is best to use the aforementioned links to make those items bookable. In this screen, you will create a type of resource.
3. In the right corner, click New Resource Type.
4. A box will appear in which you will create a type of resource.
In this box, you can set fines, determine “elbow room” periods between reservations on this type of resource, and indicate if this type of resource can be transferred to another library. Click Save when you have entered the needed information.
5. After you click Save, the box will disappear. Refresh the screen to see the item that you have added.
6. Next, set the attributes for the type of resource that you have created. Select Server Administration → Booking → Resource Attributes.
7. Click New Resource Attribute.
8. A box will appear in which you can add the attributes of the resource. Attributes are descriptive information that is provided to the staff member when the booking request is made. For example, an attribute of the projector may be a cart that allows for its transportation. Other attributes might be number of seats available in a room, or MAC or PC attributes for a laptop. Click Save when the necessary information has been entered.
9. The box will disappear. Refresh the screen to see the added attribute.
10. Next, add the values for the resource attributes. A value can be a number, yes/no, or any other meaningful information. Select Server Administration → Booking → Resource Attribute Values.
11. Select New Resource Attribute Value.
12. A pop-up box will appear. Select the Resource Attribute from the drop-down box. Add the value. You can add multiple values for this field. Click Save when the required information has been added.
13. If you refresh the screen, the attribute value may not appear, but it has been saved.
14. Next, identify the specific objects that are associated with this resource type. Click Admin → Server Administration → Booking → Resources.
15. Click New Resource.
16. A pop-up box will appear. Add information for the resource and click Save. Repeat this process for each resource.
17.
Refresh the screen, and the resource(s) that you added will appear.
18. Finally, use Resource Attribute Maps to bring together the resource and its attributes. Select Admin → Server Administration → Booking → Resource Attribute Maps.
19. Select New Resource Attribute Map.
20. Select the resource that you want to match with its attributes, then click Save. Repeat for all applicable resources.
21. You have now created bookable, non-bibliographic resource(s) with attributes.

Setting Booking Permissions

Administrators can set permissions so that staff members can view reservations, make reservations, and make bibliographic or non-bibliographic items bookable.

If a staff member attempts to book an item for which they do not have the appropriate permissions, they will receive an error message.

To set permissions, select Admin → Server Administration → Permissions.

Staff members should be assigned the following permissions to do common tasks in the booking module. These permissions could be assigned to front-line staff members, such as circulation staff. Permissions with an asterisk (*) are already included in the Staff permission group. All other booking permissions must be applied individually.

• View reservations: VIEW_TRANSACTION*
• Use the pull list: RETRIEVE_RESERVATION_PULL_LIST
• Capture reservations: CAPTURE_RESERVATION
• Assist patrons with pickup and return: VIEW_USER*
• Create/update/delete reservations: ADMIN_BOOKING_RESERVATION

The following permissions allow users to do more advanced tasks, such as making items bookable, booking items on the fly, and creating non-bibliographic resources for booking.
• Create/update/delete booking resource type: ADMIN_BOOKING_RESOURCE_TYPE
• Create/update/delete booking resource attributes: ADMIN_BOOKING_RESOURCE_ATTR
• Create/update/delete booking resource attribute values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE
• Create/update/delete booking resource: ADMIN_BOOKING_RESOURCE
• Create/update/delete booking resource attribute maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP

In addition to having the permissions listed above, staff members will need a valid working location in their profiles. This should be done when registering new staff members.

Part V. Reports

Report errors in this documentation using Launchpad.

Reports are a powerful tool in Evergreen and can be used for statistical comparisons or collection maintenance. The following part covers everything dealing with reports, from starting the reporter daemon to viewing reports your library has created. The range of topics in this part is quite broad, and different chapters will be useful to different roles in an Evergreen library system.

Part VI. Third Party System Integration

Report errors in this documentation using Launchpad.

Part VII. Development

Report errors in this documentation using Launchpad.

This part will allow you to customize the Evergreen OPAC, develop useful SQL queries and help you learn the skills necessary for developing new Evergreen applications. It is intended for experienced Evergreen administrators and Evergreen developers who wish to customize Evergreen or enhance their knowledge of the database structure and code. Some of these chapters are introductory in nature, but others assume some level of web development, programming, or database administration experience.

Chapter 14. Evergreen File Structure and Configuration Files

Report errors in this documentation using Launchpad.

Abstract

This section will describe the basic file structure and cover key configuration files. Understanding the directory and file structure of Evergreen will allow you to customize your Evergreen software and take full advantage of many features.

Evergreen Directory Structure

This is the top-level directory structure of Evergreen, located in the default installation directory /openils:

Table 14.1. Evergreen Directory Structure

bin - Contains many critical Perl and shell scripts such as autogen.sh and oils.ctl.
conf - Contains the configuration scripts, including the two most important base configuration files, opensrf_core.xml and opensrf.xml.
include - Contains the header files used by the scripts written in C.
lib - Contains the core code of Evergreen, including the C code and Perl modules. In particular, the Perl modules in the subdirectory perl5/OpenILS are of particular interest to developers.
var - Largest directory; includes the web directories (web), lock PID files (run), circ setting files (circ), templates (templates) and log (log and data) files.

Evergreen Configuration Files

Table 14.2. Key Evergreen Configuration Files

/openils/conf/opensrf_core.xml - File which controls which Evergreen services are run on the public and private routers. For a service to run, it must be registered in this file. This file also controls the loglevel and points to the log file for the services.
An Evergreen restart is required for changes to take effect.

/openils/conf/opensrf.xml - Use this file to set directory locations, the default locale, default notice settings and settings for all Evergreen services. It is critical for any administrator to understand the settings in this file. An Evergreen restart is required for changes to take effect.

/openils/conf/fm_IDL.xml - Used for linking the OpenSRF/Evergreen services to the Evergreen database tables. An Evergreen restart is required for changes to take effect. Running autogen.sh is also required.

/etc/apache2/eg_vhost.conf - Controls the Evergreen virtual site. Allows you to configure the skin for the OPAC or configure various directories within the Apache web server. An Apache restart is required for changes to this file to take effect.

Table 14.3. Useful Evergreen Scripts

/openils/bin/autogen.sh - Used to update changes to org units and the fm_IDL.xml file. Will generate web and staff client pages based on the contents of files and Evergreen database entries.
/openils/bin/clark-kent.pl - Perl script for starting the reporter.
/openils/bin/action_trigger_runner.pl - Perl script used to trigger the actions set up in the action trigger tool in the staff client.
/openils/bin/osrf_ctl.sh - The startup script for OpenSRF and Evergreen.
/openils/bin/reshelving_complete.srfsh - Changes status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time.
/openils/bin/srfsh - Used to start the OpenSRF shell.

Chapter 15. Customizing the Staff Client

Report errors in this documentation using Launchpad.

This chapter will give you some guidance on customizing the staff client.
The files related to the staff client are located in the directory /openils/var/web/xul/[staff client version]/server/.

Changing Colors and Images

To change or adjust the image on the main screen, edit /openils/var/web/xul/index.xhtml. By default, the image on this page is main_logo.jpg, which is the same main logo used in the OPAC.

To adjust colors on various staff client pages, edit the corresponding cascading style sheets located in /openils/var/web/xul/[staff client version]/server/skin/. Other display aspects can also be adjusted using these cascading style sheets.

Changing Labels and Messages

You can customize labels in the staff client by editing the corresponding DTD files. The staff client uses the same lang.dtd used by the OPAC. This file is located in /openils/var/web/opac/locale/[your locale]. Other labels are controlled by the staff client specific lang.dtd file in /openils/var/web/xul/[staff client version]/server/locale/[your locale]/.

Changing the Search Skin

There are a few ways to change the custom skin for OPAC searching in the staff client.

Changing the Search Skin on the Server - Overriding Local Settings

To change the OPAC search skins used by the staff client, create a file named custom.js and place it in the /openils/var/web/xul/[staff client version]/server/skin/ directory. This will affect all staff clients, since these settings will override local settings.
For example, the following text in custom.js would set the staff client OPAC, details page, results page and browse function to the craftsman skin:

urls['opac'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';
urls['opac_rdetail'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml';
urls['opac_rresult'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml';
urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';

Restart the staff client to see the changes.

Changing the Search Skin on an Individual Machine

To change the search skin on an individual machine for personal preferences or needs, edit the file /[Evergreen staff client path]/build/chrome/content/main/constants.js.

Find the lines which point to the URLs for the OPAC and edit accordingly. For example, here is an example setting the OPAC, details page, results page and browse function to the craftsman skin:

'opac' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',
'opac_rdetail' : '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml',
'opac_rresult' : '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml',
...
'browser' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',

After editing this file, save it and restart the staff client for the changes to take effect.

Chapter 16. Customizing the OPAC

Report errors in this documentation using Launchpad.

While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required.
Many of these instructions assume an installation of Evergreen using the default file locations.

Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/, as files here could be overwritten when you upgrade your copy of Evergreen.

Change the Color Scheme

To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can change the four base colors of the scheme as well as the colors of specific elements.

You can also create alternate themes for your users.

1. Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/ to a new folder /openils/var/web/opac/theme/[your new theme]/.
2. Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.
3. Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml:

<link type='text/css'
rel="alternate stylesheet"
title='&opac.style.yourtheme;'
href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
name='Default' csstype='color'/>

4. Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/[your locale]/opac.dtd:

<!ENTITY opac.style.yourtheme "YourTheme">

Customizing OPAC Text and Labels

To change text and links used throughout the OPAC, edit the following files:

• /openils/var/web/opac/locale/[your locale]/lang.dtd
• /openils/var/web/opac/locale/[your locale]/opac.dtd

A better way to customize OPAC text is to create custom DTD files for your lang and opac customizations and then add an include statement above the default DTD files.
<!DOCTYPE html PUBLIC
    "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
    <!--#include virtual="/opac/locale/${locale}/custom_opac.dtd"-->
    <!--#include virtual="/opac/locale/${locale}/opac.dtd"-->
]>

Position is important here: the first/top included DTD files take precedence over the subsequent DTD includes.

While it is possible to add text to the XML files themselves, it is good practice to use the DTD file to control the text and refer to the DTD entities in the XML/HTML code. For example, the footer.xml file has this code to generate a copyright statement:

<div id='copyright_text'>
<span>&footer.copyright;</span>
</div>

The included opac.dtd file in the en-US locale directory has this setting for the &footer.copyright; text:

<!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others">

Logo Images

To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized.

• Large main logo: /openils/var/web/opac/images/main_logo.jpg
• Small logo: /openils/var/web/opac/images/small_logo.jpg

Added Content

By default Evergreen includes customizable “Added Content” features to enhance the OPAC experience for your users. These features include Amazon book covers and Google Books searching. These features can be turned off or customized.

Book Covers

The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of /openils/conf/opensrf.xml. Here are the key elements of this configuration:
<module>OpenILS::WWW::AddedContent::Amazon</module>

This calls the Amazon Perl module. If you wish to link to a different book cover service other than Amazon, you must create a new Perl module and refer to it here. You will also need to change other settings accordingly. There are some other book cover Perl modules available in trunk.

<base_url>http://images.amazon.com/images/P/</base_url>

Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching capabilities are added.

<timeout>1</timeout>

Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure.

<retry_timeout>600</retry_timeout>

After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again.

<max_errors>15</max_errors>

Maximum number of consecutive lookup errors a given process can have before added content lookups are disabled for everyone.

<userid>MY_USER_ID</userid>

If a userid is required to access the added content.

Google Books Link

The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. This will link to Google Books content, including the table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not display a link. This feature can be turned off by changing the googleBooksLink variable setting to false in the file /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated.

Syndetics

Syndetics is another option for added content. Here is an example of using Syndetics as your added content provider:

<!-- We're using Syndetics -->
<module>OpenILS::WWW::AddedContent::Syndetic</module>
<base_url>http://syndetics.com/index.aspx</base_url>

<!-- A userid is required to access the added content from Syndetic. -->
<userid>uneedsomethinghere</userid>

<!--
Max number of seconds to wait for an added content request to
return data.
Data not returned within the timeout is considered
a failure
-->
<timeout>1</timeout>

<!--
After added content lookups have been disabled due to too many
lookup failures, this is the amount of time to wait before
we try again
-->
<retry_timeout>600</retry_timeout>

<!--
maximum number of consecutive lookup errors a given process can
have before added content lookups are disabled for everyone
-->
<max_errors>15</max_errors>

</added_content>

Syndetics is a fee-based service. For details, visit: http://www.bowker.com/syndetics/

Customizing the Results Page

The results page is extremely customizable: some built-in features can be activated with simple edits, while more advanced customizations can be made by experienced web developers.

There are several critical files to edit if you wish to customize the results page:

• /openils/var/web/opac/skin/default/js/result_common.js - This file controls the JavaScript for the top-level elements on the results page and should only be edited by experienced web developers, except for the Google Books link setting mentioned previously.
• /openils/var/web/opac/skin/default/js/rresult.js - Has some good controls of results page settings at the top of the file, but requires web development skills to edit.
• /openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on the results page.

Customizing the Details Page

There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential of Evergreen when displaying the details of items.

Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js.
You will notice the section at the top of this file called “Per-skin configuration settings”. Changing settings in this section can control several features, including limiting results to local items only, showing copy location, or displaying serial holdings. From this section you can also enable RefWorks and set the RefWorks host URL.

Some copy-level details settings can be turned on and off from /openils/var/web/opac/skin/default/js/copy_details.js, including displaying certain fields such as due date in the OPAC.

An important file is the /openils/var/web/opac/skin/default/xml/rdetail/rdetail_summary.xml file. This file allows you to control which fields to display in the details summary of the record. The new BibTemplate feature makes this file even more powerful by allowing you to display any MARC fields with a variety of formatting options.

The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information.

BibTemplate

BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – in many different formats, from MARCXML to MODS to custom XML applications.

Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as to move the display closer to the client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization.
BibTemplate supports the following Evergreen metadata formats:
• MARCXML - datatype='marcxml-full' (default)
• MODS 3.3 - datatype='mods33'
• Dublin Core - datatype='rdf_dc'
• FGDC - datatype='fgdc'

HTML API

BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a set of attributes that are added to existing OPAC markup, and they fall into two classes:
• The slot marker – an element that denotes the location of bibliographic data to insert.
• The slot formatter – an element that specifies how the named data should be formatted for display.

Slot Marker

A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an attribute called query containing a CSS3 selector. This selector is applied to the XML returned by the unAPI service in order to gather the specific XML nodes that should be considered for formatting.
The slot marker can also specify the format of the data to be returned from the unAPI service. This is specified by adding +{format} to the type attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information and unAPI links.
Example of a slot marker:
<p type='opac/slot-data' query='datafield[tag=245]'></p>
The most useful attribute match operators include:
• datafield[tag=245] - exact match
• datafield[tag^=65] - match start of value
Selectors always narrow, so select broadly and iterate through the NodeList.

Slot Formatter

A slot formatter is any invisible HTML element which has a type attribute with the value opac/slot-format.
(Note: before 1.6.0.4, only <script> elements were supported, though this restriction has since been removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. The text contents of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector specified on the slot marker. This function is passed one argument, called item, which is an XML node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is concatenated into a single string and used to replace the contents of the slot marker.
The slot formatter is optional; if not supplied, BibTemplate will create a simple function which extracts and returns the text content of the XML nodes specified in the CSS3 selector.
Example of a slot formatter:

<td class='rdetail_item' id='rdetail_online' type='opac/slot-data'
        query='volumes volume uris uri' join=", ">
    <script type='opac/slot-format'><![CDATA[
        var link = '<a href="' + item.getAttribute('href') + '">' + item.getAttribute('label') + '</a>';
        if (item.getAttribute('use_restriction'))
            link += ' (Use restriction: ' + item.getAttribute('use_restriction') + ')';
        return link;
    ]]></script>
</td>

JavaScript API

In order for BibTemplate to find the slot markers and invoke the slot formatters, a JavaScript renderer must be instantiated and called. This must be done for each record that is to contribute to a page's display. The API for this is simple and straightforward:
Example of instantiating and calling the renderer:

dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded

// Create a renderer supplying the record id and the short name of the org unit, if known,
// and call the render() method
new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render();

The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties:
• record – The bibliographic record ID.
• org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.
• root – The root element within the web page that BibTemplate should search for slot markers.

BibTemplate Examples

This is all that we had to add to display the contents of an arbitrary MARC field:

<tr>
    <td>Bibliography note</td>
    <td type='opac/slot-data' query='datafield[tag=504]'></td>
</tr>

If multiple fields match, they are displayed on consecutive lines within the same left-hand cell.
To display a specific MARC subfield, add that subfield to the query attribute. For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note):

<tr>
    <td>Awards note</td>
    <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td>
</tr>

Hide empty rows by default, and display them only if they have content:

<tr class='hide_me' id='tag504'>
    <td>Bibliographic note</td>
    <td type='opac/slot-data' query='datafield[tag=504]'>
        <script type='opac/slot-format'><![CDATA[
            dojo.query('#tag504').removeClass('hide_me');
            return '<span>' + dojox.data.dom.textContent(item) +
                '</span><br/>';
        ]]></script>
    </td></tr>

• <![CDATA[ ...
]]> tells the XML parser to treat the contents as literal “character data”, which avoids the hilarity of entity substitution.
• <script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item containing the results of the query (a NodeList).
Suppressing a subfield:

<tr class='hide_me' id='tag700'>
    <td>Additional authors</td>
    <td type='opac/slot-data' query='datafield[tag=700]'>
        <script type='opac/slot-format'><![CDATA[
            dojo.query('#tag700').removeClass('hide_me');
            var text = '';
            var list = dojo.query('subfield:not([code=4])', item);
            for (var i = 0; i < list.length; i++) {
                text += dojox.data.dom.textContent(list[i]) + ' ';
            }
            return '<span>' + text + '</span><br/>';
        ]]></script>
    </td></tr>

Customizing the Slimpac

The Slimpac is an alternative OPAC display for browsers or devices without JavaScript or which may have screen size limitations. There are both simple and advanced search options for the Slimpac.
The HTML files for customizing the Slimpac search display are located in the folder /openils/var/web/opac/extras/slimpac. start.html is the basic search display and advanced.html is the display for the advanced search option.
By default, the Slimpac files include the same locale DTD as the regular OPAC (opac.dtd). However, the Slimpac files do not use the same CSS files as the regular OPAC, which means that if you change the OPAC color scheme, you must also edit the Slimpac files.

Customizing the Slimpac Results Display

Two files control the display of results for the Slimpac. Edit the XSL stylesheet (/openils/var/xsl/ATOM2XHTML.xsl) to change which elements of the record are pulled from the XML output.
You may also change the style of the page by editing the CSS stylesheet for the results display (/openils/var/web/opac/extras/os.css).
Customizing the Slimpac Details/Holdings Display

It is also possible to customize the details page when viewing specific items from the results list. To edit the holdings display, which contains the details of the specific record linked from the results display, edit the CSS stylesheet for the holdings/details page (/openils/var/web/opac/extras/htmlcard.css). You may also control the content of the record by editing MARC21slim2HTMLCard.xsl. Holdings data may also be controlled by editing MARC21slim2HTMLCard-holdings.xsl.

Integrating an Evergreen Search Form on a Web Page

It is possible to embed a simple search form into an HTML page which will allow users to search for materials in your Evergreen catalog. Here is code which can be embedded anywhere in the body of your web page:

<form action="http://[domain name]/opac/[locale]/skin/default/xml/rresult.xml" method="get">
<div>
  Quick Catalog Search:<br />
  <input type="text" alt="Input Box for Catalog Search" maxlength="250"
      size="20" id="t" name="t" value="" />
  <input type="hidden" id="rt" name="rt" value="keyword" />
  <input type="hidden" id="tp" name="tp" value="keyword" />
  <input type="hidden" id="l" name="l" value="2" />
  <input type="hidden" id="d" name="d" value="" />
  <input type="hidden" id="f" name="f" value="" />
  <input type="submit" value="Search" class="form-submit" />
</div>
</form>

Replace [domain name] with the domain name of your Evergreen server and replace [locale] with the desired locale of your Evergreen instance (e.g. en-US). This form performs a basic keyword search. Different types of searches and more advanced search forms can be developed. For further information on the URL parameters used by Evergreen, see ??? for more details.

Chapter 17. OpenSRF
Report errors in this documentation using Launchpad.

One of the claimed advantages of Evergreen over alternative integrated library systems is the underlying Open Service Request Framework (OpenSRF, pronounced "open surf") architecture. This chapter introduces OpenSRF, demonstrates how to build OpenSRF services through simple code examples, and explains the technical foundations on which OpenSRF is built. This chapter was taken from Dan Scott's Easing gently into OpenSRF article, June 2010.

Introducing OpenSRF

OpenSRF is a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead. You can use OpenSRF to build loosely-coupled applications that can be deployed on a single server or on clusters of geographically distributed servers using the same code and minimal configuration changes.
Although copyright statements on some of the OpenSRF code date back to Mike Rylander’s original explorations in 2000, Evergreen was the first major application to be developed with, and to take full advantage of, the OpenSRF architecture, starting in 2004. The first official release of OpenSRF was 0.1 in February 2005 (http://evergreen-ils.org/blog/?p=21), but OpenSRF’s development continues at a steady pace of enhancement and refinement, with the release of 1.0.0 in October 2008 and the most recent release of 1.2.2 in February 2010.
OpenSRF is a distinct break from the architectural approach used by previous library systems and has more in common with modern Web applications. The traditional "scale-up" approach to serving more transactions is to purchase a server with more CPUs and more RAM, possibly splitting the load between a Web server, a database server, and a business logic server.
Evergreen, however, is + built on the Open Service Request Framework (OpenSRF) architecture, which + firmly embraces the "scale-out" approach of spreading transaction load over + cheap commodity servers. The initial GPLS + PINES hardware cluster, while certainly impressive, may have offered the + misleading impression that Evergreen requires a lot of hardware to run. + However, Evergreen and OpenSRF easily scale down to a single server; many + Evergreen libraries run their entire library system on a single server, and + most OpenSRF and Evergreen development occurs on a virtual machine running on a + single laptop or desktop image. + Another common concern is that the flexibility of OpenSRF’s distributed + architecture makes it complex to configure and to write new applications. This + article demonstrates that OpenSRF itself is an extremely simple architecture on + which one can easily build applications of many kinds – not just library + applications – and that you can use a number of different languages to call and + implement OpenSRF methods with a minimal learning curve. With an application + built on OpenSRF, when you identify a bottleneck in your application’s business + logic layer, you can adjust the number of the processes serving that particular + bottleneck on each of your servers; or if the problem is that your service is + resource-hungry, you could add an inexpensive server to your cluster and + dedicate it to running that resource-hungry service. + Programming language supportProgramming language support + + If you need to develop an entirely new OpenSRF service, you can choose from a + number of different languages in which to implement that service. OpenSRF + client language bindings have been written for C, Java, JavaScript, Perl, and + Python, and service language bindings have been written for C, Perl, and Python. + This article uses Perl examples as a lowest common denominator programming + language. 
Writing an OpenSRF binding for another language is a relatively small + task if that language offers libraries that support the core technologies on + which OpenSRF depends: + • + + Extensible Messaging and Presence + Protocol (XMPP, sometimes referred to as Jabber) - provides the base messaging + infrastructure between OpenSRF clients and services + + + • + + JavaScript Object Notation (JSON) - serializes the content + of each XMPP message in a standardized and concise format + + • + + memcached - provides the caching service + + + • + + syslog - the standard UNIX logging + service + + + + Unfortunately, the + OpenSRF + reference documentation, although augmented by the + OpenSRF + glossary, blog posts like the description + of OpenSRF and Jabber, and even this article, is not a sufficient substitute + for a complete specification on which one could implement a language binding. + The recommended option for would-be developers of another language binding is + to use the Python implementation as the cleanest basis for a port to another + language. + + + + Writing an OpenSRF ServiceWriting an OpenSRF Service + + Imagine an application architecture in which 10 lines of Perl or Python, using + the data types native to each language, are enough to implement a method that + can then be deployed and invoked seamlessly across hundreds of servers. You + have just imagined developing with OpenSRF – it is truly that simple. Under the + covers, of course, the OpenSRF language bindings do an incredible amount of + work on behalf of the developer. An OpenSRF application consists of one or more + OpenSRF services that expose methods: for example, the opensrf.simple-text + demonstration + service exposes the opensrf.simple-text.split() and + opensrf.simple-text.reverse() methods. Each method accepts zero or more + arguments and returns zero or one results. 
The data types supported by OpenSRF + arguments and results are typical core language data types: strings, numbers, + booleans, arrays, and hashes. + To implement a new OpenSRF service, perform the following steps: + 1. + + Include the base OpenSRF support libraries + + 2. + + Write the code for each of your OpenSRF methods as separate procedures + + 3. + + Register each method + + 4. + + Add the service definition to the OpenSRF configuration files + + + For example, the following code implements an OpenSRF service. The service + includes one method named opensrf.simple-text.reverse() that accepts one + string as input and returns the reversed version of that string: + +#!/usr/bin/perl + +package OpenSRF::Application::Demo::SimpleText; + +use strict; + +use OpenSRF::Application; +use parent qw/OpenSRF::Application/; + +sub text_reverse { + my ($self , $conn, $text) = @_; + my $reversed_text = scalar reverse($text); + return $reversed_text; +} + +__PACKAGE__->register_method( + method => 'text_reverse', + api_name => 'opensrf.simple-text.reverse' +); + + Ten lines of code, and we have a complete OpenSRF service that exposes a single + method and could be deployed quickly on a cluster of servers to meet your + application’s ravenous demand for reversed strings! If you’re unfamiliar with + Perl, the use OpenSRF::Application; use parent qw/OpenSRF::Application/; + lines tell this package to inherit methods and properties from the + OpenSRF::Application module. For example, the call to + __PACKAGE__->register_method() is defined in OpenSRF::Application but due to + inheritance is available in this package (named by the special Perl symbol + __PACKAGE__ that contains the current package name). The register_method() + procedure is how we introduce a method to the rest of the OpenSRF world. 
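The register-and-dispatch pattern behind register_method() can be sketched in a few lines of plain JavaScript. This is a conceptual illustration only, under invented names (registry, registerMethod, dispatch): in real OpenSRF the lookup and invocation happen over the XMPP network rather than in a local table.

```javascript
// Minimal sketch of a method registry: a service exposes a procedure
// under a public api_name, and a dispatcher later looks it up by name.
var registry = {};

function registerMethod(apiName, fn) {
  registry[apiName] = fn;
}

function dispatch(apiName, args) {
  // In OpenSRF this lookup crosses the network; here it is local.
  return registry[apiName].apply(null, args);
}

// Register the equivalent of opensrf.simple-text.reverse
registerMethod('opensrf.simple-text.reverse', function (text) {
  return text.split('').reverse().join('');
});
```

Calling dispatch('opensrf.simple-text.reverse', ['foobar']) returns 'raboof', mirroring what the Perl service above does once a client request reaches it.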
+ Registering a service with the OpenSRF configuration filesRegistering a service with the OpenSRF configuration files + + Two files control most of the configuration for OpenSRF: + • + + opensrf.xml contains the configuration for the service itself, as well as + a list of which application servers in your OpenSRF cluster should start + the service. + + • + + opensrf_core.xml (often referred to as the "bootstrap configuration" + file) contains the OpenSRF networking information, including the XMPP server + connection credentials for the public and private routers. You only need to touch + this for a new service if the new service needs to be accessible via the + public router. + + + + Begin by defining the service itself in opensrf.xml. To register the + opensrf.simple-text service, add the following section to the <apps> + element (corresponding to the XPath /opensrf/default/apps/): + + +<apps> + <opensrf.simple-text> + <keepalive>3</keepalive> + <stateless>1</stateless> + <language>perl</language> + <implementation>OpenSRF::Application::Demo::SimpleText</implementation> + <max_requests>100</max_requests> + <unix_config> + <max_requests>1000</max_requests> + <unix_log>opensrf.simple-text_unix.log</unix_log> + <unix_sock>opensrf.simple-text_unix.sock</unix_sock> + <unix_pid>opensrf.simple-text_unix.pid</unix_pid> + <min_children>5</min_children> + <max_children>15</max_children> + <min_spare_children>2</min_spare_children> + <max_spare_children>5</max_spare_children> + </unix_config> + </opensrf.simple-text> + + <!-- other OpenSRF services registered here... --> +</apps> + + + + The element name is the name that the OpenSRF control scripts use to refer + to the service. + + + + The <keepalive> element specifies the interval (in seconds) between + checks to determine if the service is still running. 
The <stateless> element specifies whether OpenSRF clients can call methods from this service without first having to create a connection to a specific service backend process for that service. If the value is 1, then the client can simply issue a request, the router will forward the request to an available service, and the result will be returned directly to the client.

The <language> element specifies the programming language in which the service is implemented.

The <implementation> element specifies the name of the library or module in which the service is implemented.

(C implementations only): The <max_requests> element, as a direct child of the service element name, specifies the maximum number of requests a process serves before it is killed and replaced by a new process.

(Perl implementations only): The <max_requests> element, as a direct child of the <unix_config> element, specifies the maximum number of requests a process serves before it is killed and replaced by a new process.

The <unix_log> element specifies the name of the log file for language-specific log messages such as syntax warnings.

The <unix_sock> element specifies the name of the UNIX socket used for inter-process communications.

The <unix_pid> element specifies the name of the PID file for the master process for the service.

The <min_children> element specifies the minimum number of child processes that should be running at any given time.

The <max_children> element specifies the maximum number of child processes that should be running at any given time.

The <min_spare_children> element specifies the minimum number of idle child processes that should be available to handle incoming requests. If there are fewer than this number of spare child processes, new processes will be spawned.
+ + + + The`<max_spare_children>` element specifies the maximum number of idle + child processes that should be available to handle incoming requests. If there + are more than this number of spare child processes, the extra processes will be + killed. + + + To make the service accessible via the public router, you must also + edit the opensrf_core.xml configuration file to add the service to the list + of publicly accessible services: + Making a service publicly accessible in opensrf_core.xml.  + +<router> + <!-- This is the public router. On this router, we only register applications + which should be accessible to everyone on the opensrf network --> + <name>router</name> + <domain>public.localhost</domain> + <services> + <service>opensrf.math</service> + <service>opensrf.simple-text</service> + </services> +</router> + + + + + This section of the opensrf_core.xml file is located at XPath + /config/opensrf/routers/. + + + + public.localhost is the canonical public router domain in the OpenSRF + installation instructions. + + + + Each <service> element contained in the <services> element + offers their services via the public router as well as the private router. + + + Once you have defined the new service, you must restart the OpenSRF Router + to retrieve the new configuration and start or restart the service itself. + + Calling an OpenSRF methodCalling an OpenSRF method + + + OpenSRF clients in any supported language can invoke OpenSRF services in any + supported language. So let’s see a few examples of how we can call our fancy + new opensrf.simple-text.reverse() method: + Calling OpenSRF methods from the srfsh clientCalling OpenSRF methods from the srfsh client + + srfsh is a command-line tool installed with OpenSRF that you can use to call + OpenSRF methods. 
To call an OpenSRF method, issue the request command and + pass the OpenSRF service and method name as the first two arguments; then pass + one or more JSON objects delimited by commas as the arguments to the method + being invoked. + The following example calls the opensrf.simple-text.reverse method of the + opensrf.simple-text OpenSRF service, passing the string "foobar" as the + only method argument: + +$ srfsh +srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" + +Received Data: "raboof" + +=------------------------------------ +Request Completed Successfully +Request Time in seconds: 0.016718 +=------------------------------------ + + + Getting documentation for OpenSRF methods from the srfsh clientGetting documentation for OpenSRF methods from the srfsh client + + The srfsh client also gives you command-line access to retrieving metadata + about OpenSRF services and methods. For a given OpenSRF method, for example, + you can retrieve information such as the minimum number of required arguments, + the data type and a description of each argument, the package or library in + which the method is implemented, and a description of the method. To retrieve + the documentation for an opensrf method from srfsh, issue the introspect + command, followed by the name of the OpenSRF service and (optionally) the + name of the OpenSRF method. If you do not pass a method name to the introspect + command, srfsh lists all of the methods offered by the service. If you pass + a partial method name, srfsh lists all of the methods that match that portion + of the method name. + The quality and availability of the descriptive information for each + method depends on the developer to register the method with complete and + accurate information. The quality varies across the set of OpenSRF and + Evergreen APIs, although some effort is being put towards improving the + state of the internal documentation. 
+ +srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" +--> opensrf.simple-text + +Received Data: { + "__c":"opensrf.simple-text", + "__p":{ + "api_level":1, + "stream":0, + "object_hint":"OpenSRF_Application_Demo_SimpleText", + "remote":0, + "package":"OpenSRF::Application::Demo::SimpleText", + "api_name":"opensrf.simple-text.reverse", + "server_class":"opensrf.simple-text", + "signature":{ + "params":[ + { + "desc":"The string to reverse", + "name":"text", + "type":"string" + } + ], + "desc":"Returns the input string in reverse order\n", + "return":{ + "desc":"Returns the input string in reverse order", + "type":"string" + } + }, + "method":"text_reverse", + "argc":1 + } +} + + + + stream denotes whether the method supports streaming responses or not. + + + + package identifies which package or library implements the method. + + + + api_name identifies the name of the OpenSRF method. + + + + signature is a hash that describes the parameters for the method. + + + + params is an array of hashes describing each parameter in the method; + each parameter has a description (desc), name (name), and type (type). + + + + desc is a string that describes the method itself. + + + + return is a hash that describes the return value for the method; it + contains a description of the return value (desc) and the type of the + returned value (type). + + + + method identifies the name of the function or method in the source + implementation. + + + + argc is an integer describing the minimum number of arguments that + must be passed to this method. + + + + Calling OpenSRF methods from Perl applicationsCalling OpenSRF methods from Perl applications + + To call an OpenSRF method from Perl, you must connect to the OpenSRF service, + issue the request to the method, and then retrieve the results. 
#!/usr/bin/perl
use strict;
use OpenSRF::AppSession;
use OpenSRF::System;

OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');

my $session = OpenSRF::AppSession->create("opensrf.simple-text");

print "substring: Accepts a string and a number as input, returns a string\n";
my $result = $session->request("opensrf.simple-text.substring", "foobar", 3);
my $request = $result->gather();
print "Substring: $request\n\n";

print "split: Accepts two strings as input, returns an array of strings\n";
$request = $session->request("opensrf.simple-text.split", "This is a test", " ");
my $output = "Split: [";
my $element;
while ($element = $request->recv()) {
    $output .= $element->content . ", ";
}
$output =~ s/, $/]/;
print $output . "\n\n";

print "statistics: Accepts an array of strings as input, returns a hash\n";
my @many_strings = (
    "First I think I'll have breakfast",
    "Then I think that lunch would be nice",
    "And then seventy desserts to finish off the day"
);

$result = $session->request("opensrf.simple-text.statistics", \@many_strings);
$request = $result->gather();
print "Length: " . $request->{'length'} . "\n";
print "Word count: " . $request->{'word_count'} . "\n";

$session->disconnect();

The OpenSRF::System->bootstrap_client() method reads the OpenSRF configuration information from the indicated file and creates an XMPP client connection based on that information.

The OpenSRF::AppSession->create() method accepts one argument - the name of the OpenSRF service to which you want to make one or more requests - and returns an object prepared to use the client connection to make those requests.

The OpenSRF::AppSession->request() method accepts a minimum of one argument - the name of the OpenSRF method to which you want to make a request - followed by zero or more arguments to pass to the OpenSRF method as input values.
This example passes a string and an integer to the + opensrf.simple-text.substring method defined by the opensrf.simple-text + OpenSRF service. + + + + The gather() method, called on the result object returned by the + request() method, iterates over all of the possible results from the result + object and returns a single variable. + + + + This request() call passes two strings to the opensrf.simple-text.split + method defined by the opensrf.simple-text OpenSRF service and returns (via + gather()) a reference to an array of results. + + + + The opensrf.simple-text.split() method is a streaming method that + returns an array of results with one element per recv() call on the + result object. We could use the gather() method to retrieve all of the + results in a single array reference, but instead we simply iterate over + the result variable until there are no more results to retrieve. + + + + While the gather() convenience method returns only the content of the + complete set of results for a given request, the recv() method returns an + OpenSRF result object with status, statusCode, and content fields as + we saw in the HTTP results example. + + + + This request() call passes an array to the + opensrf.simple-text.statistics method defined by the opensrf.simple-text + OpenSRF service. + + + + The result object returns a hash reference via gather(). The hash + contains the length and word_count keys we defined in the method. + + + + The OpenSRF::AppSession->disconnect() method closes the XMPP client + connection and cleans up resources associated with the session. + + + + + Accepting and returning more interesting data typesAccepting and returning more interesting data types + + Of course, the example of accepting a single string and returning a single + string is not very interesting. In real life, our applications tend to pass + around multiple arguments, including arrays and hashes. 
Fortunately, OpenSRF + makes that easy to deal with; in Perl, for example, returning a reference to + the data type does the right thing. In the following example of a method that + returns a list, we accept two arguments of type string: the string to be split, + and the delimiter that should be used to split the string. + Basic text splitting method.  + +sub text_split { + my $self = shift; + my $conn = shift; + my $text = shift; + my $delimiter = shift || ' '; + + my @split_text = split $delimiter, $text; + return \@split_text; +} + +__PACKAGE__->register_method( + method => 'text_split', + api_name => 'opensrf.simple-text.split' +); + + + We simply return a reference to the list, and OpenSRF does the rest of the work + for us to convert the data into the language-independent format that is then + returned to the caller. As a caller of a given method, you must rely on the + documentation used to register to determine the data structures - if the developer has + added the appropriate documentation. + + Accepting and returning Evergreen objectsAccepting and returning Evergreen objects + + OpenSRF is agnostic about objects; its role is to pass JSON back and forth + between OpenSRF clients and services, and it allows the specific clients and + services to define their own semantics for the JSON structures. On top of that + infrastructure, Evergreen offers the fieldmapper: an object-relational mapper + that provides a complete definition of all objects, their properties, their + relationships to other objects, the permissions required to create, read, + update, or delete objects of that type, and the database table or view on which + they are based. + + The Evergreen fieldmapper offers a great deal of convenience for working with + complex system objects beyond the basic mapping of classes to database + schemas. 
Although the result is passed over the wire as a JSON object + containing the indicated fields, fieldmapper-aware clients then turn those + JSON objects into native objects with setter / getter methods for each field. + All of this metadata about Evergreen objects is defined in the + fieldmapper configuration file (/openils/conf/fm_IDL.xml), and access to + these classes is provided by the open-ils.cstore, open-ils.pcrud, and + open-ils.reporter-store OpenSRF services which parse the fieldmapper + configuration file and dynamically register OpenSRF methods for creating, + reading, updating, and deleting all of the defined classes. + Example fieldmapper class definition for "Open User Summary".  + +<class id="mous" controller="open-ils.cstore open-ils.pcrud" + oils_obj:fieldmapper="money::open_user_summary" + oils_persist:tablename="money.open_usr_summary" + reporter:label="Open User Summary"> + <fields oils_persist:primary="usr" oils_persist:sequence=""> + <field name="balance_owed" reporter:datatype="money" /> + <field name="total_owed" reporter:datatype="money" /> + <field name="total_paid" reporter:datatype="money" /> + <field name="usr" reporter:datatype="link"/> + </fields> + <links> + <link field="usr" reltype="has_a" key="id" map="" class="au"/> + </links> + <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> + <actions> + <retrieve permission="VIEW_USER"> + <context link="usr" field="home_ou"/> + </retrieve> + </actions> + </permacrud> +</class> + + + + + The <class> element defines the class: + + • + + The id attribute defines the class hint that identifies the class both + elsewhere in the fieldmapper configuration file, such as in the value of the + field attribute of the <link> element, and in the JSON object itself when + it is instantiated. For example, an "Open User Summary" JSON object would have + the top level property of "__c":"mous". 
+
+ •
+
+ The controller attribute identifies the services that have direct access
+ to this class. If open-ils.pcrud is not listed, for example, then there is
+ no means to directly access members of this class through a public service.
+
+ •
+
+ The oils_obj:fieldmapper attribute defines the name of the Perl
+ fieldmapper class that will be dynamically generated to provide setter and
+ getter methods for instances of the class.
+
+ •
+
+ The oils_persist:tablename attribute identifies the schema name and table
+ name of the database table that stores the data that represents the instances
+ of this class. In this case, the schema is money and the table is
+ open_usr_summary.
+
+ •
+
+ The reporter:label attribute defines a human-readable name for the class
+ used in the reporting interface to identify the class. These names are defined
+ in English in the fieldmapper configuration file; however, they are extracted
+ so that they can be translated and served in the user’s language of choice.
+
+
+ The <fields> element lists all of the fields that belong to the object.
+
+ •
+
+ The oils_persist:primary attribute identifies the field that acts as the
+ primary key for the object; in this case, the field with the name usr.
+
+ •
+
+ The oils_persist:sequence attribute identifies the sequence object
+ (if any) in this database that provides values for new instances of this class.
+ In this case, the primary key is defined by a field that is linked to a
+ different table, so no sequence is used to populate these instances.
+
+
+ Each <field> element defines a single field with the following attributes:
+
+ •
+
+ The name attribute identifies the column name of the field in the
+ underlying database table as well as providing a name for the setter / getter
+ method that can be invoked in the JSON or native version of the object.
+ + • + + The reporter:datatype attribute defines how the reporter should treat + the contents of the field for the purposes of querying and display. + + • + + The reporter:label attribute can be used to provide a human-readable name + for each field; without it, the reporter falls back to the value of the name + attribute. + + + + + The <links> element contains a set of zero or more <link> elements, + each of which defines a relationship between the class being described and + another class. + + • + + The field attribute identifies the field named in this class that links + to the external class. + + • + + The reltype attribute identifies the kind of relationship between the + classes; in the case of has_a, each value in the usr field is guaranteed + to have a corresponding value in the external class. + + • + + The key attribute identifies the name of the field in the external + class to which this field links. + + • + + The rarely-used map attribute identifies a second class to which + the external class links; it enables this field to define a direct + relationship to an external class with one degree of separation, to + avoid having to retrieve all of the linked members of an intermediate + class just to retrieve the instances from the actual desired target class. + + • + + The class attribute identifies the external class to which this field + links. + + + + + The <permacrud> element defines the permissions that must have been + granted to a user to operate on instances of this class. + + + + The <retrieve> element is one of four possible children of the + <actions> element that define the permissions required for each action: + create, retrieve, update, and delete. + + • + + The permission attribute identifies the name of the permission that must + have been granted to the user to perform the action. 
+
+ •
+
+ The contextfield attribute, if it exists, defines the field in this class
+ that identifies the library within the system for which the user must have
+ privileges to work. If a user has been granted a given permission, but has not
+ been granted privileges to work at a given library, they cannot perform the
+ action at that library.
+
+
+ The rarely-used <context> element identifies a linked field (link
+ attribute) in this class which links to an external class that holds the field
+ (field attribute) that identifies the library within the system for which the
+ user must have privileges to work.
+
+
+ When you retrieve an instance of a class, you can ask for the result to
+ flesh some or all of the linked fields of that class, so that the linked
+ instances are returned embedded directly in your requested instance. In that
+ same request you can ask for the fleshed instances to in turn have their linked
+ fields fleshed. By bundling all of this into a single request and result
+ sequence, you can avoid the network overhead of requiring the client to request
+ the base object, then request each linked object in turn.
+ You can also iterate over a collection of instances and set the automatically
+ generated isdeleted, isupdated, or isnew properties to indicate that
+ the given instance has been deleted, updated, or created respectively.
+ Evergreen can then act in batch mode over the collection to perform the
+ requested actions on any of the instances that have been flagged for action.
+
+ Returning streaming results
+
+ In the previous implementation of the opensrf.simple-text.split method, we
+ returned a reference to the complete array of results. For small values being
+ delivered over the network, this is perfectly acceptable, but for large sets of
+ values this can pose a number of problems for the requesting client.
Consider a + service that returns a set of bibliographic records in response to a query like + "all records edited in the past month"; if the underlying database is + relatively active, that could result in thousands of records being returned as + a single network request. The client would be forced to block until all of the + results are returned, likely resulting in a significant delay, and depending on + the implementation, correspondingly large amounts of memory might be consumed + as all of the results are read from the network in a single block. + OpenSRF offers a solution to this problem. If the method returns results that + can be divided into separate meaningful units, you can register the OpenSRF + method as a streaming method and enable the client to loop over the results one + unit at a time until the method returns no further results. In addition to + registering the method with the provided name, OpenSRF also registers an additional + method with .atomic appended to the method name. The .atomic variant gathers + all of the results into a single block to return to the client, giving the caller + the ability to choose either streaming or atomic results from a single method + definition. + In the following example, the text splitting method has been reimplemented to + support streaming; very few changes are required: + Text splitting method - streaming mode.  + +sub text_split { + my $self = shift; + my $conn = shift; + my $text = shift; + my $delimiter = shift || ' '; + + my @split_text = split $delimiter, $text; + foreach my $string (@split_text) { + $conn->respond($string); + } + return undef; +} + +__PACKAGE__->register_method( + method => 'text_split', + api_name => 'opensrf.simple-text.split', + stream => 1 +); + + + + + Rather than returning a reference to the array, a streaming method loops + over the contents of the array and invokes the respond() method of the + connection object on each element of the array. 
+
+
+ Registering the method as a streaming method instructs OpenSRF to also
+ register an atomic variant (opensrf.simple-text.split.atomic).
+
+
+ Error! Warning! Info! Debug!
+
+ As hard as it may be to believe, it is true: applications sometimes do not
+ behave in the expected manner, particularly when they are still under
+ development. The service language bindings for OpenSRF include integrated
+ support for logging messages at the levels of ERROR, WARNING, INFO, DEBUG, and
+ the extremely verbose INTERNAL to either a local file or to a syslogger
+ service. The destination of the log files, and the level of verbosity to be
+ logged, is set in the opensrf_core.xml configuration file. To add logging to
+ our Perl example, we just have to add the OpenSRF::Utils::Logger package to our
+ list of used Perl modules, then invoke the logger at the desired logging level.
+ You can include many calls to the OpenSRF logger; only those that are higher
+ than your configured logging level will actually hit the log. The following
+ example exercises all of the available logging levels in OpenSRF:
+
+use OpenSRF::Utils::Logger;
+my $logger = OpenSRF::Utils::Logger;
+# some code in some function
+{
+    $logger->error("Hmm, something bad DEFINITELY happened!");
+    $logger->warn("Hmm, something bad might have happened.");
+    $logger->info("Something happened.");
+    $logger->debug("Something happened; here are some more details.");
+    $logger->internal("Something happened; here are all the gory details.");
+}
+
+ If you call the mythical OpenSRF method containing the preceding OpenSRF logger
+ statements on a system running at the default logging level of INFO, you will
+ only see the INFO, WARN, and ERR messages, as follows:
+ Results of logging calls at the default level of INFO.  
+
+[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:]
+[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:]
+[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:]
+
+
+ If you then increase the logging level to INTERNAL (5), the logs will
+ contain much more information, as follows:
+ Results of logging calls at the level of INTERNAL.  
+
+[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:]
+[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:]
+[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:]
+[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:]
+[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:]
+[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:]
+[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:]
+[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:]
+[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:]
+[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:]
+[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0)
+[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0
+[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send
+[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
+[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:]
+...
+
+
+ To see everything that is happening in OpenSRF, try leaving your logging level
+ set to INTERNAL for a few minutes - just ensure that you have a lot of free disk
+ space available if you have a moderately busy system!
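The threshold behaviour described above is not specific to OpenSRF; any leveled logger drops messages below the configured severity. A language-neutral sketch using Python's standard logging module (not the OpenSRF logger; the logger name is invented) shows the same filtering:

```python
import logging

# Collect the level names of the messages that pass the configured threshold.
emitted = []

class CaptureHandler(logging.Handler):
    def emit(self, record):
        emitted.append(record.levelname)

log = logging.getLogger("opensrf.simple-text.demo")  # hypothetical logger name
log.addHandler(CaptureHandler())
log.setLevel(logging.INFO)  # analogous to OpenSRF's default level of INFO

log.error("Hmm, something bad DEFINITELY happened!")
log.warning("Hmm, something bad might have happened.")
log.info("Something happened.")
log.debug("Something happened; here are some more details.")  # below INFO: dropped

print(emitted)  # only the ERROR, WARNING, and INFO messages survive
```

Lowering the threshold to DEBUG is the analogue of raising the OpenSRF verbosity toward INTERNAL: the calls already in the code start reaching the log without any code changes.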
+
+ Caching results: one secret of scalability
+
+ If you have ever used an application that depends on a remote Web service
+ outside of your control—say, if you need to retrieve results from a
+ microblogging service—you know the pain of latency and dependability (or the
+ lack thereof). To improve the response time for OpenSRF services, you can take
+ advantage of the support offered by the OpenSRF::Utils::Cache module for
+ communicating with a local instance or cluster of memcache daemons to store
+ and retrieve persistent values. The following example demonstrates caching
+ by sleeping for 10 seconds the first time it receives a given cache key and
+ cannot retrieve a corresponding value from the cache:
+ Simple caching OpenSRF service.  
+
+use OpenSRF::Utils::Cache;
+sub test_cache {
+    my $self = shift;
+    my $conn = shift;
+    my $test_key = shift;
+    my $cache = OpenSRF::Utils::Cache->new('global');
+    my $cache_key = "opensrf.simple-text.test_cache.$test_key";
+    my $result = $cache->get_cache($cache_key) || undef;
+    if ($result) {
+        $logger->info("Resolver found a cache hit");
+        return $result;
+    }
+    sleep 10;
+    my $cache_timeout = 300;
+    $cache->put_cache($cache_key, "here", $cache_timeout);
+    return "There was no cache hit.";
+}
+
+
+ The OpenSRF::Utils::Cache module provides access to the built-in caching
+ support in OpenSRF.
+
+ The constructor for the cache object accepts a single argument to define
+ the cache type for the object. Each cache type can use a separate memcache
+ server to keep the caches separated. Most Evergreen services use the global
+ cache, while the anon cache is used for Web sessions.
+
+ The cache key is simply a string that uniquely identifies the value you
+ want to store or retrieve. This line creates a cache key based on the OpenSRF
+ method name and request input value.
+
+ The get_cache() method checks to see if the cache key already exists.
If
+ a matching key is found, the service immediately returns the stored value.
+
+ If the cache key does not exist, the code sleeps for 10 seconds to
+ simulate a call to a slow remote Web service or an intensive process.
+
+ The $cache_timeout variable represents a value for the lifetime of the
+ cache key in seconds.
+
+ After the code retrieves its value (or, in the case of this example,
+ finishes sleeping), it creates the cache entry by calling the put_cache()
+ method. The method accepts three arguments: the cache key, the value to be
+ stored ("here"), and the timeout value in seconds to ensure that we do not
+ return stale data on subsequent calls.
+
+ Initializing the service and its children: child labour
+
+ When an OpenSRF service is started, it looks for a procedure called
+ initialize() to set up any global variables shared by all of the children of
+ the service. The initialize() procedure is typically used to retrieve
+ configuration settings from the opensrf.xml file.
+ An OpenSRF service spawns one or more children to actually do the work
+ requested by callers of the service. For every child process an OpenSRF service
+ spawns, the child process clones the parent environment and then each child
+ process runs the child_init() procedure (if any) defined in the OpenSRF service
+ to initialize any child-specific settings.
+ When the OpenSRF service kills a child process, it invokes the child_exit()
+ procedure (if any) to clean up any resources associated with the child process.
+ Similarly, when the OpenSRF service is stopped, it calls the DESTROY()
+ procedure to clean up any remaining resources.
+
+ Retrieving configuration settings
+
+ The settings for OpenSRF services are maintained in the opensrf.xml XML
+ configuration file.
The structure of the XML document consists of a root
+ element <opensrf> containing two child elements:
+ •
+
+ The <default> element contains an <apps> element describing all
+ OpenSRF services running on this system (see the section called “Registering a
+ service with the OpenSRF configuration files”), as well as any other arbitrary
+ XML descriptions required for global configuration purposes. For example,
+ Evergreen uses this section for email notification and inter-library patron
+ privacy settings.
+
+ •
+
+ The <hosts> element contains one element per host that participates in
+ this OpenSRF system. Each host element must include an <activeapps> element
+ that lists all of the services to start on this host when the system starts
+ up. Each host element can optionally override any of the default settings.
+
+
+ OpenSRF includes a service named opensrf.settings to provide distributed
+ cached access to the configuration settings with a simple API:
+ •
+
+ opensrf.settings.default_config.get accepts zero arguments and returns
+ the complete set of default settings as a JSON document.
+
+ •
+
+ opensrf.settings.host_config.get accepts one argument (hostname) and
+ returns the complete set of settings, as customized for that hostname, as a
+ JSON document.
+
+ •
+
+ opensrf.settings.xpath.get accepts one argument (an
+ XPath expression) and returns the portion of
+ the configuration file that matches the expression as a JSON document.
+
+
+ For example, to determine whether an Evergreen system uses the opt-in
+ support for sharing patron information between libraries, you could either
+ invoke the opensrf.settings.default_config.get method and parse the
+ JSON document to determine the value, or invoke the opensrf.settings.xpath.get
+ method with the XPath /opensrf/default/share/user/opt_in argument to
+ retrieve the value directly.
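The XPath-based lookup can be pictured with any plain XML library; here is a sketch in Python (not the OpenSRF API) against a hypothetical fragment of opensrf.xml shaped to match the documented /opensrf/default/share/user/opt_in path:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mirroring the documented structure of opensrf.xml;
# the real file contains many more elements under <default> and <hosts>.
config = """
<opensrf>
  <default>
    <share>
      <user>
        <opt_in>true</opt_in>
      </user>
    </share>
  </default>
</opensrf>
"""

root = ET.fromstring(config)
# The XPath /opensrf/default/share/user/opt_in, expressed relative to the
# root <opensrf> element:
opt_in = root.findtext("./default/share/user/opt_in")
print(opt_in)
```

The opensrf.settings.xpath.get method performs the equivalent match against the full configuration file and returns the matching portion as a JSON document rather than a bare string.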
+
+ In practice, OpenSRF includes convenience libraries in all of its client
+ language bindings to simplify access to configuration values. C offers
+ osrfConfig.c, Perl offers OpenSRF::Utils::SettingsClient, Java offers
+ org.opensrf.util.SettingsClient, and Python offers osrf.set. These
+ libraries locally cache the configuration file to avoid network roundtrips for
+ every request and enable the developer to request specific values without
+ having to manually construct XPath expressions.
+
+
+ OpenSRF Communication Flows
+
+ Now that you have seen that it truly is easy to create an OpenSRF service, we
+ can take a look at what is going on under the covers to make all of this work
+ for you.
+
+ Get on the messaging bus - safely
+
+ One of the core innovations of OpenSRF was to use the Extensible Messaging and
+ Presence Protocol (XMPP, more colloquially known as Jabber) as the messaging
+ bus that ties OpenSRF services together across servers. XMPP is an "XML
+ protocol for near-real-time messaging, presence, and request-response services"
+ (http://www.ietf.org/rfc/rfc3920.txt) that OpenSRF relies on to handle most of
+ the complexity of networked communications. OpenSRF requires an XMPP server
+ that supports multiple domains, such as ejabberd.
+ Multiple domain support means that a single server can support XMPP virtual
+ hosts with separate sets of users and access privileges per domain. By
+ routing communications through separate public and private XMPP domains,
+ OpenSRF services gain an additional layer of security.
+ The OpenSRF
+ installation documentation instructs you to create two separate hostnames
+ (private.localhost and public.localhost) to use as XMPP domains. OpenSRF
+ can control access to its services based on the domain of the client and
+ whether a given service allows access from clients on the public domain.
When
+ you start OpenSRF, the first XMPP clients that connect to the XMPP server are
+ the OpenSRF public and private routers. OpenSRF routers maintain a list of
+ available services and connect clients to available services. When an OpenSRF
+ service starts, it establishes a connection to the XMPP server and registers
+ itself with the private router. The OpenSRF configuration contains a list of
+ public OpenSRF services, each of which must also register with the public
+ router.
+
+ OpenSRF communication flows over XMPP
+
+ In a minimal OpenSRF deployment, two XMPP users named "router" connect to the
+ XMPP server, with one connected to the private XMPP domain and one connected to
+ the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to
+ the XMPP server via the private and public XMPP domains. When an OpenSRF
+ service is started, it uses the "opensrf" XMPP user to advertise its
+ availability with the corresponding router on that XMPP domain; the XMPP server
+ automatically assigns a Jabber ID (JID) based on the client hostname to each
+ service’s listener process and each connected drone process waiting to carry
+ out requests. When an OpenSRF router receives a request to invoke a method on a
+ given service, it connects the requester to the next available listener in the
+ list of registered listeners for that service.
+ Services and clients connect to the XMPP server using a single set of XMPP
+ client credentials (for example, opensrf@private.localhost), but use XMPP
+ resource identifiers to differentiate themselves in the JID for each
+ connection. For example, the JID for a copy of the opensrf.simple-text
+ service with process ID 6285 that has connected to the private.localhost
+ domain using the opensrf XMPP client credentials could be
+ opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285.
By
+ convention, the user name for OpenSRF clients is opensrf, and the user name
+ for OpenSRF routers is router, so the XMPP server for OpenSRF will have four
+ separate users registered:
+ •
+
+ opensrf@private.localhost is an OpenSRF client that connects with these
+ credentials and which can access any OpenSRF service.
+
+ •
+
+ opensrf@public.localhost is an OpenSRF client that connects with these
+ credentials and which can only access OpenSRF services that have registered
+ with the public router.
+
+ •
+
+ router@private.localhost is the private OpenSRF router with which all
+ services register.
+
+ •
+
+ router@public.localhost is the public OpenSRF router with which only
+ services that must be publicly accessible register.
+
+
+ All OpenSRF services automatically register themselves with the private XMPP
+ domain, but only those services that register themselves with the public XMPP
+ domain can be invoked from public OpenSRF clients. The OpenSRF client and
+ router user names, passwords, and domain names, along with the list of services
+ that should be public, are contained in the opensrf_core.xml configuration
+ file.
+
+ OpenSRF communication flows over HTTP
+
+ In some contexts, access to a full XMPP client is not a practical option. For
+ example, while XMPP clients have been implemented in JavaScript, you might
+ be concerned about browser compatibility and processing overhead - or you might
+ want to issue OpenSRF requests from the command line with curl. Fortunately,
+ any OpenSRF service registered with the public router is accessible via the
+ OpenSRF HTTP Translator. The OpenSRF HTTP Translator implements the
+ OpenSRF-over-HTTP
+ proposed specification as an Apache module that translates HTTP requests into
+ OpenSRF requests and returns OpenSRF results as HTTP results to the initiating
+ HTTP client.
+ Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  
+
+# curl request broken up over multiple lines for legibility
+curl -H "X-OpenSRF-service: opensrf.simple-text"
+    --data 'osrf-msg=[ \
+        {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA",
+            "type":"REQUEST","payload": {"__c":"osrfMethod","__p":
+                {"method":"opensrf.simple-text.reverse","params":["foobar"]}
+        }}
+    }]'
+http://localhost/osrf-http-translator
+
+
+ The X-OpenSRF-service header identifies the OpenSRF service of interest.
+
+ The POST request consists of a single parameter, the osrf-msg value,
+ which contains a JSON array.
+
+ The first object is an OpenSRF message ("__c":"osrfMessage") with a set of
+ parameters ("__p":{}).
+
+ •
+
+ The identifier for the request ("threadTrace":0); this value is echoed
+ back in the result.
+
+ •
+
+ The message type ("type":"REQUEST").
+
+ •
+
+ The locale for the message; if the OpenSRF method is locale-sensitive, it
+ can check the locale for each OpenSRF request and return different information
+ depending on the locale.
+
+ •
+
+ The payload of the message ("payload":{}) containing the OpenSRF method
+ request ("__c":"osrfMethod") and its parameters ("__p":{}).
+
+ •
+
+ The method name for the request ("method":"opensrf.simple-text.reverse").
+
+ •
+
+ A set of JSON parameters to pass to the method ("params":["foobar"]); in
+ this case, a single string "foobar".
+
+
+ The URL on which the OpenSRF HTTP Translator is listening;
+ /osrf-http-translator is the default location in the Apache example
+ configuration files shipped with the OpenSRF source, but this is configurable.
+
+ Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  
+
+# HTTP response broken up over multiple lines for legibility
+[{"__c":"osrfMessage","__p":
+    {"threadTrace":0, "payload":
+        {"__c":"osrfResult","__p":
+            {"status":"OK","content":"raboof","statusCode":200}
+        },"type":"RESULT","locale":"en-CA"
+    }
+},
+{"__c":"osrfMessage","__p":
+    {"threadTrace":0,"payload":
+        {"__c":"osrfConnectStatus","__p":
+            {"status":"Request Complete","statusCode":205}
+        },"type":"STATUS","locale":"en-CA"
+    }
+}]
+
+
+ The OpenSRF HTTP Translator returns an array of JSON objects in its
+ response. Each object in the response is an OpenSRF message
+ ("__c":"osrfMessage") with a collection of response parameters ("__p":).
+
+ The OpenSRF message identifier ("threadTrace":0) confirms that this
+ message is in response to the request matching the same identifier.
+
+ The message includes a payload JSON object ("payload":) with an OpenSRF
+ result for the request ("__c":"osrfResult").
+
+ The result includes a status indicator string ("status":"OK"), the content
+ of the result response - in this case, a single string "raboof"
+ ("content":"raboof") - and an integer status code for the request
+ ("statusCode":200).
+
+ The message also includes the message type ("type":"RESULT") and the
+ message locale ("locale":"en-CA").
+
+ The second message in the set of results from the response.
+
+ Again, the message identifier confirms that this message is in response to
+ a particular request.
+
+ The payload of the message denotes that this message is an
+ OpenSRF connection status message ("__c":"osrfConnectStatus"), with some
+ information about the particular OpenSRF connection that was used for this
+ request.
+
+ The response parameters for an OpenSRF connection status message include a
+ verbose status ("status":"Request Complete") and an integer status code for
+ the connection status ("statusCode":205).
+
+ The message also includes the message type ("type":"STATUS") and the
+ message locale ("locale":"en-CA").
+
+
+ Before adding a new public OpenSRF service, ensure that it does
+ not introduce privilege escalation or unchecked access to data. For example,
+ the Evergreen open-ils.cstore private service is an object-relational mapper
+ that provides read and write access to the entire Evergreen database, so it
+ would be catastrophic to expose that service publicly. In comparison, the
+ Evergreen open-ils.pcrud public service offers the same functionality as
+ open-ils.cstore to any connected HTTP client or OpenSRF client, but the
+ additional authentication and authorization layer in open-ils.pcrud prevents
+ unchecked access to Evergreen’s data.
+
+ Stateless and stateful connections
+
+ OpenSRF supports both stateless and stateful connections. When an OpenSRF
+ client issues a REQUEST message in a stateless connection, the router
+ forwards the request to the next available service and the service returns the
+ result directly to the client.
+
+ When an OpenSRF client issues a CONNECT message to create a stateful connection,
+ the router returns the Jabber ID of the next available service to the client so
+ that the client can issue one or more REQUEST messages directly to that
+ particular service and the service will return corresponding RESULT messages
+ directly to the client. Until the client issues a DISCONNECT message, that
+ particular service is only available to the requesting client. Stateful
+ connections are useful for clients that need to make many requests from a
+ particular service, as they avoid the intermediary step of contacting the
+ router for each request, as well as for operations that require a controlled
+ sequence of commands, such as a set of database INSERT, UPDATE, and DELETE
+ statements within a transaction.
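In both connection styles, each REQUEST carries the same JSON envelope shown in the earlier HTTP Translator example. A sketch in Python that assembles one such envelope (the helper function name is invented; the field names are copied from that example):

```python
import json

def build_request(method, params, thread_trace=0, locale="en-CA"):
    """Assemble an osrfMessage REQUEST envelope, shaped per the HTTP example."""
    return {
        "__c": "osrfMessage",
        "__p": {
            "threadTrace": thread_trace,  # echoed back in the matching RESULT
            "locale": locale,
            "type": "REQUEST",
            "payload": {
                "__c": "osrfMethod",
                "__p": {"method": method, "params": params},
            },
        },
    }

# The osrf-msg value posted to the translator is a JSON array of such messages.
body = json.dumps([build_request("opensrf.simple-text.reverse", ["foobar"])])
```

The threadTrace identifier is what lets a client with several outstanding REQUEST messages on one connection match each incoming RESULT to the request that produced it.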
+
+ Message body format
+
+ OpenSRF was an early adopter of JavaScript Object Notation (JSON). While XMPP
+ is an XML protocol, the Evergreen developers recognized that the compactness of
+ the JSON format offered a significant reduction in bandwidth for the volume of
+ messages that would be generated in an application of that size. In addition,
+ the ability of languages such as JavaScript, Perl, and Python to generate
+ native objects with minimal parsing offered an attractive advantage over
+ invoking an XML parser for every message. Instead, the body of the XMPP message
+ is a simple JSON structure. For a simple request, like the following example
+ that simply reverses a string, it looks like significant overhead, but we get
+ the advantages of locale support and tracing the request from the requester
+ through the listener and responder (drone).
+ A request for opensrf.simple-text.reverse("foobar"):  
+
+<message from='router@private.localhost/opensrf.simple-text'
+  to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275'
+  router_from='opensrf@private.localhost/_karmic_126678.3719_6288'
+  router_to='' router_class='' router_command='' osrf_xid=''
+>
+  <thread>1266781414.366573.12667814146288</thread>
+  <body>
+[
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload":
+      {"__c":"osrfMethod","__p":
+        {"method":"opensrf.simple-text.reverse","params":["foobar"]}
+      }
+    }
+  }
+]
+  </body>
+</message>
+
+
+ A response from opensrf.simple-text.reverse("foobar").  
+
+<message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285'
+  to='opensrf@private.localhost/_karmic_126678.3719_6288'
+  router_command='' router_class='' osrf_xid=''
+>
+  <thread>1266781414.366573.12667814146288</thread>
+  <body>
+[
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","payload":
+      {"__c":"osrfResult","__p":
+        {"status":"OK","content":"raboof","statusCode":200}
+      } ,"type":"RESULT","locale":"en-US"}
+  },
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","payload":
+      {"__c":"osrfConnectStatus","__p":
+        {"status":"Request Complete","statusCode":205}
+      },"type":"STATUS","locale":"en-US"}
+  }
+]
+  </body>
+</message>
+
+
+ The content of the <body> element of the OpenSRF request and result should
+ look familiar; it matches the structure of the OpenSRF over HTTP examples that
+ we previously dissected.
+
+ Registering OpenSRF methods in depth
+
+ Let’s explore the call to __PACKAGE__->register_method(); most of the members
+ of the hash are optional, and for the sake of brevity we omitted them in the
+ previous example. As we have seen in the results of the introspection call, a
+ verbose registration method call is recommended to better enable the internal
+ documentation. Here is the complete set of members that you should pass to
+ __PACKAGE__->register_method():
+ •
+
+ The method member specifies the name of the procedure in this module that is being registered as an OpenSRF method.
+
+ •
+
+ The api_name member specifies the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix.
+
+ •
+
+ The optional api_level member can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1.
+
+ •
+
+ The optional argc member specifies the minimal number of arguments that the method expects.
•

The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to
subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a
single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return.

•

The optional signature member is a hash that describes the method’s purpose, arguments, and return value.

•

The desc member of the signature hash describes the method’s purpose.

•

The params member of the signature hash is an array of hashes in which each array element describes the corresponding method
argument in order.

•

The name member of the argument hash specifies the name of the argument.

•

The desc member of the argument hash describes the argument’s purpose.

•

The type member of the argument hash specifies the data type of the argument: for example, string, integer, boolean, number, array, or hash.

•

The return member of the signature hash is a hash that describes the return value of the method.

•

The desc member of the return hash describes the return value.

•

The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number,
array, or hash.

Evergreen-specific OpenSRF services

Evergreen is currently the primary showcase for the use of OpenSRF as an
application architecture. Evergreen 1.6.1 includes the following
set of OpenSRF services:
•

The open-ils.actor service supports common tasks for working with user
accounts and libraries.

•

The open-ils.auth service supports authentication of Evergreen users.

•

The open-ils.booking service supports the management of reservations
for bookable items.
•

The open-ils.cat service supports common cataloging tasks, such as
creating, modifying, and merging bibliographic and authority records.

•

The open-ils.circ service supports circulation tasks such as checking
out items and calculating due dates.

•

The open-ils.collections service supports tasks that assist collections
agencies in contacting users with outstanding fines above a certain
threshold.

•

The open-ils.cstore private service supports unrestricted access to
Evergreen fieldmapper objects.

•

The open-ils.ingest private service supports tasks for importing
data such as bibliographic and authority records.

•

The open-ils.pcrud service supports permission-based access to Evergreen
fieldmapper objects.

•

The open-ils.penalty service supports the calculation of
penalties for users, such as being blocked from further borrowing, for
conditions such as having too many items checked out or too many unpaid
fines.

•

The open-ils.reporter service supports the creation and scheduling of
reports.

•

The open-ils.reporter-store private service supports access to Evergreen
fieldmapper objects for the reporting service.

•

The open-ils.search service supports searching across bibliographic
records, authority records, serial records, Z39.50 sources, and ZIP codes.

•

The open-ils.storage private service supports a deprecated method of
providing access to Evergreen fieldmapper objects. Implemented in Perl,
this service has largely been replaced by the much faster C-based
open-ils.cstore service.

•

The open-ils.supercat service supports transforms of MARC records into
other formats, such as MODS, as well as providing Atom and RSS feeds and
SRU access.

•

The open-ils.trigger private service supports event-based triggers for
actions such as overdue and holds available notification emails.
•

The open-ils.vandelay service supports the import and export of batches of
bibliographic and authority records.

Of some interest is that the open-ils.reporter-store and open-ils.cstore
services have identical implementations. Surfacing them as separate services
enables a deployer of Evergreen to ensure that the reporting service does not
interfere with the performance-critical open-ils.cstore service. One can also
direct the reporting service to a read-only database replica to, again, avoid
interference with open-ils.cstore, which must write to the master database.
There are only a few significant services that are not built on OpenSRF in
Evergreen 1.6.0, such as the SIP and Z39.50 servers. These services implement
different protocols and build on existing daemon architectures (Simple2ZOOM
for Z39.50), but still rely on the other OpenSRF services to provide access
to the Evergreen data. The non-OpenSRF services are reasonably self-contained
and can be deployed on different servers to deliver the same sort of deployment
flexibility as OpenSRF services, but have the disadvantage of not being
integrated into the same configuration and control infrastructure as the
OpenSRF services.

Chapter 18. Evergreen Data Models and Access
Report errors in this documentation using Launchpad.

This chapter was taken from Dan Scott's Developer Workshop, February 2010.

Exploring the Database Schema

The database schema is tied pretty tightly to PostgreSQL.
Although PostgreSQL
adheres closely to ANSI SQL standards, the use of schemas, SQL functions
implemented in both plpgsql and plperl, and PostgreSQL’s native full-text
search would make it… challenging… to port to other database platforms.
A few common PostgreSQL interfaces for poking around the schema and
manipulating data are:
•

psql (the command line client)

•

pgAdmin III (a GUI client)

Or you can read through the source files in Open-ILS/src/sql/Pg.
Let’s take a quick tour through the schemas, pointing out some highlights
and some key interdependencies:
•

actor.org_unit → asset.copy_location

•

actor.usr → actor.card

•

biblio.record_entry → asset.call_number → asset.copy

•

config.metabib_field → metabib.*_field_entry

This documentation also contains an appendix describing the Evergreen
database schema (Chapter 22, Database Schema).

Database access methods

You could use direct access to the database via Perl DBI, JDBC, etc.,
but Evergreen offers several database CRUD services for
creating / retrieving / updating / deleting data. These avoid tying
you too tightly to the current database schema, and they funnel database
access through the same mechanism rather than tying up connections
with other interfaces.

Evergreen Interface Definition Language (IDL)

Defines properties and required permissions for Evergreen classes.
To reduce network overhead, a given object is identified via a
class-hint and serialized as a JSON array of properties (no named properties).
As of 1.6, fields are serialized in the order in which they appear
in the IDL definition file, and the is_new / is_changed / is_deleted
properties are automatically added.
This has greatly reduced the size of
the fm_IDL.xml file and makes DRY people happier :)
•

… oils_persist:readonly tells us, if true, that the data lives in the database, but is pulled from the SELECT statement defined in the <oils_persist:source_definition>
child element

IDL basic example (config.language_map)

<class id="clm" controller="open-ils.cstore open-ils.pcrud"
  oils_obj:fieldmapper="config::language_map"
  oils_persist:tablename="config.language_map"
  reporter:label="Language Map" oils_persist:field_safe="true">
  <fields oils_persist:primary="code" oils_persist:sequence="">
    <field reporter:label="Language Code" name="code"
      reporter:selector="value" reporter:datatype="text"/>
    <field reporter:label="Language" name="value"
      reporter:datatype="text" oils_persist:i18n="true"/>
  </fields>
  <links/>
  <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1">
    <actions>
      <create global_required="true" permission="CREATE_MARC_CODE"/>
      <retrieve global_required="true"
        permission="CREATE_MARC_CODE UPDATE_MARC_CODE DELETE_MARC_CODE"/>
      <update global_required="true" permission="UPDATE_MARC_CODE"/>
      <delete global_required="true" permission="DELETE_MARC_CODE"/>
    </actions>
  </permacrud>
</class>

The class element defines the attributes and permissions for classes,
and relationships between classes.

•

The id attribute on the class element defines the class hint that is
used everywhere in Evergreen.

•

The controller attribute defines the OpenSRF
services that provide access to the data for the class objects.

The oils_obj:fieldmapper attribute defines the name of the class that
is generated by OpenILS::Utils::Fieldmapper.

The oils_persist:tablename attribute defines the name of the table
that contains the data for the class objects.
The reporter interface uses reporter:label attribute values in
the source list to provide meaningful class and attribute names. The
open-ils.fielder service generates a set of methods that provide direct
access to the classes for which oils_persist:field_safe is true. For
example,

srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \
{"query":{"code":{"=":"eng"}}}

Received Data: [
  {
    "value":"English",
    "code":"eng"
  }
]

The fields element defines the list of fields for the class.

•

The oils_persist:primary attribute defines the column that acts as
the primary key for the table.

•

The oils_persist:sequence attribute holds the name of the database
sequence.

Each field element defines one property of the class.

•

The name attribute defines the getter/setter method name for the field.

•

The reporter:label attribute defines the attribute name as used in
the reporter interface.

•

The reporter:selector attribute defines the field used in the reporter
filter interface to provide a selectable list. This gives the user a more
meaningful access point than the raw numeric ID or abstract code.

•

The reporter:datatype attribute defines the type of data held by
this property for the purposes of the reporter.

The oils_persist:i18n attribute, when true, means that
translated values for the field’s contents may be accessible in
different locales.

The permacrud element defines the permissions (if any) required
to create, retrieve, update, and delete data for this
class. open-ils.permacrud must be defined as a controller for the class
for the permissions to be applied.

Each action requires one or more permission values that the
user must possess to perform the action.

•

If the global_required attribute is true, then the user must
have been granted that permission globally (depth = 0) to perform
the action.
•

The context_field attribute denotes the <field> that identifies
the org_unit at which the user must have the pertinent permission.

•

An action element may contain a <context_field> element that
defines the linked class (identified by the link attribute) and
the field in the linked class that identifies the org_unit where
the permission must be held.

•

If the <context_field> element contains a jump attribute,
then it defines a link to a class with a field identifying
the org_unit where the permission must be held.

Reporter data types and their possible values

•

bool: Boolean true or false

•

id: ID of the row in the database

•

int: integer value

•

interval: PostgreSQL time interval

•

link: link to another class, as defined in the <links>
element of the class definition

•

money: currency amount

•

org_unit: list of org_units

•

text: text value

•

timestamp: PostgreSQL timestamp

IDL example with linked fields (actor.workstation)

Just as tables often include columns with foreign keys that point
to values stored in a column of a different table, IDL classes
can contain fields that link to fields in other classes.
The <links>
element defines which fields link to fields in other classes, and
the nature of the relationship:

<class id="aws" controller="open-ils.cstore"
  oils_obj:fieldmapper="actor::workstation"
  oils_persist:tablename="actor.workstation"
  reporter:label="Workstation">
  <fields oils_persist:primary="id"
    oils_persist:sequence="actor.workstation_id_seq">
    <field reporter:label="Workstation ID" name="id"
      reporter:datatype="id"/>
    <field reporter:label="Workstation Name" name="name"
      reporter:datatype="text"/>
    <field reporter:label="Owning Library" name="owning_lib"
      reporter:datatype="org_unit"/>
    <field reporter:label="Circulations" name="circulations"
      oils_persist:virtual="true" reporter:datatype="link"/>
  </fields>
  <links>
    <link field="owning_lib" reltype="has_a" key="id"
      map="" class="aou"/>
    <link field="circulations" reltype="has_many" key="workstation"
      map="" class="circ"/>
    <link field="circulation_checkins" reltype="has_many"
      key="checkin_workstation" map="" class="circ"/>
  </links>
</class>

This field includes an oils_persist:virtual attribute with the value of
true, meaning that the linked class circ is a virtual class.

The <links> element contains 0 or more <link> elements.

Each <link> element defines the field (field) that links to a different
class (class), and the relationship (reltype) between this field and the target
field (key). If the field in this class links to a virtual class, the map
attribute defines the field in the target class that returns a list of matching
objects for each object in this class.

open-ils.cstore data access interfaces

For each class documented in the IDL, the open-ils.cstore service
automatically generates a set of data access methods, based on the
oils_persist:tablename class attribute.
For example, for the class hint clm, cstore generates the following
methods with the config.language_map qualifier:
•

open-ils.cstore.direct.config.language_map.id_list {"code" { "like": "e%" } }

Retrieves a list composed only of the IDs that match the query.
•

open-ils.cstore.direct.config.language_map.retrieve "eng"

Retrieves the object that matches a specific ID.
•

open-ils.cstore.direct.config.language_map.search {"code" : "eng"}

Retrieves a list of objects that match the query.
•

open-ils.cstore.direct.config.language_map.create <_object_>

Creates a new object from the passed-in object.
•

open-ils.cstore.direct.config.language_map.update <_object_>

Updates the object that has been passed in.
•

open-ils.cstore.direct.config.language_map.delete "eng"

Deletes the object that matches the query.

open-ils.pcrud data access interfaces

For each class documented in the IDL, the open-ils.pcrud service
automatically generates a set of data access methods, named after
the class hint (the id attribute of the class element in the IDL).
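The method names that open-ils.cstore and open-ils.pcrud generate follow mechanical patterns. The helper functions below are hypothetical illustrations only (the real methods are generated server-side from the IDL, not assembled by client code); they simply show how the service name, action, and table name or class hint combine:

```python
# Hypothetical helpers illustrating the generated-method naming patterns
# described in the text; Evergreen builds the real methods from the IDL.

def cstore_method(tablename: str, action: str) -> str:
    # open-ils.cstore derives names from the oils_persist:tablename attribute
    return "open-ils.cstore.direct.%s.%s" % (tablename, action)

def pcrud_method(hint: str, action: str) -> str:
    # open-ils.pcrud derives names from the class hint, with the action first
    return "open-ils.pcrud.%s.%s" % (action, hint)

print(cstore_method("config.language_map", "retrieve"))
# open-ils.cstore.direct.config.language_map.retrieve
print(pcrud_method("clm", "retrieve"))
# open-ils.pcrud.retrieve.clm
```

Note how the two services place the action differently: cstore appends it after the fully qualified table name, while pcrud inserts it before the class hint.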
For example, for the class hint clm, open-ils.pcrud generates the following
methods that parallel the open-ils.cstore interface:
•

open-ils.pcrud.id_list.clm <_authtoken_>, { "code": { "like": "e%" } }

•

open-ils.pcrud.retrieve.clm <_authtoken_>, "eng"

•

open-ils.pcrud.search.clm <_authtoken_>, { "code": "eng" }

•

open-ils.pcrud.create.clm <_authtoken_>, <_object_>

•

open-ils.pcrud.update.clm <_authtoken_>, <_object_>

•

open-ils.pcrud.delete.clm <_authtoken_>, "eng"

Transaction and savepoint control

Both open-ils.cstore and open-ils.pcrud enable you to control database transactions
to ensure that a set of operations either all succeed, or all fail,
atomically:
•

open-ils.cstore.transaction.begin

•

open-ils.cstore.transaction.commit

•

open-ils.cstore.transaction.rollback

•

open-ils.pcrud.transaction.begin

•

open-ils.pcrud.transaction.commit

•

open-ils.pcrud.transaction.rollback

At a more granular level, open-ils.cstore and open-ils.pcrud enable you to set database
savepoints to ensure that a set of operations either all succeed, or all
fail, atomically, within a given transaction:
•

open-ils.cstore.savepoint.begin

•

open-ils.cstore.savepoint.commit

•

open-ils.cstore.savepoint.rollback

•

open-ils.pcrud.savepoint.begin

•

open-ils.pcrud.savepoint.commit

•

open-ils.pcrud.savepoint.rollback

Transactions and savepoints must be performed within a stateful
connection to the open-ils.cstore and open-ils.pcrud services.
In srfsh, you can open a stateful connection using the open
command, and then close the stateful connection using the close
command - for example:
srfsh# open open-ils.cstore
... 
perform various transaction-related work
srfsh# close open-ils.cstore

JSON Queries

Beyond simply retrieving objects by their ID using the *.retrieve
methods, you can issue queries against the *.delete and *.search
methods using JSON to filter results with simple or complex search
conditions.
For example, to generate a list of barcodes that are held in a
copy location that allows holds and is visible in the OPAC:

srfsh# request open-ils.cstore open-ils.cstore.json_query
  {"select": {"acp":["barcode"], "acpl":["name"]},
   "from": {"acp":"acpl"},
   "where": [
     {"+acpl": "holdable"},
     {"+acpl": "opac_visible"}
   ]}

Received Data: {
  "barcode":"BARCODE1",
  "name":"Stacks"
}

Received Data: {
  "barcode":"BARCODE2",
  "name":"Stacks"
}

Invoke the json_query service.

Select the barcode field from the acp class and the name
field from the acpl class.

Join the acp class to the acpl class based on the linked field
defined in the IDL.

Add a where clause to filter the results. We have more than one
condition beginning with the same key, so we wrap the conditions inside
an array.

The first condition tests whether the boolean value of the holdable
field on the acpl class is true.

The second condition tests whether the boolean value of the
opac_visible field on the acpl class is true.

For thorough coverage of the breadth of support offered by JSON
query syntax, see JSON Queries: A Tutorial.

Fleshing linked objects

A simplistic approach to retrieving a set of objects that are linked to
an object that you are retrieving - for example, a set of call numbers
linked to the barcodes that a given user has borrowed - would be to:
1. Retrieve the list of circulation objects (circ class)
for a given user (usr class).
2. For each circulation object, look up the target copy (target_copy
field, linked to the acp class).
3. 
For each copy, look up the call number for that copy (call_number
field, linked to the acn class).
However, this would result in potentially hundreds of round-trip
queries from the client to the server. Even with low-latency connections,
the network overhead would be considerable. So, built into the open-ils.cstore and
open-ils.pcrud access methods is the ability to flesh linked fields -
that is, rather than return an identifier to a given linked field,
the method can return the entire object as part of the initial response.
Most of the interfaces that return class instances from the IDL offer the
ability to flesh returned fields. For example, the
open-ils.cstore.direct.*.retrieve methods allow you to specify a
JSON structure defining the fields you wish to flesh in the returned object.
Fleshing fields in objects returned by open-ils.cstore. 

srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \
  {
    "flesh": 1,
    "flesh_fields": {
      "acp": ["location"]
    }
  }

The flesh argument is the depth at which objects should be fleshed.
For example, to flesh out a field that links to another object that includes
a field that links to another object, you would specify a depth of 2.

The flesh_fields argument contains a list of objects with the fields
to flesh for each object.

Let’s flesh things a little deeper. In addition to the copy location,
let’s also flesh the call number attached to the copy, and then flesh
the bibliographic record attached to the call number.
Fleshing fields in fields of objects returned by open-ils.cstore. 

request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \
  {
    "flesh": 2,
    "flesh_fields": {
      "acp": ["location", "call_number"],
      "acn": ["record"]
    }
  }

Adding an IDL entry for ResolverResolver

Most OpenSRF methods in Evergreen define their object interface in the
IDL.
Without an entry in the IDL, the prospective caller of a given
method is forced to either call the method and inspect the returned
contents, or read the source to work out the structure of the JSON
payload. At this stage of the tutorial, we have not defined an entry
in the IDL to represent the object returned by the
open-ils.resolver.resolve_holdings method. It is time to complete
that task.
The open-ils.resolver service is unlike many of the other classes
defined in the IDL because its data is not stored in the Evergreen
database. Instead, the data is requested from an external Web service
and only temporarily cached in memcached. Fortunately, the IDL
enables us to represent this kind of class by setting the
oils_persist:virtual class attribute to true.
So, let’s add an entry to the IDL for the open-ils.resolver.resolve_holdings
service:

And let’s make ResolverResolver.pm return an array composed of our new
rhr classes rather than raw JSON objects:

Once we add the new entry to the IDL and copy the revised ResolverResolver.pm
Perl module to /openils/lib/perl5/OpenILS/Application/, we need to:
1.

Copy the updated IDL to both the /openils/conf/ and
/openils/var/web/reports/ directories. The Dojo approach to
parsing the IDL uses the IDL stored in the reports directory.

2.

Restart the Perl services to make the new IDL visible to the services
and refresh the open-ils.resolver implementation.

3.

Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions
of the IDL required by the HTTP translator and gateway.

We also need to adjust our JavaScript client to use the nifty new
objects that open-ils.resolver.resolve_holdings now returns.
The best approach is to use the support in Evergreen’s Dojo extensions
to generate the JavaScript classes directly from the IDL XML file.
Accessing classes defined in the IDL via Fieldmapper. 

Load the Dojo core.
fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to
generate a list of class properties.

fieldmapper.dojoData seems to provide a store for Evergreen data
accessed via Dojo.

fieldmapper.Fieldmapper converts the list of class properties into
actual classes.

fieldmapper.standardRequest invokes an OpenSRF method and returns
an array of objects.

The first argument to fieldmapper.standardRequest is an array
containing the OpenSRF service name and method name.

The second argument to fieldmapper.standardRequest is an array
containing the arguments to pass to the OpenSRF method.

As Fieldmapper has instantiated the returned objects based on their
class hints, we can invoke getter/setter methods on the objects.

Chapter 19. Introduction to SQL for Evergreen Administrators
Report errors in this documentation using Launchpad.

This chapter was taken from Dan Scott's Introduction to SQL for Evergreen Administrators, February 2010.

Introduction to SQL Databases

Introduction

Over time, the SQL database has become the standard method of storing,
retrieving, and processing raw data for applications.
Ranging from embedded
databases such as SQLite and Apache Derby to enterprise databases such as
Oracle and IBM DB2, any SQL database offers basic advantages to application
developers such as standard interfaces (Structured Query Language (SQL), Java
Database Connectivity (JDBC), Open Database Connectivity (ODBC), Perl Database
Independent Interface (DBI)), a standard conceptual model of data (tables,
fields, relationships, constraints, etc.), performance in storing and retrieving
data, and concurrent access.
Evergreen is built on PostgreSQL, an open source SQL database that began as
POSTGRES at the University of California at Berkeley in 1986 as a research
project led by Professor Michael Stonebraker. A SQL interface was added to a
fork of the original POSTGRES Berkeley code in 1994, and in 1996 the project
was renamed PostgreSQL.

Tables

The table is the cornerstone of a SQL database. Conceptually, a database table
is similar to a single sheet in a spreadsheet: every table has one or more
columns, with each row in the table containing values for each column. Each
column in a table defines an attribute corresponding to a particular data type.
We’ll insert a row into a table, then display the resulting contents. Don’t
worry if the INSERT statement is completely unfamiliar; we’ll talk more about
the syntax of the insert statement later.
actor.usr_note database table. 

evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value)
  VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!');

evergreen=# select id, usr, creator, pub, title, value from actor.usr_note;
 id | usr | creator | pub |      title       |          value
----+-----+---------+-----+------------------+-------------------------
  1 |   1 |       1 | t   | Who is this guy? | He's the administrator!
(1 row)

PostgreSQL supports table inheritance, which lets you define tables that
inherit the column definitions of a given parent table.
A search of the data in + the parent table includes the data in the child tables. Evergreen uses table + inheritance: for example, the action.circulation table is a child of the + money.billable_xact table, and the money.*_payment tables all inherit from + the money.payment parent table. + + SchemasSchemas + + PostgreSQL, like most SQL databases, supports the use of schema names to group + collections of tables and other database objects together. You might think of + schemas as namespaces if you’re a programmer; or you might think of the schema + / table / column relationship like the area code / exchange / local number + structure of a telephone number. + Table 19.1. Examples: database object namesFull name Schema name Table name Field nameactor.usr_note.titleactorusr_notetitlebiblio.record_entry.marcbibliorecord_entrymarc + The default schema name in PostgreSQL is public, so if you do not specify a + schema name when creating or accessing a database object, PostgreSQL will use + the public schema. As a result, you might not find the object that you’re + looking for if you don’t use the appropriate schema. + Example: Creating a table without a specific schema.  + +evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT); +CREATE TABLE +evergreen=# \d foobar + Table "public.foobar" + Column | Type | Modifiers +--------+------+----------- + foo | text | + bar | text | + + + Example: Trying to access a unqualified table outside of the public schema.  + evergreen=# SELECT * FROM usr_note; + ERROR: relation "usr_note" does not exist + LINE 1: SELECT * FROM usr_note; + ^ + + Evergreen uses schemas to organize all of its tables with mostly intuitive, + if short, schema names. Here’s the current (as of 2010-01-03) list of schemas + used by Evergreen: + Table 19.2. 
Evergreen schema names

Schema name     | Description
----------------+-----------------------------------------------------
acq             | Acquisitions
action          | Circulation actions
action_trigger  | Event mechanisms
actor           | Evergreen users and organization units
asset           | Call numbers and copies
auditor         | Track history of changes to selected tables
authority       | Authority records
biblio          | Bibliographic records
booking         | Resource bookings
config          | Evergreen configurable options
container       | Buckets for records, call numbers, copies, and users
extend_reporter | Extra views for report definitions
metabib         | Metadata about bibliographic records
money           | Fines and bills
offline         | Offline transactions
permission      | User permissions
query           | Stored SQL statements
reporter        | Report definitions
search          | Search functions
serial          | Serial MFHD records
stats           | Convenient views of circulation and asset statistics
vandelay        | MARC batch importer and exporter

The term schema has two meanings in the world of SQL databases. We have
discussed the schema as a conceptual grouping of tables and other database
objects within a given namespace; for example, "the actor schema contains the
tables and functions related to users and organizational units". Another common
usage of schema is to refer to the entire data model for a given database;
for example, "the Evergreen database schema".

Columns

Each column definition consists of:
•

a data type

•

(optionally) a default value to be used whenever a row is inserted that
does not contain a specific value

•

(optionally) one or more constraints on the values beyond data type

Although PostgreSQL supports dozens of data types, Evergreen makes our life
easier by only using a handful.
Table 19.3.
PostgreSQL data types used by Evergreen

Type name                 | Description                    | Limits
--------------------------+--------------------------------+---------------------------------------------
INTEGER                   | Medium integer                 | -2147483648 to +2147483647
BIGINT                    | Large integer                  | -9223372036854775808 to 9223372036854775807
SERIAL                    | Sequential integer             | 1 to 2147483647
BIGSERIAL                 | Large sequential integer       | 1 to 9223372036854775807
TEXT                      | Variable length character data | Unlimited length
BOOL                      | Boolean                        | TRUE or FALSE
TIMESTAMP WITH TIME ZONE  | Timestamp                      | 4713 BC to 294276 AD
TIME                      | Time                           | Expressed in HH:MM:SS
NUMERIC(precision, scale) | Decimal                        | Up to 1000 digits of precision. In Evergreen
                          |                                | mostly used for money values, with a
                          |                                | precision of 6 and a scale of 2 (####.##).

Full details about these data types are available from the
data types section of
the PostgreSQL manual.

Constraints

Prevent NULL values

A column definition may include the constraint NOT NULL to prevent NULL
values. In PostgreSQL, a NULL value is not the equivalent of zero or false or
an empty string; it is an explicit non-value with special properties. We’ll
talk more about how to work with NULL values when we get to queries.

Primary key

Every table can have at most one primary key. A primary key consists of one or
more columns which together uniquely identify each row in a table. If you
attempt to insert a row into a table that would create a duplicate or NULL
primary key entry, the database rejects the row and returns an error.
Natural primary keys are drawn from the intrinsic properties of the data being
modelled. For example, some potential natural primary keys for a table that
contains people would be:
Table 19.4.
Example: Some potential natural primary keys for a table of people

  Natural key: First name, last name, address
    Pros: No two people with the same name would ever live at the same address, right?
    Cons: Lots of columns force data duplication in referencing tables

  Natural key: SSN or driver’s license
    Pros: These are guaranteed to be unique
    Cons: Lots of people don’t have an SSN or a driver’s license

To avoid problems with natural keys, many applications instead define surrogate
primary keys. A surrogate primary key is a column with an autoincrementing
integer value added to a table definition that ensures uniqueness.
Evergreen uses surrogate keys (a column named id with a SERIAL data type)
for most of its tables.

Foreign keys

Every table can contain zero or more foreign keys: one or more columns that
refer to the primary key of another table.
For example, let’s consider Evergreen’s modelling of the basic relationship
between copies, call numbers, and bibliographic records. Bibliographic records
contained in the biblio.record_entry table can have call numbers attached to
them. Call numbers are contained in the asset.call_number table, and they can
have copies attached to them. Copies are contained in the asset.copy table.

Table 19.5. Example: Evergreen’s copy / call number / bibliographic record relationships

  Table                Primary key              Column with a foreign key   Points to
  asset.copy           asset.copy.id            asset.copy.call_number      asset.call_number.id
  asset.call_number    asset.call_number.id     asset.call_number.record    biblio.record_entry.id
  biblio.record_entry  biblio.record_entry.id

Check constraints

PostgreSQL enables you to define rules to ensure that the value to be inserted
or updated meets certain conditions. For example, you can ensure that an
incoming integer value is within a specific range, or that a ZIP code matches a
particular pattern.
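Both kinds of conditions can be sketched with the CHECK clause. The
example_rating table below is a hypothetical illustration, not part of the
Evergreen schema:

```sql
-- Hypothetical table (not part of the Evergreen schema) showing two
-- CHECK constraints: a range check and a pattern check.
CREATE TABLE example_rating (
    id       SERIAL PRIMARY KEY,
    rating   INTEGER NOT NULL
             CHECK (rating BETWEEN 1 AND 5),              -- range check
    zip_code TEXT NOT NULL
             CHECK (zip_code ~ '^[0-9]{5}(-[0-9]{4})?$')  -- pattern check
);

-- Accepted:
INSERT INTO example_rating (rating, zip_code) VALUES (3, '12345');
-- Rejected with a check constraint violation:
INSERT INTO example_rating (rating, zip_code) VALUES (9, 'ABCDE');
```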
Deconstructing a table definition statement

The actor.org_address table is a simple table in the Evergreen schema that
we can use as a concrete example of many of the properties of databases that
we have discussed so far.

CREATE TABLE actor.org_address (
    id SERIAL PRIMARY KEY,
    valid BOOL NOT NULL DEFAULT TRUE,
    address_type TEXT NOT NULL DEFAULT 'MAILING',
    org_unit INT NOT NULL REFERENCES actor.org_unit (id)
        DEFERRABLE INITIALLY DEFERRED,
    street1 TEXT NOT NULL,
    street2 TEXT,
    city TEXT NOT NULL,
    county TEXT,
    state TEXT NOT NULL,
    country TEXT NOT NULL,
    post_code TEXT NOT NULL
);

The column named id is defined with a special data type of SERIAL; if
given no value when a row is inserted into the table, the database automatically
generates the next sequential integer value for the column. SERIAL is a
popular data type for a primary key because it is guaranteed to be unique - and
indeed, the constraint for this column identifies it as the PRIMARY KEY.

The data type BOOL defines a boolean value: TRUE or FALSE are the only
acceptable values for the column. The constraint NOT NULL instructs the
database to prevent the column from ever containing a NULL value. The column
property DEFAULT TRUE instructs the database to automatically set the value
of the column to TRUE if no value is provided.

The data type TEXT defines a text column of practically unlimited length.
As with the previous column, there is a NOT NULL constraint, and a default
value of 'MAILING' will result if no other value is supplied.

The REFERENCES actor.org_unit (id) clause indicates that this column has a
foreign key relationship to the actor.org_unit table, and that the value of
this column in every row in this table must have a corresponding value in the
id column in the referenced table (actor.org_unit).
The column named street2 demonstrates that not all columns have constraints
beyond data type. In this case, the column is allowed to be NULL or to contain a
TEXT value.

Displaying a table definition using psql

The psql command-line interface is the preferred method for accessing
PostgreSQL databases. It offers features like tab-completion, readline support
for recalling previous commands, flexible input and output formats, and
is accessible via a standard SSH session.
If you press the Tab key once after typing one or more characters of the
database object name, psql automatically completes the name if there are no
other matches. If there are other matches for your current input, nothing
happens until you press the Tab key a second time, at which point psql
displays all of the matches for your current input.
To display the definition of a database object such as a table, issue the
command \d object-name. For example, to display the definition of the
actor.usr_note table:

$ psql evergreen
psql (8.4.1)
Type "help" for help.

evergreen=# \d actor.usr_note
             Table "actor.usr_note"
   Column    |           Type           |                          Modifiers
-------------+--------------------------+-------------------------------------------------------------
 id          | bigint                   | not null default nextval('actor.usr_note_id_seq'::regclass)
 usr         | bigint                   | not null
 creator     | bigint                   | not null
 create_date | timestamp with time zone | default now()
 pub         | boolean                  | not null default false
 title       | text                     | not null
 value       | text                     | not null
Indexes:
    "usr_note_pkey" PRIMARY KEY, btree (id)
    "actor_usr_note_creator_idx" btree (creator)
    "actor_usr_note_usr_idx" btree (usr)
Foreign-key constraints:
    "usr_note_creator_fkey" FOREIGN KEY (creator) REFERENCES actor.usr(id) ON ...
    "usr_note_usr_fkey" FOREIGN KEY (usr) REFERENCES actor.usr(id) ON DELETE ....
evergreen=# \q
$

This is the most basic connection to a PostgreSQL database. You can use a
number of other flags to specify user name, hostname, port, and other options.
The \d command displays the definition of a database object.
The \q command quits the psql session and returns you to the shell prompt.

Basic SQL queries

The SELECT statement

The SELECT statement is the basic tool for retrieving information from a
database. The syntax for most SELECT statements is:

SELECT [column(s)]
  FROM [table(s)]
  [WHERE condition(s)]
  [GROUP BY column(s)]
  [HAVING grouping-condition(s)]
  [ORDER BY column(s)]
  [LIMIT maximum-results]
  [OFFSET start-at-result-#]
;

For example, to select all of the columns for each row in the
actor.usr_address table, issue the following query:

SELECT *
  FROM actor.usr_address
;

Selecting particular columns from a table

SELECT * returns all columns from all of the tables included in your query.
However, quite often you will want to return only a subset of the possible
columns. You can retrieve specific columns by listing the names of the columns
you want after the SELECT keyword. Separate each column name with a comma.
For example, to select just the city, county, and state from the
actor.usr_address table, issue the following query:

SELECT city, county, state
  FROM actor.usr_address
;

Sorting results with the ORDER BY clause

By default, a SELECT statement returns rows matching your query with no
guarantee of any particular order in which they are returned. To force
the rows to be returned in a particular order, use the ORDER BY clause
to specify one or more columns to determine the sorting priority of the
rows.
For example, to sort the rows returned from your actor.usr_address query by
city, with county and then zip code as the tie breakers, issue the
following query:

SELECT city, county, state
  FROM actor.usr_address
  ORDER BY city, county, post_code
;

Filtering results with the WHERE clause

Thus far, your queries have returned all of the rows in the table.
Normally, however, you will want to restrict the rows that are returned to the
subset of rows that match one or more conditions of your search. The WHERE
clause enables you to specify a set of conditions that filter your query
results. Each condition in the WHERE clause is an SQL expression that returns
a boolean (true or false) value.
For example, to restrict the results returned from your actor.usr_address
query to only those rows containing a state value of Connecticut, issue the
following query:

SELECT city, county, state
  FROM actor.usr_address
  WHERE state = 'Connecticut'
  ORDER BY city, county, post_code
;

You can include more conditions in the WHERE clause with the OR and AND
operators. For example, to further restrict the results returned from your
actor.usr_address query to only those rows where the state column contains a
value of Connecticut and the city column contains a value of Hartford,
issue the following query:

SELECT city, county, state
  FROM actor.usr_address
  WHERE state = 'Connecticut'
    AND city = 'Hartford'
  ORDER BY city, county, post_code
;

To return rows where the state is Connecticut and the city is Hartford or
New Haven, you must use parentheses to explicitly group the city value
conditions together, or else the database will evaluate the OR city = 'New
Haven' clause entirely on its own and match all rows where the city column is
New Haven, even though the state might not be Connecticut.

Trouble with OR.
SELECT city, county, state
  FROM actor.usr_address
  WHERE state = 'Connecticut'
    AND city = 'Hartford' OR city = 'New Haven'
  ORDER BY city, county, post_code
;

-- Can return unwanted rows because the OR is not grouped!

Grouped OR’ed conditions.

SELECT city, county, state
  FROM actor.usr_address
  WHERE state = 'Connecticut'
    AND (city = 'Hartford' OR city = 'New Haven')
  ORDER BY city, county, post_code
;

-- The parentheses ensure that the OR is applied to the cities, and the
-- state in either case must be 'Connecticut'

Comparison operators

Here is a partial list of comparison operators that are commonly used in
WHERE clauses:

Comparing two scalar values
• x = y (equal to)
• x != y (not equal to)
• x < y (less than)
• x > y (greater than)
• x LIKE y (TEXT value x matches a subset of TEXT y, where y is a string that
  can contain % as a wildcard for 0 or more characters, and _ as a wildcard
  for a single character. For example, WHERE 'all you can eat fish and chips
  and a big stick' LIKE '%fish%stick' would return TRUE)
• x ILIKE y (like LIKE, but the comparison ignores upper-case / lower-case)
• x IN y (x is in the list of values y, where y can be a list or a SELECT
  statement that returns a list)

NULL values

SQL databases have a special way of representing the value of a column that has
no value: NULL. A NULL value is not equal to zero, and is not an empty
string; it is equal to nothing, not even another NULL, because it has no value
that can be compared.
To return rows from a table where a given column is not NULL, use the
IS NOT NULL comparison operator.

Retrieving rows where a column is not NULL.
SELECT id, first_given_name, family_name
  FROM actor.usr
  WHERE second_given_name IS NOT NULL
;

Similarly, to return rows from a table where a given column is NULL, use
the IS NULL comparison operator.

Retrieving rows where a column is NULL.

SELECT id, first_given_name, second_given_name, family_name
  FROM actor.usr
  WHERE second_given_name IS NULL
;

 id | first_given_name | second_given_name |  family_name
----+------------------+-------------------+----------------
  1 | Administrator    |                   | System Account
(1 row)

Notice that the NULL value in the output is displayed as empty space,
indistinguishable from an empty string; this is the default display method in
psql. You can change the behaviour of psql using the \pset command:

Changing the way NULL values are displayed in psql.

evergreen=# \pset null '(null)'
Null display is '(null)'.

SELECT id, first_given_name, second_given_name, family_name
  FROM actor.usr
  WHERE second_given_name IS NULL
;

 id | first_given_name | second_given_name |  family_name
----+------------------+-------------------+----------------
  1 | Administrator    | (null)            | System Account
(1 row)

Database queries within programming languages such as Perl and C have
special methods of checking for NULL values in returned results.

Text delimiter: '

You might have noticed that we have been using the ' character to delimit
TEXT values, including values such as dates and times that are expressed as
TEXT. Sometimes, however, your TEXT value itself contains a ' character, such
as the word you’re. To prevent the database from prematurely ending the TEXT
value at the first ' character and returning a syntax error, use another '
character to escape the following ' character.
For example, to change the last name of a user in the actor.usr table to
L’estat, issue the following SQL:

Escaping ' in TEXT values.
UPDATE actor.usr
  SET family_name = 'L''estat'
  WHERE profile IN (
    SELECT id
      FROM permission.grp_tree
      WHERE name = 'Vampire'
  )
;

When you retrieve the row from the database, the value is displayed with just
a single ' character:

SELECT id, family_name
  FROM actor.usr
  WHERE family_name = 'L''estat'
;

 id | family_name
----+-------------
  1 | L'estat
(1 row)

Grouping and eliminating results with the GROUP BY and HAVING clauses

The GROUP BY clause returns a unique set of results for the desired columns.
This is most often used in conjunction with an aggregate function to present
results for a range of values in a single query, rather than requiring you to
issue one query per target value.

Returning unique results of a single column with GROUP BY.

SELECT grp
  FROM permission.grp_perm_map
  GROUP BY grp
  ORDER BY grp;

 grp
-----
   1
   2
   3
   4
   5
   6
   7
  10
(8 rows)

While GROUP BY can be useful for a single column, it is more often used
to return the distinct results across multiple columns. For example, the
following query shows us which groups have permissions at each depth in
the library hierarchy:

Returning unique results of multiple columns with GROUP BY.

SELECT grp, depth
  FROM permission.grp_perm_map
  GROUP BY grp, depth
  ORDER BY depth, grp;

 grp | depth
-----+-------
   1 |     0
   2 |     0
   3 |     0
   4 |     0
   5 |     0
  10 |     0
   3 |     1
   4 |     1
   5 |     1
   6 |     1
   7 |     1
  10 |     1
   3 |     2
   4 |     2
  10 |     2
(15 rows)

Extending this further, you can use the COUNT() aggregate function to
also return the number of times each unique combination of grp and depth
appears in the table. Yes, this is a sneak peek at the use of aggregate
functions! Keeners.

Counting unique column combinations with GROUP BY.
SELECT grp, depth, COUNT(grp)
  FROM permission.grp_perm_map
  GROUP BY grp, depth
  ORDER BY depth, grp;

 grp | depth | count
-----+-------+-------
   1 |     0 |     6
   2 |     0 |     2
   3 |     0 |    45
   4 |     0 |     3
   5 |     0 |     5
  10 |     0 |     1
   3 |     1 |     3
   4 |     1 |     4
   5 |     1 |     1
   6 |     1 |     9
   7 |     1 |     5
  10 |     1 |    10
   3 |     2 |    24
   4 |     2 |     8
  10 |     2 |     7
(15 rows)

You can use the WHERE clause to restrict the returned results before grouping
is applied to the results. The following query restricts the results to those
rows that have a depth of 0.

Using the WHERE clause with GROUP BY.

SELECT grp, COUNT(grp)
  FROM permission.grp_perm_map
  WHERE depth = 0
  GROUP BY grp
  ORDER BY 2 DESC
;

 grp | count
-----+-------
   3 |    45
   1 |     6
   5 |     5
   4 |     3
   2 |     2
  10 |     1
(6 rows)

To restrict results after grouping has been applied to the rows, use the
HAVING clause; this is typically used to restrict results based on
a comparison to the value returned by an aggregate function. For example,
the following query restricts the returned rows to those that have more than
5 occurrences of the same value for grp in the table.

GROUP BY restricted by a HAVING clause.

SELECT grp, COUNT(grp)
  FROM permission.grp_perm_map
  GROUP BY grp
  HAVING COUNT(grp) > 5
;

 grp | count
-----+-------
   6 |     9
   4 |    15
   5 |     6
   1 |     6
   3 |    72
  10 |    18
(6 rows)

Eliminating duplicate results with the DISTINCT keyword

GROUP BY is one way of eliminating duplicate results from the rows returned
by your query. The DISTINCT keyword serves only to remove duplicate rows
from the results of your query; however, it works, and it is easy - so if
you just want a quick list of the unique set of values for a column or set
of columns, the DISTINCT keyword might be appropriate.
On the other hand, if you are getting duplicate rows back when you don’t expect
them, then applying the DISTINCT keyword might be a sign that you are
papering over a real problem.

Returning unique results of multiple columns with DISTINCT.

SELECT DISTINCT grp, depth
  FROM permission.grp_perm_map
  ORDER BY depth, grp
;

 grp | depth
-----+-------
   1 |     0
   2 |     0
   3 |     0
   4 |     0
   5 |     0
  10 |     0
   3 |     1
   4 |     1
   5 |     1
   6 |     1
   7 |     1
  10 |     1
   3 |     2
   4 |     2
  10 |     2
(15 rows)

Paging through results with the LIMIT and OFFSET clauses

The LIMIT clause restricts the total number of rows returned from your query
and is useful if you just want to list a subset of a large number of rows. For
example, in the following query we list the five most frequently used
circulation modifiers:

Using the LIMIT clause to restrict results.

SELECT circ_modifier, COUNT(circ_modifier)
  FROM asset.copy
  GROUP BY circ_modifier
  ORDER BY 2 DESC
  LIMIT 5
;

 circ_modifier | count
---------------+--------
 CIRC          | 741995
 BOOK          | 636199
 SER           | 265906
 DOC           | 191598
 LAW MONO      | 126627
(5 rows)

When you use the LIMIT clause to restrict the total number of rows returned
by your query, you can also use the OFFSET clause to determine which subset
of the rows will be returned. The use of the OFFSET clause assumes that
you’ve used the ORDER BY clause to impose order on the results.
In the following example, we use the OFFSET clause to get results 6 through
10 from the same query that we previously executed.

Using the OFFSET clause to return a specific subset of rows.
SELECT circ_modifier, COUNT(circ_modifier)
  FROM asset.copy
  GROUP BY circ_modifier
  ORDER BY 2 DESC
  LIMIT 5
  OFFSET 5
;

 circ_modifier | count
---------------+--------
 LAW SERIAL    | 102758
 DOCUMENTS     |  86215
 BOOK_WEB      |  63786
 MFORM SER     |  39917
 REF           |  34380
(5 rows)

Advanced SQL queries

Transforming column values with functions

PostgreSQL includes many built-in functions for manipulating column data.
You can also create your own functions (and Evergreen does make use of
many custom functions). There are two types of functions used in
databases: scalar functions and aggregate functions.

Scalar functions

Scalar functions transform each value of the target column. If your query
would return 50 values for a column, and you modify it to apply a scalar
function to the values returned for that column, it will still return 50
values. For example, the UPPER() function, used to convert text values to
upper-case, modifies the results in the following set of queries:

Using the UPPER() scalar function to convert text values to upper-case.
-- First, without the UPPER() function for comparison
SELECT shortname, name
  FROM actor.org_unit
  WHERE id < 4
;

 shortname |         name
-----------+-----------------------
 CONS      | Example Consortium
 SYS1      | Example System 1
 SYS2      | Example System 2
(3 rows)

-- Now apply the UPPER() function to the name column
SELECT shortname, UPPER(name)
  FROM actor.org_unit
  WHERE id < 4
;

 shortname |       upper
-----------+--------------------
 CONS      | EXAMPLE CONSORTIUM
 SYS1      | EXAMPLE SYSTEM 1
 SYS2      | EXAMPLE SYSTEM 2
(3 rows)

There are so many scalar functions in PostgreSQL that we cannot cover them
all here, but we can list some of the most commonly used functions:
• || - concatenates two text values together
• COALESCE() - returns the first non-NULL value from the list of arguments
• LOWER() - returns a text value converted to lower-case
• REPLACE() - returns a text value after replacing all occurrences of a given
  text value with a different text value
• REGEXP_REPLACE() - returns a text value after being transformed by a regular
  expression
• UPPER() - returns a text value converted to upper-case

For a complete list of scalar functions, see
the PostgreSQL function documentation.

Aggregate functions

Aggregate functions return a single value computed from the complete set of
values returned for the specified column.
• AVG()
• COUNT()
• MAX()
• MIN()
• SUM()

Sub-selects

A sub-select is the technique of using the results of one query to feed
into another query. You can, for example, return a set of values from
one column in a SELECT statement to be used to satisfy the IN() condition
of another SELECT statement; or you could return the MAX() value of a
column in a SELECT statement to match the = condition of another SELECT
statement.
For example, in the following query we use a sub-select to restrict the copies
returned by the main SELECT statement to only those locations that have an
opac_visible value of TRUE:

Sub-select example.

SELECT call_number
  FROM asset.copy
  WHERE deleted IS FALSE
    AND location IN (
      SELECT id
        FROM asset.copy_location
        WHERE opac_visible IS TRUE
    )
;

Sub-selects can be an approachable way of breaking down a problem that
requires matching values between different tables, and often result in
a clearly expressed solution to a problem. However, if you start writing
sub-selects within sub-selects, you should consider tackling the problem
with joins instead.

Joins

Joins enable you to access the values from multiple tables in your query
results and comparison operators. For example, joins are what enable you to
relate a bibliographic record to a barcoded copy via the biblio.record_entry,
asset.call_number, and asset.copy tables. In this section, we discuss the
most common kind of join, the inner join, as well as the less common outer join
and some set operations which can compare and contrast the values returned by
separate queries.
When we talk about joins, we are going to talk about the left-hand table and
the right-hand table that participate in the join. Every join brings together
just two tables - but you can use an unlimited (for our purposes) number
of joins in a single SQL statement. Each time you use a join, you effectively
create a new table, so when you add a second join clause to a statement,
table 1 and table 2 (which were the left-hand table and the right-hand table
for the first join) now act as a merged left-hand table and the new table
in the second join clause is the right-hand table.
Clear as mud? Okay, let’s look at some examples.
Inner joins

An inner join returns all of the columns from the left-hand table in the join
with all of the columns from the right-hand table in the join that match a
condition in the ON clause. Typically, you use the = operator to match the
foreign key of the left-hand table with the primary key of the right-hand
table to follow the natural relationship between the tables.
In the following example, we return all of the columns from the actor.usr and
actor.org_unit tables, joined on the relationship between the user’s home
library and the library’s ID. Notice in the results that some columns, like
id and mailing_address, appear twice; this is because both the actor.usr
and actor.org_unit tables include columns with these names. This is also why
we have to fully qualify the column names in our queries with the schema and
table names.

A simple inner join.

SELECT *
  FROM actor.usr
  INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id
  WHERE actor.org_unit.shortname = 'CONS'
;

-[ RECORD 1 ]------------------+---------------------------------
id                             | 1
card                           | 1
profile                        | 1
usrname                        | admin
email                          |
...
mailing_address                |
billing_address                |
home_ou                        | 1
...
claims_never_checked_out_count | 0
id                             | 1
parent_ou                      |
ou_type                        | 1
ill_address                    | 1
holds_address                  | 1
mailing_address                | 1
billing_address                | 1
shortname                      | CONS
name                           | Example Consortium
email                          |
phone                          |
opac_visible                   | t
fiscal_calendar                | 1

Of course, you do not have to return every column from the joined tables;
you can (and should) continue to specify only the columns that you want to
return.
In the following example, we count the number of borrowers for
every user profile in a given library by joining the permission.grp_tree
table, where profiles are defined, against the actor.usr table, and then
joining the actor.org_unit table to give us access to the user’s home
library:

Borrower Count by Profile (Adult, Child, etc)/Library.

SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name)
  FROM actor.usr
  INNER JOIN permission.grp_tree
    ON actor.usr.profile = permission.grp_tree.id
  INNER JOIN actor.org_unit
    ON actor.org_unit.id = actor.usr.home_ou
  WHERE actor.usr.deleted IS FALSE
  GROUP BY permission.grp_tree.name, actor.org_unit.name
  ORDER BY actor.org_unit.name, permission.grp_tree.name
;

 name  |        name        | count
-------+--------------------+-------
 Users | Example Consortium |     1
(1 row)

Aliases

So far we have been fully qualifying all of our table names and column names to
prevent any confusion. This quickly gets tiring with lengthy qualified
table names like permission.grp_tree, so the SQL syntax enables us to assign
aliases to table names and column names. When you define an alias for a table
name, you can access its columns throughout the rest of the statement by simply
appending the column name to the alias with a period; for example, if you assign
the alias au to the actor.usr table, you can access the actor.usr.id
column through the alias as au.id.
The formal syntax for declaring an alias for a column is to follow the column
name in the result columns clause with AS alias. To declare an alias for a table name,
follow the table name in the FROM clause (including any JOIN statements) with
AS alias. However, the AS keyword is optional for tables (and columns as
of PostgreSQL 8.4), and in practice most SQL statements leave it out.
For example, we can write the previous INNER JOIN statement example using
aliases instead of fully-qualified identifiers:

Borrower Count by Profile (using aliases).

SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count"
  FROM actor.usr au
  INNER JOIN permission.grp_tree pgt
    ON au.profile = pgt.id
  INNER JOIN actor.org_unit aou
    ON aou.id = au.home_ou
  WHERE au.deleted IS FALSE
  GROUP BY pgt.name, aou.name
  ORDER BY aou.name, pgt.name
;

 Profile |      Library       | Count
---------+--------------------+-------
 Users   | Example Consortium |     1
(1 row)

A nice side effect of declaring an alias for your columns is that the alias
is used as the column header in the results table. The previous version of
the query, which didn’t use aliased column names, had two columns named
name; this version of the query with aliases results in a clearer
categorization.

Outer joins

An outer join returns all of the rows from one or both of the tables
participating in the join.
• For a LEFT OUTER JOIN, the join returns all of the rows from the left-hand
  table and the rows matching the join condition from the right-hand table, with
  NULL values for the rows with no match in the right-hand table.
• A RIGHT OUTER JOIN behaves in the same way as a LEFT OUTER JOIN, with the
  exception that all rows are returned from the right-hand table participating in
  the join.
• For a FULL OUTER JOIN, the join returns all the rows from both the left-hand
  and right-hand tables, with NULL values for the rows with no match in either
  the left-hand or right-hand table.

Base tables for the OUTER JOIN examples.
SELECT * FROM aaa;

 id | stuff
----+-------
  1 | one
  2 | two
  3 | three
  4 | four
  5 | five
(5 rows)

SELECT * FROM bbb;

 id | stuff |   foo
----+-------+----------
  1 | one   | oneone
  2 | two   | twotwo
  5 | five  | fivefive
  6 | six   | sixsix
(4 rows)

Example of a LEFT OUTER JOIN.

SELECT * FROM aaa
  LEFT OUTER JOIN bbb ON aaa.id = bbb.id
;
 id | stuff | id | stuff |   foo
----+-------+----+-------+----------
  1 | one   |  1 | one   | oneone
  2 | two   |  2 | two   | twotwo
  3 | three |    |       |
  4 | four  |    |       |
  5 | five  |  5 | five  | fivefive
(5 rows)

Example of a RIGHT OUTER JOIN.

SELECT * FROM aaa
  RIGHT OUTER JOIN bbb ON aaa.id = bbb.id
;
 id | stuff | id | stuff |   foo
----+-------+----+-------+----------
  1 | one   |  1 | one   | oneone
  2 | two   |  2 | two   | twotwo
  5 | five  |  5 | five  | fivefive
    |       |  6 | six   | sixsix
(4 rows)

Example of a FULL OUTER JOIN.

SELECT * FROM aaa
  FULL OUTER JOIN bbb ON aaa.id = bbb.id
;
 id | stuff | id | stuff |   foo
----+-------+----+-------+----------
  1 | one   |  1 | one   | oneone
  2 | two   |  2 | two   | twotwo
  3 | three |    |       |
  4 | four  |    |       |
  5 | five  |  5 | five  | fivefive
    |       |  6 | six   | sixsix
(6 rows)

Self joins

It is possible to join a table to itself; when you do, you must use
aliases to disambiguate the references to the table.

Set operations

Relational databases are effectively just an efficient mechanism for
manipulating sets of values; they are implementations of set theory. There are
three operators for sets (tables) in which each set must have the same number
of columns with compatible data types: the union, intersection, and difference
operators.

Base tables for the set operation examples.
SELECT * FROM aaa;

 id | stuff
----+-------
  1 | one
  2 | two
  3 | three
  4 | four
  5 | five
(5 rows)

SELECT * FROM bbb;

 id | stuff |   foo
----+-------+----------
  1 | one   | oneone
  2 | two   | twotwo
  5 | five  | fivefive
  6 | six   | sixsix
(4 rows)

Union

The UNION operator returns the distinct set of rows that are members of
either or both of the left-hand and right-hand tables. The UNION operator
does not return any duplicate rows. To return duplicate rows, use the
UNION ALL operator.

Example of a UNION set operation.

-- The parentheses are not required, but are intended to help
-- illustrate the sets participating in the set operation
(
  SELECT id, stuff
    FROM aaa
)
UNION
(
  SELECT id, stuff
    FROM bbb
)
ORDER BY 1
;

 id | stuff
----+-------
  1 | one
  2 | two
  3 | three
  4 | four
  5 | five
  6 | six
(6 rows)

Intersection

The INTERSECT operator returns the distinct set of rows that are common to
both the left-hand and right-hand tables. To return duplicate rows, use the
INTERSECT ALL operator.

Example of an INTERSECT set operation.

(
  SELECT id, stuff
    FROM aaa
)
INTERSECT
(
  SELECT id, stuff
    FROM bbb
)
ORDER BY 1
;

 id | stuff
----+-------
  1 | one
  2 | two
  5 | five
(3 rows)

Difference

The EXCEPT operator returns the rows in the left-hand table that do not
exist in the right-hand table. You are effectively subtracting the common
rows from the left-hand table.

Example of an EXCEPT set operation.
+ +( + SELECT id, stuff + FROM aaa +) +EXCEPT +( + SELECT id, stuff + FROM bbb +) +ORDER BY 1 +; + + id | stuff +----+------- + 3 | three + 4 | four +(2 rows) + +-- Order matters: switch the left-hand and right-hand tables +-- and you get a different result +( + SELECT id, stuff + FROM bbb +) +EXCEPT +( + SELECT id, stuff + FROM aaa +) +ORDER BY 1 +; + + id | stuff +----+------- + 6 | six +(1 row) + + + + + ViewsViews + + A view is a persistent SELECT statement that acts like a read-only table. + To create a view, issue the CREATE VIEW statement, giving the view a name + and a SELECT statement on which the view is built. + The following example creates a view based on our borrower profile count: + Creating a view.  + +CREATE VIEW actor.borrower_profile_count AS + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + + + When you subsequently select results from the view, you can apply additional + WHERE clauses to filter the results, or ORDER BY clauses to change the + order of the returned rows. In the following examples, we issue a simple + SELECT * statement to show that the default results are returned in the + same order from the view as the equivalent SELECT statement would be returned. + Then we issue a SELECT statement with a WHERE clause to further filter the + results. + Selecting results from a view.  + +SELECT * FROM actor.borrower_profile_count; + + Profile | Library | Count +----------------------------+----------------------------+------- + Faculty | University Library | 208 + Graduate | University Library | 16 + Patrons | University Library | 62 +... 
+
+-- You can still filter your results with WHERE clauses
+SELECT *
+    FROM actor.borrower_profile_count
+    WHERE "Profile" = 'Faculty';
+
+ Profile |          Library           | Count
+---------+----------------------------+-------
+ Faculty | University Library         |   208
+ Faculty | College Library            |    64
+ Faculty | College Library 2          |   102
+ Faculty | University Library 2       |   776
+(4 rows)
+
+
+Inheritance
+
+PostgreSQL supports table inheritance: that is, a child table inherits its
+base definition from a parent table, but can add additional columns to its
+own definition. The data from any child tables is visible in queries against
+the parent table.
+Evergreen uses table inheritance in several areas:
+  * In the Vandelay MARC batch importer / exporter, Evergreen defines base
+    tables for generic queues and queued records, from which the authority
+    record and bibliographic record child tables inherit.
+  * Billable transactions are based on the money.billable_xact table;
+    child tables include action.circulation for circulation transactions
+    and money.grocery for general bills.
+  * Payments are based on the money.payment table; its child table is
+    money.bnm_payment (for brick-and-mortar payments), which in turn has child
+    tables of money.forgive_payment, money.work_payment, money.credit_payment,
+    money.goods_payment, and money.bnm_desk_payment. The
+    money.bnm_desk_payment table in turn has child tables of money.cash_payment,
+    money.check_payment, and money.credit_card_payment.
+  * Transits are based on the action.transit_copy table, which has a child
+    table of action.hold_transit_copy for transits initiated by holds.
+  * Generic acquisition line items are defined by the
+    acq.lineitem_attr_definition table, which in turn has a number of child
+    tables to define MARC attributes, generated attributes, user attributes, and
+    provider attributes.
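Table inheritance is easiest to see in isolation. The following is a minimal sketch using made-up table names (parent_xact and child_circ are hypothetical, not part of the Evergreen schema):

```sql
-- Hypothetical tables for illustration; not part of the Evergreen schema
CREATE TABLE parent_xact (
    id    SERIAL PRIMARY KEY,
    total NUMERIC NOT NULL
);

-- child_circ inherits id and total, and adds its own column
CREATE TABLE child_circ (
    due_date TIMESTAMP WITH TIME ZONE
) INHERITS (parent_xact);

INSERT INTO parent_xact (total) VALUES (1.00);
INSERT INTO child_circ (total, due_date) VALUES (2.50, NOW());

-- Both rows are visible through the parent table
SELECT id, total FROM parent_xact;

-- Use ONLY to restrict the query to rows of the parent table itself
SELECT id, total FROM ONLY parent_xact;
```

This mirrors how a query against money.billable_xact also returns the rows created through child tables such as action.circulation and money.grocery.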
+
+
+Understanding query performance with EXPLAIN
+
+Some queries run for a long, long time. This can be the result of a poorly
+written query -- a query with a join condition that joins every
+row in the biblio.record_entry table with every row in the metabib.full_rec
+view would consume a massive amount of memory and disk space and CPU time -- or
+a symptom of a schema that needs some additional indexes. PostgreSQL provides
+the EXPLAIN tool to estimate how long it will take to run a given query and
+show you the query plan (how it plans to retrieve the results from the
+database).
+To generate the query plan without actually running the statement, simply
+prepend the EXPLAIN keyword to your query. In the following example, we
+generate the query plan for the poorly written query that would join every
+row in the biblio.record_entry table with every row in the metabib.full_rec
+view:
+Query plan for a terrible query. 
+
+EXPLAIN SELECT *
+  FROM biblio.record_entry
+  FULL OUTER JOIN metabib.full_rec ON 1=1
+;
+
+                                   QUERY PLAN
+-------------------------------------------------------------------------------//
+ Merge Full Join  (cost=0.00..4959156437783.60 rows=132415734100864 width=1379)
+   ->  Seq Scan on record_entry  (cost=0.00..400634.16 rows=2013416 width=1292)
+   ->  Seq Scan on real_full_rec  (cost=0.00..1640972.04 rows=65766704 width=87)
+(3 rows)
+
+This query plan shows that the query would return 132415734100864 rows, and it
+plans to accomplish what you asked for by sequentially scanning (Seq Scan)
+every row in each of the tables participating in the join.
+In the following example, we have realized our mistake in joining every row of
+the left-hand table with every row in the right-hand table and take the saner
+approach of using an INNER JOIN where the join condition is on the record ID.
+Query plan for a less terrible query. 
+ +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; + QUERY PLAN +----------------------------------------------------------------------------------------// + Hash Join (cost=750229.86..5829273.98 rows=65766704 width=1379) + Hash Cond: (real_full_rec.record = bre.id) + -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) + -> Hash (cost=400634.16..400634.16 rows=2013416 width=1292) + -> Seq Scan on record_entry bre (cost=0.00..400634.16 rows=2013416 width=1292) +(5 rows) + + + This time, we will return 65766704 rows - still way too many rows. We forgot + to include a WHERE clause to limit the results to something meaningful. In + the following example, we will limit the results to deleted records that were + modified in the last month. + Query plan for a realistic query.  + +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id + WHERE bre.deleted IS TRUE + AND DATE_TRUNC('MONTH', bre.edit_date) > + DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) +; + + QUERY PLAN +----------------------------------------------------------------------------------------// + Hash Join (cost=5058.86..2306218.81 rows=201669 width=1379) + Hash Cond: (real_full_rec.record = bre.id) + -> Seq Scan on real_full_rec (cost=0.00..1640972.04 rows=65766704 width=87) + -> Hash (cost=4981.69..4981.69 rows=6174 width=1292) + -> Index Scan using biblio_record_entry_deleted on record_entry bre + (cost=0.00..4981.69 rows=6174 width=1292) + Index Cond: (deleted = true) + Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) + > date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) +(7 rows) + + + We can see that the number of rows returned is now only 201669; that’s + something we can work with. Also, the overall cost of the query is 2306218, + compared to 4959156437783 in the original query. 
The Index Scan tells us + that the query planner will use the index that was defined on the deleted + column to avoid having to check every row in the biblio.record_entry table. + However, we are still running a sequential scan over the + metabib.real_full_rec table (the table on which the metabib.full_rec + view is based). Given that linking from the bibliographic records to the + flattened MARC subfields is a fairly common operation, we could create a + new index and see if that speeds up our query plan. + Query plan with optimized access via a new index.  + +-- This index will take a long time to create on a large database +-- of bibliographic records +CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); + +EXPLAIN SELECT * + FROM biblio.record_entry bre + INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id + WHERE bre.deleted IS TRUE + AND DATE_TRUNC('MONTH', bre.edit_date) > + DATE_TRUNC ('MONTH', NOW() - '1 MONTH'::INTERVAL) +; + + QUERY PLAN +----------------------------------------------------------------------------------------// + Nested Loop (cost=0.00..1558330.46 rows=201669 width=1379) + -> Index Scan using biblio_record_entry_deleted on record_entry bre + (cost=0.00..4981.69 rows=6174 width=1292) + Index Cond: (deleted = true) + Filter: ((deleted IS TRUE) AND (date_trunc('MONTH'::text, edit_date) > + date_trunc('MONTH'::text, (now() - '1 mon'::interval)))) + -> Index Scan using bib_record_idx on real_full_rec + (cost=0.00..240.89 rows=850 width=87) + Index Cond: (real_full_rec.record = bre.id) +(6 rows) + + + We can see that the resulting number of rows is still the same (201669), but + the execution estimate has dropped to 1558330 because the query planner can + use the new index (bib_record_idx) rather than scanning the entire table. + Success! 
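One caveat worth knowing: the numbers EXPLAIN reports are only the planner's estimates. PostgreSQL's EXPLAIN ANALYZE variant actually executes the statement and reports the observed row counts and timing alongside the estimates, which is the surest way to confirm that a new index is really being used. A sketch, reusing the query from above:

```sql
-- EXPLAIN ANALYZE runs the statement for real, so for anything
-- other than a plain SELECT, wrap it in a transaction you can roll back
BEGIN;

EXPLAIN ANALYZE SELECT *
  FROM biblio.record_entry bre
  INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id
  WHERE bre.deleted IS TRUE;

ROLLBACK;
```

Comparing the "rows=" estimates against the "actual ... rows=" figures in the output also tells you whether the planner's statistics are stale.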
+
+While indexes can significantly speed up read access to tables for common
+filtering conditions, every time a row is created or updated the corresponding
+indexes also need to be maintained - which can decrease the performance of
+writes to the database. Be careful to keep the balance of read performance
+versus write performance in mind if you plan to create custom indexes in your
+Evergreen database.
+
+Inserting, updating, and deleting data
+
+Inserting data
+
+To insert one or more rows into a table, use the INSERT statement to identify
+the target table and list the columns in the table for which you are going to
+provide values for each row. If you do not list one or more columns contained
+in the table, the database will automatically supply a NULL value for those
+columns. The values for each row follow the VALUES clause and are grouped in
+parentheses and delimited by commas. Each row, in turn, is delimited by commas
+(this multiple-row syntax requires PostgreSQL 8.2 or higher).
+For example, to insert two rows into the permission.usr_grp_map table:
+Inserting rows into the permission.usr_grp_map table. 
+
+INSERT INTO permission.usr_grp_map (usr, grp)
+  VALUES (2, 10), (2, 4)
+;
+
+Of course, as with the rest of SQL, you can replace individual column values
+with one or more sub-selects:
+Inserting rows using sub-selects instead of integers. 
+ +INSERT INTO permission.usr_grp_map (usr, grp) + VALUES ( + (SELECT id FROM actor.usr + WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), + (SELECT id FROM permission.grp_tree + WHERE name = 'Local System Administrator') + ), ( + (SELECT id FROM actor.usr + WHERE family_name = 'Scott' AND first_given_name = 'Daniel'), + (SELECT id FROM permission.grp_tree + WHERE name = 'Circulator') + ) +; + + + + Inserting data using a SELECT statementInserting data using a SELECT statement + + Sometimes you want to insert a bulk set of data into a new table based on + a query result. Rather than a VALUES clause, you can use a SELECT + statement to insert one or more rows matching the column definitions. This + is a good time to point out that you can include explicit values, instead + of just column identifiers, in the return columns of the SELECT statement. + The explicit values are returned in every row of the result set. + In the following example, we insert 6 rows into the permission.usr_grp_map + table; each row will have a usr column value of 1, with varying values for + the grp column value based on the id column values returned from + permission.grp_tree: + Inserting rows via a SELECT statement.  + +INSERT INTO permission.usr_grp_map (usr, grp) + SELECT 1, id + FROM permission.grp_tree + WHERE id > 2 +; + +INSERT 0 6 + + + + Deleting rowsDeleting rows + + Deleting data from a table is normally fairly easy. To delete rows from a table, + issue a DELETE statement identifying the table from which you want to delete + rows and a WHERE clause identifying the row or rows that should be deleted. + In the following example, we delete all of the rows from the + permission.grp_perm_map table where the permission maps to + UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators: + Deleting rows from a table.  
+
+DELETE FROM permission.grp_perm_map
+  WHERE grp IN (
+    SELECT id
+      FROM permission.grp_tree
+      WHERE name != 'Local System Administrator'
+  ) AND perm = (
+    SELECT id
+      FROM permission.perm_list
+      WHERE code = 'UPDATE_ORG_UNIT_CLOSING'
+  )
+;
+
+There are two main reasons that a DELETE statement may not actually
+delete rows from a table, even when the rows meet the conditional clause.
+  1. If the row contains a value that is the target of a relational constraint,
+     for example, if another table has a foreign key pointing at your target
+     table, you will be prevented from deleting a row with a value corresponding
+     to a row in the dependent table.
+  2. If the table has a rule that substitutes a different action for a DELETE
+     statement, the deletion will not take place. In Evergreen it is common for a
+     table to have a rule that substitutes the action of setting a deleted column
+     to TRUE. For example, if a book is discarded, deleting the row representing
+     the copy from the asset.copy table would severely affect circulation
+     statistics, bills, borrowing histories, and their corresponding tables in the
+     database that have foreign keys pointing at the asset.copy table
+     (action.circulation and money.billing and its children respectively).
+     Instead, the deleted column value is set to TRUE and Evergreen’s application
+     logic skips over these rows in most cases.
+
+Updating rows
+
+To update rows in a table, issue an UPDATE statement identifying the table
+you want to update, the column or columns that you want to set with their
+respective new values, and (optionally) a WHERE clause identifying the row or
+rows that should be updated.
+Following is the syntax for the UPDATE statement:
+UPDATE [table-name]
+  SET [column] = [new-value]
+  WHERE [condition]
+;
+
+Query requests
+
+The following queries were requested by Bibliomation, but might be reusable
+by other libraries.
+ Monthly circulation stats by collection code / libraryMonthly circulation stats by collection code / library + + Monthly Circulation Stats by Collection Code/Library.  + +SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" + FROM asset.copy ac + INNER JOIN asset.copy_location acl ON ac.location = acl.id + INNER JOIN action.circulation acirc ON acirc.target_copy = ac.id + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP BY aou.name, acl.name + ORDER BY aou.name, acl.name, 1 +; + + + + Monthly circulation stats by borrower stat / libraryMonthly circulation stats by borrower stat / library + + Monthly Circulation Stats by Borrower Stat/Library.  + +SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" + FROM action.circulation acirc + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + INNER JOIN actor.stat_cat_entry_usr_map asceum ON asceum.target_usr = acirc.usr + INNER JOIN actor.stat_cat astat ON asceum.stat_cat = astat.id + WHERE DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND astat.name = 'Preferred language' + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP BY aou.name, asceum.stat_cat_entry + ORDER BY aou.name, asceum.stat_cat_entry, 1 +; + + + + Monthly intralibrary loan stats by libraryMonthly intralibrary loan stats by library + + Monthly Intralibrary Loan Stats by Library.  
+ +SELECT aou.name AS "Library", COUNT(acirc.id) + FROM action.circulation acirc + INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id + INNER JOIN asset.copy ac ON acirc.target_copy = ac.id + INNER JOIN asset.call_number acn ON ac.call_number = acn.id + WHERE acirc.circ_lib != acn.owning_lib + AND DATE_TRUNC('MONTH', acirc.create_time) = DATE_TRUNC('MONTH', NOW() - INTERVAL '3 month') + AND acirc.desk_renewal IS FALSE + AND acirc.opac_renewal IS FALSE + AND acirc.phone_renewal IS FALSE + GROUP by aou.name + ORDER BY aou.name, 2 +; + + + + Monthly borrowers added by profile (adult, child, etc) / libraryMonthly borrowers added by profile (adult, child, etc) / library + + Monthly Borrowers Added by Profile (Adult, Child, etc)/Library.  + +SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + AND DATE_TRUNC('MONTH', au.create_date) = DATE_TRUNC('MONTH', NOW() - '3 months'::interval) + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + + + + Borrower count by profile (adult, child, etc) / libraryBorrower count by profile (adult, child, etc) / library + + Borrower Count by Profile (Adult, Child, etc)/Library.  + +SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" + FROM actor.usr au + INNER JOIN permission.grp_tree pgt + ON au.profile = pgt.id + INNER JOIN actor.org_unit aou + ON aou.id = au.home_ou + WHERE au.deleted IS FALSE + GROUP BY pgt.name, aou.name + ORDER BY aou.name, pgt.name +; + + + + Monthly items added by collection / libraryMonthly items added by collection / library + + We define a “collection” as a shelving location in Evergreen. + Monthly Items Added by Collection/Library.  
+ +SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) + FROM actor.org_unit aou + INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id + INNER JOIN asset.copy ac ON ac.call_number = acn.id + INNER JOIN asset.copy_location acl ON ac.location = acl.id + WHERE ac.deleted IS FALSE + AND acn.deleted IS FALSE + AND DATE_TRUNC('MONTH', ac.create_date) = DATE_TRUNC('MONTH', NOW() - '1 month'::interval) + GROUP BY aou.name, acl.name + ORDER BY aou.name, acl.name +; + + + + Hold purchase alert by libraryHold purchase alert by library + + in the following set of queries, we bring together the active title, volume, + and copy holds and display those that have more than a certain number of holds + per title. The goal is to UNION ALL the three queries, then group by the + bibliographic record ID and display the title / author information for those + records that have more than a given threshold of holds. + Hold Purchase Alert by Library.  + +-- Title holds +SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) + FROM + ( + ( + SELECT target, request_lib + FROM action.hold_request + WHERE hold_type = 'T' + AND fulfillment_time IS NULL + AND cancel_time IS NULL + ) + UNION ALL + -- Volume holds + ( + SELECT bre.id, request_lib + FROM action.hold_request ahr + INNER JOIN asset.call_number acn ON ahr.target = acn.id + INNER JOIN biblio.record_entry bre ON acn.record = bre.id + WHERE ahr.hold_type = 'V' + AND ahr.fulfillment_time IS NULL + AND ahr.cancel_time IS NULL + ) + UNION ALL + -- Copy holds + ( + SELECT bre.id, request_lib + FROM action.hold_request ahr + INNER JOIN asset.copy ac ON ahr.target = ac.id + INNER JOIN asset.call_number acn ON ac.call_number = acn.id + INNER JOIN biblio.record_entry bre ON acn.record = bre.id + WHERE ahr.hold_type = 'C' + AND ahr.fulfillment_time IS NULL + AND ahr.cancel_time IS NULL + ) + ) AS all_holds(bib_id, request_lib) + INNER JOIN reporter.materialized_simple_record rmsr + INNER JOIN 
actor.org_unit aou ON aou.id = all_holds.request_lib
+    ON rmsr.id = all_holds.bib_id
+  GROUP BY all_holds.bib_id, aou.name, rmsr.id, rmsr.title, rmsr.author
+  HAVING COUNT(all_holds.bib_id) > 2
+  ORDER BY aou.name
+;
+
+
+Update borrower records with a different home library
+
+In this example, the library has opened a new branch in a growing area,
+and wants to reassign the home library for the patrons in the vicinity of
+the new branch to the new branch. To accomplish this, we create a staging table
+that holds a set of city names and the corresponding branch shortname for the
+home library for each city.
+Then we issue an UPDATE statement to set the home library for patrons with a
+physical address with a city that matches the city names in our staging table.
+Update borrower records with a different home library. 
+
+CREATE SCHEMA staging;
+CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT,
+  FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname));
+INSERT INTO staging.city_home_ou_map (city, ou_shortname)
+  VALUES ('Southbury', 'BR1'), ('Middlebury', 'BR2'), ('Hartford', 'BR3');
+BEGIN;
+
+UPDATE actor.usr au SET home_ou = COALESCE(
+  (
+    SELECT aou.id
+      FROM actor.org_unit aou
+        INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
+        INNER JOIN actor.usr_address aua ON aua.city = schom.city
+      WHERE au.id = aua.usr
+      GROUP BY aou.id
+  ), home_ou)
+WHERE (
+  SELECT aou.id
+    FROM actor.org_unit aou
+      INNER JOIN staging.city_home_ou_map schom ON schom.ou_shortname = aou.shortname
+      INNER JOIN actor.usr_address aua ON aua.city = schom.city
+    WHERE au.id = aua.usr
+    GROUP BY aou.id
+) IS NOT NULL;
+
+
+Chapter 20. 
JSON Queries
+
+The json_query facility provides a way for client applications to query the
+database over the network. Instead of constructing its own SQL, the application
+encodes a query in the form of a JSON string and passes it to the json_query
+service. The json_query service then parses the JSON, constructs and executes
+the corresponding SQL, and returns the results to the client application.
+This arrangement enables the json_query service to act as a gatekeeper,
+protecting the database from potentially damaging SQL commands. In particular,
+the generated SQL is confined to SELECT statements, which will not change the
+contents of the database.
+In addition, the json_query service sometimes uses its knowledge of the
+database structure to supply column names and join conditions so that the
+client application doesn't have to.
+Nevertheless, the need to encode a query in a JSON string adds complications,
+because the client needs to know how to build the right JSON. JSON queries are
+also somewhat limiting -- they can't do all of the things that you can do with
+raw SQL.
+
+The IDL
+
+A JSON query does not refer to tables and columns. Instead, it refers to
+classes and fields, which the IDL maps to the corresponding database entities.
+The IDL (Interface Definition Language) is an XML file, typically
+/openils/conf/fm_IDL.xml. It maps each class to a table, view, or subquery,
+and each field to a column. It also includes information about foreign key
+relationships.
+(The IDL also defines virtual classes and virtual fields, which don't
+correspond to database entities. We won't discuss them here, because
+json_query ignores them.)
+When it first starts up, json_query loads a relevant subset of the IDL into
+memory. Thereafter, it consults its copy of the IDL whenever it needs to know
+about the database structure. It uses the IDL to validate the JSON queries,
+and to translate classes and fields to the corresponding tables and columns.
In some cases it uses the IDL to supply information that the queries don't
+provide.
+
+Definitions
+
+You should also be familiar with JSON. However, it is worth defining a couple
+of terms that have other meanings in other contexts:
+  • An "object" is a JSON object, i.e. a comma-separated list of name:value
+    pairs, enclosed in curly braces, like this:
+    { "a":"frobozz", "b":24, "c":null }
+  • An "array" is a JSON array, i.e. a comma-separated list of values,
+    enclosed in square brackets, like this:
+    [ "Goober", 629, null, false, "glub" ]
+
+The Examples
+
+The test_json_query utility generated the SQL for all of the sample queries in
+this tutorial. Newlines and indentation were then inserted manually for
+readability.
+All examples involve the actor.org_unit table, sometimes in combination with
+a few related tables. The queries themselves are designed to illustrate the
+syntax, not to do anything useful at the application level. For example, it's
+not meaningful to take the square root of an org_unit id, except to illustrate
+how to code a function call. The examples are like department store mannequins
+-- they have no brains, they're only for display.
+The simplest kind of query defines nothing but a FROM clause. For example:
+
+{
+    "from":"aou"
+}
+
+In this minimal example we select from only one table. Later we will see how
+to join multiple tables.
+Since we don't supply a SELECT clause, json_query constructs a default SELECT
+clause for us, including all the available columns.
The resulting SQL looks like this:
+
+SELECT
+    "aou".billing_address AS "billing_address",
+    "aou".holds_address AS "holds_address",
+    "aou".id AS "id",
+    "aou".ill_address AS "ill_address",
+    "aou".mailing_address AS "mailing_address",
+    "aou".name AS "name",
+    "aou".ou_type AS "ou_type",
+    "aou".parent_ou AS "parent_ou",
+    "aou".shortname AS "shortname",
+    "aou".email AS "email",
+    "aou".phone AS "phone",
+    "aou".opac_visible AS "opac_visible"
+FROM
+    actor.org_unit AS "aou" ;
+
+Default SELECT Clauses
+
+The default SELECT clause includes every column that the IDL defines as a
+non-virtual field for the class in question. If a column is present in the
+database but not defined in the IDL, json_query doesn't know about it. In the
+case of the example shown above, all the columns are defined in the IDL, so
+they all show up in the default SELECT clause.
+If the FROM clause joins two or more tables, the default SELECT clause
+includes columns only from the core table, not from any of the joined tables.
+The default SELECT clause has almost the same effect as "SELECT *", but not
+exactly. If you were to run "SELECT * FROM actor.org_unit" in psql, the output
+would include all the same columns as in the example above, but not in the
+same order. A default SELECT clause includes the columns in the order in which
+the IDL defines them, which may be different from the order in which the
+database defines them.
+In practice, the sequencing of columns in the SELECT clause is not
+significant. The result set is returned to the client program in the form of
+a data structure, which the client program can navigate however it chooses.
+
+Other Lessons
+
+There are other ways to get a default SELECT clause. However, default SELECT
+clauses are a distraction at this point, because most of the time you'll
+specify your own SELECT clause explicitly, as we will discuss later.
+ Let's consider some more important aspects of this simple example -- more important because they apply to more complex queries as well. + • + The entire JSON query is an object. In this simple case the object includes only one entry, for the FROM clause. Typically you'll also have entries + for the SELECT clause and the WHERE clause, and possibly for HAVING, ORDER BY, LIMIT, or OFFSET clauses. There is no separate entry for a GROUP BY clause, which you + can specify by other means. + • + Although all the other entries are optional, you must include an entry for the FROM clause. You cannot, for example, do a SELECT USER the way + you can in psql. + • + Every column is qualified by an alias for the table. This alias is always the class name for the table, as defined in the IDL. + • + Every column is aliased with the column name. There is a way to choose a different column alias (not shown here). + + + The SELECT ClauseThe SELECT Clause + + The following variation also produces a default SELECT clause: + +{ + "from":"aou", + "select": { + "aou":"*" + } +} + + ...and so does this one: + +{ + "select": { + "aou":null + }, + "from":"aou" +} + + While this syntax may not be terribly useful, it does illustrate the minimal structure of a SELECT clause in a JSON query: an entry in the outermost JSON object, + with a key of “select”. The value associated with this key is another JSON object, whose keys are class names. + (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) + Usually you don't want the default SELECT clause. Here's how to select only some of the columns: + +{ + "from":"aou", + "select": { + "aou":[ "id", "name" ] + } +} + + The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, + and a separate column list for each entry. 
+ The previous example results in the following SQL: + +SELECT + "aou".id AS "id", + "aou".name AS "name" +FROM + actor.org_unit AS "aou" ; + + + Fancier SELECT ClausesFancier SELECT Clauses + + The previous example featured an array of column names. More generally, it featured an array of field specifications, and one kind of field specification is a column name. + The other kind is a JSON object, with some combination of the following keys: + • + “column” -- the column name (required). + • + “alias” -- used to define a column alias, which otherwise defaults to the column name. + • + “aggregate” -- takes a value of true or false. Don't worry about this one yet. It concerns the use of GROUP BY clauses, which we will examine + later. + • + “transform” -- the name of an SQL function to be called. + • + “result_field” -- used with "transform"; specifies an output column of a function that returns multiple columns at a time. + • + “params” -- used with "transform"; provides a list of parameters for the function. They may be strings, numbers, or nulls. + + This example assigns a different column alias: + +{ + "from":"aou", + "select": { + "aou": [ + "id", + { "column":"name", "alias":"org_name" } + ] + } +} + +SELECT + "aou".id AS "id", + "aou".name AS "org_name" +FROM + actor.org_unit AS "aou" ; + + In this case, changing the column alias doesn't accomplish much. But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could + use different aliases to distinguish them. 
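As a preview of that situation, here is a sketch of what such a query might look like, assuming the json_query join syntax covered later in this chapter, in which the FROM entry becomes an object joining "aou" to "aout" and json_query supplies the join condition from the IDL (the exact generated SQL is our assumption, following the pattern of the examples above):

```json
{
    "from": { "aou":"aout" },
    "select": {
        "aou": [ "id", "name" ],
        "aout": [ { "column":"name", "alias":"type_name" } ]
    }
}
```

With distinct aliases, the two name columns arrive in the result set as "name" and "type_name" rather than colliding.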
+ The following example uses a function to raise a column to upper case: + +{ + "from":"aou", + "select": { + "aou": [ + "id", + { "column":"name", "transform":"upper" } + ] + } +} + +SELECT + "aou".id AS "id", + upper("aou".name ) AS "name" +FROM + actor.org_unit AS "aou" ; + + Here we take a substring of the name, using the params element to pass parameters: + + { + "from":"aou", + "select": { + "aou": [ + "id", { + "column":"name", + "transform":"substr", + "params":[ 3, 5 ] + } + ] + } + } + + SELECT + "aou".id AS "id", + substr("aou".name,'3','5' ) AS "name" + FROM + actor.org_unit AS "aou" ; + + The parameters specified with params are inserted after the applicable column (name in this case), + which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily + coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. + Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: + +{ + "from":"aou", + "select": { + "aou": [ + "id", { + "column":"name", + "transform":"frobozz", + "result_field":"zamzam" + } + ] + } +} + +SELECT + "aou".id AS "id", + (frobozz("aou".name ))."zamzam" AS "name" +FROM + actor.org_unit AS "aou" ; + + The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in + the database. + + Things You Can't DoThings You Can't Do + + You can do some things in a SELECT clause with raw SQL (with psql, for example) that you can't do with a JSON query. Some of them matter and some of them don't. + When you do a JOIN, you can't arrange the selected columns in any arbitrary sequence, because all of the columns from a given table must be grouped together. + This limitation doesn't matter. 
The results are returned in the form of a data structure, which the client program can navigate however it likes.

You can't select an arbitrary expression, such as "percentage / 100" or "last_name || ', ' || first_name". Most of the time this limitation doesn't matter either, because the client program can do these kinds of manipulations for itself. However, function calls may be a problem. You can't nest them, and you can't pass more than one column value to them (and it has to be the first parameter).

You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself.

You can't select a subquery. In raw SQL you can do something like the following:

SELECT
    id,
    name,
    (
        SELECT name
        FROM actor.org_unit_type AS aout
        WHERE aout.id = aou.ou_type
    ) AS type_name
FROM
    actor.org_unit AS aou;

This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so easy to solve.

The WHERE Clause

Most queries need a WHERE clause, as in this simple example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":"3"
    }
}

Like the SELECT clause, the WHERE clause gets its own entry in the top-level object of a JSON query. The key is “where”, and the associated value is either an object (as shown here) or an array (to be discussed a bit later). Each entry in the object is a separate condition.

In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on the right.

Here's the resulting SQL:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou = 3;

Like the SELECT clause, the generated WHERE clause qualifies each column name with the alias of the relevant table.
If you want to compare a column to NULL, put “null” (without quotation marks) to the right of the colon instead of a literal value. The resulting SQL will include “IS NULL” instead of an equals sign.

Other Kinds of Comparisons

Here's the same query (which generates the same SQL) without the special shortcut:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":{ "=":3 }
    }
}

We still have an entry whose key is the column name, but this time the associated value is another JSON object. It must contain exactly one entry, with the comparison operator on the left of the colon, and the value to be compared on the right.

The same syntax works for other kinds of comparison operators. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":{ ">":3 }
    }
}

...turns into:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou > 3 ;

The condition '“=”:null' turns into IS NULL. Any other operator used with “null” turns into IS NOT NULL.

You can use most of the comparison operators recognized by PostgreSQL:

    =    <>    !=
    <    >    <=    >=
    ~    ~*    !~    !~*
    like    ilike
    similar to

The only ones you can't use are “is distinct from” and “is not distinct from”.

Custom Comparisons

Here's a dirty little secret: json_query doesn't really pay much attention to the operator you supply. It merely checks to make sure that the operator doesn't contain any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception.

As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax.
Here's a contrived and rather silly example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":{ "<2+":3 }
    }
}

...which results in the following SQL:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou <2+ 3;

It's hard to come up with a realistic case where this hack would be useful, but it could happen.

Comparing One Column to Another

Here's how to put another column on the right hand side of a comparison:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "id": { ">": { "+aou":"parent_ou" } }
    }
}

This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single entry, whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column.

Here's the resulting SQL:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
(
    "aou".id > ( "aou".parent_ou )
);

The table alias must correspond to the appropriate table. Since json_query doesn't validate the choice of alias, it won't detect an invalid alias until it tries to execute the query. In this simple example there's only one table to choose from. The choice of alias is more important in a subquery or join.

The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of this usage to the section on joins.

Testing Boolean Columns

In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false:

SELECT
    id
FROM
    actor.org_unit
WHERE
    opac_visible = true;

In a JSON query this approach doesn't work.
If you try it, the "= true" test will turn into IS NULL. Don't do that. Instead, use a leading plus sign, as described in the preceding section, to treat the boolean column as a stand-alone condition:

{
    "from":"aou",
    "select": { "aou":[ "id" ] },
    "where": {
        "+aou":"opac_visible"
    }
}

Result:

SELECT
    "aou".id AS "id"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".opac_visible ;

If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. We will discuss the "-not" operator later, but here's a preview:

{
    "from":"aou",
    "select": { "aou":[ "id" ] },
    "where": {
        "-not": {
            "+aou":"opac_visible"
        }
    }
}

SELECT
    "aou".id AS "id"
FROM
    actor.org_unit AS "aou"
WHERE
    NOT ( "aou".opac_visible );

You can also compare a boolean column directly to a more complex condition:

{
    "from":"aou",
    "select": { "aou":[ "id" ] },
    "where": {
        "opac_visible": {
            "=": { "parent_ou":{ ">":3 } }
        }
    }
}

Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works:

SELECT
    "aou".id AS "id"
FROM
    actor.org_unit AS "aou"
WHERE
    (
        "aou".opac_visible = ( "aou".parent_ou > 3 )
    );

In this case we compare the boolean column to a single simple condition. However you can include additional complications -- multiple conditions, IN lists, BETWEEN clauses, and other features as described below.

Multiple Conditions

If you need multiple conditions, just add them to the "where" object, separated by commas:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":{ ">":3 },
        "id":{ "<>":7 }
    }
}

The generated SQL connects the conditions with AND:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou > 3
    AND "aou".id <> 7;

Later we will see how to use OR instead of AND.
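The shortcut forms described earlier can be mixed freely in the same list of conditions. As a sketch of our own (not from the original text), here we combine the null shortcut with an ordinary comparison:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou":null,
        "id":{ "<>":7 }
    }
}

Given the IS NULL translation described earlier, the generated SQL should look something like:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou IS NULL
    AND "aou".id <> 7;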
Using Arrays

Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this:

SELECT
    id,
    name
FROM
    actor.org_unit
WHERE
    parent_ou > 3
    AND parent_ou <> 7;

You might try a WHERE clause like this:

"where": {
    "parent_ou":{ ">":3 },
    "parent_ou":{ "<>":7 }
}

Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key.

After slapping yourself in the forehead, you try something a little smarter:

"where": {
    "parent_ou": {
        ">":3,
        "<>":7
    }
}

Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no.

Here's what works:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": [
        { "parent_ou":{ ">":3 } },
        { "parent_ou":{ "<>":7 } }
    ]
}

We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array. The resulting SQL looks like this:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    ( "aou".parent_ou > 3 )
AND
    ( "aou".parent_ou <> 7 );

That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do.

If you're in the mood, you can use arrays to add as many parentheses as you like, even if there is only one condition inside:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where":
        [[[[[[
            {
                "parent_ou":{ ">":3 }
            }
        ]]]]]]
}

...yields:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );

How to OR

By default, json_query combines conditions with AND.
When you need OR, here's how to do it:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "-or": {
            "id":2,
            "parent_ou":3
        }
    }
}

We use “-or” as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that the operator isn't confused with a column name. Later we'll see some other operators with leading minus signs. In a couple of spots we even use plus signs.

Here are the results from the above example:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    (
        "aou".id = 2
        OR "aou".parent_ou = 3
    );

The conditions paired with “-or” are linked by OR and enclosed in parentheses.

Here's how to do the same thing using an array, except that it produces an extra layer of parentheses:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "-or": [
            { "id":2 },
            { "parent_ou":3 }
        ]
    }
}

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    (
        ( "aou".id = 2 )
        OR ( "aou".parent_ou = 3 )
    );

It's possible, though not very useful, to have only a single condition subject to the “-or” operator. In that case, the condition appears by itself, since there's nothing to OR it to. This trick is another way to add an extraneous layer of parentheses.

Another way to AND

You can also use the “-and” operator. It works just like “-or”, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually need a separate operator for it, but it's available.

In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with arrays either. In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns).
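As a sketch of such a case (our own example, not from the original text), here is a “-and” nested inside a “-or”, grouping two conditions on one branch of the OR in a way that the default AND behavior alone can't express:

{
    "from":"aou",
    "select": { "aou":[ "id" ] },
    "where": {
        "-or": {
            "id":2,
            "-and": {
                "parent_ou":3,
                "ou_type":1
            }
        }
    }
}

The expected SQL is something like:

SELECT
    "aou".id AS "id"
FROM
    actor.org_unit AS "aou"
WHERE
    (
        "aou".id = 2
        OR ( "aou".parent_ou = 3 AND "aou".ou_type = 1 )
    );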
Negation with NOT

The “-not” operator negates a condition or set of conditions. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "-not": {
            "id":{ ">":2 },
            "parent_ou":3
        }
    }
}

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    NOT
    (
        "aou".id > 2
        AND "aou".parent_ou = 3
    );

In this example we merely negate a combination of two comparisons. However the condition to be negated may be as complicated as it needs to be. Anything that can be subject to “where” can be subject to “-not”.

In most cases you can achieve the same result by other means. However the “-not” operator is the only way to represent NOT BETWEEN (to be discussed later).

EXISTS with Subqueries

Two other operators carry a leading minus sign: “-exists” and its negation “-not-exists”. These operators apply to subqueries, which have the same format as a full query. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "-exists": {
            "from":"asv",
            "select":{ "asv":[ "id" ] },
            "where": {
                "owner":7
            }
        }
    }
}

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
EXISTS
    (
        SELECT "asv".id AS "id"
        FROM action.survey AS "asv"
        WHERE "asv".owner = 7
    );

This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether if it isn't satisfied.

More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "-exists": {
            "from":"asv",
            "select":{ "asv":[ "id" ] },
            "where": {
                "owner":{ "=":{ "+aou":"id" }}
            }
        }
    }
}

Note the use of “+aou” to qualify the id column in the inner WHERE clause.
SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    EXISTS
    (
        SELECT "asv".id AS "id"
        FROM action.survey AS "asv"
        WHERE ("asv".owner = ( "aou".id ))
    );

This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later).

BETWEEN Clauses

Here's how to express a BETWEEN clause:

{
    "from":"aou",
    "select": { "aou":[ "id" ] },
    "where": {
        "parent_ou": { "between":[ 3, 7 ] }
    }
}

The value associated with the column name is an object with a single entry, whose key is "between". The corresponding value is an array with exactly two values, defining the range to be tested.

The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches anything. Consequently json_query doesn't allow them.

The resulting SQL is just what you would expect:

SELECT
    "aou".id AS "id"
FROM
    actor.org_unit AS "aou"
WHERE
    parent_ou BETWEEN '3' AND '7';

IN and NOT IN Lists

There are two ways to code an IN list. One way is simply to include the list of values in an array:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou": [ 3, 5, 7 ]
    }
}

As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed.
Here's the resulting SQL, which again is just what you would expect:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".parent_ou IN (3, 5, 7);

The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "parent_ou": { "in": [ 3, 5, 7 ] }
    }
}

This version results in the same SQL as the first one.

For a NOT IN list, you can use the latter format, using the “not in” operator instead of “in”. Alternatively, you can use either format together with the “-not” operator.

IN and NOT IN Clauses with Subqueries

For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous subsection). The "in" or "not in" operator is paired, not with an array of values, but with an object representing the subquery. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "id": {
            "in": {
                "from":"asv",
                "select":{ "asv":[ "owner" ] },
                "where":{ "name":"Voter Registration" }
            }
        }
    }
}

The results:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".id IN
    (
        SELECT
            "asv".owner AS "owner"
        FROM
            action.survey AS "asv"
        WHERE
            "asv".name = 'Voter Registration'
    );

In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column.

For a NOT IN clause with a subquery, use the “not in” operator instead of “in”.

Comparing to a Function

Here's how to compare a column to a function call:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "id":{ ">":[ "sqrt", 16 ] }
    }
}

A comparison operator (“>” in this case) is paired with an array.
The first entry in the array must be a string giving the name of the function. The remaining entries, if any, are the parameters. They may be strings, numbers, or nulls. The resulting SQL for this example:

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    "aou".id > sqrt( '16' );

All parameters are passed as quoted strings -- even if, as in this case, they are really numbers.

This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example).

Putting a Function Call on the Left

In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function. In the WHERE clause, you can use similar syntax to transform the value of a column before comparing it to something else.

For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "name": {
            "=": {
                "transform":"upper",
                "value":"CARTER BRANCH"
            }
        }
    }
}

The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side of the comparison.
SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    upper("aou".name ) = 'CARTER BRANCH' ;

As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as “params”:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "name": {
            "=": {
                "transform":"substr",
                "params":[ 1, 6 ],
                "value":"CARTER"
            }
        }
    }
}

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    substr("aou".name,'1','6' ) = 'CARTER' ;

The first parameter is always the column name, qualified by the class name, followed by any additional parameters (which are always enclosed in quotes even if they are numeric).

As in the SELECT clause: if the function returns multiple columns, you can specify the one you want by using a "result_field" entry (not shown here).

If you leave out the "transform" entry (or misspell it), the column name will appear on the left without any function call. This syntax works, but it's more complicated than it needs to be.

Putting Function Calls on Both Sides

If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the “value” entry carries an array instead of a literal value. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "id": {
            ">": {
                "transform":"factorial",
                "value":[ "sqrt", 1000 ]
            }
        }
    }
}

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
    factorial("aou".id ) > sqrt( '1000' ) ;

The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats for defining function calls:

• For a function call to the left of the comparison, the function name is tagged as “transform”.
The first parameter is always the relevant column name; additional parameters, if any, are in an array tagged as "params". The entry for “result_field”, if present, specifies a subcolumn.

• For a function call to the right of the comparison, the function name is the first entry in an array, together with any parameters. There's no way to specify a subcolumn.

Comparing a Function to a Condition

So far we have seen two kinds of data for the “value” tag. A string or number translates to a literal value, and an array translates to a function call. The third possibility is a JSON object, which translates to a condition. For example:

{
    "from":"aou",
    "select": { "aou":[ "id", "name" ] },
    "where": {
        "id": {
            "=": {
                "value":{ "parent_ou":{ ">":3 } },
                "transform":"is_prime"
            }
        }
    }
}

The function tagged as “transform” must return boolean, or else json_query will generate invalid SQL. The function used here, “is_prime”, is fictitious.

SELECT
    "aou".id AS "id",
    "aou".name AS "name"
FROM
    actor.org_unit AS "aou"
WHERE
(
    is_prime("aou".id ) = ( "aou".parent_ou > 3 )
);

If we left out the “transform” entry, json_query would compare the column on the left (which would have to be boolean) to the condition on the right. The results are similar to those for a simpler format described earlier (see the subsection Testing Boolean Columns).

In the example above we compared the boolean to a simple condition. However the expression on the right may include multiple conditions, IN lists, subqueries, and whatever other complications are necessary.

Things You Can't Do

The WHERE clause is subject to some of the same limitations as the SELECT clause. However, in the WHERE clause these limitations are more limiting, because the client program can't compensate by doing some of the work for itself.
You can't use arbitrary expressions in a WHERE condition, such as "WHERE id > parent_ou - 3". In some cases you may be able to contrive a custom operator in order to fake such an expression. However this mechanism is neither very general nor very aesthetic.

To the right of a comparison operator, all function parameters must be literals or null. You can't pass a column value, nor can you nest function calls.

Likewise you can't include column values or arbitrary expressions in an IN list or a BETWEEN clause.

You can't include null values in an IN list or a BETWEEN list, not that you should ever want to.

As noted earlier: you can't use the comparison operators “is distinct from” or “is not distinct from”.

Also as noted earlier: a subquery in an IN clause cannot select more than one column.

JOIN clauses

Until now, our examples have selected from only one table at a time. As a result, the FROM clause has been very simple -- just a single string containing the class name of the relevant table.

When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated.

SQL provides two ways to define a join. One way is to list both tables in the FROM clause, and put the join conditions in the WHERE clause:

SELECT
    aou.id,
    aout.name
FROM
    actor.org_unit aou,
    actor.org_unit_type aout
WHERE
    aout.id = aou.ou_type;

The other way is to use an explicit JOIN clause:

SELECT
    aou.id,
    aout.name
FROM
    actor.org_unit aou
    JOIN actor.org_unit_type aout
        ON ( aout.id = aou.ou_type );

JSON queries use only the second of these methods. The following example expresses the same query in JSON:

{
    "select": { "aou":[ "id" ], "aout":[ "name" ] },
    "from": {
        "aou":"aout"
    }
}

First, let's review the SELECT clause. Since it selects rows from two different tables, the data for “select” includes two entries, one for each table.

As for the FROM clause, it's no longer just a string.
It's a JSON object, with exactly one entry. The key of this entry is the class name of the core table, i.e. the table named immediately after the FROM keyword. The data associated with this key contains the rest of the information about the join. In this simple example, that information consists entirely of a string containing the class name of the other table.

So where is the join condition?

It's in the IDL. Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly:

SELECT
    "aou".id AS "id",
    "aout".name AS "name"
FROM
    actor.org_unit AS "aou"
    INNER JOIN actor.org_unit_type AS "aout"
        ON ( "aout".id = "aou".ou_type ) ;

In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around:

{
    "select": { "aou":[ "id" ], "aout":[ "name" ] },
    "from": {
        "aout":"aou"
    }
}

SELECT
    "aou".id AS "id",
    "aout".name AS "name"
FROM
    actor.org_unit_type AS "aout"
    INNER JOIN actor.org_unit AS "aou"
        ON ( "aou".ou_type = "aout".id ) ;

Specifying The Join Columns Explicitly

While it's convenient to let json_query pick the join columns, it doesn't always work.

For example, the actor.org_unit table has four different address ids, for four different kinds of addresses. Each of them is a foreign key to the actor.org_address table. Json_query can't guess which one you want if you don't tell it.

(Actually it will try to guess. It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.)

Here's how to define exactly which columns you want for the join:

{
    "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
    "from": {
        "aou": {
            "aoa": {
                "fkey":"holds_address",
                "field":"id"
            }
        }
    }
}

Before, the table we were joining was represented merely by its class name.
Now it's represented by an entry in a JSON object. The key of that entry is the class name, and the associated data is another layer of JSON object containing the attributes of the join.

Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns: “fkey” and “field”. The hard part is remembering which is which:

• “fkey” identifies the join column from the left table;
• “field” identifies the join column from the right table.

When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the core table.

Here is the result of the preceding JSON:

SELECT
    "aou".id AS "id",
    "aoa".street1 AS "street1"
FROM
    actor.org_unit AS "aou"
    INNER JOIN actor.org_address AS "aoa"
        ON ( "aoa".id = "aou".holds_address ) ;

In this example the child table is on the left and the parent table is on the right. We can swap the tables if we swap the join columns as well:

{
    "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
    "from": {
        "aoa": {
            "aou": {
                "fkey":"id",
                "field":"holds_address"
            }
        }
    }
}

SELECT
    "aou".id AS "id",
    "aoa".street1 AS "street1"
FROM
    actor.org_address AS "aoa"
    INNER JOIN actor.org_unit AS "aou"
        ON ( "aou".holds_address = "aoa".id ) ;

When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense. The burden is on you to avoid absurdities.

Specifying Only One Join Column

We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end.
Consider the following variation on the previous example:

{
    "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
    "from": {
        "aoa": {
            "aou": {
                "field":"holds_address"
            }
        }
    }
}

...which results in exactly the same SQL as before.

Here we specified the join column from the child table, the column that is a foreign key pointing to another table. As long as that linkage is defined in the IDL, json_query can look it up and figure out what the corresponding column is in the parent table.

However this shortcut doesn't work if you specify only the column in the parent table, because it would lead to ambiguities. Suppose we had specified the id column of actor.org_address. As noted earlier, there are four different foreign keys from actor.org_unit to actor.org_address, and json_query would have no way to guess which one we wanted.

Joining to Multiple Tables

So far we have joined only two tables at a time. What if we need to join one table to two different tables?

Here's an example:

{
    "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] },
    "from": {
        "aou": {
            "aout":{},
            "aoa": {
                "fkey":"holds_address"
            }
        }
    }
}

The first join, to actor.org_unit_type, is simple. We could have specified join columns, but we don't have to, because json_query will construct that join on the basis of what it finds in the IDL. Having no join attributes to specify, we leave that object empty.

For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join column from the parent table, but we don't have to, so we didn't.
Here is the resulting SQL:

SELECT
    "aou".id AS "id",
    "aout".depth AS "depth",
    "aoa".street1 AS "street1"
FROM
    actor.org_unit AS "aou"
    INNER JOIN actor.org_unit_type AS "aout"
        ON ( "aout".id = "aou".ou_type )
    INNER JOIN actor.org_address AS "aoa"
        ON ( "aoa".id = "aou".holds_address ) ;

Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next level has one entry for every table that's joined to the core table.

Nested Joins

Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead?

Yes, we can:

{
    "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] },
    "from": {
        "aoa": {
            "aou": {
                "field":"holds_address",
                "join": {
                    "aout":{ "fkey":"ou_type" }
                }
            }
        }
    }
}

The “join” attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. Here are the results:

SELECT
    "aou".id AS "id",
    "aout".depth AS "depth",
    "aoa".street1 AS "street1"
FROM
    actor.org_address AS "aoa"
    INNER JOIN actor.org_unit AS "aou"
        ON ( "aou".holds_address = "aoa".id )
    INNER JOIN actor.org_unit_type AS "aout"
        ON ( "aout".id = "aou".ou_type ) ;

Outer Joins

By default, json_query constructs an inner join.
If you need an outer join, you can add the join type as an attribute of the join:

{
    "select": { "aou":[ "id" ], "aoa":[ "street1" ] },
    "from": {
        "aoa": {
            "aou": {
                "field":"mailing_address",
                "type":"left"
            }
        }
    }
}

Here is the resulting SQL for this example:

SELECT
    "aou".id AS "id",
    "aoa".street1 AS "street1"
FROM
    actor.org_address AS "aoa"
    LEFT JOIN actor.org_unit AS "aou"
        ON ( "aou".mailing_address = "aoa".id ) ;

Referring to Joined Tables in the WHERE Clause

In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name.

If a column belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name to use for an alias. For example:

{
    "select": { "aou":[ "id" ], "aout":[ "name" ] },
    "from": {
        "aout":"aou"
    },
    "where": {
        "+aou":{ "parent_ou":2 }
    }
}

Note the peculiar operator “+aou” -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to the condition that follows. The result:

SELECT
    "aou".id AS "id",
    "aout".name AS "name"
FROM
    actor.org_unit_type AS "aout"
    INNER JOIN actor.org_unit AS "aou"
        ON ( "aou".ou_type = "aout".id )
WHERE
    ( "aou".parent_ou = 2 );

The plus-class operator may apply to multiple conditions:

{
    "select": { "aou":[ "id" ], "aout":[ "name" ] },
    "from": {
        "aout":"aou"
    },
    "where": {
        "+aou":{
            "parent_ou":2,
            "id":{ "<":42 }
        }
    }
}

SELECT
    "aou".id AS "id",
    "aout".name AS "name"
FROM
    actor.org_unit_type AS "aout"
    INNER JOIN actor.org_unit AS "aou"
        ON ( "aou".ou_type = "aout".id )
WHERE
    (
        "aou".parent_ou = 2
        AND "aou".id < 42
    );

For these artificial examples, it would have been simpler to swap the tables, so that actor.org_unit is the core table.
Then you wouldn't need to go through any
+ special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables
+ wouldn't solve the problem.
+ You can also use a plus-class operator to compare columns from two different tables:
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout":"aou"
+    },
+    "where": {
+        "depth": { ">": { "+aou":"parent_ou" } }
+    }
+}
+
+SELECT
+    "aou".id AS "id",
+    "aout".name AS "name"
+FROM
+    actor.org_unit_type AS "aout"
+        INNER JOIN actor.org_unit AS "aou"
+            ON ( "aou".ou_type = "aout".id )
+WHERE
+    (
+        "aout".depth > ( "aou".parent_ou )
+    );
+
+ Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax.
+
+ Join Filters
+
+ While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause:
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout": {
+            "aou": {
+                "filter": {
+                    "parent_ou":2
+                }
+            }
+        }
+    }
+}
+
+SELECT
+    "aou".id AS "id", "aout".name AS "name"
+FROM
+    actor.org_unit_type AS "aout"
+        INNER JOIN actor.org_unit AS "aou"
+            ON ( "aou".ou_type = "aout".id
+            AND "aou".parent_ou = 2 ) ;
+
+ By default, json_query uses AND to combine the “filter” condition with the original join condition. If you need OR, you can use the “filter_op” attribute to
+ say so:
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout": {
+            "aou": {
+                "filter": {
+                    "parent_ou":2
+                },
+                "filter_op":"or"
+            }
+        }
+    }
+}
+
+SELECT
+    "aou".id AS "id",
+    "aout".name AS "name"
+FROM
+    actor.org_unit_type AS "aout"
+        INNER JOIN actor.org_unit AS "aou"
+            ON ( "aou".ou_type = "aout".id
+            OR "aou".parent_ou = 2 ) ;
+
+ If the data tagged by “filter_op” is anything but “or” (in upper, lower, or mixed case), json_query uses AND instead of OR.
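Because a JSON query is ordinary JSON, client code can assemble these structures programmatically rather than writing them by hand. Here is a small Python sketch (illustrative only, not part of Evergreen; the helper name is invented) that builds the join-filter query shown above and adds "filter_op" only when OR is wanted:

```python
import json

def join_with_filter(filter_conditions, filter_op=None):
    """Build the query object joining "aout" to "aou" with a join filter.
    The class names and columns come from the examples in this section."""
    join_spec = {"filter": filter_conditions}
    if filter_op is not None:
        # json_query honors only "or" here; any other value still means AND.
        join_spec["filter_op"] = filter_op
    return {
        "select": {"aou": ["id"], "aout": ["name"]},
        "from": {"aout": {"aou": join_spec}},
    }

# Reproduces the first join-filter example above (AND is the default).
print(json.dumps(join_with_filter({"parent_ou": 2}), sort_keys=True))
```

Passing `filter_op="or"` to the same helper reproduces the second example.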
+ The condition tagged by “filter” may be much more complicated. In fact it accepts all the same syntax as the WHERE clause.
+ Remember, though, that it all gets combined with the original join condition with an AND, or with an OR if you so specify. If
+ you're not careful, the result may be a confusing mixture of AND and OR at the same level.
+
+ Joining to a Subquery
+
+ In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. A JSON query has no way to do that directly. The IDL, however,
+ can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause. For example:
+
+{
+    "select":{ "iatc":[ "id", "dest", "copy_status" ] },
+    "from": "iatc"
+}
+
+ There's nothing special-looking about this JSON, but json_query expands it as follows:
+
+SELECT
+    "iatc".id AS "id",
+    "iatc".dest AS "dest",
+    "iatc".copy_status AS "copy_status"
+FROM
+    (
+        SELECT t.*
+        FROM
+            action.transit_copy t
+                JOIN actor.org_unit AS s
+                    ON (t.source = s.id)
+                JOIN actor.org_unit AS d
+                    ON (t.dest = d.id)
+        WHERE
+            s.parent_ou <> d.parent_ou
+    ) AS "iatc" ;
+
+ The “iatc” class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be
+ impossible through a JSON query, because it joins the same table in two different ways (see the next subsection).
+
+ Things You Can't Do
+
+ In a JOIN, as with other SQL constructs, there are some things that you can't do with a JSON query.
+ In particular, you can't specify a table alias, because the table alias is always the class name. As a result:
+ •
+ You can't join a table to itself. For example, you can't join actor.org_unit to itself in order to select the name of the parent for every org_unit.
+ •
+ You can't join to the same table in more than one way.
For example, you can't join actor.org_unit to actor.org_address through four different foreign
+ keys, to get four kinds of addresses in a single query.
+
+ The only workaround is to perform the join in a view, or in a subquery defined in the IDL as described in the previous subsection.
+ Some other things, while not impossible, require some ingenuity in the use of join filters.
+ For example: by default, json_query constructs a join condition using only a single pair of corresponding columns. As long as the database is designed accordingly,
+ a single pair of columns will normally suffice. If you ever need to join on more than one pair of columns, you can use join filters for the extras.
+ Likewise, join conditions are normally equalities. In raw SQL it is possible (though rarely useful) to base a join on an inequality, or to use a function call in a join
+ condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join
+ conditions with join filters.
+ For example, here's how to get a Cartesian product:
+
+{
+    "select": { "aou":[ "id" ], "aout":[ "name" ] },
+    "from": {
+        "aout": {
+            "aou": {
+                "filter": {
+                    "ou_type":{ "<>": { "+aout":"id" } }
+                },
+                "filter_op":"or"
+            }
+        }
+    }
+}
+
+SELECT
+    "aou".id AS "id",
+    "aout".name AS "name"
+FROM
+    actor.org_unit_type AS "aout"
+        INNER JOIN actor.org_unit AS "aou"
+            ON
+            (
+                "aou".ou_type = "aout".id
+                OR ("aou".ou_type <> ( "aout".id ))
+            ) ;
+
+ Yes, it's ugly, but at least you're not likely to do it by accident.
+
+ Selecting from Functions
+
+ In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table.
+ A JSON query can also select from a function:
+
+{
+    "from": [ "actor.org_unit_ancestors", 5 ]
+}
+
+ The data associated with “from” is an array instead of a string or an object.
The first element in the array specifies the name of the function. Subsequent elements,
+ if any, supply the parameters of the function; they must be literal values or nulls.
+ Here is the resulting query:
+
+SELECT *
+FROM
+    actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ;
+
+ In a JSON query this format is very limited, largely because the IDL knows nothing about the available functions. You can't join the function to a table or to
+ another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. The generated query will always select every column, via a wild card asterisk,
+ from every row.
+
+ The ORDER BY Clause
+
+ In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array:
+
+{
+    "select":{ "aou":[ "name" ] },
+    "from": "aou",
+    "order_by": [
+        { "class":"aou", "field":"name" }
+    ]
+}
+
+ Now the object:
+
+{
+    "select":{ "aou":[ "name" ] },
+    "from": "aou",
+    "order_by": {
+        "aou":{ "name":{} }
+    }
+}
+
+ The two versions produce identical results:
+
+SELECT
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    "aou".name;
+
+ The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object
+ format can't do.
+
+ ORDER BY as an Array
+
+ In the array format, each element of the array is an object defining one of the sort fields. Each such object must include at least two tags:
+ •
+ The “class” tag provides the name of the class, which must be either the core class or a joined class.
+ •
+ The “field” tag provides the field name, corresponding to one of the columns of the class.
+
+ If you want to sort by multiple fields, just include a separate object for each field.
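Since each sort field is just a small object, a multi-field sort is easy to assemble programmatically. A Python sketch (illustrative only; the helper name and the choice of ou_type and name as sort fields are this example's, not the original text's):

```python
import json

def sort_field(cls, field, direction=None, transform=None):
    """Build one element of an "order_by" array; the extra tags are optional."""
    entry = {"class": cls, "field": field}
    if direction is not None:
        entry["direction"] = direction
    if transform is not None:
        entry["transform"] = transform
    return entry

query = {
    "select": {"aou": ["name"]},
    "from": "aou",
    # Two sort fields: org_unit type first, then name.
    "order_by": [sort_field("aou", "ou_type"), sort_field("aou", "name")],
}
print(json.dumps(query["order_by"]))
```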
+ If you want to sort a field in descending order, add a “direction” tag; if its value begins with “D” or “d”, the sort is descending, and otherwise ascending.
+ To pass a column through a transforming function before sorting, add a “transform” tag:
+
+{
+    "select":{ "aou":[ "name" ] },
+    "from": "aou",
+    "order_by": [
+        {
+            "class":"aou",
+            "field":"name",
+            "transform":"upper"
+        }
+    ]
+}
+
+SELECT
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    upper("aou".name );
+
+ If you need additional parameters for the function, you can use the “params” tag to pass them:
+
+{
+    "select":{ "aou":[ "name" ] },
+    "from": "aou",
+    "order_by": [
+        {
+            "class":"aou",
+            "field":"name",
+            "transform":"substr",
+            "params":[ 1, 8 ]
+        }
+    ]
+}
+
+ The additional parameters appear as elements in an array. They may be numbers, strings, or nulls.
+
+SELECT
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    substr("aou".name,'1','8' );
+
+ As we have seen elsewhere, all literal values are passed as quoted strings, even if they are numbers.
+ If the function returns multiple columns, you can use the “result_field” tag to indicate which one you want (not shown).
+
+ ORDER BY as an Object
+
+ When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for
+ each class can be either an array or another layer of object. Here's an example with one of each:
+
+{
+    "select":{ "aout":"id", "aou":[ "name" ] },
+    "from": { "aou":"aout" },
+    "order_by": {
+        "aout":[ "id" ],
+        "aou":{ "name":{ "direction":"desc" } }
+    }
+}
+
+ For the “aout” class, the associated array is simply a list of field names (in this case, just one). Naturally, each field must reside in the class with which
+ it is associated.
+ However, a list of field names provides no way to specify the direction of sorting, or a transforming function. You can add those details only if the class
+ name is paired with an object, as in the example for the "aou" class. The keys for such an object are field names, and the associated tags define other details.
+ In this example, we use the “direction” tag to specify that the name field be sorted in descending order. This tag works the same way here as described earlier.
+ If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending.
+ Here is the resulting SQL:
+
+SELECT
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+        INNER JOIN actor.org_unit_type AS "aout"
+            ON ( "aout".id = "aou".ou_type )
+ORDER BY
+    "aout".id,
+    "aou".name DESC;
+
+ A transforming function, with or without parameters, works in the object format much as it does in the array format:
+
+{
+    "select":{ "aou":[ "name", "id" ] },
+    "from": "aou",
+    "order_by": {
+        "aou":{
+            "name":{ "transform":"substr", "params":[ 1, 8 ] }
+        }
+    }
+}
+
+SELECT
+    "aou".name AS "name",
+    "aou".id AS "id"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    substr("aou".name,'1','8' );
+
+ Things You Can't Do
+
+ If you encode the ORDER BY clause as an object, you may encounter a couple of restrictions.
+ Because the key of such an object is the class name, all the fields from a given class must be grouped together. You can't sort by a column from one table, followed by
+ a column from another table, followed by a column from the first table. If you need such a sort, you must encode the ORDER BY clause in the array format, which suffers
+ from no such restrictions.
+ For similar reasons, with an ORDER BY clause encoded as an object, you can't reference the same column more than once. Although such a sort may seem perverse,
+ there are situations where it can be useful, provided that the column is passed to a transforming function.
+ For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first. For example, you want “diBona” to sort
+ before “Dibona”.
Here's a way to do that, coding the ORDER BY clause as an array:
+
+{
+    "select":{ "au":[ "family_name", "id" ] },
+    "from": "au",
+    "order_by": [
+        { "class":"au", "field":"family_name", "transform":"upper" },
+        { "class":"au", "field":"family_name" }
+    ]
+}
+
+SELECT
+    "au".family_name AS "family_name",
+    "au".id AS "id"
+FROM
+    actor.usr AS "au"
+ORDER BY
+    upper("au".family_name ),
+    "au".family_name;
+
+ Such a sort is not possible where the ORDER BY clause is coded as an object.
+
+ The GROUP BY Clause
+
+ A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However,
+ the way it works is a bit backwards from what you might expect, so pay attention.
+ Here's an example:
+
+{
+    "select": {
+        "aou": [
+            { "column":"parent_ou" },
+            { "column":"name", "transform":"max", "aggregate":true }
+        ]
+    },
+    "from": "aou"
+}
+
+ The “transform” tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the “aggregate” tag.
+ Here's the resulting SQL:
+
+SELECT
+    "aou".parent_ou AS "parent_ou",
+    max("aou".name ) AS "name"
+FROM
+    actor.org_unit AS "aou"
+GROUP BY
+    1;
+
+ The GROUP BY clause references fields from the SELECT clause by numerical reference, instead of by repeating them. Notice that the field it references,
+ parent_ou, is the one that doesn't carry the “aggregate” tag in the JSON.
+ Let's state that more generally. The GROUP BY clause includes only the fields that do not carry the “aggregate” tag (or that carry it with a value of false).
+ However, that logic applies only when some field somewhere does carry the “aggregate” tag, with a value of true. If there is no “aggregate” tag, or
+ it appears only with a value of false, then there is no GROUP BY clause.
+ If you really want to include every field in the GROUP BY clause, don't use “aggregate”. Use the “distinct” tag, as described in the next section.
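That rule is easy to misread, so here it is restated as a small Python sketch (not Evergreen's actual implementation): given the SELECT field objects, it computes which positions would land in the GROUP BY clause.

```python
def group_by_positions(select_fields):
    """Mimic the stated rule: group every field that lacks "aggregate":true,
    but only if at least one field carries "aggregate":true."""
    if not any(f.get("aggregate") for f in select_fields):
        return []  # no "aggregate" tag anywhere: no GROUP BY clause at all
    return [pos for pos, f in enumerate(select_fields, start=1)
            if not f.get("aggregate")]

fields = [
    {"column": "parent_ou"},
    {"column": "name", "transform": "max", "aggregate": True},
]
print(group_by_positions(fields))  # -> [1], matching "GROUP BY 1" above
```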
+
+ The DISTINCT Clause
+
+ JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as
+ applying DISTINCT to the entire SELECT clause.
+ For example:
+
+{
+    "select": {
+        "aou": [
+            "parent_ou",
+            "ou_type"
+        ]
+    },
+    "from":"aou",
+    "distinct":"true"
+}
+
+ Note the “distinct” entry at the top level of the query object, with a value of “true”.
+
+SELECT
+    "aou".parent_ou AS "parent_ou",
+    "aou".ou_type AS "ou_type"
+FROM
+    actor.org_unit AS "aou"
+GROUP BY
+    1, 2;
+
+ The generated GROUP BY clause references every column in the SELECT clause by number.
+
+ The HAVING Clause
+
+ For a HAVING clause, add a “having” entry at the top level of the query object. For the associated data, you can use all the same syntax
+ that you can use for a WHERE clause.
+ Here's a simple example:
+
+{
+    "select": {
+        "aou": [
+            "parent_ou", {
+                "column":"id",
+                "transform":"count",
+                "alias":"id_count",
+                "aggregate":"true"
+            }
+        ]
+    },
+    "from":"aou",
+    "having": {
+        "id": {
+            ">" : {
+                "transform":"count",
+                "value":6
+            }
+        }
+    }
+}
+
+ We use the “aggregate” tag in the SELECT clause to give us a GROUP BY to go with the HAVING. Results:
+
+SELECT
+    "aou".parent_ou AS "parent_ou",
+    count("aou".id ) AS "id_count"
+FROM
+    actor.org_unit AS "aou"
+GROUP BY
+    1
+HAVING
+    count("aou".id ) > 6 ;
+
+ In raw SQL we could have referred to “count( 1 )”. But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column that
+ cannot be null.
+
+ The LIMIT and OFFSET Clauses
+
+ To add a LIMIT or OFFSET clause, add an entry to the top level of a query object.
For example:
+
+{
+    "select": {
+        "aou": [ "id", "name" ]
+    },
+    "from":"aou",
+    "order_by": { "aou":[ "id" ] },
+    "offset": 7,
+    "limit": 42
+}
+
+ The data associated with “offset” and “limit” may be either a number or a string, but if it's a string, it should have a number inside.
+ Result:
+
+SELECT
+    "aou".id AS "id",
+    "aou".name AS "name"
+FROM
+    actor.org_unit AS "aou"
+ORDER BY
+    "aou".id
+LIMIT 42
+OFFSET 7;
+
+ Chapter 21. SuperCat
+ Report errors in this documentation using Launchpad.
+
+ Using SuperCat
+
+ SuperCat allows Evergreen record and information retrieval from a web browser using a number of open web standards and formats. The following record types are
+ supported:
+ • isbn
+ • metarecord
+ • record
+
+ Return a list of ISBNs for related records
+
+ Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm:
+ http://<hostname>/opac/extras/oisbn/<ISBN>
+ For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns:
+
+<idlist metarecord="302670">
+<isbn record="250060">0790783525</isbn>
+<isbn record="20717">0736691316</isbn>
+<isbn record="250045">0790783517</isbn>
+<isbn record="199060">9500421151</isbn>
+<isbn record="250061">0790783495</isbn>
+<isbn record="154477">0807286028</isbn>
+<isbn record="227297">1594130027</isbn>
+<isbn record="26682">0786222743</isbn>
+<isbn record="17179">0807282316</isbn>
+<isbn record="34885">0807282316</isbn>
+<isbn record="118019">8478885196</isbn>
+<isbn record="1231">0738301477</isbn>
+</idlist>
+
+ Return records
+
+ SuperCat can return records and metarecords in many different formats (see the section called “Supported formats”):
+ http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID>
+ For
example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns:
+
+<mods:modsCollection version="3.0">
+    <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd">
+        <titleInfo>
+            <title>More Brer Rabbit stories /</title>
+        </titleInfo>
+        <typeOfResource>text</typeOfResource>
+        <originInfo>
+            <place>
+                <code authority="marc">xx</code>
+            </place>
+            <publisher>Award Publications</publisher>
+            <dateIssued>c1982, 1983</dateIssued>
+            <dateIssued encoding="marc" point="start">1983</dateIssued>
+            <dateIssued encoding="marc" point="end">1982</dateIssued>
+            <issuance>monographic</issuance>
+        </originInfo>
+        <language authority="iso639-2b">eng</language>
+        <physicalDescription>
+            <form authority="marcform">print</form>
+            <extent>unp. : col. ill.</extent>
+        </physicalDescription>
+        <note type="statement of responsibility">ill. by Rene Cloke.</note>
+        <subject authority="lcsh">
+            <topic>Animals</topic>
+            <topic>Fiction</topic>
+        </subject>
+        <subject authority="lcsh">
+            <topic>Fables</topic>
+        </subject>
+        <recordInfo>
+            <recordContentSource>(BRO)</recordContentSource>
+            <recordCreationDate encoding="marc">930903</recordCreationDate>
+            <recordChangeDate encoding="iso8601">19990703024637.0</recordChangeDate>
+            <recordIdentifier>PIN60000007 </recordIdentifier>
+        </recordInfo>
+    </mods:mods>
+</mods:modsCollection>
+
+ Return a feed of recently edited or created records
+
+ SuperCat can return feeds of recently edited or created authority and bibliographic records:
+ http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date>
+ The limit records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent limit records will be returned.
+ If you do not supply a limit, then up to 10 records will be returned.
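As a quick illustration, a feed URL of this shape can be assembled from its components. A Python sketch (the helper name and hostname are examples, not part of SuperCat):

```python
def freshmeat_url(hostname, feed_type, record_class, action, limit=10, date=None):
    """Build a SuperCat freshmeat feed URL; the date component is optional."""
    parts = ["opac", "extras", "feed", "freshmeat",
             feed_type, record_class, action, str(limit)]
    if date is not None:
        parts.append(date)
    return "http://" + hostname + "/" + "/".join(parts)

# Matches the example URL given in this section.
print(freshmeat_url("dev.gapines.org", "atom", "biblio", "import",
                    limit=10, date="2008-01-01"))
```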
+ Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2.
+ For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01
+
+ Browse records
+
+ SuperCat can browse records in HTML and XML formats:
+ http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number>
+ For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns:
+
+<hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'>
+    <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH">
+        <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/>
+        <record xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/
+            standards/marcxml/schema/MARC21slim.xsd"
+            id="tag:open-ils.org,2008:biblio-record_entry/21669/FRRLS-FA">
+            <leader>09319pam a2200961 a 4500</leader>
+            <controlfield tag="001"/>
+            <controlfield tag="005">20000302124754.0</controlfield>
+            <controlfield tag="008">990817s2000 nyu 000 1 eng </controlfield>
+            <datafield tag="010" ind1=" " ind2=" ">
+                <subfield code="a"> 99045936</subfield>
+            </datafield>
+            ..
+        </record>
+        <record>
+            ..
+        </record>
+    </hold:volume>
+</hold:volumes>
+
+ Supported formats
+
+ SuperCat maintains a list of supported formats for records and metarecords:
+ http://<hostname>/opac/extras/supercat/formats/<record-type>
+ For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns:
+
+<formats>
+    <format>
+        <name>opac</name>
+        <type>text/html</type>
+    </format>
+    <format>
+        <name>htmlholdings</name>
+        <type>text/html</type>
+    </format>
+...
+
+ Adding new SuperCat Formats
+
+ Adding SuperCat formats requires experience editing XSL files and familiarity with XML and Perl.
+ SuperCat web services are based on the OpenSRF service open-ils.supercat.
+ Developers are able to add new formats by adding the xsl stylesheet for the format. By default, the location of the stylesheets is /openils/var/xsl/. You must also add the feed to the perl
+ modules openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. An Evergreen restart is
+ required for the feed to be activated.
+ Use an existing xsl stylesheet and Perl module entry as a template for your new format.
+
+ Customizing SuperCat Formats
+
+ Editing SuperCat formats requires experience editing XSL files and familiarity with XML.
+ It is possible to customize existing supercat formats using XSL stylesheets. You are able to change the content to be displayed and the design of the pages.
+ In order to change the display of a specific format, edit the corresponding XSL file(s) for the particular format. The default location for the XSL stylesheets is
+ /openils/var/xsl/.
+
+ Part VIII. Appendices
+ Report errors in this documentation using Launchpad.
+ Table of Contents
+ 22. Database Schema
+     Schema acq
+     Schema action
+     Schema action_trigger
+     Schema actor
+     Schema asset
+     Schema auditor
+     Schema authority
+     Schema biblio
+     Schema booking
+     Schema config
+     Schema container
+     Schema extend_reporter
+     Schema metabib
+     Schema money
+     Schema offline
+     Schema permission
+     Schema public
+     Schema query
+     Schema reporter
+     Schema search
+     Schema serial
+     Schema staging
+     Schema stats
+     Schema vandelay
+ A. About this Documentation
+     About the Documentation Interest Group (DIG)
+     How to Participate
+ B. Getting More Information
+ Glossary
+ Index
+
+ Chapter 22. Database Schema
+ Report errors in this documentation using Launchpad.
+ Chapter 22.
Database Schema + Report any errors in this documentation using Launchpad. + Chapter 22. Database SchemaChapter 22. Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqacq_lineitem_historyacq_lineitem_historyFieldData TypeConstraints and Referencesaudit_idbigint + + + PRIMARY KEY + + + + + + + + + audit_timetimestamp with time zone + + + NOT NULL; + + + + audit_actiontext + + + NOT NULL; + + + + idbigint + + + NOT NULL; + + + + creatorinteger + + + NOT NULL; + + + + editorinteger + + + NOT NULL; + + + + selectorinteger + + + NOT NULL; + + + + providerinteger + + + + + purchase_orderinteger + + + + + picklistinteger + + + + + expected_recv_timetimestamp with time zone + + + + + create_timetimestamp with time zone + + + NOT NULL; + + + + edit_timetimestamp with time zone + + + NOT NULL; + + + + marctext + + + NOT NULL; + + + + eg_bib_idbigint + + + + + source_labeltext + + + + + statetext + + + NOT NULL; + + + + cancel_reasoninteger + + + + + estimated_unit_pricenumeric + + + + + claim_policyinteger + + + + + + + + + + acq_lineitem_lifecycleacq_lineitem_lifecycleFieldData TypeConstraints and References?column?bigint + + + + + audit_timetimestamp with time zone + + + + + audit_actiontext + + + + + idbigint + + + + + creatorinteger + + + + + editorinteger + + + + + selectorinteger + + + + + providerinteger + + + + + purchase_orderinteger + + + + + picklistinteger + + + + + expected_recv_timetimestamp with time zone + + + + + create_timetimestamp with time zone + + + + + edit_timetimestamp with time zone + + + + + marctext + + + + + eg_bib_idbigint + + + + + source_labeltext + + + + + statetext + + + + + cancel_reasoninteger + + + + + estimated_unit_pricenumeric + + + + + claim_policyinteger + + + + + + + + + + acq_purchase_order_historyacq_purchase_order_historyFieldData TypeConstraints and Referencesaudit_idbigint + + + PRIMARY KEY + + + + + + + + + audit_timetimestamp with time zone + + + NOT NULL; + + + + audit_actiontext + + + NOT NULL; 
+ + + + idinteger + + + NOT NULL; + + + + ownerinteger + + + NOT NULL; + + + + creatorinteger + + + NOT NULL; + + + + editorinteger + + + NOT NULL; + + + + ordering_agencyinteger + + + NOT NULL; + + + + create_timetimestamp with time zone + + + NOT NULL; + + + + edit_timetimestamp with time zone + + + NOT NULL; + + + + providerinteger + + + NOT NULL; + + + + statetext + + + NOT NULL; + + + + order_datetimestamp with time zone + + + + + nametext + + + NOT NULL; + + + + cancel_reasoninteger + + + + + prepayment_requiredboolean + + + NOT NULL; + + + + + + + + + acq_purchase_order_lifecycleacq_purchase_order_lifecycleFieldData TypeConstraints and References?column?bigint + + + + + audit_timetimestamp with time zone + + + + + audit_actiontext + + + + + idinteger + + + + + ownerinteger + + + + + creatorinteger + + + + + editorinteger + + + + + ordering_agencyinteger + + + + + create_timetimestamp with time zone + + + + + edit_timetimestamp with time zone + + + + + providerinteger + + + + + statetext + + + + + order_datetimestamp with time zone + + + + + nametext + + + + + cancel_reasoninteger + + + + + prepayment_requiredboolean + + + + + + + + + + all_fund_allocation_totalall_fund_allocation_totalFieldData TypeConstraints and Referencesfundinteger + + + + + amountnumeric + + + + + + + + + + all_fund_combined_balanceall_fund_combined_balanceFieldData TypeConstraints and Referencesfundinteger + + + + + amountnumeric + + + + + + + + + + all_fund_encumbrance_totalall_fund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger + + + + + amountnumeric + + + + + + + + + + all_fund_spent_balanceall_fund_spent_balanceFieldData TypeConstraints and Referencesfundinteger + + + + + amountnumeric + + + + + + + + + + all_fund_spent_totalall_fund_spent_totalFieldData TypeConstraints and Referencesfundinteger + + + + + amountnumeric + + + + + + + + + + cancel_reasoncancel_reasonFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + 
org_unitinteger + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + + + + + actor.org_unit + + + labeltext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + descriptiontext + + + NOT NULL; + + + + keep_debitsboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + + + Tables referencing acq.lineitem via Foreign Key Constraints + •acq.lineitem•acq.lineitem_detail•acq.purchase_order•acq.user_request + + + + + claimclaimFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + typeinteger + + + + + + NOT NULL; + + + + + acq.claim_type + + + lineitem_detailbigint + + + + + + NOT NULL; + + + + + acq.lineitem_detail + + + + + + + + Tables referencing acq.claim_event via Foreign Key Constraints + •acq.claim_event + + + + + claim_eventclaim_eventFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + typeinteger + + + + + + NOT NULL; + + + + + acq.claim_event_type + + + claimserial + + + + + + NOT NULL; + + + + + acq.claim + + + event_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + creatorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + notetext + + + + + + + + + + claim_event_typeclaim_event_typeFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + org_unitinteger + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + actor.org_unit + + + + + codetext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + descriptiontext + + + NOT NULL; + + + + library_initiatedboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + + + Tables referencing acq.claim_event via Foreign Key Constraints + •acq.claim_event•acq.claim_policy_action•acq.serial_claim_event + + + + + claim_policyclaim_policyFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + org_unitinteger + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + actor.org_unit + + + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + descriptiontext + + + NOT NULL; + + + + 
Tables referencing acq.claim_policy_action via Foreign Key Constraints:
    acq.claim_policy_action, acq.lineitem, acq.provider

acq.claim_policy_action
    id | serial | PRIMARY KEY
    claim_policy | integer | UNIQUE#1; NOT NULL; REFERENCES acq.claim_policy
    action_interval | interval | UNIQUE#1; NOT NULL
    action | integer | NOT NULL; REFERENCES acq.claim_event_type

acq.claim_type
    id | serial | PRIMARY KEY
    org_unit | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    code | text | UNIQUE#1; NOT NULL
    description | text | NOT NULL

Tables referencing acq.claim via Foreign Key Constraints:
    acq.claim, acq.serial_claim

acq.currency_type
    code | text | PRIMARY KEY
    label | text |

Tables referencing acq.exchange_rate via Foreign Key Constraints:
    acq.exchange_rate, acq.fund, acq.fund_debit, acq.funding_source, acq.provider

acq.debit_attribution
    id | integer | PRIMARY KEY
    fund_debit | integer | NOT NULL; REFERENCES acq.fund_debit
    debit_amount | numeric | NOT NULL
    funding_source_credit | integer | REFERENCES acq.funding_source_credit
    credit_amount | numeric |

acq.distribution_formula
    id | serial | PRIMARY KEY
    owner | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    name | text | UNIQUE#1; NOT NULL
    skip_count | integer | NOT NULL

Tables referencing acq.distribution_formula_application via Foreign Key Constraints:
    acq.distribution_formula_application, acq.distribution_formula_entry

acq.distribution_formula_application
    id | bigserial | PRIMARY KEY
    creator | integer | NOT NULL; REFERENCES actor.usr
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    formula | integer | NOT NULL; REFERENCES acq.distribution_formula
    lineitem | integer | NOT NULL; REFERENCES acq.lineitem

acq.distribution_formula_entry
    id | serial | PRIMARY KEY
    formula | integer | UNIQUE#1; NOT NULL; REFERENCES acq.distribution_formula
    position | integer | UNIQUE#1; NOT NULL
    item_count | integer | NOT NULL
    owning_lib | integer | REFERENCES actor.org_unit
    location | integer | REFERENCES asset.copy_location
    Constraints: acqdfe_must_be_somewhere CHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL)))

acq.edi_account
    id | integer | PRIMARY KEY; DEFAULT nextval('config.remote_account_id_seq'::regclass)
    label | text | NOT NULL
    host | text | NOT NULL
    username | text |
    password | text |
    account | text |
    path | text |
    owner | integer | NOT NULL
    last_activity | timestamp with time zone |
    provider | integer | NOT NULL; REFERENCES acq.provider
    in_dir | text |
    vendcode | text |
    vendacct | text |

Tables referencing acq.edi_message via Foreign Key Constraints:
    acq.edi_message, acq.provider

acq.edi_message
    id | serial | PRIMARY KEY
    account | integer | REFERENCES acq.edi_account
    remote_file | text |
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    translate_time | timestamp with time zone |
    process_time | timestamp with time zone |
    error_time | timestamp with time zone |
    status | text | NOT NULL; DEFAULT 'new'::text
    edi | text |
    jedi | text |
    error | text |
    purchase_order | integer | REFERENCES acq.purchase_order
    message_type | text | NOT NULL
    Constraints:
        status_value CHECK ((status = ANY (ARRAY['new'::text, 'translated'::text, 'trans_error'::text, 'processed'::text, 'proc_error'::text, 'delete_error'::text, 'retry'::text, 'complete'::text])))
        valid_message_type CHECK ((message_type = ANY (ARRAY['ORDERS'::text, 'ORDRSP'::text, 'INVOIC'::text, 'OSTENQ'::text, 'OSTRPT'::text])))

acq.exchange_rate
    id | serial | PRIMARY KEY
    from_currency | text | UNIQUE#1; NOT NULL; REFERENCES acq.currency_type
    to_currency | text | UNIQUE#1; NOT NULL; REFERENCES acq.currency_type
    ratio | numeric | NOT NULL

acq.fiscal_calendar
    id | serial | PRIMARY KEY
    name | text | NOT NULL

Tables referencing acq.fiscal_year via Foreign Key Constraints:
    acq.fiscal_year, actor.org_unit

acq.fiscal_year
    id | serial | PRIMARY KEY
    calendar | integer | UNIQUE#1; UNIQUE#2; NOT NULL; REFERENCES acq.fiscal_calendar
    year | integer | UNIQUE#1; NOT NULL
    year_begin | timestamp with time zone | UNIQUE#2; NOT NULL
    year_end | timestamp with time zone | NOT NULL

acq.fund
    id | serial | PRIMARY KEY
    org | integer | UNIQUE#1; UNIQUE#2; NOT NULL; REFERENCES actor.org_unit
    name | text | UNIQUE#1; NOT NULL
    year | integer | UNIQUE#1; UNIQUE#2; NOT NULL; DEFAULT date_part('year'::text, now())
    currency_type | text | NOT NULL; REFERENCES acq.currency_type
    code | text | UNIQUE#2
    rollover | boolean | NOT NULL; DEFAULT false
    propagate | boolean | NOT NULL; DEFAULT true
    active | boolean | NOT NULL; DEFAULT true
    balance_warning_percent | integer |
    balance_stop_percent | integer |
    Constraints: acq_fund_rollover_implies_propagate CHECK ((propagate OR (NOT rollover)))

Tables referencing acq.fund_allocation via Foreign Key Constraints:
    acq.fund_allocation, acq.fund_debit, acq.fund_tag_map, acq.fund_transfer, acq.invoice_item, acq.lineitem_detail, acq.po_item

acq.fund_allocation
    id | serial | PRIMARY KEY
    funding_source | integer | NOT NULL; REFERENCES acq.funding_source
    fund | integer | NOT NULL; REFERENCES acq.fund
    amount | numeric | NOT NULL
    allocator | integer | NOT NULL; REFERENCES actor.usr
    note | text |
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()

acq.fund_allocation_percent
    id | serial | PRIMARY KEY
    funding_source | integer | UNIQUE#1; NOT NULL; REFERENCES acq.funding_source
    org | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    fund_code | text | UNIQUE#1
    percent | numeric | NOT NULL
    allocator | integer | NOT NULL; REFERENCES actor.usr
    note | text |
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    Constraints: percentage_range CHECK (((percent >= (0)::numeric) AND (percent <= (100)::numeric)))

acq.fund_allocation_total
    fund | integer |
    amount | numeric(100,2) |

acq.fund_combined_balance
    fund | integer |
    amount | numeric |

acq.fund_debit
    id | serial | PRIMARY KEY
    fund | integer | NOT NULL; REFERENCES acq.fund
    origin_amount | numeric | NOT NULL
    origin_currency_type | text | NOT NULL; REFERENCES acq.currency_type
    amount | numeric | NOT NULL
    encumbrance | boolean | NOT NULL; DEFAULT true
    debit_type | text | NOT NULL
    xfer_destination | integer | REFERENCES acq.fund
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()

Tables referencing acq.debit_attribution via Foreign Key Constraints:
    acq.debit_attribution, acq.invoice_item, acq.lineitem_detail, acq.po_item

acq.fund_debit_total
    fund | integer |
    encumbrance | boolean |
    amount | numeric |

acq.fund_encumbrance_total
    fund | integer |
    amount | numeric |

acq.fund_spent_balance
    fund | integer |
    amount | numeric |

acq.fund_spent_total
    fund | integer |
    amount | numeric |

acq.fund_tag
    id | serial | PRIMARY KEY
    owner | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    name | text | UNIQUE#1; NOT NULL

Tables referencing acq.fund_tag_map via Foreign Key Constraints:
    acq.fund_tag_map

acq.fund_tag_map
    id | serial | PRIMARY KEY
    fund | integer | UNIQUE#1; NOT NULL; REFERENCES acq.fund
    tag | integer | UNIQUE#1; REFERENCES acq.fund_tag

acq.fund_transfer
    id | serial | PRIMARY KEY
    src_fund | integer | NOT NULL; REFERENCES acq.fund
    src_amount | numeric | NOT NULL
    dest_fund | integer | REFERENCES acq.fund
    dest_amount | numeric |
    transfer_time | timestamp with time zone | NOT NULL; DEFAULT now()
    transfer_user | integer | NOT NULL; REFERENCES actor.usr
    note | text |
    funding_source_credit | integer | NOT NULL; REFERENCES acq.funding_source_credit

acq.funding_source
    id | serial | PRIMARY KEY
    name | text | UNIQUE#1; NOT NULL
    owner | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    currency_type | text | NOT NULL; REFERENCES acq.currency_type
    code | text | UNIQUE

Tables referencing acq.fund_allocation via Foreign Key Constraints:
    acq.fund_allocation, acq.fund_allocation_percent, acq.funding_source_credit

acq.funding_source_allocation_total
    funding_source | integer |
    amount | numeric(100,2) |

acq.funding_source_balance
    funding_source | integer |
    amount | numeric(100,2) |

acq.funding_source_credit
    id | serial | PRIMARY KEY
    funding_source | integer | NOT NULL; REFERENCES acq.funding_source
    amount | numeric | NOT NULL
    note | text |
    deadline_date | timestamp with time zone |
    effective_date | timestamp with time zone | NOT NULL; DEFAULT now()

Tables referencing acq.debit_attribution via Foreign Key Constraints:
    acq.debit_attribution, acq.fund_transfer

acq.funding_source_credit_total
    funding_source | integer |
    amount | numeric |

acq.invoice
    id | serial | PRIMARY KEY
    receiver | integer | NOT NULL; REFERENCES actor.org_unit
    provider | integer | UNIQUE#1; NOT NULL; REFERENCES acq.provider
    shipper | integer | NOT NULL; REFERENCES acq.provider
    recv_date | timestamp with time zone | NOT NULL; DEFAULT now()
    recv_method | text | NOT NULL; DEFAULT 'EDI'::text; REFERENCES acq.invoice_method
    inv_type | text |
    inv_ident | text | UNIQUE#1; NOT NULL
    payment_auth | text |
    payment_method | text | REFERENCES acq.invoice_payment_method
    note | text |
    complete | boolean | NOT NULL; DEFAULT false

Tables referencing acq.invoice_entry via Foreign Key Constraints:
    acq.invoice_entry, acq.invoice_item

acq.invoice_entry
    id | serial | PRIMARY KEY
    invoice | integer | NOT NULL; REFERENCES acq.invoice
    purchase_order | integer | REFERENCES acq.purchase_order
    lineitem | integer | REFERENCES acq.lineitem
    inv_item_count | integer | NOT NULL
    phys_item_count | integer |
    note | text |
    billed_per_item | boolean |
    cost_billed | numeric(8,2) |
    actual_cost | numeric(8,2) |
    amount_paid | numeric(8,2) |

acq.invoice_item
    id | serial | PRIMARY KEY
    invoice | integer | NOT NULL; REFERENCES acq.invoice
    purchase_order | integer | REFERENCES acq.purchase_order
    fund_debit | integer | REFERENCES acq.fund_debit
    inv_item_type | text | NOT NULL; REFERENCES acq.invoice_item_type
    title | text |
    author | text |
    note | text |
    cost_billed | numeric(8,2) |
    actual_cost | numeric(8,2) |
    fund | integer | REFERENCES acq.fund
    amount_paid | numeric(8,2) |
    po_item | integer | REFERENCES acq.po_item
    target | bigint |

acq.invoice_item_type
    code | text | PRIMARY KEY
    name | text | NOT NULL
    prorate | boolean | NOT NULL; DEFAULT false

Tables referencing acq.invoice_item via Foreign Key Constraints:
    acq.invoice_item, acq.po_item

acq.invoice_method
    code | text | PRIMARY KEY
    name | text | NOT NULL

Tables referencing acq.invoice via Foreign Key Constraints:
    acq.invoice

acq.invoice_payment_method
    code | text | PRIMARY KEY
    name | text | NOT NULL

Tables referencing acq.invoice via Foreign Key Constraints:
    acq.invoice

acq.lineitem
    id | bigserial | PRIMARY KEY
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    selector | integer | NOT NULL; REFERENCES actor.usr
    provider | integer | REFERENCES acq.provider
    purchase_order | integer | REFERENCES acq.purchase_order
    picklist | integer | REFERENCES acq.picklist
    expected_recv_time | timestamp with time zone |
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()
    marc | text | NOT NULL
    eg_bib_id | bigint | REFERENCES biblio.record_entry
    source_label | text |
    state | text | NOT NULL; DEFAULT 'new'::text
    cancel_reason | integer | REFERENCES acq.cancel_reason
    estimated_unit_price | numeric |
    claim_policy | integer | REFERENCES acq.claim_policy
    Constraints: picklist_or_po CHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL)))

Tables referencing acq.distribution_formula_application via Foreign Key Constraints:
    acq.distribution_formula_application, acq.invoice_entry, acq.lineitem_attr, acq.lineitem_detail, acq.lineitem_note, acq.user_request

acq.lineitem_alert_text
    id | serial | PRIMARY KEY
    code | text | UNIQUE#1; NOT NULL
    description | text |
    owning_lib | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit

Tables referencing acq.lineitem_note via Foreign Key Constraints:
    acq.lineitem_note

acq.lineitem_attr
    id | bigserial | PRIMARY KEY
    definition | bigint | NOT NULL
    lineitem | bigint | NOT NULL; REFERENCES acq.lineitem
    attr_type | text | NOT NULL
    attr_name | text | NOT NULL
    attr_value | text | NOT NULL

acq.lineitem_attr_definition
    id | bigserial | PRIMARY KEY
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false

acq.lineitem_detail
    id | bigserial | PRIMARY KEY
    lineitem | integer | NOT NULL; REFERENCES acq.lineitem
    fund | integer | REFERENCES acq.fund
    fund_debit | integer | REFERENCES acq.fund_debit
    eg_copy_id | bigint |
    barcode | text |
    cn_label | text |
    note | text |
    collection_code | text |
    circ_modifier | text | REFERENCES config.circ_modifier
    owning_lib | integer | REFERENCES actor.org_unit
    location | integer | REFERENCES asset.copy_location
    recv_time | timestamp with time zone |
    cancel_reason | integer | REFERENCES acq.cancel_reason

Tables referencing acq.claim via Foreign Key Constraints:
    acq.claim

acq.lineitem_generated_attr_definition
    id | bigint | PRIMARY KEY; DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass)
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false
    xpath | text | NOT NULL

acq.lineitem_local_attr_definition
    id | bigint | PRIMARY KEY; DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass)
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false

acq.lineitem_marc_attr_definition
    id | bigint | PRIMARY KEY; DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass)
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false
    xpath | text | NOT NULL
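Several of the acquisitions tables above enforce business rules with CHECK constraints; for example, acq.fund carries acq_fund_rollover_implies_propagate, CHECK ((propagate OR (NOT rollover))), so a fund cannot be marked for rollover without also propagating. A minimal sketch of how that constraint behaves, using an in-memory SQLite table as a stand-in (the real table is PostgreSQL and has many more columns; this reduced illustration keeps only the constrained fields):

```python
import sqlite3

# Illustration only: a reduced stand-in for acq.fund, reproducing the
# acq_fund_rollover_implies_propagate CHECK from the schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fund (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        rollover  BOOLEAN NOT NULL DEFAULT 0,
        propagate BOOLEAN NOT NULL DEFAULT 1,
        CHECK (propagate OR (NOT rollover))
    )
""")

# Allowed: rollover set together with propagate.
conn.execute("INSERT INTO fund (name, rollover, propagate) VALUES ('Books', 1, 1)")

# Rejected: rollover without propagate violates the CHECK.
try:
    conn.execute("INSERT INTO fund (name, rollover, propagate) VALUES ('AV', 1, 0)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

The same pattern applies to acq.distribution_formula_entry's acqdfe_must_be_somewhere constraint, which requires at least one of owning_lib or location to be non-NULL.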
acq.lineitem_note
    id | serial | PRIMARY KEY
    lineitem | integer | NOT NULL; REFERENCES acq.lineitem
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()
    value | text | NOT NULL
    alert_text | integer | REFERENCES acq.lineitem_alert_text
    vendor_public | boolean | NOT NULL; DEFAULT false

acq.lineitem_provider_attr_definition
    id | bigint | PRIMARY KEY; DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass)
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false
    xpath | text | NOT NULL
    provider | integer | NOT NULL; REFERENCES acq.provider

acq.lineitem_usr_attr_definition
    id | bigint | PRIMARY KEY; DEFAULT nextval('acq.lineitem_attr_definition_id_seq'::regclass)
    code | text | NOT NULL
    description | text | NOT NULL
    remove | text | NOT NULL; DEFAULT ''::text
    ident | boolean | NOT NULL; DEFAULT false
    usr | integer | NOT NULL; REFERENCES actor.usr

acq.ordered_funding_source_credit
    sort_priority | integer |
    sort_date | timestamp with time zone |
    id | integer |
    funding_source | integer |
    amount | numeric |
    note | text |

acq.picklist
    id | serial | PRIMARY KEY
    owner | integer | UNIQUE#1; NOT NULL; REFERENCES actor.usr
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    org_unit | integer | NOT NULL; REFERENCES actor.org_unit
    name | text | UNIQUE#1; NOT NULL
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()

Tables referencing acq.lineitem via Foreign Key Constraints:
    acq.lineitem

acq.po_item
    id | serial | PRIMARY KEY
    purchase_order | integer | REFERENCES acq.purchase_order
    fund_debit | integer | REFERENCES acq.fund_debit
    inv_item_type | text | NOT NULL; REFERENCES acq.invoice_item_type
    title | text |
    author | text |
    note | text |
    estimated_cost | numeric(8,2) |
    fund | integer | REFERENCES acq.fund
    target | bigint |

Tables referencing acq.invoice_item via Foreign Key Constraints:
    acq.invoice_item

acq.po_note
    id | serial | PRIMARY KEY
    purchase_order | integer | NOT NULL; REFERENCES acq.purchase_order
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()
    value | text | NOT NULL
    vendor_public | boolean | NOT NULL; DEFAULT false

acq.provider
    id | serial | PRIMARY KEY
    name | text | UNIQUE#1; NOT NULL
    owner | integer | UNIQUE#1; UNIQUE#2; NOT NULL; REFERENCES actor.org_unit
    currency_type | text | NOT NULL; REFERENCES acq.currency_type
    code | text | UNIQUE#2; NOT NULL
    holding_tag | text |
    san | text |
    edi_default | integer | REFERENCES acq.edi_account
    active | boolean | NOT NULL; DEFAULT true
    prepayment_required | boolean | NOT NULL; DEFAULT false
    url | text |
    email | text |
    phone | text |
    fax_phone | text |
    default_claim_policy | integer | REFERENCES acq.claim_policy

Tables referencing acq.edi_account via Foreign Key Constraints:
    acq.edi_account, acq.invoice, acq.lineitem, acq.lineitem_provider_attr_definition, acq.provider_address, acq.provider_contact, acq.provider_holding_subfield_map, acq.provider_note, acq.purchase_order

acq.provider_address
    id | serial | PRIMARY KEY
    valid | boolean | NOT NULL; DEFAULT true
    address_type | text |
    provider | integer | NOT NULL; REFERENCES acq.provider
    street1 | text | NOT NULL
    street2 | text |
    city | text | NOT NULL
    county | text |
    state | text | NOT NULL
    country | text | NOT NULL
    post_code | text | NOT NULL
    fax_phone | text |

acq.provider_contact
    id | serial | PRIMARY KEY
    provider | integer | NOT NULL; REFERENCES acq.provider
    name | text | NOT NULL
    role | text |
    email | text |
    phone | text |

Tables referencing acq.provider_contact_address via Foreign Key Constraints:
    acq.provider_contact_address

acq.provider_contact_address
    id | serial | PRIMARY KEY
    valid | boolean | NOT NULL; DEFAULT true
    address_type | text |
    contact | integer | NOT NULL; REFERENCES acq.provider_contact
    street1 | text | NOT NULL
    street2 | text |
    city | text | NOT NULL
    county | text |
    state | text | NOT NULL
    country | text | NOT NULL
    post_code | text | NOT NULL
    fax_phone | text |

acq.provider_holding_subfield_map
    id | serial | PRIMARY KEY
    provider | integer | UNIQUE#1; NOT NULL; REFERENCES acq.provider
    name | text | UNIQUE#1; NOT NULL
    subfield | text | NOT NULL

acq.provider_note
    id | serial | PRIMARY KEY
    provider | integer | NOT NULL; REFERENCES acq.provider
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()
    value | text | NOT NULL

acq.purchase_order
    id | serial | PRIMARY KEY
    owner | integer | NOT NULL; REFERENCES actor.usr
    creator | integer | NOT NULL; REFERENCES actor.usr
    editor | integer | NOT NULL; REFERENCES actor.usr
    ordering_agency | integer | NOT NULL; REFERENCES actor.org_unit
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    edit_time | timestamp with time zone | NOT NULL; DEFAULT now()
    provider | integer | NOT NULL; REFERENCES acq.provider
    state | text | NOT NULL; DEFAULT 'new'::text
    order_date | timestamp with time zone |
    name | text | NOT NULL
    cancel_reason | integer | REFERENCES acq.cancel_reason
    prepayment_required | boolean | NOT NULL; DEFAULT false

Tables referencing acq.edi_message via Foreign Key Constraints:
    acq.edi_message, acq.invoice_entry, acq.invoice_item, acq.lineitem, acq.po_item, acq.po_note

acq.serial_claim
    id | serial | PRIMARY KEY
    type | integer | NOT NULL; REFERENCES acq.claim_type
    item | bigint | NOT NULL; REFERENCES serial.item

Tables referencing acq.serial_claim_event via Foreign Key Constraints:
    acq.serial_claim_event

acq.serial_claim_event
    id | bigserial | PRIMARY KEY
    type | integer | NOT NULL; REFERENCES acq.claim_event_type
    claim | serial | NOT NULL; REFERENCES acq.serial_claim
    event_date | timestamp with time zone | NOT NULL; DEFAULT now()
    creator | integer | NOT NULL; REFERENCES actor.usr
    note | text |

acq.user_request
    id | serial | PRIMARY KEY
    usr | integer | NOT NULL; REFERENCES actor.usr
    hold | boolean | NOT NULL; DEFAULT true
    pickup_lib | integer | NOT NULL; REFERENCES actor.org_unit
    holdable_formats | text |
    phone_notify | text |
    email_notify | boolean | NOT NULL; DEFAULT true
    lineitem | integer | REFERENCES acq.lineitem
    eg_bib | bigint | REFERENCES biblio.record_entry
    request_date | timestamp with time zone | NOT NULL; DEFAULT now()
    need_before | timestamp with time zone |
    max_fee | text |
    request_type | integer | NOT NULL; REFERENCES acq.user_request_type
    isxn | text |
    title | text |
    volume | text |
    author | text |
    article_title | text |
    article_pages | text |
    publisher | text |
    location | text |
    pubdate | text |
    mentioned | text |
    other_info | text |
    cancel_reason | integer | REFERENCES acq.cancel_reason

acq.user_request_type
    id | serial | PRIMARY KEY
    label | text | UNIQUE; NOT NULL

Tables referencing acq.user_request via Foreign Key Constraints:
    acq.user_request

Schema action

action.aged_circulation
    usr_post_code | text |
    usr_home_ou | integer | NOT NULL
    usr_profile | integer | NOT NULL
    usr_birth_year | integer |
    copy_call_number | integer | NOT NULL
    copy_location | integer | NOT NULL
    copy_owning_lib | integer | NOT NULL
    copy_circ_lib | integer | NOT NULL
    copy_bib_record | bigint | NOT NULL
    id | bigint | PRIMARY KEY
    xact_start | timestamp with time zone | NOT NULL
    xact_finish | timestamp with time zone |
    unrecovered | boolean |
    target_copy | bigint | NOT NULL
    circ_lib | integer | NOT NULL
    circ_staff | integer | NOT NULL
    checkin_staff | integer |
    checkin_lib | integer |
    renewal_remaining | integer | NOT NULL
    due_date | timestamp with time zone |
    stop_fines_time | timestamp with time zone |
    checkin_time | timestamp with time zone |
    create_time | timestamp with time zone | NOT NULL
    duration | interval |
    fine_interval | interval | NOT NULL
    recurring_fine | numeric(6,2) |
    max_fine | numeric(6,2) |
    phone_renewal | boolean | NOT NULL
    desk_renewal | boolean | NOT NULL
    opac_renewal | boolean | NOT NULL
    duration_rule | text | NOT NULL
    recurring_fine_rule | text | NOT NULL
    max_fine_rule | text | NOT NULL
    stop_fines | text |
    workstation | integer |
    checkin_workstation | integer |
    checkin_scan_time | timestamp with time zone |
    parent_circ | bigint |
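The action.circulation table below restricts its stop_fines column to a fixed vocabulary via the circulation_stop_fines_check constraint (CHECKIN, CLAIMSRETURNED, LOST, MAXFINES, RENEW, LONGOVERDUE, CLAIMSNEVERCHECKEDOUT). A minimal sketch of that behavior, using a reduced single-column SQLite stand-in for the real PostgreSQL table (illustration only):

```python
import sqlite3

# Illustration only: a reduced stand-in for action.circulation, keeping just
# the stop_fines column and its CHECK constraint from the schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE circulation (
        id         INTEGER PRIMARY KEY,
        stop_fines TEXT,
        CHECK (stop_fines IN ('CHECKIN', 'CLAIMSRETURNED', 'LOST', 'MAXFINES',
                              'RENEW', 'LONGOVERDUE', 'CLAIMSNEVERCHECKEDOUT'))
    )
""")

# Allowed: a value from the enumerated list.
conn.execute("INSERT INTO circulation (stop_fines) VALUES ('LOST')")

# Also allowed: NULL, since a CHECK that evaluates to NULL does not fail
# (the real column is nullable and is NULL for an open circulation).
conn.execute("INSERT INTO circulation (stop_fines) VALUES (NULL)")

# Rejected: a value outside the list.
try:
    conn.execute("INSERT INTO circulation (stop_fines) VALUES ('OVERDUE')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```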
action.all_circulation
    id | bigint |
    usr_post_code | text |
    usr_home_ou | integer |
    usr_profile | integer |
    usr_birth_year | integer |
    copy_call_number | bigint |
    copy_location | integer |
    copy_owning_lib | integer |
    copy_circ_lib | integer |
    copy_bib_record | bigint |
    xact_start | timestamp with time zone |
    xact_finish | timestamp with time zone |
    target_copy | bigint |
    circ_lib | integer |
    circ_staff | integer |
    checkin_staff | integer |
    checkin_lib | integer |
    renewal_remaining | integer |
    due_date | timestamp with time zone |
    stop_fines_time | timestamp with time zone |
    checkin_time | timestamp with time zone |
    create_time | timestamp with time zone |
    duration | interval |
    fine_interval | interval |
    recurring_fine | numeric(6,2) |
    max_fine | numeric(6,2) |
    phone_renewal | boolean |
    desk_renewal | boolean |
    opac_renewal | boolean |
    duration_rule | text |
    recurring_fine_rule | text |
    max_fine_rule | text |
    stop_fines | text |
    workstation | integer |
    checkin_workstation | integer |
    checkin_scan_time | timestamp with time zone |
    parent_circ | bigint |

action.billable_circulations
    id | bigint |
    usr | integer |
    xact_start | timestamp with time zone |
    xact_finish | timestamp with time zone |
    unrecovered | boolean |
    target_copy | bigint |
    circ_lib | integer |
    circ_staff | integer |
    checkin_staff | integer |
    checkin_lib | integer |
    renewal_remaining | integer |
    due_date | timestamp with time zone |
    stop_fines_time | timestamp with time zone |
    checkin_time | timestamp with time zone |
    create_time | timestamp with time zone |
    duration | interval |
    fine_interval | interval |
    recurring_fine | numeric(6,2) |
    max_fine | numeric(6,2) |
    phone_renewal | boolean |
    desk_renewal | boolean |
    opac_renewal | boolean |
    duration_rule | text |
    recurring_fine_rule | text |
    max_fine_rule | text |
    stop_fines | text |
    workstation | integer |
    checkin_workstation | integer |
    checkin_scan_time | timestamp with time zone |
    parent_circ | bigint |

action.circulation
    id | bigint | PRIMARY KEY; DEFAULT nextval('money.billable_xact_id_seq'::regclass)
    usr | integer | NOT NULL; REFERENCES actor.usr
    xact_start | timestamp with time zone | NOT NULL; DEFAULT now()
    xact_finish | timestamp with time zone |
    unrecovered | boolean |
    target_copy | bigint | NOT NULL
    circ_lib | integer | NOT NULL; REFERENCES actor.org_unit
    circ_staff | integer | NOT NULL
    checkin_staff | integer |
    checkin_lib | integer |
    renewal_remaining | integer | NOT NULL
    due_date | timestamp with time zone |
    stop_fines_time | timestamp with time zone |
    checkin_time | timestamp with time zone |
    create_time | timestamp with time zone | NOT NULL; DEFAULT now()
    duration | interval |
    fine_interval | interval | NOT NULL; DEFAULT '1 day'::interval
    recurring_fine | numeric(6,2) |
    max_fine | numeric(6,2) |
    phone_renewal | boolean | NOT NULL; DEFAULT false
    desk_renewal | boolean | NOT NULL; DEFAULT false
    opac_renewal | boolean | NOT NULL; DEFAULT false
    duration_rule | text | NOT NULL
    recurring_fine_rule | text | NOT NULL
    max_fine_rule | text | NOT NULL
    stop_fines | text |
    workstation | integer | REFERENCES actor.workstation
    checkin_workstation | integer | REFERENCES actor.workstation
    checkin_scan_time | timestamp with time zone |
    parent_circ | bigint | REFERENCES action.circulation
    Constraints: circulation_stop_fines_check CHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text, 'CLAIMSNEVERCHECKEDOUT'::text])))

Tables referencing action.circulation via Foreign Key Constraints:
    action.circulation

action.fieldset
    id | serial | PRIMARY KEY
    owner | integer | NOT NULL; REFERENCES actor.usr
    owning_lib | integer | UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    status | text | NOT NULL
    creation_time | timestamp with time zone | NOT NULL; DEFAULT now()
    scheduled_time | timestamp with time zone |
    applied_time | timestamp with time zone |
    classname | text | NOT NULL
    name | text | UNIQUE#1; NOT NULL
    stored_query | integer | REFERENCES query.stored_query
    pkey_value | text |
    Constraints:
        fieldset_one_or_the_other CHECK ((((stored_query IS NOT NULL) AND (pkey_value IS NULL)) OR ((pkey_value IS NOT NULL) AND (stored_query IS NULL))))
        valid_status CHECK ((status = ANY (ARRAY['PENDING'::text, 'APPLIED'::text, 'ERROR'::text])))

Tables referencing action.fieldset_col_val via Foreign Key Constraints:
    action.fieldset_col_val

action.fieldset_col_val
    id | serial | PRIMARY KEY
    fieldset | integer | UNIQUE#1; NOT NULL; REFERENCES action.fieldset
    col | text | UNIQUE#1; NOT NULL
    val | text |

action.hold_copy_map
    id | bigserial | PRIMARY KEY
    hold | integer | UNIQUE#1; NOT NULL; REFERENCES action.hold_request
    target_copy | bigint | UNIQUE#1; NOT NULL

action.hold_notification
    id | serial | PRIMARY KEY
    hold | integer | NOT NULL; REFERENCES action.hold_request
    notify_staff | integer | REFERENCES actor.usr
    notify_time | timestamp with time zone | NOT NULL; DEFAULT now()
    method | text | NOT NULL
    note | text |

action.hold_request
    id | serial | PRIMARY KEY
    request_time | timestamp with time zone | NOT NULL; DEFAULT now()
    capture_time | timestamp with time zone |
    fulfillment_time | timestamp with time zone |
    checkin_time | timestamp with time zone |
    return_time | timestamp with time zone |
    prev_check_time | timestamp with time zone |
    expire_time | timestamp with time zone |
    cancel_time | timestamp with time zone |
    cancel_cause | integer | REFERENCES action.hold_request_cancel_cause
    cancel_note | text |
    target | bigint | NOT NULL
    current_copy | bigint |
    fulfillment_staff | integer | REFERENCES actor.usr
    fulfillment_lib | integer | REFERENCES actor.org_unit
    request_lib | integer | NOT NULL; REFERENCES actor.org_unit
    requestor | integer | NOT NULL; REFERENCES actor.usr
    usr | integer | NOT NULL; REFERENCES actor.usr
    selection_ou | integer | NOT NULL
    selection_depth | integer | NOT NULL
    pickup_lib | integer | NOT NULL; REFERENCES actor.org_unit
    hold_type | text | NOT NULL
    holdable_formats | text |
    phone_notify | text |
    email_notify | boolean | NOT NULL; DEFAULT true
    frozen | boolean | NOT NULL; DEFAULT false
    thaw_date | timestamp with time zone |
    shelf_time | timestamp with time zone |
    cut_in_line | boolean |
    mint_condition | boolean | NOT NULL; DEFAULT true
    shelf_expire_time | timestamp with time zone |

Tables referencing action.hold_copy_map via Foreign Key Constraints:
    action.hold_copy_map, action.hold_notification, action.hold_request_note, action.hold_transit_copy

action.hold_request_cancel_cause
    id | serial | PRIMARY KEY
    label | text | UNIQUE

Tables referencing action.hold_request via Foreign Key Constraints:
    action.hold_request

action.hold_request_note
    id | bigserial | PRIMARY KEY
    hold | bigint | NOT NULL; REFERENCES action.hold_request
    title | text | NOT NULL
    body | text | NOT NULL
    slip | boolean | NOT NULL; DEFAULT false
    pub | boolean | NOT NULL; DEFAULT false
    staff | boolean | NOT NULL; DEFAULT false

action.hold_transit_copy
    id | integer | PRIMARY KEY; DEFAULT nextval('action.transit_copy_id_seq'::regclass)
    source_send_time | timestamp with time zone |
    dest_recv_time | timestamp with time zone |
    target_copy | bigint | NOT NULL
    source | integer | NOT NULL
    dest | integer | NOT NULL
    prev_hop | integer |
    copy_status | integer | NOT NULL
    persistant_transfer | boolean | NOT NULL; DEFAULT false
    prev_dest | integer |
    hold | integer | REFERENCES action.hold_request

action.in_house_use
    id | serial | PRIMARY KEY
    item | bigint | NOT NULL
    staff | integer | NOT NULL; REFERENCES actor.usr
    org_unit | integer | NOT NULL; REFERENCES actor.org_unit
    use_time | timestamp with time zone | NOT NULL; DEFAULT now()

action.non_cat_in_house_use
    Field | Data
TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + item_typebigint + + + + + + NOT NULL; + + + + + config.non_cataloged_type + + + staffinteger + + + + + + NOT NULL; + + + + + actor.usr + + + org_unitinteger + + + + + + NOT NULL; + + + + + actor.org_unit + + + use_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + non_cataloged_circulationnon_cataloged_circulationFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + patroninteger + + + + + + NOT NULL; + + + + + actor.usr + + + staffinteger + + + + + + NOT NULL; + + + + + actor.usr + + + circ_libinteger + + + + + + NOT NULL; + + + + + actor.org_unit + + + item_typeinteger + + + + + + NOT NULL; + + + + + config.non_cataloged_type + + + circ_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + open_circulationopen_circulationFieldData TypeConstraints and Referencesidbigint + + + + + usrinteger + + + + + xact_starttimestamp with time zone + + + + + xact_finishtimestamp with time zone + + + + + unrecoveredboolean + + + + + target_copybigint + + + + + circ_libinteger + + + + + circ_staffinteger + + + + + checkin_staffinteger + + + + + checkin_libinteger + + + + + renewal_remaininginteger + + + + + due_datetimestamp with time zone + + + + + stop_fines_timetimestamp with time zone + + + + + checkin_timetimestamp with time zone + + + + + create_timetimestamp with time zone + + + + + durationinterval + + + + + fine_intervalinterval + + + + + recurring_finenumeric(6,2) + + + + + max_finenumeric(6,2) + + + + + phone_renewalboolean + + + + + desk_renewalboolean + + + + + opac_renewalboolean + + + + + duration_ruletext + + + + + recurring_fine_ruletext + + + + + max_fine_ruletext + + + + + stop_finestext + + + + + workstationinteger + + + + + checkin_workstationinteger + + + + + checkin_scan_timetimestamp with time zone + + + + + parent_circbigint + + + + + + + + + + 
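The fieldset_one_or_the_other constraint above is a common "exactly one of two columns" CHECK pattern. A minimal sketch, using SQLite from Python as a stand-in for PostgreSQL and trimming the table to just the relevant columns:

```python
import sqlite3

# Illustrative only: action.fieldset reduced to stored_query/pkey_value,
# with the fieldset_one_or_the_other CHECK requiring exactly one be set.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fieldset (
        id           INTEGER PRIMARY KEY,
        stored_query INTEGER,
        pkey_value   TEXT,
        CHECK ((stored_query IS NOT NULL AND pkey_value IS NULL)
            OR (pkey_value IS NOT NULL AND stored_query IS NULL))
    )
""")

conn.execute("INSERT INTO fieldset (stored_query) VALUES (42)")  # accepted
conn.execute("INSERT INTO fieldset (pkey_value) VALUES ('7')")   # accepted

rejected = False
try:
    # Setting both columns violates the CHECK constraint.
    conn.execute(
        "INSERT INTO fieldset (stored_query, pkey_value) VALUES (1, 'x')")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```

The same CHECK expression works verbatim in PostgreSQL; only the surrounding column types differ from the real table.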
reservation_transit_copy
  id (integer): PRIMARY KEY; DEFAULT nextval('action.transit_copy_id_seq'::regclass)
  source_send_time (timestamp with time zone)
  dest_recv_time (timestamp with time zone)
  target_copy (bigint): NOT NULL; references booking.resource
  source (integer): NOT NULL
  dest (integer): NOT NULL
  prev_hop (integer)
  copy_status (integer): NOT NULL
  persistant_transfer (boolean): NOT NULL; DEFAULT false
  prev_dest (integer)
  reservation (integer): references booking.reservation

survey
  id (serial): PRIMARY KEY
  owner (integer): NOT NULL; references actor.org_unit
  start_date (timestamp with time zone): NOT NULL; DEFAULT now()
  end_date (timestamp with time zone): NOT NULL; DEFAULT (now() + '10 years'::interval)
  usr_summary (boolean): NOT NULL; DEFAULT false
  opac (boolean): NOT NULL; DEFAULT false
  poll (boolean): NOT NULL; DEFAULT false
  required (boolean): NOT NULL; DEFAULT false
  name (text): NOT NULL
  description (text): NOT NULL

  Tables referencing action.survey via Foreign Key Constraints:
    action.survey_question, action.survey_response

survey_answer
  id (serial): PRIMARY KEY
  question (integer): NOT NULL; references action.survey_question
  answer (text): NOT NULL

  Tables referencing action.survey_answer via Foreign Key Constraints:
    action.survey_response

survey_question
  id (serial): PRIMARY KEY
  survey (integer): NOT NULL; references action.survey
  question (text): NOT NULL

  Tables referencing action.survey_question via Foreign Key Constraints:
    action.survey_answer, action.survey_response

survey_response
  id (bigserial): PRIMARY KEY
  response_group_id (integer)
  usr (integer)
  survey (integer): NOT NULL; references action.survey
  question (integer): NOT NULL; references action.survey_question
  answer (integer): NOT NULL; references action.survey_answer
  answer_date (timestamp with time zone)
  effective_date (timestamp with time zone): NOT NULL; DEFAULT now()

transit_copy
  id (serial): PRIMARY KEY
  source_send_time (timestamp with time zone)
  dest_recv_time (timestamp with time zone)
  target_copy (bigint): NOT NULL
  source (integer): NOT NULL; references actor.org_unit
  dest (integer): NOT NULL; references actor.org_unit
  prev_hop (integer): references action.transit_copy
  copy_status (integer): NOT NULL; references config.copy_status
  persistant_transfer (boolean): NOT NULL; DEFAULT false
  prev_dest (integer): references actor.org_unit

  Tables referencing action.transit_copy via Foreign Key Constraints:
    action.transit_copy

unfulfilled_hold_innermost_loop
  hold (integer), circ_lib (integer), count (bigint)

unfulfilled_hold_list
  id (bigserial): PRIMARY KEY
  current_copy (bigint): NOT NULL
  hold (integer): NOT NULL
  circ_lib (integer): NOT NULL
  fail_time (timestamp with time zone): NOT NULL; DEFAULT now()
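The survey tables form a foreign-key chain: a response points at its survey, its question, and the chosen answer. A hypothetical miniature (SQLite from Python in place of PostgreSQL; column names follow the listings above, sample data invented) showing how one poll result is assembled by joining the chain:

```python
import sqlite3

# Reduced versions of action.survey, survey_question, survey_answer,
# and survey_response, linked by the same foreign keys as the schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE survey (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE survey_question (
        id INTEGER PRIMARY KEY,
        survey INTEGER NOT NULL REFERENCES survey(id),
        question TEXT NOT NULL);
    CREATE TABLE survey_answer (
        id INTEGER PRIMARY KEY,
        question INTEGER NOT NULL REFERENCES survey_question(id),
        answer TEXT NOT NULL);
    CREATE TABLE survey_response (
        id INTEGER PRIMARY KEY,
        survey INTEGER NOT NULL REFERENCES survey(id),
        question INTEGER NOT NULL REFERENCES survey_question(id),
        answer INTEGER NOT NULL REFERENCES survey_answer(id));

    INSERT INTO survey VALUES (1, 'Branch feedback');
    INSERT INTO survey_question VALUES (1, 1, 'How did you hear about us?');
    INSERT INTO survey_answer VALUES (1, 1, 'Word of mouth'), (2, 1, 'Web');
    INSERT INTO survey_response VALUES (1, 1, 1, 2);
""")

# Resolve a response to human-readable question and answer text.
row = conn.execute("""
    SELECT q.question, a.answer
      FROM survey_response r
      JOIN survey_question q ON q.id = r.question
      JOIN survey_answer   a ON a.id = r.answer
""").fetchone()
print(row)
```

The real tables also carry usr, owner, and timestamp columns, omitted here for brevity.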
unfulfilled_hold_loops
  hold (integer), circ_lib (integer), count (bigint)

unfulfilled_hold_max_loop
  hold (integer), max (bigint)

unfulfilled_hold_min_loop
  hold (integer), min (bigint)

Schema action_trigger

cleanup
  module (text): PRIMARY KEY
  description (text)

  Tables referencing action_trigger.cleanup via Foreign Key Constraints:
    action_trigger.event_definition

collector
  module (text): PRIMARY KEY
  description (text)

  Tables referencing action_trigger.collector via Foreign Key Constraints:
    action_trigger.environment

environment
  id (serial): PRIMARY KEY
  event_def (integer): UNIQUE#1; NOT NULL; references action_trigger.event_definition
  path (text)
  collector (text): references action_trigger.collector
  label (text): UNIQUE#1

  Constraints on environment:
    environment_label_check: CHECK (label <> ALL (ARRAY['result'::text, 'target'::text, 'event'::text]))

event
  id (bigserial): PRIMARY KEY
  target (bigint): NOT NULL
  event_def (integer): references action_trigger.event_definition
  add_time (timestamp with time zone): NOT NULL; DEFAULT now()
  run_time (timestamp with time zone): NOT NULL
  start_time (timestamp with time zone)
  update_time (timestamp with time zone)
  complete_time (timestamp with time zone)
  update_process (integer)
  state (text): NOT NULL; DEFAULT 'pending'::text
  user_data (text)
  template_output (bigint): references action_trigger.event_output
  error_output (bigint): references action_trigger.event_output
  async_output (bigint): references action_trigger.event_output

  Constraints on event:
    event_state_check: CHECK (state = ANY (ARRAY['pending'::text, 'invalid'::text, 'found'::text, 'collecting'::text, 'collected'::text, 'validating'::text, 'valid'::text, 'reacting'::text, 'reacted'::text, 'cleaning'::text, 'complete'::text, 'error'::text]))
    event_user_data_check: CHECK ((user_data IS NULL) OR is_json(user_data))

event_definition
  id (serial): PRIMARY KEY
  active (boolean): NOT NULL; DEFAULT true
  owner (integer): UNIQUE#2; UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#2; NOT NULL
  hook (text): UNIQUE#1; NOT NULL; references action_trigger.hook
  validator (text): UNIQUE#1; NOT NULL; references action_trigger.validator
  reactor (text): UNIQUE#1; NOT NULL; references action_trigger.reactor
  cleanup_success (text): references action_trigger.cleanup
  cleanup_failure (text): references action_trigger.cleanup
  delay (interval): UNIQUE#1; NOT NULL; DEFAULT '00:05:00'::interval
  max_delay (interval)
  usr_field (text)
  opt_in_setting (text): references config.usr_setting_type
  delay_field (text): UNIQUE#1
  group_field (text)
  template (text)
  granularity (text)

  Tables referencing action_trigger.event_definition via Foreign Key Constraints:
    action_trigger.environment, action_trigger.event, action_trigger.event_params
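In these listings, UNIQUE#1 and UNIQUE#2 mark columns that belong to the same numbered composite unique constraint, not columns that are individually unique. For event_definition above, owner and name are both tagged UNIQUE#2: one org unit cannot own two event definitions with the same name, but different org units can reuse a name. A minimal sketch (SQLite from Python in place of PostgreSQL, table trimmed to those two columns):

```python
import sqlite3

# Illustrative reduction of action_trigger.event_definition:
# (owner, name) form one composite unique constraint (the UNIQUE#2 pair).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_definition (
        id    INTEGER PRIMARY KEY,
        owner INTEGER NOT NULL,
        name  TEXT NOT NULL,
        UNIQUE (owner, name)
    )
""")

conn.execute(
    "INSERT INTO event_definition (owner, name) VALUES (1, 'Overdue notice')")
# Same name under a different owner is allowed.
conn.execute(
    "INSERT INTO event_definition (owner, name) VALUES (2, 'Overdue notice')")

duplicate_rejected = False
try:
    # Repeating the (owner, name) pair violates the composite constraint.
    conn.execute(
        "INSERT INTO event_definition (owner, name) VALUES (1, 'Overdue notice')")
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)
```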
event_output
  id (bigserial): PRIMARY KEY
  create_time (timestamp with time zone): NOT NULL; DEFAULT now()
  is_error (boolean): NOT NULL; DEFAULT false
  data (text): NOT NULL

  Tables referencing action_trigger.event_output via Foreign Key Constraints:
    action_trigger.event

event_params
  id (bigserial): PRIMARY KEY
  event_def (integer): UNIQUE#1; NOT NULL; references action_trigger.event_definition
  param (text): UNIQUE#1; NOT NULL
  value (text): NOT NULL

hook
  key (text): PRIMARY KEY
  core_type (text): NOT NULL
  description (text)
  passive (boolean): NOT NULL; DEFAULT false

  Tables referencing action_trigger.hook via Foreign Key Constraints:
    action_trigger.event_definition

reactor
  module (text): PRIMARY KEY
  description (text)

  Tables referencing action_trigger.reactor via Foreign Key Constraints:
    action_trigger.event_definition

validator
  module (text): PRIMARY KEY
  description (text)

  Tables referencing action_trigger.validator via Foreign Key Constraints:
    action_trigger.event_definition

Schema actor

card
  id (serial): PRIMARY KEY
  usr (integer): NOT NULL; references actor.usr
  barcode (text): UNIQUE; NOT NULL
  active (boolean): NOT NULL; DEFAULT true

hours_of_operation
  id (integer): PRIMARY KEY; references actor.org_unit
  dow_0_open through dow_6_open (time without time zone): NOT NULL; DEFAULT '09:00:00'::time without time zone
  dow_0_close through dow_6_close (time without time zone): NOT NULL; DEFAULT '17:00:00'::time without time zone

org_address
  id (serial): PRIMARY KEY
  valid (boolean): NOT NULL; DEFAULT true
  address_type (text): NOT NULL; DEFAULT 'MAILING'::text
  org_unit (integer): NOT NULL; references actor.org_unit
  street1 (text): NOT NULL
  street2 (text)
  city (text): NOT NULL
  county (text)
  state (text): NOT NULL
  country (text): NOT NULL
  post_code (text): NOT NULL
  san (text)

  Tables referencing actor.org_address via Foreign Key Constraints:
    actor.org_unit

org_lasso
  id (serial): PRIMARY KEY
  name (text): UNIQUE

  Tables referencing actor.org_lasso via Foreign Key Constraints:
    actor.org_lasso_map

org_lasso_map
  id (serial): PRIMARY KEY
  lasso (integer): NOT NULL; references actor.org_lasso
  org_unit (integer): NOT NULL; references actor.org_unit

org_unit
  id (serial): PRIMARY KEY
  parent_ou (integer): references actor.org_unit
  ou_type (integer): NOT NULL; references actor.org_unit_type
  ill_address (integer): references actor.org_address
  holds_address (integer): references actor.org_address
  mailing_address (integer): references actor.org_address
  billing_address (integer): references actor.org_address
  shortname (text): UNIQUE; NOT NULL
  name (text): UNIQUE; NOT NULL
  email (text)
  phone (text)
  opac_visible (boolean): NOT NULL; DEFAULT true
  fiscal_calendar (integer): NOT NULL; DEFAULT 1; references acq.fiscal_calendar

  Tables referencing actor.org_unit via Foreign Key Constraints:
    acq.cancel_reason, acq.claim_event_type, acq.claim_policy, acq.claim_type, acq.distribution_formula, acq.distribution_formula_entry, acq.fund, acq.fund_allocation_percent, acq.fund_tag, acq.funding_source, acq.invoice, acq.lineitem_alert_text, acq.lineitem_detail, acq.picklist, acq.provider, acq.purchase_order, acq.user_request, action.circulation, action.fieldset, action.hold_request, action.in_house_use, action.non_cat_in_house_use, action.non_cataloged_circulation, action.survey, action.transit_copy, action_trigger.event_definition, actor.hours_of_operation, actor.org_address, actor.org_lasso_map, actor.org_unit, actor.org_unit_closed, actor.org_unit_setting, actor.stat_cat, actor.stat_cat_entry, actor.usr, actor.usr_org_unit_opt_in, actor.usr_standing_penalty, actor.workstation, asset.call_number, asset.copy, asset.copy_location, asset.copy_location_order, asset.copy_template, asset.stat_cat, asset.stat_cat_entry, biblio.record_entry, booking.reservation, booking.resource, booking.resource_attr, booking.resource_attr_value, booking.resource_type, config.billing_type, config.circ_matrix_matchpoint, config.hold_matrix_matchpoint, config.idl_field_doc, config.remote_account, money.collections_tracker, permission.grp_penalty_threshold, permission.usr_work_ou_map, reporter.output_folder, reporter.report_folder, reporter.template_folder, serial.distribution, serial.record_entry, serial.subscription, vandelay.import_bib_trash_fields, vandelay.import_item_attr_definition, vandelay.merge_profile

org_unit_closed
  id (serial): PRIMARY KEY
  org_unit (integer): NOT NULL; references actor.org_unit
  close_start (timestamp with time zone): NOT NULL
  close_end (timestamp with time zone): NOT NULL
  reason (text)

org_unit_proximity
  id (bigserial): PRIMARY KEY
  from_org (integer)
  to_org (integer)
  prox (integer)
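actor.org_unit forms the organizational tree through its parent_ou self-reference: a branch points at its system, which points at the consortium root (parent_ou NULL). Walking that hierarchy is a recursive query. A sketch with invented org names (SQLite from Python in place of PostgreSQL; the same WITH RECURSIVE syntax works in both):

```python
import sqlite3

# Reduced actor.org_unit: id, self-referencing parent_ou, and shortname.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE org_unit (
        id INTEGER PRIMARY KEY,
        parent_ou INTEGER REFERENCES org_unit(id),
        shortname TEXT UNIQUE NOT NULL);
    INSERT INTO org_unit VALUES (1, NULL, 'CONS');  -- consortium root
    INSERT INTO org_unit VALUES (2, 1,    'SYS1');  -- library system
    INSERT INTO org_unit VALUES (3, 2,    'BR1');   -- branch
""")

# Recursive CTE: start at the branch, follow parent_ou to the root.
ancestors = conn.execute("""
    WITH RECURSIVE up(id, parent_ou, shortname) AS (
        SELECT id, parent_ou, shortname FROM org_unit WHERE shortname = 'BR1'
        UNION ALL
        SELECT o.id, o.parent_ou, o.shortname
          FROM org_unit o JOIN up ON o.id = up.parent_ou
    )
    SELECT shortname FROM up
""").fetchall()
print([r[0] for r in ancestors])  # ['BR1', 'SYS1', 'CONS']
```

Evergreen itself ships helper functions for this traversal; the CTE here just illustrates what the parent_ou link encodes.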
org_unit_setting
  id (bigserial): PRIMARY KEY
  org_unit (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#1; NOT NULL; references config.org_unit_setting_type
  value (text): NOT NULL

org_unit_type
  id (serial): PRIMARY KEY
  name (text): NOT NULL
  opac_label (text): NOT NULL
  depth (integer): NOT NULL
  parent (integer): references actor.org_unit_type
  can_have_vols (boolean): NOT NULL; DEFAULT true
  can_have_users (boolean): NOT NULL; DEFAULT true

  Tables referencing actor.org_unit_type via Foreign Key Constraints:
    actor.org_unit, actor.org_unit_type, config.hold_matrix_matchpoint

stat_cat
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  name (text): UNIQUE#1; NOT NULL
  opac_visible (boolean): NOT NULL; DEFAULT false
  usr_summary (boolean): NOT NULL; DEFAULT false

  Tables referencing actor.stat_cat via Foreign Key Constraints:
    actor.stat_cat_entry, actor.stat_cat_entry_usr_map

stat_cat_entry
  id (serial): PRIMARY KEY
  stat_cat (integer): UNIQUE#1; NOT NULL; references actor.stat_cat
  owner (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  value (text): UNIQUE#1; NOT NULL

stat_cat_entry_usr_map
  id (bigserial): PRIMARY KEY
  stat_cat_entry (text): NOT NULL
  stat_cat (integer): UNIQUE#1; NOT NULL; references actor.stat_cat
  target_usr (integer): UNIQUE#1; NOT NULL; references actor.usr

usr
  id (serial): PRIMARY KEY
  card (integer): UNIQUE
  profile (integer): NOT NULL; references permission.grp_tree
  usrname (text): UNIQUE; NOT NULL
  email (text)
  passwd (text): NOT NULL
  standing (integer): NOT NULL; DEFAULT 1; references config.standing
  ident_type (integer): NOT NULL; references config.identification_type
  ident_value (text)
  ident_type2 (integer): references config.identification_type
  ident_value2 (text)
  net_access_level (integer): NOT NULL; DEFAULT 1; references config.net_access_level
  photo_url (text)
  prefix (text)
  first_given_name (text): NOT NULL
  second_given_name (text)
  family_name (text): NOT NULL
  suffix (text)
  alias (text)
  day_phone (text)
  evening_phone (text)
  other_phone (text)
  mailing_address (integer): references actor.usr_address
  billing_address (integer): references actor.usr_address
  home_ou (integer): NOT NULL; references actor.org_unit
  dob (timestamp with time zone)
  active (boolean): NOT NULL; DEFAULT true
  master_account (boolean): NOT NULL; DEFAULT false
  super_user (boolean): NOT NULL; DEFAULT false
  barred (boolean): NOT NULL; DEFAULT false
  deleted (boolean): NOT NULL; DEFAULT false
  juvenile (boolean): NOT NULL; DEFAULT false
  usrgroup (serial): NOT NULL
  claims_returned_count (integer): NOT NULL
  credit_forward_balance (numeric(6,2)): NOT NULL; DEFAULT 0.00
  last_xact_id (text): NOT NULL; DEFAULT 'none'::text
  alert_message (text)
  create_date (timestamp with time zone): NOT NULL; DEFAULT now()
  expire_date (timestamp with time zone): NOT NULL; DEFAULT (now() + '3 years'::interval)
  claims_never_checked_out_count (integer): NOT NULL

  Tables referencing actor.usr via Foreign Key Constraints:
    acq.claim_event, acq.distribution_formula_application, acq.fund_allocation, acq.fund_allocation_percent, acq.fund_transfer, acq.lineitem, acq.lineitem_note, acq.lineitem_usr_attr_definition, acq.picklist, acq.po_note, acq.provider_note, acq.purchase_order, acq.serial_claim_event, acq.user_request, action.circulation, action.fieldset, action.hold_notification, action.hold_request, action.in_house_use, action.non_cat_in_house_use, action.non_cataloged_circulation, actor.card, actor.stat_cat_entry_usr_map, actor.usr_address, actor.usr_note, actor.usr_org_unit_opt_in, actor.usr_password_reset, actor.usr_saved_search, actor.usr_setting, actor.usr_standing_penalty, asset.call_number, asset.call_number_note, asset.copy, asset.copy_note, asset.copy_template, biblio.record_entry, biblio.record_note, booking.reservation, container.biblio_record_entry_bucket, container.call_number_bucket, container.copy_bucket, container.user_bucket, container.user_bucket_item, money.billable_xact, money.collections_tracker, permission.usr_grp_map, permission.usr_object_perm_map, permission.usr_perm_map, permission.usr_work_ou_map, reporter.output_folder, reporter.report, reporter.report_folder, reporter.schedule, reporter.template, reporter.template_folder, serial.distribution_note, serial.issuance, serial.item, serial.item_note, serial.routing_list_user, serial.subscription_note, serial.unit, vandelay.queue

usr_address
  id (serial): PRIMARY KEY
  valid (boolean): NOT NULL; DEFAULT true
  within_city_limits (boolean): NOT NULL; DEFAULT true
  address_type (text): NOT NULL; DEFAULT 'MAILING'::text
  usr (integer): NOT NULL; references actor.usr
  street1 (text): NOT NULL
  street2 (text)
  city (text): NOT NULL
  county (text)
  state (text): NOT NULL
  country (text): NOT NULL
  post_code (text): NOT NULL
  pending (boolean): NOT NULL; DEFAULT false
  replaces (integer): references actor.usr_address

  Tables referencing actor.usr_address via Foreign Key Constraints:
    actor.usr, actor.usr_address

usr_note
  id (bigserial): PRIMARY KEY
  usr (bigint): NOT NULL; references actor.usr
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

usr_org_unit_opt_in
  id (serial): PRIMARY KEY
  org_unit (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  usr (integer): UNIQUE#1; NOT NULL; references actor.usr
  staff (integer): NOT NULL; references actor.usr
  opt_in_ts (timestamp with time zone): NOT NULL; DEFAULT now()
  opt_in_ws (integer): NOT NULL; references actor.workstation

usr_password_reset
  id (serial): PRIMARY KEY
  uuid (text): NOT NULL
  usr (bigint): NOT NULL; references actor.usr
  request_time (timestamp with time zone): NOT NULL; DEFAULT now()
  has_been_reset (boolean): NOT NULL; DEFAULT false

usr_saved_search
  id (serial): PRIMARY KEY
  owner (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL
  create_date (timestamp with time zone): NOT NULL; DEFAULT now()
  query_text (text): NOT NULL
  query_type (text): NOT NULL; DEFAULT 'URL'::text
  target (text): NOT NULL

  Constraints on usr_saved_search:
    valid_query_text: CHECK (query_type = 'URL'::text)
    valid_target: CHECK (target = ANY (ARRAY['record'::text, 'metarecord'::text, 'callnumber'::text]))

usr_setting
  id (bigserial): PRIMARY KEY
  usr (integer): UNIQUE#1; NOT NULL; references actor.usr
  name (text): UNIQUE#1; NOT NULL; references config.usr_setting_type
  value (text): NOT NULL

usr_standing_penalty
  id (serial): PRIMARY KEY
  org_unit (integer): NOT NULL; references actor.org_unit
  usr (integer): NOT NULL; references actor.usr
  standing_penalty (integer): NOT NULL; references config.standing_penalty
  staff (integer): references actor.usr
  set_date (timestamp with time zone): DEFAULT now()
  stop_date (timestamp with time zone)
  note (text)

workstation
  id (serial): PRIMARY KEY
  name (text): UNIQUE; NOT NULL
  owning_lib (integer): NOT NULL; references actor.org_unit

  Tables referencing actor.workstation via Foreign Key Constraints:
    action.circulation, actor.usr_org_unit_opt_in, money.bnm_desk_payment

Schema asset

call_number
  id (bigserial): PRIMARY KEY
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  editor (bigint): NOT NULL; references actor.usr
  edit_date (timestamp with time zone): DEFAULT now()
  record (bigint): NOT NULL; references biblio.record_entry
  owning_lib (integer): NOT NULL; references actor.org_unit
  label (text): NOT NULL
  deleted (boolean): NOT NULL; DEFAULT false
  label_class (bigint): NOT NULL; DEFAULT 1; references asset.call_number_class
  label_sortkey (text)

  Tables referencing asset.call_number via Foreign Key Constraints:
    asset.call_number_note, asset.copy, asset.uri_call_number_map, container.call_number_bucket_item, serial.distribution, serial.unit

call_number_class
  id (bigserial): PRIMARY KEY
  name (text): NOT NULL
  normalizer (text): NOT NULL; DEFAULT 'asset.normalize_generic'::text
  field (text): NOT NULL; DEFAULT '050ab,055ab,060ab,070ab,080ab,082ab,086ab,088ab,090,092,096,098,099'::text

  Tables referencing asset.call_number_class via Foreign Key Constraints:
    asset.call_number

call_number_note
  id (bigserial): PRIMARY KEY
  call_number (bigint): NOT NULL; references asset.call_number
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

copy
  id (bigserial): PRIMARY KEY
  circ_lib (integer): NOT NULL; references actor.org_unit
  creator (bigint): NOT NULL; references actor.usr
  call_number (bigint): NOT NULL; references asset.call_number
  editor (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  edit_date (timestamp with time zone): DEFAULT now()
  copy_number (integer)
  status (integer): NOT NULL; references config.copy_status
  location (integer): NOT NULL; DEFAULT 1; references asset.copy_location
  loan_duration (integer): NOT NULL
  fine_level (integer): NOT NULL
  age_protect (integer)
  circulate (boolean): NOT NULL; DEFAULT true
  deposit (boolean): NOT NULL; DEFAULT false
  ref (boolean): NOT NULL; DEFAULT false
  holdable (boolean): NOT NULL; DEFAULT true
  deposit_amount (numeric(6,2)): NOT NULL; DEFAULT 0.00
  price (numeric(8,2))
  barcode (text): NOT NULL
  circ_modifier (text): references config.circ_modifier
  circ_as_type (text)
  dummy_title (text)
  dummy_author (text)
  alert_message (text)
  opac_visible (boolean): NOT NULL; DEFAULT true
  deleted (boolean): NOT NULL; DEFAULT false
  floating (boolean): NOT NULL; DEFAULT false
  dummy_isbn (text)
  status_changed_time (timestamp with time zone)
  mint_condition (boolean): NOT NULL; DEFAULT true
  cost (numeric(8,2))

  Constraints on copy:
    copy_fine_level_check: CHECK (fine_level = ANY (ARRAY[1, 2, 3]))
    copy_loan_duration_check: CHECK (loan_duration = ANY (ARRAY[1, 2, 3]))

  Tables referencing asset.copy via Foreign Key Constraints:
    asset.copy_note, container.copy_bucket_item

copy_location
  id (serial): PRIMARY KEY
  name (text): UNIQUE#1; NOT NULL
  owning_lib (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  holdable (boolean): NOT NULL; DEFAULT true
  hold_verify (boolean): NOT NULL; DEFAULT false
  opac_visible (boolean): NOT NULL; DEFAULT true
  circulate (boolean): NOT NULL; DEFAULT true
  label_prefix (text)
  label_suffix (text)

  Tables referencing asset.copy_location via Foreign Key Constraints:
    acq.distribution_formula_entry, acq.lineitem_detail, asset.copy, asset.copy_location_order, asset.copy_template

copy_location_order
  id (serial): PRIMARY KEY
  location (integer): UNIQUE#1; NOT NULL; references asset.copy_location
  org (integer): UNIQUE#1; NOT NULL; references actor.org_unit
  position (integer): NOT NULL

copy_note
  id (bigserial): PRIMARY KEY
  owning_copy (bigint): NOT NULL; references asset.copy
  creator (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  pub (boolean): NOT NULL; DEFAULT false
  title (text): NOT NULL
  value (text): NOT NULL

copy_template
  id (serial): PRIMARY KEY
  owning_lib (integer): NOT NULL; references actor.org_unit
  creator (bigint): NOT NULL; references actor.usr
  editor (bigint): NOT NULL; references actor.usr
  create_date (timestamp with time zone): DEFAULT now()
  edit_date (timestamp with time zone): DEFAULT now()
  name (text): NOT NULL
  circ_lib (integer): references actor.org_unit
  status (integer): references config.copy_status
  location (integer): references asset.copy_location
  loan_duration (integer)
  fine_level (integer)
  age_protect (integer)
  circulate (boolean)
  deposit (boolean)
  ref (boolean)
  holdable: boolean
  deposit_amount: numeric(6,2)
  price: numeric(8,2)
  circ_modifier: text
  circ_as_type: text
  alert_message: text
  opac_visible: boolean
  floating: boolean
  mint_condition: boolean

Constraints on copy_template:
  valid_fine_level: CHECK (((fine_level IS NULL) OR (loan_duration = ANY (ARRAY[1, 2, 3]))))
  valid_loan_duration: CHECK (((loan_duration IS NULL) OR (loan_duration = ANY (ARRAY[1, 2, 3]))))

Tables referencing serial.distribution via Foreign Key Constraints:
  serial.distribution

Table asset.opac_visible_copies
  id: bigint, PRIMARY KEY
  record: bigint
  circ_lib: integer

Table asset.stat_cat
  id: serial, PRIMARY KEY
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  opac_visible: boolean, NOT NULL, DEFAULT false
  name: text, UNIQUE#1, NOT NULL
  required: boolean, NOT NULL, DEFAULT false

Tables referencing asset.stat_cat_entry via Foreign Key Constraints:
  asset.stat_cat_entry, asset.stat_cat_entry_copy_map

Table asset.stat_cat_entry
  id: serial, PRIMARY KEY
  stat_cat: integer, UNIQUE#1, NOT NULL, REFERENCES asset.stat_cat
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  value: text, UNIQUE#1, NOT NULL

Tables referencing asset.stat_cat_entry_copy_map via Foreign Key Constraints:
  asset.stat_cat_entry_copy_map

Table asset.stat_cat_entry_copy_map
  id: bigserial, PRIMARY KEY
  stat_cat: integer, UNIQUE#1, NOT NULL, REFERENCES asset.stat_cat
  stat_cat_entry: integer, NOT NULL, REFERENCES asset.stat_cat_entry
  owning_copy: bigint, UNIQUE#1, NOT NULL

Table asset.stat_cat_entry_transparency_map
  id: bigserial, PRIMARY KEY
  stat_cat: integer, UNIQUE#1, NOT NULL
  stat_cat_entry: integer, NOT NULL
  owning_transparency: integer, UNIQUE#1, NOT NULL

Table asset.uri
  id: serial, PRIMARY KEY
  href: text, NOT NULL
  label: text
  use_restriction: text
  active: boolean, NOT NULL, DEFAULT true

Tables referencing asset.uri_call_number_map via Foreign Key Constraints:
  asset.uri_call_number_map, serial.item

Table asset.uri_call_number_map
  id: bigserial, PRIMARY KEY
  uri: integer, UNIQUE#1, NOT NULL, REFERENCES asset.uri
  call_number: integer, UNIQUE#1, NOT NULL, REFERENCES asset.call_number

Schema auditor

Table auditor.acq_invoice_entry_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  invoice: integer, NOT NULL
  purchase_order: integer
  lineitem: integer
  inv_item_count: integer, NOT NULL
  phys_item_count: integer
  note: text
  billed_per_item: boolean
  cost_billed: numeric(8,2)
  actual_cost: numeric(8,2)
  amount_paid: numeric(8,2)

Table auditor.acq_invoice_entry_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  invoice: integer
  purchase_order: integer
  lineitem: integer
  inv_item_count: integer
  phys_item_count: integer
  note: text
  billed_per_item: boolean
  cost_billed: numeric(8,2)
  actual_cost: numeric(8,2)
  amount_paid: numeric(8,2)

Table auditor.acq_invoice_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  receiver: integer, NOT NULL
  provider: integer, NOT NULL
  shipper: integer, NOT NULL
  recv_date: timestamp with time zone, NOT NULL
  recv_method: text, NOT NULL
  inv_type: text
  inv_ident: text, NOT NULL
  payment_auth: text
  payment_method: text
  note: text
  complete: boolean, NOT NULL

Table auditor.acq_invoice_item_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  invoice: integer, NOT NULL
  purchase_order: integer
  fund_debit: integer
  inv_item_type: text, NOT NULL
  title: text
  author: text
  note: text
  cost_billed: numeric(8,2)
  actual_cost: numeric(8,2)
  fund: integer
  amount_paid: numeric(8,2)
  po_item: integer
  target: bigint

Table auditor.acq_invoice_item_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  invoice: integer
  purchase_order: integer
  fund_debit: integer
  inv_item_type: text
  title: text
  author: text
  note: text
  cost_billed: numeric(8,2)
  actual_cost: numeric(8,2)
  fund: integer
  amount_paid: numeric(8,2)
  po_item: integer
  target: bigint

Table auditor.acq_invoice_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  receiver: integer
  provider: integer
  shipper: integer
  recv_date: timestamp with time zone
  recv_method: text
  inv_type: text
  inv_ident: text
  payment_auth: text
  payment_method: text
  note: text
  complete: boolean

Table auditor.actor_org_unit_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  parent_ou: integer
  ou_type: integer, NOT NULL
  ill_address: integer
  holds_address: integer
  mailing_address: integer
  billing_address: integer
  shortname: text, NOT NULL
  name: text, NOT NULL
  email: text
  phone: text
  opac_visible: boolean, NOT NULL
  fiscal_calendar: integer, NOT NULL

Table auditor.actor_org_unit_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  parent_ou: integer
  ou_type: integer
  ill_address: integer
  holds_address: integer
  mailing_address: integer
  billing_address: integer
  shortname: text
  name: text
  email: text
  phone: text
  opac_visible: boolean
  fiscal_calendar: integer
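Each auditor history table repeats the columns of its source table and adds audit_id, audit_time, and audit_action to describe the recorded change. As a sketch of how such a table can be consulted (the table and column names come from auditor.actor_org_unit_history above; the query itself and the example org unit id are assumptions, not part of the original text):

```sql
-- Sketch: list the most recent audited changes to one org unit,
-- newest first. The id value 4 is a hypothetical org unit id.
SELECT audit_time, audit_action, shortname, name
  FROM auditor.actor_org_unit_history
 WHERE id = 4
 ORDER BY audit_time DESC
 LIMIT 20;
```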
Table auditor.actor_usr_address_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  valid: boolean, NOT NULL
  within_city_limits: boolean, NOT NULL
  address_type: text, NOT NULL
  usr: integer, NOT NULL
  street1: text, NOT NULL
  street2: text
  city: text, NOT NULL
  county: text
  state: text, NOT NULL
  country: text, NOT NULL
  post_code: text, NOT NULL
  pending: boolean, NOT NULL
  replaces: integer

Table auditor.actor_usr_address_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  valid: boolean
  within_city_limits: boolean
  address_type: text
  usr: integer
  street1: text
  street2: text
  city: text
  county: text
  state: text
  country: text
  post_code: text
  pending: boolean
  replaces: integer

Table auditor.actor_usr_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: integer, NOT NULL
  card: integer
  profile: integer, NOT NULL
  usrname: text, NOT NULL
  email: text
  passwd: text, NOT NULL
  standing: integer, NOT NULL
  ident_type: integer, NOT NULL
  ident_value: text
  ident_type2: integer
  ident_value2: text
  net_access_level: integer, NOT NULL
  photo_url: text
  prefix: text
  first_given_name: text, NOT NULL
  second_given_name: text
  family_name: text, NOT NULL
  suffix: text
  alias: text
  day_phone: text
  evening_phone: text
  other_phone: text
  mailing_address: integer
  billing_address: integer
  home_ou: integer, NOT NULL
  dob: timestamp with time zone
  active: boolean, NOT NULL
  master_account: boolean, NOT NULL
  super_user: boolean, NOT NULL
  barred: boolean, NOT NULL
  deleted: boolean, NOT NULL
  juvenile: boolean, NOT NULL
  usrgroup: integer, NOT NULL
  claims_returned_count: integer, NOT NULL
  credit_forward_balance: numeric(6,2), NOT NULL
  last_xact_id: text, NOT NULL
  alert_message: text
  create_date: timestamp with time zone, NOT NULL
  expire_date: timestamp with time zone, NOT NULL
  claims_never_checked_out_count: integer, NOT NULL

Table auditor.actor_usr_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: integer
  card: integer
  profile: integer
  usrname: text
  email: text
  passwd: text
  standing: integer
  ident_type: integer
  ident_value: text
  ident_type2: integer
  ident_value2: text
  net_access_level: integer
  photo_url: text
  prefix: text
  first_given_name: text
  second_given_name: text
  family_name: text
  suffix: text
  alias: text
  day_phone: text
  evening_phone: text
  other_phone: text
  mailing_address: integer
  billing_address: integer
  home_ou: integer
  dob: timestamp with time zone
  active: boolean
  master_account: boolean
  super_user: boolean
  barred: boolean
  deleted: boolean
  juvenile: boolean
  usrgroup: integer
  claims_returned_count: integer
  credit_forward_balance: numeric(6,2)
  last_xact_id: text
  alert_message: text
  create_date: timestamp with time zone
  expire_date: timestamp with time zone
  claims_never_checked_out_count: integer

Table auditor.asset_call_number_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: bigint, NOT NULL
  creator: bigint, NOT NULL
  create_date: timestamp with time zone
  editor: bigint, NOT NULL
  edit_date: timestamp with time zone
  record: bigint, NOT NULL
  owning_lib: integer, NOT NULL
  label: text, NOT NULL
  deleted: boolean, NOT NULL
  label_class: bigint, NOT NULL
  label_sortkey: text

Table auditor.asset_call_number_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: bigint
  creator: bigint
  create_date: timestamp with time zone
  editor: bigint
  edit_date: timestamp with time zone
  record: bigint
  owning_lib: integer
  label: text
  deleted: boolean
  label_class: bigint
  label_sortkey: text

Table auditor.asset_copy_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: bigint, NOT NULL
  circ_lib: integer, NOT NULL
  creator: bigint, NOT NULL
  call_number: bigint, NOT NULL
  editor: bigint, NOT NULL
  create_date: timestamp with time zone
  edit_date: timestamp with time zone
  copy_number: integer
  status: integer, NOT NULL
  location: integer, NOT NULL
  loan_duration: integer, NOT NULL
  fine_level: integer, NOT NULL
  age_protect: integer
  circulate: boolean, NOT NULL
  deposit: boolean, NOT NULL
  ref: boolean, NOT NULL
  holdable: boolean, NOT NULL
  deposit_amount: numeric(6,2), NOT NULL
  price: numeric(8,2)
  barcode: text, NOT NULL
  circ_modifier: text
  circ_as_type: text
  dummy_title: text
  dummy_author: text
  alert_message: text
  opac_visible: boolean, NOT NULL
  deleted: boolean, NOT NULL
  floating: boolean, NOT NULL
  dummy_isbn: text
  status_changed_time: timestamp with time zone
  mint_condition: boolean, NOT NULL
  cost: numeric(8,2)

Table auditor.asset_copy_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: bigint
  circ_lib: integer
  creator: bigint
  call_number: bigint
  editor: bigint
  create_date: timestamp with time zone
  edit_date: timestamp with time zone
  copy_number: integer
  status: integer
  location: integer
  loan_duration: integer
  fine_level: integer
  age_protect: integer
  circulate: boolean
  deposit: boolean
  ref: boolean
  holdable: boolean
  deposit_amount: numeric(6,2)
  price: numeric(8,2)
  barcode: text
  circ_modifier: text
  circ_as_type: text
  dummy_title: text
  dummy_author: text
  alert_message: text
  opac_visible: boolean
  deleted: boolean
  floating: boolean
  dummy_isbn: text
  status_changed_time: timestamp with time zone
  mint_condition: boolean
  cost: numeric(8,2)

Table auditor.biblio_record_entry_history
  audit_id: bigint, PRIMARY KEY
  audit_time: timestamp with time zone, NOT NULL
  audit_action: text, NOT NULL
  id: bigint, NOT NULL
  creator: integer, NOT NULL
  editor: integer, NOT NULL
  source: integer
  quality: integer
  create_date: timestamp with time zone, NOT NULL
  edit_date: timestamp with time zone, NOT NULL
  active: boolean, NOT NULL
  deleted: boolean, NOT NULL
  fingerprint: text
  tcn_source: text, NOT NULL
  tcn_value: text, NOT NULL
  marc: text, NOT NULL
  last_xact_id: text, NOT NULL
  owner: integer
  share_depth: integer

Table auditor.biblio_record_entry_lifecycle
  ?column?: bigint
  audit_time: timestamp with time zone
  audit_action: text
  id: bigint
  creator: integer
  editor: integer
  source: integer
  quality: integer
  create_date: timestamp with time zone
  edit_date: timestamp with time zone
  active: boolean
  deleted: boolean
  fingerprint: text
  tcn_source: text
  tcn_value: text
  marc: text
  last_xact_id: text
  owner: integer
  share_depth: integer

Schema authority

Table authority.bib_linking
  id: bigserial, PRIMARY KEY
  bib: bigint, NOT NULL, REFERENCES biblio.record_entry
  authority: bigint, NOT NULL, REFERENCES authority.record_entry

Table authority.full_rec
  id: bigserial, PRIMARY KEY
  record: bigint, NOT NULL
  tag: character(3), NOT NULL
  ind1: text
  ind2: text
  subfield: text
  value: text, NOT NULL
  index_vector: tsvector, NOT NULL

Table authority.rec_descriptor
  id: bigserial, PRIMARY KEY
  record: bigint
  record_status: text
  char_encoding: text

Table authority.record_entry
  id: bigserial, PRIMARY KEY
  creator: integer, NOT NULL, DEFAULT 1
  editor: integer, NOT NULL, DEFAULT 1
  create_date: timestamp with time zone, NOT NULL, DEFAULT now()
  edit_date: timestamp with time zone, NOT NULL, DEFAULT now()
  active: boolean, NOT NULL, DEFAULT true
  deleted: boolean, NOT NULL, DEFAULT false
  source: integer
  marc: text, NOT NULL
  last_xact_id: text, NOT NULL
  owner: integer

Tables referencing authority.bib_linking via Foreign Key Constraints:
  authority.bib_linking, authority.record_note, vandelay.authority_match, vandelay.queued_authority_record

Table authority.record_note
  id: bigserial, PRIMARY KEY
  record: bigint, NOT NULL, REFERENCES authority.record_entry
  value: text, NOT NULL
  creator: integer, NOT NULL, DEFAULT 1
  editor: integer, NOT NULL, DEFAULT 1
  create_date: timestamp with time zone, NOT NULL, DEFAULT now()
  edit_date: timestamp with time zone, NOT NULL, DEFAULT now()

Table authority.tracing_links
  record: bigint
  main_id: bigint
  main_tag: character(3)
  main_value: text
  relationship: text
  use_restriction: text
  deprecation: text
  display_restriction: text
  link_id: bigint
  link_tag: character(3)
  link_value: text
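The authority.full_rec table above flattens each authority record into one row per tag/subfield/value, which makes field-level lookups straightforward. As a hedged sketch (table and column names from the listing above; the specific tag, subfield, and search term are hypothetical):

```sql
-- Sketch: find authority records whose 150 $a (topical heading)
-- contains a given term. 'history' is an example value only.
SELECT DISTINCT record
  FROM authority.full_rec
 WHERE tag = '150'
   AND subfield = 'a'
   AND value ILIKE '%history%';
```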
Schema biblio

Table biblio.record_entry
  id: bigserial, PRIMARY KEY
  creator: integer, NOT NULL, DEFAULT 1, REFERENCES actor.usr
  editor: integer, NOT NULL, DEFAULT 1, REFERENCES actor.usr
  source: integer
  quality: integer
  create_date: timestamp with time zone, NOT NULL, DEFAULT now()
  edit_date: timestamp with time zone, NOT NULL, DEFAULT now()
  active: boolean, NOT NULL, DEFAULT true
  deleted: boolean, NOT NULL, DEFAULT false
  fingerprint: text
  tcn_source: text, NOT NULL, DEFAULT 'AUTOGEN'::text
  tcn_value: text, NOT NULL, DEFAULT biblio.next_autogen_tcn_value()
  marc: text, NOT NULL
  last_xact_id: text, NOT NULL
  owner: integer, REFERENCES actor.org_unit
  share_depth: integer

Tables referencing acq.lineitem via Foreign Key Constraints:
  acq.lineitem, acq.user_request, asset.call_number, authority.bib_linking, biblio.record_note, booking.resource_type, container.biblio_record_entry_bucket_item, metabib.author_field_entry, metabib.identifier_field_entry, metabib.keyword_field_entry, metabib.metarecord, metabib.metarecord_source_map, metabib.real_full_rec, metabib.rec_descriptor, metabib.series_field_entry, metabib.subject_field_entry, metabib.title_field_entry, serial.record_entry, serial.subscription, vandelay.bib_match, vandelay.queued_bib_record

Table biblio.record_note
  id: bigserial, PRIMARY KEY
  record: bigint, NOT NULL, REFERENCES biblio.record_entry
  value: text, NOT NULL
  creator: integer, NOT NULL, DEFAULT 1, REFERENCES actor.usr
  editor: integer, NOT NULL, DEFAULT 1, REFERENCES actor.usr
  pub: boolean, NOT NULL, DEFAULT false
  create_date: timestamp with time zone, NOT NULL, DEFAULT now()
  edit_date: timestamp with time zone, NOT NULL, DEFAULT now()

Schema booking

Table booking.reservation
  id: bigint, PRIMARY KEY, DEFAULT nextval('money.billable_xact_id_seq'::regclass)
  usr: integer, NOT NULL, REFERENCES actor.usr
  xact_start: timestamp with time zone, NOT NULL, DEFAULT now()
  xact_finish: timestamp with time zone
  unrecovered: boolean
  request_time: timestamp with time zone, NOT NULL, DEFAULT now()
  start_time: timestamp with time zone
  end_time: timestamp with time zone
  capture_time: timestamp with time zone
  cancel_time: timestamp with time zone
  pickup_time: timestamp with time zone
  return_time: timestamp with time zone
  booking_interval: interval
  fine_interval: interval
  fine_amount: numeric(8,2)
  max_fine: numeric(8,2)
  target_resource_type: integer, NOT NULL, REFERENCES booking.resource_type
  target_resource: integer, REFERENCES booking.resource
  current_resource: integer, REFERENCES booking.resource
  request_lib: integer, NOT NULL, REFERENCES actor.org_unit
  pickup_lib: integer, REFERENCES actor.org_unit
  capture_staff: integer, REFERENCES actor.usr

Tables referencing action.reservation_transit_copy via Foreign Key Constraints:
  action.reservation_transit_copy, booking.reservation_attr_value_map

Table booking.reservation_attr_value_map
  id: serial, PRIMARY KEY
  reservation: integer, UNIQUE#1, NOT NULL, REFERENCES booking.reservation
  attr_value: integer, UNIQUE#1, NOT NULL, REFERENCES booking.resource_attr_value

Table booking.resource
  id: serial, PRIMARY KEY
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  type: integer, NOT NULL, REFERENCES booking.resource_type
  overbook: boolean, NOT NULL, DEFAULT false
  barcode: text, UNIQUE#1, NOT NULL
  deposit: boolean, NOT NULL, DEFAULT false
  deposit_amount: numeric(8,2), NOT NULL, DEFAULT 0.00
  user_fee: numeric(8,2), NOT NULL, DEFAULT 0.00

Tables referencing action.reservation_transit_copy via Foreign Key Constraints:
  action.reservation_transit_copy, booking.reservation, booking.resource_attr_map

Table booking.resource_attr
  id: serial, PRIMARY KEY
  owner: integer, NOT NULL, REFERENCES actor.org_unit
  name: text, UNIQUE#1, NOT NULL
  resource_type: integer, UNIQUE#1, NOT NULL, REFERENCES booking.resource_type
  required: boolean, NOT NULL, DEFAULT false

Tables referencing booking.resource_attr_map via Foreign Key Constraints:
  booking.resource_attr_map, booking.resource_attr_value

Table booking.resource_attr_map
  id: serial, PRIMARY KEY
  resource: integer, UNIQUE#1, NOT NULL, REFERENCES booking.resource
  resource_attr: integer, UNIQUE#1, NOT NULL, REFERENCES booking.resource_attr
  value: integer, NOT NULL, REFERENCES booking.resource_attr_value

Table booking.resource_attr_value
  id: serial, PRIMARY KEY
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  attr: integer, UNIQUE#1, NOT NULL, REFERENCES booking.resource_attr
  valid_value: text, UNIQUE#1, NOT NULL

Tables referencing booking.reservation_attr_value_map via Foreign Key Constraints:
  booking.reservation_attr_value_map, booking.resource_attr_map

Table booking.resource_type
  id: serial, PRIMARY KEY
  name: text, UNIQUE#1, NOT NULL
  elbow_room: interval
  fine_interval: interval
  fine_amount: numeric(8,2), NOT NULL
  max_fine: numeric(8,2)
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  catalog_item: boolean, NOT NULL, DEFAULT false
  transferable: boolean, NOT NULL, DEFAULT false
  record: bigint, UNIQUE#1, REFERENCES biblio.record_entry

Tables referencing booking.reservation via Foreign Key Constraints:
  booking.reservation, booking.resource, booking.resource_attr

Schema config

Table config.audience_map
  code: text, PRIMARY KEY
  value: text, NOT NULL
  description: text

Table config.bib_level_map
  code: text, PRIMARY KEY
  value: text, NOT NULL

Table config.bib_source
  id: serial, PRIMARY KEY
  quality: integer
  source: text, UNIQUE, NOT NULL
  transcendant: boolean, NOT NULL, DEFAULT false

Constraints on bib_source:
  bib_source_quality_check: CHECK (((quality >= 0) AND (quality <= 100)))

Tables referencing vandelay.queued_bib_record via Foreign Key Constraints:
  vandelay.queued_bib_record

Table config.biblio_fingerprint
  id: serial, PRIMARY KEY
  name: text, NOT NULL
  xpath: text, NOT NULL
  first_word: boolean, NOT NULL, DEFAULT false
  format: text, NOT NULL, DEFAULT 'marcxml'::text

Table config.billing_type
  id: serial, PRIMARY KEY
  name: text, UNIQUE#1, NOT NULL
  owner: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  default_price: numeric(6,2)

Tables referencing money.billing via Foreign Key Constraints:
  money.billing

Table config.circ_matrix_circ_mod_test
  id: serial, PRIMARY KEY
  matchpoint: integer, NOT NULL, REFERENCES config.circ_matrix_matchpoint
  items_out: integer, NOT NULL

Tables referencing config.circ_matrix_circ_mod_test_map via Foreign Key Constraints:
  config.circ_matrix_circ_mod_test_map

Table config.circ_matrix_circ_mod_test_map
  id: serial, PRIMARY KEY
  circ_mod_test: integer, UNIQUE#1, NOT NULL, REFERENCES config.circ_matrix_circ_mod_test
  circ_mod: text, UNIQUE#1, NOT NULL, REFERENCES config.circ_modifier

Table config.circ_matrix_matchpoint
  id: serial, PRIMARY KEY
  active: boolean, NOT NULL, DEFAULT true
  org_unit: integer, UNIQUE#1, NOT NULL, REFERENCES actor.org_unit
  grp: integer, UNIQUE#1, NOT NULL, REFERENCES permission.grp_tree
  circ_modifier: text, UNIQUE#1, REFERENCES config.circ_modifier
  marc_type: text, UNIQUE#1, REFERENCES config.item_type_map
  marc_form: text, UNIQUE#1, REFERENCES config.item_form_map
marc_vr_formattext + + + + + + + UNIQUE#1 + ; + + + + + + + config.videorecording_format_map + + + + + copy_circ_libinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + copy_owning_libinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + ref_flagboolean + + + + UNIQUE#1 + ; + + + + + + + + juvenile_flagboolean + + + + UNIQUE#1 + ; + + + + + + + + is_renewalboolean + + + + UNIQUE#1 + ; + + + + + + + + usr_age_lower_boundinterval + + + + UNIQUE#1 + ; + + + + + + + + usr_age_upper_boundinterval + + + + UNIQUE#1 + ; + + + + + + + + circulateboolean + + + NOT NULL; + + + DEFAULT true; + + + duration_ruleinteger + + + + + + NOT NULL; + + + + + config.rule_circ_duration + + + recurring_fine_ruleinteger + + + + + + NOT NULL; + + + + + config.rule_recurring_fine + + + max_fine_ruleinteger + + + + + + NOT NULL; + + + + + config.rule_max_fine + + + hard_due_dateinteger + + + + + + + + + config.hard_due_date + + + script_testtext + + + + + total_copy_hold_ratiodouble precision + + + + + available_copy_hold_ratiodouble precision + + + + + + + + + + Tables referencing config.circ_matrix_circ_mod_test via Foreign Key Constraints + •config.circ_matrix_circ_mod_test + + + + + circ_modifiercirc_modifierFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + descriptiontext + + + NOT NULL; + + + + sip2_media_typetext + + + NOT NULL; + + + + magnetic_mediaboolean + + + NOT NULL; + + + DEFAULT true; + + + avg_wait_timeinterval + + + + + + + + + + Tables referencing acq.lineitem_detail via Foreign Key Constraints + •acq.lineitem_detail•asset.copy•config.circ_matrix_circ_mod_test_map•config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + + + + + copy_statuscopy_statusFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + holdableboolean + + + NOT NULL; + + + DEFAULT false; + 
+ + opac_visibleboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + + + Tables referencing action.transit_copy via Foreign Key Constraints + •action.transit_copy•asset.copy•asset.copy_template + + + + + global_flagglobal_flagFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + valuetext + + + + + enabledboolean + + + NOT NULL; + + + DEFAULT false; + + + labeltext + + + NOT NULL; + + + + + + + + + hard_due_datehard_due_dateFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + ceiling_datetimestamp with time zone + + + NOT NULL; + + + + forcetoboolean + + + NOT NULL; + + + + ownerinteger + + + NOT NULL; + + + + + + + Constraints on hard_due_datehard_due_date_name_checkCHECK ((name ~ '^\\w+$'::text)) + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint•config.hard_due_date_values + + + + + hard_due_date_valueshard_due_date_valuesFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + hard_due_dateinteger + + + + + + NOT NULL; + + + + + config.hard_due_date + + + ceiling_datetimestamp with time zone + + + NOT NULL; + + + + active_datetimestamp with time zone + + + NOT NULL; + + + + + + + + + hold_matrix_matchpointhold_matrix_matchpointFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + activeboolean + + + NOT NULL; + + + DEFAULT true; + + + user_home_ouinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + request_ouinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + pickup_ouinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + item_owning_ouinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + item_circ_ouinteger + + + + + + + UNIQUE#1 + ; + + + + + + + actor.org_unit + + + + + usr_grpinteger + + + + + + + UNIQUE#1 + ; + + + + + + + 
permission.grp_tree + + + + + requestor_grpinteger + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + permission.grp_tree + + + + + circ_modifiertext + + + + + + + UNIQUE#1 + ; + + + + + + + config.circ_modifier + + + + + marc_typetext + + + + + + + UNIQUE#1 + ; + + + + + + + config.item_type_map + + + + + marc_formtext + + + + + + + UNIQUE#1 + ; + + + + + + + config.item_form_map + + + + + marc_vr_formattext + + + + + + + UNIQUE#1 + ; + + + + + + + config.videorecording_format_map + + + + + juvenile_flagboolean + + + + UNIQUE#1 + ; + + + + + + + + ref_flagboolean + + + + UNIQUE#1 + ; + + + + + + + + holdableboolean + + + NOT NULL; + + + DEFAULT true; + + + distance_is_from_ownerboolean + + + NOT NULL; + + + DEFAULT false; + + + transit_rangeinteger + + + + + + + + + actor.org_unit_type + + + max_holdsinteger + + + + + include_frozen_holdsboolean + + + NOT NULL; + + + DEFAULT true; + + + stop_blocked_userboolean + + + NOT NULL; + + + DEFAULT false; + + + age_hold_protect_ruleinteger + + + + + + + + + config.rule_age_hold_protect + + + + + + + + i18n_corei18n_coreFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + fq_fieldtext + + + NOT NULL; + + + + identity_valuetext + + + NOT NULL; + + + + translationtext + + + + + + NOT NULL; + + + + + config.i18n_locale + + + stringtext + + + NOT NULL; + + + + + + + + + i18n_localei18n_localeFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + marc_codetext + + + + + + NOT NULL; + + + + + config.language_map + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + descriptiontext + + + + + + + + + + Tables referencing config.i18n_core via Foreign Key Constraints + •config.i18n_core + + + + + identification_typeidentification_typeFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing actor.usr via Foreign Key Constraints + •actor.usr + + + + 
+ idl_field_docidl_field_docFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + fm_classtext + + + NOT NULL; + + + + fieldtext + + + NOT NULL; + + + + ownerinteger + + + + + + NOT NULL; + + + + + actor.org_unit + + + stringtext + + + NOT NULL; + + + + + + + + + index_normalizerindex_normalizerFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + descriptiontext + + + + + functext + + + NOT NULL; + + + + param_countinteger + + + NOT NULL; + + + + + + + + + Tables referencing config.metabib_field_index_norm_map via Foreign Key Constraints + •config.metabib_field_index_norm_map + + + + + internal_flaginternal_flagFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + valuetext + + + + + enabledboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + + + item_form_mapitem_form_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + + + + + item_type_mapitem_type_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + + + + + language_maplanguage_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + + + + + + Tables referencing config.i18n_locale via Foreign Key Constraints + •config.i18n_locale + + + + + lit_form_maplit_form_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + descriptiontext + + + + + + + + + + marc21_ff_pos_mapmarc21_ff_pos_mapFieldData 
TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + fixed_fieldtext + + + NOT NULL; + + + + tagtext + + + NOT NULL; + + + + rec_typetext + + + NOT NULL; + + + + start_posinteger + + + NOT NULL; + + + + lengthinteger + + + NOT NULL; + + + + default_valtext + + + NOT NULL; + + + DEFAULT ' '::text; + + + + + + + + marc21_physical_characteristic_subfield_mapmarc21_physical_characteristic_subfield_mapFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + ptype_keytext + + + + + + NOT NULL; + + + + + config.marc21_physical_characteristic_type_map + + + subfieldtext + + + NOT NULL; + + + + start_posinteger + + + NOT NULL; + + + + lengthinteger + + + NOT NULL; + + + + labeltext + + + NOT NULL; + + + + + + + + + Tables referencing config.marc21_physical_characteristic_value_map via Foreign Key Constraints + •config.marc21_physical_characteristic_value_map + + + + + marc21_physical_characteristic_type_mapmarc21_physical_characteristic_type_mapFieldData TypeConstraints and Referencesptype_keytext + + + PRIMARY KEY + + + + + + + + + labeltext + + + NOT NULL; + + + + + + + + + Tables referencing config.marc21_physical_characteristic_subfield_map via Foreign Key Constraints + •config.marc21_physical_characteristic_subfield_map + + + + + marc21_physical_characteristic_value_mapmarc21_physical_characteristic_value_mapFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + ptype_subfieldinteger + + + + + + NOT NULL; + + + + + config.marc21_physical_characteristic_subfield_map + + + labeltext + + + NOT NULL; + + + + + + + + + marc21_rec_type_mapmarc21_rec_type_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + type_valtext + + + NOT NULL; + + + + blvl_valtext + + + NOT NULL; + + + + + + + + + metabib_classmetabib_classFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + 
+ NOT NULL; + + + + + + + + + + + Tables referencing config.metabib_field via Foreign Key Constraints + •config.metabib_field•config.metabib_search_alias + + + + + metabib_fieldmetabib_fieldFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + field_classtext + + + + + + NOT NULL; + + + + + config.metabib_class + + + nametext + + + NOT NULL; + + + + labeltext + + + NOT NULL; + + + + xpathtext + + + NOT NULL; + + + + weightinteger + + + NOT NULL; + + + DEFAULT 1; + + + formattext + + + + + + NOT NULL; + + + DEFAULT 'mods33'::text; + + + + config.xml_transform + + + search_fieldboolean + + + NOT NULL; + + + DEFAULT true; + + + facet_fieldboolean + + + NOT NULL; + + + DEFAULT false; + + + facet_xpathtext + + + + + + + + + + Tables referencing config.metabib_field_index_norm_map via Foreign Key Constraints + •config.metabib_field_index_norm_map•config.metabib_search_alias•metabib.author_field_entry•metabib.identifier_field_entry•metabib.keyword_field_entry•metabib.series_field_entry•metabib.subject_field_entry•metabib.title_field_entry•search.relevance_adjustment + + + + + metabib_field_index_norm_mapmetabib_field_index_norm_mapFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + norminteger + + + + + + NOT NULL; + + + + + config.index_normalizer + + + paramstext + + + + + posinteger + + + NOT NULL; + + + + + + + + + metabib_search_aliasmetabib_search_aliasFieldData TypeConstraints and Referencesaliastext + + + PRIMARY KEY + + + + + + + + + field_classtext + + + + + + NOT NULL; + + + + + config.metabib_class + + + fieldinteger + + + + + + + + + config.metabib_field + + + + + + + + net_access_levelnet_access_levelFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing actor.usr via Foreign Key Constraints + •actor.usr + + + + 
+ non_cataloged_typenon_cataloged_typeFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + owning_libinteger + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + circ_durationinterval + + + NOT NULL; + + + DEFAULT '14 days'::interval; + + + in_houseboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + + + Tables referencing action.non_cat_in_house_use via Foreign Key Constraints + •action.non_cat_in_house_use•action.non_cataloged_circulation + + + + + org_unit_setting_typeorg_unit_setting_typeFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + grptext + + + + + + + + + config.settings_group + + + descriptiontext + + + + + datatypetext + + + NOT NULL; + + + DEFAULT 'string'::text; + + + fm_classtext + + + + + view_perminteger + + + + + + + + + permission.perm_list + + + update_perminteger + + + + + + + + + permission.perm_list + + + + + + Constraints on org_unit_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) + + + + + + Tables referencing actor.org_unit_setting via Foreign Key Constraints + •actor.org_unit_setting + + + + + remote_accountremote_accountFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + labeltext + + + NOT NULL; + + + + hosttext + + + NOT NULL; + + + + usernametext + + + + + passwordtext + + + + + accounttext + + + + + pathtext + + + + + ownerinteger + + + + + + NOT NULL; + + + + + actor.org_unit + + + last_activitytimestamp with time zone + + + + + + + + + + rule_age_hold_protectrule_age_hold_protectFieldData TypeConstraints and 
Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + ageinterval + + + NOT NULL; + + + + proxinteger + + + NOT NULL; + + + + + + + Constraints on rule_age_hold_protectrule_age_hold_ protect_name_checkCHECK ((name ~ '^\\w+$'::text)) + + + + + + Tables referencing config.hold_matrix_matchpoint via Foreign Key Constraints + •config.hold_matrix_matchpoint + + + + + rule_circ_durationrule_circ_durationFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + extendedinterval + + + NOT NULL; + + + + normalinterval + + + NOT NULL; + + + + shrtinterval + + + NOT NULL; + + + + max_renewalsinteger + + + NOT NULL; + + + + + + + Constraints on rule_circ_durationrule_circ_duration_name_checkCHECK ((name ~ '^\\w+$'::text)) + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint + + + + + rule_max_finerule_max_fineFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + amountnumeric(6,2) + + + NOT NULL; + + + + is_percentboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + Constraints on rule_max_finerule_max_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint + + + + + rule_recurring_finerule_recurring_fineFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + highnumeric(6,2) + + + NOT NULL; + + + + normalnumeric(6,2) + + + NOT NULL; + + + + lownumeric(6,2) + + + NOT NULL; + + + + recurrence_intervalinterval + + + NOT NULL; + + + DEFAULT '1 day'::interval; + + + + + + Constraints on rule_recurring_finerule_recurring_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) + + + + + + Tables referencing 
config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint + + + + + settings_groupsettings_groupFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing config.org_unit_setting_type via Foreign Key Constraints + •config.org_unit_setting_type•config.usr_setting_type + + + + + standingstandingFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + valuetext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing actor.usr via Foreign Key Constraints + •actor.usr + + + + + standing_penaltystanding_penaltyFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + nametext + + + + UNIQUE; + + + + NOT NULL; + + + + + + labeltext + + + NOT NULL; + + + + block_listtext + + + + + org_depthinteger + + + + + + + + + + Tables referencing actor.usr_standing_penalty via Foreign Key Constraints + •actor.usr_standing_penalty•permission.grp_penalty_threshold + + + + + upgrade_logupgrade_logFieldData TypeConstraints and Referencesversiontext + + + PRIMARY KEY + + + + + + + + + install_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + usr_setting_typeusr_setting_typeFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + opac_visibleboolean + + + NOT NULL; + + + DEFAULT false; + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + descriptiontext + + + + + grptext + + + + + + + + + config.settings_group + + + datatypetext + + + NOT NULL; + + + DEFAULT 'string'::text; + + + fm_classtext + + + + + + + + Constraints on usr_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 
'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) + + + + + + Tables referencing action_trigger.event_definition via Foreign Key Constraints + •action_trigger.event_definition•actor.usr_setting + + + + + videorecording_format_mapvideorecording_format_mapFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + valuetext + + + NOT NULL; + + + + + + + + + Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + + + + + xml_transformxml_transformFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + namespace_uritext + + + NOT NULL; + + + + prefixtext + + + NOT NULL; + + + + xslttext + + + NOT NULL; + + + + + + + + + Tables referencing config.metabib_field via Foreign Key Constraints + •config.metabib_field + + + + + z3950_attrz3950_attrFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + sourcetext + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + config.z3950_source + + + + + nametext + + + NOT NULL; + + + + labeltext + + + NOT NULL; + + + + codeinteger + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + formatinteger + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + truncationinteger + + + NOT NULL; + + + + + + + + + z3950_sourcez3950_sourceFieldData TypeConstraints and Referencesnametext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + hosttext + + + NOT NULL; + + + + portinteger + + + NOT NULL; + + + + dbtext + + + NOT NULL; + + + + record_formattext + + + NOT NULL; + + + DEFAULT 'FI'::text; + + + transmission_formattext + + + NOT NULL; + + + DEFAULT 'usmarc'::text; + + + authboolean + + + NOT NULL; + + + DEFAULT true; + + + + + + + + Tables referencing config.z3950_attr via Foreign Key Constraints + •config.z3950_attr + + + + + Schema containerSchema 
containerbiblio_record_entry_bucketbiblio_record_entry_bucketFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + ownerinteger + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + actor.usr + + + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + btypetext + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + DEFAULT 'misc'::text; + + + + container.biblio_record_entry_bucket_type + + + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.biblio_record_entry_bucket_item via Foreign Key Constraints + •container.biblio_record_entry_bucket_item•container.biblio_record_entry_bucket_note + + + + + biblio_record_entry_bucket_itembiblio_record_entry_bucket_itemFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.biblio_record_entry_bucket + + + target_biblio_record_entrybigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + posinteger + + + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.biblio_record_entry_bucket_item_note via Foreign Key Constraints + •container.biblio_record_entry_bucket_item_note + + + + + biblio_record_entry_bucket_item_notebiblio_record_entry_bucket_item_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + iteminteger + + + + + + NOT NULL; + + + + + container.biblio_record_entry_bucket_item + + + notetext + + + NOT NULL; + + + + + + + + + biblio_record_entry_bucket_notebiblio_record_entry_bucket_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.biblio_record_entry_bucket + + + notetext + + + NOT NULL; + + + + + + + + + biblio_record_entry_bucket_typebiblio_record_entry_bucket_typeFieldData 
TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing container.biblio_record_entry_bucket via Foreign Key Constraints + •container.biblio_record_entry_bucket + + + + + call_number_bucketcall_number_bucketFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + ownerinteger + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + actor.usr + + + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + btypetext + + + + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + DEFAULT 'misc'::text; + + + + container.call_number_bucket_type + + + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.call_number_bucket_item via Foreign Key Constraints + •container.call_number_bucket_item•container.call_number_bucket_note + + + + + call_number_bucket_itemcall_number_bucket_itemFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.call_number_bucket + + + target_call_numberinteger + + + + + + NOT NULL; + + + + + asset.call_number + + + posinteger + + + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.call_number_bucket_item_note via Foreign Key Constraints + •container.call_number_bucket_item_note + + + + + call_number_bucket_item_notecall_number_bucket_item_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + iteminteger + + + + + + NOT NULL; + + + + + container.call_number_bucket_item + + + notetext + + + NOT NULL; + + + + + + + + + call_number_bucket_notecall_number_bucket_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.call_number_bucket 
+ + + notetext + + + NOT NULL; + + + + + + + + + call_number_bucket_typecall_number_bucket_typeFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing container.call_number_bucket via Foreign Key Constraints + •container.call_number_bucket + + + + + copy_bucketcopy_bucketFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + ownerinteger + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + + + + + actor.usr + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + btypetext + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + DEFAULT 'misc'::text; + + + + + + container.copy_bucket_type + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.copy_bucket_item via Foreign Key Constraints + •container.copy_bucket_item•container.copy_bucket_note + + + + + copy_bucket_itemcopy_bucket_itemFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.copy_bucket + + + target_copyinteger + + + + + + NOT NULL; + + + + + asset.copy + + + posinteger + + + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.copy_bucket_item_note via Foreign Key Constraints + •container.copy_bucket_item_note + + + + + copy_bucket_item_notecopy_bucket_item_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + iteminteger + + + + + + NOT NULL; + + + + + container.copy_bucket_item + + + notetext + + + NOT NULL; + + + + + + + + + copy_bucket_notecopy_bucket_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.copy_bucket + + + notetext + + + NOT NULL; + + + + + + 
+ + + copy_bucket_typecopy_bucket_typeFieldData TypeConstraints and Referencescodetext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing container.copy_bucket via Foreign Key Constraints + •container.copy_bucket + + + + + user_bucketuser_bucketFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + ownerinteger + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + + + + + actor.usr + + + nametext + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + + + + btypetext + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + DEFAULT 'misc'::text; + + + + + + container.user_bucket_type + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.user_bucket_item via Foreign Key Constraints + •container.user_bucket_item•container.user_bucket_note + + + + + user_bucket_itemuser_bucket_itemFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.user_bucket + + + target_userinteger + + + + + + NOT NULL; + + + + + actor.usr + + + posinteger + + + + + create_timetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + + + + + + Tables referencing container.user_bucket_item_note via Foreign Key Constraints + •container.user_bucket_item_note + + + + + user_bucket_item_noteuser_bucket_item_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + iteminteger + + + + + + NOT NULL; + + + + + container.user_bucket_item + + + notetext + + + NOT NULL; + + + + + + + + + user_bucket_noteuser_bucket_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + bucketinteger + + + + + + NOT NULL; + + + + + container.user_bucket + + + notetext + + + NOT NULL; + + + + + + + + + user_bucket_typeuser_bucket_typeFieldData TypeConstraints and 
Referencescodetext + + + PRIMARY KEY + + + + + + + + + labeltext + + + + UNIQUE; + + + + NOT NULL; + + + + + + + + + + + Tables referencing container.user_bucket via Foreign Key Constraints + •container.user_bucket + + + + + Schema extend_reporterSchema extend_reporterfull_circ_countfull_circ_countFieldData TypeConstraints and Referencesidbigint + + + + + circ_countbigint + + + + + + + + + + global_bibs_by_holding_updateglobal_bibs_by_holding_updateFieldData TypeConstraints and Referencesidbigint + + + + + holding_updatetimestamp with time zone + + + + + update_typetext + + + + + + + + + + legacy_circ_countlegacy_circ_countFieldData TypeConstraints and Referencesidbigint + + + PRIMARY KEY + + + + + + + + + circ_countinteger + + + NOT NULL; + + + + + + + + + Schema metabibSchema metabibauthor_field_entryauthor_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + facet_entryfacet_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + NOT NULL; + + + + fieldinteger + + + NOT NULL; + + + + valuetext + + + NOT NULL; + + + + + + + + + full_recfull_recFieldData TypeConstraints and Referencesidbigint + + + + + recordbigint + + + + + tagcharacter(3) + + + + + ind1text + + + + + ind2text + + + + + subfieldtext + + + + + valuetext + + + + + index_vectortsvector + + + + + + + + + + identifier_field_entryidentifier_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + 
keyword_field_entrykeyword_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + metarecordmetarecordFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + fingerprinttext + + + NOT NULL; + + + + master_recordbigint + + + + + + + + + biblio.record_entry + + + modstext + + + + + + + + + + Tables referencing metabib.metarecord_source_map via Foreign Key Constraints + •metabib.metarecord_source_map + + + + + metarecord_source_mapmetarecord_source_mapFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + metarecordbigint + + + + + + NOT NULL; + + + + + metabib.metarecord + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + + + + + + real_full_recreal_full_recFieldData TypeConstraints and Referencesidbigint + + + PRIMARY KEY + + + + + + DEFAULT nextval('metabib.full_rec_id_seq'::regclass); + + + + + recordbigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + tagcharacter(3) + + + NOT NULL; + + + + ind1text + + + + + ind2text + + + + + subfieldtext + + + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + rec_descriptorrec_descriptorFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + recordbigint + + + + + + + + + biblio.record_entry + + + item_typetext + + + + + item_formtext + + + + + bib_leveltext + + + + + control_typetext + + + + + char_encodingtext + + + + + enc_leveltext + + + + + audiencetext + + + + + lit_formtext + + + + + type_mattext + + + + + cat_formtext + + + + + pub_statustext + + + + + item_langtext + + + + + vr_formattext + + + + + date1text + + + + + date2text + + + + + + + + + + 
series_field_entryseries_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + subject_field_entrysubject_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + title_field_entrytitle_field_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + sourcebigint + + + + + + NOT NULL; + + + + + biblio.record_entry + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + valuetext + + + NOT NULL; + + + + index_vectortsvector + + + NOT NULL; + + + + + + + + + Schema moneySchema moneybillable_xactbillable_xactFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + usrinteger + + + + + + NOT NULL; + + + + + actor.usr + + + xact_starttimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + xact_finishtimestamp with time zone + + + + + unrecoveredboolean + + + + + + + + + + billable_xact_summarybillable_xact_summaryFieldData TypeConstraints and Referencesidbigint + + + + + usrinteger + + + + + xact_starttimestamp with time zone + + + + + xact_finishtimestamp with time zone + + + + + total_paidnumeric + + + + + last_payment_tstimestamp with time zone + + + + + last_payment_notetext + + + + + last_payment_typename + + + + + total_owednumeric + + + + + last_billing_tstimestamp with time zone + + + + + last_billing_notetext + + + + + last_billing_typetext + + + + + balance_owednumeric + + + + + xact_typename + + + + + + + + + + 
billable_xact_summary_location_view
    id                 bigint
    usr                integer
    xact_start         timestamp with time zone
    xact_finish        timestamp with time zone
    total_paid         numeric
    last_payment_ts    timestamp with time zone
    last_payment_note  text
    last_payment_type  name
    total_owed         numeric
    last_billing_ts    timestamp with time zone
    last_billing_note  text
    last_billing_type  text
    balance_owed       numeric
    xact_type          name
    billing_location   integer

billable_xact_with_void_summary
    id                 bigint
    usr                integer
    xact_start         timestamp with time zone
    xact_finish        timestamp with time zone
    total_paid         numeric
    last_payment_ts    timestamp with time zone
    last_payment_note  text
    last_payment_type  name
    total_owed         numeric
    last_billing_ts    timestamp with time zone
    last_billing_note  text
    last_billing_type  text
    balance_owed       numeric
    xact_type          name

billing
    id              bigserial                  PRIMARY KEY
    xact            bigint                     NOT NULL
    billing_ts      timestamp with time zone   NOT NULL; DEFAULT now()
    voided          boolean                    NOT NULL; DEFAULT false
    voider          integer
    void_time       timestamp with time zone
    amount          numeric(6,2)               NOT NULL
    billing_type    text                       NOT NULL
    btype           integer                    NOT NULL; REFERENCES config.billing_type
    note            text

bnm_desk_payment
    id                bigserial                 PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL
    cash_drawer       integer                   REFERENCES actor.workstation

bnm_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL

bnm_payment_view
    id                bigint
    xact              bigint
    payment_ts        timestamp with time zone
    voided            boolean
    amount            numeric(6,2)
    note              text
    amount_collected  numeric(6,2)
    accepting_usr     integer
    payment_type      name

cash_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL
    cash_drawer       integer

cashdrawer_payment_view
    org_unit          integer
    cashdrawer        integer
    payment_type      name
    payment_ts        timestamp with time zone
    amount            numeric(6,2)
    voided            boolean
    note              text
check_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL
    cash_drawer       integer
    check_number      text                      NOT NULL

collections_tracker
    id                bigserial                 PRIMARY KEY
    usr               integer                   NOT NULL; REFERENCES actor.usr
    collector         integer                   NOT NULL; REFERENCES actor.usr
    location          integer                   NOT NULL; REFERENCES actor.org_unit
    enter_time        timestamp with time zone

credit_card_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL
    cash_drawer       integer
    cc_type           text
    cc_number         text
    cc_processor      text
    cc_first_name     text
    cc_last_name      text
    expire_month      integer
    expire_year       integer
    approval_code     text

credit_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL

desk_payment_view
    id                bigint
    xact              bigint
    payment_ts        timestamp with time zone
    voided            boolean
    amount            numeric(6,2)
    note              text
    amount_collected  numeric(6,2)
    accepting_usr     integer
    cash_drawer       integer
    payment_type      name

forgive_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL

goods_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL

grocery
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.billable_xact_id_seq'::regclass)
    usr               integer                   NOT NULL
    xact_start        timestamp with time zone  NOT NULL; DEFAULT now()
    xact_finish       timestamp with time zone
    unrecovered       boolean
    billing_location  integer                   NOT NULL
    note              text

materialized_billable_xact_summary
    id                 bigint                   PRIMARY KEY
    usr                integer
    xact_start         timestamp with time zone
    xact_finish        timestamp with time zone
    total_paid         numeric
    last_payment_ts    timestamp with time zone
    last_payment_note  text
    last_payment_type  name
    total_owed         numeric
    last_billing_ts    timestamp with time zone
    last_billing_note  text
    last_billing_type  text
    balance_owed       numeric
    xact_type          name

non_drawer_payment_view
    id                bigint
    xact              bigint
    payment_ts        timestamp with time zone
    voided            boolean
    amount            numeric(6,2)
    note              text
    amount_collected  numeric(6,2)
    accepting_usr     integer
    payment_type      name

open_billable_xact_summary
    id                 bigint
    usr                integer
    xact_start         timestamp with time zone
    xact_finish        timestamp with time zone
    total_paid         numeric
    last_payment_ts    timestamp with time zone
    last_payment_note  text
    last_payment_type  name
    total_owed         numeric
    last_billing_ts    timestamp with time zone
    last_billing_note  text
    last_billing_type  text
    balance_owed       numeric
    xact_type          name
    billing_location   integer

open_transaction_billing_summary
    xact               bigint
    last_billing_type  text
    last_billing_note  text
    last_billing_ts    timestamp with time zone
    total_owed         numeric

open_transaction_billing_type_summary
    xact               bigint
    last_billing_type  text
    last_billing_note  text
    last_billing_ts    timestamp with time zone
    total_owed         numeric

open_transaction_payment_summary
    xact               bigint
    last_payment_type  name
    last_payment_note  text
    last_payment_ts    timestamp with time zone
    total_paid         numeric

open_usr_circulation_summary
    usr                integer
    total_paid         numeric
    total_owed         numeric
    balance_owed       numeric

open_usr_summary
    usr                integer
    total_paid         numeric
    total_owed         numeric
    balance_owed       numeric

payment
    id              bigserial                  PRIMARY KEY
    xact            bigint                     NOT NULL
    payment_ts      timestamp with time zone   NOT NULL; DEFAULT now()
    voided          boolean                    NOT NULL; DEFAULT false
    amount          numeric(6,2)               NOT NULL
    note            text

payment_view
    id              bigint
    xact            bigint
    payment_ts      timestamp with time zone
    voided          boolean
    amount          numeric(6,2)
    note            text
    payment_type    name

transaction_billing_summary
    xact               bigint
    last_billing_type  text
    last_billing_note  text
    last_billing_ts    timestamp with time zone
    total_owed         numeric

transaction_billing_type_summary
    xact               bigint
    last_billing_type  text
    last_billing_note  text
    last_billing_ts    timestamp with time zone
    total_owed         numeric
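The summary views above roll billings and payments up per transaction and per patron, so balances never need to be recomputed from the raw money.billing and money.payment rows. As an illustrative sketch only (assuming a populated Evergreen database), outstanding patron balances can be read straight from one of these views:

```sql
-- Illustrative only: patrons with an open balance, from the
-- open_usr_summary view documented above.
SELECT usr, total_owed, total_paid, balance_owed
  FROM money.open_usr_summary
 WHERE balance_owed > 0
 ORDER BY balance_owed DESC;
```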
transaction_billing_with_void_summary
    xact               bigint
    last_billing_type  text
    last_billing_note  text
    last_billing_ts    timestamp with time zone
    total_owed         numeric

transaction_payment_summary
    xact               bigint
    last_payment_type  name
    last_payment_note  text
    last_payment_ts    timestamp with time zone
    total_paid         numeric

transaction_payment_with_void_summary
    xact               bigint
    last_payment_type  name
    last_payment_note  text
    last_payment_ts    timestamp with time zone
    total_paid         numeric

usr_circulation_summary
    usr                integer
    total_paid         numeric
    total_owed         numeric
    balance_owed       numeric

usr_summary
    usr                integer
    total_paid         numeric
    total_owed         numeric
    balance_owed       numeric

work_payment
    id                bigint                    PRIMARY KEY; DEFAULT nextval('money.payment_id_seq'::regclass)
    xact              bigint                    NOT NULL
    payment_ts        timestamp with time zone  NOT NULL; DEFAULT now()
    voided            boolean                   NOT NULL; DEFAULT false
    amount            numeric(6,2)              NOT NULL
    note              text
    amount_collected  numeric(6,2)              NOT NULL
    accepting_usr     integer                   NOT NULL

Schema offline

script
    id              serial     PRIMARY KEY
    session         text       NOT NULL
    requestor       integer    NOT NULL
    create_time     integer    NOT NULL
    workstation     text       NOT NULL
    logfile         text       NOT NULL
    time_delta      integer    NOT NULL
    count           integer    NOT NULL

session
    key             text       PRIMARY KEY
    org             integer    NOT NULL
    description     text
    creator         integer    NOT NULL
    create_time     integer    NOT NULL
    in_process      integer    NOT NULL
    start_time      integer
    end_time        integer
    num_complete    integer    NOT NULL

Schema permission

grp_penalty_threshold
    id              serial        PRIMARY KEY
    grp             integer       UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
    org_unit        integer       UNIQUE#1; NOT NULL; REFERENCES actor.org_unit
    penalty         integer       UNIQUE#1; NOT NULL; REFERENCES config.standing_penalty
    threshold       numeric(8,2)  NOT NULL

grp_perm_map
    id              serial        PRIMARY KEY
    grp             integer       UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree
    perm            integer       UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
    depth           integer       NOT NULL
    grantable       boolean       NOT NULL; DEFAULT false

grp_tree
    id              serial        PRIMARY KEY
    name            text          UNIQUE; NOT NULL
    parent          integer       REFERENCES permission.grp_tree
    usergroup       boolean       NOT NULL; DEFAULT true
    perm_interval   interval      NOT NULL; DEFAULT '3 years'::interval
    description     text
    application_perm  text

    Tables referencing permission.grp_tree via Foreign Key Constraints:
    • actor.usr
    • config.circ_matrix_matchpoint
    • config.hold_matrix_matchpoint
    • permission.grp_penalty_threshold
    • permission.grp_perm_map
    • permission.grp_tree
    • permission.usr_grp_map

perm_list
    id              serial        PRIMARY KEY
    code            text          UNIQUE; NOT NULL
    description     text

    Tables referencing permission.perm_list via Foreign Key Constraints:
    • config.org_unit_setting_type
    • permission.grp_perm_map
    • permission.usr_object_perm_map
    • permission.usr_perm_map

usr_grp_map
    id              serial        PRIMARY KEY
    usr             integer       UNIQUE#1; NOT NULL; REFERENCES actor.usr
    grp             integer       UNIQUE#1; NOT NULL; REFERENCES permission.grp_tree

usr_object_perm_map
    id              serial        PRIMARY KEY
    usr             integer       UNIQUE#1; NOT NULL; REFERENCES actor.usr
    perm            integer       UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
    object_type     text          UNIQUE#1; NOT NULL
    object_id       text          UNIQUE#1; NOT NULL
    grantable       boolean       NOT NULL; DEFAULT false

usr_perm_map
    id              serial        PRIMARY KEY
    usr             integer       UNIQUE#1; NOT NULL; REFERENCES actor.usr
    perm            integer       UNIQUE#1; NOT NULL; REFERENCES permission.perm_list
    depth           integer       NOT NULL
    grantable       boolean       NOT NULL; DEFAULT false

usr_work_ou_map
    id              serial        PRIMARY KEY
    usr             integer       UNIQUE#1; NOT NULL; REFERENCES actor.usr
    work_ou         integer       UNIQUE#1; NOT NULL; REFERENCES actor.org_unit

Schema public

Schema query

bind_variable
    name            text          PRIMARY KEY
    type            text          NOT NULL
    description     text          NOT NULL
    default_value   text
    label           text          NOT NULL

    Constraints on bind_variable:
    bind_variable_type CHECK ((type = ANY (ARRAY['string'::text, 'number'::text, 'string_list'::text, 'number_list'::text])))

    Tables referencing query.bind_variable via Foreign Key Constraints:
    • query.expression

case_branch
    id              serial        PRIMARY KEY
    parent_expr     integer       UNIQUE#1; NOT NULL; REFERENCES query.expression
    seq_no          integer       UNIQUE#1; NOT NULL
    condition       integer       REFERENCES query.expression
    result          integer       NOT NULL; REFERENCES query.expression

datatype
    id              serial        PRIMARY KEY
    datatype_name   text          UNIQUE; NOT NULL
    is_numeric      boolean       NOT NULL; DEFAULT false
    is_composite    boolean       NOT NULL; DEFAULT false

    Constraints on datatype:
    qdt_comp_not_num CHECK (((is_numeric IS FALSE) OR (is_composite IS FALSE)))

    Tables referencing query.datatype via Foreign Key Constraints:
    • query.expression
    • query.function_param_def
    • query.function_sig
    • query.record_column
    • query.subfield

expr_xbet
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; negate  boolean

expr_xbind
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; bind_variable  text

expr_xbool
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; literal  text; negate  boolean

expr_xcase
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; negate  boolean

expr_xcast
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; cast_type  integer; negate  boolean

expr_xcol
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; table_alias  text; column_name  text; negate  boolean

expr_xex
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; subquery  integer; negate  boolean

expr_xfunc
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; column_name  text; function_id  integer; negate  boolean

expr_xin
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; subquery  integer; negate  boolean

expr_xisnull
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; negate  boolean

expr_xnull
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; negate  boolean

expr_xnum
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; literal  text

expr_xop
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; left_operand  integer; operator  text; right_operand  integer; negate  boolean

expr_xser
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; operator  text; negate  boolean

expr_xstr
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; literal  text

expr_xsubq
    id  integer; parenthesize  boolean; parent_expr  integer; seq_no  integer; subquery  integer; negate  boolean

expression
    id              serial        PRIMARY KEY
    type            text          NOT NULL
    parenthesize    boolean       NOT NULL; DEFAULT false
    parent_expr     integer       REFERENCES query.expression
    seq_no          integer       NOT NULL; DEFAULT 1
    literal         text
    table_alias     text
    column_name     text
    left_operand    integer       REFERENCES query.expression
    operator        text
    right_operand   integer       REFERENCES query.expression
    function_id     integer       REFERENCES query.function_sig
    subquery        integer       REFERENCES query.stored_query
    cast_type       integer       REFERENCES query.datatype
    negate          boolean       NOT NULL; DEFAULT false
    bind_variable   text          REFERENCES query.bind_variable

    Constraints on expression:
    expression_type CHECK ((type = ANY (ARRAY['xbet'::text, 'xbind'::text, 'xbool'::text, 'xcase'::text, 'xcast'::text, 'xcol'::text, 'xex'::text, 'xfunc'::text, 'xin'::text, 'xisnull'::text, 'xnull'::text, 'xnum'::text, 'xop'::text, 'xser'::text, 'xstr'::text, 'xsubq'::text])))

    Tables referencing query.expression via Foreign Key Constraints:
    • query.case_branch
    • query.expression
    • query.from_relation
    • query.order_by_item
    • query.select_item
    • query.stored_query

from_relation
    id               serial       PRIMARY KEY
    type             text         NOT NULL
    table_name       text
    class_name       text
    subquery         integer      REFERENCES query.stored_query
    function_call    integer      REFERENCES query.expression
    table_alias      text
    parent_relation  integer      REFERENCES query.from_relation
    seq_no           integer      NOT NULL; DEFAULT 1
    join_type        text
    on_clause        integer      REFERENCES query.expression

    Constraints on from_relation:
    good_join_type CHECK (((join_type IS NULL) OR (join_type = ANY (ARRAY['INNER'::text, 'LEFT'::text, 'RIGHT'::text, 'FULL'::text]))))
    join_or_core CHECK (((((parent_relation IS NULL) AND (join_type IS NULL)) AND (on_clause IS NULL)) OR (((parent_relation IS NOT NULL) AND (join_type IS NOT NULL)) AND (on_clause IS NOT NULL))))
    relation_type CHECK ((type = ANY (ARRAY['RELATION'::text, 'SUBQUERY'::text, 'FUNCTION'::text])))

    Tables referencing query.from_relation via Foreign Key Constraints:
    • query.from_relation
    • query.record_column
    • query.stored_query

function_param_def
    id              serial        PRIMARY KEY
    function_id     integer       UNIQUE#1; NOT NULL; REFERENCES query.function_sig
    seq_no          integer       UNIQUE#1; NOT NULL
    datatype        integer       NOT NULL; REFERENCES query.datatype

    Constraints on function_param_def:
    qfpd_pos_seq_no CHECK ((seq_no > 0))

function_sig
    id              serial        PRIMARY KEY
    function_name   text          NOT NULL
    return_type     integer       REFERENCES query.datatype
    is_aggregate    boolean       NOT NULL; DEFAULT false

    Constraints on function_sig:
    qfd_rtn_or_aggr CHECK (((return_type IS NULL) OR (is_aggregate = false)))

    Tables referencing query.function_sig via Foreign Key Constraints:
    • query.expression
    • query.function_param_def

order_by_item
    id              serial        PRIMARY KEY
    stored_query    integer       UNIQUE#1; NOT NULL; REFERENCES query.stored_query
    seq_no          integer       UNIQUE#1; NOT NULL
    expression      integer       NOT NULL; REFERENCES query.expression

query_sequence
    id              serial        PRIMARY KEY
    parent_query    integer       UNIQUE#1; NOT NULL; REFERENCES query.stored_query
    seq_no          integer       UNIQUE#1; NOT NULL
    child_query     integer       NOT NULL; REFERENCES query.stored_query

record_column
    id              serial        PRIMARY KEY
    from_relation   integer       UNIQUE#1; NOT NULL; REFERENCES query.from_relation
    seq_no          integer       UNIQUE#1; NOT NULL
    column_name     text          NOT NULL
    column_type     integer       NOT NULL; REFERENCES query.datatype

select_item
    id              serial        PRIMARY KEY
    stored_query    integer       UNIQUE#1; NOT NULL; REFERENCES query.stored_query
    seq_no          integer       UNIQUE#1; NOT NULL
    expression      integer       NOT NULL; REFERENCES query.expression
    column_alias    text
    grouped_by      boolean       NOT NULL; DEFAULT false

stored_query
    id              serial        PRIMARY KEY
    type            text          NOT NULL
    use_all         boolean       NOT NULL; DEFAULT false
    use_distinct    boolean       NOT NULL; DEFAULT false
    from_clause     integer       REFERENCES query.from_relation
    where_clause    integer       REFERENCES query.expression
    having_clause   integer       REFERENCES query.expression
    limit_count     integer       REFERENCES query.expression
    offset_count    integer       REFERENCES query.expression

    Constraints on stored_query:
    query_type CHECK ((type = ANY (ARRAY['SELECT'::text, 'UNION'::text, 'INTERSECT'::text, 'EXCEPT'::text])))

    Tables referencing query.stored_query via Foreign Key Constraints:
    • action.fieldset
    • query.expression
    • query.from_relation
    • query.order_by_item
    • query.query_sequence
    • query.select_item

subfield
    id              serial        PRIMARY KEY
    composite_type  integer       UNIQUE#1; NOT NULL; REFERENCES query.datatype
    seq_no          integer       UNIQUE#1; NOT NULL
    subfield_type   integer       NOT NULL; REFERENCES query.datatype

    Constraints on subfield:
    qsf_pos_seq_no CHECK ((seq_no > 0))

Schema reporter

circ_type
    id              bigint
    type            text

currently_running
    id                   integer
    runner_barcode       text
    name                 text
    run_time             timestamp with time zone
    scheduled_wait_time  interval
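The query schema tables above decompose a SQL statement into rows: query.stored_query is the statement, query.select_item its output columns, and query.expression the expression tree each column evaluates. As an illustrative sketch only (assuming a populated Evergreen database; the id value 1 is a placeholder), the column list of one stored query can be reassembled like this:

```sql
-- Illustrative only: walk select_item to expression via the
-- foreign keys documented above to list a stored query's columns.
SELECT si.seq_no, si.column_alias, e.type, e.column_name
  FROM query.select_item si
  JOIN query.expression e ON e.id = si.expression
 WHERE si.stored_query = 1
 ORDER BY si.seq_no;
```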
demographic
    id                integer
    dob               timestamp with time zone
    general_division  text

hold_request_record
    id              integer
    target          bigint
    hold_type       text
    bib_record      bigint

materialized_simple_record
    id              bigint      PRIMARY KEY
    fingerprint     text
    quality         integer
    tcn_source      text
    tcn_value       text
    title           text
    author          text
    publisher       text
    pubdate         text
    isbn            text[]
    issn            text[]

old_super_simple_record
    id              bigint
    fingerprint     text
    quality         integer
    tcn_source      text
    tcn_value       text
    title           text
    author          text
    publisher       text
    pubdate         text
    isbn            text[]
    issn            text[]

output_folder
    id              serial                     PRIMARY KEY
    parent          integer                    REFERENCES reporter.output_folder
    owner           integer                    NOT NULL; REFERENCES actor.usr
    create_time     timestamp with time zone   NOT NULL; DEFAULT now()
    name            text                       NOT NULL
    shared          boolean                    NOT NULL; DEFAULT false
    share_with      integer                    REFERENCES actor.org_unit

    Tables referencing reporter.output_folder via Foreign Key Constraints:
    • reporter.output_folder
    • reporter.schedule

overdue_circs
    id                   bigint
    usr                  integer
    xact_start           timestamp with time zone
    xact_finish          timestamp with time zone
    unrecovered          boolean
    target_copy          bigint
    circ_lib             integer
    circ_staff           integer
    checkin_staff        integer
    checkin_lib          integer
    renewal_remaining    integer
    due_date             timestamp with time zone
    stop_fines_time      timestamp with time zone
    checkin_time         timestamp with time zone
    create_time          timestamp with time zone
    duration             interval
    fine_interval        interval
    recurring_fine       numeric(6,2)
    max_fine             numeric(6,2)
    phone_renewal        boolean
    desk_renewal         boolean
    opac_renewal         boolean
    duration_rule        text
    recurring_fine_rule  text
    max_fine_rule        text
    stop_fines           text
    workstation          integer
    checkin_workstation  integer
    checkin_scan_time    timestamp with time zone
    parent_circ          bigint

overdue_reports
    id                   integer
    runner_barcode       text
    name                 text
    run_time             timestamp with time zone
    scheduled_wait_time  interval

pending_reports
    id                   integer
    runner_barcode       text
    name                 text
    run_time             timestamp with time zone
    scheduled_wait_time  interval

report
    id              serial                     PRIMARY KEY
    owner           integer                    NOT NULL; REFERENCES actor.usr
    create_time     timestamp with time zone   NOT NULL; DEFAULT now()
    name            text                       NOT NULL; DEFAULT ''::text
    description     text                       NOT NULL; DEFAULT ''::text
    template        integer                    NOT NULL; REFERENCES reporter.template
    data            text                       NOT NULL
    folder          integer                    NOT NULL; REFERENCES reporter.report_folder
    recur           boolean                    NOT NULL; DEFAULT false
    recurrence      interval

    Tables referencing reporter.report via Foreign Key Constraints:
    • reporter.schedule

report_folder
    id              serial                     PRIMARY KEY
    parent          integer                    REFERENCES reporter.report_folder
    owner           integer                    NOT NULL; REFERENCES actor.usr
    create_time     timestamp with time zone   NOT NULL; DEFAULT now()
    name            text                       NOT NULL
    shared          boolean                    NOT NULL; DEFAULT false
    share_with      integer                    REFERENCES actor.org_unit

    Tables referencing reporter.report_folder via Foreign Key Constraints:
    • reporter.report
    • reporter.report_folder

schedule
    id              serial                     PRIMARY KEY
    report          integer                    NOT NULL; REFERENCES reporter.report
    folder          integer                    NOT NULL; REFERENCES reporter.output_folder
    runner          integer                    NOT NULL; REFERENCES actor.usr
    run_time        timestamp with time zone   NOT NULL; DEFAULT now()
    start_time      timestamp with time zone
    complete_time   timestamp with time zone
    email           text
    excel_format    boolean                    NOT NULL; DEFAULT true
    html_format     boolean                    NOT NULL; DEFAULT true
    csv_format      boolean                    NOT NULL; DEFAULT true
    chart_pie       boolean                    NOT NULL; DEFAULT false
    chart_bar       boolean                    NOT NULL; DEFAULT false
    chart_line      boolean                    NOT NULL; DEFAULT false
    error_code      integer
    error_text      text

simple_record
    id                  bigint
    metarecord          bigint
    fingerprint         text
    quality             integer
    tcn_source          text
    tcn_value           text
    title               text
    uniform_title       text
    author              text
    publisher           text
    pubdate             text
    series_title        text
    series_statement    text
    summary             text
    isbn                text[]
    issn                text[]
    topic_subject       text[]
    geographic_subject  text[]
    genre               text[]
    name_subject        text[]
    corporate_subject   text[]
    external_uri        text[]

super_simple_record
    id              bigint
    fingerprint     text
    quality         integer
    tcn_source      text
    tcn_value       text
    title           text
    author          text
    publisher       text
    pubdate         text
    isbn            text[]
    issn            text[]

template
    id              serial                     PRIMARY KEY
    owner           integer                    NOT NULL; REFERENCES actor.usr
    create_time     timestamp with time zone   NOT NULL; DEFAULT now()
    name            text                       NOT NULL
    description     text                       NOT NULL
    data            text                       NOT NULL
    folder          integer                    NOT NULL; REFERENCES reporter.template_folder

    Tables referencing reporter.template via Foreign Key Constraints:
    • reporter.report

template_folder
    id              serial                     PRIMARY KEY
    parent          integer                    REFERENCES reporter.template_folder
    owner           integer                    NOT NULL; REFERENCES actor.usr
    create_time     timestamp with time zone   NOT NULL; DEFAULT now()
    name            text                       NOT NULL
    shared          boolean                    NOT NULL; DEFAULT false
    share_with      integer                    REFERENCES actor.org_unit

    Tables referencing reporter.template_folder via Foreign Key Constraints:
    • reporter.template
    • reporter.template_folder

xact_billing_totals
    xact            bigint
    unvoided        numeric
    voided          numeric
    total           numeric

xact_paid_totals
    xact            bigint
    unvoided        numeric
    voided          numeric
    total           numeric

Schema search

relevance_adjustment
    …
Referencesidserial + + + PRIMARY KEY + + + + + + + + + activeboolean + + + NOT NULL; + + + DEFAULT true; + + + fieldinteger + + + + + + NOT NULL; + + + + + config.metabib_field + + + bump_typetext + + + NOT NULL; + + + + multipliernumeric + + + NOT NULL; + + + DEFAULT 1.0; + + + + + + Constraints on relevance_adjustmentrelevance_adjustment _bump_type_checkCHECK ((bump_type = ANY (ARRAY['word_order'::text, 'first_word'::text, 'full_match'::text]))) + + + + + + Schema serialSchema serialbasic_summarybasic_summaryFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + distributioninteger + + + + + + NOT NULL; + + + + + serial.distribution + + + generated_coveragetext + + + NOT NULL; + + + + textual_holdingstext + + + + + show_generatedboolean + + + NOT NULL; + + + DEFAULT true; + + + + + + + + caption_and_patterncaption_and_patternFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + subscriptioninteger + + + + + + NOT NULL; + + + + + serial.subscription + + + typetext + + + NOT NULL; + + + + create_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + start_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + end_datetimestamp with time zone + + + + + activeboolean + + + NOT NULL; + + + DEFAULT false; + + + pattern_codetext + + + NOT NULL; + + + + enum_1text + + + + + enum_2text + + + + + enum_3text + + + + + enum_4text + + + + + enum_5text + + + + + enum_6text + + + + + chron_1text + + + + + chron_2text + + + + + chron_3text + + + + + chron_4text + + + + + chron_5text + + + + + + + + Constraints on caption_and_patterncap_typeCHECK ((type = ANY (ARRAY['basic'::text, 'supplement'::text, 'index'::text]))) + + + + + + Tables referencing serial.issuance via Foreign Key Constraints + •serial.issuance + + + + + distributiondistributionFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + record_entrybigint + + + + + + + + + serial.record_entry + + 
+ summary_methodtext + + + + + subscriptioninteger + + + + + + NOT NULL; + + + + + serial.subscription + + + holding_libinteger + + + + + + NOT NULL; + + + + + actor.org_unit + + + labeltext + + + NOT NULL; + + + + receive_call_numberbigint + + + + + + + + + asset.call_number + + + receive_unit_templateinteger + + + + + + + + + asset.copy_template + + + bind_call_numberbigint + + + + + + + + + asset.call_number + + + bind_unit_templateinteger + + + + + + + + + asset.copy_template + + + unit_label_prefixtext + + + + + unit_label_suffixtext + + + + + + + + Constraints on distributionsdist_summary_method_checkCHECK (((summary_method IS NULL) OR (summary_method = ANY (ARRAY['add_to_sre'::text, 'merge_with_sre'::text, 'use_sre_only'::text, 'use_sdist_only'::text])))) + + + + + + Tables referencing serial.basic_summary via Foreign Key Constraints + •serial.basic_summary•serial.distribution_note•serial.index_summary•serial.stream•serial.supplement_summary + + + + + distribution_notedistribution_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + distributioninteger + + + + + + NOT NULL; + + + + + serial.distribution + + + creatorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + create_datetimestamp with time zone + + + + DEFAULT now(); + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + titletext + + + NOT NULL; + + + + valuetext + + + NOT NULL; + + + + + + + + + index_summaryindex_summaryFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + distributioninteger + + + + + + NOT NULL; + + + + + serial.distribution + + + generated_coveragetext + + + NOT NULL; + + + + textual_holdingstext + + + + + show_generatedboolean + + + NOT NULL; + + + DEFAULT true; + + + + + + + + issuanceissuanceFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + creatorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + editorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + 
create_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + edit_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + subscriptioninteger + + + + + + NOT NULL; + + + + + serial.subscription + + + labeltext + + + + + date_publishedtimestamp with time zone + + + + + caption_and_patterninteger + + + + + + + + + serial.caption_and_pattern + + + holding_codetext + + + + + holding_typetext + + + + + holding_link_idinteger + + + + + + + + Constraints on issuancevalid_holding_typeCHECK (((holding_type IS NULL) OR (holding_type = ANY (ARRAY['basic'::text, 'supplement'::text, 'index'::text])))) + + + + + + Tables referencing serial.item via Foreign Key Constraints + •serial.item + + + + + itemitemFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + creatorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + editorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + create_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + edit_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + issuanceinteger + + + + + + NOT NULL; + + + + + serial.issuance + + + streaminteger + + + + + + NOT NULL; + + + + + serial.stream + + + unitinteger + + + + + + + + + serial.unit + + + uriinteger + + + + + + + + + asset.uri + + + date_expectedtimestamp with time zone + + + + + date_receivedtimestamp with time zone + + + + + statustext + + + + DEFAULT 'Expected'::text; + + + shadowedboolean + + + NOT NULL; + + + DEFAULT false; + + + + + + Constraints on itemvalid_statusCHECK ((status = ANY (ARRAY['Bindery'::text, 'Bound'::text, 'Claimed'::text, 'Discarded'::text, 'Expected'::text, 'Not Held'::text, 'Not Published'::text, 'Received'::text]))) + + + + + + Tables referencing acq.serial_claim via Foreign Key Constraints + •acq.serial_claim•serial.item_note + + + + + item_noteitem_noteFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + iteminteger + + + + + + 
NOT NULL; + + + + + serial.item + + + creatorinteger + + + + + + NOT NULL; + + + + + actor.usr + + + create_datetimestamp with time zone + + + + DEFAULT now(); + + + pubboolean + + + NOT NULL; + + + DEFAULT false; + + + titletext + + + NOT NULL; + + + + valuetext + + + NOT NULL; + + + + + + + + + record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial + + + PRIMARY KEY + + + + + + + + + recordbigint + + + + + + + + + biblio.record_entry + + + owning_libinteger + + + + + + NOT NULL; + + + DEFAULT 1; + + + + actor.org_unit + + + creatorinteger + + + NOT NULL; + + + DEFAULT 1; + + + editorinteger + + + NOT NULL; + + + DEFAULT 1; + + + sourceinteger + + + + + create_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + edit_datetimestamp with time zone + + + NOT NULL; + + + DEFAULT now(); + + + activeboolean + + + NOT NULL; + + + DEFAULT true; + + + deletedboolean + + + NOT NULL; + + + DEFAULT false; + + + marctext + + + + + last_xact_idtext + + + NOT NULL; + + + + + + + + + Tables referencing serial.distribution via Foreign Key Constraints + •serial.distribution + + + + + routing_list_userrouting_list_userFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + streaminteger + + + + UNIQUE#1 + ; + + + + + + + NOT NULL; + + + + + + + serial.stream + + + posinteger + + + + UNIQUE#1 + ; + + + + NOT NULL; + + + DEFAULT 1; + + + + + readerinteger + + + + + + + + + actor.usr + + + departmenttext + + + + + notetext + + + + + + + + Constraints on routing_list_userreader_or_deptCHECK ((((reader IS NOT NULL) AND (department IS NULL)) OR ((reader IS NULL) AND (department IS NOT NULL)))) + + + + + + streamstreamFieldData TypeConstraints and Referencesidserial + + + PRIMARY KEY + + + + + + + + + distributioninteger + + + + + + NOT NULL; + + + + + serial.distribution + + + routing_labeltext + + + + + + + + + + Tables referencing serial.item via Foreign Key Constraints + •serial.item•serial.routing_list_user + + + + + 
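The reader_or_dept constraint above requires each routing list entry to name exactly one of a reader or a department. As an illustration only (this helper is not part of Evergreen; the database itself enforces the rule), the same check can be sketched in Python when preparing rows for serial.routing_list_user:

```python
# Illustrative sketch: mirrors serial.routing_list_user's reader_or_dept
# CHECK constraint, which allows exactly one of reader / department.
from typing import Optional

def routing_entry_is_valid(reader: Optional[int], department: Optional[str]) -> bool:
    """True when exactly one of reader (an actor.usr id) or department is set."""
    return (reader is not None) != (department is not None)

print(routing_entry_is_valid(42, None))     # True: a reader, no department
print(routing_entry_is_valid(None, "ILL"))  # True: a department, no reader
print(routing_entry_is_valid(None, None))   # False: would violate the constraint
```

Rows that fail this exclusive-or test would be rejected by PostgreSQL on insert.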
subscription
  Field                  Data Type                  Constraints and References
  id                     serial                     PRIMARY KEY
  owning_lib             integer                    NOT NULL; DEFAULT 1; references actor.org_unit
  start_date             timestamp with time zone   NOT NULL
  end_date               timestamp with time zone
  record_entry           bigint                     references biblio.record_entry
  expected_date_offset   interval

  Tables referencing serial.caption_and_pattern via Foreign Key Constraints:
  • serial.caption_and_pattern
  • serial.distribution
  • serial.issuance
  • serial.subscription_note

subscription_note
  Field                  Data Type                  Constraints and References
  id                     serial                     PRIMARY KEY
  subscription           integer                    NOT NULL; references serial.subscription
  creator                integer                    NOT NULL; references actor.usr
  create_date            timestamp with time zone   DEFAULT now()
  pub                    boolean                    NOT NULL; DEFAULT false
  title                  text                       NOT NULL
  value                  text                       NOT NULL

supplement_summary
  Field                  Data Type                  Constraints and References
  id                     serial                     PRIMARY KEY
  distribution           integer                    NOT NULL; references serial.distribution
  generated_coverage     text                       NOT NULL
  textual_holdings       text
  show_generated         boolean                    NOT NULL; DEFAULT true

unit
  Field                  Data Type                  Constraints and References
  id                     bigint                     PRIMARY KEY; DEFAULT nextval('asset.copy_id_seq'::regclass)
  circ_lib               integer                    NOT NULL
  creator                bigint                     NOT NULL; references actor.usr
  call_number            bigint                     NOT NULL; references asset.call_number
  editor                 bigint                     NOT NULL; references actor.usr
  create_date            timestamp with time zone   DEFAULT now()
  edit_date              timestamp with time zone   DEFAULT now()
  copy_number            integer
  status                 integer                    NOT NULL
  location               integer                    NOT NULL; DEFAULT 1
  loan_duration          integer                    NOT NULL
  fine_level             integer                    NOT NULL
  age_protect            integer
  circulate              boolean                    NOT NULL; DEFAULT true
  deposit                boolean                    NOT NULL; DEFAULT false
  ref                    boolean                    NOT NULL; DEFAULT false
  holdable               boolean                    NOT NULL; DEFAULT true
  deposit_amount         numeric(6,2)               NOT NULL; DEFAULT 0.00
  price                  numeric(8,2)
  barcode                text                       NOT NULL
  circ_modifier          text
  circ_as_type           text
  dummy_title            text
  dummy_author           text
  alert_message          text
  opac_visible           boolean                    NOT NULL; DEFAULT true
  deleted                boolean                    NOT NULL; DEFAULT false
  floating               boolean                    NOT NULL; DEFAULT false
  dummy_isbn             text
  status_changed_time    timestamp with time zone
  mint_condition         boolean                    NOT NULL; DEFAULT true
  cost                   numeric(8,2)
  sort_key               text
  detailed_contents      text                       NOT NULL
  summary_contents       text                       NOT NULL

  Constraints on unit:
  copy_fine_level_check: CHECK ((fine_level = ANY (ARRAY[1, 2, 3])))
  copy_loan_duration_check: CHECK ((loan_duration = ANY (ARRAY[1, 2, 3])))

  Tables referencing serial.item via Foreign Key Constraints:
  • serial.item

Schema staging

billing_address_stage
  Field                  Data Type                  Constraints and References
  row_id                 bigint                     PRIMARY KEY; DEFAULT nextval('staging.mailing_address_stage_row_id_seq'::regclass)
  row_date               timestamp with time zone   DEFAULT now()
  usrname                text                       NOT NULL
  street1                text
  street2                text
  city                   text                       NOT NULL; DEFAULT ''::text
  state                  text                       NOT NULL; DEFAULT 'OK'::text
  country                text                       NOT NULL; DEFAULT 'US'::text
  post_code              text                       NOT NULL
  complete               boolean                    DEFAULT false
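The staging tables hold raw patron self-registration data; columns left unset fall back to the documented DEFAULT values. Below is a minimal Python sketch (a hypothetical helper, not an Evergreen API) of how a staging address row might be assembled with those defaults filled in:

```python
# Hypothetical helper: builds a dict shaped like a staging address-stage row,
# applying the DEFAULT values documented above for columns left unset.
DEFAULTS = {"city": "", "state": "OK", "country": "US", "complete": False}

def stage_mailing_address(usrname: str, post_code: str, **cols) -> dict:
    """usrname and post_code are NOT NULL; other columns default per the schema."""
    row = dict(DEFAULTS)
    row.update(cols)                                # caller-supplied columns
    row.update(usrname=usrname, post_code=post_code)
    return row

row = stage_mailing_address("jsmith", "30303", street1="123 Main St")
print(row["state"], row["country"], row["complete"])  # OK US False
```

Note the schema-level defaults of 'OK' and 'US' reflect the site that contributed the staging schema; a local installation would normally supply real values rather than rely on them.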
card_stage
  Field                  Data Type                  Constraints and References
  row_id                 bigserial                  PRIMARY KEY
  row_date               timestamp with time zone   DEFAULT now()
  usrname                text                       NOT NULL
  barcode                text                       NOT NULL
  complete               boolean                    DEFAULT false

mailing_address_stage
  Field                  Data Type                  Constraints and References
  row_id                 bigserial                  PRIMARY KEY
  row_date               timestamp with time zone   DEFAULT now()
  usrname                text                       NOT NULL
  street1                text
  street2                text
  city                   text                       NOT NULL; DEFAULT ''::text
  state                  text                       NOT NULL; DEFAULT 'OK'::text
  country                text                       NOT NULL; DEFAULT 'US'::text
  post_code              text                       NOT NULL
  complete               boolean                    DEFAULT false

statcat_stage
  Field                  Data Type                  Constraints and References
  row_id                 bigserial                  PRIMARY KEY
  row_date               timestamp with time zone   DEFAULT now()
  usrname                text                       NOT NULL
  statcat                text                       NOT NULL
  value                  text                       NOT NULL
  complete               boolean                    DEFAULT false

user_stage
  Field                  Data Type                  Constraints and References
  row_id                 bigserial                  PRIMARY KEY
  row_date               timestamp with time zone   DEFAULT now()
  usrname                text                       NOT NULL
  profile                text
  email                  text
  passwd                 text
  ident_type             integer                    DEFAULT 3
  first_given_name       text
  second_given_name      text
  family_name            text
  day_phone              text
  evening_phone          text
  home_ou                integer                    DEFAULT 2
  dob                    text
  complete               boolean                    DEFAULT false

Schema stats

fleshed_call_number
  Field                  Data Type
  id                     bigint
  creator                bigint
  create_date            timestamp with time zone
  editor                 bigint
  edit_date              timestamp with time zone
  record                 bigint
  owning_lib             integer
  label                  text
  deleted                boolean
  label_class            bigint
  label_sortkey          text
  create_date_day        date
  edit_date_day          date
  create_date_hour       timestamp with time zone
  edit_date_hour         timestamp with time zone
  item_lang              text
  item_type              text
  item_form              text

fleshed_circulation
  Field                  Data Type
  id                     bigint
  usr                    integer
  xact_start             timestamp with time zone
  xact_finish            timestamp with time zone
  unrecovered            boolean
  target_copy            bigint
  circ_lib               integer
  circ_staff             integer
  checkin_staff          integer
  checkin_lib            integer
  renewal_remaining      integer
  due_date               timestamp with time zone
  stop_fines_time        timestamp with time zone
  checkin_time           timestamp with time zone
  create_time            timestamp with time zone
  duration               interval
  fine_interval          interval
  recurring_fine         numeric(6,2)
  max_fine               numeric(6,2)
  phone_renewal          boolean
  desk_renewal           boolean
  opac_renewal           boolean
  duration_rule          text
  recurring_fine_rule    text
  max_fine_rule          text
  stop_fines             text
  workstation            integer
  checkin_workstation    integer
  checkin_scan_time      timestamp with time zone
  parent_circ            bigint
  start_date_day         date
  finish_date_day        date
  start_date_hour        timestamp with time zone
  finish_date_hour       timestamp with time zone
  call_number_label      text
  owning_lib             integer
  item_lang              text
  item_type              text
  item_form              text

fleshed_copy
  Field                  Data Type
  id                     bigint
  circ_lib               integer
  creator                bigint
  call_number            bigint
  editor                 bigint
  create_date            timestamp with time zone
  edit_date              timestamp with time zone
  copy_number            integer
  status                 integer
  location               integer
  loan_duration          integer
  fine_level             integer
  age_protect            integer
  circulate              boolean
  deposit                boolean
  ref                    boolean
  holdable               boolean
  deposit_amount         numeric(6,2)
  price                  numeric(8,2)
  barcode                text
  circ_modifier          text
  circ_as_type           text
  dummy_title            text
  dummy_author           text
  alert_message          text
  opac_visible           boolean
  deleted                boolean
  floating               boolean
  dummy_isbn             text
  status_changed_time    timestamp with time zone
  mint_condition         boolean
  cost                   numeric(8,2)
  create_date_day        date
  edit_date_day          date
  create_date_hour       timestamp with time zone
  edit_date_hour         timestamp with time zone
  call_number_label      text
  owning_lib             integer
  item_lang              text
  item_type              text
  item_form              text

Schema vandelay

authority_attr_definition
  Field                  Data Type                  Constraints and References
  id                     serial                     PRIMARY KEY
  code                   text                       UNIQUE; NOT NULL
  description            text
  xpath                  text                       NOT NULL
  remove                 text                       NOT NULL; DEFAULT ''::text
  ident                  boolean                    NOT NULL; DEFAULT false

  Tables referencing vandelay.queued_authority_record_attr via Foreign Key Constraints:
  • vandelay.queued_authority_record_attr

authority_match
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  matched_attr           integer                    references vandelay.queued_authority_record_attr
  queued_record          bigint                     references vandelay.queued_authority_record
  eg_record              bigint                     references authority.record_entry

authority_queue
  Field                  Data Type                  Constraints and References
  id                     bigint                     PRIMARY KEY; DEFAULT nextval('vandelay.queue_id_seq'::regclass)
  owner                  integer                    UNIQUE#1; NOT NULL
  name                   text                       UNIQUE#1; NOT NULL
  complete               boolean                    NOT NULL; DEFAULT false
  queue_type             text                       UNIQUE#1; NOT NULL; DEFAULT 'authority'::text

  Constraints on authority_queue:
  authority_queue_queue_type_check: CHECK ((queue_type = 'authority'::text))
  queue_queue_type_check: CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

  Tables referencing vandelay.queued_authority_record via Foreign Key Constraints:
  • vandelay.queued_authority_record

bib_attr_definition
  Field                  Data Type                  Constraints and References
  id                     serial                     PRIMARY KEY
  code                   text                       UNIQUE; NOT NULL
  description            text
  xpath                  text                       NOT NULL
  remove                 text                       NOT NULL; DEFAULT ''::text
  ident                  boolean                    NOT NULL; DEFAULT false

  Tables referencing vandelay.queued_bib_record_attr via Foreign Key Constraints:
  • vandelay.queued_bib_record_attr

bib_match
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  field_type             text                       NOT NULL
  matched_attr           integer                    references vandelay.queued_bib_record_attr
  queued_record          bigint                     references vandelay.queued_bib_record
  eg_record              bigint                     references biblio.record_entry

  Constraints on bib_match:
  bib_match_field_type_check: CHECK ((field_type = ANY (ARRAY['isbn'::text, 'tcn_value'::text, 'id'::text])))

bib_queue
  Field                  Data Type                  Constraints and References
  id                     bigint                     PRIMARY KEY; DEFAULT nextval('vandelay.queue_id_seq'::regclass)
  owner                  integer                    UNIQUE#1; NOT NULL
  name                   text                       UNIQUE#1; NOT NULL
  complete               boolean                    NOT NULL; DEFAULT false
  queue_type             text                       UNIQUE#1; NOT NULL; DEFAULT 'bib'::text
  item_attr_def          bigint                     references vandelay.import_item_attr_definition

  Constraints on bib_queue:
  bib_queue_queue_type_check: CHECK ((queue_type = 'bib'::text))
  queue_queue_type_check: CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

  Tables referencing vandelay.queued_bib_record via Foreign Key Constraints:
  • vandelay.queued_bib_record

import_bib_trash_fields
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  owner                  integer                    UNIQUE#1; NOT NULL; references actor.org_unit
  field                  text                       UNIQUE#1; NOT NULL

import_item
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  record                 bigint                     NOT NULL; references vandelay.queued_bib_record
  definition             bigint                     NOT NULL; references vandelay.import_item_attr_definition
  owning_lib             integer
  circ_lib               integer
  call_number            text
  copy_number            integer
  status                 integer
  location               integer
  circulate              boolean
  deposit                boolean
  deposit_amount         numeric(8,2)
  ref                    boolean
  holdable               boolean
  price                  numeric(8,2)
  barcode                text
  circ_modifier          text
  circ_as_type           text
  alert_message          text
  pub_note               text
  priv_note              text
  opac_visible           boolean

import_item_attr_definition
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  owner                  integer                    UNIQUE#1; NOT NULL; references actor.org_unit
  name                   text                       UNIQUE#1; NOT NULL
  tag                    text                       NOT NULL
  keep                   boolean                    NOT NULL; DEFAULT false
  owning_lib             text
  circ_lib               text
  call_number            text
  copy_number            text
  status                 text
  location               text
  circulate              text
  deposit                text
  deposit_amount         text
  ref                    text
  holdable               text
  price                  text
  barcode                text
  circ_modifier          text
  circ_as_type           text
  alert_message          text
  opac_visible           text
  pub_note_title         text
  pub_note               text
  priv_note_title        text
  priv_note              text

  Tables referencing vandelay.bib_queue via Foreign Key Constraints:
  • vandelay.bib_queue
  • vandelay.import_item

merge_profile
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  owner                  integer                    UNIQUE#1; NOT NULL; references actor.org_unit
  name                   text                       UNIQUE#1; NOT NULL
  add_spec               text
  replace_spec           text
  strip_spec             text
  preserve_spec          text

  Constraints on merge_profile:
  add_replace_strip_or_preserve: CHECK ((((preserve_spec IS NOT NULL) OR (replace_spec IS NOT NULL)) OR ((preserve_spec IS NULL) AND (replace_spec IS NULL))))

queue
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  owner                  integer                    UNIQUE#1; NOT NULL; references actor.usr
  name                   text                       UNIQUE#1; NOT NULL
  complete               boolean                    NOT NULL; DEFAULT false
  queue_type             text                       UNIQUE#1; NOT NULL; DEFAULT 'bib'::text

  Constraints on queue:
  queue_queue_type_check: CHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text])))

queued_authority_record
  Field                  Data Type                  Constraints and References
  id                     bigint                     PRIMARY KEY; DEFAULT nextval('vandelay.queued_record_id_seq'::regclass)
  create_time            timestamp with time zone   NOT NULL; DEFAULT now()
  import_time            timestamp with time zone
  purpose                text                       NOT NULL; DEFAULT 'import'::text
  marc                   text                       NOT NULL
  queue                  integer                    NOT NULL; references vandelay.authority_queue
  imported_as            integer                    references authority.record_entry

  Constraints on queued_authority_record:
  queued_record_purpose_check: CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

  Tables referencing vandelay.authority_match via Foreign Key Constraints:
  • vandelay.authority_match
  • vandelay.queued_authority_record_attr

queued_authority_record_attr
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  record                 bigint                     NOT NULL; references vandelay.queued_authority_record
  field                  integer                    NOT NULL; references vandelay.authority_attr_definition
  attr_value             text                       NOT NULL

  Tables referencing vandelay.authority_match via Foreign Key Constraints:
  • vandelay.authority_match

queued_bib_record
  Field                  Data Type                  Constraints and References
  id                     bigint                     PRIMARY KEY; DEFAULT nextval('vandelay.queued_record_id_seq'::regclass)
  create_time            timestamp with time zone   NOT NULL; DEFAULT now()
  import_time            timestamp with time zone
  purpose                text                       NOT NULL; DEFAULT 'import'::text
  marc                   text                       NOT NULL
  queue                  integer                    NOT NULL; references vandelay.bib_queue
  bib_source             integer                    references config.bib_source
  imported_as            bigint                     references biblio.record_entry

  Constraints on queued_bib_record:
  queued_record_purpose_check: CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

  Tables referencing vandelay.bib_match via Foreign Key Constraints:
  • vandelay.bib_match
  • vandelay.import_item
  • vandelay.queued_bib_record_attr

queued_bib_record_attr
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  record                 bigint                     NOT NULL; references vandelay.queued_bib_record
  field                  integer                    NOT NULL; references vandelay.bib_attr_definition
  attr_value             text                       NOT NULL

  Tables referencing vandelay.bib_match via Foreign Key Constraints:
  • vandelay.bib_match

queued_record
  Field                  Data Type                  Constraints and References
  id                     bigserial                  PRIMARY KEY
  create_time            timestamp with time zone   NOT NULL; DEFAULT now()
  import_time            timestamp with time zone
  purpose                text                       NOT NULL; DEFAULT 'import'::text
  marc                   text                       NOT NULL

  Constraints on queued_record:
  queued_record_purpose_check: CHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text])))

Appendix A. About this Documentation

Report any errors in this documentation using Launchpad.

About the Documentation Interest Group (DIG)

The Evergreen DIG was established in May 2009 at the first Evergreen International Conference, where members of the Evergreen community committed to developing single-source, standards-based documentation for Evergreen. Since then, the DIG has been actively working toward that goal.

Table A.1.
Evergreen DIG Participants

  Name — Organization
  Jeremy Buhler — SITKA
  Paula Burton — King County Library System
  Matt Carlson — King County Library System
  Sarah Childs — Hussey-Mayfield Memorial Public Library
  Anton Chuppin — Nova Scotia Provincial Library
  Marlene Coleman — Beaufort County Library
  Karen Collier — Kent County Public Library
  Shannon Dineen — SITKA
  George Duimovich — NRCan Library
  Jennifer Durham — Statesboro Regional Library System
  Jennifer Finney — Florence County Library
  Lynn Floyd — Anderson County Library
  Sally Fortin — Equinox Software
  Tina Ji — SITKA
  Catherine Lemmer — Indiana State Library
  Roma Matott — Pioneer Library System
  Andrea Neiman — Kent County Public Library
  Kevin Pischke — William Jessup University
  Tara Robertson — N/A
  Rod Schiffman — Alpha-G Consulting
  Steve Sheppard — Open
  Ben Shum — Bibliomation
  Robert Soulliere — Mohawk College
  Lindsay Stratton — Pioneer Library System
  Jenny Turner — PALS
  Repke de Vries — International Institute for Social History
  D. Ceabron Williams — Flint River Regional Library System
  Tigran Zargaryan — Fundamental Scientific Library of the National Academy of Sciences

Table A.2. Past DIG Participants

  Name — Organization
  Paul Weiss — Consultant/Sage Library System
  Karen Schneider — Equinox Software

Special thanks goes to:

•Jeremy Buhler and SITKA for providing DocBook style sheets, style guides, and large portions of content for this documentation.•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.•Mike Rylander, Grace Dunbar, Galen Charlton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar, and all the other folks at Equinox Software for contributing large portions of content on the wiki.

There have been many others who have contributed their time to the Book of Evergreen project. Without their contributions to this community-driven project, this documentation would not be possible.

How to Participate

Contributing to documentation is an excellent way to support Evergreen, even if you are new to documentation.
In fact, beginners often have a distinct advantage over the experts, more easily spotting the places where documentation is lacking or unclear.

We welcome your contributions to planning, writing, editing, testing, translating to DocBook, and other tasks. Whatever your background or experience, we are keen to have your help!

What you can do:
• Join the Evergreen documentation listserv: list.georgialibraries.org/mailman/listinfo/open-ils-documentation. This is the primary way we communicate with each other. Please send an email introducing yourself to the list.
• Add yourself to the participant list if you have an Evergreen DokuWiki account, or send a request to <docs@evergreen-ils.org>.
• Check out the documentation outline to see which areas need work, and let the DIG list know in which areas you would like to work.
• Review the documentation and report any errors or make suggestions using Launchpad.

Volunteer Roles

We are now looking for people to help produce the documentation. If you are interested in participating, email the DIG facilitators at <docs@evergreen-ils.org> or post on the documentation mailing list. We're looking for volunteers to work on the following:
• Writing: produce the documentation ("from scratch" and/or revised from existing materials). We're open to receiving content in any format, such as Word or Open Office, but of course would be most delighted with DocBook XML format.
• Testing: compare the documents with the functions they describe and ensure that the procedures accomplish the desired results. Even if you are not officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.
• XML conversion: convert existing documentation to DocBook format.
• Editorial review: ensure the documentation is clear and follows Evergreen DIG style guide conventions.
• Style and Design: edit the DocBook style sheets or post style tips and suggestions on the DIG list.
Appendix B. Getting More Information

Report errors in this documentation using Launchpad.

This documentation is just one way to learn about Evergreen and find solutions to Evergreen challenges. Below is a list of many other resources to help you find answers to almost any question you might have.

Evergreen Wiki - Loads of information and the main portal to the Evergreen community.

Evergreen mailing lists - These are excellent for initiating questions. There are several lists, including:
• General list - General inquiries regarding Evergreen. If unsure about which list to use, this is a good starting point.
• Developer list - Technical questions should be asked here, including questions regarding installation. Patches can also be submitted using this list, and developer communication takes place here as well.
• DIG list - This list is used for questions and feedback regarding this documentation, the Documentation Interest Group and other documentation-related ideas and issues.

Evergreen Blog - Great for getting general news and updates about Evergreen. It is also an interesting historical read, with entries dating back to the early beginnings of Evergreen.

Evergreen IRC channel - Allows live chat. Many developers hang out here and will try to field technical questions. This is often the quickest way to get a solution to a specific problem. Just remember that while the channel is open 24/7, there are times when no one is available in the channel. The most active times for the IRC channel seem to be weekday afternoons (Eastern Standard Time). There is also an archive of logs from the chat sessions available on the IRC page.

Evergreen related community blogs - Evergreen related blog entries from the community.
Resource Sharing Cooperative of Evergreen Libraries (RSCEL) - Provides some technical documents and a means for the Evergreen community to collaborate with other libraries.

List of current Evergreen libraries - Locate other libraries that are using Evergreen.

Glossary

Report errors in this documentation using Launchpad.

In this section we expand acronyms, define terms, and generally try to explain concepts used by Evergreen software.

A

Apache
Open-source web server software used to serve both static content and dynamic web pages in a secure and reliable way. More information is available at http://apache.org.

B

Bookbags
Bookbags are lists of items that can be used for any number of purposes. For example, to keep track of what books you have read, books you would like to read, to maintain a class reading list, to maintain a reading list for a book club, or to keep a list of books you would like for your birthday. There are an unlimited number of uses.

C

CentOS
A popular open-source operating system based on Red Hat Enterprise Linux (a.k.a. "RHEL") and often used for web servers. More information is available at http://www.centos.org.

Closure Compiler
A suite of open-source tools used to build web applications with Javascript; originally developed by Google. More information is available at http://code.google.com/closure/compiler/.

CPAN
An open-source archive of software modules written in Perl. More information is available at http://www.cpan.org.

D

Debian
One of the most popular open-source operating systems using the Linux kernel, providing over 25,000 useful precompiled software packages. Also known as Debian GNU/Linux.
More information is available at http://www.debian.org.

Domain name
A unique set of case-insensitive, alphanumeric strings separated by periods that are used to name organizations, web sites and addresses on the Internet (e.g.: www.esilibrary.com). Domain names can be reserved via third-party registration services, and can be associated with a unique IP address or suite of IP addresses.
See also: IP Address.

E

ejabberd
An open-source Jabber/XMPP instant messaging server that runs under popular operating systems (e.g., Mac OS X, GNU/Linux, and Microsoft Windows). One popular use is to provide XMPP messaging services for a Jabber domain across an extendable cluster of cheap, easily-replaced machine nodes. More information is available at http://www.ejabberd.im.
See also: Jabber, XMPP.

F

G

Gentoo
A popular open-source operating system built on the Linux kernel. More information is available at http://www.gentoo.org.

H

I

IP Address
(Internet Protocol address) A numerical label consisting of four numbers separated by periods (e.g., "192.168.1.15") assigned to individual members of networked computing systems. It uniquely identifies each system on the network and allows controlled communication between such systems. The numerical label scheme must adhere to a strictly defined naming convention that is currently defined and overseen by the Internet Corporation for Assigned Names and Numbers ("ICANN").

Item/copy Buckets
Virtual "containers" to use in batch processing of item or copy records.
They can be used to perform various cataloging/holdings maintenance tasks in batch.

J

Jabber
Now known as XMPP (eXtensible Messaging and Presence Protocol), it was originally named "Jabber".
See also: XMPP.

K

L

M

MARC
The MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form.

MARCXML
A framework for working with MARC data in an XML environment.

McCoy
An open-source application that allows add-on authors to provide secure updates to their users. More information is available at http://developer.mozilla.org/en/McCoy.

memcached
A general-purpose distributed memory caching system, usually with a client-server architecture spread over multiple computing systems. It reduces the number of times a data source (e.g., a database) must be directly accessed by temporarily caching data in memory, therefore dramatically speeding up database-driven web applications.

N

Network address
Also known as an IP address (Internet Protocol address).
See also: IP Address.

nsis
An open-source software tool used to create Windows installers. More information is available at http://nsis.sourceforge.net.

O

OPAC
The "Online Public Access Catalog"; an online database of a library's holdings, used to find resources in their collections; possibly searchable by keyword, title, author, subject or call number.

OpenSRF
The "Open Scalable Request Framework" (pronounced 'open surf') is a stateful, decentralized service architecture that allows developers to create applications for Evergreen with a minimum of knowledge of its structure.

P

PKI
Public Key Infrastructure (PKI) describes the schemes needed to generate and maintain digital SSL Certificates.
See also: SSL Certificate.

PostgreSQL
A popular open-source object-relational database management system that underpins Evergreen software.

Putty
A popular open-source telnet/ssh client for the Windows and Unix platforms.
More information is available at http://www.chiark.greenend.org.uk/~sgtatham/putty/.

Q

R

Resource Hacker
An open-source utility used to view, modify, rename, add, delete and extract resources in 32-bit Windows executables. More information is available at Resource Hacker.

RHEL
Also known as "Red Hat Enterprise Linux". An official Linux distribution that is targeted at the commercial market. It is the basis of other popular Linux distributions, e.g., CentOS. More information is available at http://www.redhat.com.

S

SIP
SIP, standing for Standard Interchange Protocol, was developed by the 3M Corporation to be a common protocol for data transfer between ILSs and third-party devices.

srfsh
A command language interpreter (shell) that executes commands read from the standard input. It is used to test the Open Service Request Framework (OpenSRF).

SRU
SRU is a standard XML-focused search protocol for Internet search queries, utilizing CQL (Contextual Query Language), a standard syntax for representing queries.

SSH
An encrypted network protocol using public-key cryptography that allows secure communications between systems on an insecure network. Typically used to access shell accounts but also supports tunneling, forwarding TCP ports and X11 connections, and transferring files.

SSH proxy
As used in Evergreen, a method of allowing one or more Staff Clients to communicate with one or more Evergreen servers over an insecure network by sending data through a secure SSH tunnel. It also buffers and caches all data travelling to and from Staff Clients to speed up access to resources on Evergreen servers.
See also: SSH, tunneling, SSH tunnel.

SSH tunnel
An encrypted data channel existing over an SSH network connection.
Used to securely transfer unencrypted data streams over insecure networks.
See also: SSH, tunneling.

SSL Certificate
As used in Evergreen, it is a method of ensuring that Staff Clients are able to connect to legitimate Evergreen servers. In general, it is a special electronic document used to guarantee authenticity of a digital message. Also known as a "public key", or "identity" or "digital" certificate. It combines an identity (of a person or an organization) and a unique public key to form a so-called digital signature, and is used to verify that the public key does, in fact, belong with that particular identity.

T

tunneling
As used in Evergreen, it is a method of allowing Staff Clients to securely connect to legitimate Evergreen servers. In general, it is a method of encapsulating data provided in one network protocol (the "delivery" protocol) within data in a different network protocol (the "tunneling" protocol). Used to provide a secure path and secure communications through an insecure or incompatible network. Can be used to bypass firewalls by communicating via a protocol the firewall normally blocks, but "wrapped" inside a protocol that the firewall does not block.
See also: SSH tunnel.

U

Ubuntu
A popular open-source operating system using the Linux kernel that was originally based on the Debian GNU/Linux operating system. More information is available at http://www.ubuntu.com.
See also: Debian.

V

Virtual PC
A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It is installed on a Windows "host" operating system and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed.
See also: Virtualization.

VirtualBox
A popular commercial package of virtualization software that emulates the x86 microprocessor architecture.
It can be installed on Linux, Mac OS X, Windows or Solaris "host" operating systems and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed.
See also: Virtualization.

Virtualization
A method of executing software in a special environment that is partitioned or separated from the real underlying hardware and software resources. In typical usage, it allows a host operating system to encapsulate or emulate another operating system environment in such a way that the emulated environment is completely unaware of the hosting environment. For instance, it allows a Linux application to execute within a Windows environment.

VMware
A popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It can be installed on Linux, Mac OS X, Windows or Solaris "host" operating systems and allows other "guest" operating systems (typically including Linux and Windows) to be loaded and executed.
See also: Virtualization.

Volume Buckets
Virtual "containers" to use in batch processing of multiple volumes. They can be used to perform various cataloging/holdings maintenance tasks in batch.

W

Wine
A popular open-source application that allows Linux and Unix systems to run Windows executables. More information is available at http://www.winehq.org/.

X

XML
The eXtensible Markup Language, a subset of SGML; a set of rules for encoding information in a way that is both human- and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally defined by the World Wide Web Consortium (W3C).

XMPP
An open-standard communications protocol, based on XML, used in message-oriented middleware. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures.
More information is available at http://xmpp.org.
See also: Jabber.

xpath
The XML Path Language, a query language based on a tree representation of an XML document. It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is "XPath 2.0". It was originally defined by the World Wide Web Consortium (W3C).

XUL
The XML User Interface Language, a specialized interface language that allows building cross-platform applications that drive Mozilla-based browsers such as Firefox. More information is available at https://developer.mozilla.org/en/XUL.

xulrunner
A specialized run-time application environment that provides support for installing, upgrading and uninstalling XUL applications. It operates with Mozilla-based applications such as the Firefox browser. More information is available at https://developer.mozilla.org/en/XULRunner.
See also: XUL.

Y

YAZ
A programmers' toolkit supporting the development of Z39.50/SRW/SRU clients and servers.

yaz-client
A Z39.50/SRU client for connecting to YAZ servers. More information is available at http://www.indexdata.com/yaz.

Z

Z39.50
A client-server protocol for searching and retrieving information from remote computer databases.

Index

Report errors in this documentation using Launchpad.
A
action triggers: creating, event definitions, hooks, processing, reactors, runner, validators
added content: Syndetics
ANSI
Apache: logs
Apache modules
autogen
Automated Circulation System
Automated Material Handling (AMH)

B
BibTemplate
bookbags
booking reservation: cancelling, capturing items, creating, creating a pull list, making a cataloged item bookable, non-bibliographic items, picking up, returning, setting booking permissions
BRE JSON

C
comma delimited files
configuration files: Apache, eg.conf, eg_vhost.conf, fm_IDL.xml, oils_sip.xml, oils_web.xml, opensrf.xml, opensrf_core.xml, startup.pl
copy buckets (see also item buckets)
CQL
CSS3
cstore

D
databases: backing up, PostgreSQL
directories: backing up
DocBook
Documentation Interest Group (DIG)
Dojo toolkit
domain name
Dublin Core

E
ejabberd
environment variable: PATH, PERL5LIB
Evergreen Interface Definition Language (IDL): action element, class element, context_field element, field element, fields element, permacrud element
Evergreen software dependencies

F
FGDC
Fieldmapper
firewall

G
GNU General Public License

H
HTTP: translator

I
IP Address
IRC chat
item buckets (see also copy buckets)

J
Jabber (see also XMPP)
Java
JavaScript
JSON: SELECT clauses

L
Linux: CentOS, commands, Debian, Gentoo, Red Hat, RHEL, Ubuntu, Wine
logs: Apache, gateway.log, Log Rotate, logging levels, managing, osrfsys.log, router, srfsh.log

M
magnetic media
mailing lists
MARC
MARCXML
memcached
migrating: adding metarecords, batch updating patrons, bibliographic records, exporting bibliographic records (MARC), holdings, import into PostgreSQL, importing authority records, importing bibliographic records, patrons
MODS

N
network address (see also IP address)

O
OPAC: added content (Google Books), customizing (changing the color scheme, details page, text and labels), testing
OpenSRF: Communication Flows, configure, download, installation, services

P
pcrud
Perl: CPAN
permissions
PKI
proximity map
Python

R
RAID
receipt template editor
reports: starting
Resource Sharing Cooperative of Evergreen Libraries (RSCEL)
rsync

S
SAN
search results: caching
security
SelfCheck
Simple2ZOOM
SIP
slot formatter
slot marker
sql: tables
srfsh (command-line OpenSRF client)
SRU
SSH: proxy, Putty, tunneling
SSL: certificates
staff client: customizing, labels and messages, testing
staging table
SuperCat: formats (adding, customizing), ISBNs, recent records, records
syslog
syslog-NG

T
telnet
tunneling (see also SSH tunneling)

U
unAPI
Unicode

V
Version Control System: git, Subversion
virtualization
virtualization software: Virtual PC, VirtualBox, VMware
volume buckets

W
web server: Apache
wiki

X
XML
XMPP (see also jabber)
xpath
XUL
XULRunner

Y
yaz

Z
Z39.50

diff --git a/2.0/root.xml b/2.0/root.xml
index 16e6a0f..bb0cbf1 100755
--- a/2.0/root.xml
+++ b/2.0/root.xml
@@ -52,14 +52,22 @@
+ Administration

diff --git a/2.0/stafftasks/booking.xml b/2.0/stafftasks/booking.xml
new file mode 100644
index 0000000..87c8ac7
--- /dev/null
+++ b/2.0/stafftasks/booking.xml
@@ -0,0 +1,202 @@

Using the Booking Module

The following chapter will help staff create reservations for cataloged and non-bibliographic items; create pull lists for reserved items; capture resources; and pick up and return reservations.
Creating a Booking Reservation

Only staff members can create reservations. To initiate a reservation, staff can:
• search the catalog,
• enter a patron record, or
• use the booking module.

Search the catalog to create a reservation

In the staff client, select Search → Search the Catalog.
Search for the item to be booked.
Click Submit Search.
A list of results will appear. Select the title of the item to be reserved.
After clicking the title, the record summary appears. Beneath the record summary, the copy summary will appear. In the Actions column, select Copy Details.
The Copy Details will appear in a new row. In the barcode column, click the book now link.
A screen showing the title and barcodes of available copies will appear.
Enter the user's barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron's barcode, the user's existing reservations will appear at the bottom of the screen.
To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the item has already been reserved at the time for which you are trying to reserve the item, you will receive an error message.
Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message.
If you do not have a preference, you do not have to select a barcode; you may click Reserve Any, and one of the barcodes will be pulled from the list.
An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.
After you have made the reservation, a message will confirm that the action succeeded. Click OK.
The screen will refresh, and the reservation will appear below the user's name.

Enter a patron's record to create a reservation

Enter the barcode or patron information, and click Search to retrieve the patron's record.
The match(es) should appear in the right pane. Click the desired patron's name. In the left panel, a summary of the patron's information will appear. Click the Retrieve Patron button in the right corner to access more options in the patron's record.
Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.
The Copy Details will appear in a new row. In the barcode column, click the book now link.
A screen showing the title and barcodes of available copies will appear.
Enter the user's barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron's barcode, the user's existing reservations will appear at the bottom of the screen.
To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed.
If the item has already been reserved at the time for which you are trying to reserve the item, you will receive an error message.
Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode; you may click Reserve Any, and one of the barcodes will be pulled from the list.
An item must have a status of available or reshelving in order to be targeted for a reservation. If the item is in another status, the reservation will fail.
After you have made the reservation, a message will confirm that the action succeeded. Click OK.
The screen will refresh, and the reservation will appear below the user's name.

Use the booking module to create a reservation

Select Booking → Create or Edit Reservations.
Enter the barcode of the item and click Next.
A screen showing the name of the available resource will appear.
Enter the user's barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop-up box will appear to alert you to the error. After entering the patron's barcode, the user's existing reservations will appear.
To the right, a section titled I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, the date and time set are incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to succeed. If the resource has already been reserved at the time for which you want to reserve the item, then the item will disappear.
Finally, select the resource that you want to reserve.
If multiple items or rooms exist, + choose the resource that you want to reserve, and click Reserve Selected. If you do + not select a resource, and you click Reserve Selected, you will receive an error + message. If you do not have a preference, you may click Reserve Any, and one of the + resources will be pulled from the list. + After you have made the reservation, a message will confirm that the action + succeeded. Click OK. + The screen will refresh, and the reservation will appear below the user’s name. + + +
+
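For administrators curious about what the steps above actually record, each reservation is ultimately a row in the Evergreen database. The sketch below is illustrative only: the table and column names (booking.reservation, actor.usr, actor.card, usr, cancel_time) and the sample barcode are assumptions based on a typical Evergreen 2.0 schema, so verify them against your own database before relying on them.

```sql
-- Illustrative sketch (assumed schema names; verify against your database):
-- list a patron's active reservations, most recent start time first.
SELECT r.id, r.start_time, r.end_time
  FROM booking.reservation r
  JOIN actor.usr u ON (u.id = r.usr)
  JOIN actor.card c ON (c.id = u.card)
 WHERE c.barcode = '21000012345678'   -- hypothetical patron barcode
   AND r.cancel_time IS NULL          -- exclude cancelled reservations
 ORDER BY r.start_time DESC;
```

Cancelled reservations are typically flagged rather than deleted, which is why the sketch filters on a null cancel_time instead of expecting the row to disappear.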
Cancelling a Reservation

Staff members can cancel a patron's reservation through the Create or Cancel Reservations tab available in a patron's record. Staff members can also cancel a reservation immediately after it has been made.

Enter the patron's record to cancel a reservation

Search for and retrieve a patron's record.
Select Other → Booking → Create or Cancel Reservations.
The existing reservations will appear at the bottom of the screen.
To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.
A pop-up window will confirm that you cancelled the reservation. Click OK.
The screen will refresh, and the cancelled reservation will disappear.

Cancel a reservation immediately after it has been made

Create the reservation.
Follow steps four through six in the section Enter the patron's record to cancel a reservation to cancel the reservation.
The existing reservations will appear at the bottom of the screen.
+
Creating a Pull List

Staff members can create a pull list to retrieve items from the stacks.

To create a pull list, select Booking → Pull List.
To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.
You can decide how many days in advance you would like to select reserved items. Enter the number of days in the box adjacent to Generate list for this many days hence. For example, if you would like to pull items that are needed today, enter 1 in the box, and you will retrieve items that need to be pulled today.
Click Fetch to retrieve the pull list.
The pull list will appear. Click Print to print the pull list.
+
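Conceptually, the pull list fetched above is a date-windowed query over reservations that have not yet been captured. The following is only a sketch of that logic, not the actual pull-list code: booking.reservation and its columns (capture_time, cancel_time, start_time, current_resource) are assumed names from a typical Evergreen 2.0 schema.

```sql
-- Illustrative sketch (assumed schema): reservations starting within the
-- next N days (here 1, matching "Generate list for this many days hence")
-- that have not been captured or cancelled.
SELECT r.id, r.start_time, r.current_resource
  FROM booking.reservation r
 WHERE r.capture_time IS NULL                   -- not yet pulled/captured
   AND r.cancel_time  IS NULL                   -- still wanted
   AND r.start_time < now() + interval '1 day'  -- the "days hence" window
 ORDER BY r.start_time;
```

Raising the interval widens the window, which is exactly what entering a larger number in the Generate list for this many days hence box does.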
Capturing Items for Reservations

Staff members can capture items for reservations.

In the staff client, select Booking → Capture Resources.
Enter the barcode of the item to be captured. Click Capture.
A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this information as a receipt and add it to the item if desired.
+
Picking Up Reservations

Staff members can help users pick up their reservations.

In the staff client, select Booking → Pick Up Reservations.
Enter the user's barcode. Click Go.
The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.
The screen will refresh to show that the patron has picked up the reservation.
+
Returning Reservations

Staff members can help users return their reservations.

In the staff client, select Booking → Return Reservations.
You can return the item by patron or item barcode. Choose Resource or Patron, enter the barcode, and click Go.
A pop-up box will tell you that the item was returned. Click OK.
The screen will refresh to show the reservations that remain out and the resources that have been returned.
+
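Taken together, the capture, pick-up and return steps in this chapter each stamp a successive timestamp on the reservation record. As a hedged sketch (the column names capture_time, pickup_time and return_time are assumptions drawn from a typical Evergreen 2.0 booking schema, not confirmed by this chapter):

```sql
-- Illustrative sketch (assumed schema): reservations that have been picked
-- up but not yet returned, i.e. the resources currently out on loan.
SELECT r.id, r.capture_time, r.pickup_time
  FROM booking.reservation r
 WHERE r.pickup_time IS NOT NULL   -- patron has picked the item up
   AND r.return_time IS NULL;      -- reservation has not been returned
```

Under this model, the "reservations that remain out" shown after a return are simply the rows where pickup_time is set and return_time is still null.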
+